Cameron Kaiser: TenFourFox FPR14 available

TenFourFox Feature Parity Release 14 final is now available for testing (downloads, hashes, release notes). Besides outstanding security updates, this release fixes current tab handling in TenFourFox's AppleScript support, so that scripts like the one below now function as expected:

tell application "TenFourFoxG5"
  tell front browser window
    set URL of current tab to ""
    repeat while (current tab is busy)
      delay 1
    end repeat
    tell current tab
      run JavaScript "let f = document.getElementById('tsf');f.q.value='tenfourfox';f.submit();"
    end tell
    repeat while (current tab is busy)
      delay 1
    end repeat
    tell current tab
      run JavaScript "return document.getElementsByTagName('h3')[0].innerText + ' ' + document.getElementsByTagName('cite')[0].innerText"
    end tell
  end tell
end tell

The font blacklist has also been updated, and I have hard-set the frame rate to 30 in the pref even though the frame rate is capped at 30 internally and such a change is simply a placebo. However, there are people claiming this makes a difference, so now you have your placebo pill and I hope you like the taste of it. :P The H.264 wiki page is also available, if you haven't tried MPEG-4/H.264 playback. The browser will finalize Monday evening Pacific as usual.

For FPR15, the JavaScript update that slipped from this milestone is back on. It's hacky and I don't know if it will work; we may be approaching the limits of feature parity, but it should compile, at least. I'm trying to reduce the changes to JavaScript in this release so that regressions are also similarly limited. However, I'm also looking at adding some later optimizations to garbage collection and using Mozilla's own list of malware scripts to additionally seed basic adblock, which I think can be safely done simultaneously.

Mozilla VR Blog: Bringing WebXR to iOS

The first version of the WebXR Device API is close to being finalized, and browsers will start implementing the standard soon (if they haven't already). Over the past few months we've been working on updating the WebXR Viewer (source on github, new version available now on the iOS App Store) to be ready when the specification is finalized, giving developers and users at least one WebXR solution on iOS. The current release is a step along this path.

Most of the work we've been doing is hidden from the user; we've re-written parts of the app to be more modern, more robust and efficient. And we've removed little-used parts of the app, like video and image capture, that have been made obsolete by recent iOS capabilities.

There are two major parts to the recent update of the Viewer that are visible to users and developers.


We've updated the app to support a new implementation of the WebXR API based on the official WebXR Polyfill. This polyfill currently implements a version of the standard from last year, but when it is updated to the final standard, the WebXR API used by the WebXR Viewer will follow quickly behind. Keep an eye on the standard and polyfill to get a sense of when that will happen; keep an eye on your favorite tools, as well, as they will be adding or improving their WebXR support over the next few months. (The WebXR Viewer continues to support our early proposal for a WebXR API, but that API will eventually be deprecated in favor of the official one.)

We've embedded the polyfill into the app, so the API will be automatically available to any web page loaded in the app, without the need to load a polyfill or custom API. Our goal is to have any WebXR web applications designed to use AR on a mobile phone or tablet run in the WebXR Viewer. You can try this now, by enabling the "Expose WebXR API" preference in the viewer. Any loaded page will see the WebXR API exposed on navigator.xr, even though most "webxr" content on the web won't work right now because the standard is in a state of constant change.
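As a hedged sketch (not taken from the post), here is how a page might feature-detect the API the Viewer injects and request an AR session. The method and feature names follow the draft WebXR spec and may change as the standard is finalized:

```javascript
// Sketch: detect the injected WebXR API and request an immersive AR session.
// Names ("immersive-ar", "hit-test") follow the draft spec and may change.
async function startAR() {
  const xr = typeof navigator !== "undefined" ? navigator.xr : undefined;
  if (!xr) {
    // Either this isn't the WebXR Viewer, or "Expose WebXR API" is disabled.
    return null;
  }
  if (!(await xr.isSessionSupported("immersive-ar"))) return null;
  // "hit-test" is one of the proposed additions the Viewer experiments with.
  return xr.requestSession("immersive-ar", { requiredFeatures: ["hit-test"] });
}

startAR().then((session) => {
  console.log(session ? "AR session started" : "WebXR not available here");
});
```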

You can find the current code for our API in the webxr-ios-js repository, along with a set of examples we're creating to explore the current API and future additions to it. These examples are available online. A glance at the code, or the examples, will show that we are not only implementing the initial API, but also building implementations of a number of proposed additions to the standard, including anchors, hit testing, and access to real-world geometry. Implementing support for requesting geospatial coordinate system alignment allows integration with the existing web Geolocation API, enabling AR experiences that rely on geospatial data (illustrated simply in the banner image above). We will soon be exploring an API for camera access to enable computer vision.

Most of these APIs were available in our previous WebXR API implementation, but the new implementation more closely aligns with the work of the Immersive Web Community Group. (We have also kept a few of our previous ARKit-specific APIs, but marked them explicitly as not being on the standards track yet.)

A New Approach to WebXR Permissions

The most visible change to the application is a permissions API that is popped up when a web page requests a WebXR Session. Previously, the app was an early experiment and devoted to WebXR applications built with our custom API, so we did not explicitly ask for permission, inferring that anyone running an app in our experimental web browser intends to use WebXR capabilities.

When WebXR is released, browsers will need to obtain a user's permission before a web page can access the potentially sensitive data available via WebXR. We are particularly interested in what levels of permissions WebXR should have, so that users retain control of what data apps may require. One approach that seems reasonable is to differentiate between basic access (e.g., device motion, perhaps basic hit testing against the world), access to more detailed knowledge of the world (e.g., illumination, world geometry) and finally access to cameras and other similar sensors. The corresponding permission dialogs in the Viewer are shown here.

If the user gives permission, an icon in the URL bar shows the permission level, similar to the camera and microphone access icons in Firefox today.

Tapping on the icon brings up the permission dialog again, allowing the user to temporarily restrict the flow of data to the web app. This is particularly relevant for modifying access to cameras, where a mobile user (especially when HMDs are more common) may want to turn off sensors depending on the location, or who is nearby.

A final aspect of permissions we are exploring is the idea of a "Lite" mode. In each of the dialogs above, a user can select Lite mode, which brings up a UI allowing them to select a single ARKit plane.
The APIs that expose world knowledge to the web page (including hit testing at the most basic level, and geometry at the middle level) will only use that single plane as the source of their actions. Only that plane would produce hits, and only that plane would have its geometry sent to the page. This would allow the user to limit the information passed to a page while still being able to access AR apps on the web.

Moving Forward

We are excited about the future of XR applications on the web, and will be using the WebXR Viewer to provide access to WebXR on iOS, as well as a testbed for new APIs and interaction ideas. We hope you will join us on the next step in the evolution of WebXR!

Chris H-C: Virtual Private Social Network: Tales of a BBM Exodus


On Thursday April 18, my primary mechanism for talking to friends notified me that it was going away. I’d been using BlackBerry Messenger (BBM) since I started work at Research in Motion in 2008 and had found it to be tolerably built. It messaged people instantly over any data connection I had access to, what more could I ask for?

The most important BBM feature in my circle of contacts was its Groups feature. A bunch of people with BBM could form a Group and then messages, video, pictures, lists were all shared amongst the people in the group.

Essentially it acted as a virtual private social network. I could talk to a broad group of friends about the next time we were getting together or about some cute thing my daughter did. I could talk to the subset who lived in Waterloo about Waterloo activities, and whose turn it was for Sunday Dinner. The Beers group kept track of whose turn it was to pay, and it combined nicely with the chat for random nerdy tidbits and coordinating when each of us arrived at the pub. Even my in-laws had a group to coordinate visits, brag about child developmental milestones, and manage Christmas.

And then BBM announced it was going away, giving users six weeks to find a replacement… or, as seemed more likely to me, replacements.

First thing I did, since the notice came during working hours, was mutter angrily that Mozilla didn’t have an Instant Messaging product that I could, by default, trust. (We do have a messaging product, but it’s only for Desktop and has an email focus.)

The second thing I did was survey the available IM apps, cross-correlating them with whether or not various of my BBM contacts already had it installed… the existing landscape seemed to be a mess. I found that WhatsApp was by far the most popular but was bought by Facebook in 2014 and required a real phone number for your account. Signal’s the only one with a privacy/security story that I and others could trust (Telegram has some weight here, but not much) but it, too, required a phone number in order to sign up. Slack’s something only my tech friends used, and their privacy policy was a shambles. Discord’s something only my gaming friends used, and was basically Slack with push-to-talk.

So we fragmented. My extended friend network went to Google Hangouts, since just about everyone already had a Google Account anyway (even if they didn’t use it for anything). The Beers group went to Discord because a plurality of the group already had it installed.

And my in-laws’ family group… well, we still have two weeks left to figure that one out. Last I heard someone was stumping for Facebook Messenger, to which I replied “Could we not?”

The lack of reasonable options and the (sad, understandable) willingness of my relatives to trade privacy for convenience is bothering me so much that I’ve started thinking about writing my own IM/virtual private social network.

You’d think I’d know better than to even think about architecting anything even close to either of those topics… but the more I think about it the more webtech seems like an ideal fit for this. Notifications, Push, ServiceWorkers, WebRTC peer connections, TLS, WebSockets, OAuth: stir lightly et voilà.

But even ignoring the massive mistake diving into either of those ponds full of crazy would be, the time was too short for that four weeks ago, and is trebly so now. I might as well make my peace that Facebook will learn my mobile phone number and connect it indelibly with its picture of what advertisements it thinks I would be most receptive to.



Hacks.Mozilla.Org: Faster smarter JavaScript debugging in Firefox DevTools

Script debugging is one of the most powerful and complex productivity features in the web developer toolbox. Done right, it empowers developers to fix bugs quickly and efficiently. So the question for us, the Firefox DevTools team, has been, are the Firefox DevTools doing it right?

We’ve been listening to feedback from our community. Above everything we heard the need for greater reliability and performance; especially with modern web apps. Moreover, script debugging is a hard-to-learn skill that should work in similar fashion across browsers, but isn’t consistent because of feature and UI gaps.

With these pain points in mind, the DevTools Debugger team – with help from our tireless developer community – landed countless updates to design a more productive debugging experience. The work is ongoing, but Firefox 67 marks an important milestone, and we wanted to highlight some of the fantastic improvements and features. We invite you to open up Firefox Quantum: Developer Edition, try out the debugger on the examples below and your projects and let us know if you notice the difference.

A rock-solid debugging experience

Fast and reliable debugging is the result of many smaller interactions. From initial loading and source mapping to breakpoints, console logging, and variable previews, everything needs to feel solid and responsive. The debugger should be consistent, predictable, and capable of understanding common tools like webpack, Babel, and TypeScript.

We can proudly say that all of those areas have improved in the past months:

  1. Faster load time. We’ve eliminated the worst performance cliffs that made the debugger slow to open. This has resulted in a 30% speedup in our performance test suite. We’ll share more of our performance adventures in a future post.
  2. Excellent source map support. A revamped and faster source-map backend perfects the illusion that you’re debugging your code, not the compiled output from Babel, Webpack, TypeScript, vue.js, etc.
    Generating correct source maps can be challenging, so we also contributed patches to build tools (e.g. Babel, vue.js, regenerator) – benefiting the whole ecosystem.
  3. Reduced overhead when debugger isn’t focused. No need to worry any longer about keeping the DevTools open! We found and removed many expensive calculations from running in the debugger when it’s in the background.
  4. Predictable breakpoints, pausing, and stepping. We fixed many long-standing bugs deep in the debugger architecture, solving some of the most common and frustrating issues related to lost breakpoints, pausing in the wrong script, or stepping through pretty-printed code.
  5. Faster variable preview. Thanks to our faster source-map support (and lots of additional work), previews are now displayed much more quickly when you hover your mouse over a variable while execution is paused.

These are just a handful of highlights. We’ve also resolved countless bugs and polish issues.

Looking ahead

Foremost, we must maintain a high standard of quality, which we’ll accomplish by explicitly setting aside time for polish in our planning. Guided by user feedback, we intend to use this time to improve new and existing features alike.

Second, continued investment in our performance and correctness tests ensures that the ever-changing JavaScript ecosystem, including a wide variety of frameworks and compiled languages, is well supported by our tools.

Debug all the things with new features

Finding and pausing in just the right location can be key to understanding a bug. This should feel effortless, so we’ve scrutinized our own tools—and studied others—to give you the best possible experience.

Inline breakpoints for fine-grained pausing and stepping

Inline handlers are the perfect match for the extra granularity.

Why should breakpoints operate on lines, when lines can have multiple statements? Thanks to inline breakpoints, it’s now easier than ever to debug minified scripts, arrow functions, and chained method calls. Learn more about breakpoints on MDN or try out the demo.

Logpoints combine the power of Console and Debugger

Adding Logpoints to monitor application state.

Console logging, also called printf() debugging, is a quick and easy way to observe your program’s flow, but it rapidly becomes tedious. Logpoints break that tiresome edit-build-refresh cycle by dynamically injecting console.log() statements into your running application. You can stay in the browser and monitor variables without pausing or editing any code. Learn more about log points on MDN.

Seamless debugging for JavaScript Workers

The new Threads panel in the Debugger for Worker debugging

Web Workers power the modern web and need to be first-class concepts in DevTools. Using the new Threads panel, you can switch between and independently pause different execution contexts. This allows workers and their scripts to be debugged within the same Debugger panel, similarly to other modern browsers. Learn more about Worker debugging on MDN.

Human-friendly variable names for source maps

Debugging bundled and compressed code isn’t easy. The Source Maps project, started and maintained by Firefox, bridges the gap between minified code running in the browser and its original, human-friendly version, but the translation isn’t perfect. Often, bits of the minified build output shine through and break the illusion. We can do better!

From build output back to original human-readable variables

By combining source maps with the Babel parser, Firefox’s Debugger can now preview the original variables you care about, and hide the extraneous cruft from compilers and bundlers. This can even work in the console, automatically resolving human-friendly identifiers to their actual, minified names behind the scenes. Due to its performance overhead, you have to enable this feature separately by clicking the “Map” checkbox in the Debugger’s Scopes panel. Read the MDN documentation on using the map scopes feature.

What’s next

Developers frequently need to switch between browsers to ensure that the web works for everyone, and we want our DevTools to be an intuitive, seamless experience. Though browsers have converged on the same broad organization for tools, we know there are still gaps in both features and UI. To help us address those gaps, please let us know where you experience friction when switching browsers in your daily work.

Your input makes a big difference

As always, we would love to hear your feedback on how we can improve DevTools and the browser.

While all these updates will be ready to try out in Firefox 67, when it’s released next week, we’ve polished them to perfection in Firefox 68 and added a few more goodies. Download Firefox Developer Edition (68) to try the latest updates for devtools and platform now.

The post Faster smarter JavaScript debugging in Firefox DevTools appeared first on Mozilla Hacks - the Web developer blog.

Mike Conley: A few words on main thread disk access for general audiences

I’m writing this in lieu of a traditional Firefox Front-end Performance Update, as I think this will be more useful in the long run than just a snapshot of what my team is doing.

I want to talk about main thread disk access (sometimes referred to more generally as “main thread IO”). Specifically, I’m going to argue that main thread disk access is lethal to program responsiveness. For some folks reading this, that might be an obvious argument not worth making, or one already made ad nauseam — if that’s you, this blog post is probably not for you. You can go ahead and skip most or all of it, if you’d like. Or just skim it. You never know — there might be something in here you didn’t know or hadn’t thought about!

For everybody else, scoot your chairs forward, grab a snack, and read on.

Disclaimer: I wouldn’t call myself a disk specialist. I don’t work for Western Digital or Seagate. I don’t design file systems. I have, however, been using and writing software for computers for a significant chunk of my life, and I seem to have accumulated a bunch of information about disks. Some of that information might be incorrect or imprecise. Please send me mail at mike dot d dot conley at gmail dot com if any of this strikes you as wildly inaccurate (though forgive me if I politely disregard pedantry), and then I can update the post.

The mechanical parts of a computer

If you grab a screwdriver and (carefully) open up a laptop or desktop computer, what do you see? Circuit boards, chips, wires and plugs. Lots of electrons flowing around in there, moving quickly and invisibly.

Notably, there aren’t many mechanical moving parts of a modern computer. Nothing to grease up, nowhere to pour lubricant. Opening up my desktop at home, the only moving parts I can really see are the cooling fans near the CPU and power supply (and if you’re like me, you’ll also notice that your cooling fans are caked with dust and in need of a cleaning).

There’s another moving part that’s harder to see — the hard drive. This might not be obvious, because most mechanical drives (I’ve heard them sometimes referred to as magnetic drives, spinny drives, physical drives, platter drives and HDDs. There are probably more terms.) hide their moving parts inside of the disk enclosure.1

If you ever get the opportunity to open one of these enclosures (perhaps the disk has failed or is otherwise being replaced, and you’re just about to get rid of it) I encourage you to.

As you disassemble the drive, what you’ll probably notice are circular parts, layered on top of one another on a motor that spins them. In between those circles are little arms that can move back and forth. This next image shows one of those circles, and one of those little arms.

<figcaption>There are several of those circles stacked on top of one another, and several of those arms in between them. We’re only seeing the top one in this photo.</figcaption>

Does this remind you of anything? The circular parts remind me of CDs and DVDs, but the arms reaching across them remind me of vinyl players.

<figcaption>Vinyl’s back, baby!</figcaption>

The comparison isn’t that outlandish. If you ignore some of the lower-level details, CDs, DVDs, vinyl players and hard drives all operate under the same basic principles:

  1. The circular part has information encoded on it.
  2. An arm of some kind is able to reach across the radius of the circular part.
  3. Because the circular part is spinning, the arm is able to reach all parts of it.
  4. The end of the arm is used to read the information encoded on it.

There’s some extra complexity for hard drives. Normally there’s more than one spinning platter and one arm, all stacked up, so it’s more like several vinyl players piled on top of one another.

Hard drives are also typically written to as well as read from, whereas CDs, DVDs and vinyls tend to be written to once, and then used as “read-only memory.” (Though, yes, there are exceptions there.)

Lastly, for hard drives, there’s a bit I’m skipping over involving caches, where parts of the information encoded on the spinning platters are temporarily held elsewhere for easier and faster access, but we’ll ignore that for now for simplicity, and because it wrecks my simile.2

So, in general, when you’re asking a computer to read a file off of your hard drive, it’s a bit like asking it to play a tune on a vinyl. It needs to find the right starting place to put the needle, then it needs to put the needle there and only then will the song play.

For hard drives, the act of moving the “arm” to find the right spot is called seeking.

Contiguous blocks of information and fragmentation

Have you ever had to defragment your hard drive? What does that even mean? I’m going to spend a few moments trying to explain that at a high-level. Again, if this is something you already understand, go ahead and skip this part.

Most functional hard drives allow you to do the following useful operations:

  1. Write data to the drive
  2. Read data from the drive
  3. Remove data from the drive

That last one is interesting, because usually when you delete a file from your computer, the information isn’t actually erased from the disk. This is true even after emptying your Trash / Recycling Bin — perhaps surprisingly, the files that you asked to be removed are still there encoded on the circular platters as 1’s and 0’s. This is why it’s sometimes possible to recover deleted files even when it seems that all is lost.

Allow me to explain.

Just like there are different ways of organizing a sock drawer (at random, by colour, by type, by age, by amount of damage), there are ways of organizing a hard drive. These “ways” are called file systems. There are lots of different file systems. If you’re using a modern version of Windows, you’re probably using a file system called NTFS. One of the things that a file system is responsible for is knowing where your files are on the spinning platters. This file system is also responsible for knowing where there’s free space on the spinning platters to write new data to.

When you delete a file, what tends to happen is that your file system marks those sectors of the platter as places where new information can be written to, but doesn’t immediately overwrite those sectors. That’s one reason why sometimes deleted files can be recovered.

Depending on your file system, there’s a natural consequence as you delete and write files of different sizes to the hard drive: fragmentation. This kinda sounds like the actual physical disk is falling apart, but that’s not what it means. Data fragmentation is probably a more precise way of thinking about it.

Imagine you have a sheet of white paper broken up into a grid of 5 boxes by 5 boxes (25 boxes in total), and a box of paints and paintbrushes.

Each square on the paper is white to start. Now, starting from the top-left, and going from left-to-right, top-to-bottom, use your paint to fill in 10 of those boxes with the colour red. Now use your paint to fill in the next 5 boxes with blue. Now do 3 more boxes with yellow.

So we’ve got our colour-filled boxes in neat, organized rows (red, then blue, then yellow), and we’ve got 18 of them filled, and 7 of them still white.

Now let’s say we don’t care about the colour blue. We’re okay to paint over those now with a new colour. We also want to fill in 10 boxes with the colour purple. Hm… there aren’t enough free white boxes to put in that many purple ones, but we have these 5 blue ones we can paint over. Let’s paint over them with purple, and then put the next 5 at the end in the white boxes.

So now 23 of the boxes are filled, we’ve got 2 left at the end that are white, but also, notice that the purple boxes aren’t all together — they’ve been broken apart into two sections. They’ve been fragmented.

This is an incredibly simplified model, but (I hope) it demonstrates what happens when you delete and write files to a hard drive. Gaps open up that can be written to, and bits and pieces of files end up being distributed across the platters as fragments.

This also occurs as files grow. If, for example, we decided to paint two more white boxes red, we’d need to paint the ones at the very end, breaking up the red boxes so that they’re fragmented.
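The paint-box model above can be sketched in a few lines of code. This is a toy first-fit allocator, not a real file system, and the names and sizes are just the ones from the example:

```javascript
// A drive as an array of 25 slots, filled first-fit from left to right.
// Deleting "blue" frees its slots mid-disk, and a later, larger "purple"
// file ends up split across two regions -- i.e., fragmented.
const SLOTS = 25;
const disk = new Array(SLOTS).fill(null); // null = free (white) box

function write(name, size) {
  // First-fit: place each block in the first free slot found.
  for (let i = 0; i < SLOTS && size > 0; i++) {
    if (disk[i] === null) { disk[i] = name; size--; }
  }
}

function remove(name) {
  for (let i = 0; i < SLOTS; i++) if (disk[i] === name) disk[i] = null;
}

function fragments(name) {
  // Count contiguous runs of `name`; more than one run means fragmentation.
  let runs = 0;
  for (let i = 0; i < SLOTS; i++) {
    if (disk[i] === name && disk[i - 1] !== name) runs++;
  }
  return runs;
}

write("red", 10);
write("blue", 5);
write("yellow", 3);
remove("blue");      // frees the 5 slots in the middle
write("purple", 10); // 5 slots reused mid-disk, 5 appended at the end

console.log(fragments("purple")); // → 2: purple is split into two pieces
```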

So going back to our vinyl player example for a second —  the ideal scenario is that you start a song at the beginning and it plays straight through until the end, right? The more common case with disk drives, however, is you read bits and pieces of a song from different parts of the vinyl: you have to lift and move the arm each time until eventually you have heard the song from start to finish. That seeking of the arm adds overhead to the time it takes to listen to the song from beginning to end.

When your hard drive undergoes defragmentation, what your computer does is try to re-organize your disk so that files are in contiguous sectors on the platters. That’s a fancy way of saying that they’re all in a row on the platter, so they can be read in without the overhead of seeking around to assemble it as fragments.

Skipping that overhead can have huge benefits to your computer’s performance, because the disk is usually the slowest part of your computer.

I’ve skipped over and simplified a bunch of stuff here in the interests of brevity, but this is a great video that gives a crash course on file systems and storage. I encourage you to watch it.

On the relative input / output speeds of modern computing components

I mentioned in the disclaimer at the start of this post that I’m not a disk specialist or expert. Scott Davis is probably a better bet as one of those. His bio lists an impressive wealth of experience, and mentions that he’s “a recognized expert in virtualization, clustering, operating systems, cloud computing, file systems, storage, end user computing and cloud native applications.”

I don’t know Scott at all (if you’re reading this, Hi, Scott!), but let’s just agree for now that he probably knows more about disks than I do.

I’m picking Scott as an expert because of a particularly illustrative analogy that was posted to a blog for a company he used to work for. The analogy compares the speeds of different media that can be used to store information on a computer. Specifically, it compares the following:

  1. RAM
  2. The network with a decent connection
  3. Flash drives
  4. Magnetic hard drives — what we’ve been discussing up until now.

For these media, the post claims that input / output speed can be measured using the following units:

  • RAM is in nanoseconds
  • 10GbE Network speed is in microseconds (~50 microseconds)
  • Flash speed is in microseconds (between 20-500+ microseconds)
  • Disk speed is in milliseconds

That all seems pretty fast. What’s the big deal? Well, it helps if we zoom in a little bit. The post does this by supposing that we pretend that RAM speed happens in minutes.

If that’s the case, then we’d have to measure network speed in weeks.

And if that’s the case, then we’d want to measure the speed of a Flash drive in months.

And if that’s the case, then we’d have to measure the speed of a magnetic spinny disk in decades.
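The arithmetic behind that scaling is easy to check yourself. The latency figures below (50 microseconds for the network, 500 microseconds for flash, 10 milliseconds for a disk seek) are illustrative ballparks, not measurements:

```javascript
// Pretend 1 nanosecond (RAM) is 1 minute, then rescale the other latencies.
const MINUTES_PER_DAY = 60 * 24;

function scaled(ns) {
  const minutes = ns; // 1 ns -> 1 minute
  const days = minutes / MINUTES_PER_DAY;
  return { minutes, days, years: days / 365 };
}

console.log(scaled(1).minutes);        // RAM (1 ns): 1 minute
console.log(scaled(50_000).days);      // network (50 us): ~35 days -> weeks
console.log(scaled(500_000).days);     // flash (500 us): ~347 days -> months
console.log(scaled(10_000_000).years); // disk (10 ms): ~19 years -> decades
```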

I wish I had some ACM paper, or something written by a computer science professor that I could point to you to bolster the following claim. I don’t, not because one doesn’t exist, but because I’m too lazy to look for one. I hope you’ll forgive me for that, but I don’t think I’m saying anything super controversial when I say:

In the common case, for a personal computer, it’s best to assume that reading and writing to the disk is the slowest operation you can perform.

Sure, there are edge cases where other things in the system might be slower. And there is that disk cache that I breezed over earlier that might make reading or writing cheaper. And sometimes the operating system tries to do smart things to help you. For now, just let it go. I’m making a broad generalization that I think covers the common cases, and I’m talking about what’s best to assume.

Single and multi-threaded restaurants

When I try to describe threading and concurrency to someone, I inevitably fall back to the metaphor of cooks in a kitchen in a restaurant. This is a special restaurant where there’s only one seat, for a single customer — you, the user.

Single-threaded programs

Let’s imagine a restaurant that’s very, very small and simple. In this restaurant, the cook is also acting as the waiter / waitress / server. That means when you place your order, the server / cook goes into the kitchen and makes it for you. While they’re gone, you can’t really ask for anything else — the server / cook is busy making the thing you asked for last.

This is how most simple, single-threaded programs work—the user feeds in requests, maybe by clicking a button, or typing something in, maybe something else entirely—and then the program goes off and does it and returns some kind of result. Maybe at that point, the program just exits (“The restaurant is closed! Come back tomorrow!”), or maybe you can ask for something else. It’s really up to how the restaurant / program is designed that dictates this.

Suppose you’re very, very hungry, and you’ve just ordered a complex five-course meal for yourself at this restaurant. Blanching, your server / cook goes off to the kitchen. While they’re gone, nobody is refilling your water glass or giving you breadsticks. You’re pretty sure there’s activity going on in the kitchen and that the server / cook hasn’t had a heart attack back there, but you’re going to be waiting a looooong time since there’s only one person working in this place.

Maybe in some restaurants, the server / cook will dash out periodically to refill your water glass, give you some breadsticks, and update you on how things are going, but it sure would be nice if we gave this person some help back there, wouldn’t it?
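In code, this one-person restaurant might look like the following sketch (the course names and timings are invented for illustration):

```python
import time

def cook(course):
    """The expensive, blocking part: the one cook makes one course."""
    time.sleep(0.01)  # stand-in for real work
    return f"{course}: ready"

def single_threaded_service(order):
    # The server/cook finishes each course before anything else can
    # happen; the customer just waits the whole time.
    return [cook(c) for c in order]

print(single_threaded_service(["soup", "salad", "roast"]))
```

While `cook` is sleeping, the program can't respond to anything else; that pause is the customer staring at an empty water glass.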

Multi-threaded programs

Let’s imagine a slightly different restaurant. There are more cooks in the kitchen. The server is available to take your order (but is also able to cook in the kitchen if need be), and you make your request from the menu.

Now suppose again that you order a five-course meal. The server goes to the kitchen and tells the cooks what you just ordered. In this restaurant, suppose the kitchen staff are a really great team and don’t get in each other’s way[3], so they divide up the order in a way that makes sense and get to work.

The server can come back and refill your water glass, feed you breadsticks, perhaps they can tell you an entertaining joke, perhaps they can take additional orders that won’t take as long. At any rate, in this restaurant, the interaction between the user and the server is frequent and rarely interrupted.
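The multi-cook kitchen might be sketched like this, with a thread pool playing the role of the kitchen staff (courses and timings are again invented):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def cook(course):
    time.sleep(0.01)  # stand-in for real work
    return f"{course}: ready"

order = ["soup", "salad", "roast"]

# Three cooks split the order and work concurrently; map() still
# hands the results back in menu order.
with ThreadPoolExecutor(max_workers=3) as kitchen:
    plates = list(kitchen.map(cook, order))

print(plates)
```

The whole order finishes in roughly the time of the slowest course, instead of the sum of all of them.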

The waiter / waitress / server is the main thread

In these two examples, the waiter / waitress / server is what is usually called the main thread of execution, which is the part of the program that the user interacts with most directly. By moving expensive operations off of the main thread, the responsiveness of the program increases.

Have you ever seen the mouse cursor turn into an hourglass, or seen the “This program is not responding” message on Windows? Or the spinny colourful pinwheel on macOS? In those cases, the main thread is off doing something and never came back to give you your order or refill your water or breadsticks — that’s how it generally manifests in common operating systems. The program seems “unresponsive”, “sluggish”, “frozen”. It’s “hanging”, or “stuck”. When I hear those words, my immediate assumption is that the main thread is busy doing something — either it’s taking a long time (it’s making you your massive five-course meal, maybe not as efficiently as it could), or it’s stuck (maybe they fell down a well!).

In either case, the general rule of thumb to improving program responsiveness is to keep the server filling the user’s water and breadsticks by offloading complex things on the menu to other cooks in the kitchen.
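As a sketch of that rule of thumb: hand the big job to a worker thread, and the main thread stays free to keep interacting (the sleeps and counts are arbitrary stand-ins):

```python
import threading
import time

def five_course_meal():
    time.sleep(0.05)  # the long, expensive job

# Hand the meal to another cook so the main thread stays free.
kitchen = threading.Thread(target=five_course_meal)
kitchen.start()

refills = 0
while kitchen.is_alive():
    refills += 1          # refill water, offer breadsticks...
    time.sleep(0.005)

kitchen.join()
print(f"served the customer {refills} times while the meal cooked")
```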

Accessing the disk on the main thread

Recall that in the common case, for a personal computer, it’s best to assume that reading and writing to the disk is the slowest operation you can perform. In our restaurant example, reading or writing to the disk on the main thread is a bit like having your server hop onto their bike and ride out to the next town over to grab some groceries to help make what you ordered.

And sometimes, because of data fragmentation (not everything is all in one place), the server has to search amongst many, many shelves, all widely spaced apart, to get everything.

And sometimes the grocery store is very busy because there are other restaurants out there that are grabbing supplies.

And sometimes there are police checks (anti-virus / anti-malware software) occurring for passengers along the road, where they all have to show their IDs before being allowed through.

It’s an incredibly slow operation. Hopefully, by the time the server comes back, they have everything they need, but they may have to head back out if they discover they were missing some more ingredients.[4]

Slow slow slow. And unresponsive. And a great way to lose a hungry customer.

For super small programs, where the kitchen is well stocked, or the ride to the grocery store doesn’t need to happen often, having a single thread that reads or writes is usually okay. I’ve certainly written my fair share of utility programs or scripts that do main-thread disk access.

Firefox, the program I spend most of my time working on as my job, is not a small program. It’s a very, very, very large program. Using our restaurant model, it’s many large restaurants with many many cooks on staff. The restaurants communicate with each other and ship food and supplies back and forth using messenger bikes, to provide to you, the customer, the best meals possible.

But even with this large set of restaurants, there’s still only a single waiter / waitress / server / main thread of execution as the point of contact with the user.

Part of my job is to help organize the workflows of this restaurant so that they provide those meals as quickly as possible. Sending the server to the grocery store (main thread disk access) is part of the workflow that we absolutely need to strike from the list.
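Concretely, striking main-thread disk access usually means handing the read to a worker thread. A minimal sketch, using a throwaway temp file so it stays self-contained (the file contents are invented):

```python
import tempfile
import threading

# Stock a throwaway "grocery store" for the example.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    f.write("flour, eggs, butter")
    path = f.name

result = {}

def fetch_groceries(p):
    # The slow bike ride to the store happens on a worker thread,
    # never on the main thread.
    with open(p) as fh:
        result["groceries"] = fh.read()

worker = threading.Thread(target=fetch_groceries, args=(path,))
worker.start()
# ...the main thread is free to keep serving the user here...
worker.join()
print(result["groceries"])
```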

Start-up main-thread disk access

Going back to our analogy, imagine starting the program like opening the restaurant. The lights go on, the chairs come off of the tables, the kitchen gets warmed up, and prep begins.

While this is occurring, it’s all hands on deck — the server might be off in the kitchen helping to do prep, off getting cutlery organized, whatever it takes to get the restaurant open and ready to serve. Before the restaurant is open, there’s no point in having the server be idle, because the customer hasn’t been able to come in yet.

So if critical groceries and supplies needed to open the restaurant need to be gotten before the restaurant is open, it’s fine to send the server to the store. Somebody has to do it.

For Firefox, there are various things that need to take place before we can display any UI. During that window, it’s usually fine to do main-thread disk access, so long as all of the things being read or written are kept to an absolute minimum. Find out how much you need to do, and reduce it as much as possible.

But as soon as UI is presented to the user, the restaurant is open. At that point, the server should stay off their bike and keep chatting with the customer, even if the kitchen hasn’t finished setting up and getting all of their supplies. So to stay responsive, don’t do disk access on the main thread of execution after you’ve started to show the user some kind of UI.
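In an event-driven program, that rule typically translates to handing file work to an executor so the event loop never blocks. A sketch (this is not Firefox’s actual code; the file and its contents are invented):

```python
import asyncio
import tempfile

with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("user settings")
    path = f.name

def blocking_read(p):
    with open(p) as fh:
        return fh.read()

async def load_settings():
    loop = asyncio.get_running_loop()
    # The event loop (our "server") keeps handling events while a
    # thread from the default executor does the disk work.
    return await loop.run_in_executor(None, blocking_read, path)

print(asyncio.run(load_settings()))
```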

Disk contention

There’s one last complication I want to capture here with our restaurant example before I wrap up. I’ve been saying that it’s important to send someone other than the server to the grocery store for supplies. That’s true — but be careful of sending too many other people at the same time.

Moving disk access off of the main thread is good for responsiveness, full stop. However, it might do nothing to actually improve the overall time that it takes to complete some amount of work. Put another way: just because the server is refilling your glass and giving you breadsticks doesn’t mean that your five-course meal is going to show up any faster.

Also, disk operations on magnetic drives do not have a constant speed. Having the disk do many things at once within a single program or across multiple programs can slow the whole set of operations down due to the overhead of seeking and context switching, since the operating system will try to serve all disk requests at once, more or less.[5]

Disk contention and main thread disk access is something I think a lot about these days while my team and I work on improving Firefox start-up performance.
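One common mitigation is to funnel all disk requests through a single dedicated worker so they are served one at a time instead of competing. A sketch (the file names are invented for illustration):

```python
import queue
import threading

disk_queue = queue.Queue()
served = []

def disk_worker():
    # One dedicated "shopper": requests are handled strictly one at
    # a time, so concurrent callers can't make the disk thrash.
    while True:
        job = disk_queue.get()
        if job is None:
            break
        served.append(f"read {job}")
        disk_queue.task_done()

worker = threading.Thread(target=disk_worker)
worker.start()

for name in ["settings.json", "cache.bin", "history.db"]:
    disk_queue.put(name)

disk_queue.join()     # wait until every queued request is done
disk_queue.put(None)  # sentinel: tell the worker to stop
worker.join()
print(served)
```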

Some questions to ask yourself when touching disk

So it’s important to be thoughtful about disk access. Are you working on code that touches disk? Here are some things to think about:

Is UI visible, and responsiveness a goal?

If so, it’s best to move the disk access off of the main thread. That was the main thing I wanted to capture, and I hope I’ve convinced you of that point by now.

Does the access need to occur?

As programs age and grow and contributors come and go, sometimes it’s important to take a step back and ask, “Are the assumptions of this disk access still valid? Does this access need to happen at all?” The fastest code is the code that doesn’t run at all.
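A tiny illustration of that principle: cache the result of a read so repeated requests never touch the disk again (the file and its contents are invented):

```python
import functools
import tempfile

with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("expensive-to-fetch settings")
    path = f.name

disk_reads = {"count": 0}

@functools.lru_cache(maxsize=None)
def read_config(p):
    # Only the first call touches the disk; repeats hit the cache.
    disk_reads["count"] += 1
    with open(p) as fh:
        return fh.read()

for _ in range(5):
    read_config(path)

print(f"5 requests, {disk_reads['count']} actual disk read")
```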

What else is happening during this disk access? Can disk access be prioritized more efficiently?

This is often trickier to answer as a program continues to run. Thankfully, tools like profilers can help capture recordings of things like disk access to gain evidence of simultaneous disk access.

Start-up is a special case though, since there’s usually a somewhat deterministic / reliably stable set of operations that occur in the same way in roughly the same order during start-up. For start-up, using a tool like a profiler, you can gain a picture of the sorts of things that tend to happen during that special window of time. If you notice a lot of disk activity occurring simultaneously across multiple threads, perhaps ponder if there’s a better way of ordering those operations so that the most important ones complete first.
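As a toy stand-in for what a real profiler captures, you can wrap each start-up step to record when it ran and how long it took (the step names and sleeps are invented):

```python
import time
from contextlib import contextmanager

timeline = []

@contextmanager
def mark(label):
    # Record each start-up step and its duration, similar in spirit
    # to one track of a profiler recording.
    start = time.monotonic()
    try:
        yield
    finally:
        timeline.append((label, time.monotonic() - start))

with mark("read prefs"):
    time.sleep(0.01)   # stand-in for a disk read
with mark("load session"):
    time.sleep(0.02)   # stand-in for another one

for label, seconds in timeline:
    print(f"{label}: {seconds:.3f}s")
```

Looking at the resulting timeline is one way to spot steps that overlap or run in a worse order than they need to.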

Can we reduce how much we need to read or write?

There are lots of wonderful compression algorithms out there with a variety of performance characteristics that might be worth pondering. It might be worth compressing the data that you’re storing before writing it, so that the disk has less to write and less to read.

Of course, there’s compression and decompression overhead to consider here. Is it worth the CPU time to save the disk time? Is there some other CPU intensive task that is more critical that’s occurring?
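A quick sketch of the tradeoff with Python’s gzip module (the payload here is deliberately repetitive, so it compresses far better than typical real data would):

```python
import gzip

# Highly repetitive data compresses extremely well.
payload = b"the quick brown fox jumps over the lazy dog " * 200

compressed = gzip.compress(payload)

# Fewer bytes hit the disk, at the cost of CPU time on each side.
print(f"{len(payload)} bytes -> {len(compressed)} bytes on disk")
```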

Can we organize the things that we want to read ahead of time so that they’re more likely to be read contiguously (without seeking the disk)?

If you know ahead of time the sorts of things that you’re going to be reading off of the disk, it’s generally a good strategy to store them in that read order. That way, in the best case scenario (the disk is defragmented), the read head can fly along the sectors and read everything in, in exactly the right order you want them. If the user has defragmented their disk, but the things you’re asking for are all out of order on the disk, you’re adding overhead to seek around to get what you want.

Supposing that the data on the disk is fragmented, I suspect having the files in order anyways is probably better than not, but I don’t think I know enough to prove it.
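A sketch of why ordering matters, using a crude head-travel model (the offsets are invented, and real disks are more complicated than this):

```python
def head_travel(reqs):
    """Total distance the head moves serving requests in order."""
    pos = travel = 0
    for offset, _length in reqs:
        travel += abs(offset - pos)
        pos = offset
    return travel

# Hypothetical (offset, length) reads, in the order they arrived.
arrival_order = [(4096, 512), (0, 512), (8192, 512), (1024, 512)]
sorted_order = sorted(arrival_order)

print(head_travel(arrival_order), "->", head_travel(sorted_order))
```

Sorting the requests by offset lets the head sweep forward once instead of bouncing back and forth, which in this toy model cuts the travel by about two-thirds.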

Flawed but useful

One of my mentors, Greg Wilson, likes to say that “all models are flawed, but some are useful”. I don’t think he coined it, but he uses it in the right places at the right times, and to me, that’s what counts.

The information in this post is not exhaustive — I glossed over and left out a lot. It’s flawed. Still, I hope it can be useful to you.


Thanks to the following folks who read drafts of this and gave feedback:

  • Mandy Cheang
  • Emily Derr
  • Gijs Kruitbosch
  • Doug Thayer
  • Florian Quèze

  1. There are also newer forms of disks called Flash disks and SSDs. I’m not really going to cover those in this post. 

  2. The other thing to keep in mind is that the disk cache can have its contents evicted at any time for reasons that are out of your control. If you time it right, you can maybe increase the probability of a file you want to read being in the cache, but don’t bet the farm on it. 

  3. When writing multi-threaded programs, this is much harder than it sounds! Mozilla actually developed a whole new programming language to make that easier to do correctly. 

  4. Keen readers might notice I’m leaving out a discussion on Paging. That’s because this blog post is getting quite long, and because it kinda breaks the analogy a bit — who sends groceries back to a grocery store? 

  5. I’ve never worked on an operating system, but I believe most modern operating systems try to do a bunch of smart things here to schedule disk requests in efficient ways. 

Mozilla VR Blog: Making ethical decisions for the immersive web

Making ethical decisions for the immersive web

One of the promises of immersive technologies is real time communication unrestrained by geography. This is as transformative as the internet, radio, television, and telephones—each represents a pivot in mass communications that provides new opportunities for information dissemination and creating connections between people. This raises the question, “what’s the immersive future we want?”

We want to be able to connect without traveling. Indulge our curiosity and creativity beyond our physical limitations. Revolutionize the way we visualize and share our ideas and dreams. Enrich everyday situations. Improve access to limited resources like healthcare and education.

The internet is an integral part of modern life—a key component in education, communication, collaboration, business, entertainment and society as a whole.
— Mozilla Manifesto, Principle 1

My first instinct is to say that I want an immersive future that brings joy. Do AR apps that help me maintain my car bring me joy? Not really.

What I really want is an immersive future that respects individual creators and users. Platforms and applications that thoughtfully approach issues of autonomy, privacy, bias, and accessibility in a complex environment. How do we get there? First, we need to understand the broader context of augmented and virtual reality in ethics, identifying overlap with both other technologies (e.g. artificial intelligence) and other fields (e.g. medicine and education). Then, we can identify the unique challenges presented by spatial and immersive technologies. Given the current climate of ethics and privacy, we can anticipate potential problems, identify the core issues, and evaluate different approaches.

From there, we have an origin for discussion and a path for taking practical steps that enable legitimate uses of MR while discouraging abuse and empowering individuals to make choices that are right for them.

For details and an extended discussion on these topics, see this paper.

The immersive web

Whether you have a $30 or $3000 headset, you should be able to participate in the same immersive universe. No person should be excluded due to their skin color, hairstyle, disability, class, location, or any other reason.

The internet is a global public resource that must remain open and accessible.
— Mozilla Manifesto, Principle 2

The immersive web represents an evolution of the internet. Immersive technologies are already deployed in education and healthcare. It's unethical to limit their benefits to a privileged few, particularly when MR devices can improve access to limited resources. For example, Americans living in rural areas are underserved by healthcare, particularly specialist care. In an immersive world, location is no longer an obstacle. Specialists can be virtually present anywhere, just as if they were in the room with the patient. Trained nurses and assistants would be required for physical manipulations and interactions, but this could dramatically improve health coverage and reduce burdens on both patients and providers.

While we can build accessibility into browsers and websites, the devices themselves need to be created with appropriate accommodations, like settings that indicate a user is in a wheelchair. When we design devices and experiences, we need to consider how they'll work for people with disabilities. It's imperative to build inclusive MR devices and experiences, both because it's unethical to exclude users due to disability, and because there are so many opportunities to use MR as an assistive technology, including:

  • Real time subtitles
  • Gaze-based navigation
  • Navigation with vehicle and obstacle detection and warning

The immersive web is for everyone.

Representation and safety

Mixed reality offers new ways to connect with each other, enabling us to be virtually present anywhere in the world instantaneously. Like most technologies, this is both a good and a bad thing. While it transforms how we can communicate, it also offers new vectors for abuse and harassment.

All social VR platforms need to have simple and obvious ways to report abusive behavior and block the perpetrators. All social platforms, whether 2D or 3D, should have this, but VR-enabled embodiment intensifies the impact of harassment. Behavior that would once have been limited by physical presence is no longer geographically constrained, and identities can be more easily obfuscated. Safety is not a 'nice to have' feature — it's a requirement. Safety is a key component of inclusion and freedom of expression, as well as being a human right.

Freedom of expression in this paradigm includes both choosing how to present yourself and having the ability to maintain multiple virtual identities. Immersive social experiences allow participants to literally build their identities via avatars. Human identity is infinitely complex (and not always very human — personally, I would choose a cat avatar). Thoughtfully approaching diversity and representation in avatars isn't easy, but it is worthwhile.

Individuals must have the ability to shape the internet and their own experiences on it.
— Mozilla Manifesto, Principle 5

Suppose Banksy, a graffiti artist known both for their art and their anonymity, is an accountant by day who uses an HMD to conduct virtual meetings. Outside of work, Banksy is a virtual graffiti artist. However, biometric data could tie the two identities together, stripping Banksy of their anonymity. Anonymity enables free speech; it removes the threats of economic retaliation and social ostracism and allows consumers to process ideas free of prejudices about the creators. There's a long history of women who wrote under assumed names to avoid being dismissed for their gender, including J.K. Rowling and George Sand.

Unique considerations in mixed reality

Immersive technologies differ from others in their ability to affect our physical bodies. To achieve embodiment and properly interact with virtual elements, devices use a wide range of data derived from user biometrics, the surrounding physical world, and device orientation. As the technology advances, the data sources will expand.

The sheer amount of data required for MR experiences to function requires that we rethink privacy. Earlier, I mentioned that gaze-based navigation can be used to allow mobility impaired users to participate more fully on the immersive web. Unfortunately, gaze tracking data also exposes large amounts of nonverbal data that can be used to infer characteristics and mental states, including ADHD and sexual arousal.

Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.
— Mozilla Manifesto, Principle 4

While there may be a technological solution to this problem, it highlights a wider social and legal issue: we've become too used to companies monetizing our personal data. It would be easy to conclude that, although users report privacy concerns, they don't really care, because they 'consent' to disclosing personal information for small rewards. The reality is that privacy is hard. It's hard to define, and it's harder to defend. Processing privacy policies feels like it requires a law degree, and quantifying the risks and tradeoffs is all but impossible. In the US, we've treated privacy as an individual's responsibility, when Europe (with the General Data Protection Regulation) shows that it's society's problem and should be tackled comprehensively.

Concrete steps for ethical decision making

Ethical principles aren't enough. We also need to take action — while some solutions will be technical, there are also legal, regulatory, and societal challenges that need to be addressed.

  1. Educate and assist lawmakers
  2. Establish a regulatory authority for flexible and responsive oversight
  3. Engage engineers and designers to incorporate privacy by design
  4. Empower users to understand the risks and benefits of immersive technology
  5. Incorporate experts from other fields who have addressed similar problems

Tech needs to take responsibility. We've built technology that has incredible positive potential, but also serious risks for abuse and unethical behavior. Mixed reality technologies are still emerging, so there's time to shape a more respectful and empowering immersive world for everyone.

Hacks.Mozilla.Org: Empowering User Privacy and Decentralizing IoT with Mozilla WebThings

Smart home devices can make busy lives a little easier, but they also require you to give up control of your usage data to companies for the devices to function. In a recent article from the New York Times’ Privacy Project about protecting privacy online, the author recommended that people not buy Internet of Things (IoT) devices unless they’re “willing to give up a little privacy for whatever convenience they provide.”

This is sound advice, since smart home companies can not only know if you’re at home when you say you are; they’ll soon be able to listen for your sniffles through their always-listening microphones and recommend sponsored cold medicine from affiliated vendors. Moreover, by both requiring that users’ data go through their servers and by limiting interoperability between platforms, leading smart home companies are chipping away at people’s ability to make real, nuanced technology choices as consumers.

At Mozilla, we believe that you should have control over your devices and the data that smart home devices create about you. You should own your data, you should have control over how it’s shared with others, and you should be able to contest when a data profile about you is inaccurate.

Mozilla WebThings follows the privacy by design framework, a set of principles developed by Dr. Ann Cavoukian, that takes users’ data privacy into account throughout the whole design and engineering lifecycle of a product’s data process. Prioritizing people over profits, we offer an alternative approach to the Internet of Things, one that’s private by design and gives control back to you, the user.

User Research Findings on Privacy and IoT Devices

Before we look at the design of Mozilla WebThings, let’s talk briefly about how people think about their privacy when they use smart home devices and why we think it’s essential that we empower people to take charge.

Today, when you buy a smart home device, you are buying the convenience of being able to control and monitor your home via the Internet. You can turn a light off from the office. You can see if you’ve left your garage door open. Prior research has shown that users are passively, and sometimes actively, willing to trade their privacy for the convenience of a smart home device. When it seems like there’s no alternative between having a potentially useful device or losing their privacy, people often uncomfortably choose the former.

Still, although people are buying and using smart home devices, it does not mean they’re comfortable with this status quo. In one of our recent user research surveys, we found that almost half (45%) of the 188 smart home owners we surveyed were concerned about the privacy or security of their smart home devices.

Bar graph showing about 45% of the 188 current smart home owners we surveyed were concerned about their privacy or security at least a few times per month.

User Research Survey Results

In Fall 2018, our user research team conducted a diary study with eleven participants across the United States and the United Kingdom. We wanted to know how usable and useful people found our WebThings software. So we gave each of our research participants some Raspberry Pis (loaded with the Things 0.5 image) and a few smart home devices.

User research participants were given a Raspberry Pi, a smart light, a motion sensor, a smart plug, and a door sensor.

Smart Home Devices Given to Participants for User Research Study

We watched, either in-person or through video chat, as each individual walked through the set up of their new smart home system. We then asked participants to write a ‘diary entry’ every day to document how they were using the devices and what issues they came across. After two weeks, we sat down with them to ask about their experience. While a couple of participants who were new to smart home technology were ecstatic about how IoT could help them in their lives, a few others were disappointed with the lack of reliability of some of the devices. The rest fell somewhere in between, wanting improvements such as more sophisticated rules functionality or a phone app to receive notifications on their iPhones.

We also learned more about people’s attitudes and perceptions around the data they thought we were collecting about them. Surprisingly, all eleven of our participants expected data to be collected about them. They had learned to expect data collection, as this has become the prevailing model for other platforms and online products. A few thought we would be collecting data to help improve the product or for research purposes. However, upon learning that no data had been collected about their use, a couple of participants were relieved that they would have one less thing, data, to worry about being misused or abused in the future.

By contrast, others said they weren’t concerned about data collection; they did not think companies could make use of what they believed was menial data, such as when they were turning a light on or off. They did not see the implications of how collected data could be used against them. This showed us that we can improve on how we demonstrate to users what others can learn from their smart home data. For example, someone can tell when you’re not home based on when your door has opened and closed.

graph of door sensor logs over a week reveals when someone is not home

Door Sensor Logs can Reveal When Someone is Not Home

From our user research, we’ve learned that people are concerned about the privacy of their smart home data. And yet, when there’s no alternative, they feel the need to trade away their privacy for convenience. Others aren’t as concerned because they don’t see the long-term implications of collected smart home data. We believe privacy should be a right for everyone regardless of their socioeconomic or technical background. Let’s talk about how we’re doing that.

Decentralizing Data Management Gives Users Privacy

Vendors of smart home devices have architected their products to be more of a service to themselves than to their customers. Using the typical IoT stack, in which devices don’t easily interoperate, they can build a robust picture of user behavior, preferences, and activities from the data they collect on their servers.

Take the simple example of a smart light bulb. You buy the bulb, and you download a smartphone app. You might have to set up a second box to bridge data from the bulb to the Internet and perhaps a “user cloud subscription account” with the vendor so that you can control the bulb whether you’re home or away. Now imagine five years into the future when you have installed tens to hundreds of smart devices including appliances, energy/resource management devices, and security monitoring devices. How many apps and how many user accounts will you have by then?

The current operating model requires you to give your data to vendor companies for your devices to work properly. This, in turn, requires you to work with or around companies and their walled gardens.

Mozilla’s solution puts the data back in the hands of users. In Mozilla WebThings, there are no company cloud servers storing data from millions of users. User data is stored in the user’s home. Backups can be stored anywhere. Remote access to devices occurs from within one user interface. Users don’t need to download and manage multiple apps on their phones, and data is tunneled through a private, HTTPS-encrypted subdomain that the user creates.

The only data Mozilla receives is the ping a user’s subdomain sends to our server to check for updates to the WebThings software. And if a user only wants to control their devices locally and not have anything go through the Internet, they can choose that option too.

Decentralized distribution of WebThings Gateways in each home means that each user has their own private “data center”. The gateway acts as the central nervous system of their smart home. By having smart home data distributed in individual homes, it becomes more of a challenge for unauthorized hackers to attack millions of users. This decentralized data storage and management approach offers a double advantage: it provides complete privacy for user data, and it securely stores that data behind a firewall that uses best-of-breed https encryption.

The figure below compares Mozilla’s approach to that of today’s typical smart home vendor.

Comparison of Mozilla’s Approach to Typical Smart Home Vendor

Mozilla’s approach gives users an alternative to current offerings, providing them with data privacy and the convenience that IoT devices can provide.

Ongoing Efforts to Decentralize

In designing Mozilla WebThings, we have consciously insulated users from servers that could harvest their data, including our own Mozilla servers, by offering an interoperable, decentralized IoT solution. Our decision to not collect data is integral to our mission and additionally feeds into our Emerging Technology organization’s long-term interest in decentralization as a means of increasing user agency.

WebThings embodies our mission to treat personal security and privacy on the Internet as a fundamental right, giving power back to users. From Mozilla’s perspective, decentralized technology has the ability to disrupt centralized authorities and provide more user agency at the edges, to the people.

Decentralization can be an outcome of social, political, and technological efforts to redistribute the power of the few and hand it back to the many. We can achieve this by rethinking and redesigning network architecture. By enabling IoT devices to work on a local network without the need to hand data to connecting servers, we decentralize the current IoT power structure.

With Mozilla WebThings, we offer one example of how a decentralized, distributed system over web protocols can impact the IoT ecosystem. Concurrently, our team has an unofficial draft Web Thing API specification to support standardized use of the web for other IoT device and gateway creators.

While this is one way we are making strides to decentralize, there are complementary projects, ranging from conceptual to developmental stages, with similar aims to put power back into the hands of users. Signals from other players, such as FreedomBox Foundation, Daplie, and Douglass, indicate that individuals, households, and communities are seeking the means to govern their own data.

By focusing on people first, Mozilla WebThings gives people back their choice: whether it’s about how private they want their data to be or which devices they want to use with their system.

This project is an ongoing effort. If you want to learn more or get involved, check out the Mozilla WebThings Documentation. You can contribute to our documentation, or get started building your own web things or Gateway.

If you live in the Bay Area, you can find us this weekend at Maker Faire Bay Area (May 17-19). Stop by our table. Or follow @mozillaiot to learn about upcoming workshops and demos.

The post Empowering User Privacy and Decentralizing IoT with Mozilla WebThings appeared first on Mozilla Hacks - the Web developer blog.

Hacks.Mozilla.Org: TLS 1.0 and 1.1 Removal Update

tl;dr Enable support for Transport Layer Security (TLS) 1.2 today!


As you may have read last year in the original announcement posts, Safari, Firefox, Edge and Chrome are removing support for TLS 1.0 and 1.1 in March of 2020. If you manage websites, this means there’s less than a year to enable TLS 1.2 (and, ideally, 1.3) on your servers, otherwise all major browsers will display error pages, rather than the content your users were expecting to find.

Screenshot of a Secure Connection Failed error page

In this article we provide some resources to check your sites’ readiness, and start planning for a TLS 1.2+ world in 2020.

Check the TLS “Carnage” list

Once a week, the Mozilla Security team runs a scan on the Tranco list (a research-focused top sites list) and generates a list of sites still speaking TLS 1.0 or 1.1, without supporting TLS ≥ 1.2.

Tranco list top sites with TLS <= 1.1

As of this week, there are just over 8,000 affected sites from the one million listed by Tranco.

There are a few potential gotchas to be aware of, if you do find your site on this list:

  • 4% of the sites are using TLS ≤ 1.1 to redirect from a bare domain to www on TLS ≥ 1.2 (or vice versa). If you were to only check your site post-redirect, you might miss a potential footgun.
  • 2% of the sites don’t redirect from bare to www (or vice versa), but do support TLS ≥ 1.2 on one of them.

The vast majority (94%), however, are just bad—it’s TLS ≤ 1.1 everywhere.

If you find that a site you work on is in the TLS “Carnage” list, you need to come up with a plan for enabling TLS 1.2 (and 1.3, if possible). However, this list only covers 1 million sites. Depending on how popular your site is, you might have some work to do even if you’re not listed by Tranco.

Run an online test

Even if you’re not on the “Carnage” list, it’s a good idea to test your servers all the same. There are a number of online services that will do some form of TLS version testing for you, but only a few will clearly flag a lack of support for modern TLS versions. We recommend using one or more of the following:

Check developer tools

Another way to do this is to open up Firefox (versions 68+) or Chrome (versions 72+) DevTools, and look for the following warnings in the console as you navigate around your site.

Firefox DevTools console warning

Chrome DevTools console warning

What’s Next?

This October, we plan on disabling old TLS in Firefox Nightly, and you can expect the same for Chrome and Edge Canaries. We hope this will give sites enough time to upgrade before the change rolls out to release users.

The post TLS 1.0 and 1.1 Removal Update appeared first on Mozilla Hacks - the Web developer blog.

Cameron KaiserZombieLoad doesn't affect Power Macs

The latest in the continued death march of speculative execution attacks is ZombieLoad (see our previous analysis of Spectre and Meltdown on Power Macs). ZombieLoad uses the same types of observable speculation flaws to exfiltrate data but bases it on a new class of Intel-specific side-channel attacks utilizing a technique the investigators termed MDS, or microarchitectural data sampling. While Spectre and Meltdown attack at the cache level, ZombieLoad targets Intel HyperThreading (HT), the company's implementation of simultaneous multithreading, by trying to snoop on the processor's line fill buffers (LFBs) used to load the L1 cache itself. In this case, side-channel leakages of data are possible if the malicious process triggers certain specific and ultimately invalid loads from memory -- hence the nickname -- that require microcode assistance from the CPU; these have side-effects on the LFBs which can be observed by methods similar to Spectre by other processes sharing the same CPU core. (Related attacks against other microarchitectural structures are analogously implemented.)

The attackers don't have control over the observed address, so they can't easily read arbitrary memory, but careful scanning for the type of data you're targeting can still make the attack effective even against the OS kernel. For example, since URLs can be picked out of memory, this apparent proof of concept shows a separate process running on the same CPU victimizing Firefox to extract the URL as the user types it in. This works because as the user types, the values of the individual keystrokes go through the LFB to the L1 cache, allowing the malicious process to observe the changes and extract characters. There is much less data available to the attacking process but that also means there is less to scan, making real-time attacks like this more feasible.

That said, because the attack is specific to architectural details of HT (and the authors of the attack say they even tried on other SMT CPUs without success), this particular exploit wouldn't work even against modern high-SMT count Power CPUs like POWER9. It certainly won't work against a Power Mac because no Power Mac CPU ever implemented SMT, not even the G5. While Mozilla is deploying a macOS-specific fix, we don't need it in TenFourFox, nor do we need other mitigations. It's especially bad news for Intel because nearly every Intel chip since 2011 is apparently vulnerable and the performance impact of fixing ZombieLoad varies anywhere from Intel's Pollyanna estimate of 3-9% to up to 40% if HT must be disabled completely.

Is this a major concern for users? Not as such: although the attacks appear to be practical and feasible, they require you to run dodgy software and that's a bad idea on any platform because dodgy software has any number of better ways of pwning your computer. So don't run dodgy programs!

Meanwhile, TenFourFox FPR14 final should be available for testing this weekend.

The Rust Programming Language Blog4 years of Rust

On May 15th, 2015, Rust was released to the world! After 5 years of open development (and a couple of years of sketching before that), we finally hit the button on making the attempt to create a new systems programming language a serious effort!

It’s easy to look back on the pre-1.0 times and cherish them for being the wild times of language development and fun research. Features were added and cut, syntax and keywords were tried, and before 1.0, there was a big clean-up that removed a lot of the standard library. For fun, you can check Niko’s blog post on how Rust's object system works, Marijn Haverbeke’s talk on features that never made it close to 1.0 or even the introductory slides about Servo, which present a language looking very different from today.

Releasing Rust with stability guarantees also meant putting a stop to large visible changes. The face of Rust is still very similar to Rust 1.0. Even with the changes from last year’s 2018 Edition, Rust is still very recognizable as what it was in 2015. That steadiness hides that the time of Rust’s fastest development and growth is now. With the stability of the language and easy upgrades as a base, a ton of new features have been built. We’ve seen a bunch of achievements in the last year:

This list could go on and on. While the time before and after release was a time when language changes had a huge impact on how Rust is perceived, it's becoming more and more important what people start building in and around it. This includes projects like whole game engines, but also many small, helpful libraries, meetup formats, tutorials, and other educational material. Birthdays are a great time to take a look back over the last year and see the happy parts!

Rust would be nothing, and especially not winning prizes, without its community. Community happens everywhere! We would like to thank everyone for being along on this ride, from team members to small scale contributors to people just checking the language out and finding interest in it. Your interest and curiosity is what makes the Rust community an enjoyable place to be. Some meetups are running birthday parties today to which everyone is invited. If you are not attending one, you can take the chance to celebrate in any other fashion: maybe show us a picture of what you are currently working on or talk about what excites you. If you want to take it to social media, consider tagging our Twitter account or using the hashtag #rustbirthday.

Adrian Gaudebertreact-content-marker Released – Marking Content with React

Last year, in a React side-project, I had to replace some content in a string with HTML markup. That is not a trivial thing to do with React, as you can't just put HTML as a string in your content, unless you want to use dangerouslySetInnerHTML — which I don't. So, I hacked a little code to smartly split my string into an array of sub-strings and DOM elements.

More recently, while working on Translate.Next — the rewrite of Pontoon's translate page to React — I stumbled upon the same problem. After looking around the Web for a tool that would solve it, and coming up empty-handed, I decided to write my own and make it a library.

Introducing react-content-marker v1.0

react-content-marker is a library for React to mark content in a string based on rules. These rules can be simple strings or regular expressions. Let's look at an example.

Say you have a blob of text, and you want to make the numbers in that text more visible, for example by making them bold.

const content = 'The fellowship had 4 Hobbits but only 1 Dwarf.';

Matching numbers can be done with a simple regex: /(\d+)/. If we turn that into a parser:

const parser = {
    rule: /(\d+)/,
    tag: x => <strong>{ x }</strong>,
};

We can now use that parser to create a content marker, and use it to enhance our content:

import createMarker from 'react-content-marker';
const Marker = createMarker([parser]);
render(<Marker>{ content }</Marker>);

This will show:

The fellowship had 4 Hobbits but only 1 Dwarf.


Advanced usage

Passing parsers

The first thing to note is that you can pass any number of parsers to the createMarker function, and they will all be called in turn. The order of the parsers is very important though, because content that has already been marked will not be parsed again. Let's look at another example.

Say you have a rule that matches content between brackets: /({.*})/, and a rule that matches content between brackets that contain only capital letters: /({[A-W]+})/. Now let's say you are marking this content: I have {CATCOUNT} cats. Whichever rule you passed first will match the content between brackets, and the second rule will not apply. You thus need to make sure that your rules are ordered so that the most important ones come first. Generally, that means you want to have the more specific rules first.

The reason why this happens is that, behind the scenes, the matched content is turned into a DOM element, and parsers ignore non-string content. With the previous example, the initial string, I have {CATCOUNT} cats, would be turned into ['I have ', <mark>{CATCOUNT}</mark>, ' cats'] after the first parser is called. The second one then only looks at 'I have ' and ' cats', which do not match.
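The pass described above can be sketched in a language-agnostic way. The following Python approximation (not the library's actual code) applies each rule in order; pieces that are already marked become tuples, and later rules skip them:

```python
import re

def mark(content, rules):
    """Apply regex rules in order, splitting the string into plain
    segments and ('mark', text) tuples; later rules only ever see the
    segments that are still plain strings."""
    pieces = [content]
    for rule in rules:
        out = []
        for piece in pieces:
            if not isinstance(piece, str):
                out.append(piece)  # already marked: parsers ignore it
                continue
            last = 0
            for m in re.finditer(rule, piece):
                out.append(piece[last:m.start(1)])
                out.append(("mark", m.group(1)))  # would be a DOM element
                last = m.end(1)
            out.append(piece[last:])
        pieces = [p for p in out if p != ""]
    return pieces

print(mark("I have {CATCOUNT} cats", [r"({.*})", r"({[A-W]+})"]))
# ['I have ', ('mark', '{CATCOUNT}'), ' cats'] -- the second rule never fires
```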

Using regex

The second important thing to know relates to regex. You might have noticed that I put parentheses in my examples above: they are required for the algorithm to capture content. But that also gives you more flexibility: you can use a regex that matches some content that you do not want to mark. Let's say you want to match only the name of someone who's being greeted, with this rule: /hello (\w+)/i. Applying it to Hello Adrian will only mark the Adrian part of that content.

Sometimes, however, you need to use more complex regex that include several groups of parentheses. When that's the case, by default react-content-marker will mark the content of the last non-null capturing group. In such cases, you can add a matchIndex number to your parser: that index will be used to select the capture group to mark.

Here's a simple example:

const parser = {
    rule: /(hello (world|folks))/i,
    tag: x => <b>{ x }</b>,
};

Applying this rule to Hello World will show: Hello World. If we want to, instead, make the whole match bold, we'll have to use matchIndex:

const parser = {
    rule: /(hello (world|folks))/i,
    matchIndex: 0,
    tag: x => <b>{ x }</b>,
};

Now our entire string will correctly be made bold: Hello World.
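The selection rule can be mimicked with plain regular expressions. This Python sketch (an illustration, not the library's code) shows why the default marks only "World" while matchIndex: 0 selects the whole match:

```python
import re

def select_group(rule, content, match_index=None):
    """Default: mark the last non-null capturing group.
    match_index selects a specific group instead, 0 being the whole match."""
    m = re.search(rule, content, re.IGNORECASE)
    groups = [m.group(0)] + list(m.groups())  # index 0 = whole match
    if match_index is not None:
        return groups[match_index]
    return next(g for g in reversed(groups) if g is not None)

print(select_group(r"(hello (world|folks))", "Hello World"))     # World
print(select_group(r"(hello (world|folks))", "Hello World", 0))  # Hello World
```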

Advanced example

If you're interested in looking at an advanced usage example of this library, I recommend you check out how we use in Pontoon, Mozilla's localization platform. We have a long list of parsers there, and they have a lot of edge-cases.

Installation and stuff

react-content-marker is available on npm, so you can easily install it with your favorite javascript package manager:

npm install -D react-content-marker
# or
yarn add react-content-marker

The code is released under the BSD 3-Clause License, and is available on github. If you hit any problems with it, or have a use case that is not covered, please file an issue. And of course, you are always welcome to contribute a patch!

I hope this is useful to someone out there. It has been for me at least, on Pontoon and on several React-based side-projects. I like how flexible it is, and I believe it does more than any other similar tool I could find around the Web.

Mike HoyeThe Next Part Of The Process


I’ve announced this upcoming change and the requirements we’ve laid out for a replacement service for IRC, but I haven’t widely discussed the evaluation process in any detail, including what you can expect it to look like, how you can participate, and what you can expect from me. I apologize for that, and really should have done so sooner.

Briefly, I’ll be drafting a template doc based on our stated requirements, and once that’s in good, markdowny shape we’ll be putting it on GitHub with preliminary information for each of the stacks we’re considering and opening it up to community discussion and participation.

From there, we’re going to be taking pull requests and assembling our formal understanding of each of the candidates. As well, we’ll be soliciting more general feedback and community impressions of the candidate stacks on Mozilla’s Community Discourse forum.

I’ll be making an effort to ferry any useful information on Discourse back to GitHub, which unfortunately presents some barriers to some members of our community.

While this won’t be quite the same as a typical RFC/RFP process – I expect the various vendors as well as members of the Mozilla community to be involved – we’ll be taking a lot of cues from the Rust community’s hard-won knowledge about how to effectively run a public consultation process.

In particular, it’s critical to me that this process be as open and transparent as possible, explicitly time-boxed, and respectful of the Mozilla Community Participation Guidelines (CPG). As I’ve mentioned before, accessibility and developer productivity will both weigh heavily on our evaluation process, and the Rust community’s “no new rationale” guidelines will be respected when it comes time to make the final decision.

When it kicks off, this step will be widely announced both inside and outside Mozilla.

As part of that process, our IT team will be standing up instances of each of the candidate stacks and putting them behind the Participation Systems team’s “Mozilla-IAM” auth system. We’ll be making them available to the Mozilla community at first, and expanding that to include Github and via-email login soon afterwards for broader community testing. Canonical links to these trial systems will be prominently displayed on the GitHub repository; as the line goes, accept no substitutes.

Some things to note: we will also be using this period to evaluate these tools from a community moderation and administration perspective as well, to make sure that we have the tools and process available to meaningfully uphold the CPG.

To put this somewhat more charitably than it might deserve, we expect that some degree of this testing will be a typical if unfortunate byproduct of the participative process. But we also have plans to automate some of that stress-testing, to test both platform API usability and the effectiveness of our moderation tools. Which I suppose is a long-winded way of saying: you’ll probably see some robots in there play-acting at being jerks, and we’re going to ask you to play along and figure out how to flag them as bad actors so we can mitigate the jerks of the future.

As well, we’re going to be doing the usual whats-necessaries to avoid the temporary-permanence trap, and at the end of the evaluation period all the instances of our various candidates will be shut down and deleted.

Our schedule is still being sorted out, and I’ll have more about that and our list of candidates shortly.

Mozilla VR BlogSpoke, now on the Web

Spoke, now on the Web

Spoke, the editor that lets you create 3D scenes for use in Hubs, is now available as a fully featured web app. When we announced the beta for Spoke back in October, it was the first step towards making the process of creating social VR spaces easier for everyone. At Mozilla, we believe in the power of the web, and it was a natural decision for us to make Spoke more widely available by making the editor entirely available online - no downloads required.

The way that we communicate is often guided by the spaces that we are in. We use our understanding of the environment to give us cues to the tone of the room, and understanding how to build environments that reflect different use cases for social collaboration is an important element of how we view the Hubs platform. With Spoke, we want everyone to have creative control over their rooms from the ground (plane) up.

We’re constantly impressed by the content that 3D artists and designers create and we think that Spoke is the next step in making it easier for everyone to learn how to make their own 3D environments. Spoke isn’t trying to replace the wide range of 3D modeling or animation software out there, we just want to make it easier to bring all of that awesome work into one place so that more people can build with the media all of these artists have shared so generously with the world.

Spoke, now on the Web

When it comes to building rooms for Hubs, we want everyone to feel empowered to create custom spaces regardless of whether or not they already have experience building 3D environments. Spoke aims to make it as simple as possible to find, add, and remix what’s already out there on the web under the Creative Commons license. You can bring in your own glTF models, find models on Sketchfab and Google Poly, and publish your rooms to Hubs with just a single click, which makes it easier than ever before to create shared spaces. In addition to 3D models, the Spoke editor allows you to bring in images, gifs, and videos for your spaces. Create your own themed rooms in Hubs for movie nights, academic meetups, your favorite Twitch streams: the virtual world is yours to create!

Spoke, now on the Web

Spoke works by providing an easy-to-use interface to consolidate content into a glTF 2.0 binary (.glb) file that is read by Hubs to generate the base environment for a room. Basing Spoke on glTF, an extensible, standardized 3D file format, makes it possible to export robust scenes that could also be used in other 3D applications or programs.
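For the curious, the .glb container described above follows the glTF 2.0 binary layout: a 12-byte header (magic "glTF", version, total length) followed by length-prefixed chunks. A minimal Python sketch of that layout:

```python
import json
import struct

def glb_wrap(gltf_json):
    """Pack a glTF JSON document into a minimal .glb container:
    12-byte header (magic, version, length) + one JSON chunk."""
    payload = json.dumps(gltf_json).encode()
    payload += b" " * (-len(payload) % 4)  # JSON chunks are space-padded to 4 bytes
    chunk = struct.pack("<II", len(payload), 0x4E4F534A) + payload  # type 'JSON'
    header = struct.pack("<4sII", b"glTF", 2, 12 + len(chunk))
    return header + chunk

def glb_version(blob):
    magic, version, _length = struct.unpack_from("<4sII", blob, 0)
    assert magic == b"glTF"
    return version

blob = glb_wrap({"asset": {"version": "2.0"}})
print(glb_version(blob))  # 2
```

A real Spoke scene also carries a binary chunk with geometry and textures; the point here is just that the container is a simple, standardized framing around JSON.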

Once you’ve created scenes in Spoke, you can access them from your Projects page and use them as a base for making new rooms in Hubs. You can keep your scenes private for your own use, or publish them and make them available through the media browser for anyone in Hubs to enjoy. In the future, we plan on adding functionality that allows you to remix environments that others have marked as being open to derivative works.

We want to hear your feedback as you start using Spoke to create environments - try it out at You can view the source code and file issues on the GitHub repo, or jump into our community Discord server and say hi in the #spoke channel! If you have specific thoughts or considerations that you want to share with us, feel free to send us an email at to get in touch. We can’t wait to see what you make!

This Week In RustThis Week in Rust 286

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is panic-never, a crate to make every panic a link-time error. Thanks to ehsanmok for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

190 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

The big gorilla 3D game framework. Apparently it actually works.

SimonHeath on Amethyst

Thanks to Magnus Larsen for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

The Rust Programming Language BlogAnnouncing Rust 1.34.2

The Rust team has published a new point release of Rust, 1.34.2. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.34.2 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.34.2 stable

Sean McArthur reported a security vulnerability affecting the standard library that caused the Error::downcast family of methods to perform unsound casts when a manual implementation of the Error::type_id method returned the wrong TypeId, leading to security issues such as out of bounds reads/writes/etc.

The Error::type_id method was recently stabilized as part of Rust 1.34.0. This point release destabilizes it, preventing any code on the stable and beta channels from implementing or using it, awaiting future plans that will be discussed in issue #60784.

An in-depth explanation of this issue was posted in yesterday's security advisory. The assigned CVE for the vulnerability is CVE-2019-12083.

Mozilla Reps CommunityRep of the Month – April 2019

Please join us in congratulating Lidya Christina, Rep of the Month for April 2019!

Lidya Christina is from Jakarta, Indonesia. Her contribution to a SUMO event in 2016 led her to become a proud Mozillian and an active contributor of Mozilla Indonesia, and in March 2019 she joined the Reps program.


In addition to that, she’s also a key member who oversees the day-to-day operational work in the Mozilla Community Space Jakarta, while at the same time regularly organizing localization events and actively participating in campaigns like Firefox 66 Support Mozilla Sprint, Firefox Fights for you, Become a dark funnel detective and Common Voice sprints.

Congratulations and keep rocking the open web! :tada: :tada:

To congratulate Lidya, please head over to Discourse!

Support.Mozilla.OrgIntroducing Josh and Jeremy to the SUMO team

Today the SUMO team would like to welcome Josh and Jeremy who will be joining our team from Boise, Idaho.

Josh and Jeremy will be joining our team to help with support for some of the new efforts Mozilla is working on toward creating a connected and integrated Firefox experience.

They will be helping out with new products, but also providing support on forums and social channels, as well as serving as an escalation point for hard to solve issues.

A bit about Josh:

Hey everyone! My name is Josh Wilson and I will be working as a contractor for Mozilla. I have been working in a variety of customer support and tech support jobs over the past ten years. I enjoy camping and hiking during the summers, and playing console RPG’s in the winters. I recently started cooking Indian food, but this has been quite the learning curve for me. I am so happy to be a part of the Mozilla community and look forward to offering my support.

A bit about Jeremy:

Hello! My name is Jeremy Sanders and I’m a contractor of Mozilla through a small company named PartnerHero. I’ve been working in the field of Information Technology since 2015 and have been working with a variety of government, educational, and private entities. In my free time, I like to get out of the office and go fly fishing, camping, or hiking. I also play quite a few video games such as Counterstrike: Global Offensive and League of Legends. I am very excited to start my time here with Mozilla and begin working in conjunction with the community to provide support for users!

Please say hi to them when you see them!

The Rust Programming Language BlogSecurity advisory for the standard library

This is a cross-post of the official security advisory. The official post contains a signed version with our PGP key, as well.

The CVE for this vulnerability is CVE-2019-12083.

The Rust team was recently notified of a security vulnerability affecting manual implementations of Error::type_id and their interaction with the Error::downcast family of functions in the standard library. If your code does not manually implement Error::type_id your code is not affected.


The Error::type_id function in the standard library was stabilized in the 1.34.0 release on 2019-04-11. This function allows acquiring the concrete TypeId for the underlying error type to downcast back to the original type. This function has a default implementation in the standard library, but it can also be overridden by downstream crates. For example, the following is currently allowed on Rust 1.34.0 and Rust 1.34.1:

struct MyType;

impl Error for MyType {
    fn type_id(&self) -> TypeId {
        // Enable safe casting to `String` by accident.
        TypeId::of::<String>()
    }
}
When combined with the Error::downcast* family of methods this can enable safe casting of a type to the wrong type, causing security issues such as out of bounds reads/writes/etc.
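To see the shape of the bug without any Rust, here is a loose Python analogy (Python stays memory-safe, unlike Rust, so this only illustrates the trust problem): a downcast helper that believes a self-reported type id will hand back an object under the wrong type.

```python
class Base:
    def type_id(self):            # plays the role of Error::type_id
        return id(type(self))

class Liar(Base):
    def type_id(self):
        return id(str)            # falsely claims to be a str

def downcast(obj, target):
    # Trusts obj.type_id(), just as Error::downcast trusts Error::type_id.
    return obj if obj.type_id() == id(target) else None

x = downcast(Liar(), str)
print(type(x).__name__)  # Liar -- yet the caller believes it got a str
```

In Rust the analogous confusion lets safe code reinterpret one type's memory as another's, which is why the fix was to stop trusting overridable implementations.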

Prior to the 1.34.0 release this function was not stable and could not be either implemented or called in stable Rust.

Affected Versions

The Error::type_id function was first stabilized in Rust 1.34.0, released on 2019-04-11. The Rust 1.34.1 release, published 2019-04-25, is also affected. The Error::type_id function has been present, unstable, for all releases of Rust since 1.0.0 meaning code compiled with nightly may have been affected at any time.


Immediate mitigation of this bug requires removing manual implementations of Error::type_id, instead inheriting the default implementation which is correct from a safety perspective. It is not the intention to have Error::type_id return TypeId instances for other types.

For long term mitigation we are going to destabilize this function. This is unfortunately a breaking change for users calling Error::type_id and for users overriding it. For users overriding it, the implementation is likely memory-unsafe anyway; users calling Error::type_id have only been able to do so on stable for a few weeks since the 1.34.0 release, so it's thought that the impact will not be too great to overcome.

We will be releasing a 1.34.2 point release on 2019-05-14 (tomorrow) which reverts #58048 and destabilizes the Error::type_id function. The upcoming 1.35.0 release along with the beta/nightly channels will also all be updated with a destabilization.

The final fate of the Error::type_id API isn't decided upon just yet and is the subject of #60784. No action beyond destabilization is currently planned so nightly code may continue to exhibit this issue. We hope to fully resolve this in the standard library soon.

Timeline of events


Thanks to Sean McArthur, who found this bug and reported it to us in accordance with our security policy.

Daniel StenbergThe curl user survey 2019

the survey

For the 6th consecutive year, the curl project is running a “user survey” to learn more about what people are using curl for, what they think of curl, what they need from curl, and what they wish from curl going forward.

As in most projects, we love to learn more about our users and how to improve. For this, we need your input to guide us where to go next and what to work on going forward.

Please consider donating a few minutes of your precious time and tell me about your views on curl. How do you use it and what would you like to see us fix?

The survey will be up for 14 straight days and will be taken down at midnight (CEST) May 26th. We’d appreciate it if you encouraged your curl friends to participate in the survey as well.

Bonus: the analysis from the 2018 survey.

Nick Desaulniersf() vs f(void) in C vs C++


Prefer f(void) in C to potentially save a 2-byte instruction per function call when targeting x86_64, as a micro-optimization. -Wstrict-prototypes can help. Doesn’t matter for C++.

The Problem

While messing around with some C code in godbolt Compiler Explorer, I kept noticing a particular funny case. It seemed with my small test cases that sometimes function calls would zero out the return register before calling a function that took no arguments, but other times not. Upon closer inspection, it seemed like a difference between function definitions, particularly f() vs f(void). For example, the following C code:

int foo();
int bar(void);

int baz() {
  foo();
  bar();
  return 0;
}

would generate the following assembly:

  pushq %rax # realign stack for callq
  xorl %eax, %eax # zero %al, non variadic
  callq foo
  callq bar # why you no zero %al?
  xorl %eax, %eax
  popq %rcx
  retq

In particular, focus on the call to foo vs the call to bar. foo is preceded by xorl %eax, %eax (X ^ X == 0, and is the shortest encoding for an instruction that zeroes a register on the variable-length-encoded x86_64, which is why it's used a lot, such as in setting the return value). (If you're curious about the pushq/popq, see point #1.) Now I've seen zeroing before (see point #3, and remember that %al is the lowest byte of %eax and %rax), but if it was done for the call to foo, then why was it not also done for the call to bar? %eax, being x86_64's return register for the C ABI, should be treated as call-clobbered. So if you set it, then made a function call that may have clobbered it (and you can't deduce otherwise), then wouldn't you have to reset it to make an additional function call?

Let’s look at a few more cases and see if we can find the pattern. Let’s take a look at 2 sequential calls to foo vs 2 sequential calls to bar:

int foo();
int quux() {
  foo(); // notice %eax is always zeroed
  foo(); // notice %eax is always zeroed
  return 0;
}

  pushq %rax
  xorl %eax, %eax
  callq foo
  xorl %eax, %eax
  callq foo
  xorl %eax, %eax
  popq %rcx
  retq

int bar(void);
int quuz() {
  bar(); // notice %eax is not zeroed
  bar(); // notice %eax is not zeroed
  return 0;
}

  pushq %rax
  callq bar
  callq bar
  xorl %eax, %eax
  popq %rcx
  retq

So it should be pretty clear now that the pattern is f(void) does not generate the xorl %eax, %eax, while f() does. What gives, aren’t they declaring f the same; a function that takes no parameters? Unfortunately, in C the answer is no, and C and C++ differ here.

An explanation

f() is not necessarily “f takes no arguments” but more of “I’m not telling you what arguments f takes (but it’s not variadic).” Consider this perfectly legal C and C++ code:

int foo();
int foo(int x) { return 42; }

It seems that C++ inherited this from C, but only in C++ does f() seem to have the semantics of “f takes no arguments,” as the previous examples all no longer have the xorl %eax, %eax. The same goes for f(void) in C or C++. That's because foo() and foo(int) are two different functions in C++ thanks to function overloading (thanks reddit user /u/OldWolf2). Also, it seems that C supported this difference for backwards compatibility with K&R C.

int bar(void);
int bar(int x) { return x + 42; }

Is an error in C, but in C++ thanks to function overloading these are two separate functions! (_Z3barv vs _Z3bari). (Thanks HN user pdpi, for helping me understand this. Cunningham’s Law ftw.)

Needless to say, if you write code like that, where your function declarations and definitions do not match, you will be put in prison. Do not pass go; do not collect $200. Control flow integrity analysis is particularly sensitive to these cases, manifesting in runtime crashes.
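To make the hazard concrete, here’s a contrived single-file sketch (names are mine, not from the post). Under pre-C23 rules (C23 changed () to mean (void)), the f()-style declaration and the definition are compatible, but a caller who has only seen the declaration gets no argument checking:

```c
/* Legal pre-C23 C (an error in C++): compile with e.g. -std=c17. */
int bar();                        /* "I'm not telling you what arguments bar takes" */
int bar(int x) { return x + 42; } /* compatible with the declaration above */

int call_bar(void) {
    /* In real code split across files, this caller would only see
       `int bar();`, so the compiler couldn't check the arguments. */
    return bar(0);
}
```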

What could a sufficiently smart compiler do to help?

-Wall and -Wextra will just flag the unused parameter (-Wunused-parameter). We need the help of -Wmissing-prototypes to flag the mismatch between declaration and definition. (An aside: I had a hard time remembering which was the declaration and which was the definition when learning C++. The mnemonic I came up with and still use today is: think of definition as in muscle definition; where the meat of the function is. Declarations are just hot air.) It’s not until we get to -Wstrict-prototypes that we get a warning that we should use f(void). -Wstrict-prototypes is kind of a stylistic warning, so that’s why it’s not part of -Wall or -Wextra. Stylistic warnings are in bikeshed territory (*cough* -Wparentheses *cough*).
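A minimal example of that warning in action (the file name and compiler invocation in the comment are illustrative):

```c
/* Compile with, e.g.: cc -Wstrict-prototypes -c warn_demo.c */
int f() { return 0; }     /* warned: function declaration isn't a prototype */
int g(void) { return 0; } /* explicitly takes no arguments: no warning */
```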

One issue with C and C++’s style of code sharing and encapsulation via headers is that declarations often aren’t enough for the powerful analysis techniques of production optimizing compilers (whether or not a pointer “escapes” is a big one that comes to mind). Let’s see if a “sufficiently smart compiler” could notice when we’ve declared f(), but via observation of the definition of f() noticed that we really only needed the semantics of f(void).

int puts(const char*);
int __attribute__((noinline)) foo2() {
  return 0;
}

int quacks() {
  foo2();
  foo2();
  return 0;
}

  pushq %rax
  callq foo2
  callq foo2
  xorl %eax, %eax
  popq %rcx

A-ha! So by having the full definition of foo2, in this case in the same translation unit, Clang was able to deduce that foo2 didn’t actually need the semantics of f(), so it could skip the xorl %eax, %eax we’d seen for f()-style declarations earlier. If we change foo2 to a declaration (as would be the case if it were defined in an external translation unit, with its declaration included via header), then Clang can no longer observe whether foo2’s definition differs or not from the declaration.

So Clang can potentially save you a single instruction (xorl %eax, %eax), whose encoding is only 2B, per call to a function declared in the style f(), but only IF the definition is in the same translation unit and doesn’t differ from the declaration, and you happen to be targeting x86_64. *deflated whew* But usually it can’t, because it’s only been provided the declaration via header.


I certainly think f() is prettier than f(void) (so C++ got this right), but pretty code may not always be the fastest, and it’s not always straightforward when to prefer one over the other.

So it seems that f() is fine for strictly C++ code. For C, or mixed C and C++, f(void) might be better.


Daniel Stenbergtiny-curl

curl, or libcurl specifically, is probably the world’s most popular and widely used HTTP client-side library, counting more than six billion installs.

curl is a rock solid and feature-packed library that supports a huge amount of protocols and capabilities that surpass most competitors. But this comes at a cost: it is not the smallest library you can find.

Within a 100K

Instead of being happy with getting told that curl is “too big” for certain use cases, I set a goal for myself: make it possible to build a version of curl that can do HTTPS and fit in 100K (including the wolfSSL TLS library) on a typical 32 bit architecture.

As a comparison, the tiny-curl shared library, when built on x86-64 Linux, is less than 25% of the size of the default library Debian ships.


But let’s not stop there. Users with this kind of strict size requirements are rarely running a full Linux installation or similar OS. If you are sensitive about storage to the exact kilobyte level, you usually run a more slimmed down OS as well – so I decided that my initial tiny-curl effort should be done on FreeRTOS. That’s a fairly popular and free RTOS for the more resource constrained devices.

This port is still rough, and I expect us to ship follow-up releases soon that improve the FreeRTOS port and ideally also add support for other popular RTOSes. Which RTOS would you like us to support that isn’t already supported?

Offer the libcurl API for HTTPS on FreeRTOS, within 100 kilobytes.

Maintain API

I strongly believe that the power of having libcurl in your embedded devices is partly powered by the libcurl API. The API that you can use for libcurl on any platform, that’s been around for a very long time and for which you can find numerous examples for on the Internet and in libcurl’s extensive documentation. Maintaining support for the API was of the highest priority.

Patch it

My secondary goal was to patch as cleanly as possible, so that we can upstream into the main curl source tree the changes that make sense and aren’t disturbing to the general code base, and so that the work we can’t upstream can be rebased on top of the curl code base with as little obstruction as possible going forward.

Keep the HTTPS basics

I just want to do HTTPS GET

That’s the mantra here. My patch disables a lot of protocols and features:

  • No protocols except HTTP(S) are supported
  • HTTP/1 only
  • No cookie support
  • No date parsing
  • No alt-svc
  • No HTTP authentication
  • No DNS-over-HTTPS
  • No .netrc parsing
  • No HTTP multi-part formposts
  • No shuffled DNS support
  • No built-in progress meter

They’re all disabled individually, though, so it is still easy to enable one or more of them for specific builds.

Downloads and versions?

Tiny-curl 0.9 is the first shot at this and can be downloaded from wolfSSL. It is based on curl 7.64.1.

Most of the patches in tiny-curl are being upstreamed into curl in the #3844 pull request. I intend to upstream most, if not all, of the tiny-curl work over time.


The FreeRTOS port of tiny-curl is licensed GPLv3 and not MIT like the rest of curl. This is an experiment to see how we can do curl work like this in a sustainable way. If you want this under another license, we’re open for business over at wolfSSL!

The Mozilla BlogGoogle’s Ad API is Better Than Facebook’s, But…

… with a few important omissions. Google’s tool meets four of experts’ five minimum standards


Last month, Mozilla released an analysis of Facebook’s ad archive API, a tool that allows researchers to understand how political ads are being targeted to Facebook users. Our goal: To determine if Facebook had fulfilled its promise to make political advertising more transparent. (It did not.)

Today, we’re releasing an analysis of Google’s ad archive API. Google also promised the European Union it would release an ad transparency tool ahead of the 2019 EU Parliament elections.

Our finding: Google’s API is a lot better than Facebook’s, but is still incomplete. Google’s API meets four of experts’ five minimum standards. (Facebook met two.)

Google does much better than Facebook in providing access to the data in a format that allows for real research and analysis. That is a hugely important requirement; this is a baseline researchers need. But while the data is usable, it isn’t complete. Google doesn’t provide data on the targeting criteria advertisers use, making it more difficult to determine whom people are trying to influence or how information is really spreading across the platform.

Below are the specifics of our Google API analysis:


[1] ✅

Researchers’ guideline: A functional, open API should have comprehensive political advertising content.

Google’s API: The full list of ads, campaigns, and advertisers are available, and can be searched and filtered. The entire database can be downloaded in bulk and analyzed at scale. There are shortcomings, however: There is no data on the audience the ads reached, like their gender, age, or region. And Google has included fewer ads in their database than Facebook, perhaps due to a narrower definition of “political ads.”

[2] ❌

Researchers’ guideline: A functional, open API should provide the content of the advertisement and information about targeting criteria.

Google’s API: While Google’s API does provide the content of the advertisements, like Facebook, it provides no information on targeting criteria, nor does the API provide engagement data (e.g., clicks). Targeting and engagement data is critical for researchers because it lets them see what types of users an advertiser is trying to influence, and whether or not their attempts were successful.


[3] ✅

Researchers’ guideline: A functional, open API should have up-to-date and historical data access.

Google’s API: The API appears to be up to date.

[4] ✅

Researchers’ guideline: A functional, open API should be accessible to and shareable with the general public.

Google’s API: Public access to the API is available through the Google Cloud Public Datasets program.

[5] ✅

Researchers’ guideline: A functional, open API should empower, not limit, research and analysis.

Google’s API: The tool has components that facilitate research, like: bulk download capabilities; no problematic bandwidth limits; search filters; and unique URLs for ads.


Overall: While the company gets a passing grade, Google doesn’t sufficiently allow researchers to study disinformation on its platform. The company also significantly delayed the release of their API, unveiling it only weeks before the upcoming EU elections and nearly two months after the originally promised deadline.

With the EU elections fewer than two weeks away, we hope Google (and Facebook) take action swiftly to improve their ad APIs — action that should have been taken months ago.

The post Google’s Ad API is Better Than Facebook’s, But… appeared first on The Mozilla Blog.

Support.Mozilla.OrgSUMO/Firefox Accounts integration

One of Mozilla’s goals is to deepen relationships with our users and better connect them with our products. For support this means integrating Firefox Accounts (FxA) as the authentication layer on support.mozilla.org.

What does this mean?

Currently support.mozilla.org is using its own auth/login system, where users log in with a username and password. We will replace this auth system with Firefox Accounts, and both users and contributors will be asked to connect their existing profiles to FxA.

This will not just help align with other Mozilla products but also be a vehicle for users to discover FxA and its many benefits.

In order to achieve this we are looking at the following milestones (the dates are tentative):

Transition period (May-June)

We will start with a transition period where users can log in using both their old username/password as well as Firefox Accounts. During this period new users registering to the site will only be able to create an account through Firefox Accounts. Existing users will get a recommendation to connect their Firefox Account through their existing profile but they will still be able to use their old username/password auth method if they wish. Our intention is to have banners across the site that will let users know about the change and how to switch to Firefox Accounts. We will also send email communications to active users (logged in at least once in the last 3 years).

Switching to Firefox Accounts will also bring a small change to our AAQ (Ask a Question) flow. Currently when users go through the Ask a Question flow they are prompted to log in/create an account in the middle of the flow (which is a bit of a frustrating experience). As we’re switching to Firefox Accounts and that login experience will no longer work, we will be moving the login/sign-up step to the beginning of the flow – meaning users will have to log in first before they can go through the AAQ. During the transition period non-authenticated users will not be able to use the AAQ flow. This will get back to normal during the Soft Launch period.

Soft Launch (end of June)

After the transition period we will enter a so-called “Soft Launch” period where we integrate the full new log in/sign up experiences and do the fine tuning. By this time the AAQ flow should have been updated and non-authenticated users can use it again. We will also send more emails to active users who haven’t done the switch yet and continue having banners on the site to inform people of the change.

Full Launch (July-August)

If the testing periods above go well, we should be ready to do the full switch in July or August. This means that no old SUMO logins will be accepted and all users will be asked to switch over to Firefox Accounts. We will also do a final round of communications.

Please note: As we’re only changing the authentication mechanism we don’t expect there to be any changes to your existing profile, username and contribution history. If you do encounter an issue please reach out to Madalina or Tasos (or file a bug through Bugzilla).

We’re excited about this change, but are also aware that we might encounter a few bumps on the way. Thank you for your support in making this happen.

If you want to help out, as always you can follow our progress on Github and/or join our weekly calls.

SUMO staff team

The Mozilla BlogWhat we do when things go wrong

We strive to make Firefox a great experience. Last weekend we failed, and we’re sorry.

An error on our part prevented new add-ons from being installed, and stopped existing add-ons from working. Now that we’ve been able to restore this functionality for the majority of Firefox users, we want to explain a bit about what happened and tell you what comes next.

Add-ons are an important feature of Firefox. They enable you to customize your browser and add valuable functionality to your online experience. We know how important this is, which is why we’ve spent a great deal of time over the past few years coming up with ways to make add-ons safer and more secure. However, because add-ons are so powerful, we’ve also worked hard to build and deploy systems to protect you from malicious add-ons. The problem here was an implementation error in one such system, with the failure mode being that add-ons were disabled. Although we believe that the basic design of our add-ons system is sound, we will be working to refine these systems so similar problems do not occur in the future.

In order to address this issue as quickly as possible, we used our “Studies” system to deploy the initial fix, which requires users to be opted in to Telemetry. Some users who had opted out of Telemetry opted back in, in order to get the initial fix as soon as possible. As we announced in the Firefox Add-ons blog at 2019-05-08T23:28:00Z, there is no longer a need to have Studies on to receive updates; please check that your settings match your personal preferences before we re-enable Studies, which will happen sometime after 2019-05-13T16:00:00Z. In order to respect our users’ potential intentions as much as possible, based on our current set up, we will be deleting all of our source Telemetry and Studies data for our entire user population collected between 2019-05-04T11:00:00Z and 2019-05-11T11:00:00Z.

Our CTO, Eric Rescorla, shares more about what happened technically in this post.

We would like to extend our thanks to the people who worked hard to address this issue, including the hundred or so community members and employees localizing content and answering questions on support.mozilla.org, Twitter, and Reddit.

There’s a lot more detail we will be sharing as part of a longer post-mortem which we will make public — including details on how we went about fixing this problem and why we chose this approach. You deserve a full accounting, but we didn’t want to wait until that process was complete to tell you what we knew so far. We let you down and what happened might have shaken your confidence in us a bit, but we hope that you’ll give us a chance to earn it back.

The post What we do when things go wrong appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgTechnical Details on the Recent Firefox Add-on Outage

Editor’s Note: May 9, 8:22 pt – Updated as follows: (1) Fixed verb tense (2) Clarified the situation with downstream distros. For more detail, see Bug 1549886.

Recently, Firefox had an incident in which most add-ons stopped working. This was due to an error on our end: we let one of the certificates used to sign add-ons expire which had the effect of disabling the vast majority of add-ons. Now that we’ve fixed the problem for most users and most people’s add-ons are restored, I wanted to walk through the details of what happened, why, and how we repaired it.

Background: Add-Ons and Add-On Signing

Although many people use Firefox out of the box, Firefox also supports a powerful extension mechanism called “add-ons”. Add-ons allow users to add third party features to Firefox that extend the capabilities we offer by default. Currently there are over 15,000 Firefox add-ons with capabilities ranging from blocking ads to managing hundreds of tabs.

Firefox requires that all add-ons that are installed be digitally signed. This requirement is intended to protect users from malicious add-ons by requiring some minimal standard of review by Mozilla staff. Before we introduced this requirement in 2015, we had serious problems with malicious add-ons.

The way that the add-on signing works is that Firefox is configured with a preinstalled “root certificate”. That root is stored offline in a hardware security module (HSM). Every few years it is used to sign a new “intermediate certificate” which is kept online and used as part of the signing process. When an add-on is presented for signature, we generate a new temporary “end-entity certificate” and sign that using the intermediate certificate. The end-entity certificate is then used to sign the add-on itself. Shown visually, this looks like this:

Diagram showing the digital signature workflow from Root to Add-on

Note that each certificate has a “subject” (to whom the certificate belongs) and an “issuer” (the signer). In the case of the root, these are the same entity, but for other certificates, the issuer of a certificate is the subject of the certificate that signed it.

An important point here is that each add-on is signed by its own end-entity certificate, but nearly all add-ons share the same intermediate certificate [1]. It is this certificate that encountered a problem: Each certificate has a fixed period during which it is valid. Before or after this window, the certificate won’t be accepted, and an add-on signed with that certificate can’t be loaded into Firefox. Unfortunately, the intermediate certificate we were using expired just after 1AM UTC on May 4, and immediately every add-on that was signed with that certificate became unverifiable and could not be loaded into Firefox.

Although add-ons all expired around midnight, the impact of the outage wasn’t felt immediately. The reason for this is that Firefox doesn’t continuously check add-ons for validity. Rather, all add-ons are checked about every 24 hours, with the time of the check being different for each user. The result is that some people experienced problems right away, some people didn’t experience them until much later. We at Mozilla first became aware of the problem around 6PM Pacific time on Friday May 3 and immediately assembled a team to try to solve the issue.

Damage Limitation

Once we realized what we were up against, we took several steps to try to avoid things getting any worse.

First, we disabled signing of new add-ons. This was sensible at the time because we were signing with a certificate that we knew was expired. In retrospect, it might have been OK to leave it up, but it also turned out to interfere with the “hardwiring a date” mitigation we discuss below (though eventually didn’t use) and so it’s good we preserved the option. Signing is now back up.

Second, we immediately pushed a hotfix which suppressed re-validating the signatures on add-ons. The idea here was to avoid breaking users who hadn’t re-validated yet. We did this before we had any other fix, and have removed it now that fixes are available.

Working in Parallel

In theory, fixing a problem like this looks simple: make a new, valid certificate and republish every add-on with that certificate. Unfortunately, we quickly determined that this wouldn’t work for a number of reasons:

  1. There are a very large number of add-ons (over 15,000) and the signing service isn’t optimized for bulk signing, so just re-signing every add-on would take longer than we wanted.
  2. Once add-ons were signed, users would need to get a new add-on. Some add-ons are hosted on Mozilla’s servers and Firefox would update those add-ons within 24 hours, but users would have to manually update any add-ons that they had installed from other sources, which would be very inconvenient.

Instead, we focused on trying to develop a fix which we could provide to all our users with little or no manual intervention.

After examining a number of approaches, we quickly converged on two major strategies which we pursued in parallel:

  1. Patching Firefox to change the date which is used to validate the certificate. This would make existing add-ons magically work again, but required shipping a new build of Firefox (a “dot release”).
  2. Generate a replacement certificate that was still valid and somehow convince Firefox to accept it instead of the existing, expired certificate.

We weren’t sure that either of these would work, so we decided to pursue them in parallel and deploy the first one that looked like it was going to work. At the end of the day, we ended up deploying the second fix, the new certificate, which I’ll describe in some more detail below.

A Replacement Certificate

As suggested above, there are two main steps we had to follow here:

  1. Generate a new, valid, certificate.
  2. Install it remotely in Firefox.

In order to understand why this works, you need to know a little more about how Firefox validates add-ons. The add-on itself comes as a bundle of files that includes the certificate chain used to sign it. The result is that the add-on is independently verifiable as long as you know the root certificate, which is configured into Firefox at build time. However, as I said, the intermediate certificate was broken, so the add-on wasn’t actually verifiable.

However, it turns out that when Firefox tries to validate the add-on, it’s not limited to just using the certificates in the add-on itself. Instead, it tries to build a valid chain of certificates starting at the end-entity certificate and continuing until it gets to the root. The algorithm is complicated, but at a high level, you start with the end-entity certificate and then find a certificate whose subject is equal to the issuer of the end-entity certificate (i.e., the intermediate certificate). In the simple case, that’s just the intermediate that shipped with the add-on, but it could be any certificate that the browser happens to know about. If we can remotely add a new, valid, certificate, then Firefox will try that as well. The figure below shows the situation before and after we install the new certificate.
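The chain-building idea can be sketched with a toy model (hypothetical structs and function names, matching on subject/issuer strings only; real chain building also verifies signatures, validity windows, and multiple candidate paths):

```c
#include <stddef.h>
#include <string.h>

/* Toy certificate: names only, no crypto. For illustration, not Firefox code. */
struct cert { const char *subject; const char *issuer; };

/* Walk from `leaf` toward the root: at each step, pick any known cert in
   `pool` whose subject matches the current cert's issuer. Installing a new
   valid intermediate into `pool` gives the search an alternative path. */
int chains_to_root(const struct cert *leaf, const struct cert *pool,
                   size_t n, const char *root_subject) {
    const struct cert *cur = leaf;
    for (int depth = 0; depth < 8; depth++) {   /* bound the walk */
        if (strcmp(cur->subject, root_subject) == 0)
            return 1;                           /* reached the trust anchor */
        const struct cert *next = NULL;
        for (size_t i = 0; i < n; i++)
            if (strcmp(pool[i].subject, cur->issuer) == 0) { next = &pool[i]; break; }
        if (!next)
            return 0;                           /* dead end: no known issuer */
        cur = next;
    }
    return 0;
}
```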

a diagram showing two workflows, before and after we installed a new valid certificate

Once the new certificate is installed, Firefox has two choices for how to validate the certificate chain: use the old invalid certificate (which won’t work) or use the new valid certificate (which will work). An important feature here is that the new certificate has the same subject name and public key as the old certificate, so that its signature on the End-Entity certificate is valid. Fortunately, Firefox is smart enough to try both until it finds a path that works, so the add-on becomes valid again. Note that this is the same logic we use for validating TLS certificates, so it’s relatively well understood code that we were able to leverage.[2]

The great thing about this fix is that it doesn’t require us to change any existing add-on. As long as we get the new certificate into Firefox, then even add-ons which are carrying the old certificate will just automatically verify. The tricky bit then becomes getting the new certificate into Firefox, which we need to do automatically and remotely, and then getting Firefox to recheck all the add-ons that may have been disabled.

Normandy and the Studies System

Ironically, the solution to this problem is a special type of add-on called a system add-on (SAO). In order to let us do research studies, we have developed a system called Normandy which lets us serve SAOs to Firefox users. Those SAOs automatically execute on the user’s browser and while they are usually used for running experiments, they also have extensive access to Firefox internal APIs. Important for this case, they can add new certificates to the certificate database that Firefox uses to verify add-ons.[3]

So the fix here is to build a SAO which does two things:

  1. Install the new certificate we have made.
  2. Force the browser to re-verify every add-on so that the ones which were disabled become active.

But wait, you say. Add-ons don’t work so how do we get it to run? Well, we sign it with the new certificate!

Putting it all together… and what took so long?

OK, so now we’ve got a plan: issue a new certificate to replace the old one, build a system add-on to install it on Firefox, and deploy it via Normandy. Starting from about 6 PM Pacific on Friday May 3, we were shipping the fix in Normandy at 2:44 AM, or after less than 9 hours, and then it took another 6-12 hours before most of our users had it. This is actually quite good from a standing start, but I’ve seen a number of questions on Twitter about why we couldn’t get it done faster. There are a number of steps that were time consuming.

First, it took a while to issue the new intermediate certificate. As I mentioned above, the Root certificate is in a hardware security module which is stored offline. This is good security practice, as you use the Root very rarely and so you want it to be secure, but it’s obviously somewhat inconvenient if you want to issue a new certificate on an emergency basis. At any rate, one of our engineers had to drive to the secure location where the HSM is stored. Then there were a few false starts where we didn’t issue exactly the right certificate, and each attempt cost an hour or two of testing before we knew exactly what to do.

Second, developing the system add-on takes some time. It’s conceptually very simple, but even simple programs require taking some care, and we really wanted to make sure we didn’t make things worse. And before we shipped the SAO, we had to test it, and that takes time, especially because it has to be signed. But the signing system was disabled, so we had to find some workarounds for that.

Finally, once we had the SAO ready to ship, it still takes time to deploy. Firefox clients check for Normandy updates every 6 hours, and of course many clients are offline, so it takes some time for the fix to propagate through the Firefox population. However, at this point we expect that most people have received the update and/or the dot release we did later.

Final Steps

While the SAO that was deployed with Studies should fix most users, it didn’t get to everyone. In particular, there are a number of types of affected users who will need another approach:

  • Users who have disabled either Telemetry or Studies.
  • Users on Firefox for Android (Fennec), where we don’t have Studies.
  • Users of downstream builds of Firefox ESR that don’t opt in to telemetry reporting.
  • Users who are behind HTTPS Man-in-the-middle proxies, because our add-on installation systems enforce key pinning for these connections, which proxies interfere with.
  • Users of very old builds of Firefox which the Studies system can’t reach.

We can’t really do anything about the last group — they should update to a new version of Firefox anyway because older versions typically have quite serious unfixed security vulnerabilities. We know that some people have stayed on older versions of Firefox because they want to run old-style add-ons, but many of these now work with newer versions of Firefox. For the other groups we have developed a patch to Firefox that will install the new certificate once people update. This was released as a “dot release” so people will get it — and probably have already — through the ordinary update channel. If you have a downstream build, you’ll need to wait for your build maintainer to update.

We recognize that none of this is perfect. In particular, in some cases, users lost data associated with their add-ons (an example here is the “multi-account containers” add-on).

We were unable to develop a fix that would avoid this side effect, but we believe this is the best approach for the most users in the short term. Long term, we will be looking at better architectural approaches for dealing with this kind of issue.


First, I want to say that the team here did amazing work: they built and shipped a fix in less than 12 hours from the initial report. As someone who sat in the meeting where it happened, I can say that people were working incredibly hard in a tough situation and that very little time was wasted.

With that said, obviously this isn’t an ideal situation and it shouldn’t have happened in the first place. We clearly need to adjust our processes both to make this and similar incidents less likely to happen and to make them easier to fix.

We’ll be running a formal post-mortem next week and will publish the list of changes we intend to make, but in the meantime here are my initial thoughts about what we need to do. First, we should have a much better way of tracking the status of everything in Firefox that is a potential time bomb and making sure that we don’t find ourselves in a situation where one goes off unexpectedly. We’re still working out the details here, but at minimum we need to inventory everything of this nature.

Second, we need a mechanism to be able to quickly push updates to our users even when — especially when — everything else is down.  It was great that we are able to use the Studies system, but it was also an imperfect tool that we pressed into service, and that had some undesirable side effects. In particular, we know that many users have auto-updates enabled but would prefer not to participate in Studies and that’s a reasonable preference (true story: I had it off as well!) but at the same time we need to be able to push updates to our users; whatever the internal technical mechanisms, users should be able to opt-in to updates (including hot-fixes) but opt out of everything else. Additionally, the update channel should be more responsive than what we have today. Even on Monday, we still had some users who hadn’t picked up either the hotfix or the dot release, which clearly isn’t ideal. There’s been some work on this problem already, but this incident shows just how important it is.

Finally, we’ll be looking more generally at our add-on security architecture to make sure that it’s enforcing the right security properties at the least risk of breakage.

We’ll be following up next week with the results of a more thorough post-mortem, but in the meantime, I’ll be happy to answer questions by email at


[1] A few very old add-ons were signed with a different intermediate.

[2] Readers who are familiar with the WebPKI will recognize that this is also the way that cross-certification works.
[3] Technical note: we aren’t adding the certificate with any special privileges; it gets its authority by being signed by the root. We’re just adding it to the pool of certificates which can be used by Firefox. So, it’s not like we are adding a new privileged certificate to Firefox.

The post Technical Details on the Recent Firefox Add-on Outage appeared first on Mozilla Hacks - the Web developer blog.

Chris H-CGoogle I/O Extended 2019 – Report

I attended a Google I/O Extended event on Tuesday at Google’s Kitchener office. It’s a get-together where there are demos, talks, workshops, and networking opportunities centred around watching the keynote live on the screen.

I treat it as an opportunity to keep an eye on what they’re up to this time, and a reminder that I know absolutely no one in the tech scene around here.

The first part of the day was a workshop about how to build Actions for the Google Assistant. I found the exercise to be very interesting.

The writing of the Action itself wasn’t interesting, that was a bunch of whatever. But it was interesting that it refused to work unless you connected it to a Google Account that had Web & Search Activity tracking turned on. Also I found it interesting that, though they said it required Chrome, it worked just fine on Firefox. It was interesting listening to laptops (including mine) across the room belt out welcome phrases because the simulator defaults to a hot mic and a loud speaker. It was interesting to notice that the presenter spent thirty seconds talking about how to name your project, and zero seconds talking about the Terms of Use of the application we were being invited to use. It was interesting to see that the settings defaulted to allowing you to test on all devices registered to the Google Account, without asking.

After the workshop the tech head of the Google Home App stood up and delivered a talk about trying to get manufacturers to agree on how to talk to Google Home and the Google Assistant.

I asked whether these efforts to normalize APIs and protocols were leading them to publish a standard with a standards body. “No idea, sorry.”

Then I noticed the questions from the crowd were following a theme: “Can we get finer privacy controls?” (The answer seemed to be that Google believes the controls are already fine enough) “How do you educate users about the duration the data is retained?” (It’s in the Terms of Service, but it isn’t read aloud. But Google logs every “consent moment” and keeps track of settings) “For the GDPR was there a challenge operating in multiple countries?” (Yes. They admitted that some of the “fine enough” privacy controls are finer in certain jurisdictions due to regs.) And, after the keynote, someone in the crowd asked what features Android might adopt (self-destruct buttons, maybe) to protect against Border Security-style threats.

It was very heartening to hear a room full of tech nerds from Toronto and Waterloo Region ask questions about Privacy and Security of a tech giant. It was incredibly validating to hear from the keynote that Chrome is considering privacy protections Firefox introduced last year.

Maybe we at Mozilla aren’t crazy to think that privacy is important, that users care about it, that it is at risk and big tech companies have the power and the responsibility to protect it.

Maybe. Maybe not.

Just keep those questions coming.


Daniel StenbergSometimes I speak

I view myself primarily as a software developer. Perhaps secondarily as someone who’s somewhat knowledgeable in networking and participates in protocol development and discussions. I do not regularly proclaim myself to be a “speaker” or someone who’s even very good at talking in front of people.

Time to wake up and face reality? I’m slowly starting to realize that I’m actually doing more presentations than ever before in my life and I’m enjoying it.

Since October 2015 I’ve done 53 talks and presentations in front of audiences – in ten countries. That’s one presentation every 25 days on average. (The start date of this count is a little random; it just happens that I started to keep a proper log then.) I’ve talked to huge audiences and to small ones. I’ve done presentations that were appreciated and I’ve done some that were less successful.
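For the curious, that 25-day average is easy to sanity check (the end date is an assumption here, taken as early May 2019, around when this post appeared):

```python
from datetime import date

talks = 53
# Assumed span: the logging start in October 2015 through early May 2019.
span_days = (date(2019, 5, 9) - date(2015, 10, 1)).days
print(span_days / talks)  # about 24.8 days per talk
```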

<figcaption>The room for the JAX keynote, May 2019, as seen from the stage, some 20 minutes before 700 persons sat down in the audience to hear my talk on HTTP/3.</figcaption>

My increased frequency in speaking engagements coincides with me starting to work full-time from home back in 2014. Going to places to speak is one way to get out of the house and see the “real world” a little bit and see what the real people are doing. And a chance to hang out with humans for a change. Besides, I only ever talk on topics that are dear to me and that I know intimately well so I rarely feel pressure when delivering them. 2014 – 2015 was also the time frame when HTTP/2 was being finalized and the general curiosity on that new protocol version helped me find opportunities back then.

Public speaking is like most other things: surprisingly enough, practice actually makes you better at it! I still have a lot to learn and improve, but speaking many times has for example made me better at figuring out roughly how much time I need to deliver a particular talk. It has taught me to “find myself” better when presenting and to be more relaxed and the real me – no need to put up a facade of some kind or pretend. People like seeing that there’s a real person there.

<figcaption>I talked HTTP/2 at Techday by Init, in November 2016.</figcaption>

I’m not even getting that terribly nervous before my talks anymore. I used to really get a raised pulse for the first 45 talks or so, but by doing it over and over and over I think the practice has made me more secure and more relaxed in my attitude to the audience and the topics. I think it has made me a slightly better presenter and it certainly makes me enjoy it more.

I’m not “a good presenter”. I can deliver a talk and I can do it with dignity, and I think the audience is satisfied with me in most cases, but by watching actual good presenters talk I realize that I still have a long journey ahead of me. Of course, part of the explanation is that, to connect with the beginning of this post, I’m a developer. I don’t talk for a living and I very rarely practice my presentations much because I don’t feel I can spend that time.

<figcaption>The JAX keynote in May 2019 as seen from the audience. Photo by Bernd Ruecker.</figcaption>

Some of the things that are still difficult include:

The money issue. I actually am a developer and that’s what I do for a living. Taking time off development to prepare a presentation, traveling to a distant place, sacrificing my spare time for one or more days and communicating something interesting to an audience that demands and expects it to be both good and reasonably entertaining all takes time away from that development. Getting travel and accommodation compensated is awesome but unfortunately not enough. I need to insist on getting paid for this. I frequently turn down speaking opportunities when they can’t pay me for my time.

Saying no. Oh my god do I have a hard time doing this. This year, I’ve been invited to so many different conferences and the invitations keep flying in. For every single received invitation, I get this warm and comfy feeling and I feel honored and humbled by the fact that someone actually wants me to come to their conference or gathering to talk. There’s the calendar problem: I can’t be in two places at once. Then I also can’t plan events too close to each other in time, to avoid them holding up “real work” too much or becoming too much of a nuisance to my family. Sometimes there’s also the financial dilemma: if I can’t get compensation, it gets tricky for me to do it, no matter how good the conference seems to be and the noble cause they’re working for.

<figcaption>At SUE 2016 in the Netherlands.</figcaption>

Feedback. To determine which parts of a presentation should be improved for the next time I speak on the same or a similar topic, which parts should be removed and which expanded, figuring out what works and what doesn’t is vital. For most talks I’ve done, there’s been no formal way to provide or receive this feedback, and for the small percentage that had a formal feedback form or a scoring system or similar, making sense of a bunch of distributed grades (for example “your talk was graded 4.2 on a scale between 1 and 5”) and random comments – either positive or negative – is really hard… I get the best feedback from close friends who dare to tell me the truth as it is.

Conforming to silly formats. Slightly different, but some places want me to send my slides in, either a long time before the event (I’ve had people ask me to provide them way over a week(!) before), or they dictate that the slides be sent to them in Microsoft Powerpoint, PDF or some other silly format. I want to use my own preferred tools when designing presentations, as I need to be able to reuse the material for more and future presentations. Sure, I can convert to other formats, but that usually ruins formatting and design, and then a lot of the time and sweat I put into making a fine and good-looking presentation is more or less discarded! Fortunately, most places let me plug in my laptop and everything is fine!

Upcoming talks?

As a little service to potential audience members and conference organizers, I’m listing all my upcoming speaking engagements on a dedicated page on my web site:

I try to keep that page updated to reflect current reality. It also shows that some organizers are forward-planning waaaay in advance…

<figcaption>Here’s me talking about DNS-over-HTTPS at FOSDEM 2019. Photo by Steve Holme.</figcaption>

Invite someone like me to talk?

Here’s some advice on how to invite a speaker (like me) with style:

  1. Ask well in advance (more than 2-3 months preferably, probably not more than 9). When I agree to a talk, others who ask for talks in close proximity to that date will get declined. I get a surprisingly large amount of invitations for events just a month into the future or so, and it rarely works for me to get those into my calendar in that time frame.
  2. Do not assume for-free delivery. I think it is good form to address the price/charge situation, if not in the first contact email then at least in the following discussion. If you cannot pay, that’s also useful information to provide early.
  3. If the time or duration of the talk you’d like is “unusual” (i.e. not 30-60 minutes), do spell that out early on.
  4. Surprisingly often I get invited to talk without a specified topic or title, which the inviter then expects me to come up with. Since you contacted me, you clearly had some kind of vision of what a talk by me would entail; conveying that vision would make my life easier and could certainly help me produce a talk subject that will work!
<figcaption>Presenting HTTP/2 at the Velocity conference in New York, October 2015, together with Ragnar Lönn.</figcaption>

What I bring

To every presentation I do, I bring my laptop. It has HDMI and USB-C ports. I also carry an HDMI-to-VGA adapter for the few installations that still use the old “projector port”. Places that need something other than those ports tend to have their own converters already, since they’re used with equipment that isn’t fitted for their requirements.

I always bring my own clicker (the “remote” with which I can advance to next slide). I never use the laser-pointer feature, but I like being able to move around on the stage and not have to stand close to the keyboard when I present.


I never create my presentations with video or sound in them, and I don’t do presentations that need Internet access. All this to simplify and to reduce the risk of problems.

I work hard on limiting the amount of text on each slide, but I also acknowledge that if a slide set should have value after-the-fact there needs to be a certain amount. I’m a fan of revealing the text or graphics step-by-step on the slides to avoid having half the audience reading ahead on the slide and not listening.

I’ve settled on 16:9 ratio for all presentations. Luckily, the remaining 4:3 projectors are now scarce.

I always make and bring a backup of my presentations in PDF format so that basically “any” computer could display that in case of emergency. Like if my laptop dies. As mentioned above, PDF is not an ideal format, but as a backup it works.

<figcaption>I talked “web transport” in the Mozilla devroom at FOSDEM, February 2017 in front of this audience. Not a single empty seat…</figcaption>

Mike HommeyAnnouncing git-cinnabar 0.5.1

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0?

  • Updated git to 2.21.0 for the helper.
  • Experimental native mercurial support (used when mercurial libraries are not available) now has feature parity.
  • Try to read the git system config from the same place as git does. This fixes native HTTPS support with Git on Windows.
  • Avoid pushing more commits than necessary in some corner cases.
  • Added an --abbrev argument for git cinnabar {git2hg,hg2git} to display shortened sha1s.
  • Can now pass multiple revisions to git cinnabar fetch.
  • Don’t require the requests python module for git cinnabar download.
  • Fixed git cinnabar fsck file checks to actually report errors.
  • Properly return an error code from git cinnabar rollback.
  • Track last fsck’ed metadata and allow git cinnabar rollback --fsck to go back to last known good metadata directly.
  • git cinnabar reclone can now be rolled back.
  • Added support for git bundles as cinnabarclone source.
  • Added alternate styles of remote refs.
  • More resilient to interruptions when HTTP Range requests are supported.
  • Fixed off-by-one when storing mercurial heads.
  • Better handling of mercurial branchmap tips.
  • Better support for end of parts in bundle v2.
  • Improved handling of urls to local mercurial repositories.
  • Fixed compatibility with (very) old mercurial servers when using mercurial 5.0 libraries.
  • Converted Continuous Integration scripts to Python 3.

Mozilla ThunderbirdWeTransfer File Transfer Now Available in Thunderbird

WeTransfer’s file-sharing service is now available within Thunderbird for sending large files (up to 2GB) for free, without signing up for an account.

Even better, sharing large files can be done without leaving the composer. While writing an email, just attach a large file and you will be prompted to choose whether you want to use Filelink, which lets you share the file via a link to download it. Via this prompt you can select WeTransfer.

Filelink prompt in Thunderbird


You can also enable Filelink through the Preferences menu: under the Attachments tab, on the Outgoing page, click “Add…” and choose “WeTransfer” from the drop-down menu.

WeTransfer in Preferences

Once WeTransfer is set up in Thunderbird, it will be the default method of linking for files over the size that you have specified (you can see that it is set to 5MB in the screenshot above).
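Conceptually, the threshold check described above is as simple as it sounds; here is a minimal sketch (hypothetical function name, not Thunderbird's actual code):

```python
FILELINK_THRESHOLD_MB = 5  # the default shown in the screenshot above

def should_offer_filelink(attachment_bytes, threshold_mb=FILELINK_THRESHOLD_MB):
    """Offer to upload via a Filelink provider (e.g. WeTransfer) instead of
    attaching inline once the file exceeds the configured threshold."""
    return attachment_bytes > threshold_mb * 1024 * 1024

print(should_offer_filelink(2 * 1024 * 1024))    # False: small file, attach inline
print(should_offer_filelink(100 * 1024 * 1024))  # True: prompt to use WeTransfer
```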

WeTransfer and Thunderbird are both excited to be able to work together on this great feature for our users. The Thunderbird team thinks that this will really improve the experience of collaboration and sharing for our users.

WeTransfer is also proud of this feature. Travis Brown, WeTransfer VP of Business Development says about the collaboration:

“Mozilla and WeTransfer share similar values. We’re focused on the user and on maintaining our user’s privacy and an open internet. We’ll continue to work with their team across multiple areas and put privacy at the front of those initiatives.”

We hope that all our users will give this feature a try and enjoy being able to share the files they want with co-workers, friends, and family – easily.

QMOFirefox 67 Beta 16 Testday Results

Hello Mozillians!

As you may already know, last Friday, May 3rd, we held a new Testday event for Firefox 67 Beta 16.

Thank you all for helping us make Mozilla a better place: Rok Žerdin, Fernando Espinoza, Kamila Kamciatek.

Result: Several test cases were executed for: Track Changes M2 & WebExtensions compatibility & support.

Thanks for another awesome testday, we appreciate your contribution! 🙂

We hope to see you all in our next events, keep an eye on QMO. We will make announcements as soon as something shows up!

This Week In RustThis Week in Rust 285

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is select-rustc, a crate for conditional compilation according to rustc version. Thanks to ehsanmok for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

235 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

A compile_fail test that fails to fail to compile is also a failure.

David Tolnay in the try-build README

Llogiq is pretty self-congratulatory for picking this awesome quote.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Daniel Stenberglive-streamed curl development

As some of you already found out, I’ve tried live-streaming curl development recently. If you want to catch previous and upcoming episodes, subscribe on my twitch page.

Why stream

For the fun of it. I work alone from home most of the time and this is a way for me to interact with others.

To show what’s going on in curl right now. By streaming some of my development I also show what kind of work is being done, making it visible that a lot of development and effort is being put into curl, and I can share my thoughts and plans with a wider community. Perhaps this will help get more people to help out or tickle their imagination.

<figcaption>A screenshot from live stream #11 when parallel transfers with curl was shown off for the first time ever!</figcaption>

For the feedback and interaction. It is immediately notable that one of the biggest reasons I enjoy live-streaming is the chat with the audience and the instant feedback on mistakes I do or thoughts and plans I express. It becomes a back-and-forth and it is not at all just a one-way broadcast. The more my audience interact with me, the more fun I have! That’s also the reason I show the chat within the stream most of the time since parts of what I say and do are reactions and follow-ups to what happens there.

I can only hope I get even more feedback and comments as I get better at this and that people find out about what I’m doing here.

And really, by now I also think of it as a really concentrated and devoted hacking time. I can get a lot of things done during these streaming sessions! I’ll try to keep them going a while.


I decided to go with twitch simply because it is an established and known live-streaming platform. I didn’t do any deeper analyses or comparisons, but it seems to work fine for my purposes. I get a stream out with video and sound and people seem to be able to enjoy it.

As of this writing, there are 1645 people following me on twitch. Typical recent live-streams of mine have been watched by over a hundred simultaneous viewers. I also archive all past streams on Youtube, so you can get almost the same experience by watching back issues there.

I announce my upcoming streaming sessions as “events” on Twitch, and I announce them on twitter (@bagder you know). I try to stick to streaming during European daytime hours, basically because then I’m all alone at home and risk fewer interruptions or distractions from family members or similar.


It’s not as easy as it may look to write code or debug an issue while at the same time explaining what I do. I learnt that the sessions get better if I have real and meaty issues to deal with or features to add, rather than just a few light-weight things to polish.

I also quickly learned that it is better not to show an actual screen of mine in the stream; instead I show a crafted set of windows placed on the output to look like a screen. This way there’s a much smaller risk that I actually show off private stuff or other content that wasn’t meant for the audience to see. It also makes it easier to show a tidy, consistent and clear “desktop”.

Streaming also forces me to stay focused on the development and prevents me from drifting off to watch cats or read amusing tweets for a while.


So far we’ve been spared from the worst kind of behavior and people. We’ve only had some mild weirdos showing up in the chat and nothing that we couldn’t handle.

Equipment and software

I do all development on Linux so things have to work fine on Linux. Luckily, OBS Studio is a fine streaming app. With it, I can set up different “scenes” and change between them easily. Some of the scenes I have created are “emacs + term”, “browser” and “coffee break”.

When I want to show myself fiddling with the issues on github, I switch to the “browser” scene, which primarily shows a big browser window (and the chat and the webcam in smaller windows).

When I want to show code, I switch to “emacs + term” that instead shows a terminal and an emacs window (and again the chat and the webcam in smaller windows), and so on.

OBS has built-in support for some of the major streaming services, including twitch, so it’s just a matter of pasting a key into an input field, pressing ‘start streaming’ and off you go!

The rest of the software is the stuff I normally use anyway for developing. I don’t fake anything and I don’t make anything up. I use emacs, make, terminals, gdb etc. All of this runs on my primary desktop Debian Linux machine, which has 32GB of ram and an older i7-3770K CPU at 3.50GHz with a dual screen setup. The video of me is captured with a basic Logitech C270 webcam, and the sound of my voice and the keyboard is picked up with my Sennheiser PC8 headset.

Some viewers have asked me about my keyboard, which you can hear. It is a FUNC-460 that is now approaching 5 years of use, and I know for a fact that I press nearly 7 million keys per year.


In a reddit post about my live-streaming, user ‘digitalsin’ suggested “Maybe don’t slurp RIGHT INTO THE FUCKING MIC”.

How else am I supposed to have my coffee while developing?

<figcaption>This is my home office standard setup. On the left is my video conference laptop and on the right is my regular work laptop. The two screens in the middle are connected to the desktop computer.</figcaption>

Matthew NoorenberghePassword Manager Improvements in Firefox 67

There have been many improvements to the password manager in Firefox, and some of them may take a while to be noticed, so I thought I would highlight some of the user-facing ones in version 67:

Credit for the fixes goes to Jared Wein, Sam Foster, Prathiksha Guruprasad, and myself. The full list of password manager improvements in Firefox 67 can be found on Bugzilla and there are many more to come in Firefox 68 so stay tuned…

  1. Due to interactions with the Master Password dialog, this change doesn't apply if a Master Password is enabled

The Mozilla BlogThe Firefox EU Elections Toolkit helps you to prevent pre-vote online manipulation

What comes to your mind when you hear the term ‘online manipulation’? In the run-up to the EU parliamentary elections at the end of May, you probably think first and foremost of disinformation. But what about technical ways to manipulate voters on the internet? Although they are becoming more and more popular because they are so difficult to recognize and therefore particularly successful, they probably don’t come to mind first. Quite simply because they have not received much public attention so far. Firefox tackles this issue today: The ‘Firefox EU Election Toolkit’ not only provides important background knowledge and tips – designed to be easily understood by non-techies – but also tools to enable independent online research and decision-making.

Manipulation on the web: ‘fake news’ isn’t the main issue (anymore)

Few other topics have been so present in public perception in recent years, so comprehensively discussed in everyday life, news and science, and yet have been demystified as little as disinformation. Also commonly referred to as ‘fake news’, it’s defined as “deliberate disinformation or hoaxes spread via traditional print and broadcast news media or online social media.” Right now, so shortly before the next big elections at the end of May, the topic seems to be bubbling up once more: According to the European Commission’s Eurobarometer, 73 percent of Internet users in Europe are concerned about disinformation in the run-up to the EU parliamentary elections.

However, research also proves: The public debate about disinformation takes place in great detail, which significantly increases awareness of the ‘threat’. The fact that more and more initiatives against disinformation and fact-checking actors have been sprouting up for some time now – and that governments are getting involved, too – may be related to the zeitgeist or connected to individuals’ impression that they are constantly confronted with ‘fake news’ and cannot protect themselves on their own.

It’s important to take action against disinformation. Also, users who research the elections and potential candidates on the Internet, for example, should definitely stay critical and cautious. After all, clumsy disinformation campaigns are still taking place, revealing some of the downsides of a global, always available Internet; and they even come with a wide reach and rapid dissemination. Countless actors, including journalists, scientists and other experts now agree that the impact of disinformation is extremely limited and traditional news is still the primary and reliable source of information. This does not, however, mean that the risk of manipulation has gone away; in fact, we must make sure to stay alert and not close our eyes to new, equally problematic forms of manipulation, which have just been less present in the media and science so far. At Firefox we understand that this may require some support – and we’re happy to provide it today.

A toolkit for well-informed voters

Tracking has recently been a topic of discussion in the context of intrusive advertising, big data and GDPR. To refresh your memory: when browsing from site to site, users’ personal information may be collected through scripts or widgets on the websites. These are called trackers. Many people don’t like that user information collected through trackers is used for advertising, often without their knowledge (find more info here). But there’s another issue far fewer people are aware of and which hasn’t been widely discussed so far: user data can also be used for manipulation attempts, micro-targeted at specific groups or individuals. We believe that this needs to change – and in order to make that happen, more people need to hear about it.
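To illustrate what blocking trackers means in practice, here is a toy sketch of the core check a tracking-protection tool performs (the domain list below is invented for illustration; real protection relies on curated lists such as Disconnect’s):

```python
from urllib.parse import urlparse

# Hypothetical blocklist; real tracking protection ships curated lists.
TRACKER_DOMAINS = {"tracker.example", "ads.example"}

def is_tracker(url):
    """True if the request's host is a listed tracker domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS)

print(is_tracker("https://pixel.tracker.example/collect?uid=42"))  # True: blocked
print(is_tracker("https://www.mozilla.org/"))                      # False: loads normally
```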

Firefox is committed to an open and free Internet that provides access to independent information to everyone. That’s why we’ve created the ‘Firefox EU Elections Toolkit’: a website where users can find out how tracking and opaque online advertising influence their voting behavior and how they can easily protect themselves – through browser add-ons and other tools. Additionally, disinformation and the voting process are well represented on the site. The toolkit is now available online in English, German and French. No previous technical or policy-related knowledge is required. Among other things, the toolkit contains:

  • background information on how tracking, opaque election advertising and other questionable online activities affect people on the web, including a short, easy-to-digest video.
  • selected information about the EU elections as well as the EU as an institution – only using trustworthy sources.
  • browser extensions, checked on and recommended by Firefox, that support independent research and opinion making.

Make an independent choice when it matters the most

Of course, manipulation on the web is not only relevant in times of major political votes. With the forthcoming parliamentary elections, however, we find ourselves in an exceptional situation that calls for practical measures also because there might be greater interest in the election, the programmes, parties and candidates than in recent years: More and more EU citizens are realizing how important the five-yearly parliamentary election is; the demands on parliamentarians are rising; and last but not least, there are numerous new voters again this May for whom Internet issues play an important role, but who need to find out about the election, its background and consequences.

Firefox wants to make sure that everyone has the chance to make informed choices. That detailed technical knowledge is not mandatory for getting independent information. And that the internet with all of its many advantages and (almost) unlimited possibilities is open and available to everyone, independent from demographics. Firefox fights for you.

The post The Firefox EU Elections Toolkit helps you to prevent pre-vote online manipulation appeared first on The Mozilla Blog.

Mozilla Addons BlogAdd-ons disabled or failing to install in Firefox

Incident summary

Updates – Last updated 14:35 PST May 14, 2019. We expect this to be our final update.

  • If you are running Firefox versions 61 – 65 and 1) did not receive the deployed fix and 2) do not want to update to the current version (which includes the permanent fix): Install this extension to resolve the expired security certificate issue and re-enable extensions and themes.
  • If you are running Firefox versions 57 – 60: Install this extension to resolve the expired security certificate issue and re-enable extensions and themes.
  • If you are running Firefox versions 47 – 56: install this extension to resolve the expired security certificate issue and re-enable extensions and themes.
  • A less technical blog post about the outage is also available. If you enabled telemetry to get the initial fix, we’re deleting all data collected since May 4. (May 9, 17:04 EDT)
  • Mozilla CTO Eric Rescorla posted a blog on the technical details of what went wrong last weekend. (May 9, 16:20 EDT)
  • We’ve released Firefox 66.0.5 for Desktop and Android, and Firefox ESR 60.6.3, which include the permanent fix for re-enabling add-ons that were disabled starting on May 3rd. The initial, temporary fix that was deployed May 4th through the Studies system is replaced by these updates, and we recommend updating as soon as possible. Users who enabled Studies to receive the temporary fix, and have updated to the permanent fix, can now disable Studies if they desire. For users who cannot update to the latest version of Firefox or Firefox ESR, we plan to distribute an update that automatically applies the fix to versions 52 through 60. This fix will also be available as a user-installable extension. For anyone still experiencing issues in versions 61 through 65, we plan to distribute a fix through a user-installable extension. These extensions will not require users to enable Studies, and we’ll provide an update when they are available. (May 8, 19:28 EDT)
  • Firefox 66.0.5 has been released, and we recommend that people update to that version if they continue to experience problems with extensions being disabled. You’ll get an update notification within 24 hours, or you can initiate an update manually. An update to ESR 60.6.3 is also available as of 16:00 UTC May 8th. We’re continuing to work on a fix for older versions of Firefox, and will update this post and on social media as we have more information. (May 8, 11:51 EDT)
  • A Firefox release has been pushed — version 66.0.4 on Desktop and Android, and version 60.6.2 for ESR. This release repairs the certificate chain to re-enable web extensions, themes, search engines, and language packs that had been disabled (Bug 1549061). There are remaining issues that we are actively working to resolve, but we wanted to get this fix out before Monday to lessen the impact of disabled add-ons before the start of the week. More information about the remaining issues can be found by clicking on the links to the release notes above. (May 5, 16:25 EDT)
  • Some users are reporting that they do not have the “hotfix-update-xpi-signing-intermediate-bug-1548973” study active in “about:studies”. Rather than using work-arounds, which can lead to issues later on, we strongly recommend that you continue to wait. If it’s possible for you to receive the hotfix, you should get it by 6am EDT, 24 hours after it was first released. For everyone else, we are working to ship a more permanent solution. (May 5, 00:54 EDT)
  • There are a number of work-arounds being discussed in the community. These are not recommended as they may conflict with fixes we are deploying. We’ll let you know when further updates are available that we recommend, and appreciate your patience. (May 4, 15:01 EDT)
  • Temporarily disabled commenting on this post given volume and duplication. They’ll be re-enabled as more updates become available. (May 4, 13:02 EDT)
  • Updated the post to clarify that deleting extensions can result in data loss, and should not be used to attempt a fix. (May 4, 12:58 EDT)
  • Clarified that the study may appear in either the Active studies or Completed studies of “about:studies” (May 4, 12:10 EDT)
  • We’re aware that some users are reporting that their extensions remain disabled with both studies active. We’re tracking this issue on Bugzilla in bug 1549078. (May 4, 12:03 EDT)
  • Clarified that the Studies fix applies only to Desktop users of Firefox distributed by Mozilla. Firefox ESR, Firefox for Android, and some versions of Firefox included with Linux distributions will require separate updates. (May 4, 12:03 EDT)

Late on Friday May 3rd, we became aware of an issue with Firefox that prevented existing and new add-ons from running or being installed. We are very sorry for the inconvenience caused to people who use Firefox.

Our team identified and rolled out a temporary fix for all Firefox Desktop users on Release, Beta and Nightly. The fix will be automatically applied in the background within 24 hours. No active steps need to be taken to make add-ons work again. In particular, please do not delete and/or re-install any add-ons as an attempt to fix the issue. Deleting an add-on removes any data associated with it, whereas disabling and re-enabling does not.

Please note: The fix does not apply to Firefox ESR or Firefox for Android. We’re working on releasing a fix for both, and will provide updates here and on social media.

To provide this fix on short notice, we are using the Studies system. This system is enabled by default, and no action is needed unless Studies have been disabled. Firefox users can check if they have Studies enabled by going to:

  • Firefox Options/Preferences -> Privacy & Security -> Allow Firefox to install and run studies (scroll down to find the setting)

  • Studies can be disabled again after the add-ons have been re-enabled

It may take up to six hours for the Study to be applied to Firefox. To check if the fix has been applied, you can enter “about:studies” in the location bar. If the fix is active, you’ll see “hotfix-update-xpi-signing-intermediate-bug-1548973” in either the Active studies or Completed studies section.

You may also see “hotfix-reset-xpi-verification-timestamp-1548973” listed, which is part of the fix and may be in the Active studies or Completed studies section(s).

We are working on a general fix that doesn’t use the Studies system and will keep this blog post updated accordingly. We will share a more substantial update in the coming days.

Additional sources of information:

The post Add-ons disabled or failing to install in Firefox appeared first on Mozilla Add-ons Blog.

Cameron KaiserTenFourFox not affected by the addon apocalypse

Tonight's Firefox add-on apocalypse, traced to a mistakenly expired intermediate signing certificate, is currently roiling Firefox users worldwide. It bit me on my Talos II, which really cheesed me off because it tanked all my carefully constructed site containers. (And that's an official Mozilla addon!)

This brief post is just to reassure you that TenFourFox is unaffected -- I disagreed with signature enforcement on add-ons from the beginning and explicitly disabled it.

Mike HoyeGoals And Constraints

This way to art.

I keep coming back to this:

“Open” in this context inextricably ties source control to individual agency. The checks and balances of openness in this context are about standards, data formats, and the ability to export or migrate your data away from sites or services that threaten to go bad or go dark. This view has very little to say about – and is often hostile to the idea of – granular access restrictions and the ability to impose them, those being the tools of this worldview’s bad actors.

The blind spots of this worldview are the products of a time where someone on the inside could comfortably pretend that all the other systems that had granted them the freedom to modify this software simply didn’t exist. Those access controls were handled, invisibly, elsewhere; university admission, corporate hiring practices or geography being just a few examples of the many, many barriers between the network and the average person.

And when we’re talking about blind spots and invisible social access controls, of course, what we’re really talking about is privilege.

How many people get to have this, I wonder: the sense that they can sit down in front of a computer and be empowered by it. The feeling of being able, the certainty that you are able to look at a hard problem, think about it, test and iterate; that easy rapid prototyping with familiar tools is right there in your hands, that a toolbox the size of the world is within reach. That this isn’t some child’s wind up toy I turn a crank on until the powerpoint clown pops up.

It’s not a universal or uniform experience, to be sure; they’re machines made of other people’s choices, and computers are gonna computer. But the only reason I get to have that feeling at all is that I got my start when the unix command line was the only decent option around, and I got to put the better part of a decade grooving in that muscle memory on machines and forums where it was safe – for me at least – to be there, fully present, make mistakes and learn from them.

(Big shoutout to everyone out there who found out how bash wildcards work by inadvertently typing mv * in a directory with only two files in it.)

That world doesn’t exist anymore; the internet that birthed it isn’t coming back. But I want everyone to have this feeling, that the machine is more than a glossy appliance. That it’s not a constraint. That with patience and tenacity it can work with you and for you, not just a tool for a task but an extension and expression of ourselves and our intent. That a computer can be a tool for expressing ourselves, for helping us be ourselves better.

Last week I laid out the broad strokes of Mozilla’s requirements for our next synchronous-text platform. They were pretty straightforward, but I want to thank a number of people from different projects who’ve gotten in touch on IRC or email to ask questions and offer their feedback.

Right now I’d like to lay out those requirements in more detail, and talk about some of the reasons behind them. Later I’m going to lay out the process and the options we’re looking at, and how we’re going to gather information, test those options and evaluate what we learn.

While the Rust community is making their own choices now about the best fit for their needs, the Rust community’s processes are going to strongly inform the steps for Mozilla. They’ve learned a lot the hard way about consensus-building and community decision-making, and it’s work that I have both a great deal of respect for and no intention of re-learning the hard way myself. I’ll have more about that shortly as well.

I mentioned our list of requirements last week but I want to drill into some of them here; in particular:

  • It needs to be accessible to the greater Mozilla community.

This one implies a lot more than it states, and it would be pretty easy to lay out something trite like “we think holistically about accessibility” the way some organizations say “a diversity of ideas”, as though that means anything at all. But that’s just not good enough.

Diversity, accessibility and community are all tightly interwoven ideas we prize, and how we approach, evaluate and deploy the technologies that connect us speaks deeply to our intentions and values as an organization. Mozilla values all the participants in the project, whether they rely on a screen reader, a slow network or older hardware; we won’t – we can’t – pick a stack that treats anyone like second-class citizens. That will not be allowed.

  • While we’re investigating options for semi-anonymous or pseudonymous connections, we will require authentication, because:
  • The Mozilla Community Participation Guidelines will apply, and they’ll be enforced.

Last week Dave Humphrey wrote up a reminiscence about his time on IRC soon after I made the announcement. Read the whole thing, for sure. Dave is wiser and kinder than I am, and has been for as long as we’ve known each other; his post spoke deeply to many of us who’ve been in and around Mozilla for a while, and two sentences near the end are particularly important:

“Having a way to get deeply engaged with a community is important, especially one as large as Mozilla. Whatever product or tool gets chosen, it needs to allow people to join without being invited.”

We’ve got a more detailed list of functional and organizational requirements for this project, and this is an important part of it: “New users must be able to join the service without manual intervention from a Mozilla employee.”

We’ve understood this as an accessibility issue for a long time as well, though I don’t think we’ve ever given it a name. “Involvement friction”, maybe – everything about becoming part of a project and community that’s hard not because it’s inherently difficult, but because nobody’s taken the time to make it easy.

I spend a lot of time thinking about something Sid Wolinsky said about the first elevators installed in the New York subway system: “This elevator is a gift from the disability community and the ADA to the nondisabled people of New York”. If you watch who’s using the elevators, ramps or automatic doors in any public building long enough, anything with a wheelchair logo on it, you’ll notice a trend: it’s never somebody in a wheelchair. It’s somebody pushing a stroller or nursing a limp. It’s somebody carrying an awkward parcel, or a bag of groceries. Sometimes it’s somebody with a coffee in one hand and a phone in the other. Sometimes it’s somebody with no reason at all, at least not one you can see. It’s people who want whatever thing they’re doing, however difficult, to be a little bit easier. It’s everybody.

If you cost out accessible technology for the people who rely on it, it looks really expensive; if you cost it out for everyone who benefits from it, though, it’s basically free. And none of us in the “benefit” camp are ever further than a sprained ankle away from “rely”.

We’re getting better at this at Mozilla in hundreds of different ways, at recognizing how important it is that the experience of getting from “I want to help” to “I’m set up to help” to “I’m helping” be as simple and painless as possible. As one example, our bootstrap scripts and mach build have reduced our once-brittle, failure-prone developer setup process down to “answer these questions and wait for the downloads to finish”, and in the process have done more to make the Firefox codebase accessible than I ever will. And everyone relies on them now, first-touch contributors and veteran devs alike.

Getting involved in the community, though, is still harder than it needs to be; try watching somebody new to open source development try to join an IRC channel sometime. Watch them go from “what’s IRC” to finding a client to learning how to use the client to joining the right server, then the right channel, only to find that the reward for all that effort is no backscroll, no context, and no idea who you’re talking to or if you’re in the right place or if you’re shouting into the void because the people you’re looking for aren’t logged in at the same time. It’s like asking somebody to learn to operate an airlock on their own so they can toss themselves out of it.

It’s more than obvious that you don’t build products like that anymore, but I think it’s underappreciated that it’s just as true of communities. I think it’s critical that we bring that same discipline of caring about the details of the experience to our communications channels and community forums, and the CPG is the cornerstone of that effort.

It was easy not to care about this when somebody who wanted to contribute to an open source project with global impact had maybe four choices: the Linux kernel, the Mozilla suite, the GNU tools and maybe Apache. But that world was pre-Github, pre-NPM. If you want to work on hard problems with global impact now you have a hundred thousand options, and that means the experience of joining and becoming a part of the Mozilla community matters.

In short, the amount of effort a project puts into making the path from “I want to help” to “I’m helping” easier is a reliable indicator of the value that project puts on community involvement. So if we say we value our community, we need to treat community involvement and contribution like a product, with all the usability and accessibility concerns that implies. To drive involvement friction as close to zero as possible.

One tool we’ll be relying on – and this one, we did build in-house – is called Mozilla-IAM, Mozilla’s Identity and Access Management tool. I’ll have more to say about this soon, but at its core it lets us proxy authentication from various sources and methods we trust, Github, Firefox Accounts, a link in your email, a few others. We think IAM will let us support pseudonymous participation and a low-cost first-contact experience, but also let us keep our house in order and uphold the CPG in the process.

Anyway, here’s a few more bullet points; what requirements doc isn’t full of them?

A synchronous messaging system that meets our needs:

  • Must work correctly in unmodified, release-channel Firefox.
  • Must offer a solid mobile experience.
  • Must support thousands of simultaneous users across the service.
  • Must support easy sharing of hyperlinks and graphics as well as text.
  • Must have persistent scrollback. Users reconnecting to a channel or joining the channel for the first time must be able to read up to acquire context of the current conversation in the backscroll.
  • Programmatic access is a hard requirement. The service must support a mature, reasonably stable and feature-rich API.
  • As mentioned, people participating via accessible technologies including screen readers or high-contrast display modes must be able to participate as first-class citizens of the service and the project.
  • New users must be able to join the service without manual intervention from a Mozilla employee.
  • Whether or not we are self-hosting, the service must allow Mozilla to specify a data retention and security policy that meets our institutional standards.
  • The service must have a customizable first-contact experience to inform new participants about Mozilla’s CPG and privacy notice.
  • The service must have effective administrative tooling including user and channel management, alerting and banning.
  • The service must support delegated authentication.
  • The service must pass an evaluation by our legal, trust and security teams. This is obviously also non-negotiable.

I doubt any of that will surprise anyone, but they might, and I’m keeping an eye out for questions. We’re still talking this out in #synchronicity on irc.m.o, and you’re welcome to jump in.

I suppose I should tip my hand at this point, and say that as much as I value the source part of open source, I also believe that people participating in open source communities deserve to be free not only to change the code and build the future, but to be free from the brand of arbitrary, mechanized harassment that thrives on unaccountable infrastructure, federated or not. We’d be deluding ourselves if we called systems that are just too dangerous for some people to participate in at all “open” just because you can clone the source and stand up your own copy. And I am absolutely certain that if this free software revolution of ours ends up in a place where asking somebody to participate in open development is indistinguishable from asking them to walk home at night alone, then we’re done. People cannot be equal participants in environments where they are subject to wildly unequal risk. People cannot be equal participants in environments where they are unequally threatened.

I think we can get there; I think we can meet our obligations to the Mission and the Manifesto as well as the needs of our community, and help the community grow and thrive in a way that grows and strengthens the web we want, and empowers everyone using and building it to be who we’re aspiring to be, better.

The next steps are going to be to lay out the evaluation process in more detail; then we can start pulling in information, stand up instances of the candidate stacks we’re looking at and trying them out.

The Firefox FrontierHow to research smarter, not harder with 10 tools on Firefox

Whether you’re in school or working on a project, knowing how to research is an essential skill. However, understanding how to do something and doing it smarter are two different … Read more

The post How to research smarter, not harder with 10 tools on Firefox appeared first on The Firefox Frontier.

Mozilla Addons BlogAdd-on Policy and Process Updates

As part of our ongoing work to make add-ons safer for Firefox users, we are updating our Add-on Policy to help us respond faster to reports of malicious extensions. The following is a summary of the changes, which will go into effect on June 10, 2019.

  • We will no longer accept extensions that contain obfuscated code. We will continue to allow minified, concatenated, or otherwise machine-generated code as long as the source code is included. If your extension is using obfuscated code, it is essential to submit a new version by June 10th that removes it to avoid having it rejected or blocked.

We will also be clarifying our blocking process. Add-on or extension blocking (sometimes referred to as “blocklisting”), is a method for disabling extensions or other third-party software that has already been installed by Firefox users.

  • We will be blocking extensions more proactively if they are found to be in violation of our policies. We will be casting a wider net, and will err on the side of user security when determining whether or not to block.
  • We will continue to block extensions for intentionally violating our policies, critical security vulnerabilities, and will also act on extensions compromising user privacy or circumventing user consent or control.

You can preview the policy and blocking process documents and ensure your extensions abide by them to avoid any disruption. If you have questions about these updated policies or would like to provide feedback, please post to this forum thread.


May 4, 2019 9:09 AM PST update: A certificate expired yesterday and has caused add-ons to stop working or fail to install. This is unrelated to the policy changes. We will be providing updates about the certificate issue in other posts on this blog.

9:55 am PST: Because a lot of comments on this post are related to the certificate issue, we are temporarily turning off comments for this post. 

The post Add-on Policy and Process Updates appeared first on Mozilla Add-ons Blog.

Will Kahn-GreeneSocorro: April 2019 happenings


Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the crash reporter collects data about the crash, generates a crash report, and submits that report to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

This blog post summarizes Socorro activities in April.

Read more… (6 min remaining to read)

Axel HechtMigrate to Fluent


A couple of weeks ago the Localization Team at Mozilla released the Fluent Syntax specification. As mentioned in our announcement, we already have over 3000 Fluent strings in Firefox. You might wonder how we introduced Fluent to a running project. In this post I’ll detail how the design of Fluent plays into that effort, and how we pulled it off.

Fluent’s Design for Simplicity

Fluent abstracts away the complexities of human languages from programmers. At the same time, Fluent makes easy things easy for localizers, while making complex things possible.

When you migrate a project to Fluent, you build on both of those design principles. You will simplify your code, and move the string choices from your program into the Fluent files. Only then will you expose Fluent to localizers, to actually take advantage of the capabilities of Fluent and to perfect the localizations of your project.

Fluent’s Layered Design

When building runtime implementations, we created several layers to tightly own particular tasks.

  1. Fluent source files are parsed into Resources.
  2. Multiple resources are aggregated in Bundles, which expose APIs to resolve single strings. Message and Term references resolve inside Bundles, but not necessarily inside Resources. A Bundle is associated with a single language, as well as fallback languages for i18n libraries.
  3. Language negotiation and language fallback happen in the Localization level. Here you’d implement that someone looking for Frisian would get a Frisian string. If that’s missing or has a runtime problem, you might want to try Dutch, and then English.
  4. Bindings use the Localization API, and integrate it into the development stack. They marshal data models from the programming language into Fluent data models like strings, numbers, and dates. Declarative bindings also apply the localizations to the rendered UI.
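
To make the layering concrete, here is a deliberately tiny Python sketch of those three layers (a toy illustration, not the real fluent.runtime API; real Bundles also resolve references, plurals, and formatting):

```python
def parse_resource(source):
    """Parse 'id = value' lines into a dict (a stand-in for a parsed Resource)."""
    messages = {}
    for line in source.strip().splitlines():
        ident, _, value = line.partition("=")
        messages[ident.strip()] = value.strip()
    return messages

class Bundle:
    """Aggregates resources for a single language and resolves single strings."""
    def __init__(self, locale):
        self.locale = locale
        self.messages = {}

    def add_resource(self, resource):
        self.messages.update(resource)

    def format(self, ident):
        return self.messages.get(ident)

class Localization:
    """Implements language fallback across an ordered list of bundles."""
    def __init__(self, bundles):
        self.bundles = bundles  # requested language first, fallbacks after

    def format(self, ident):
        for bundle in self.bundles:
            value = bundle.format(ident)
            if value is not None:
                return value
        return ident  # last resort: show the identifier itself

# Someone looking for Frisian gets Frisian; missing strings fall back to Dutch.
fy = Bundle("fy")
fy.add_resource(parse_resource("hello = Hello (fy)"))
nl = Bundle("nl")
nl.add_resource(parse_resource("hello = Hello (nl)\nbye = Bye (nl)"))
l10n = Localization([fy, nl])
print(l10n.format("hello"))  # Hello (fy)
print(l10n.format("bye"))    # Bye (nl)
```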

Invest in Bindings

Bindings integrate Fluent into your development workflow. For Firefox, we focused on bindings to generate localized DOM. We also have bindings for React. These bindings determine how fluent Fluent feels to developers, but also how much Fluent can help with handling the localized return values. To give an example, integrating Fluent into Android app development would probably focus on a LayoutInflater. In the bindings we use at Mozilla, we decided to localize as close to the actual display of the strings as possible.

If you have declarative UI generation, you want to look into a declarative binding for Fluent. If your UI is generated programmatically, you want a programmatic binding.

The Localization classes also integrate IO into your application runtime, and making the right choices here has a strong impact on performance characteristics: not just speed, but also whether untranslated strings are briefly shown.

Migrate your Code

Migrating your code will often be a trivial change from one API to another. Most of your code will get a string and show it, after all. You might convert several different APIs into just one in Fluent, in particular dedicated plural APIs will go away.

You will also move platform-specific terminology into the localization side, removing conditional code. You should also be able to stop stitching several localized strings together in your application logic.

As we’ll go through the process here, I’ll show an example of a sentence with a link. The project wants to be really sure the link isn’t broken, so it’s not exposed to localizers at all. This is shortened from an actual example in Firefox, where we link to our privacy policy. We’ll convert to DOM overlays, to separate localizable and non-localizable aspects of the DOM in Fluent. Let’s just look at the HTML code snippet now, and look at the localizations later.


Before:

<li>&msg-start;<a href="">&msg-middle;</a>&msg-end;</li>

After:

<li data-l10n-id="msg"><a href="" data-l10n-name="msg-link"></a></li>

Migrate your Localizations

Last but not least, we’ll want to migrate the localizations. While migrating code is work, losing all your existing localizations is just outright a bad idea.

For our work on Firefox, we use a Python package named fluent.migrations. It’s building on top of the fluent.syntax package, and programmatically creates Fluent files from existing localizations.

It allows you to copy and paste existing localizations into a Fluent string for the most simple cases. It also concatenates several strings into a single result, which you used to do in your code. For these very simple cases, it even uses Fluent syntax, with specialized global functions to copy strings.


msg = {COPY(from_path,"msg-start")}<a data-l10n-name="msg-link">{COPY(from_path,"msg-middle")}</a>{COPY(from_path,"msg-end")}

Then there are a bit more complicated tasks, notably involving variable references. Fluent only supports its built-in variable placement, so you need to migrate away from printf and friends. That involves first normalizing the various ways that a printf parameter can be formatted and placed, and then the code can do a simple replacement of text like %2$S with a Fluent variable reference like { $user-name }.
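
As an illustration of that replacement step, a minimal Python sketch (the variable-name mapping is hypothetical, and the real code in fluent.migrations is considerably more thorough):

```python
import re

def printf_to_fluent(text, names):
    """Replace printf-style %S / %1$S placeholders with Fluent variable
    references, using `names` to map 1-based argument positions to
    Fluent variable names."""
    position = 0

    def repl(match):
        nonlocal position
        if match.group(1):            # positional form, e.g. %2$S
            index = int(match.group(1))
        else:                         # bare %S: consume the next slot
            position += 1
            index = position
        return "{ $" + names[index] + " }"

    return re.sub(r"%(?:(\d+)\$)?S", repl, text)

print(printf_to_fluent("Signed in as %1$S on %2$S",
                       {1: "user-name", 2: "host"}))
# Signed in as { $user-name } on { $host }
```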

We also have logic to read our Mozilla-specific plural logic from legacy files, and to write them out as select-expressions in Fluent, with a variant for each plural form.
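
For illustration, a hand-written example of what such a select-expression looks like in Fluent syntax (the message name and wording are hypothetical):

```fluent
emails = { $count ->
    [one] You have one new email.
   *[other] You have { $count } new emails.
}
```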

These transforms are implemented as pseudo nodes in a template AST, which is then evaluated against the legacy translations to create an actual AST, which can then be serialized.

Concluding our example, before:

<!ENTITY msg-start "This is a link to an ">
<!ENTITY msg-middle "example">
<!ENTITY msg-end ".">

And after:

msg = This is a link to an <a data-l10n-name="msg-link">example</a> site.

Find out more about this package and its capabilities in the documentation.

Given that we’re OpenSource, we also want to carry over attribution. Thus our code not only migrates all the data, but also splits the migration into individual commits, one for each author of the migrated translations.

Once the baseline is migrated, localizers can dive in and improve. They can then start using parameterized Terms to adjust grammar, for example. Or add a plural form where English didn’t need one. Or introduce a platform-specific terminology that only exists in their language.

Mozilla Addons BlogMay’s featured extensions

Pick of the Month: Google Translator for Firefox

by nobzol
Sleek translation tool. Just highlight text, hit the toolbar icon and your translation appears right there on the web page itself. You can translate selected text (up to 1100 characters) or the entire page.

Bonus feature: the context menu presents an option to search your highlighted word or phrase on Wikipedia.

“Very easy to use, correct translation of all texts.”

Featured: Google Container

by Perflyst
Isolate your Google identity into a container. Make it difficult for Google to track your moves around the web.

(NOTE: Though similarly titled to Mozilla’s Facebook Container and Multi-Account Containers, this extension is not affiliated with Mozilla.)

“Thanks a lot for making this. Works great! I’m only sorry I did not find this extension sooner.”

The post May’s featured extensions appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.OrgOwning it: browser compatibility data and open source governance

What does it mean to “own” an open-source project? With the browser-compat-data project (“BCD”), the MDN (Mozilla Developer Network) community and I recently had the opportunity to find out.

In 2017, the MDN Web Docs team invited me to work on what was described to me as a small, but growing project (previously on Hacks). The little project had a big goal: to provide detailed and reliable structured data about what Web platform features are supported by different browsers. It sounded ambitious, but my part was narrow: convert hand-written HTML compatibility tables on MDN into structured JSON data.

As a technical writer and consultant, it was an unusual project to get to work on. Ordinarily, I look at data and code and use them to write words for people. For BCD, I worked in the opposite direction: reading what people wrote and turning it into structured data for machines. But I think I was most excited at the prospect of working on an open source project with a lot of reach, something I’d never done before.

Plus the project appealed to my sense of order and tidiness. Back then, most of the compatibility tables looked something like this:

A screenshot of a cluttered, inconsistent table of browser support for the CSS linear-gradient feature

In their inconsistent state, they couldn’t be updated in bulk and couldn’t be redesigned without modifying thousands upon thousands of pages on MDN. Instead, we worked to liberate the data in the tables to a structured, validated JSON format that we could publish in an npm package. With this change, new tables could be generated and other projects could use the data too.
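
For a sense of the shape of the data, a single feature entry in the published package looks roughly like this (the field names follow the BCD schema; the version numbers here are illustrative, not authoritative):

```json
{
  "css": {
    "types": {
      "linear-gradient": {
        "__compat": {
          "support": {
            "chrome":  { "version_added": "26" },
            "firefox": { "version_added": "16" }
          },
          "status": {
            "experimental": false,
            "standard_track": true,
            "deprecated": false
          }
        }
      }
    }
  }
}
```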

A screenshot of a tidy, organized table of browser support for the CSS linear-gradient feature

Since then, the project has grown considerably. If there was a single inflection point, it was the Hack on MDN event in Paris, where we met in early 2018 to migrate more tables, build new tools, and play with the data. In the last year and a half, we’ve accomplished so many things, including replacing the last of the legacy tables on MDN with shiny, new BCD-derived tables, and seeing our data used in Visual Studio Code.

Building a project to last

We couldn’t have built BCD into what it is now without the help of the hundreds of new contributors that have joined the project. But some challenges have come along with that growth. My duties shifted from copying data into the repository to reviewing others’ contributions, learning about the design of the schema, and hacking on supporting tools. I had to learn so much about being a thoughtful, helpful guide for new and established contributors alike. But the increased size of the project also put new demands on the project as a whole.

Florian Scholz, the project leader, took on answering a question key to the long-term sustainability of the project: how do we make sure that contributors can be more than mere inputs, and can really be part of the project? To answer that question, Florian wrote and helped us adopt a governance document that defines how any contributor—not just MDN staff—can become an owner of the project.

Inspired by the JS Foundation’s Technical Advisory Committee, the ESLint project, and others, BCD’s governance document lays out how contributors can become committers (known as peers), how important decisions are made by the project leaders (known as owners), and how to become an owner. It’s not some stuffy rule book about votes and points of order; it speaks to the project’s ambition of being a community-led project.

Since adopting the governance document, BCD has added new peers from outside Mozilla, reflecting how the project has grown into a cross-browser community. For example, Joe Medley, a technical writer at Google, has joined us to help add and confirm data about Google Chrome. We’ve also added one new owner: me.

If I’m being honest, not much has changed: peers and owners still review pull requests, still research and add new data, and still answer a lot of questions about BCD, just as before. But with the governance document, we know what’s expected and what we can do to guide others on the journey to project ownership, like I experienced. It’s reassuring to know that as the project grows so too will its leadership.

More to come

We accomplished a lot in the past year, but our best work is ahead. In 2019, we have an ambitious goal: get 100% real data for Firefox, Internet Explorer, Edge, Chrome, Safari, mobile Safari, and mobile Chrome for all Web platform features. That means data about whether or not any feature in our data set is supported by each browser and, if it is, in what version it first appeared. If we achieve our goal, BCD will be an unparalleled resource for Web developers.

But we can’t achieve this goal on our own. We need to fill in the blanks, by testing and researching features, updating data, verifying pull requests, and more. We’d love for you to join us.

The post Owning it: browser compatibility data and open source governance appeared first on Mozilla Hacks - the Web developer blog.

Andrew HalberstadtPython 3 at Mozilla

Mozilla uses a lot of Python. Most of our build system, CI configuration, test harnesses, command-line tooling and countless other scripts, tools and GitHub projects are written in Python. In mozilla-central there are over 3500 Python files (excluding third party files), comprising roughly 230k lines of code. Additionally, there are 462 repositories labelled with Python in the Mozilla org on GitHub (though many of these are not active). That’s a lot of Python, and most of it is Python 2.

With Python 2’s exaugural year well underway, it is a good time to take stock of the situation and ask some questions. How far along has Mozilla come in the Python 3 migration? Which large work items lie on the critical path? And do we have a plan to get to a good state in time for Python 2’s EOL on January 1st, 2020?
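As a small illustration of what migration-ready code looks like during a transition like this, here is a hedged sketch: a module written to behave the same under Python 2 and Python 3 by opting into Python 3 semantics up front. The function itself is just an example, not code from mozilla-central:

```python
# Opt into Python 3 semantics so this module behaves identically on both
# interpreters during the migration window.
from __future__ import absolute_import, division, print_function

def mean(values):
    # With "division" imported, / is true division even on Python 2,
    # so mean([1, 2]) is 1.5 rather than 1.
    return sum(values) / len(values)

print(mean([1, 2, 3, 4]))  # prints 2.5
```

Codebases the size of mozilla-central typically lean on exactly this kind of `__future__`-based compatibility (plus linting to enforce it) so that individual files can be ported incrementally rather than in one big flag day.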

Mozilla VR BlogFirefox Reality coming to SteamVR

Firefox Reality coming to SteamVR

We are excited to announce that we’re working with Valve to bring the immersive web to SteamVR!

This January, we announced that we were bringing the Firefox Reality experience to desktop devices and the Vive stores. Since then, collaborating closely with Valve, we have been working to also bring Firefox Reality to the SteamVR immersive experience. In the coming months, users will be offered a way to install Firefox Reality via a new web dashboard button, and then launch a browser window over any OpenVR experience.

With a few simple clicks, users will be able to access web content such as tips or guides or stream a Twitch comment channel without having to exit their immersive experiences. In addition, users will be able to log into their Firefox account once, and access synced bookmarks and cookies across both Firefox and Firefox Reality — no need to log in twice!

Firefox Reality coming to SteamVR

We are excited to collaborate with Valve and release Firefox for SteamVR this summer.

Mozilla GFXWebRender newsletter #44

WebRender is a GPU-based 2D rendering engine for the web, written in Rust. It currently powers Mozilla’s research web browser Servo and is on its way to becoming Firefox‘s rendering engine.

WebRender on Linux in Firefox Nightly

Right after the previous newsletter was published, Andrew and Jeff enabled WebRender for Linux users on Intel integrated GPUs with Mesa 18.2 or newer on Nightly if their screen resolution is 3440×1440 or less.
We decided to start with Mesa thanks to the quality of its drivers. Users with 4k screens will have to wait a little longer though (or enable WebRender manually), as there are a number of specific optimizations we want to do before we are comfortable getting WebRender used on these very high resolution screens. While most recent discrete GPUs can stomach about anything we throw at them, integrated GPUs operate on a much tighter budget and compete with the CPU for memory bandwidth. 4k screens are real little memory-bandwidth-eating monsters.

WebRender roadmap

Jessie put together a roadmap of the WebRender project and other graphics endeavors from the items discussed in the week in Toronto.
It gives a good idea of the topics that we are focusing on for the coming months.

A week in Toronto – Part deux

In the previous newsletter I went over a number of the topics that we discussed during the graphics team’s last get-together in Toronto. Let’s continue here.

WebRender on Android

We went over a number of the items in WebRender’s Android TODO-list. Getting WebRender to work at all on Android is one thing: it requires a lot of platform-specific low-level glue code, which Sotaro has been steadily improving lately.

On top of that come more questions:

  • Which portion of the Android user population supports the OpenGL features that WebRender relies on?
  • Which OpenGL features could we stop relying on in order to cover more users?
  • What do we do about the remaining users whose devices have such a small OpenGL feature set that we don’t plan to support WebRender on them in the foreseeable future?

Among the features that WebRender currently relies on heavily but that are (surprisingly) not universally supported in this day and age:

  • texture arrays
  • float 32 textures
  • texture fetches in vertex shaders
  • instancing

We discussed various workarounds. Some of them will be easy to implement, some harder; some will come at a cost, and some we are not sure will provide an acceptable user experience. As it turns out, building a modern rendering engine while also targeting devices that are anything but modern is quite a challenge. Who would have thought!

Frame scheduling

Rendering a frame, from a change of layout triggered by some JavaScript to photons flying out of the screen, goes through a long pipeline. Sometimes some steps in that pipeline take longer than we would want, but other parts of the pipeline absorb and hide the issue and all is mostly fine. Sometimes, though, slowdowns in particular places with the wrong timing can cause a chain of bad interactions, resulting in a back-and-forth between a rapid burst of a few frames followed by a couple of missed frames, as parts of the system oscillate between throttling themselves on and off.

I am describing this in the abstract because the technical description of how and why this can happen in Gecko is complicated. It’s a big topic that impacts the design of a lot of pieces in Firefox’s rendering engine. We talked about this and came up with some short and long term potential improvements.

Intel 4K performance

I mentioned this towards the beginning of this post. Integrated GPUs tend to be more limited in, well, most things, but most importantly in memory bandwidth, which is exacerbated by sharing RAM with the CPU. Jeff and Markus observed that when a high-resolution screen doesn’t fit in the integrated GPU’s dedicated caches, it can be significantly faster to split the screen into a few large regions and render them one by one. This comes at the cost of batch breaks and an increased number of draw calls, but restricting rendering to smaller portions of the screen gives the GPU a more cache-friendly workload than rendering the entire screen in a single pass.

This approach is interestingly similar to the way tiled GPUs common on mobile devices work.
On top of that, there are some optimizations we want to investigate to reduce the number of batch breaks caused by text on platforms that do not support dual-source blending, as well as a continued investigation into what is slow specifically on Intel devices.
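The region-splitting idea can be sketched in a few lines. This is not WebRender code — just an illustration (in Python, with made-up tile dimensions) of how a framebuffer might be cut into fixed-size regions that are then rendered one by one:

```python
def split_into_tiles(width, height, tile_w, tile_h):
    """Yield (x, y, w, h) rectangles covering the framebuffer.

    Edge tiles are clipped so the rectangles tile the screen exactly.
    Each rectangle would then be rendered in its own pass, trading extra
    draw calls for a smaller, more cache-friendly working set per pass.
    """
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            yield (x, y, min(tile_w, width - x), min(tile_h, height - y))

# A 4k framebuffer split into quarters: four 1920x1080 regions.
tiles = list(split_into_tiles(3840, 2160, 1920, 1080))
```

The sweet spot for the region size is an empirical question: smaller regions fit caches better but multiply draw calls and batch breaks, which is exactly the trade-off described above.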

Other topics

We went over a number of other technical topics such as WebRender’s threading architecture, gory details of support for backface-visibility, where to get the best Thai food in downtown Toronto, and more. I won’t cover them here because they are somewhat hard and/or boring to explain (or because I wasn’t involved enough in the topics to do them justice on this blog).

In conclusion

It’s been a very useful and busy week. The graphics team will meet next in Whistler in June along with the rest of Mozilla. By then Firefox 67 will ship, enabling WebRender for a subset of Windows users in the release channel, which is a huge milestone for us.

Enabling WebRender in Firefox Nightly

In about:config, enable the pref gfx.webrender.all and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.

Note that it is possible to log in with a github account.

Using WebRender in a Rust project

WebRender is available as a standalone crate on crates.io (documentation).

QMOFirefox 67 Beta 16 Testday, May 3rd

Hello Mozillians,

We are happy to let you know that Friday, May 3rd, we are organizing Firefox 67 Beta 16 Testday. We’ll be focusing our testing on: Track Changes M2 and WebExtensions compatibility & support.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday! 🙂

The Mozilla Blog$2.4 Million in Prizes for Schools Teaching Ethics Alongside Computer Science

Omidyar Network, Mozilla, Schmidt Futures, and Craig Newmark Philanthropies are announcing the Stage I winners of our Responsible Computer Science Challenge


Today, we are announcing the first winners of the Responsible Computer Science Challenge. We’re awarding $2.4 million to 17 initiatives that integrate ethics into undergraduate computer science courses.

The winners’ proposed curricula are novel: They include in-class role-playing games to explore the impact of technology on society. They embed philosophy experts and social scientists in computer science classes. They feature “red teams” that probe students’ projects for possible negative societal impacts. And they have computer science students partner with local nonprofits and government agencies.

The winners will receive awards of up to $150,000, and they span the following categories: public university, private university, liberal arts college, community college, and Jesuit university. Stage 1 winners are located across 13 states, with computer science programs ranging in size from 87 students to 3,650 students.

The Responsible Computer Science Challenge is an ambitious initiative by Omidyar Network, Mozilla, Schmidt Futures, and Craig Newmark Philanthropies. It aims to integrate ethics and responsibility into undergraduate computer science curricula and pedagogy at U.S. colleges and universities.

Says Kathy Pham, computer scientist and Mozilla Fellow co-leading the Challenge: “Today’s computer scientists write code with the potential to affect billions of people’s privacy, security, equality, and well-being. Technology today can influence what journalism we read and what political discussions we engage with; whether or not we qualify for a mortgage or insurance policy; how results about us come up in an online search; whether we are released on bail or have to stay; and so much more.”

Pham continues: “These 17 winners recognize that power, and take crucial steps to integrate ethics and responsibility into core courses like algorithms, compilers, computer architecture, neural networks, and data structures. Furthermore, they will release their materials and methodology in the open, allowing other individuals and institutions to adapt and use them in their own environment, broadening the reach of the work. By deeply integrating ethics into computer science curricula and sharing the content openly, we can create more responsible technology from the start.”

Says Yoav Schlesinger, principal at Omidyar Network’s Tech and Society Lab co-leading the Challenge: “Revamping training for the next generation of technologists is critical to changing the way tech is built now and into the future. We are impressed with the quality of submissions and even more pleased to see such outstanding proposals awarded funding as part of Stage I of the Responsible Computer Science Challenge. With these financial resources, we are confident that winners will go on to develop exciting, innovative coursework that will not only be implemented at their home institutions, but also scaled to additional colleges and universities across the country.”

Challenge winners are announced in two stages: Stage I (today), for concepts that deeply integrate ethics into existing undergraduate computer science courses, either through syllabi changes or teaching methodology adjustments. Stage I winners receive up to $150,000 each to develop and pilot their ideas. Stage II (summer 2020) supports the spread and scale of the most promising approaches developed in Stage I. In total, the Challenge will award up to $3.5 million in prizes.

The winners announced today were selected by a panel of 19 independent judges from universities, community organizations, and the tech industry. Judges deliberated over the course of three weeks.

<The Winners>

(School | Location | Principal Investigator)

Allegheny College | Meadville, PA | Oliver Bonham-Carter 

While studying fields like artificial intelligence and data analytics, students will investigate potential ethical and societal challenges. For example: They might interrogate how medical data is analyzed, used, or secured. Lessons will include relevant readings, hands-on activities, and talks from experts in the field.


Bemidji State University | Bemidji, MN | Marty J. Wolf, Colleen Greer

The university will lead workshops that guide faculty at other institutions in developing and implementing responsible computer science teaching modules. The workshops will convene not just computer science faculty, but also social science and humanities faculty.


Bowdoin College | Brunswick, ME | Stacy Doore

Computer science students will participate in “ethical narratives laboratories,” where they experiment with and test the impact of technology on society. These laboratories will include transformative engagement with real and fictional narratives including case studies, science fiction readings, films, shows, and personal interviews.


Columbia University | New York, NY | Augustin Chaintreau

This approach integrates ethics directly into the computer science curriculum, rather than making it a stand-alone course. Students will consult and engage with an “ethical companion” that supplements a typical course textbook, allowing ethics to be addressed immediately alongside key concepts. The companion provides examples, case studies, and problem sets that connect ethics with topics like computer vision and algorithm design.


Georgetown University | Washington, DC | Nitin Vaidya

Georgetown’s computer science department will collaborate with the school’s Ethics Lab to create interactive experiences that illuminate how ethics and computer science interact. The goal is to introduce a series of active-learning engagements across a semester-long arc into selected courses in the computer science curriculum.


Georgia Institute of Technology | Atlanta, GA | Ellen Zegura

This approach embeds social responsibility into the computer science curriculum, starting with the introductory courses. Students will engage in role-playing games (RPGs) to examine how a new technology might impact the public. For example: How facial recognition or self-driving cars might affect a community.


Harvard University | Cambridge, MA | Barbara Grosz

Harvard will expand the open-access resources of its Embedded EthiCS program which pairs computer science faculty with philosophy PhD students to develop ethical reasoning modules that are incorporated into courses throughout the computer science curriculum. Computer science postdocs will augment module development through design of activities relevant to students’ future technology careers.


Miami Dade College | Miami, FL | Antonio Delgado

The college will integrate social impact projects and collaborations with local nonprofits and government agencies into the computer science curriculum. Computer science syllabi will also be updated to include ethics exercises and assignments.


Northeastern University | Boston, MA | Christo Wilson

This initiative will embed an ethics component into the university’s computer science, cybersecurity, and data science programs. The ethics component will include lectures, discussion prompts, case studies, exercises, and more. Students will also have access to a philosophy faculty advisor with expertise in information and data ethics.


Santa Clara University | Santa Clara, CA | Sukanya Manna, Shiva Houshmand, Subramaniam Vincent

This initiative will help CS students develop a deliberative ethical analysis framework that complements their technical learning. It will develop software engineering ethics, cybersecurity ethics, and data ethics modules, with integration of case studies and projects. These modules will also be adapted into free MOOC materials, so other institutions worldwide can benefit from the curriculum.


University of California, Berkeley | Berkeley, CA | James Demmel, Cathryn Carson

This initiative integrates a “Human Contexts and Ethics Toolkit” into the computer science/data science curriculum. The toolkit helps students discover when and how their work intersects with social power structures. For example: bias in data collection, privacy impacts, and algorithmic decision making.


University at Buffalo | Buffalo, NY | Atri Rudra

In this initiative, freshmen studying computer science will discuss ethics in the first-year seminar “How the internet works.” Sophomores will study responsible algorithmic development for real-­world problems. Juniors will study the ethical implications of machine learning. And seniors will incorporate ethical thinking into their capstone course.


University of California, Davis | Davis, CA | Annamaria (Nina) Amenta, Gerardo Con Díaz, and Xin Liu

Computer science students will be exposed to social science and humanities while pursuing their major, culminating in a “conscientious” senior project. The project will entail developing technology while assessing its impact on inclusion, privacy, and other factors, and there will be opportunities for projects with local nonprofits or government agencies.


University of Colorado, Boulder | Boulder, CO | Casey Fiesler

This initiative integrates an ethics component into introductory programming classes, and features an “ethics fellows program” that embeds students with an interest in ethics into upper division computer science and technical classes.


University of Maryland, Baltimore County | Baltimore, MD | Helena Mentis

This initiative uses three avenues to integrate ethics into the computer science curriculum: peer discussions on how technologies might affect different populations; negative implications evaluations, i.e. “red teams” that probe the potential negative societal impacts of students’ projects; and a training program to equip teaching assistants with ethics and equality literacy.


University of Utah | Salt Lake City, UT | Suresh Venkatasubramanian, Sorelle A. Friedler (Haverford College), Seny Kamara (Brown University)

Computer science students will be encouraged to apply problem solving and critical thinking not just to designing algorithms, but also to the social issues that their algorithms intersect with. For example: When studying bitcoin mining algorithms, students will focus on energy usage and environmental impact. The curriculum will be developed with the help of domain experts who have expertise in sustainability, surveillance, criminal justice, and other issue areas.


Washington University | St. Louis, MO | Ron Cytron

Computer science students will participate in “studio sessions,” or group discussions that unpack how their technical education and skills intersect with issues like individual privacy, data security, and biased algorithms.


The Responsible Computer Science Challenge is part of Mozilla’s mission to empower the people and projects on the front lines of internet health work. Learn more about Mozilla Awards.

Launched in October 2018, the Responsible Computer Science Challenge, incubated at Omidyar Network’s Tech and Society Solutions Lab, is part of Omidyar Network’s growing efforts to mitigate the unintended consequences of technology on our social fabric, and ensure products are responsibly designed and brought to market.

The post $2.4 Million in Prizes for Schools Teaching Ethics Alongside Computer Science appeared first on The Mozilla Blog.

Mark CôtéDeconstruction of a Failure

Something I regularly tell my daughter, who can tend towards perfectionism, is that we all fail. Over the last few years, I’ve seen more and more talks and articles about embracing failure. The key is, of course, to learn from the failure. I’ve written a bit before about what I learned from leading the MozReview project, Mozilla’s experiment with a new approach to code review that lasted from about 2014 to 2018.

This Week In RustThis Week in Rust 284

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is cast-rs, a crate with ergonomic, checked cast functions for primitive types. Thanks to mark-i-m for the suggestion!
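For readers unfamiliar with checked casts: the idea is that a narrowing conversion reports failure instead of silently truncating or wrapping. A rough Python analogue of what such a function does for, say, an `i8` target type (illustrative only — not the crate's actual API):

```python
def checked_i8(value):
    """Return value if it fits in a signed 8-bit integer, else raise.

    Mirrors the spirit of a checked cast: an out-of-range input is an
    explicit error rather than a silent wrap-around.
    """
    if -128 <= value <= 127:
        return value
    raise OverflowError("%r does not fit in i8" % (value,))
```

In Rust, the crate wraps this pattern in ergonomic functions for each primitive type, returning a `Result` instead of raising.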

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

229 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Clippy’s Favorite Activity Is Criticizing Clippy’s Codebase

ReductRs on twitter!

Llogiq is pretty self-congratulatory for picking this awesome quote.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

About:CommunityFirefox 67 new contributors

With the release of Firefox 67, we are pleased to welcome the 75 developers who contributed their first code change to Firefox in this release, 66 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

The Servo BlogThis Week In Servo 129

In the past week, we merged 68 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the team’s plans for 2019.

This week’s status updates are here.

Exciting works in progress

Notable Additions

  • ferjm implemented enough Shadow DOM support to build user agent widgets such as media controls.
  • miller-time standardized the use of referrers in fetch requests.
  • krk added a build-time validation that the DOM inheritance hierarchy matches the WebIDL hierarchy.
  • paulrouget redesigned part of the embedding API to separate per-window from per-application APIs.
  • AZWN created an API for using the type system to represent important properties of the JS engine.
  • Akhilesh1996 implemented the setValueCurveAtTime Web Audio API.
  • jdm transitioned the Windows build to rely on clang-cl instead of the MSVC compiler.
  • snarasi6 implemented the setPosition and setOrientation Web Audio APIs.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

The Mozilla BlogFacebook’s Ad Archive API is Inadequate

Facebook’s tool meets only two of experts’ five minimum standards. That’s a failing grade.


Facebook pledged in February to release an ad archive API, in order to make political advertising on the platform more transparent. The company finally released this API in late March — and we’ve been doing a review to determine if it is up to snuff.

While we appreciate Facebook following through on its commitment to make the ad archive API public, its execution on the API leaves something to be desired. The European Commission also hinted at this last week in its analysis when it said that “further technical improvements” are necessary.

The fact is, the API doesn’t provide necessary data. And it is designed in ways that hinder the important work of researchers, who inform the public and policymakers about the nature and consequences of misinformation.

Last month, Mozilla and more than sixty researchers published five guidelines we hoped Facebook’s API would meet. Facebook’s API fails to meet three of these five guidelines. It’s too early to determine if it meets the two other guidelines. Below is our analysis:


Researchers’ guideline: A functional, open API should have comprehensive political advertising content.

Facebook’s API: It’s impossible to determine if Facebook’s API is comprehensive, because it requires you to use keywords to search the database. It does not provide you with all ad data and allow you to filter it down using specific criteria or filters, the way nearly all other online databases do. And since you cannot download data in bulk and ads in the API are not given a unique identifier, Facebook makes it impossible to get a complete picture of all of the ads running on their platform (which is exactly the opposite of what they claim to be doing).
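To make the keyword constraint concrete, here is a sketch of how a researcher's query has to be shaped. The endpoint and parameter names follow the Graph API's ad archive surface as publicly documented, but treat the exact version and fields as illustrative; the point is that `search_terms` is effectively mandatory, so you can only ever retrieve ads matching keywords you thought to search for:

```python
from urllib.parse import urlencode

# Version number and endpoint shape are illustrative.
GRAPH_BASE = "https://graph.facebook.com/v3.3/ads_archive"

def ad_archive_query(keyword, countries, access_token):
    """Build an ad archive request URL.

    There is no "give me everything" form of this request: each call
    must name search terms, so coverage depends on guessing the right
    keywords rather than filtering a complete dataset.
    """
    params = {
        "search_terms": keyword,
        "ad_reached_countries": ",".join(countries),
        "access_token": access_token,
    }
    return GRAPH_BASE + "?" + urlencode(params)

url = ad_archive_query("election", ["DE", "FR"], "TOKEN")
```

Contrast this with a bulk-download or cursor-over-everything design, where researchers could enumerate all ads and filter locally — that is what the guideline asks for and what the API does not offer.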


Researchers’ guideline: A functional, open API should provide the content of the advertisement and information about targeting criteria.

Facebook’s API: The API provides no information on targeting criteria, so researchers have no way to tell the audience that advertisers are paying to reach. The API also doesn’t provide any engagement data (e.g., clicks, likes, and shares), which means researchers cannot see how users interacted with an ad. Targeting and engagement data is important because it lets researchers see what types of users an advertiser is trying to influence, and whether or not their attempts were successful.


Researchers’ guideline: A functional, open API should have up-to-date and historical data access.

Facebook’s API: Ad data will be available in the archive for seven years, which is actually pretty good. Because the API is new and still hasn’t been properly populated, we cannot yet assess whether it is up-to-date, whether bugs will be fixed, or whether Facebook will support long-term studies.


Researchers’ guideline: A functional, open API should be accessible to and shareable with the general public.

Facebook’s API: This data is now available as part of Facebook’s standard GraphAPI and governed by Facebook Developers Terms of Service. It is too early to determine what exact constraints this will create for public availability and disclosure of data.


Researchers’ guideline: A functional, open API should empower, not limit, research and analysis.

Facebook’s API: The current API design puts huge constraints on researchers, rather than allowing them to discover what is really happening on the platform. The limitations in each of these categories, coupled with search rate limits, means it could take researchers months to evaluate ads in a certain region or on a certain topic.


It’s not too late for Facebook to fix its API. We hope they take action soon. And, we hope bodies like the European Commission carefully scrutinize the tool’s shortcomings.

Mozilla will also be conducting an analysis of Google’s ad API when it is released in the coming weeks. Since Facebook’s ad archive API fails to let researchers do their jobs ahead of the upcoming European Parliamentary elections, we hope that Google will step up and deliver an API that enables this important research.

The post Facebook’s Ad Archive API is Inadequate appeared first on The Mozilla Blog.

Daniel StenbergWhat is the incentive for curl to release the library for free?

(This is a repost of the answer I posted on Stack Overflow for this question. The answer immediately became my most upvoted answer ever on Stack Overflow, with 516 upvotes during the 48 hours it was up before a moderator deleted it for unspecified reasons. It had already been marked “on hold” for being “primarily opinion-based” and then locked but kept: “exists because it has historical significance”. But apparently that wasn’t good enough. I’ve saved a screenshot of the deletion. Debated on Meta. Status now: it was brought back but remains locked.)

I’m Daniel Stenberg.

I made curl

I founded the curl project back in 1998, I wrote the initial curl version and I created libcurl. I’ve written more than half of all the 24,000 commits done in the source code repository up to this point in time. I’m still the lead developer of the project. To a large extent, curl is my baby.

I shipped the first version of curl as open source since I wanted to “give back” to the open source world that had given me so much code already. I had used so much open source and I wanted to be as cool as the other open source authors.

Thanks to it being open source, literally thousands of people have been able to help us out over the years and have improved the products, the documentation, the web site and just about every other detail around the project. curl and libcurl would never have become the products that they are today were they not open source. The list of contributors now surpasses 1900 names and currently grows by a few hundred names per year.

Thanks to curl and libcurl being open source and liberally licensed, they were immediately adopted in numerous products and soon shipped by operating systems and Linux distributions everywhere thus getting a reach beyond imagination.

Thanks to them being “everywhere”, available and liberally licensed, they got adopted and used everywhere and by everyone. It created a de facto transfer library standard.

At an estimated six billion installations worldwide, we can safely say that curl is the most widely used internet transfer library in the world. It simply would not have gotten there had it not been open source. curl runs in billions of mobile phones, a billion Windows 10 installations, in half a billion games and several hundred million TVs – and more.

Should I have released it with a proprietary license instead and charged users for it? It never occurred to me, and it wouldn’t have worked because I would never have managed to create this kind of stellar project on my own. And projects and companies wouldn’t have used it.

Why do I still work on curl?

Now, why do I and my fellow curl developers still continue to develop curl and give it away for free to the world?

  1. I can’t speak for my fellow project team members. We all participate in this for our own reasons.
  2. I think it’s still the right thing to do. I’m proud of what we’ve accomplished and I truly want to make the world a better place and I think curl does its little part in this.
  3. There are still bugs to fix and features to add!
  4. curl is free but my time is not. I still have a job, and someone still has to pay my employer so that I can get paid every month and put food on the table for my family. I charge customers and companies to help them with curl. You too can get my help for a fee, which then indirectly helps make sure that curl continues to evolve, remain free and stay the kick-ass product it is.
  5. curl was my spare time project for twenty years before I started working with it full time. I’ve had great jobs and worked on awesome projects. I’ve been in a position of luxury where I could continue to work on curl on my spare time and keep shipping a quality product for free. My work on curl has given me friends, boosted my career and taken me to places I would not have been at otherwise.
  6. I would not do it differently if I could go back and do it again.

Am I proud of what we’ve done?

Yes. So insanely much.

But I’m not satisfied with this and I’m not just leaning back, happy with what we’ve done. I keep working on curl every single day, to improve, to fix bugs, to add features and to make sure curl keeps being the number one file transfer solution for the world even going forward.

We make mistakes along the way. We make the wrong decisions and sometimes we implement things in crazy ways. But winning in the end and conquering the world is about patience and endurance: constantly going back to reconsider previous decisions and correct previous mistakes. To continuously iterate, polish off rough edges and gradually improve over time.

Never give in. Never stop. Fix bugs. Add features. Iterate. To the end of time.

For real?

Yeah. For real.

Do I ever get tired? Is it ever done?

Sure I get tired at times. Working on something every day for over twenty years isn’t a paved downhill road. Sometimes there are obstacles. At times things are rough. Occasionally people are just as ugly and annoying as people can be.

But curl is my life’s project and I have patience. I have thick skin and I don’t give up easily. The tough times pass and most days are awesome. I get to hang out with awesome people, and knowing that my code helps drive the Internet revolution everywhere is an ego boost above normal.

curl will never be “done” and so far I think work on curl is pretty much the most fun I can imagine. Yes, I still think so even after twenty years in the driver’s seat. And as long as I think it’s fun I intend to keep at it.

Robert O'CallahanGoodbye Mozilla IRC

I've been connected to Mozilla IRC for about 20 years. When I first started hanging out on Mozilla IRC I was a grad student at CMU. It's how I got to know a lot of Mozilla people. I was never an IRC op or power user, but when #mozilla was getting overwhelmed with browser user chat I was the one who created #developers. RIP.

I'll be sad to see it go, but I understand the decision. Technologies have best-before dates. I hope that Mozilla chooses a replacement that sucks less. I hope they don't choose Slack. Slack deliberately treats non-Chrome browsers as second-class — in particular, Slack Calls don't work in Firefox. That's obviously a problem for Mozilla users, and it would send a bad message if Mozilla says that sort of attitude is fine with them.

I look forward to finding out what the new venue is. I hope it will be friendly to non-Mozilla-staff and the community can move over more or less intact.


Today I read Mike Hoye's blog post about Mozilla's IRC server coming to an end.  He writes:

Mozilla has relied on IRC as our main synchronous communications tool since the beginning...While we still use it heavily, IRC is an ongoing source of abuse and  harassment for many of our colleagues and getting connected to this now-obscure forum is an unnecessary technical barrier for anyone finding their way to Mozilla via the web.  

And, while "Mozilla intends to deprecate IRC," he goes on to say:

we definitely still need a globally-available, synchronous and text-first communication tool.

While I made dinner tonight, I thought back over my long history using Mozilla's IRC system, and tried to understand its place in my personal development within Mozilla and open source.


I remember the very first time I used IRC.  It was 2004, and earlier in the week I had met with Mike Shaver at Seneca, probably for the first time, and he'd ended our meeting with a phrase I'd never heard before, but I nodded knowingly nevertheless: "Ping me in #developers."

Ping me.  What on earth did that mean!? Little did I know that this phrase would come to signify so much about the next decade of my life.  After some research and initial trial and error, 'dave' joined and found his way to the unlisted #developers channel.  And there was 'shaver', along with 300 or so other #developers.

The immediacy of it was unlike anything I'd used before (or since).  To join irc was to be transported somewhere else.  You weren't anywhere, or rather, you were simultaneously everywhere.  For many of these years I was connecting to irc from an old farm house in the middle of rural Ontario over a satellite internet connection.  But when I got online, there in the channels with me were people from New Zealand, the US, Sweden, and everywhere in between.

Possibly you've been on video calls with people from around the world, and felt something similar.  However, what was different from a video call, or teleconference, or any other medium I've used since, is that the time together didn't need to end.  You weren't meeting as such, and there wasn't a timebox or shared goal around your presence there.  Instead, you were working amongst one another, co-existing, listening, and most importantly for me, learning.


Over the next year, irc went from being something I used here and there to something I used all the time.  I became 'humph' (one day Brendan confused me for Dave Herman, and shaver started calling me 'humph' to clarify) and have remained so ever since.  There are lots of people who have only ever called me 'humph' even to my face, which is hilarious and odd, but also very special.

Mike Beltzner taught me how to overcome one of the more difficult aspects of IRC: maintaining context after you log off.  Using screen and irssi I was able to start, leave, and then pick up conversations at a later time.  It's something you take for granted on Slack, but was critical to me being able to leverage IRC as a source of knowledge: if I asked a question, it might be hours before the person who could answer it would wake up and join irc from another part of the planet.

I became more engaged with different areas of the project.  IRC is siloed.  A given server is partitioned into many different channels, and each has its own sub-culture, appropriate topics, and community.  However, people typically participate in many channels.  As you get to know someone in one channel, you'll often hear more about the work happening in another.  Slowly I got invited into other channels and met more and more people across the Mozilla ecosystem.

Doing so took me places I hadn't anticipated.  For example, at some point I started chatting with people in #thunderbird, which led to me becoming an active contributor--I remember 'dascher' just started assigning me bugs to fix!  Another time I discovered the #static channel and a guy named 'taras' who was building crazy static analysis tools with gcc.  Without irc I can confidently say that I would have never started DXR, or worked on web audio, WebGL, all kinds of Firefox patches, or many of the other things I did.  I needed to be part of a community of peers and mentors for this work to be possible.

At a certain point I went from joining other channels to creating my own.  I started to build many communities within Mozilla to support new developers.  It was incredible to watch them fill up with a mix of experienced Mozilla contributors and people completely new to the project.  Over the years it helped to shape my approach to getting students involved in open source through direct participation.


In some ways, IRC was short for "I Really Can do this."  On my own?  No.  No way. But with the support of a community that wasn't going to abandon me, who would answer my questions, spend long hours helping me debug things, or introduce me to people who might be able to unlock my progress, I was able to get all kinds of new things done.  People like shaver, ted, gavin, beltzner, vlad, jorendorff, reed, preed, bz, stuart, Standard8, Gijs, bsmedberg, rhelmer, dmose, myk, Sid, Pomax, and a hundred other friends and colleagues.

The kind of help you get on irc isn't perfect.  I can remember many times asking a question, and having bsmedberg give a reply, which would take me the rest of the day (or week!) to unpack and fully understand.  You got hints.  You got clues.  You were (sometimes) pointed in the right direction.  But no one was going to hold your hand the whole way.  You were at once surrounded by people who knew, and also completely on your own.  It still required a lot of personal research.  Everyone was also struggling with their own pieces of the puzzle, and it was key to know how much to ask, and how much to do on your own.


Probably the most rewarding part of irc were the private messages.  Out of the blue, someone would ping you, sometimes in channel (or a new channel), but often just to you personally.  I developed many amazing friendships this way, some of them with people I've never met outside of a text window.

When I was working on the Firefox Audio Data API, I spent many weeks fighting with the DOM implementation.  There were quite a few people who knew this code, but their knowledge of it was too far beyond me, and I needed to work my way up to a place where we could discuss things.  I was very much on my own, and it was hard work.

One day I got a ping from someone calling themselves 'notmasteryet'.  I'd been blogging about my work, and linked to my patches, and 'notmasteryet' had started working on them.  You can't imagine the feeling of having someone on the internet randomly find you and say, "I think I figured out this tricky bit you've been struggling to make work."  That's exactly what happened, and we went on to spend many amazing weeks and months working on this together, sharing this quiet corner of Mozilla's irc server, moving at our own pace.

I hesitated to tell a story like this because there is no way to do justice to the many relationships I formed during the next decade.  I can't tell you all the amazing stories.  At one time or another, I got to work with just about everyone in Mozilla, and many became friends.  IRC allowed me to become a part of Mozilla in ways that would have been impossible just reading blogs, mailing lists, or bugzilla.  To build relationships, one needs long periods of time together.  It happens slowly.


But then, at a certain point, I stopped completely.  It's maybe been four or five years since I last used irc.  There are lots of reasons for it.  Partly it was due to things mhoye discussed in his blog post (I can confirm that harassment is real on irc). But also Mozilla had changed, and many of my friends and colleagues had moved on.  IRC, and the Mozilla that populated it, is part of the past.

Around the same time I was leaving IRC, Slack was just starting to take off.  Since then, Slack has come to dominate the space once occupied by tools like irc.  As I write this, Slack is in the process of doing its IPO, with an impressive $400M in revenue last year.  Slack is popular.

When I gave up irc, I really didn't want to start in on another version of the same thing.  I've used Slack a lot out of necessity, and even in my open source classes as a way to expose my students to it, so they'll know how it works.  But I've never really found it compelling.  Slack is a better irc, there's no doubt.  But it's also not what I loved about irc.

Mike writes that he's in the process of evaluating possible replacements for irc within Mozilla.  I think it's great that he and Mozilla are wrestling with this.  I wish more open source projects would do it, too.  Having a way to get deeply engaged with a community is important, especially one as large as Mozilla.

Whatever product or tool gets chosen, it needs to allow people to join without being invited.  Tools like Slack do a great job with authentication and managing identity.  But to achieve it they rely on gatekeeping.  I wasn't the typical person who used irc when I started; but by using it for a long time, I made it a different place.  It's really important that any tool like this does more than just support the in-groups (e.g., employees, core contributors, etc).  It's also really important that any tool like this does better than create out-groups.


IRC was a critical part of my beginnings in open source.  I loved it.  I still miss many of the friends I used to talk to daily.  I miss having people ping me.  As I work with my open source students, I think a lot about what I'd do if I was starting today.  It's not possible to follow the same path I took.  The conclusion I've come to is that the only way to get started is to focus on connecting with people.  In the end, the tools don't matter, they change.  But the people matter a lot, and we should put all of our effort into building relationships with them.  

Robert O'CallahanUpdate To rr Master To Debug Firefox Trunk

A few days ago Firefox started using LMDB (via rkv) to store some startup info. LMDB relies on file descriptor I/O being coherent with memory-maps in a way that rr didn't support, so people have had trouble debugging Firefox in rr, and Pernosco's CI test failure reproducer also broke. We have checked in a fix to rr master and are in the process of updating the Pernosco pipeline.

The issue is that LMDB opens a file, maps it into memory MAP_SHARED, and then opens the file again and writes to it through the new file descriptor, requiring that the written data be immediately reflected in the shared memory mapping. (This behavior is not guaranteed by POSIX but is guaranteed by Linux.) rr needs to observe these writes and record the necessary memory changes, otherwise they won't happen during replay (because writes to files don't happen during replay) and replay will fail.

rr already handled the case where the application writes to the file descriptor (technically, the file description) that was used to map the file — Chromium has needed this for a while. The LMDB case is harder to handle. To fix it, whenever the application opens a file for writing, we check whether any shared mapping of that file exists, and if so, mark that file description so writes through it have their shared-memory effects recorded. Unfortunately this adds overhead to writable file opens, but hopefully it doesn't matter much, since in many workloads most file opens are read-only. (If it turns out to be a problem, there are ways we can optimize further.)

While fixing this, we also added support for the case where the application opens a file (possibly multiple times with different file descriptions) and then creates a shared mapping of one of them. To handle that, when creating a shared mapping we scan all open files to see if any of them refer to the mapped file, and if so, mark them so the effects of their writes are recorded.
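The coherence behavior LMDB depends on can be demonstrated with a short sketch (Python here purely for illustration; the file name is made up, and the final assertion relies on the Linux guarantee described above):

```python
import mmap
import os
import tempfile

# Create a file and map it into memory MAP_SHARED through one descriptor.
path = os.path.join(tempfile.mkdtemp(), "data.mdb")  # hypothetical file name
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

fd_map = os.open(path, os.O_RDWR)
view = mmap.mmap(fd_map, 4096, mmap.MAP_SHARED)

# Open the same file a second time and write through the new descriptor --
# this is the pattern LMDB uses.
fd_write = os.open(path, os.O_RDWR)
os.write(fd_write, b"hello")

# On Linux, the shared mapping reflects the write immediately. This is the
# memory change rr must observe and record so it can be reproduced at replay.
seen = bytes(view[:5])
print(seen)  # b'hello'
```

The write through `fd_write` never touches `view` directly, which is exactly why rr has to track writable opens of files that have live shared mappings.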

Update: Actually, at least this commit is required.

Firefox NightlyThese Weeks in Firefox: Issue 58


  • New and wonderful DevTools goodies:
    • New CSS debugging feature coming up soon (likely with Firefox 69): Inactive CSS. This will make it tremendously easier to know when certain CSS declarations don’t have the desired effect and why (join the fun on twitter, check out the bug, demo GIF).
      • The CSS rules pane is showing a helpful infobox explaining why a CSS rule is not being applied.

        The Firefox DevTools will make it much easier to find out why certain styles aren’t being applied!

    • Editing an existing request and running some JS on the response is powerful. Requests can now be formatted in fetch format (in addition to cURL). The created fetch command can also be used directly in the Console. (Bug 1540054, Mrigank Krishan 🌟)
      • The Network Monitor tool shows a request that has a context menu option to re-create that request as a window.fetch command. That command has been automatically put into the console input.

        We’re totally making fetch happen here.

  • User Initiated Picture-in-Picture has been enabled by default on Nightly on Windows
      • A YouTube video with the Picture-in-Picture toggle being displayed over top of it.

        Clicking on the little blue toggle on the right will pop the video out into its own always-on-top player window.

    • See some bugs? Please file them against this metabug
  • Worried about personal information leaking when posting performance profiles from the Gecko Profiler add-on? Now it’s much easier to select exactly what information you share:
      • The new profile publish panel with different data to include/filter out from profile (e.g. hidden threads, hidden time range, screenshots resource URLs and extensions)

        Worried about what’s in those performance profiles you’ve been submitting? Worry no longer!

  • We are now showing an icon in the identity block when a permission prompt got automatically denied by the browser (e.g. because it was lacking user interaction).
  • Cryptomining and Fingerprinting protections have been enabled in Nightly by default in both Standard and Strict content blocking modes.
    • Please file breakage against either of these two bugs

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug

  • Arpit Bharti [:arpit73]
  • Damien
  • Florens Verschelde :fvsch
  • Gary Chen [:xeonchen]
  • jaril
  • Kestrel
  • Nidhi Kumari
  • Oriol Brufau [:Oriol]
  • Richard Marti (:Paenglab)
  • Syeda Asra Arshia Qadri [:aqadri]
  • Tim Nguyen :ntim

New contributors (🌟 = first patch)

Project Updates

Activity Stream

Add-ons / Web Extensions

Developer Tools

  • Adopting Prettier on the DevTools codebase (as a pilot before potentially applying it to more of m-c). This way, we’d have auto-formatting like we already do for C++ code! RFC conversation is here.
  • Continued Rock-solid & Fast Debugging™ work and polishing features landed in 67 and 68 (Worker Debugging, Logpoints, Column Breakpoints)
  • Paused indicator and reason in Debugger is more visible! (Issue 8163, derek-li)
    • The DevTools Debugger is being more obvious that execution is paused, and also is explaining why it's paused (in this case, it's saying "Paused while stepping").

      According to the Debugger, execution is paused because we’re stepping through the code, line-by-line.

  • The Debugger team is showing their GitHub contributors what it’s like to contribute to mozilla-central via Phabricator and Bugzilla directly. Transitioned roughly 12 GitHubbers – really excited about this number!
  • Print emulation landed in Inspector – timely before Earth Day to save the trees 🌲!
  • Reducing some noise, the Browser Console will provide the option to hide content messages (behind devtools.browserconsole.filterContentMessages). Bug 1260877.
  • We’re also adding a way to list all of the elements impacted by a CSS warning in the console. When one of those CSS parser warnings occur inside a rule, the console will find this rule’s selector and let users log the matching elements (bug, demo GIF).
  • Wield more filter power in the Console with the support of regular expressions (bug 1441079, Hemakshi Sachdev [:hemakshis] 🎉)
  • “Race Cache With Network” status is shown for resources in the Network panel (Bug 1358038, Heng Yeow :tanhengyeow)
  • Continued improvements to Responsive Design Mode
  • The new Remote Debugging page is ON (about:debugging). WebIDE and the connect page are slotted for removal. All Debug Targets can be inspected with about:devtools-toolbox.
    • The old Connect page and WebIDE DevTool are riding off into the sunset.

    • Latest features: unplugged USB devices remain in the sidebar as “Unplugged” (bug, screenshot), remote debugging toolboxes show nicer headers with icons depending on what your remote target is (bug, example), and the same tab is reused when you connect again to the same target (bug).
  • Specific resources can be blocked in the network monitor – contributed by the renowned :jryans (bug 1151368) – and the first step to having a fully-fledged resource blocking feature
    • The Network Monitor is showing a network request that was blocked, and a context menu entry to unblock the request.

      Stop, block and roll!



Performance tools

  • Big deploy last week!
  • We show larger screenshots while hovering the screenshots track now.
    • The Firefox Profiler is showing a larger thumbnail when hovering the "Screenshot" timeline track

      Now it’s easier to see what was happening on screen when Firefox was being slow in a profile.

  • Landed splitter for the timeline and detail view.
    • The Call Tree section of the Profiler is being resized via the splitter between the Timeline and the Detail view.

      Ahhh, breathing room.

  • Landed some network panel & tooltip improvements
    • More accessible colors
    • More accurate timing information
    • Graphs for different phases in tooltips
    • MIME types in tooltips
      • A tooltip for a network request is showing timing information and the MIME type for the request.

        The more information we have about a slow Firefox, the easier it is to make Firefox faster.


Policy Engine


  • Prathiksha has started her internship working on streamlining the way we do message passing between about: pages and privileged code, and particularly on about:certerror.
  • Firefox Monitor now enabled by default in Nightly, pending bug 1544875.

Search and Navigation

Quantum Bar

Continuing to fix regressions in QuantumBar, including improvements for RTL, less visual flicker and lots more.

Chris H-CFirefox Origin Telemetry: Putting Prio in Practice

Prio is neat. It allows us to learn counts of things that happen across the Firefox population without ever being able to learn which Firefox sent us which pieces of information.

For example, Content Blocking will soon be using this to count how often different trackers are blocked and exempted from blocking so we can more quickly roll our Enhanced Tracking Protection to our users to protect them from companies who want to track their activities across the Web.

To get from “Prio is neat” to “Content Blocking is using it” required a lot of effort and the design and implementation of a system I called Firefox Origin Telemetry.

Prio on its own has some very rough edges. It can only operate on a list of at most 2046 yes or no questions (a bit vector). It needs to know cryptographic keys from the servers that will be doing the sums and decryption. It needs to know what a “Batch ID” is. And it needs something to reliably and reasonably-frequently send the data once it has been encoded.

So how can we turn “tracker was blocked” into a bit in a bit vector into an encoded prio buffer into a network payload…

Firefox Origin Telemetry has two lists: a list of “origins” and a list of “metrics”. The list of origins is a list of where things happen: each tracker’s domain is an “origin”. The list of metrics is a list of what happened. Did you block the tracker, or did you have to exempt it from blocking because otherwise the site broke? Both “blocked” and “exempt” are “metrics”.

In this way Content Blocking can, whenever a tracker is blocked, call

Telemetry::RecordOrigin(OriginMetricID::ContentBlocking_Blocked, "");

And Firefox Origin Telemetry will take it from there.

Step 0 is in-memory storage. Firefox Origin Telemetry stores tables mapping from encoding id (ContentBlocking_Blocked) to tables of origins mapped to counts (“”: 1). If there’s any data in Firefox Origin Telemetry, you can view it in about:telemetry and it might look something like this:


Step 1 is App Encoding: turning “ContentBlocking_Blocked: {“”: 1}” into “bit twelve on shard 2 should be set to 1 for encoding ‘content-blocking-blocked’ ”

The full list of origins is too long to talk to Prio. So Firefox Origin Telemetry splits the list into 2046-element “shards”. The order of the origins list and the split locations for the shards must be stable and known ahead of time. When we change it in the future (either because Prio can start accepting larger or smaller buffers, or when the list of origins changes) we will have to change the name of the encoding from ‘content-blocking-blocked’ to maybe ‘content-blocking-blocked-v2’.
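The shard arithmetic is simple index math. A sketch (the function and variable names here are mine, not Firefox’s, and the origin list is made up; the real list and its stable ordering live in Firefox’s source):

```python
SHARD_SIZE = 2046  # the most yes/no questions Prio can take in one bit vector

def shard_position(origins, origin):
    """Map an origin to (shard number, bit index within that shard).

    `origins` must be in a stable, agreed-upon order: the servers use
    the same list to map bits back to origins when decoding."""
    index = origins.index(origin)
    return index // SHARD_SIZE, index % SHARD_SIZE

# Example with a made-up origin list:
origins = ["tracker%d.example" % i for i in range(5000)]
print(shard_position(origins, "tracker12.example"))    # (0, 12)
print(shard_position(origins, "tracker2046.example"))  # (1, 0)
```

Because the positions are pure functions of the list order, any change to the list silently changes what every bit means, which is why the encoding name has to be versioned when the list changes.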

Step 2 is Prio Encoding: Firefox Origin Telemetry generates batch IDs of the encoding name suffixed with the shard number: for our example the batch ID is “content-blocking-blocked-1”. The server keys are communicated by Firefox Preferences (you can see them in about:config). With those pieces and the bit vector shards themselves, Prio has everything it needs to generate opaque binary blobs about 50 kilobytes in size.

Yeah, 2kb of data in a 50kb package. Not a small increase.

Step 3 is Base64 Encoding where we turn those 50kb binary blobs into 67kb strings of the letters a-z and A-Z, the numbers 0-9, and the symbols “+” or “/”. This is so we can send it in a normal Telemetry ping.
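That size jump is just base64’s 4/3 expansion (plus padding), which a quick check confirms:

```python
import base64

blob = b"\x00" * 50_000        # stand-in for a ~50 kB opaque Prio blob
encoded = base64.b64encode(blob)
print(len(encoded))            # 66668: every 3 input bytes become 4 output chars
```

So a 50 kB blob lands at roughly the 67 kB figure quoted above.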

Step 4 is the “prio” ping. Once a day or when Firefox shuts down we need to send a ping containing these pairs of batch ids and base64-encoded strings plus a minimum amount of environmental data (Firefox version, current date, etc.), if there’s data to be sent. In the event that sending fails, we need to retry (TelemetrySend). After sending the ping should be available to be inspected for a period of time (TelemetryArchive).

…basically, this is where Telemetry does what Telemetry does best.

And then the ping becomes the problem of the servers who need to count and verify and sum and decode and… stuff. I dunno, I’m a Firefox Telemetry Engineer, not a Data Engineer. :amiyaguchi’s doing that part, not me : )

I’ve smoothed over some details here, but I hope I’ve given you an idea of what value Firefox Origin Telemetry brings to Firefox’s data collection systems. It makes Prio usable for callers like Content Blocking and establishes systems for managing the keys and batch IDs necessary for decoding on the server side (Prio will generate int vector shards for us, but how will we know which position of which shard maps back to which origin and which metric?).

Firefox Origin Telemetry is shipping in Firefox 68 and is currently only enabled for Firefox Nightly and Beta. Content Blocking is targeting Firefox 69 to start using Origin Telemetry to measure tracker blocking and exempting for 0.014% of pageloads of 1% of clients.


Mike HoyeSynchronous Text


Let’s lead with the punchline: the question of what comes after IRC, for Mozilla, is now on my desk.

I wasn’t in the room when Mozilla’s IRC server was stood up, but from what I’ve heard IRC wasn’t “chosen” so much as it was the obvious default, the only tool available in the late ’90s. Suffice to say that as a globally distributed organization, Mozilla has relied on IRC as our main synchronous communications tool since the beginning. For much of that time it’s served us well, if for some less-than-ideal values of “us” and “well”.

Like a lot of the early internet IRC is a quasi-standard protocol built with far more of the optimism of the time than the paranoia the infosec community now refers to as “common sense”, born before we learned how much easier it is to automate bad acts than it is to foster healthy communities. Like all unauthenticated systems on the modern net it’s aging badly and showing no signs of getting better.

While we still use it heavily, IRC is an ongoing source of abuse and harassment for many of our colleagues and getting connected to this now-obscure forum is an unnecessary technical barrier for anyone finding their way to Mozilla via the web. Available interfaces really haven’t kept up with modern expectations, spambots and harassment are endemic to the platform, and in light of that it’s no coincidence that people trying to get in touch with us from inside schools, colleges or corporate networks are finding that often as not IRC traffic isn’t allowed past institutional firewalls at all.

All of that adds up to a set of real hazards and unnecessary barriers to participation in the Mozilla project; we definitely still need a globally-available, synchronous and text-first communication tool; our commitment to working in the open as an organization hasn’t changed. But we’re setting a higher bar for ourselves and our communities now and IRC can’t meet that bar. We’ve come to the conclusion that for all IRC’s utility, it’s irresponsible of us to ask our people – employees, volunteers, partners or anyone else – to work in an environment that we can’t make sure is healthy, safe and productive.

In short, it’s no longer practical or responsible for us to keep that forum alive.

In the next small number of months, Mozilla intends to deprecate IRC as our primary synchronous-text communications platform, stand up a replacement, and decommission the old server soon afterwards. I’m charged with leading that process on behalf of the organization.

Very soon, I’ll be setting up the evaluation process for a couple of candidate replacement stacks. We’re lucky; we’re spoiled for good options these days. I’ll talk a bit more about them in a future post, but the broad strokes of our requirements are pretty straightforward:

  • We are not rolling our own. Whether we host it ourselves or pay for a service, we’re getting something off the shelf that best meets our needs.
  • It needs to be accessible to the greater Mozilla community.
  • We are evaluating products, not protocols.
  • We aren’t picking an outlier; whatever stack we choose needs to be a modern, proven service that seems to have a solid provenance and a good life ahead of it. We’re not moving from one idiosyncratic outlier stack to another idiosyncratic outlier stack.
  • While we’re investigating options for semi-anonymous or pseudonymous connections, we will require authentication, because:
  • The Mozilla Community Participation Guidelines will apply, and they’ll be enforced.

I found this at the top of a draft FAQ I’d started putting together a while back. It might not be what you’d call “complete”, but maybe it is:

Q: Why are we moving away from IRC? IRC is fine!
A: IRC is not fine.

Q: Seriously? You’re kidding, right?
A: I’m dead serious.

I don’t do blog comments anymore – unfortunately, for a lot of the same reasons I’m dealing with this – but if you’ve got questions, you can email me.

Or, if you like, you can find me on IRC.

Christopher ArnoldAn Author-Optimized Social Network Approach

In this month’s edition of Scientific American magazine, Wade Roush comments on social networks' potential deleterious impact on emotional well-being. (Scientific American May 2019: Turning Off the Emotion Pump)  He prompts, "Are there better social technologies than Facebook?" and cites previous attempts such as the now-defunct Path and the still-struggling Diaspora as potentially promising developments. I don’t wish to detract from the contemporary concerns about notification overload and privacy leaks. But I’d like to highlight the positive side of social platforms for spurring creative collaboration, and suggest an approach that could expand the positive impacts they facilitate in the future. I think the answer to his question is: we need more diversity of platforms and better utilities.

In our current era, everyone is a participant, in some way, in the authorship of the web. That's a profound and positive thing. We are all enfranchised in a way that previously most were not.  As an advocate for the power of the internet for advancing creative expression, I believe the benefits we've gained by this online enfranchisement should not be overshadowed by aforementioned bumps along the road.  We need more advancement, perhaps in a different way than has been achieved in most mainstream social platforms to date.  Perhaps it is just the utilization that needs to shift, more than the tools themselves. But as a product-focused person, I think some design factors could shape this change we'd need to see to have social networks be a positive force in everybody's lives. 

When Facebook turned away from "the Facebook Wall", its earliest iteration, I was fascinated by this innovation.  It was no longer a bunch of different profile destinations interlinked by notifications of what people said about each other. It became an atomized webpage that looked different to every visitor, depending on the quality of contributions of the linked users.  The outcome was a mixed bag because the range of experiences of each visitor was so different. Some people saw amazing things, from active creators/contributors they'd linked to.  Some people saw the boredom of a stagnant or overly-narrow pool of peer contributors reflected back at them. Whatever your opinion of the content of Facebook, Twitter and Reddit, as subscription services they provide tremendous utility in today's web.  They are far superior to the web-rings and Open Directory Project of the 1990s, as they are reader-driven rather than author/editor-driven.

The experimental approach I'm going to suggest for the advancement of next-generation social networks should probably happen outside the established platforms, because experimentation done within these services can jeopardize the perceived user control and trust that attracted their users in the first place.

In a brainstorm with an entrepreneur, named Lisa, she pointed out that the most engaging and involved collaborative discussions she'd seen had taken place in Ravelry and Second Life.  Knitting and creating 3D art takes an amazing amount of time investment.  She posited that it may be this invested time that leads to the quality of the personal interactions that happen on such platforms.  It may actually be the casualness of engagement on conventional public forums that makes those interactions more haphazard, impersonal and less constructive or considerate. Our brainstorm spread to how might more such platforms emerge to spur ever greater realization of new authorship, artistry and collaboration. We focused not on volume of people nor velocity of engagement, but rather greatest individual contribution. 

The focus (raison d'être) of a platform tends to skew the nature of the behaviors on it and can hamper or facilitate the individual creation or art represented based on the constraints of the platform interface. (For instance Blogger, Wordpress and Medium are great for long form essays. Twitter, Instagram and Reddit excel as forums for sharing observations about other works or references.) If one were to frame a platform objective on the maximum volume of individual contribution or artistry and less on the interactions, you'd get a different nature of network. And across a network of networks, it would be possible to observe what components of a platform contribute best to the unfettered artistry of the individual contributors among them. 

I am going to refer to this platform concept as "Mikoshi", because it reminds me of the Japanese portable shrines of the same name, pictured at right. In festival parades, dozens of people heft a one-ton shrine atop their shoulders.  The bobbing of the shrine is supposed to bring good luck to the participants and onlookers. The time I participated in a mikoshi parade, I found it an exhausting effort, fun as it was.  The thing that stuck out to me was that the whole group was focused toward one end.  There were no detractors.

Metaphorically, I see the mikoshi act of revelry as somewhat similar to the collaborative creative artistry sharing that Lisa was pointing out. In Lisa's example, there was a barrier to entry and a shared intent in the group. You had to be a knitter or a 3D artist to have a seat at the table. Why would hurdles create the improved quality of engagement and discourse? Presumably, if you're at that table you want to see others succeed and create more! There is a certain amount of credibility and respect the community gives contributors based on the table-stakes of participation that got them there.  This is the same with most other effort-intensive sharing platforms, like Mixcloud and Soundcloud, where I contribute. The work of others inspires us to increase our level of commitment and quality as well.  The shared direction, the furtherance of art, propels ever more art by all participants.  It virtuously improves in a cycle.  This drives greater complexity, quality and retention with time.   

To achieve a pure utility of greatest contributor creation would be a different process than creating a tool optimized purely for volume or velocity of engagement. Lisa and I posited an evolving biological style of product "mutation" that might create a proliferating organic process, driven by participant contribution and automated selection of attributes observed across the most healthy offshoot networks. Maximum individual authorship should be the leading selective pressure for Mikoshi to work. This is not to say that essays are better than aphorisms because of their length. But the goal to be incentivized by a creativity-inspiring ecosystem should be one where the individuals participating feel empowered to create to the maximum extent. There are other tools designed for optimizing velocity and visibility, but those elements could be detrimental to individual participation or group dynamics. 

To give over control to contribution-driven optimization as an end, Mikoshi would need to be a modular system akin to Automattic's Wordpress platform. But platform mutation would have to remain agnostic of author self-promotion: the optimizing mutation of Mikoshi would need to sit outside the influence of content creators' drive to promote themselves. This is similar to the way that "PageRank" listened to the interlinking of non-affiliated web publishers to drive its anti-spam filter, rather than to the publishers' own attempts to promote themselves. Visibility and promulgation of new Mikoshi offshoots should be delegated to a separate promotion-agnostic algorithm entirely, one looking at the health of the communities of active authors in preceding Mikoshi groups. Evolutionary adaptation is driven by what ends up dying. But Mikoshi would be driven by what previously thrived.

I don't think Mikoshi should be a single tool, but an approach to building many different web properties. It's centered around planned redundancy and planned end-of-life for non-productive forks of Mikoshi. Any single Mikoshi offshoot could exist indefinitely. But ideally, certain of them would thrive and attract greater engagement and offshoots.

The successive alterations of Mikoshi would be enabled by its capability to fork, like open source projects such as Linux or Gecko do.  As successive deployments are customized and distributed, the most useful elements of the underlying architecture can be notated with telemetry to suggest optimizations to other Mikoshi forks that may not have certain specific tools.  This quasi-organic process, with feedback on the overall contribution "health" of the ecosystem represented by participant contribution, could then suggest attributes for viable offshoot networks to come.  (I'm framing this akin to a browser's extensions, or a Wordpress template's themes and plugins which offer certain optional expansions to pages using past templates of other developers.)  The end products of Mikoshi are multitudinous and not constrained.  Similar to Wordpress, attributes to be included in any future iteration are at the discretion of the communities maintaining them.

Of course Facebook and Reddit could facilitate this.  Yet "roll your own platform" doesn't particularly fit their business models. Mozilla manages several purpose-built social networks for its communities (Bugzilla and Mozillians internally, and the former Webmaker and the new Hubs for web enthusiasts). But Mikoshi doesn't particularly fit Mozilla's mission or business model either. I believe Automattic is better positioned to pursue this opportunity, as it already powers a third of global websites and has competencies in massively-scaled hosting of web pages with social components.

I know from my own personal explorations on dozens of web publishing and media platforms that they have each, in different ways, facilitated and drawn out different aspects of my own creativity.  I've seen many of these platforms die off.  It wasn't that those old platforms didn't have great utility or value to their users.  Most of them were just not designed to evolve.  They were essentially too rigid, or encountered political problems within the organizations that hosted them.  As the old Ani Difranco song "Buildings and Bridges" points out, "What doesn't bend breaks." (Caution that lyrics contain some potentially objectionable language.)  The web of tomorrow may need a new manner of collaborative social network that is able to weather the internal and external pressures that threaten them.  Designing an adaptive platform like Mikoshi may accomplish this.  

Cameron KaiserAnother interesting TenFourFox downstream

Because we're one of the few older forks of Firefox to still backport security updates, TenFourFox code turns up in surprising places sometimes. I've known about roytam's various Pale Moon and Mozilla builds; the patches are used in both the rebuilds of Pale Moon 27 and 28 and his own fork of 45ESR. Arctic Fox, which is a Pale Moon 27 (descended from Firefox 38, with patches) rebuild for Snow Leopard and PowerPC Linux, also uses TenFourFox security patches as well as some of our OS X platform code.

Recently I was also informed of a new place TenFourFox code has turned up: OS/2. There's no Rust for OS/2, so they're in the same boat as PowerPC OS X, and it doesn't look like 52ESR was ever successfully ported to OS/2 either; indeed, the last "official" Firefox I can find from Bitwise is 45.9. Dave Yeo took that version (as well as Thunderbird 45.9 and SeaMonkey 2.42.9) and backported our accumulated security patches along with other fixes to yield updated "SUa1" Firefox, Thunderbird and SeaMonkey builds for OS/2. If you're curious, here are the prerequisites.

Frankly, I'm glad that we can give back to other orphaned platforms, and while I'm definitely not slow to ding Mozilla for eroding cross-platform support, they've still been the friendliest to portability even considering recent lapses. Even though we're not current on Firefox anymore other than the features I rewrite for TenFourFox, we're still part of the family and it's nice to see our work keeping other systems and niche userbases running.

An update for FPR14 final, which is still scheduled for mid-May, is a new localization for Simplified Chinese from a new contributor. Thanks, paizhang! Updated language packs will be made available with FPR14 for all languages except Japanese, which is still maintained separately.

Niko MatsakisAiC: Language-design team meta working group

On internals, I just announced the formation of the language-design team meta working group. The role of the meta working group is to figure out how other language-design team working groups should work. The plan is to begin by enumerating some of our goals – the problems we aim to solve, the good things we aim to keep – and then move on to draw up more detailed plans. I expect this discussion will intersect the RFC process quite heavily (at least when it comes to language design changes). Should be interesting! It’s all happening in the open, and a major goal of mine is for this to be easy to follow along with from the outside – so if talking about talking is your thing, you should check it out.

The Rust Programming Language BlogMozilla IRC Sunset and the Rust Channel

The Rust community has had a presence on Mozilla’s IRC network almost since Rust’s inception. Over time, the single channel grew into a set of pretty active channels where folks would come to ask Rust questions, coordinate work on Rust itself, and just in general chat about Rust.

Mozilla recently announced that it would be shutting down its IRC network, citing a growing maintenance and moderation burden. They are looking into new options for the Mozilla community, but this does leave the question open as to what the Rust project will do.

Last year a lot of the teams started exploring new communication platforms. Almost all the Rust teams no longer use IRC as their official discussion platform, instead using Discord or Zulip (as well as a variety of video chat tools for synchronous meetings). The few teams that do use IRC are working with us to find a new home, likely a channel on Discord or Zulip.

This leaves the #rust and #rust-beginners channels on Mozilla’s IRC network, which are still quite active and will need a new home when Mozilla’s network shuts down. Rust’s official Discord server does have the #users, #help, and #beginners channels that serve this purpose, and we recommend people start using those.

We understand that not everyone wishes to switch to Discord for many reasons. For people who wish to continue using IRC, there is an unofficial freenode channel which you can hang out in, though we’d like to emphasize that this is not associated with the Rust teams and is not moderated by our Moderation team. You’re also free to create new channels on freenode in accordance with the freenode rules.

There are still a couple of months before the network shuts down — we’ll work at making this transition as smooth as possible in this time. Thanks to everyone who made #rust and #rust-beginners on Mozilla IRC a great place to hang out! We are sad to see it go. 😢

The Mozilla BlogFirefox and Emerging Markets Leadership

Building on the success of Firefox Quantum, we have a renewed focus on better enabling people to take control of their internet-connected lives as their trusted personal agent — through continued evolution of the browser and web platform — and with new products and services that provide enhanced security, privacy and user agency across connected life.

To accelerate this work, we’re announcing some changes to our senior leadership team:

Dave Camp has been appointed SVP Firefox. In this new role, Dave will be responsible for overall Firefox product and web platform development.

As a long time Mozillian, Dave joined Mozilla in 2006 to work on Gecko, building networking and security features and was a contributor to the release of Firefox 3. After a short stint at a startup he rejoined Mozilla in 2011 as part of the Firefox Developer Tools team. Dave has since served in a variety of senior leadership roles within the Firefox product organization, most recently leading the Firefox engineering team through the launch of Firefox Quantum.

Under Dave’s leadership the new Firefox organization will pull together all product management, engineering, technology and operations in support of our Firefox products, services and web platform. As part of this change, we are also announcing the promotion of Marissa (Reese) Wood to VP Firefox Product Management, and Joe Hildebrand to VP Firefox Engineering. Both Joe and Reese have been key drivers of the continued development of our core browser across platforms, and the expansion of the Firefox portfolio of products and services globally.

In addition, we are increasing our investment and focus in emerging markets. Building on the early success of products like Firefox Lite, which we launched in India earlier this year, we are also formally establishing an emerging markets team based in Taipei:

Stan Leong has been appointed VP and General Manager, Emerging Markets. In this new role, Stan will be responsible for our product development and go-to-market strategy for the region. Stan joins us from DXC Technology, where he was Global Head of Emerging Product Engineering. He has a great combination of start-up and large-company experience, having spent years at Hewlett Packard, and he has worked extensively in the Asian markets.

As part of this, Mark Mayo, who has served as our Chief Product Officer (CPO), will move into a new role focused on strategic product development initiatives with an initial emphasis on accelerating our emerging markets strategy. We will be conducting an executive search for a CPO to lead the ongoing development and evolution of our global product portfolio.

I’m confident that with these changes, we are well positioned to continue the evolution of the browser and web platform and introduce new products and services that provide enhanced security, privacy and user agency across connected life.

The post Firefox and Emerging Markets Leadership appeared first on The Mozilla Blog.

Nathan Froydan unexpected benefit of standardizing on clang-cl

I wrote several months ago about our impending decision to switch to clang-cl on Windows.  In the intervening months, we did that, and we also dropped MSVC as a supported compiler.  (We still build on Linux with GCC, and will probably continue to do that for some time.)  One (extremely welcome) consequence of the switch to clang-cl has only become clear to me in the past couple of weeks: using assembly language across platforms is no longer painful.

First, a little bit of background: GCC (and Clang) support a feature called inline assembly, which enables you to write little snippets of assembly code directly in your C/C++ program.  The syntax is baroque, it’s incredibly easy to shoot yourself in the foot with it, and it’s incredibly useful for a variety of low-level things.  MSVC supports inline assembly as well, but only on x86, and with a completely different syntax than GCC.

OK, so maybe you want to put your code in a separate assembly file instead.  The complementary assembler for GCC (courtesy of binutils) is called gas, with its own specific syntax for various low-level details.  If you give gcc an assembly file, it knows to pass it directly to gas, and will even run the C preprocessor on the assembly before invoking gas if you request that.  So you only ever need to invoke gcc to compile everything, and the right thing will just happen. MSVC, by contrast, requires you to invoke a separate, differently-named assembler for each architecture, with different assembly language syntaxes (e.g. directives for the x86-64 assembler are quite different from those for the arm64 assembler), and preprocessing files beforehand requires you to jump through hoops.  (To be fair, a number of these details are handled for you if you’re building from inside Visual Studio; the differences are only annoying to handle in cross-platform build systems.)

In short, dealing with assembler in a world where you have to support MSVC is somewhat painful.  You have to copy-and-paste code, or maybe you write Perl scripts to translate from the gas syntax to whatever flavor of syntax the Microsoft assembler you’re using expects.  Your build system needs to handle Windows and non-Windows differently for assembly files, and may even need to handle different architectures for Windows differently.  Things like our ICU data generation have been made somewhat more complex than necessary to support Windows platforms.

Enter clang-cl.  Since clang-cl is just clang under the hood, it handles being passed assembly files on the command line in the same way and will even preprocess them for you.  Additionally, clang-cl contains a gas-compatible assembly syntax parser, so assembly files that you pass on the command line are parsed by clang-cl and therefore you can now write a single assembly syntax that works on Unix-y and Windows platforms.  (You do, of course, have to handle differing platform calling conventions and the like, but that’s simplified by having a preprocessor available all the time.)  Finally, clang-cl supports GCC-style inline assembly, so you don’t even have to drop into separate assembly files if you don’t want to.

In short, clang-cl solves every problem that made assembly usage painful on Windows. Might we have a future world where open source projects that have to deal with any amount of assembly standardize on clang-cl for their Windows support, and declare MSVC unsupported?

Henri SivonenIt’s Time to Stop Adding New Features for Non-Unicode Execution Encodings in C++

Henri Sivonen, 2019-04-24

Disclosure: I work for Mozilla, and my professional activity includes being the Gecko module owner for character encodings.

Disclaimer: Even though this document links to code and documents written as part of my Mozilla activities, this document is written in a personal capacity.


Text processing facilities in the C++ standard library have been mostly agnostic of the actual character encoding of text. The few operations that are sensitive to the actual character encoding are defined to behave according to the implementation-defined “narrow execution encoding” (for buffers of char) and the implementation-defined “wide execution encoding” (for buffers of wchar_t).

Meanwhile, over the last two decades, a different dominant design has arisen for text processing in other programming languages as well as in C and C++ usage despite what the C and C++ standard-library facilities provide: Representing text as Unicode, and only Unicode, internally in the application even if some other representation is required externally for backward compatibility.

I think the C++ standard should adopt the approach of “Unicode-only internally” for new text processing facilities and should not support non-Unicode execution encodings in newly-introduced features. This allows new features to have less abstraction obfuscation for Unicode usage, avoids digging legacy applications deeper into non-Unicode commitment, and avoids the specification and implementation effort of adapting new features to make sense for non-Unicode execution encodings.

Concretely, I suggest:

  • In new features, do not support numbers other than Unicode scalar values as a numbering scheme for abstract characters, and design new APIs to be aware of Unicode scalar values as appropriate instead of allowing other numbering schemes. (I.e. make Unicode the only coded character set supported for new features.)
  • Use char32_t directly as the concrete type for an individual Unicode scalar value without allowing for parametrization of the type that conceptually represents a Unicode scalar value. (For sequences of Unicode scalar values, UTF-8 is preferred.)
  • When introducing new text processing facilities (other than the next item on this list), support only UTF in-memory text representations: UTF-8 and, potentially, depending on feature, also UTF-16 or also UTF-16 and UTF-32. That is, do not seek to make new text processing features applicable to non-UTF execution encodings. (This document should not be taken as a request to add features for UTF-16 or UTF-32 beyond iteration over string views by scalar value. To avoid distraction from the main point, this document should also not be taken as advocating against providing any particular feature for UTF-16 or UTF-32.)
  • Non-UTF character encodings may be supported in a conversion API whose purpose is to convert from a legacy encoding into a UTF-only representation near the IO boundary or at the boundary between a legacy part (that relies on execution encoding) and a new part (that uses Unicode) of an application. Such APIs should be std::span-based instead of iterator-based.
  • When an operation logically requires a valid sequence of Unicode scalar values, the API must either define the operation to fail upon encountering invalid UTF-8/16/32 or must replace each error with a U+FFFD REPLACEMENT CHARACTER as follows: What constitutes a single error in UTF-8 is defined in the WHATWG Encoding Standard (which matches the “best practice” from the Unicode Standard). In UTF-16, each unpaired surrogate is an error. In UTF-32, each code unit whose numeric value isn’t a valid Unicode scalar value is an error.
  • Instead of standardizing Text_view as proposed, standardize a way to obtain a Unicode scalar value iterator from std::u8string_view, std::u16string_view, and std::u32string_view.
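The U+FFFD rule in the list above can be seen in action with Python's codecs, which follow the same "maximal subpart" practice the WHATWG Encoding Standard specifies (using Python here is purely illustrative, not part of the proposal): one replacement character per maximal invalid subsequence, not one per byte.

```python
# A truncated three-byte sequence: 0xE2 0x82 is the start of
# U+20AC "€" (0xE2 0x82 0xAC) with the final byte missing.
assert b"\xe2\x82\xac".decode("utf-8") == "\u20ac"

# The whole truncated subpart collapses into a single U+FFFD...
assert b"\xe2\x82".decode("utf-8", errors="replace") == "\ufffd"

# ...even when followed by a valid byte: one replacement, then "A".
assert b"\xe2\x82A".decode("utf-8", errors="replace") == "\ufffdA"

# In UTF-16, each unpaired surrogate is one error: the lone high
# surrogate 0xD834 becomes a single U+FFFD.
assert b"\x34\xd8".decode("utf-16-le", errors="replace") == "\ufffd"
```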

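The last suggestion — a scalar value iterator over a string view — can be sketched in Python for the UTF-16 case (a C++ std::u16string_view holds exactly such 16-bit code units; the function name and shape here are illustrative, not from any proposal):

```python
def scalar_values(code_units):
    """Yield Unicode scalar values from a sequence of UTF-16 code units,
    pairing surrogates; each unpaired surrogate is one error and is
    replaced with U+FFFD, per the policy described above."""
    i, n = 0, len(code_units)
    while i < n:
        u = code_units[i]
        if 0xD800 <= u <= 0xDBFF and i + 1 < n and 0xDC00 <= code_units[i + 1] <= 0xDFFF:
            # High surrogate followed by low surrogate: combine into one scalar value.
            yield 0x10000 + ((u - 0xD800) << 10) + (code_units[i + 1] - 0xDC00)
            i += 2
        elif 0xD800 <= u <= 0xDFFF:
            yield 0xFFFD  # unpaired surrogate
            i += 1
        else:
            yield u  # BMP scalar value
            i += 1

# U+1D11E MUSICAL SYMBOL G CLEF is the surrogate pair D834 DD1E.
assert list(scalar_values([0x41, 0xD834, 0xDD1E])) == [0x41, 0x1D11E]
assert list(scalar_values([0xD834, 0x41])) == [0xFFFD, 0x41]
```

The UTF-8 and UTF-32 variants would differ only in the decoding step; the iterator's element type is char32_t (a scalar value) in every case, which is the point of the suggestion.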

This write-up is in response to (and in disagreement with) the “Character Types” section in the P0244R2 Text_view paper:

This library defines a character class template parameterized by character set type used to represent character values. The purpose of this class template is to make explicit the association of a code point value and a character set.

It has been suggested that char32_t be supported as a character type that is implicitly associated with the Unicode character set and that values of this type always be interpreted as Unicode code point values. This suggestion is intended to enable UTF-32 string literals to be directly usable as sequences of character values (in addition to being sequences of code unit and code point values). This has a cost in that it prohibits use of the char32_t type as a code unit or code point type for other encodings. Non-Unicode encodings, including the encodings used for ordinary and wide string literals, would still require a distinct character type (such as a specialization of the character class template) so that the correct character set can be inferred from objects of the character type.

This suggestion raises concerns for the author. To a certain degree, it can be accommodated by removing the current members of the character class template in favor of free functions and type trait templates. However, it results in ambiguities when enumerating the elements of a UTF-32 string literal; are the elements code point or character values? Well, the answer would be both (and code unit values as well). This raises the potential for inadvertently writing (generic) code that confuses code points and characters, runs as expected for UTF-32 encodings, but fails to compile for other encodings. The author would prefer to enforce correct code via the type system and is unaware of any particular benefits that the ability to treat UTF-32 string literals as sequences of character type would bring.

It has also been suggested that char32_t might suffice as the only character type; that decoding of any encoded string include implicit transcoding to Unicode code points. The author believes that this suggestion is not feasible for several reasons:

  1. Some encodings use character sets that define characters such that round trip transcoding to Unicode and back fails to preserve the original code point value. For example, Shift-JIS (Microsoft code page 932) defines duplicate code points for the same character for compatibility with IBM and NEC character set extensions. [sic; dead link]
  2. Transcoding to Unicode for all non-Unicode encodings would carry non-negligible performance costs and would pessimize platforms such as IBM’s z/OS that use EBCDIC by default for the non-Unicode execution character sets.

To summarize, it raises three concerns:

  1. Ambiguity between code units and scalar values (the paper says “code points”, but I say “scalar values” to emphasize the exclusion of surrogates) in the UTF-32 case.
  2. Some encodings, particularly Microsoft code page 932, can represent one Unicode scalar value in more than one way, so the distinction of which way does not round-trip.
  3. Transcoding non-Unicode execution encodings has a performance cost that pessimizes particularly IBM z/OS.
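Concern 2 can be made concrete with Python's cp932 codec (assuming, as seems to hold, that it matches Microsoft code page 932): ROMAN NUMERAL ONE is encoded both in the NEC extension row and in the IBM extension rows, so one of the two byte sequences cannot round-trip.

```python
nec = b"\x87\x54"   # NEC extension (row 13 of the JIS X 0208 grid)
ibm = b"\xfa\x4a"   # IBM extension (last rows of the Shift_JIS grid)

# Both byte sequences decode to the same scalar value, U+2160.
assert nec.decode("cp932") == ibm.decode("cp932") == "\u2160"

# Re-encoding has to pick one of the two, so the IBM bytes don't
# survive a decode/encode round trip.
assert "\u2160".encode("cp932") == nec
```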

Terminology and Background

(This section and the next section should not be taken as ’splaining to SG16 what they already know. The over-explaining is meant to make this document more coherent for a broader audience of readers who might be interested in C++ standardization without full familiarity with text processing terminology or background, or the details of Microsoft code page 932.)

An abstract character is an atomic unit of text. Depending on writing system, the analysis of what constitutes an atomic unit may differ, but a given implementation on a computer has to identify some things as atomic units. Unicode’s opinion of what is an abstract character is the most widely applied opinion. In fact, Unicode itself has multiple opinions on this, and Unicode Normalization Forms bridge these multiple opinions.

A character set is a set of abstract characters. In principle, a set of characters can be defined without assigning numbers to them.

A coded character set assigns numbers, called code points, to each abstract character in the character set.

When the Unicode code space was extended beyond the Basic Multilingual Plane, some code points were set aside for the UTF-16 surrogate mechanism and, therefore, do not represent abstract characters. A Unicode scalar value is a Unicode code point that is not a surrogate code point. For consistency with Unicode, I use the term scalar value below when referring to non-Unicode coded character sets, too.
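The distinction can be stated as a one-line predicate (a sketch; the name is illustrative). Python's str models code points, so it also demonstrates why UTF encoders reject surrogates: UTF encodes scalar values only.

```python
def is_scalar_value(cp):
    # Scalar values are the code points 0..0x10FFFF minus the
    # 2048 surrogate code points 0xD800..0xDFFF.
    return 0 <= cp <= 0x10FFFF and not (0xD800 <= cp <= 0xDFFF)

assert is_scalar_value(0x41)        # 'A'
assert is_scalar_value(0x10FFFF)    # the last code point
assert not is_scalar_value(0xD800)  # a code point, but not a scalar value

# A lone surrogate cannot be encoded as UTF-8:
try:
    "\ud800".encode("utf-8")
    raised = False
except UnicodeEncodeError:
    raised = True
assert raised
```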

A character encoding is a way to represent a conceptual sequence of scalar values from one or more coded character sets as a concrete sequence of bytes. The bytes are called code units. Unicode defines in-memory Unicode encoding forms whose code unit is not a byte: UTF-16 and UTF-32. (For these Unicode encoding forms, there are corresponding Unicode encoding schemes that use byte code units and represent a non-byte code unit from a corresponding encoding form as multiple bytes and, therefore, could be used in byte-oriented IO even though UTF-8 is preferred for interchange. UTF-8, of course, uses byte code units as both a Unicode encoding form and as a Unicode encoding scheme.)
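The form/scheme distinction in concrete terms (Python as the illustration): the UTF-16 encoding form is a sequence of 16-bit code units, while the UTF-16LE and UTF-16BE encoding schemes serialize each code unit as two bytes in a chosen order.

```python
s = "\u20ac"  # U+20AC EURO SIGN: a single UTF-16 code unit, 0x20AC

assert s.encode("utf-16-le") == b"\xac\x20"  # little-endian scheme
assert s.encode("utf-16-be") == b"\x20\xac"  # big-endian scheme

# UTF-8's code unit is already a byte, so its encoding form and
# encoding scheme coincide.
assert s.encode("utf-8") == b"\xe2\x82\xac"
```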

Coded character sets that assign scalar values in the range 0...255 (decimal) can be considered to trivially imply a character encoding for themselves: You just store the scalar value as an unsigned byte value. (Often such coded character sets import US-ASCII as the lower half.)

However, it is possible to define less obvious encodings even for character sets that only have up to 256 characters. IBM has several EBCDIC character encodings for the set of characters defined in ISO-8859-1. That is, compared to the trivial ISO-8859-1 encoding (the original, not the Web alias for windows-1252), these EBCDIC encodings permute the byte value assignments.
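The permutation is easy to observe with Python's "cp500" codec, one of IBM's EBCDIC encodings covering the ISO-8859-1 character set (the codec choice is illustrative): the same characters get different byte values.

```python
assert "A".encode("latin-1") == b"\x41"  # the trivial ISO-8859-1 encoding
assert "A".encode("cp500") == b"\xc1"    # EBCDIC byte for the same character
assert "0".encode("latin-1") == b"\x30"
assert "0".encode("cp500") == b"\xf0"

# Round-tripping through either encoding preserves the text;
# only the byte assignments are permuted.
assert "Grüße".encode("cp500").decode("cp500") == "Grüße"
```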

Unicode is the universal coded character set that by design includes abstract characters from all notable legacy coded character sets such that character encodings for legacy coded character sets can be redefined to represent Unicode scalar values. Consider representing ż in the ISO-8859-2 encoding. When we treat the ISO-8859-2 encoding as an encoding for the Unicode coded character set (as opposed to treating it as an encoding for the ISO-8859-2 coded character set), byte 0xBF decodes to Unicode scalar value U+017C (and not as scalar value 0xBF).
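The ż example, checked against Python's codec (again purely as an illustration): the byte decodes to U+017C, not to scalar value 0xBF, and the same byte means something else entirely under a different legacy encoding.

```python
assert bytes([0xBF]).decode("iso-8859-2") == "\u017c"  # ż
assert ord("\u017c") == 0x017C  # not 0xBF

# Under ISO-8859-1 the same byte decodes to a different character,
# U+00BF INVERTED QUESTION MARK.
assert bytes([0xBF]).decode("iso-8859-1") == "\u00bf"
```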

A compatibility character is a character that according to Unicode principles should not be a distinct abstract character but that Unicode nonetheless codes as a distinct abstract character because some legacy coded character set treated it as distinct.

The Microsoft Code Page 932 Issue

Usually in C++ a “character type” refers to a code unit type, but the Text_view paper uses the term “character type” to refer to a Unicode scalar value when the encoding is a Unicode encoding form. The paper implies that an analogous non-Unicode type exists for Microsoft code page 932 (Microsoft’s version of Shift_JIS), but does one really exist?

Microsoft code page 932 takes the 8-bit encoding of the JIS X 0201 coded character set, whose upper half is half-width katakana and whose lower half is ASCII-based, and replaces the lower half with actual US-ASCII (moving the difference between US-ASCII and the lower half of 8-bit-encoded JIS X 0201 into a font problem!). It then takes the JIS X 0208 coded character set and represents it with two-byte sequences (for the lead byte making use of the unassigned range of JIS X 0201). JIS X 0208 code points aren’t really one-dimensional scalars, but instead two-dimensional row and column numbers in a 94 by 94 grid. (See the first 94 rows of the visualization supplied with the Encoding Standard; avoid opening the link on a RAM-limited device!) Shift_JIS / Microsoft code page 932 does not put these two numbers into bytes directly, but conceptually arranges each two rows of 94 columns into one row of 188 columns and then transforms these new row and column numbers into bytes with some offsetting.
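
The offsetting can be sketched in a few lines of Python. The byte ranges below follow the WHATWG Encoding Standard’s Shift_JIS encoder; Python’s cp932 codec serves as a cross-check (this is an illustrative sketch, not Microsoft’s implementation).

```python
def jis_row_col_to_sjis(ku: int, ten: int) -> bytes:
    """Map a one-based JIS X 0208 94x94 cell to Shift_JIS bytes.

    Two 94-column JIS rows are packed into one 188-column Shift_JIS row,
    then the new row/column numbers are offset into the lead/trail byte
    ranges (lead 0x81-0x9F then 0xE0 onward; trail 0x40-0xFC skipping 0x7F).
    """
    row = (ku - 1) // 2                       # 188-column grid row (0-based)
    col = ((ku - 1) % 2) * 94 + (ten - 1)     # 188-column grid column
    lead = row + (0x81 if row < 0x1F else 0xC1)
    trail = col + (0x40 if col < 0x3F else 0x41)   # skip the 0x7F byte
    return bytes([lead, trail])

# 猪 sits at JIS X 0208 row 35, column 86; cp932 agrees with the arithmetic:
assert jis_row_col_to_sjis(35, 86) == "猪".encode("cp932") == b"\x92\x96"
```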

While the JIS X 0208 grid is rearranged into 47 rows of a 188-column grid, the full 188-column grid has 60 rows. The last 13 rows are used for IBM extensions and for private use. The private use area maps to the (start of the) Unicode Private Use Area. (See a visualization of the rearranged grid with the private use part showing up as unassigned; again avoid opening the link on a RAM-limited device.)

The extension part is where the concern that the Text_view paper seeks to address comes in. NEC and IBM came up with some characters that they felt JIS X 0208 needed to be extended with. NEC’s own extensions go onto row 13 (in one-based numbering) of the 94 by 94 JIS X 0208 grid (unallocated in JIS X 0208 proper), so that extension can safely be treated as if it had always been part of JIS X 0208 itself. The IBM extension, however, goes onto the last 3 rows of the 60-row Shift_JIS grid, i.e. outside the space that the JIS X 0208 94 by 94 grid maps to. However, US-ASCII, the half-width katakana part of JIS X 0201, and JIS X 0208 are also encoded, in a different way, by EUC-JP. EUC-JP can only encode the 94 by 94 grid of JIS X 0208. To make the IBM extensions fit into the 94 by 94 grid, NEC relocated the IBM extensions within the 94 by 94 grid in space that the JIS X 0208 standard left unallocated.

When considering IBM Shift_JIS and NEC EUC-JP (without later JIS X 0213 extension), both encode the same set of characters, but in a different way. Furthermore, both can round-trip via Unicode. Unicode principles analyze some of the IBM extension kanji as duplicates of kanji that were already in the original JIS X 0208. However, to enable round-tripping (which was thought worthwhile to achieve at the time), Unicode treats the IBM duplicates as compatibility characters. (Round-tripping is lost, of course, if the text decoded into Unicode is normalized such that compatibility characters are replaced with their canonical equivalents before re-encoding.)
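
The normalization caveat is observable with Python’s unicodedata and cp932 codec (a sketch): U+FA16 canonically decomposes to U+732A, so canonical normalization collapses the compatibility distinction before re-encoding.

```python
import unicodedata

# U+FA16 exists only for round-tripping; its canonical equivalent is U+732A,
# so any canonical normalization erases the distinction:
assert unicodedata.normalize("NFC", "\ufa16") == "\u732a"

# The two scalar values encode to different cp932 byte sequences, so the
# normalized text no longer re-encodes to the original bytes:
assert "\ufa16".encode("cp932") != "\u732a".encode("cp932")
```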

This brings us to the issue that the Text_view paper treats as significant: Since Shift_JIS can represent the whole 94 by 94 JIS X 0208 grid and NEC put the IBM extension there, a naïve conversion from EUC-JP to Shift_JIS can fail to relocate the IBM extension characters to the end of the Shift_JIS code space and can put them in the position where they land if the 94 by 94 grid is simply transformed as the first 47 rows of the 188-column-wide Shift_JIS grid. When decoding to Unicode, Microsoft code page 932 supports both locations for the IBM extensions, but when encoding from Unicode, it has to pick one way of doing things, and it picks the end of the Shift_JIS code space.

That is, Unicode does not assign another set of compatibility characters to Microsoft code page 932’s duplication of the IBM extensions, so despite NEC EUC-JP and IBM Shift_JIS being round-trippable via Unicode, Microsoft code page 932, i.e. Microsoft Shift_JIS, is not. This makes sense considering that there is no analysis that treats the IBM and NEC instances of the IBM extensions as semantically different: their provenance clearly indicates that the duplication isn’t an attempt to make a distinction in meaning. The Text_view paper takes the position that C++ should round-trip the NEC instance of the IBM extensions in Microsoft code page 932 as distinct from the IBM instance of the IBM extensions even though Microsoft’s own implementation does not. In fact, the whole point of the Text_view paper mentioning Microsoft code page 932 is to give an example of a legacy encoding that doesn’t round-trip via Unicode, despite Unicode generally having been designed to round-trip legacy encodings, and to opine that it ought to round-trip in C++.
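
Python’s cp932 codec (based on Microsoft’s mapping) demonstrates the lossiness (a sketch; the exact byte values are the ones given for 猪 later in this article):

```python
# Both the IBM location (0xFB5E) and the NEC relocation (0xEE42) of this
# IBM-extension kanji decode to the same compatibility character, U+FA16:
ibm_instance = b"\xfb\x5e".decode("cp932")
nec_instance = b"\xee\x42".decode("cp932")
assert ibm_instance == nec_instance == "\ufa16"

# Encoding back can therefore pick only one of the two byte sequences, so
# one of the two locations cannot round-trip via Unicode:
assert "\ufa16".encode("cp932") in (b"\xfb\x5e", b"\xee\x42")
```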


  • The Text_view paper wants there to exist a non-transcoding-based, non-Unicode analog, for Microsoft code page 932, of what a Unicode scalar value is for UTF-8.
  • The standards that Microsoft code page 932 has been built on do not give us such a scalar.
    • Even if the private use space and the extensions are considered to occupy a consistent grid with the JIS X 0208 characters, the US-ASCII plus JIS X 0201 part is not placed on the same grid.
    • The canonical way of referring to JIS X 0208 independently of bytes isn’t a reference by one-dimensional scalar but a reference by two (one-based) numbers identifying a cell on the 94 by 94 grid.
  • The Text_view paper wants the scalar to be defined such that a distinction between the IBM instance of the IBM extensions and the NEC instance of the IBM extensions is maintained even though Microsoft, the originator of the code page, does not treat these two instances as meaningfully distinct.

Inferring a Coded Character Set from an Encoding

(This section is based on the constraints imposed by Text_view paper instead of being based on what the reference implementation does for Microsoft code page 932. From code inspection, it appears that support for multi-byte narrow execution encodings is unimplemented, and when trying to verify this experimentally, I timed out trying to get it running due to an internal compiler error when trying to build with a newer GCC and a GCC compilation error when trying to build the known-good GCC revision.)

While the standards don’t provide a scalar value definition for Microsoft code page 932, it’s easy to make one up based on tradition: Traditionally, the two-byte characters in CJK legacy encodings have been referred to by interpreting the two bytes as a 16-bit big-endian unsigned number presented as hexadecimal (and single-byte characters as an 8-bit unsigned number).

As an example, let’s consider 猪 (which Wiktionary translates as wild boar). Its canonical Unicode scalar value is U+732A. That’s what the JIS X 0208 instance decodes to when decoding Microsoft code page 932 into Unicode. The compatibility character for the IBM kanji purpose is U+FA16. That’s what both the IBM instance of the IBM extension and the NEC instance of the IBM extension decode to when decoding Microsoft code page 932 into Unicode. (For reasons unknown to me, Unicode couples U+FA16 with the IBM kanji compatibility purpose and assigns another compatibility character, U+FAA0, for compatibility with the North Korean KPS 10721-2000 standard, which is irrelevant to Microsoft code page 932. Note that not all IBM kanji have corresponding DPRK compatibility characters, so we couldn’t repurpose the DPRK compatibility characters for distinguishing the IBM and NEC instances of the IBM extensions even if we wanted to.)

When interpreting the Microsoft code page 932 bytes as a big-endian integer, the JIS X 0208 instance of 猪 would be 0x9296, the IBM instance would be 0xFB5E, and the NEC instance would be 0xEE42. To highlight how these “scalars” are coupled with the encoding instead of the standard character sets that the encodings originally encode, in EUC-JP the JIS X 0208 instance would be 0xC3F6 and the NEC instance would be 0xFBA3. Also, for illustration, if the same rule was applied to UTF-8, the scalar would be 0xE78CAA instead of U+732A. Clearly, we don’t want the scalars to be different between UTF-8, UTF-16, and UTF-32, so it is at least theoretically unsatisfactory for Microsoft code page 932 and EUC-JP to get different scalars for what are clearly the same characters in the underlying character sets.
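
The per-encoding “scalars” above can be computed directly (a sketch using Python codec names; the EUC-JP NEC instance is omitted because it isn’t in Python’s euc_jp mapping):

```python
def be_scalar(b: bytes) -> int:
    """The 'traditional' scalar: the encoded bytes read as a big-endian int."""
    return int.from_bytes(b, "big")

# The same character gets a different "scalar" in every byte encoding:
assert be_scalar("猪".encode("cp932")) == 0x9296    # Shift_JIS JIS X 0208 instance
assert be_scalar("猪".encode("euc_jp")) == 0xC3F6   # EUC-JP JIS X 0208 instance
assert be_scalar("猪".encode("utf-8")) == 0xE78CAA  # not U+732A!
```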

It would be possible to do something else that’d give the same scalar values for Shift_JIS and EUC-JP without a lookup table. We could number the characters on the two-dimensional grid starting with 256 for the top left cell to reserve the scalars 0…255 for the JIS X 0201 part. It’s worth noting, though, that this approach wouldn’t work well for Korean and Simplified Chinese encodings that take inspiration from the 94 by 94 structure of JIS X 0208. KS X 1001 and GB2312 also define a 94 by 94 grid like JIS X 0208. However, while Microsoft code page 932 extends the grid down, so a consecutive numbering would just add greater numbers to the end, Microsoft code pages 949 and 936 extend the KS X 1001 and GB2312 grids above and to the left, which means that a consecutive numbering of the extended grid would be totally different from the consecutive numbering of the unextended grid. On the other hand, interpreting each byte pair as a big-endian 16-bit integer would yield the same values in the extended and unextended Korean and Simplified Chinese cases. (See visualizations for 949 and 936; again avoid opening on a RAM-limited device. Search for “U+3000” to locate the top left corner of the original 94 by 94 grid.)
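
The consecutive-numbering idea can be sketched for the Japanese case (a hypothetical scheme, not any standard; byte-range arithmetic per the WHATWG Encoding Standard):

```python
def consecutive_scalar_sjis(b: bytes) -> int:
    """Hypothetical consecutive numbering of the 188-column Shift_JIS grid,
    reserving 0-255 for the JIS X 0201 part."""
    lead, trail = b
    row = lead - (0x81 if lead < 0xE0 else 0xC1)
    col = trail - (0x40 if trail < 0x7F else 0x41)   # trail skips 0x7F
    return 256 + row * 188 + col

def consecutive_scalar_eucjp(b: bytes) -> int:
    """The same numbering computed from EUC-JP bytes (94x94 -> 188-column grid)."""
    ku, ten = b[0] - 0xA1, b[1] - 0xA1               # 0-based JIS row/column
    row, col = ku // 2, (ku % 2) * 94 + ten
    return 256 + row * 188 + col

# Unlike the big-endian-integer scheme, this numbering agrees across the two
# encodings of the same JIS X 0208 character:
assert consecutive_scalar_sjis("猪".encode("cp932")) == \
       consecutive_scalar_eucjp("猪".encode("euc_jp")) == 256 + 17 * 188 + 85
```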

What About EBCDIC?

Text_view wants to avoid transcoding overhead on z/OS, but z/OS has multiple character encodings for the ISO-8859-1 character set. It seems conceptually bogus for all these to have different scalar values for the same character set. However, for all of them to have the same scalar values, a lookup table-based permutation would be needed. If that table permuted to the ISO-8859-1 order, it would be the same as the Unicode order, at which point the scalar values might as well be Unicode scalar values, which Text_view wanted to avoid on z/OS citing performance concerns. (Of course, z/OS also has EBCDIC encodings whose character set is not ISO-8859-1.)

What About GB18030?

The whole point of GB18030 is that it encodes Unicode scalar values in a way that makes the encoding byte-compatible with GBK (Microsoft code page 936) and GB2312. This operation is inherently lookup table-dependent. Inventing a scalar definition for GB18030 that achieved the Text_view goal of avoiding lookup tables would break the design goal of GB18030 that it encodes all Unicode scalar values. (In the Web Platform, due to legacy reasons, gb18030 encodes all but one scalar value and represents one scalar value twice.)
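
Both design goals are observable with Python’s codecs (a sketch; Python’s gb18030 follows the standard mapping rather than the Web Platform variant):

```python
# GB18030 stays byte-compatible with GBK/GB2312 for the characters they cover:
assert "猪".encode("gb18030") == "猪".encode("gbk")

# ...while every Unicode scalar value, BMP or supplementary, round-trips:
for cp in (0x017C, 0x732A, 0x1F600):
    ch = chr(cp)
    assert ch.encode("gb18030").decode("gb18030") == ch

# The mapping between four-byte GB18030 forms and scalar values is defined by
# ranges plus a lookup table; it is not a simple arithmetic rule.
```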

What’s Wrong with This?

Let’s evaluate the above in the light of P1238R0, the SG16: Unicode Direction paper.

The reason why Text_view tries to fit Unicode-motivated operations onto legacy encodings is that, as noted by “1.1 Constraint: The ordinary and wide execution encodings are implementation defined”, non-UTF execution encodings exist. This is, obviously, true. However, I disagree with the conclusion of making new features apply to these pre-existing execution encodings. I think there is no obligation to adapt new features to make sense for non-UTF execution encodings. It should be sufficient to keep existing legacy code running, i.e. not removing existing features should be sufficient. On the topic of wchar_t, the Unicode Direction paper says “1.4. Constraint: wchar_t is a portability deadend”. I think char with a non-UTF-8 execution encoding should also be declared a deadend, whereas the Unicode Direction paper merely notes “1.3. Constraint: There is no portable primary execution encoding”. Making new features work with a deadend foundation lures applications deeper into deadends, which is bad.

While inferring scalar values for an encoding by interpreting the encoded bytes for each character as a big-endian integer (thereby effectively inferring a, potentially non-standard, coded character set from an encoding) might be argued to be traditional enough to fit “2.1. Guideline: Avoid excessive inventiveness; look for existing practice”, it is a bad fit for “1.6. Constraint: Implementors cannot afford to rewrite ICU”. If implementors don’t have the bandwidth to implement text processing features from scratch and, therefore, should be prepared to delegate to ICU, it makes no sense to make implementations or the C++ standard come up with non-Unicode numberings for abstract characters, since such numberings aren’t supported by ICU and would necessarily require writing new code for anachronistic non-Unicode schemes.

Aside: Maybe analyzing the approach of using byte sequences interpreted as big-endian numbers looks like attacking a straw man and there could be some other non-Unicode numbering instead, such as the consecutive numbering outlined above. Any alternative non-Unicode numbering would still fail “1.6. Constraint: Implementors cannot afford to rewrite ICU” and would also fail “2.1. Guideline: Avoid excessive inventiveness; look for existing practice”.

Furthermore, I think the Text_view paper’s aspiration of distinguishing between the IBM and NEC instances of the IBM extensions in Microsoft code page 932 fails “2.1. Guideline: Avoid excessive inventiveness; look for existing practice”, because it effectively amounts to inventing additional compatibility characters that aren’t recognized as distinct by Unicode or the originator of the code page (Microsoft).

Moreover, iterating over a buffer of text by scalar value is a relatively simple operation when considering the range of operations that make sense to offer for Unicode text but that may not obviously fit non-UTF execution encodings. For example, in the light of “4.2. Directive: Standardize generic interfaces for Unicode algorithms” it would be reasonable and expected to provide operations for performing Unicode Normalization on strings. What does it mean to normalize a string to Unicode Normalization Form D under the ISO-8859-1 execution encoding? What does it mean to apply any Unicode Normalization Form under the windows-1258 execution encoding, which represents Vietnamese in a way that doesn’t match any Unicode Normalization Form? If the answer just is to make these no-ops for non-UTF encodings, would that be the right answer for GB18030? Coming up with answers other than just saying that new text processing operations shouldn’t try to fit non-UTF encodings at all would very quickly violate the guideline to “Avoid excessive inventiveness”.
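
The ISO-8859-1 question is concrete (a sketch with Python’s unicodedata): the NFD form of a precomposed character uses a combining mark that ISO-8859-1 cannot represent at all.

```python
import unicodedata

# é: NFC is one scalar (U+00E9); NFD is 'e' plus combining acute (U+0301).
nfc = "\u00e9"
nfd = unicodedata.normalize("NFD", nfc)
assert nfd == "e\u0301" and len(nfd) == 2

assert nfc.encode("iso-8859-1") == b"\xe9"   # the precomposed form encodes fine
try:
    nfd.encode("iso-8859-1")                 # U+0301 has no ISO-8859-1 byte
    raise AssertionError("unreachable: U+0301 is not in ISO-8859-1")
except UnicodeEncodeError:
    pass
```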

Looking at other programming languages in the light of “2.1. Guideline: Avoid excessive inventiveness; look for existing practice” provides the way forward. Notable other languages have settled on not supporting coded character sets other than Unicode. That is, only the Unicode way of assigning scalar values to abstract characters is supported. Interoperability with legacy character encodings is achieved by decoding into Unicode upon input and, if non-UTF-8 output is truly required for interoperability, by encoding into the legacy encoding upon output. The Unicode Direction paper already acknowledges this dominant design in “4.4. Directive: Improve support for transcoding at program boundaries”. I think C++ should consider the boundary between non-UTF-8 char and non-UTF-16/32 wchar_t on one hand and Unicode (preferably represented as UTF-8) on the other hand as a similar transcoding boundary between legacy code and new code, such that new text processing features (other than the encoding conversion feature itself!) are provided on the char8_t/char16_t/char32_t side but not on the non-UTF execution encoding side. That is, while the Text_view paper says “Transcoding to Unicode for all non-Unicode encodings would carry non-negligible performance costs and would pessimize platforms such as IBM’s z/OS that use EBCIDC [sic] by default for the non-Unicode execution character sets.”, I think it’s more appropriate to impose such a cost at the boundary of the legacy and future parts of z/OS programs than to contaminate all new text processing APIs with the question “What does this operation even mean for non-UTF encodings generally and EBCDIC encodings specifically?”. (In the case of Windows, the system already works in UTF-16 internally, so all narrow execution encodings already involve transcoding at the system interface boundary. In that context, it seems inappropriate to pretend that the legacy narrow execution encodings on Windows were somehow free of transcoding cost to begin with.)

To avoid a distraction from my main point, I’m explicitly not opining in this document on whether new text processing features should be available for sequences of char when the narrow execution encoding is UTF-8, for sequences of wchar_t when sizeof(wchar_t) is 2 and the wide execution encoding is UTF-16, or for sequences of wchar_t when sizeof(wchar_t) is 4 and the wide execution encoding is UTF-32.

The Type for a Unicode Scalar Value Should Be char32_t

The conclusion of the previous section is that new C++ facilities should not support number assignments to abstract characters other than Unicode, i.e. should not support coded character sets (either standardized or inferred from an encoding) other than Unicode. The conclusion makes it unnecessary to abstract type-wise over Unicode scalar values and some other kinds of scalar values. It just leaves the question of what the concrete type for a Unicode scalar value should be.

The Text_view paper says:

“It has been suggested that char32_t be supported as a character type that is implicitly associated with the Unicode character set and that values of this type always be interpreted as Unicode code point values. This suggestion is intended to enable UTF-32 string literals to be directly usable as sequences of character values (in addition to being sequences of code unit and code point values). This has a cost in that it prohibits use of the char32_t type as a code unit or code point type for other encodings.”

I disagree with this and am firmly in the camp that char32_t should be the type for a Unicode scalar value.

The sentence “This has a cost in that it prohibits use of the char32_t type as a code unit or code point type for other encodings.” is particularly alarming. Seeking to use char32_t as a code unit type for encodings other than UTF-32 would dilute the meaning of char32_t into another wchar_t mess. (I’m happy to see that P1041R4 “Make char16_t/char32_t string literals be UTF-16/32” was voted into C++20.)

As for the appropriateness of using the same type both for a UTF-32 code unit and a Unicode scalar value, the whole point of UTF-32 is that its code unit value is directly the Unicode scalar value. That is what UTF-32 is all about, and UTF-32 has nothing else to offer: The value space that UTF-32 can represent is more compactly represented by UTF-8 and UTF-16 both of which are more commonly needed for interoperation with existing interfaces. When having the code units be directly the scalar values is UTF-32’s whole point, it would be unhelpful to distinguish type-wise between UTF-32 code units and Unicode scalar values. (Also, considering that buffers of UTF-32 are rarely useful but iterators yielding Unicode scalar values make sense, it would be sad to make the iterators have a complicated type.)

To provide interfaces that are generic across std::u8string_view, std::u16string_view, and std::u32string_view (and, thereby, strings for which these views can be taken), all of these should have a way to obtain a scalar value iterator that yields char32_t values. To make sure such iterators really yield only Unicode scalar values in an interoperable way, the iterator should yield U+FFFD upon error. What constitutes a single error in UTF-8 is defined in the WHATWG Encoding Standard (matches the “best practice” from the Unicode Standard). In UTF-16, each unpaired surrogate is an error. In UTF-32, each code unit whose numeric value isn’t a valid Unicode scalar value is an error. (The last sentence might be taken as admission that UTF-32 code units and scalar values are not the same after all. It is not. It is merely an acknowledgement that C++ does not statically prevent programs that could erroneously put an invalid value into a buffer that is supposed to be UTF-32.)
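
Python’s decoders follow the same per-error replacement conventions, which makes the definitions easy to check (a sketch; codec names are Python’s):

```python
# One U+FFFD per error, with "one error" per the maximal-subpart rule
# (WHATWG Encoding Standard / Unicode best practice):
assert b"\xe2\x82\xac".decode("utf-8", "replace") == "\u20ac"    # valid: €
assert b"\xe2\x82".decode("utf-8", "replace") == "\ufffd"        # one truncated sequence
assert b"\xff\xff".decode("utf-8", "replace") == "\ufffd\ufffd"  # two separate errors

# In UTF-16, each unpaired surrogate is one error:
assert b"\x00\xd8".decode("utf-16-le", "replace") == "\ufffd"
```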

In general, new APIs should be defined to handle invalid UTF-8/16/32 either according to the replacement behavior described in the previous paragraph or by stopping and signaling an error at the first error. In particular, the replacement behavior should not be left implementation-defined, considering that differences in the replacement behavior between V8 and Blink led to a bug. (See another write-up on this topic.)

Transcoding Should Be std::span-Based Instead of Iterator-Based

Since the above contemplates a conversion facility between legacy encodings and Unicode encoding forms, it seems on-topic to briefly opine on what such an API should look like. The Text_view paper says:

Transcoding between encodings that use the same character set is currently possible. The following example transcodes a UTF-8 string to UTF-16.

std::string in = get_a_utf8_string();
std::u16string out;
std::back_insert_iterator<std::u16string> out_it{out};
auto tv_in = make_text_view<utf8_encoding>(in);
auto tv_out = make_otext_iterator<utf16_encoding>(out_it);
std::copy(tv_in.begin(), tv_in.end(), tv_out);

Transcoding between encodings that use different character sets is not currently supported due to lack of interfaces to transcode a code point from one character set to the code point of a different one.

Additionally, naively transcoding between encodings using std::copy() works, but is not optimal; techniques are known to accelerate transcoding between some sets of encoding. For example, SIMD instructions can be utilized in some cases to transcode multiple code points in parallel.

Future work is intended to enable optimized transcoding and transcoding between distinct character sets.

I agree with the assessment that iterator and std::copy()-based transcoding is not optimal due to SIMD considerations. To enable the use of SIMD, the input and output should be std::spans, which, unlike iterators, allow the converter to look at more than one element of the std::span at a time. I have designed and implemented such an API for C++, and I invite SG16 to adopt its general API design. I have written a document that covers the API design problems that I sought to address and the design of the API (in Rust but directly applicable to C++). (Please don’t be distracted by the implementation internals being Rust instead of C++. The API design is still valid for C++ even if the design constraint of the implementation internals being behind C linkage is removed. Also, please don’t be distracted by the API predating char8_t.)
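
The shape of a buffer-based (rather than element-iterator-based) converter can be illustrated with Python’s incremental decoders (a sketch of the idea, not the proposed C++ API): the converter receives a whole chunk at a time, carries an incomplete sequence across chunk boundaries, and is free to use SIMD internally.

```python
import codecs

# Chunk-at-a-time conversion: '€ abc' arrives split mid-character.
dec = codecs.getincrementaldecoder("utf-8")("strict")
chunks = [b"\xe2\x82", b"\xac abc"]
out = "".join(dec.decode(chunk) for chunk in chunks)
out += dec.decode(b"", final=True)   # flush: would raise on a dangling prefix
assert out == "\u20ac abc"
```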

Implications for Text_view

Above I’ve opined that only UTF-8, UTF-16, and UTF-32 (as Unicode encoding forms—not as Unicode encoding schemes!) should be supported for iteration by scalar value and that legacy encodings should be addressed by a conversion facility. Therefore, I think that Text_view should not be standardized as proposed. Instead, I think std::u8string_view, std::u16string_view, and std::u32string_view should gain a way to obtain a Unicode scalar value iterator (that yields values of type char32_t), and a std::span-based encoding conversion API should be provided as a distinct feature (as opposed to trying to connect Unicode scalar value iterators with std::copy()).

The Rust Programming Language BlogAnnouncing Rust 1.34.1

The Rust team is happy to announce a new version of Rust, 1.34.1, and a new version of rustup, 1.18.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.34.1 and rustup 1.18.1 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.34.1 stable

This patch release fixes two false positives and a panic when checking macros in Clippy. Clippy is a tool which provides a collection of lints to catch common mistakes and improve your Rust code.

False positive in clippy::redundant_closure

A false positive in the redundant_closure lint was fixed. The lint did not take into account differences in the number of borrows.

In the following snippet, the method required expects dep: &D but the actual type of dep is &&D:

dependencies.iter().filter(|dep| dep.required());

Clippy erroneously suggested .filter(Dependency::required), which is rejected by the compiler due to the difference in borrows.

False positive in clippy::missing_const_for_fn

Another false positive in the missing_const_for_fn lint was fixed. This lint did not take into account that functions inside trait implementations cannot be const fns. For example, when given the following snippet, the lint would trigger:

#[derive(PartialEq, Eq)] // warning: this could be a const_fn
struct Point(isize, isize);

impl std::ops::Add for Point {
    type Output = Self;

    fn add(self, other: Self) -> Self { // warning: this could be a const_fn
        Point(self.0 + other.0, self.1 + other.1)
    }
}

What's new in rustup 1.18.1

A recent rustup release, 1.18.0, introduced a regression that prevented installing Rust through the shell script on older platforms. A patch was released that fixes the issue by no longer forcing TLS v1.2 on platforms that don't support it.

You can check out other rustup changes in its full release notes.

Mike ConleyFirefox Front-End Performance Update #17

Hello, folks. I wanted to give a quick update on what the Firefox Front-end Performance team is up to, so let’s get into it.

The name of the game continues to be start-up performance. We made some really solid inroads last quarter, and this year we want to continue to apply pressure. Specifically, we want to focus on reducing IO – main-thread IO in particular – during browser start-up.

Reducing main thread IO during start-up

There are lots of ways to reduce IO – in the best case, we can avoid start-up IO altogether by not doing something (or deferring it until much later). In other cases, when the browser might be servicing events on the main thread, we can move IO onto another thread. We can also re-organize, pack or compress files differently so that they’re read off of the disk more efficiently.

If you want to change something, the first step is measuring it. Thankfully, my colleague Florian has written a rather brilliant test that lets us take accounting of how much IO is going on during start-up. The test is deterministic enough that he’s been able to write a whitelist for the various ways we touch the disk on the main thread during start-up, and that whitelist means we’ve made it much more difficult for new IO to be introduced on that thread.

That whitelist has been processed by the team and has been turned into bugs, bucketed by the start-up phase where the IO is occurring. The next step is to estimate the effort and potential payoff of fixing those bugs, and then try to whittle down the whitelist.

And that’s effectively where we’re at. We’re at the point now where we’ve got a big list of work in front of us, and we have the fun task of burning that list down!

Being better at loading DLLs on Windows

While investigating the warm-up service for Windows, Doug Thayer noticed that we were loading DLLs during start-up oddly. Specifically, using a tool called RAMMap, he noticed that we were loading DLLs using “read ahead” (eagerly reading the entirety of the DLL into memory) into a region of memory marked as not-executable. This means that anytime we actually wanted to call a library function within that DLL, we needed to load it again into an executable region of memory.

Doug also noticed that we were unnecessarily doing ReadAhead for the same libraries in the content process. This wasn’t necessary, because by the time the content process wanted to load these libraries, the parent process would have already done it and it’d still be “warm” in the system file cache.

We’re not sure why we were doing this ReadAhead-into-unexecutable-memory work – its existence in the Firefox source code goes back many, many years, and the information we’ve been able to gather about the change is pretty scant at best, even with version control. Our top hypothesis is that this was a performance optimization that made more sense in the Windows XP era, but has since stopped making sense as Windows has evolved.

UPDATE: Ehsan pointed us to this bug where the change likely first landed. It’s a long and wind-y bug, but it seems as if this was indeed a performance optimization, and efforts were put in to side-step effects from Prefetch. I suspect that later changes to how Prefetch and SuperFetch work ultimately negated this optimization.

Doug hacked together a quick prototype to try loading DLLs in a more sensible way, and he was able to capture quite an improvement in start-up time on our reference hardware:

This graph measures various start-up metrics. The scatter of datapoints on the left shows the “control” build, and they tighten up on the right with the “test” build. Lower is better.

At this point, we all got pretty excited. The next step was to confirm Doug’s findings, so I took his control and test builds, and tested them independently on the reference hardware using frame recording. There was a smaller1, but still detectable improvement in the test build. At this point, we decided it was worth pursuing.

Doug put together a patch, got it reviewed and landed, and we immediately saw an impact in our internal benchmarks.

We’re also seeing the impact reflected in Telemetry. The first Nightly build with Doug Thayer’s patch went out on April 14th, and we’re starting to see a nice dip in some of our graphs here:

This graph measures the time at which the browser window reports that it has first painted. April 14th is the second-last date on the X axis, and the Y axis is time. The top-most line plots the 95th percentile, and there’s a nice dip appearing around April 14th.

There are other graphs that I’d normally show for improvements like this, except that we started tracking an unrelated regression on April 16th which kind of muddies the visualization. Bad timing, I guess!

We expect this improvement to have the greatest impact on weaker hardware with slower disks, but we’ll be avoiding some unnecessary work for all Windows users, and that gets a thumbs-up in my books.

If all goes well, this fix should roll out in Firefox 68, which reaches our release audience on July 9th!

  1. My test machine has SuperFetch disabled to help reduce noise and inconsistency with start-up tests, and we suspect SuperFetch is able to optimize start-up better in the test build 

Daniel StenbergWhy they use curl

As a reader of my blog you know curl. You also most probably already know why you would use curl and if I’m right, you’re also a fan of using the right tool for the job. But do you know why others use curl and why they switch from other solutions to relying on curl for their current and future data transfers? Let me tell you the top reasons I’m told by users.

Logging and exact error handling

What exactly happened in a transfer, and why, are terribly important questions to some users, and with curl you have the tools to figure that out and also be sure that curl either returns failure or the command worked. This clear and binary distinction is important to users for whom every single file transfer is important. For example, some of the largest and most well-known banks in the world use curl in their back-ends, where each file transfer can mean a transfer of extremely large sums of money.

A few years ago I helped a money transaction service switch to curl to get that exact line in the sand figured out. To know exactly and with certainty if money had been transferred – or not – for a given operation. Vital for their business.

curl does not have the browsers’ lenient approach of “anything goes as long as we get something to show” when it comes to the Internet protocols.
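
That clear exit-status contract is easy to build on. As an illustrative sketch (the wrapper function and the error table below are my own, but the exit codes themselves are curl's documented ones), a script can treat any nonzero status as a hard failure:

```python
import subprocess

# A few of curl's documented exit codes (see the EXIT CODES section of `man curl`).
CURL_EXIT_CODES = {
    0: "success - the transfer worked",
    6: "could not resolve host",
    7: "failed to connect to host",
    22: "HTTP error returned (only reported when -f/--fail is used)",
    28: "operation timed out",
}

def describe_exit(code):
    """Map a curl exit status to a human-readable explanation."""
    return CURL_EXIT_CODES.get(code, "failed with exit code %d" % code)

def transfer(url):
    """Run curl with --fail so HTTP errors also produce a nonzero exit,
    giving the caller a single, unambiguous success/failure signal."""
    result = subprocess.run(["curl", "-sS", "--fail", "-o", "/dev/null", url])
    return result.returncode == 0, describe_exit(result.returncode)
```

With --fail, even an HTTP 4xx/5xx response becomes a nonzero exit, so “did the transfer work?” has exactly one answer.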

Verbose goodness

curl’s verbose output options allow users to see exactly what curl sends and receives in a quick and uncomplicated way. This is invaluable for developers trying to figure out what’s happening and what’s wrong, at either end of the data transfer.

curl’s verbose options allow developers to see all sent and received data even when encryption is used. And if that is not enough, its SSLKEYLOGFILE support allows you to take it to the next level when you need to!
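
As a sketch of how that is typically wired up (the helper function below is hypothetical; the -v flag and the SSLKEYLOGFILE environment variable are real curl features): launch curl with verbose output and with TLS session keys logged, then point a tool such as Wireshark at the key log to decrypt a packet capture of the transfer.

```python
import os

def verbose_curl(url, keylog_path="/tmp/sslkeys.txt"):
    """Build a curl command line with -v for the full protocol trace, plus an
    environment where curl's TLS backend writes session keys to keylog_path."""
    env = dict(os.environ, SSLKEYLOGFILE=keylog_path)
    cmd = ["curl", "-v", url]
    return cmd, env

cmd, env = verbose_curl("https://example.com")
# e.g. subprocess.run(cmd, env=env) would then perform the traced transfer
```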

Same behavior over time

Users sometimes upgrade their curl installations after several years of not having done so. Bumping any software’s version after several years and many releases can be a bit of an adventure: things have changed, behavior is different, and things that previously worked no longer do.

With curl however, you can upgrade to a version that is a decade newer, with lots of new fancy features and old crummy bugs fixed, only to see that everything that used to work back in the day still works – the same way. With curl, you can be sure that there’s an enormous focus on maintaining old functionality when going forward.

Present on all platforms

Because curl is highly portable, our users can have and use curl on just about any platform you can think of, with the same options and behaviors across them all. Learn curl on one platform, then continue to use it the same way on the next system. Platforms and their individual popularity vary over time, and we are happy to let users pick the ones they like – and you can be sure that curl will run on them all.


Performance

When doing the occasional file transfer every once in a while, raw transfer performance doesn’t matter much. Most of the time will just be spent waiting on the network anyway. You can easily get away with your Python and Java frameworks’ multiple levels of overhead and excessive memory consumption.

Users who scan the Internet or otherwise perform many thousands of transfers per second from a large number of threads and machines realize that they need fewer machines that spend less CPU time if they build their file transfer solutions on top of curl. In curl we have a focus on only doing what’s required and it’s a lean and trimmed solution with a well-documented API built purely for Internet data transfers.

The features you want

The author of a banking application recently explained to us that one of the top reasons they switched to using curl for their Internet data transfers is curl’s ability to keep the file name from the URL.

curl is a feature-packed tool and library that most likely already supports the protocols you need and provides the power features you want, with a healthy number of “extension points” where you can extend it or hook in your own custom solution.

Support and documentation

No other tool or library for Internet transfers has anywhere close to the same amount of documentation, examples available on the net, existing user base that can help out, and friendly users to support you when you run into issues. Ask questions on the mailing lists, file a bug on the bug tracker or even show your non-working code on Stack Overflow to further your project.

curl is really the only Internet transfer option available that is old and battle-proven by the giants of the industry, trustworthy, high-performing, and for which you can also buy commercial support today.

This blog post was also co-posted on

Ryan Harter: When the Bootstrap Breaks - ODSC 2019

I'm excited to announce that I'll be presenting at the Open Data Science Conference in Boston next week. My colleague Saptarshi and I will be talking about When the Bootstrap Breaks.

I've included the abstract below, but the high-level goal of this talk is to strip some varnish off the bootstrap. Folks often look to the bootstrap as a panacea for weird data, but all tools have their failure cases. We plan on highlighting some problems we ran into when trying to use the bootstrap for Firefox data and how we dealt with the issues, both in theory and in practice.


Resampling methods like the bootstrap are becoming increasingly common in modern data science. For good reason too; the bootstrap is incredibly powerful. Unlike t-statistics, the bootstrap doesn’t depend on a normality assumption nor require any arcane formulas. You’re no longer limited to working with well understood metrics like means. One can easily build tools that compute confidence intervals for an arbitrary metric. What’s the standard error of a Median? Who cares! I used the bootstrap.

With all of these benefits the bootstrap begins to look a little magical. That’s dangerous. To understand your tool you need to understand how it fails, how to spot the failure, and what to do when it does. As it turns out, methods like the bootstrap and the t-test struggle with very similar types of data. We’ll explore how these two methods compare on troublesome data sets and discuss when to use one over the other.

In this talk we’ll explore what types of data the bootstrap has trouble with. Then we’ll discuss how to identify these problems in the wild and how to deal with the problematic data. We will explore simulated data and share the code to conduct the simulations yourself. However, this isn’t just a theoretical problem. We’ll also explore real Firefox data and discuss how Firefox’s data science team handles this data when analyzing experiments.

At the end of this session you’ll leave with a firm understanding of the bootstrap. Even better, you’ll understand how to spot potential issues in your data and avoid false confidence in your results.
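
To make the abstract’s point concrete (this sketch is mine, not code from the talk): a percentile-method bootstrap confidence interval for a median takes only a few lines, which is exactly why the method can feel magical.

```python
import numpy as np

def bootstrap_ci(data, stat=np.median, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-method bootstrap confidence interval for an arbitrary statistic."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    # Resample with replacement, recomputing the statistic each time.
    stats = np.array([stat(rng.choice(data, size=data.size, replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# A skewed sample, loosely resembling session-length telemetry.
sample = np.random.default_rng(42).lognormal(mean=0.0, sigma=1.0, size=500)
lo, hi = bootstrap_ci(sample)
```

Note that nothing here checks whether the bootstrap’s assumptions actually hold for the data at hand, which is precisely the failure mode the talk is about.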

The Mozilla Blog: It’s Complicated: Mozilla’s 2019 Internet Health Report

Our annual open-source report examines how humanity and the internet intersect. Here’s what we found


Today, Mozilla is publishing the 2019 Internet Health Report — our third annual examination of the internet, its impact on society and how it influences our everyday lives.

The Report paints a mixed picture of what life online looks like today. We’re more connected than ever, with humanity passing the ‘50% of us are now online’ mark earlier this year. And, while almost all of us enjoy the upsides of being connected, we also worry about how the internet and social media are impacting our children, our jobs and our democracies.

When we published last year’s Report, the world was watching the Facebook-Cambridge Analytica scandal unfold — and these worries were starting to grow. Millions of people were realizing that widespread, laissez-faire sharing of our personal data, the massive growth and centralization of the tech industry, and the misuse of online ads and social media were adding up to a big mess.

Over the past year, more and more people started asking: what are we going to do about this mess? How do we push the digital world in a better direction?

As people asked these questions, our ability to see the underlying problems with the system — and to imagine solutions — has evolved tremendously. Recently, we’ve seen governments across Europe step up efforts to monitor and thwart disinformation ahead of the upcoming EU elections. We’ve seen the big tech companies try everything from making ads more transparent to improving content recommendation algorithms to setting up ethics boards (albeit with limited effect and with critics saying ‘you need to do much more!’). And, we’ve seen CEOs and policymakers and activists wrestling with each other over where to go next. We have not ‘fixed’ the problems, but it does feel like we’ve entered a new, sustained era of debate about what a healthy digital society should look like.

The 2019 Internet Health Report examines the story behind these stories, using interviews with experts, data analysis and visualization, and original reporting. It was also built with input from you, the reader: In 2018, we asked readers what issues they wanted to see in the next Report.

In the Report’s three spotlight articles, we unpack three big issues: One examines the need for better machine decision making — that is, asking questions like Who designs the algorithms? and What data do they feed on? and Who is being discriminated against? Another examines ways to rethink the ad economy, so surveillance and addiction are no longer design necessities.  The third spotlight article examines the rise of smart cities, and how local governments can integrate tech in a way that serves the public good, not commercial interests.

Of course, the Report isn’t limited to just three topics. Other highlights include articles on the threat of deepfakes, the potential of user-owned social media platforms, pornography literacy initiatives, investment in undersea cables, and the dangers of sharing DNA results online.

So, what’s our conclusion? How healthy is the internet right now? It’s complicated — the digital environment is a complex ecosystem, just like the planet we live on. There have been a number of positive trends in the past year that show that the internet — and our relationship with it — is getting healthier:

Calls for privacy are becoming mainstream. The last year brought a tectonic shift in public awareness about privacy and security in the digital world, in great part due to the Cambridge Analytica scandal. That awareness is continuing to grow — and also translate into action. European regulators, with help from civil society watchdogs and individual internet users, are enforcing the GDPR: In recent months, Google has been fined €50 million for GDPR violations in France, and tens of thousands of violation complaints have been filed across the continent.

There’s a movement to build more responsible AI. As the flaws with today’s AI become more apparent, technologists and activists are speaking up and building solutions. Initiatives like the Safe Face Pledge seek facial analysis technology that serves the common good. And experts like Joy Buolamwini, founder of the Algorithmic Justice League, are lending their insight to influential bodies like the Federal Trade Commission and the EU’s Global Tech Panel.

Questions about the impact of ‘big tech’ are growing. Over the past year, more and more people focused their attention on the fact that eight companies control much of the internet. As a result, cities are emerging as a counterweight, ensuring municipal technology prioritizes human rights over profit — the Cities for Digital Rights Coalition now has more than two dozen participants. Employees at Google, Amazon, and Microsoft are demanding that their employers don’t use or sell their tech for nefarious purposes. And ideas like platform cooperativism and collaborative ownership are beginning to be discussed as alternatives.

On the flipside, there are many areas where things have gotten worse over the past year — or where there are new developments that worry us:

Internet censorship is flourishing. Governments worldwide continue to restrict internet access in a multitude of ways, ranging from outright censorship to requiring people to pay additional taxes to use social media. In 2018, there were 188 documented internet shutdowns around the world. And a new form of repression is emerging: internet slowdowns. Governments and law enforcement restrict access to the point where a single tweet takes hours to load. These slowdowns diffuse blame, making it easier for oppressive regimes to deny responsibility.

Biometrics are being abused. When large swaths of a population don’t have access to physical IDs, digital ID systems have the potential to make a positive difference. But in practice, digital ID schemes often benefit heavy-handed governments and private actors, not individuals. In India, over 1 billion citizens were put at risk by a vulnerability in Aadhaar, the government’s biometric ID system. And in Kenya, human rights groups took the government to court over its soon-to-be-mandatory National Integrated Identity Management System (NIIMS), which is designed to capture people’s DNA information, the GPS location of their home, and more.

AI is amplifying injustice. Tech giants in the U.S. and China are training and deploying AI at a breakneck pace that doesn’t account for potential harms and externalities. As a result, technology used in law enforcement, banking, job recruitment, and advertising often discriminates against women and people of color due to flawed data, false assumptions, and lack of technical audits. Some companies are creating ‘ethics boards’ to allay concerns — but critics say these boards have little or no impact.

When you look at trends like these — and many others across the Report — the upshot is: the internet has the potential both to uplift and connect us. But it also has the potential to harm and tear us apart. This has become clearer to more and more people in the last few years. It has also become clear that we need to step up and do something if we want the digital world to net out as a positive for humanity rather than a negative.

The good news is that more and more people are dedicating their lives to creating a healthier, more humane digital world. In this year’s Report, you’ll hear from technologists in Ethiopia, digital rights lawyers in Poland, human rights researchers from Iran and China, and dozens of others. We’re indebted to these individuals for the work they do every day. And also to the countless people in the Mozilla community — 200+ staff, fellows, volunteers, like-minded organizations — who helped make this Report possible and who are committed to making the internet a better place for all of us.

This Report is designed to be both a reflection and resource for this kind of work. It is meant to offer technologists and designers inspiration about what they might build; to give policymakers context and ideas for the laws they need to write; and, most of all, to provide citizens and activists with a picture of where others are pushing for a better internet, in the hope that more and more people around the world will push for change themselves. Ultimately, it is by more and more of us doing something in our work and our lives that we will create an internet that is open, human and humane.

I urge you to read the Report, leave comments and share widely.

PS. This year, you can explore all these topics through reading “playlists,” curated by influential people in the internet health space like Esra’a Al Shafei, Luis Diaz Carlos, Joi Ito and others.

The post It’s Complicated: Mozilla’s 2019 Internet Health Report appeared first on The Mozilla Blog.

Mark Surman: Why AI + consumer tech?

In my last post, I shared some early thoughts on how Mozilla is thinking about AI as part of our overall internet health agenda. I noted in that post that we’re leaning towards consumer tech as the focus and backdrop for whatever goals we take on in AI. In our draft issue brief we say:

Mozilla is particularly interested in how automated decision making is being used in consumer products and services. We want to make sure the interests of all users are designed into these products and services. Where they aren’t, we want to call that out.

After talking to nearly 100 AI experts and activists, this consumer tech focus feels right for Mozilla. But it also raises a number of questions: what do we mean by consumer tech? What is in scope for this work? And what is not? Are we missing something important with this focus?

At its simplest, the consumer tech platforms that we are talking about are general purpose internet products and services aimed at a wide audience for personal use. These include things like social networks, search engines, retail e-commerce, home assistants, computers, smartphones, fitness trackers, self-driving cars, etc. — almost all of which are connected to the internet and are fueled by our personal data. The leading players in these areas are companies like Google, Amazon, Facebook, Microsoft and Apple in the US as well as companies like Baidu, TenCent, and AliBaba in China. These companies are also amongst the biggest developers and users of AI, setting trends and shipping technology that shapes the whole of the tech industry. And, as long as we remain in the era of machine learning, these companies have a disproportionate advantage in AI development as they control huge amounts of data and computing power that can be used to train automated systems.

Given the power of the big tech companies in shaping the AI agenda — and the growing pervasiveness of automated decision making in the tech we all use every day — we believe we need to set a higher bar for the development, use and impact of AI in consumer products and services. We need a way to reward companies who reach that bar, and to push back on and hold to account those who do not.

Of course, AI isn’t bad or good on its own — it is just another tool in the toolbox of computer engineering. Benefits, harms and side effects come from how systems are designed, what data is selected to train them and what business rules they are given. For example, if you search for ‘doctor’ on Google, you mostly see white doctors because that bias is in the training data. Similarly, content algorithms on sites like YouTube often recommend increasingly extreme content because the main business rule they are optimized for is to keep people on the site or app for as long as possible. Humans — and the companies they work in — can avoid or fix problems like these. Helping them do so is important work. It’s worth doing.

Of course, there are important issues related to AI and the health of the internet that go beyond consumer technology: the use of biased facial recognition software by police and immigration authorities; similarly biased and unfair resume-sorting algorithms used by human resource departments as part of hiring processes; the use of AI by the military to automate and add precision to killing from a distance. Ensuring that human rights and dignity are protected as the use of machine decision making grows within government and the back offices of big business is critical. Luckily, there is an amazing crew of organizations stepping up to address these issues, such as AI Now in the US and Algorithm Watch in Europe. Mozilla taking a lead in these areas wouldn’t add much. Here, we should play a supporting role.

In contrast, there are few players focused squarely on how AI is showing up in consumer products and services. Yet this is one of the places where the power and the impact of AI is moving most rapidly. Also, importantly, consumer tech is the field on which Mozilla has always played. As we try to shape where AI is headed, it makes sense to do so here. We’re already doing so in small ways with technology, showing a more open way to approach machine learning with projects like Deep Speech and Common Voice. However, we have a chance to do much more by using our community, brand and credibility to push the development and use of AI in consumer tech in the right direction. We might do this as a watchdog. Or by collecting a brain trust of fellows with new ideas about privacy in AI. Or by helping to push for policies that lead to real accountability. There are many options. Whatever we pick, it feels like the consumer tech space is both in need of attention and well suited to the strengths that Mozilla brings to the table.

I would say that we’re 90% decided that consumer tech is the right place to focus Mozilla’s internet health movement building work around AI. That means there is a 9/10 chance that this is where we will go — but there is a chance that we hear something at this stage that changes this thinking in a meaningful way. As we zero in on this decision, I’d be interested to know what others think: If we go in this direction, what are the most important things to be thinking about? Where are the big opportunities? On the flip side, are there important things we’ll be missing if we go down this path? Feel free to comment on this post, tweet or email me if you have thoughts.

The post Why AI + consumer tech? appeared first on Mark Surman.

The Firefox Frontier: 5 times when video ads autoplay and ruin everything.

The room is dark and silent. Suddenly, a loud noise pierces your ears. You panic as everyone turns in your direction. You just wanted to read an article about cute … Read more

The post 5 times when video ads autoplay and ruin everything. appeared first on The Firefox Frontier.

Ian Bicking: “Users want control” is a shoulder shrug

Making the claim “users want control” is the same as saying you don’t know what users want, you don’t know what is good, and you don’t know what their goals are.

I first started thinking about this during the debate over what would become the ACA. The rhetoric was filled with this idea that people want choice in their medical care: people want control.

No! People want good health care. If they don’t trust systems to provide them good health care, if they don’t trust their providers to understand their priorities, then choice is the fallback: it’s how you work the system when the system isn’t working for you. And it sucks! Here you are, in the middle of some health issue, with treatments and symptoms and the rest of your life duties, and now you have to become a researcher on top of it? But the politicians and the pundits could not stop talking about control.

Control is what you need when you want something and it won’t happen on its own. But (usually) it’s not control you want, it’s just a means.

So when we say users want control over X – their privacy, their security, their data, their history – we are first acknowledging that current systems act against users, but we aren’t proposing any real solution. We’re avoiding even talking about the problems.

For instance, we say “users want control over their privacy,” but what people really want is some subset of:

  1. To avoid embarrassment
  2. To avoid persecution
  3. … sometimes for doing illegal and wrong things
  4. To keep from having the creeping sensation that they left something sitting out that they didn’t want to
  5. To make a political statement against surveillance
  6. To keep things from the prying eyes of those close to them
  7. To avoid being manipulated by bad-faith messaging

There’s no easy answers, not everyone holds all these desires, but these are concrete ways of thinking about what people want. They don’t all point in the same direction. (And then consider the complex implications of someone else talking about you!)

There are some cases when a person really does want control. If the person wants to determine their own path, if having choice is itself a personal goal, then you need control. That’s a goal about who you are, not just what you get. It’s worth identifying moments when this is important. But if a person does not pay attention to something, then that person probably does not identify with the topic and is not seeking control over it. “Privacy advocates” pay attention to privacy, and attain a sense of identity from the very act of being mindful of their own privacy. Everyone else does not.

Let’s think about another example: users want control over their data. What are some things they want?

  1. They don’t want to lose their data
  2. They don’t want their data used to hold them hostage (e.g., to a subscription service)
  3. They don’t want to delete data and have it still reappear
  4. They want to use their data however they want, but more likely they want their data available for use by some other service or tool
  5. They feel it’s unfair if their data is used for commercial purposes without any compensation
  6. They are offended if their data is used to manipulate themselves or others
  7. They don’t want their data used against them in manipulative ways
  8. They want to have shared ownership of data with other people
  9. They want to prevent unauthorized or malicious access to their data

Again these motivations are often against each other. A person wants to be able to copy their data between services, but also delete their data permanently and completely. People don’t want to lose their data, but having personal control over your data is a great way to lose it, or even to lose control over it. The professionalization and centralization of data management by services has mostly improved access control and reliability.

When we simply say users want control, it’s giving up on understanding people’s specific desires. Still, it’s not exactly wrong: it’s reasonable to assume people will use control to achieve their desires. But if, as technologists, we can’t map functionality to desire, it’s a bit of a stretch to imagine everyone else will figure it out on the fly.