Air Mozilla: SUMO Community Meeting

SUMO Community Meeting - 04.13.2018

Chris AtLee: Taskcluster migration update: we're finished!

We're done!

Over the past few weeks we've hit a few major milestones in our project to migrate all of Firefox's CI and release automation to taskcluster.

Firefox 60 and higher are now 100% on taskcluster!


At the end of March, our Release Operations and Project Integrity teams finished migrating Windows tests onto new hardware machines, all running taskcluster. That work was later uplifted to beta so that CI automation on beta would also be completely done using taskcluster.

This marked the last usage of buildbot for Firefox CI.

Periodic updates of blocklist and pinning data

Last week we switched off the buildbot versions of the periodic update jobs. These jobs keep the in-tree versions of blocklist, HSTS and HPKP lists up to date.

These were the last buildbot jobs running on trunk branches.

Partner repacks

And to wrap things up, yesterday the final patches landed to migrate partner repacks to taskcluster. Firefox 60.0b14 was built yesterday and shipped today 100% using taskcluster.

A massive amount of work went into migrating partner repacks from buildbot to taskcluster, and I'm really proud of the whole team for pulling this off.

So, starting today, Firefox 60 and higher will be completely on taskcluster and will not rely on buildbot.

It feels really good to write that :)

We've been working on migrating Firefox to taskcluster for over three years! Code archaeology is hard, but I think the first Firefox jobs to start running in Taskcluster were the Linux64 builds, done by Morgan in bug 1155749.

Into the glorious future

It's great to have migrated everything off of buildbot and onto taskcluster, and we have endless ideas for how to improve things now that we're there. First we need to spend some time cleaning up after ourselves and paying down some technical debt we've accumulated. It's a good time to start ripping out buildbot code from the tree as well.

We've got other plans to make release automation easier for other people to work with, including doing staging releases on try(!!), making the nightly release process more similar to the beta/release process, and exposing different parts of the release process to release management so that releng doesn't have to be directly involved with the day-to-day release mechanics.

The Mozilla Blog: Building Bold New Worlds With Virtual Reality


“I wanted people to feel the whole story with their bodies, not just with their minds. Once I discovered virtual reality was the place to do that, it was transformative.”
– Nonny de la Peña, CEO of Emblematic


Great creators can do more than just tell a story. They can build entirely new worlds for audiences to experience and enjoy.

From rich text to video to podcasts, the Internet era offers an array of new ways for creators to build worlds. Here at Mozilla, we are particularly excited about virtual reality. Imagine moving beyond watching or listening to a story; imagine also feeling that story. Imagine being inside it with your entire mind and body. Now imagine sharing and entering that experience with something as simple as a web URL. That’s the potential before us.

To fully realize that potential, we need people who think big. We need artists and developers and engineers who are driven to push the boundaries of the imagination. We need visionaries who can translate that imagination into virtual reality.

The sky is the limit with virtual reality, and we’re driven to serve as the bridge that connects artists and developers. We are also committed to providing those communities with the tools and resources they need to begin building their own worlds. Love working with Javascript? Check out the A-Frame framework. Do you prefer building with Unity? We have created a toolkit to bring your VR Unity experience to the web with WebVR.

We believe browsers are the future of virtual and augmented reality. The ability to click on a link and enter into an immersive, virtual world is a game-changer. This is why we held our ‘VR the People’ panel at the Sundance Film Festival, and why we will be at the Tribeca Film Festival in New York next week. We want to connect storytellers with this amazing technology. If you’re at Tribeca (or just in the area), please reach out. We’d love to chat.

This concludes our four-part series about virtual reality, storytelling, and the open web. It’s our mission to empower creators, and we hope these posts have left you inspired. If you’d like to watch our entire VR the People panel, check out the video below.


Be sure to visit to learn more about the tools and resources Mozilla offers to help you build new worlds from your imagination.

Read more on VR the People

The post Building Bold New Worlds With Virtual Reality appeared first on The Mozilla Blog.

Mozilla VR Blog: This Week in Mixed Reality: Issue 3

This Week in Mixed Reality: Issue 3

This week we’re heads down focusing on adding features in the three broad areas of Browsers, Social and the Content Ecosystem.


Browsers

This week we focused on building Firefox Reality and we’re excited to announce additional features:

  • Implemented private tabs
  • Tab overflow popup list
  • Added contextual menu for “more options” in the header
  • Improvements for SVR-based devices:
    • Update SDK to v2.1.2 for tracking improvements
    • Fall back to head-tracking-based input when no controllers are available
    • Implement scrolling using wheel and trackpad input buttons in ODG devices
  • Working on the virtual keyboard across the Android platform
  • We are designing the transitions for WebVR immersive mode

Check out the video clip of the contextual menu and private tabs, two of the features we added this week:

Firefox Reality private browsing from Imanol Fernández Gorostizaga on Vimeo.


Social

We're working on a web-based social experience for Mixed Reality.

In the last week, we have:

  • Landed the next 2D UX pass, which cleans up a bunch of CSS and design inconsistencies and prompts users for avatar and name customization before entry until they have customized their name.
  • Ongoing work for the final push of in-VR UX: a unified 3D cursor, a “pause/play” mode for blocking UX, finalized HUD design and positioning, and a less error-prone teleporting component should all land this week.
  • Worked through remaining issues with deployments, and cleaned up bugs and restart issues with Habitat (as well as filing a number of bugs).
  • Set up room member capping and room closing.

Join our public WebVR Slack #social channel to join in the discussion!

Content ecosystem

This week, Blair MacIntyre released a new version of the iOS WebXR Viewer app that includes support for experimenting with Computer Vision.

Check out the video below:

Stay tuned next week for some exciting news!

Chris Cooper: New to me: the Taskcluster team

All entities move and nothing remains still.

At this time last year, I had just moved on from Release Engineering to start managing the Sheriffs and the Developer Workflow teams. Shortly after the release of Firefox Quantum, I also inherited the Taskcluster team. The next few months were *ridiculously* busy as I tried to juggle the management responsibilities of three largely disparate groups.

By mid-January, it became clear that I could not, in fact, do it all. The Taskcluster group had the biggest ongoing need for management support, so that’s where I chose to land. This sanity-preserving move also gave a colleague, Kim Moir, the chance to step into management of the Developer Workflow team.

Meet the Team

Let me start by introducing the Taskcluster team.

We are an eclectic mix of curlers, snooker players, pinball enthusiasts, and much else besides. We also write and run continuous integration (CI) software at scale.

What are we doing?

Socrates gets booked
The part I understand is excellent, and so too is, I dare say, the part I do not understand…

One of the reasons why I love the Taskcluster team so much is that they have a real penchant for documentation. That includes their design and post-mortem processes. Previously, I had only managed others who were using Taskcluster…consumers of their services. The Taskcluster documentation made it really easy for me to plug in quickly and help provide direction.

If you’re curious about what Taskcluster is at a foundational level, you should start with the tutorial.

The Taskcluster team currently has three big efforts in progress.

1. Redeployability

Many Taskcluster team members initially joined the team with the dream of building a true, open source CI solution. Dustin has a great post explaining the impetus behind redeployability. Here’s the intro:

Taskcluster has always been open source: all of our code is on Github, and we get lots of contributions to the various repositories. Some of our libraries and other packages have seen some use outside of a Taskcluster context, too.

But today, Taskcluster is not a project that could practically be used outside of its single incarnation at Mozilla. For example, we hard-code the name in a number of places, and we include our config in the source-code repositories. There’s no legal or contractual reason someone else could not run their own Taskcluster, but it would be difficult and almost certainly break next time we made a change.

The Mozilla incarnation is open to use by any Mozilla project, although our focus is obviously Firefox and Firefox-related products like Fennec. This was a practical decision: our priority is to migrate Firefox to Taskcluster, and that is an enormous project. Maintaining an abstract ability to deploy additional instances while working on this project was just too much work for a small team.

The good news is, the focus is now shifting. The migration from Buildbot to Taskcluster is nearly complete, and the remaining pieces are related to hardware deployment, largely by other teams. We are returning to work on something we’ve wanted to do for a long time: support redeployability.

We’re a little further down that path than when he first wrote about it in January, but you can read more about our efforts to make Taskcluster more widely deployable in Dustin’s blog.

2. Support for a new cloud provider

This provider offers some interesting services, like baremetal servers and access to ARM hardware, that other cloud providers are only starting to offer. Experiments with our existing emulator tests on the baremetal servers have shown incredible speed-ups in some cases. The promise of ARM hardware is particularly appealing for future mobile testing efforts.

Over the next few months, we plan to add support for this provider to the Mozilla instance of Taskcluster. This lines up well with the efforts around redeployability, i.e. we need to be able to support different and/or multiple cloud providers anyway.

3. Keeping the lights on (KTLO)

While not particularly glamorous, maintenance is a fact of life for software engineers supporting code that is running in production. That said, we should actively work to minimize the amount of maintenance work we need to do.

One of the first things I did when I took over the Taskcluster team full-time was halt *all* new and ongoing work to focus on stability for the entire month of February. This was precipitated by a series of prolonged outages in January. We didn’t have an established error budget at the time, but if we had, we would have completely blown through it.

Our focus on stability had many payoffs, including more robust deployment stories for many of our services, and a new IRC channel (#taskcluster-bots) full of deployment notices and monitoring alerts. We needed to put in this stability work to buy ourselves the time to work on redeployability.

What are we *not* doing?

With all the current work on redeployability, it’s tempting to look ahead to when we can incorporate some of these improvements into the current Firefox CI setup. While we do plan to redeploy Firefox CI at some point this year to take advantage of these systemic improvements, it is not our focus…yet.

One of the other things I love about the Taskcluster team is that they are really good at supporting community contribution. If you’re interested in learning more about Taskcluster or even getting your feet wet with some bugs, please drop by the #taskcluster channel on IRC and say Hi!

Nick Cameron: Dev-tools in 2018

This is a bit late (how is it the middle of April already?!), but the dev-tools team has lots of exciting plans for 2018 and I want to talk about them!

Our goals for 2018

Here's a summary of our goals for the year.

Ship it!

We want to ship high quality, mature, 1.0 tools in 2018, including:

  • Rustfmt (1.0)
  • Rust Language Server (RLS, 1.0)
  • Rust extension for Visual Studio Code using the RLS (non-preview, 1.0)
  • Clippy (1.0, though possibly not labeled that, including addressing distribution issues)

Support the epoch transition

2018 will bring a step change in Rust with the transition from 2015 to 2018 epochs. For this to be a smooth transition it will need excellent tool support. Exactly what tool support will be required will emerge during the year, but at the least we will need to provide a tool to convert crates to the new epoch.

We also need to ensure that all the currently existing tools continue to work through the transition. For example, that Rustfmt and IntelliJ can handle new syntax such as dyn Trait, and the RLS copes with changes to the compiler internals.


Cargo

The Cargo team have their own goals. Some things on the radar from a more general dev-tools perspective are integrating parts of Xargo and Rustup into Cargo to reduce the number of tools needed to manage most Rust projects.

Custom test frameworks

Testing in Rust is currently very easy and natural, but also very limited. We intend to broaden the scope of testing in Rust by permitting users to opt in to custom testing frameworks. This year we expect the design to be complete (and an RFC accepted) and for a solid and usable implementation to exist (though stabilisation may not happen until 2019). The current benchmarking facilities will be reimplemented as a custom test framework. The framework should support testing for WASM and embedded software.


Doxidize

Doxidize is a successor to Rustdoc. It adds support for guide-like documentation as well as API docs. This year there should be an initial release and it should be practical to use for real projects.

Maintain and improve existing tools

Maintenance and consistent improvement are essential to avoid bit-rot. Existing mature tools should continue to be well-maintained and improved as necessary. This includes:

  • debugging support,
  • Rustdoc,
  • Rustup,
  • Bindgen,
  • editor integration.

Good tools info on the Rust website

The Rust website is planned to be revamped this year. The dev-tools team should be involved to ensure that there is clear and accurate information about key tools in the Rust ecosystem and that high quality tools are discoverable by new users.

Organising the team

The dev-tools team should be reorganised to continue to scale and to support the goals in this roadmap. I'll outline the concrete changes next.

Re-organising the dev-tools team

The dev-tools team has always been large and somewhat broad - there are a lot of different tools at different levels of maturity with different people working on them. There has always been a tension between having a global, strategic view vs having a detailed, focused view. The peers system was one way to tackle that. This year we're trying something new - the dev-tools team will become something of an umbrella team, coordinating work across multiple teams and working groups.

We're creating two new teams - Rustdoc, and IDEs and editors - and going to work more closely with the Cargo team. We're also spinning up a bunch of working groups. These are more focused, less formal teams; they are dedicated to a single tool or task, rather than to strategy and decision making. Primarily they are a way to let people working on a tool work more effectively. The dev-tools team will continue to coordinate work and keep track of the big picture.

We're always keen to work with more people on Rust tooling. If you'd like to get involved, come chat to us on Gitter in the following rooms:

The teams


Dev-tools

Manish Goregaokar, Steve Klabnik, and Without Boats will be joining the dev-tools team. This will ensure the dev-tools team covers all the sub-teams and working groups.

IDEs and editors

The new IDEs and editors team will be responsible for delivering great support for Rust in IDEs and editors of every kind. That includes the foundations of IDE support such as Racer and the Rust Language Server. The team is Nick Cameron (lead), Igor Matuszewski, Vlad Beskrovnyy, Alex Butler, Jason Williams, Junfeng Li, Lucas Bullen, and Aleksey Kladov.


Rustdoc

The new Rustdoc team is responsible for the Rustdoc software and related tech. The docs team will continue to focus on the documentation itself, while the Rustdoc team focuses on the software. The team is QuietMisdreavus (lead), Steve Klabnik, Guillaume Gomez, Oliver Middleton, and Onur Aslan.


Cargo

No change to the Cargo team.

Working groups
  • Bindgen
    • Bindgen and cbindgen
    • Nick Fitzgerald and Emilio Álvarez
  • Debugging
    • Debugger support for Rust - from compiler support, through LLVM and debuggers like GDB and LLDB, to the IDE integration.
    • Tom Tromey, Manish Goregaokar, and Michael Woerister
  • Clippy
    • Oliver Schneider, Manish Goregaokar, llogiq, and Pascal Hertleif
  • Doxidize
    • Steve Klabnik, Alex Russel, Michael Gatozzi, QuietMisdreavus, and Corey Farwell
  • Rustfmt
    • Nick Cameron and Seiichi Uchida
  • Rustup
    • Nick Cameron, Alex Crichton, Without Boats, and Diggory Blake
  • Testing
    • Focused on designing and implementing custom test frameworks.
    • Manish Goregaokar, Jon Gjengset, and Pascal Hertleif
  • 2018 edition tooling
    • Using Rustfix to ease the edition transition; ensure a smooth transition for all tools.
    • Pascal Hertleif, Manish Goregaokar, Oliver Schneider, and Nick Cameron

Thank you to everyone for the fantastic work they've been doing on tools, and for stepping up to be part of the new teams!

Air Mozilla: Emerging Tech Speaker Series Talk with Rian Wanstreet

Emerging Tech Speaker Series Talk with Rian Wanstreet: Precision Agriculture, or high tech farming, is being heralded as a panacea for the ever-growing demands of an increasing global population - but the...

Air Mozilla: Reps Weekly Meeting, 19 Apr 2018

Reps Weekly Meeting: This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

The Mozilla Blog: Nonny de la Peña & the Power of Immersive Storytelling


“I want you to think: if she can walk into that room and change her entire life and help create this whole energy and buzz, you can do it too.”
– Nonny de la Peña


This week, we’re highlighting VR’s groundbreaking potential to take audiences inside stories with a four-part video series. There aren’t many examples of creators doing that more effectively and powerfully than Nonny de la Peña.

Nonny de la Peña is a former correspondent for Newsweek, the New York Times and other major outlets. For more than a decade now, de la Peña has been focused on merging her passion for documentary filmmaking with a deep-seated expertise in VR. She essentially invented the field of “immersive journalism” through her company, Emblematic Group.

What makes de la Peña’s work particularly noteworthy (and a primary reason we’ve been driven to collaborate with her) is that her journalism often uses virtual reality to bring attention to under-served and overlooked groups.

To that end, our panel at this year’s Sundance Festival doubled as another installation in Nonny’s latest project, Mother Nature.

Mother Nature is an open and collaborative project that amplifies the voices of women and creators working in tech. It rebuts the notion that women are underrepresented in positions of power in tech and engineering roles because of anything inherent in their gender.

It’s a clear demonstration of how journalists and all storytellers can use VR to create experiences that can change minds and hearts, and help move our culture in a more open and human direction.

For more on Nonny de la Peña and her immersive projects, visit Emblematic Group. I’d also encourage you to access our resources and open tools at and learn how you can use virtual reality and the web to tell your own stories.

Read more on VR the People

The post Nonny de la Peña & the Power of Immersive Storytelling appeared first on The Mozilla Blog.

Mike Conley: Firefox Performance Update #6

Hi there folks, just another Firefox Performance update coming at you here.

These updates are going to shift format slightly. I’m going to start by highlighting the status of some of the projects the Firefox Performance Team (the front-end team working to make Firefox snappy AF) is working on, and then go into the grab-bag list of improvements that we’ve seen landing in the tree.

But first a word from our sponsor:!

This performance update is brought to you by! On Nightly versions of Firefox, a component called BackgroundHangReporter (or “BHR”) notices any time the main thread hangs for too long, and then collects a stack to send via Telemetry. We’ve been doing this for years, but we’ve never really had a great way of visualizing or making use of the data[1]. Enter by Doug Thayer! Initially a fork of perf.html, it lets us see graphs of hangs on Nightly broken down by category[2], and then also lets us explore the individual stacks that have come in using a perf.html-like interface! (You might need to be patient on that last link – it’s a lot of data to download.)

Hot damn! Note the finer-grain categories showing up on April 1st.
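For the curious, the core mechanism of a hang reporter like BHR boils down to a watchdog thread noticing a stale heartbeat. Here's a toy stdlib-only Python sketch; the class name, thresholds, and timings are all invented, and the real BHR collects native stacks for Telemetry rather than just counting hangs:

```python
import threading
import time

# Toy background-hang watchdog: the monitored thread bumps a heartbeat;
# a watchdog thread records a "hang" when the heartbeat goes stale for
# longer than the threshold. All numbers here are made up.
class HangMonitor:
    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.last_beat = time.monotonic()
        self.hangs = []          # durations of detected hangs
        self._stop = threading.Event()
        self._watchdog = threading.Thread(target=self._watch, daemon=True)

    def heartbeat(self):
        """Called regularly by the monitored (main) thread."""
        self.last_beat = time.monotonic()

    def _watch(self):
        while not self._stop.is_set():
            stale = time.monotonic() - self.last_beat
            if stale > self.threshold:
                # This is where a real reporter would capture a stack.
                self.hangs.append(stale)
                self.last_beat = time.monotonic()
            time.sleep(self.threshold / 4)

    def start(self):
        self._watchdog.start()

    def stop(self):
        self._stop.set()
        self._watchdog.join()

# Demo: heartbeat regularly, then simulate a hang by going silent.
monitor = HangMonitor(threshold=0.05)
monitor.start()
for _ in range(5):
    monitor.heartbeat()
    time.sleep(0.01)         # responsive: heartbeats keep arriving
time.sleep(0.2)              # the "hang": no heartbeats at all
monitor.stop()
```

After the simulated hang, `monitor.hangs` holds at least one stale duration; visualizing piles of such samples is exactly the job the tool above takes on.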

Early first blank paint (led by Florian Quèze)

This is a start-up perceived-performance project where early in the executable's life-cycle, long before we’ve figured out how to lay out and paint the browser UI, we show a white “blank” area on screen that is overtaken with the UI once it’s ready. The idea here is to avoid having the user stare at nothing after clicking on the Firefox icon. We’ll also naturally be working to reduce the amount of time that the blank window appears for users, but our research shows users feel like the browser starts up faster when we show something rather than nothing. Even if that nothing is… well, mostly nothing. Florian recently landed a Telemetry probe for this feature, made it so that we can avoid initting the GPU process for the blank window, and is in the midst of fixing an issue where the blank window appears for too long. We’re hoping to have this ready to ship enabled on some platforms (ideally Linux and Windows) in Firefox 61.

Faster content process start-up time (led by Felipe Gomes)

Explorations are just beginning here. Felipe has been examining the scripts that are running for each tab on creation, and has a few ideas on how to both reduce their parsing overhead, as well as making them lazier to load. This project is mostly at the research stage. Expect concrete details on sub-projects and linked bugs soon!

Get ContentPrefService init off of the main thread (led by Doug Thayer)

This is so, so close to being done. The patch is written and reviewed, but landing it is being stymied by a hard-to-reproduce locally but super-easy-to-reproduce-in-automation shutdown leak during test runs. Unfortunately, the last 10% sometimes takes 90% of the effort, and this looks like one of those cases.

Blocklist improvements (led by Gijs Kruitbosch)

Gijs is continuing to make our blocklist asynchronous. Recently, he made the getAddonBlocklistEntry method of the API asynchronous, which is a big deal for start-up, since it means we drop another place where the front-end has to wait for the blocklist to be ready! The getAddonBlocklistState method is next on the list.

As a fun exercise, you can follow the “true” value for the BLOCKLIST_SYNC_FILE_LOAD probe via this graph, and watch while Gijs buries it into the ground.

LRU cache for tab layers (led by Doug Thayer)

Doug Thayer is following up on some research done a few years ago that suggests that we can make ~95% of our users’ tab switches feel instantaneous by implementing an LRU cache for the painted layers. This is a classic space-time trade-off, as the cache will necessarily consume memory in order to hold onto the layers. Research is currently underway here to see how we can continue to improve our tab switching performance without losing out on the memory advantage that we tend to have over other browsers.
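The cache itself is the textbook LRU structure. A quick sketch in Python, with a string standing in for the painted layers, and tab IDs and capacity invented for illustration:

```python
from collections import OrderedDict

# Sketch of an LRU cache for painted tab layers: keep the N most
# recently used tabs' layers alive; evict the least recently used
# entry when the cache is full (trading memory for switch speed).
class LayerCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self._layers = OrderedDict()       # tab_id -> painted layers

    def touch(self, tab_id):
        """Called on tab switch: return cached layers, or None on a miss."""
        if tab_id in self._layers:
            self._layers.move_to_end(tab_id)   # mark most recently used
            return self._layers[tab_id]
        return None

    def store(self, tab_id, layers):
        """Cache freshly painted layers, evicting the LRU entry if needed."""
        self._layers[tab_id] = layers
        self._layers.move_to_end(tab_id)
        if len(self._layers) > self.capacity:
            self._layers.popitem(last=False)   # drop least recently used

# Demo: with capacity 2, touching tab 1 saves it from eviction.
cache = LayerCache(capacity=2)
cache.store(1, "tab-1 layers")
cache.store(2, "tab-2 layers")
cache.touch(1)                    # tab 1 becomes most recently used
cache.store(3, "tab-3 layers")    # evicts tab 2, the LRU entry
```

The memory cost is explicit here: capacity times the size of a painted layer set, which is why the research focuses on tuning that trade-off.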

Tab warming (led by Mike Conley)

Tab warming has been enabled on Nightly for a few weeks, and besides one rather serious glitch that we’ve fixed, we’ve been pretty pleased with the result! There’s one issue on macOS that’s been mulled over, but at this point I’m starting to lean towards getting this shipped on at least Windows for the Firefox 61 release.

Firefox’s Most Wanted: Performance Wins (led by YOU!)

Before we go into the grab-bag list of performance-related fixes – have you seen any patches landing that should positively impact Firefox’s performance? Let me know about it so I can include it in the list, and give appropriate shout-outs to all of the great work going on! That link again!

Grab-bag time

And now, without further ado, a list of performance work that took place in the tree:

(🌟 indicates a volunteer contributor)

Thanks to all of you! Keep it coming!

  1. Pro-tip: if you’re collecting data, consider figuring out how you want to visualize it first, and then make sure that visualization work actually happens. 

  2. since April 1st, these categories have gotten a lot finer-grained 

Nick Cameron: Announcing cargo src (beta)

cargo src is a new tool for exploring your Rust code. It is a cargo plugin which runs locally and lets you navigate your project in a web browser. It has syntax highlighting, jump to definition, type on hover, semantic search, find uses, find impls, and more.

Today I'm announcing version 0.1, our first beta; you should try it out! (But be warned, it is definitely beta quality - it's pretty rough around the edges).

To install: cargo install cargo-src, to run: cargo src --open in your project directory. You will need a nightly Rust toolchain. See below for more directions.


When cargo src starts up it will need to check and index your project. If it is a large project, that can take a while. You can see the status in the bottom left of the web page (this is currently not live; it'll update when you load a file). Build information from Cargo is displayed on the console where you ran cargo src. While indexing, you'll be able to see your code with syntax highlighting, but won't get any semantic information or be able to search.

Actionable identifiers are underlined. Click on a reference to jump to the definition. Click on a definition to search for all references to that definition. Right click on a link to see more options (such as 'find impls').


Hover over an identifier to see its type, documentation, and fields (or similar info).


On the left-hand side there are tabs for searching and for browsing files and symbols (though, to be honest, that last one is not working that well yet). Searching is for identifiers only and is case-sensitive. I hope to support text search and fuzzy search in the future.
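Under the hood, an exact, case-sensitive identifier search amounts to a map from names to locations. A sketch of such an index (the class, file names, and line numbers are made up; cargo src actually gets this data from the compiler):

```python
from collections import defaultdict

# Toy identifier index: identifier -> list of (file, line) locations.
# Lookup is exact and case-sensitive, like cargo src's current search.
class IdentIndex:
    def __init__(self):
        self._index = defaultdict(list)

    def add(self, ident, path, line):
        """Record one occurrence of an identifier."""
        self._index[ident].append((path, line))

    def search(self, ident):
        """Exact, case-sensitive lookup; no fuzzy matching yet."""
        return self._index.get(ident, [])

# Populate with a few invented occurrences.
index = IdentIndex()
index.add("WeatherStation", "src/main.rs", 10)
index.add("WeatherStation", "src/lib.rs", 42)
index.add("weatherstation", "src/other.rs", 7)
```

Fuzzy and text search would need a different structure (n-grams or a trigram index, say), which is one reason they come later.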


A big thank you to Nicole Anderson and Zahra Traboulsi for their work - they've helped tremendously with the frontend, making it look and function much better than my attempts. Thanks to everyone who has contributed by testing or coding!

Cargo src is powered by the same tech as the Rust Language Server, taking its data straight from the compiler. The backend is a Rust web server using Hyper. The frontend uses React and is written in Javascript with a little TypeScript. I think it's a fun project to work on because it's at the intersection of so many interesting technologies. It grew out of an old project - rustw - which was a web-based frontend for the Rust compiler.

Contributions are welcome! It's really useful to file issues if you encounter problems. If you want to get more involved, the code is on GitHub; come chat on Gitter.

K Lars Lohn: Things Gateway - Series 2, Episode 1

In my previous seven-part series on the Things Gateway from Mozilla, I explored the various built-in options for connecting with existing home automation technologies.  While interesting, at that point the Things Gateway hadn't really broken any new ground.  Its features could be found in other home automation projects, arguably in more mature environments.

With the release of version 0.4, the Things Gateway introduces something entirely new that the other products in the field don't yet do. Mozilla is thinking about the Internet of Things in a different way: a way that plays directly to the company's strengths. What if all these home automation devices (switches, plugs, bulbs) spoke a protocol that already exists and is cross-platform, cross-language, and fully open: the Web's protocols?  Imagine if each plug or bulb responded to HTTP requests as if it were a Web app.  You could just use a browser to control them: no need for proprietary software stacks and phone apps.  This could be revolutionary.
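To see why this is appealing, here is a toy "web bulb" using nothing but Python's standard library. The JSON shape, paths, and port are invented for illustration; they are not the actual Web Thing API, which layers a standard schema and discovery on top of this idea:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A hypothetical "smart bulb" exposed over plain HTTP.
bulb = {"name": "porch light", "type": "onOffSwitch", "on": False}

def describe(thing):
    """Return the JSON 'thing description' a browser or client would fetch."""
    return json.dumps(thing).encode("utf-8")

class BulbHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any GET returns the bulb's description as JSON.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(describe(bulb))

    def do_PUT(self):
        # A PUT with {"on": true/false} flips the bulb's state.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        bulb["on"] = bool(body.get("on", False))
        self.send_response(200)
        self.end_headers()

def serve(port=8888):
    """Blocking: serve the bulb until interrupted."""
    HTTPServer(("", port), BulbHandler).serve_forever()
```

Call serve() and a browser pointed at http://localhost:8888/ sees the bulb's description, and any HTTP client can flip it with a PUT: the device really is just a tiny Web app.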

In this, the beginning of Series Two of my blog posts about the Things Gateway, I'm going to show how to use the Things Framework to create virtual Web things.

Right now, the Mozilla team on this project is focused intensely on making the Web Things Framework easy for hardware manufacturers to implement.  Targeting the Maker Movement, the team is pushing to make it easy to enable small Arduino and similar tiny computers to speak the Web of Things (WoT) protocol.  They've created libraries and modules that implement the Things Framework in various languages: Javascript, Java, and Python 3 versions have been written, with C++ and Rust on the horizon.

I'm going to focus on the Python implementation of the Things Framework.  It is pip installable with this command on a Linux machine:

$ sudo pip3 install webthing

The webthing-python github repo provides some programming examples on how to use the module.

One of the first things that a Python programmer will notice about this module is that it closely tracks the structure of a reference implementation written in Javascript. As such, it imposes a rather Javascript-like style and structure onto the Python API. For some who can roll with the punches, this is not a problem; others, like myself, would rather have a more Pythonic API to deal with. So I've wrapped the webthing module with my own pywot (Python Web of Things) module.

pywot paves over some of the awkward syntax exposed in the Python webthing implementation and offers some services that further reduce the amount of code it takes to create a Web thing.

For example, I don't have one of those fancy home weather stations in my yard.  However, I can make a virtual weather station that fetches data from Weather Underground with the complete set of current conditions for my community.  Since I can access a RESTful API from Weather Underground in a Python program, I can wrap that API as a Web Thing.  The Things Gateway then sees it as a device on the network and integrates it into the UI as a sensor for multiple values.

Weather Underground offers a software developer's license that allows up to 500 API calls per day at no cost.  All you have to do is sign up and they'll give you an API key.  Embed that key in a URL and you can fetch data from just about any weather station on their network.  The license agreement says that if you publicly post data from their network, you must provide attribution. However, this application of their data is totally private.  Of course, it could be argued that turning your porch light blue when Weather Underground says the temperature is cold may be considered a public display of WU data.

There is really very little programming that needs to be done to make a Web Thing this way.  Error handling and configuration boilerplate (omitted here) outweigh the actual code that defines my Virtual Weather Station:
class WeatherStation(WoTThing):

    async def get_weather_data(self):
        # fetch current conditions from Weather Underground
        async with aiohttp.ClientSession() as session:
            async with async_timeout.timeout(config.seconds_for_timeout):
                async with session.get(config.target_url) as response:
                    self.weather_data = json.loads(await response.text())
        self.temperature = self.weather_data['current_observation']['temp_f']
        self.wind_speed = self.weather_data['current_observation']['wind_mph']

    # keyword arguments below are reconstructed from pywot's wot_property
    # API; the initial values are approximate
    temperature = WoTThing.wot_property(
        name='temperature',
        initial_value=0.0,
        description='the temperature in ℉',
        value_source_fn=get_weather_data,
        metadata={'units': '℉'}
    )
    wind_speed = WoTThing.wot_property(
        name='wind_speed',
        initial_value=0.0,
        description='the wind speed in MPH',
        metadata={'units': 'MPH'}
    )

The ability to change the status of devices in my home based on the weather is very useful.  I could turn off the irrigation system if there's been enough rain. I could have a light give warning if frost is going to endanger my garden. I could have a light tell me that it is windy enough to go fly a kite.
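
Rules like these reduce to simple predicates over the station's properties. The thresholds below are my own illustrative guesses, not values from the original post or any standard:

```python
# Hypothetical rule helpers for weather-driven home automation;
# all threshold values are illustrative.
def should_skip_irrigation(rain_inches_today):
    # skip watering if we've already had a quarter inch of rain
    return rain_inches_today >= 0.25

def frost_warning(temp_f):
    # warn when the temperature approaches freezing
    return temp_f <= 36.0

def kite_weather(wind_mph):
    # roughly 8-24 MPH is comfortable kite-flying wind
    return 8.0 <= wind_mph <= 24.0
```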

Is the wind calm or is it perfect kite flying weather?

If you want to jump right in, you can see the full code in my pywot GitHub repo.  The demo directory has several examples.  However, in my next posting, I'm going to explain the virtual weather station in detail.

A few words about security: as I said before, the Things Gateway and the Things Framework are experimental software.  They are not yet hardened enough for more than experimental use.  Under no circumstances should a Web Thing be exposed directly to the Internet; they are for trusted local network use only.  Standards for authentication and authorization have not yet been engineered into the product.  They are in the works, hopefully landing with the next iteration, version 0.5.

From Ben Francis of the Mozilla ET IoT team: ...there is currently no authentication and while HTTPS support is provided, it can only really be used with self-signed certificates on a local network. We're not satisfied with that level of security and are exploring ways to provide authentication (in discussions with the W3C WoT Interest Group) and a solution for HTTPS on local networks (via the HTTPS in Local Network Community Group). This means that for the time being we would strongly recommend against exposing native web things directly to the Internet using the direct integration pattern unless some form of authentication is used.

Wladimir PalantThe ticking time bomb: Fake ad blockers in Chrome Web Store

People searching for a Google Chrome ad blocking extension have to choose from dozens of similarly named extensions. Only a few of these are legitimate; most are forks of open source ad blockers trying to attract users with misleading extension names and descriptions. What are these up to? Thanks to Andrey Meshkov we now know what many people already suspected: these extensions are malicious. He found obfuscated code carefully hidden within a manipulated jQuery library that accepted commands from a remote server.

As it happens, I had checked out some fake ad blockers as recently as February. Quite remarkably, all of them turned up clean at the time: the differences from their respective open source counterparts were minor, mostly limited to renaming and added Google Analytics tracking. One of these was the uBlock Plus extension, which has now shown up on Andrey's list of malicious extensions and has been taken down by Google. So at some point in the past two months this extension was updated to add malicious code.

And that appears to be the point here: somebody creates these extensions and carefully measures user counts. Once the user count gets high enough, the extension gets an “update” that attempts to monetize the user base by spying on them. Stealing browsing history was the malicious functionality that Andrey could see; additional code could be pushed out by the server at will. That's what I suspected all along, but this is the first time there is actual proof.

Chrome Web Store has traditionally been very permissive about the uploaded content. Even taking down extensions that infringe trademarks took forever; extensions with misleading names and descriptions, on the other hand, were always considered “fine.” Keep in mind that updating extensions on Chrome Web Store is a fully automatic process with no human review, unlike at Mozilla or Opera. So nobody stops you from turning an originally harmless extension bad.

On the bright side, I doubt that Andrey’s assumption of 20 million compromised Chrome users is correct. There are strong indicators that the user numbers of these fake ad blockers have been inflated by bots, simply because the user count is a contributing factor to the search ranking. I assume that this is also the main reason behind the Google Analytics tracking: whoever is behind these extensions, they know exactly that their Chrome Web Store user numbers are bogus.

For reference, the real ad blocking extensions are:

Georg FritzscheFirefox Data engineering newsletter Q1 / 2018

As the Firefox data engineering teams, we provide other teams with core tools for working with data. This spans from collection through Firefox Telemetry, to storage & processing in our Data Platform, to making data available in Data Tools.

To make new developments more visible we aim to publish a quarterly newsletter. As we skipped one, some important items from Q4 are also highlighted this time.

This year our teams are putting their main focus on:

  • Making experimentation easy & powerful.
  • Providing a low-latency view into product release health.
  • Making it easy to work with events end-to-end.
  • Addressing important user issues with our tools.

Usage improvements

Last year we started to investigate how our various tools are used by people working on Firefox in different roles. From that we started addressing some of the main issues users have.

Most centrally, the Telemetry portal is now the main entry point to our tools, documentation and other resources. When working with Firefox data you will find all the important tools linked from there.

We added the probe dictionary to make it easy to find what data we have about Firefox usage.

For STMO, our Redash instance, we deployed a major UI refresh from the upstream project.

There is new documentation on prototyping and optimizing STMO queries.

Our data documentation saw many other updates, from cookbooks on how to see your own pings and sending new pings to adding more datasets. We also added documentation on how our data pipeline works.

Enabling experimentation

For experimentation, we have focused on improving tooling. Test Tube will soon be our main experiment dashboard, replacing experiments viewer. It displays the results of multivariant experiments that are being conducted within Firefox.

We now have St. Moab as a toolkit for automatically generating experiment dashboards.

Working with event data

To make working with events easier, we improved multiple stages in the pipeline. Our documentation has an overview of the data flow.

On the Firefox side, events can now be recorded through the events API, from add-ons, and whitelisted Firefox content. From Firefox 61, all recorded events are automatically counted into scalars, to easily get summary statistics.
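
The “events counted into scalars” behavior can be modeled in a few lines. The scalar naming scheme below is illustrative only, not the actual Telemetry implementation:

```python
# Toy model: each recorded event, identified by (category, method, object),
# increments a keyed counter scalar so summary statistics are cheap to get.
from collections import Counter

def count_events(events):
    scalars = Counter()
    for category, method, obj in events:
        # hypothetical key format, for illustration
        scalars[f"event_counts.{category}#{method}#{obj}"] += 1
    return scalars

counts = count_events([
    ("navigation", "search", "urlbar"),
    ("navigation", "search", "urlbar"),
    ("pictureinpicture", "create", "player"),
])
```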

Event data is available for analysis in Redash in different datasets. We can now also connect more event data to Amplitude, a product analytics tool. A connection for some mobile events to Amplitude is live, for Firefox Desktop events it will be available soon.

Low-latency release health data

To enable low-latency views into release health data, we are working on improving Mission Control.

It has new features that enable comparing quality measures like crashes release-over-release across channels.

Firefox Telemetry tools

For Firefox instrumentation we expanded on the event recording APIs. To make build turnaround times faster, we now support adding scalars in artifact builds and will soon extend this to events.

Following the recent Firefox data preferences changes, we adapted Telemetry to only differentiate between “release” and “prerelease” data.

This also impacted the measurement dashboard and telemetry.js users, as the current approach to publishing this data from the release channel no longer works.

The measurement dashboard got some smaller usability improvements thanks to a group of contributors. We also prototyped a use counter dashboard for easier analysis.

Datasets & analysis tools

To power LetsEncrypt stats, we publish a public Firefox SSL usage dataset.

The following datasets are newly available in Redash or through Spark:

  • client_counts_daily — This is useful for estimating user counts over a few dimensions and a long history with daily precision.
  • first_shutdown_summary — A summary of the first main ping of a client’s lifetime. This accounts for clients that do not otherwise appear in main_summary.
  • churn — A pre-aggregated dataset for calculating the 7-day churn for Firefox Desktop.
  • retention — A pre-aggregated dataset for calculating retention for Firefox Desktop. The primary use-case is 1-day retention.
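
As a rough illustration of what the retention dataset supports, 1-day retention can be computed from (client, activity date) pairs like this (the field names and sample data are made up, not the dataset's real schema):

```python
# Of the clients seen on the cohort day, what fraction were seen again
# the next day? This is a sketch, not the pre-aggregated dataset's logic.
from datetime import date, timedelta

def one_day_retention(activity, cohort_day):
    next_day = cohort_day + timedelta(days=1)
    cohort = {c for c, d in activity if d == cohort_day}
    retained = {c for c, d in activity if d == next_day and c in cohort}
    return len(retained) / len(cohort) if cohort else 0.0

rows = [
    ("a", date(2018, 4, 16)), ("a", date(2018, 4, 17)),
    ("b", date(2018, 4, 16)),
    ("c", date(2018, 4, 17)),
]
rate = one_day_retention(rows, date(2018, 4, 16))
```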

For analysis tooling we now have Databricks available. This offers instant-on-notebooks with no more waiting for clusters to spin up and supports Scala, SQL and R. If you’re interested sign up to the databricks-discuss mailing list.

We also got the probe info service into production, which scrapes the probe data in Firefox code and makes a history of it available to consumers. This is what powers the probe dictionary, but can also be used to power other data tooling.

Getting in touch

Please reach out to us with any questions or concerns.

Cheers from

  • The data engineering team (Katie Parlante), consisting of
  • The Firefox Telemetry team (Georg Fritzsche)
  • The Data Platform team (Mark Reid)
  • The Data Tools team (Rob Miller)

Firefox Data engineering newsletter Q1 / 2018 was originally published in Georg Fritzsche on Medium, where people are continuing the conversation by highlighting and responding to this story.

Air MozillaThe Joy of Coding - Episode 136

The Joy of Coding - Episode 136 mconley livehacks on real Firefox bugs while thinking aloud.

The Firefox FrontierWorking for Good: Metalwood Salvage of Portland

The web should be open to everyone, a place for unbridled innovation, education, and creative expression. That’s why Firefox fights for Net Neutrality, promotes online privacy rights, and supports open-source … Read more

The post Working for Good: Metalwood Salvage of Portland appeared first on The Firefox Frontier.

Air MozillaWeekly SUMO Community Meeting, 18 Apr 2018

Weekly SUMO Community Meeting This is the SUMO weekly call

Mozilla Addons BlogFriend of Add-ons: Viswaprasath Ks

Please meet our newest Friend of Add-ons, Viswaprasath Ks! Viswa began contributing to Mozilla in January 2013, when he met regional community members while participating in a Firefox OS hackathon in Bangalore, India. Since then, he has been a member of the Firefox Student Ambassador Board, a Sr. Firefox OS app reviewer, and a Mozilla Rep and Tech Speaker.

In early 2017, Viswa began developing extensions for Firefox using the WebExtensions API. From the start, Viswa wanted to invite his community to learn this framework and create extensions with him. At community events, he would speak about extension development and help participants build their first extensions. These presentations served as a starting point for creating the Activate campaign “Build Your Own Extension.” Viswa quickly became a leader in developing the campaign and testing iterations with a variety of different audiences. In late 2017, he collaborated with community members Santosh Viswanatham and Trishul Goel to re-launch the campaign with a new event flow and more learning resources for new developers.

Viswa continues to give talks about extension development and help new developers become confident working with WebExtensions APIs. He is currently creating a series of videos about the WebExtensions API to be released this summer. When he isn’t speaking about extensions, he mentors students in the Tamilnadu region in Rust and Quality Assurance.

These experiences have translated into skills Viswa uses in everyday life. “I learned about code review when I became a Sr. Firefox OS app reviewer,” he says. “This skill helps me a lot at my office. I am able to easily point out errors in the product I am working on. The second important thing I learned by contributing to Mozilla is how to build and work with a diverse team. The Mozilla community has a lot of amazing people all around the world, and there are unique things to learn from each and every one.”

In his free time, Viswa watches tech-related talks on YouTube, plays chess online, and explores new Mozilla-related projects like Lockbox.

He’s also quick to add, “I feel each and every one who cares about the internet should become Mozilla contributors so the journey will be awesome in future.”

If that describes you and you would like to get more involved with the add-ons community, please take a look at our wiki for some opportunities to contribute to the project.

Thank you so much for all of your contributions, Viswa! We’re proud to name you Friend of Add-ons.


The post Friend of Add-ons: Viswaprasath Ks appeared first on Mozilla Add-ons Blog.

The Mozilla BlogVirtual Reality at the Intersection of Art & Technology


“If someone can imagine a world…they can create an experience.”
– Reggie Watts


This is the second video in our four part series around creators, virtual reality, and the open web. As we laid out in the opening post of this series, virtual reality is more than a technology, and it is far more than mere eye-candy. VR is an immensely powerful tool that is honed and developed every day. In the hands of a creator, that tool has the potential to transport audiences into new worlds and provide new perspectives.

It’s one thing to read about the crisis in Sudan, but being transported inside that crisis is deeply affecting in a way we haven’t seen before.

The hard truth is that all the technological capabilities in the world won’t matter if creators don’t have the proper tools to shape that technology into experiences. To make a true impact, technology and art can’t live parallel lives. They must intersect. Bringing together those worlds was the thrust for our VR the People panel at the Sundance Festival.

“You’re gonna end up finding someone who’s a 16-year-old in the basement with an open-source VR headset and some crappy computer and they download free software so they can build [an experience].”
– Brooks Brown, Global Director of Virtual Reality, Starbreeze Studios


That quote above is exactly why Mozilla spent years working to build WebVR, and why we held our panel at Sundance. It’s why we are writing these posts. We’re hoping they reach someone out there – anyone, anywhere – who has a world in their head and a story to tell. We’re hoping they pick up the tools our engineers built and use them in ways that inspire and force those same engineers to build new tools that keep pace with the evolving creative force.

So go ahead, check out our resources and tools. We promise you won't be creating alone. You bring the art, we'll bring the technology, and together we can make something special.

Read more on VR the People

Hacks.Mozilla.OrgHello wasm-pack!

[Illustration: two panels, one showing Ferris the crab with assorted Rust and wasm packages, the other showing the npm wombat with assorted JS, wasm, and CSS/HTML packages; the crab is throwing a package over to the wombat]

As Lin Clark emphasizes in her article about Rust and WebAssembly: the goal of WebAssembly is not to replace JavaScript, but to be an awesome tool to use with JavaScript. Lots of amazing work has been done to simplify crossing the language boundary between JavaScript and WebAssembly, and you can read all about that in Alex Crichton’s post on wasm-bindgen. This post focuses on a different type of JavaScript/Rust integration: package ecosystem and developer workflows.

Both Rust and JavaScript have vibrant package ecosystems. Rust has cargo and JavaScript has several CLI tools, including the npm CLI, that interface with the npm registry. In order for WebAssembly to be successful, we need these two systems to work well together, specifically:

  • Rust developers should be able to produce WebAssembly packages for use in JavaScript without requiring a Node.js development environment
  • JavaScript developers should be able to use WebAssembly without requiring a Rust development environment

✨📦 Enter: wasm-pack.

wasm-pack is a tool for assembling and packaging Rust crates that target WebAssembly. These packages can be published to the npm registry and used alongside other packages. This means you can use them side-by-side with JS and other packages in many kinds of applications, be it a Node.js server-side app, a client-side application bundled by webpack, or any other sort of application that uses npm dependencies. You can find wasm-pack on crates.io and GitHub.

Development of this tooling has just begun and we’re excited to get more developers from both the Rust and JavaScript worlds involved. Both the JavaScript and Rust ecosystems are focused on developer experience. We know first hand that the key to a productive and happy ecosystem is good tools that automate the boring tasks and get out of the developer’s way. In this article, I’ll talk about where we are, where we’re headed, how to get started using the tooling now, and how to get involved in shaping its future.

💁 What it does today

[Illustration: Ferris stands between two open packages, one labelled rust, one labelled npm; a four-stage flow runs from the Rust package to the npm package: a Cargo.toml file, then a .wasm file, then a .wasm and a .js file, then a .wasm, .js, package.json and a README]

Today, wasm-pack walks you through four basic steps to prepare your Rust code to be published as a WebAssembly package to the npm registry:

1. Compile to WebAssembly

wasm-pack will add the appropriate WebAssembly compilation target using rustup and will compile your Rust to WebAssembly in release mode.

To do this, wasm-pack will:

  • Add the wasm32-unknown-unknown compilation target, if needed
  • Compile your Rust project for release using the wasm target
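
These two steps correspond roughly to the following manual commands (a sketch; wasm-pack's exact invocation may differ):

```shell
# Add the WebAssembly target (a no-op if it is already installed)
rustup target add wasm32-unknown-unknown

# Compile the crate for release against that target
cargo build --release --target wasm32-unknown-unknown
```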

2. Run wasm-bindgen

wasm-pack wraps the CLI portion of the wasm-bindgen tool and runs it for you! This does things like wrapping your WebAssembly module in JS wrappers which make it easier for people to interact with your module. wasm-bindgen supports both ES6 modules and CommonJS and you can use wasm-pack to produce either type of package!

To do this, wasm-pack will:

  • If needed, install and/or update wasm-bindgen
  • Run wasm-bindgen, generating a new .wasm file and a .js file
  • Move the generated files to a new pkg directory

3. Generate package.json

wasm-pack reads your Cargo.toml and generates the package.json file necessary to publish your package to the npm registry.

To do this, wasm-pack will:

  • Copy over your project name and description
  • Link to your Rust project’s repository
  • List the generated JavaScript files in the files key. This ensures that those files, and only those files, are included in your npm package. This is particularly important for ensuring good performance if you intend to use this package, or a bundle including this package, in the browser!
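
The Cargo.toml-to-package.json mapping described above can be sketched in a few lines. The helper below is purely illustrative, not wasm-pack's actual code, and the field names in the sample data are hypothetical:

```python
# Sketch of mapping Cargo.toml package metadata to a package.json document.
import json

def cargo_to_package_json(cargo_package, repository_url, generated_files):
    return json.dumps({
        "name": cargo_package["name"],
        "description": cargo_package["description"],
        "version": cargo_package["version"],
        "repository": {"type": "git", "url": repository_url},
        # only the generated JS/wasm files ship in the npm package
        "files": generated_files,
    }, indent=2)

pkg = cargo_to_package_json(
    {"name": "wasm-add", "description": "adds numbers", "version": "0.1.0"},
    "https://github.com/example/wasm-add",
    ["wasm_add_bg.wasm", "wasm_add.js"],
)
```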

4. Documentation

wasm-pack will copy your Rust project's README to the npm package it produces. We've got a lot of great ideas about extending this further to support the Rust ecosystem's documentation tool, rustdoc – more on that in the next section!

🔮 Future Plans

Integrate with rustdoc

The team surveyed developers, and learned that good documentation was the number one feature that developers looked for when evaluating the use of a crate. Contributor Yoshua Wuyts introduced the brilliant idea of generating further content by integrating wasm-pack with the Rust API documentation tool, rustdoc. The Rust-wasm team is committed to making Rust a first class way to write WebAssembly. Offering documentation for Rust-generated WebAssembly packages that's both easy to write and easy to discover aligns neatly with our goals. Read more about the team's thoughts in this issue and join in the discussion!

Manage and Optimize your Rust and JS dependency graphs

The next large piece of development work on wasm-pack will focus on using custom segments in compiled WebAssembly to declare dependencies on local Javascript files or other npm packages.

The preliminary work for this feature has already landed in wasm-bindgen, so the next step will be integrating it into wasm-pack. The naive integration won't be too tricky, but we're excited to explore the opportunities we have to streamline and optimize Rust dependency trees that contain npm dependencies on several levels! This work will be similar to the optimizations that bundlers like webpack deliver, but at the level of Rust dependencies.

There’s a lot of questions we still have to answer and there’s going be a lot of neat engineering work to do. In a few weeks there will be a full post on this topic, so keep an eye out!

[Illustration: Ferris sits in a package on a scale that reads "heavy"; in the distance, lines from several interconnected, dependent packages flow into the package]

Grow Node.js toolchain in Rust

The largest and most ambitious goal of this project is to rewrite the required npm login, npm pack and npm publish steps in Rust so that the required dependency on a Node.js development environment becomes optional for those who don't currently use Node.js in their workflow. As we've said before, we want to ensure that both WebAssembly package producers and users can remain in their familiar workflows. Currently, that is true for JavaScript developers: they do not need a Rust development environment or any knowledge of Rust to get started using a Rust-generated WebAssembly module that's been published with wasm-pack. However, Rust developers still need to install Node.js and npm to publish with wasm-pack. We're excited to change that by writing an npm package publisher in Rust, and who knows, perhaps we can eventually integrate some Rust elements (perhaps compiled to WebAssembly!) into the npm client!

Further collaboration with npm and bundlers

We’re always communicating with the npm CLI team members Kat Marchan and Rebecca Turner, as well as the folks who work on webpack and Parcel– we’re excited to keep working with them to make it easy for developers to release and use WebAssembly code!

🛠 Start using it today!

wasm-pack is currently a command line tool distributed via Cargo. To install it, setup a Rust development environment, and then run:

cargo install wasm-pack

If you aren’t sure where to start, we have a tutorial for you! This tutorial, by Michael Gattozzi and the Rust-wasm working group, walks you through:

  • writing a small Rust library
  • compiling it to WebAssembly, packaging, and publishing with wasm-pack
  • bundling with webpack to produce a small website

[Screen capture: the wasm-pack CLI in action; first we ls a directory containing a Rust crate, then run wasm-pack, which completes in about a minute; then we ls the target directory to see the compiled wasm binary, and ls the pkg directory to see the generated npm package]


The key to all excellent developer tooling is a short feedback cycle between developers of the tool and developers using the tool in their day to day workflows. In order to be successful with wasm-pack, and all of our WebAssembly developer tooling, we need developers of all skill levels and backgrounds to get involved!

Take a look at our Contributor Guidelines and our Issue Tracker (we regularly label things as “good first issue” and provide mentors and mentoring instructions!). We're excited to work with you!

The Firefox FrontierNo-Judgment Digital Definitions: App vs Web App

Just when you think you’ve got a handle on this web stuff, things change. The latest mixup? Apps vs Web Apps. An app should be an app no matter what, … Read more

The post No-Judgment Digital Definitions: App vs Web App appeared first on The Firefox Frontier.

Air Mozilla2018 Global Sprint Orientation Webinar 3 - April 17th, 2018

2018 Global Sprint Orientation Webinar 3 - April 17th, 2018 Learn about working open at the Global Sprint and hear stories and tips from past participants.

Air MozillaHolochain Meetup

Holochain Meetup Holochain Meetup 4/17/2018

Mozilla Open Policy & Advocacy BlogDecision in Oracle v. Google Fair Use Case Could Hinder Innovation in Software Development

The technology industry was dealt a major setback when the Federal Circuit recently decided in Oracle v. Google that Google's use of Java “declaring code” was not a fair use. The copyright doctrine of Fair Use impacts a developer's ability to learn from and improve on the work of others, which is a crucial part of software development. Because of this ruling, copyright law is now at odds with how software is developed.*

This is the second time in this eight year case that the Federal Circuit’s ruling has diverged from how software is written. In 2014, the court decided that declaring code can be copyrighted, a ruling with which we disagreed. Last year we filed another amicus brief in this case, advocating that Google’s implementation of the APIs should be considered a fair use. In this recent decision, the court found that copying the Java declaring code was not a protected fair use of that code.

We believe that open source software is vital to security, privacy, and open access to the internet. We also believe that Fair Use is critical to developing better, more secure, more private, and more open software because it allows developers to learn from each other and improve on existing work. Even the Mozilla Public License explicitly acknowledges that it “is not intended to limit any rights” under applicable copyright doctrines such as fair use.

The Federal Circuit’s decision is a big step in the wrong direction. We hope Google appeals to the Supreme Court and that the Supreme Court sets us back on a better course.


* When Google released its Android operating system, it incorporated some code from Sun Microsystems' Java APIs into the software. Google copied code in those APIs that merely names functions and performs other general housekeeping functions (called “declaring code”) but wrote all the substantive code (called “implementing code”) from scratch. Software developers generally use declaring code to define the names, format, and organization of certain functions, and implementing code to do the actual work (telling the program how to perform the functions). Developers specifically rely on “declaring code” to enable their own programs to interact with other software, resulting in code that is efficient and easy for others to use.

The post Decision in Oracle v. Google Fair Use Case Could Hinder Innovation in Software Development appeared first on Open Policy & Advocacy.

The Mozilla BlogAn Open Call to Storytellers: Make Something Amazing With Virtual Reality and the Open Web


“This is not about creating something that appeals to people simply because of its novel technical achievements; rather it is [about creating] something that has real meaning…”
– Kamal Sinclair, Director of New Frontier Lab Programs at the Sundance Institute


Virtual Reality is coming. In many ways, it’s already here.

Media outlets like the New York Times now regularly create VR content that is showcased alongside its other digital journalism efforts. On the entertainment side, serious buzz is building around the release of Steven Spielberg’s Ready Player One, and home VR headsets are becoming increasingly accessible.

These are all indications that the tides are shifting VR towards the mainstream, but significant obstacles remain for both creating content and enjoying most of the current immersive experiences. These include expensive hardware, confusing distribution methods, complicated configurations, and more.

The mixed reality team at Mozilla devoted two years to brainstorming and experimenting to find a way to bring virtual reality to the web. That’s because we believe the web is the best possible platform for virtual and augmented reality. The ability to share and access virtual experiences with a URL is a game-changer; the key needed to take this amazing technology and make it mainstream.

This type of direct access is critical for filmmakers and creators who wish to use VR as their next storytelling medium.

That’s why Mozilla convened our VR the People panel at this spring’s Sundance Festival. It was an opportunity to connect with some of the world’s most innovative visual storytellers and bring together some incredible names, including Mozilla’s own Sean White, VR journalist Nonny de la Peña, the creatively explosive Reggie Watts and immersive artist Chris Milk, CEO of WITHIN.

That Sundance panel was an extraordinary experience, but true to Mozilla’s mission, that panel had to be more than an isolated event. We want it to be a catalyst for an ongoing effort to blur (and eventually erase) the boundaries between VR, film-making, and visual storytelling. Next week, we’ll be traveling to Tribeca Film Festival in New York to continue the conversation. If you are around, let us know. We’d love to chat.

If you’re a creator reading this, I certainly hope you feel inspired, but I also hope you feel something more. I hope you feel empowered, and then you turn that empowerment into action. I hope you go on to create something amazing that inspires someone else to create something amazing. I hope you create things with VR that we can’t even imagine right now.

Check out the VR tools and resources Mozilla offers to help you create immersive experiences.

The post An Open Call to Storytellers: Make Something Amazing With Virtual Reality and the Open Web appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgA new video series: Web Demystified

We don’t have to tell you that video is a key channel for sharing information and instructional skills especially for students and developers who’ve grown up with YouTube. At Mozilla, we’ve always been a leader in supporting the open technologies that bring unencumbered video into the browser and onto the web.

But on top of the technology, there’s content. In 2018, Mozilla’s Developer Outreach team has launched some projects to share more knowledge in video. Earlier this year, Jen Simmons set a high bar with the launch of Layout Land, a series about “what’s now possible in graphic design on the web — layout, CSS Grid, and more.”

This post introduces Web Demystified, a new series targeting web makers. By web makers, I have in mind everyone who builds things for the web: designers, developers, project and team managers, students, hobbyists, and experts. Today we’ve released the opening two episodes on the Mozilla Hacks YouTube channel, introducing web basics.

Our goal is to provide basic information for beginner web makers, at the start of their web journey. The subject matter will also serve as a refresher on web fundamentals.

Our starting point

To begin, there is one question that needs to be answered: What is the web? And voila, here is our opener:

What to expect next

The next four episodes cover some basic technologies at the heart of the web (HTML, CSS, JavaScript, and SVG). We will release a new show every couple weeks for your viewing pleasure. And then we will continue our journey into details, covering stuff like: how the browser works, image formats for the web, domain names, WebAssembly, and more…

As an added attraction, here is Episode #1 (the second show). It’s all about HTML:

An invitation to participate

In true Mozilla fashion, we’d welcome your help sharing this new content and helping us promote it.

  • If you enjoy those videos, please like them on YouTube, and share them with your friends, colleagues, family, and networks.
  • If you have constructive feedback on the series, please share it here in comments. (Reminder: these shows are aimed at beginners and we aim to keep them brief.)
  • In general, if there are topics you wish to see covered, tell us; and if you have questions about the content itself, ask!
  • Last but not least, if you’re not a native English speaker, please feel free to translate the video captions into your own language. Many people will thank you for that.

Enjoy Web Demystified! And see you in a fortnight.


This Week In RustThis Week in Rust 230

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

From Rust All Hands in Berlin

Crate of the Week

This week's crate is rain, a framework for large-scale distributed computations. Thanks to Vikrant for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

143 pull requests were merged in the last week

New Contributors

  • Alec Mocatta
  • Chris Coulson
  • Fabio B
  • Hero
  • Joshua Barretto
  • Nikita Popov
  • Steven Malis

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust is one of those friends that take some time to get along with, but that you'll finally want to engage with for a long term relationship.

Sylvain Wallez.

Thanks to u/rushmorem and saethlin for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Air MozillaMozilla Weekly Project Meeting, 16 Apr 2018

Mozilla Weekly Project Meeting The Monday Project Meeting

Mozilla Addons BlogApply to Join the Featured Extensions Advisory Board

Are you an extensions enthusiast? Do you want to help people find excellent ways to improve their browsing experience? If so, please consider applying to join our Featured Extensions Community Board!

Every six months, we assemble a small group of dedicated community members to help nominate and select new featured extensions for addons.mozilla.org (AMO) each month. Their picks help millions of Firefox users discover top-quality extensions.

The current board is wrapping up its six-month term, and we are now assembling a new board of talented contributors for July through December.

All active members of the add-ons community — including fans, developers, and advocates — are invited to apply to join the board. Priority will be given to applicants who have not served on the board before, followed by those with previous experience, and finally members of the outgoing board. You can learn more about the duties of the Featured Add-on Advisory Board on MDN web docs.

To apply, please send us an email at amo-featured [at] mozilla [dot] org with your name, a few sentences about how you’re involved with AMO, and why you are interested in joining the board. The deadline is Monday, April 30, 2018 at 23:59 PDT. The new board will be announced shortly thereafter.

The post Apply to Join the Featured Extensions Advisory Board appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.OrgBuild your own web things with the Things Framework

Last year Mozilla started Project Things to help developers connect physical “things” to the web in a safe, secure and interoperable way.

A diagram of the Things Framework

In February we announced the 0.3 release of the Things Gateway which bridges existing smart home devices to the Web of Things. Today we’re excited to tell you about the beginnings of the Things Framework which aims to help developers build their own “native web things”, and the 0.4 release of the Things Gateway which now supports a wider range of smart home devices.

Things Framework

The Things Framework is a collection of reusable software components to help you build your own web things, which directly expose the Web Thing API. That means they can be discovered by a Web of Things client or gateway, which can monitor and control them over the web.

An illustration of the Things Framework by Mozilla

A web thing has a Web Thing Description which describes the device’s capabilities, and exposes a Web Thing REST API and/or WebSocket API, so that it can be monitored and controlled. The Thing Description provides machine-readable metadata about a device and its available properties, actions and events. The Web Thing API lets a client read and write its properties, request actions and subscribe to its events.

You can get started today by turning Android Things devices into web things using our Java web thing library, or if you prefer to build things with Python or NodeJS, we also have you covered there. We have some early examples of how to build web things using WiFi-enabled microcontrollers like the ESP8266, and a serial gateway adapter for chipsets with more constrained resources. We’re releasing these libraries at a very early stage of development so that you can provide us with feedback and help us to help you build better web things.

In the coming days we’ll be blogging about how to use each of these new web thing libraries, to help you get hands-on building your own devices.

These are still experimental technologies in the process of standardisation at the W3C, but we hope our early open source implementations will help developers try out the Web of Things and help us to improve it.

Add Web Things

With the 0.4 release of the Things Gateway, you can now add native web things to your gateway, to control them alongside all your other smart home devices. The advantage of native web things is that they don’t need a custom gateway adapter because they follow a common standard using existing web technologies.

The interface when "scanning for new devices" to add a Web Thing

Web things can broadcast their web address using mDNS or a Bluetooth beacon so that they can be discovered by the gateway, or they can be manually added by their URL. Simply click on the “+” button in the Things screen of the gateway and either allow it to scan for devices, or manually copy and paste a web thing URL using the “Add by URL…” link.

New Add-ons

The 0.4 gateway release also comes with a larger selection of add-on adapters which add support for smart home protocols like Apple HomeKit, and devices from LIFX and Broadlink.

New add-ons for the Things Gateway

You will also notice that, like the gateway itself, add-ons now get automatically updated so you’ll always have the latest version.

Other Changes

There are a host of other changes in the 0.4 release. You can now create rules based on numerical and color properties, we’ve added support for color temperature light bulbs and there’s a new configuration UI for add-ons. There are new developer features like Windows support, the ability to view and download logs and a new local token service. The local token service provides a simple onboarding experience for people who want to use OAuth to access the gateway’s Web Thing API.

You can see a full changelog for the 0.4 release here.

We want to say a big thank you to our growing Mozilla IoT community for contributing some of the new add-ons, providing us with feedback, reporting bugs, writing documentation and for generally helping us push Project Things forward. We can’t do this without you!

You can find out more about the Things Framework on the Mozilla IoT website, and feel free to head over to Discourse with any questions and comments.

QMOFirefox DevEdition 60 Beta 14, April 20th

Hello Mozillians,

We are happy to let you know that Friday, April 20th, we are organizing Firefox DevEdition 60 Beta 14 Testday. We’ll be focusing our testing on: Search Suggestions, Site Storage Redesign UI and Web Compatibility.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Mozilla GFXWebRender newsletter #18

WebRender’s 18th newsletter is here, with its usual share of bug fixes and a few performance improvements. Just after the previous newsletter was published, Patrick Walton landed an experimental integration of pathfinder’s text renderer in WebRender, that can draw native-looking text on Mac using the GPU. The pathfinder integration is taking shape although it is behind a compile time flag for now and there’s some work left to support native-looking text on Windows and Linux.
Another exciting update is the fix for a long-standing issue in WebRender’s integration that was causing Gecko to continuously re-build the display list at 60 frames per second during off-main-thread animations, even though these don’t need to affect the display list. Thanks to Hiro and Kats, we are no longer doing a ton of extra work on the CPU for no reason during animations.

Notable WebRender changes

  • Glenn implemented the infrastructure for retaining more things between frames, and enabled it for some blurs.
  • Jeff made various improvements to the blob image invalidation logic.
  • mbrubeck fixed y-inverted images in servo in some cases.
  • Glenn made drop shadows be drawn as a single primitive.
  • Martin improved border dot handling for small radii.
  • Kvark added logging for driver debug messages.
  • Glenn stored render task cache entries into a freelist to avoid some hash map lookups.
  • Jeff switched reftests to a serial osmesa build to fix Mac CI issues.
  • Eijebong updated the serde dependency.
  • Kvark prevented zero-sized border edges from being generated.
  • Glenn made opaque solid borders use a solid color shadow with clip instead of the border shader. This improves batching and paves the way for other future optimizations.
  • Kvark reorganized some of the shader code to help with the gfx-rs porting effort.
  • waywardmonkeys fixed some clippy lints and typos.
  • waywardmonkeys updated the glutin and winit dependencies in wrench.
  • Glenn implemented support for snapping clip masks.
  • Kvark made shader compilation errors much easier to understand.
  • Nical implemented sampling from render tasks in brush images.
  • Kats implemented various parts required for hooking Gecko’s APZ in WebRender’s threads.
  • Martin made stacking context clipping explicit.
  • Glenn fixed the anti-aliasing of ellipses in some edge cases.
  • jonleighton made serialization use serde_bytes to speed up servo’s display list serialization.
  • pcwalton started integrating pathfinder‘s text renderer in WebRender behind a feature flag.
  • Gankro added some reftests.
  • Glenn fixed fast shadow clips.

Notable Gecko changes

  • Huge thanks to Hiro for filing bugs and fixing many latent issues with the reftest harness and animation throttling, which unblocked us on refresh driver throttling.
  • Kats then throttled the main thread’s refresh driver during off-main-thread animations.
  • Kats audited and enabled WebRender reftests on Windows.
  • Gankro fixed a bug with masks on groups of items.
  • Sotaro relaxed an assertion that was too strict and reenabled some tests.
  • Miko avoided re-building WebRender display lists when the Gecko display list hasn’t changed.
  • Sotaro improved the scheduling of video frames.
  • Sotaro fixed a race condition when recreating canvases and images.
  • Sotaro fixed a blinking issue when resizing the browser window.
  • Kats did a ton of work towards integrating APZ with async scene building.
  • Jeff fixed a blob image invalidation related crash.
  • Jeff did something related to blob image invalidation, which I assume fixed a bug (only he knows).
  • Ryan worked around a crash.
  • Jeff hooked up paint flashing (a debugging tool to visualize how much is invalidated) to blob image invalidation.
  • Gankro fixed a blob image pixel snapping issue.
  • Jeff avoided painting outside the invalidated region.
  • Kats added some fuzzing to a few intermittent tests.
  • Sotaro hooked up the ForcePresent IPDL message.
  • Jeff fixed a bug caused by clips being pushed and popped on different draw targets with blob images.

Enabling WebRender in Firefox Nightly

In about:config, just set “gfx.webrender.all” to true and restart the browser. No need to toggle any other pref.

Note that WebRender can only be enabled in Firefox Nightly. We will make it possible to enable it on other release channels as soon as we consider it stable enough to reach a broader audience.

Reporting bugs

The best place to report bugs related to WebRender in Gecko is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

Daniel Stenbergcurl up 2018 summary

curl up 2018

The event that occurred this past weekend was the second time we gathered a bunch of curl enthusiasts and developers in the same physical room to discuss the past, the present and the future from a curl perspective.

Stockholm, Sweden, was the center of gravity this time when Goto 10 hosted our merry collective. Spring has finally arrived here and as the sun was out a lot, it made it a lovely weekend weather wise too. As a bonus, the little coffee shop on the premises was open all through both days. Just for us!

This time we were 22 peeps coming in from Sweden, Germany, the UK, Spain, the US, and Czechia.

This is what it looked like (photos by me):


We had a bunch of presentations over the two days, done by a bunch of people. I recorded the screens and recorded the voice on most of them, and they're available online. (Apologies for only recording a part of the screen for many of them!)

The talks were around how things work in curl or in the curl project, how people have used curl and a bit about upcoming technologies that we hope to get curl to support (soon?): QUIC, DOH, Alt-Svc, tests, CI, proxies, libcurl in Apache, using curl on a CDN, fuzzing curl, parsing email with curl etc.


We rounded off the Saturday with a twelve question curl quiz. The winner, Fernando, managed to hit the right answer in 8 questions and did it faster than the competition. He got himself a signed copy of Everything curl the second print edition as a prize!


46 Elks was graciously sponsoring us with awesome food and t-shirts.

Sticker Mule provided us with stickers.

Goto 10 let us occupy their place during the weekend when it is otherwise closed!

This event was possible only thanks to their help!


Several people asked me about next year already. I certainly hope we can run another curl up in 2019, but I don't know yet where this should happen. Ideally, I would like to move it around to different countries to give different people the ability to show up easier, but I also value having a local "host" that can provide the room and facilities for us. I'll send out probing questions about the 2019 location later this year. If you have a usable office or another suitable place that could host us, (preferably outside of Germany or Sweden), feel most welcome and encouraged to contact me!

(me, photographed by Christian Schmitz)

Niko MatsakisRust pattern: Rooting an Rc handle

I’ve decided to do a little series of posts about Rust compiler errors. Each one will talk about a particular error that I got recently and try to explain (a) why I am getting it and (b) how I fixed it. The purpose of this series of posts is partly to explain Rust, but partly just to gain data for myself. I may also write posts about errors I’m not getting – basically places where I anticipated an error, and used a pattern to avoid it. I hope that after writing enough of these posts, I or others will be able to synthesize some of these facts to make intermediate Rust material, or perhaps to improve the language itself.

The error: Rc-rooting

The inaugural post concerns Rc-rooting. I am currently in the midst of editing some code. In this code, I have a big vector of data:

struct Data {
  vector: Vec<Datum>,
  // ...
}

struct Datum {
  // ...
}

Many different consumers are sharing this data, but in a read-only fashion, so the data is stored in an Rc<Data>, and each consumer has their own handle. Here is one such consumer:

struct Consumer {
  data: Rc<Data>,
  // ...
}

In that consumer, I am trying to iterate over the data and process it, one datum at a time:

impl Consumer {
  fn process_data(&mut self) {
    for datum in &self.data.vector {
      self.process_datum(datum);
    }
  }

  fn process_datum(&mut self, datum: &Datum) {
    /* ... */
  }
}
This seems reasonable enough, but when I try to compile this, I find that I get a borrow check error:

error[E0502]: cannot borrow `*self` as mutable because `self.data.vector` is also borrowed as immutable
  --> src/
18 |     for datum in &self.data.vector {
   |                   ----------------      - immutable borrow ends here
   |                   |
   |                   immutable borrow occurs here
19 |       self.process_datum(datum);
   |       ^^^^ mutable borrow occurs here

Why is that? Well, the borrow checker is pointing out a legitimate concern here (though the span for “immutable borrow ends here” is odd, I filed a bug). Basically, when I invoke process_datum, I am giving it both &mut self and a reference to a Datum; but that datum is owned by self – or, more precisely, it’s owned by a Data, which is in an Rc, and that Rc is owned by self. This means it would be possible for process_datum to cause that data to get freed, e.g. by writing to self.data:

fn process_datum(&mut self, datum: &Datum) {
  // Overwriting `data` field will lower the ref-count
  // on the `Rc<Data>`; if this is the last handle, then
  // that would cause the `Data` to be freed, in turn invalidating
  // `datum` in the caller we looked at:
  self.data = Rc::new(Data { vector: vec![] });
}

Now, of course you and I know that process_datum is not going to overwrite data, because that data is supposed to be an immutable input. But then again – can we say with total confidence that all other people editing this code now and in the future know and understand that invariant? Maybe there will be a need to swap in new data in the future.

To fix this borrow check error, we need to ensure that mutating self cannot cause datum to get freed. Since the data is in an Rc, one easy way to do this is to get a second handle to that Rc, and store it on the stack:

fn process_data(&mut self) {
  let data = self.data.clone(); // this is new
  for datum in &data.vector {
    self.process_datum(datum);
  }
}
If you try this, you will find the code compiles, and with good reason: even if process_datum were to modify self.data now, we have a second handle onto the original data, and it will not be deallocated until the loop in process_data completes.

(Note that invoking clone on an Rc, as we do here, merely increases the reference count; it doesn’t do a deep clone of the data.)

How the compiler thinks about this

OK, now that we understand intuitively what’s going on, let’s dive in a bit into how the compiler’s check works, so we can see why the code is being rejected, and why the fixed code is accepted.

The first thing to remember is that the compiler checks one method at a time, and it makes no assumptions about what other methods may or may not do beyond what is specified in the types of their arguments or their return type. This is a key property – it ensures that, for example, you are free to modify the body of a function and it won’t cause your callers to stop compiling1. It also ensures that the analysis is scalable to large programs, since adding functions doesn’t make checking any individual function harder (so total time scales linearly with the number of functions2).

Next, we have to apply the borrow checker’s basic rule: “While some path is shared, it cannot be mutated.” In this case, the shared borrow occurs in the for loop:

    for datum in &self.data.vector {
    //            ^^^^^^^^^^^^^^^^ shared borrow

Here, the path being borrowed is self.data.vector. The compiler’s job here is to ensure that, so long as the reference datum is in use, that path is not mutated (because mutating it could cause datum to be freed).

So, for example, it would be an error to write *self = ..., because that would overwrite self with a new value, which might cause the old value of self.data to be freed, which in turn would free the vector within, which would invalidate datum. Similarly, writing self.data = ... could cause the vector to be freed as well (as we saw earlier).

In the actual example, we are not directly mutating self, but we are invoking process_datum, which takes an &mut self argument:

  for datum in &self.data.vector {
            // ----------------- shared borrow
    self.process_datum(datum);
    //   ^^^^^^^^^^^^^ point of error
  }

Since process_datum is declared as &mut self, invoking self.process_datum(..) is treated as a potential write to *self (and self.data), and hence an error is reported.

Now compare what happens after the fix. Remember that we cloned self.data into a local variable and borrowed that:

  let data = self.data.clone();
  for datum in &data.vector {
            // ^^^^^^^^^^^^ shared borrow

Now the path being borrowed is data.vector, and so when we invoke self.process_datum(..), the compiler does not see any potential writes to data (only self). Therefore, no errors are reported. Note that the compiler still assumes the worst about process_datum: process_datum may mutate *self or self.data. But even if it does so, that won’t cause datum to be freed, because it is borrowed from data, which is an independent handle to the vector.


Sometimes it is useful to clone the data you are iterating over into a local variable, so that the compiler knows it will not be freed. If the data is immutable, storing that data in an Rc or Arc makes that clone cheap (i.e., O(1)). (Another way to make that clone cheap is to use a persistent collection type – such as those provided by the im crate.)

If the data is mutable, there are various other patterns that you could deploy, which I’ll try to cover in follow-up articles – but often it’s best if you can get such data into a local variable, instead of a field, so you can track it with more precision.

How we could accept this code in the future

There would be various ways for the compiler to accept this code: for example, we’ve thought about extensions to let you declare the sets of fields accessed by a function (and perhaps the ways in which they are accessed), which might let you declare that process_datum will never modify the data field.

I’ve also kicked around the idea of “immutable” fields from time to time, which would basically let you declare that nobody will overwrite that field, but that gets complicated in the face of generics. For example, one can mutate the field data not just by doing self.data = ... but by doing *self = ...; and the latter might be in generic code that works for any &mut T: this implies we’d have to start categorizing the types T into “assignable or not”3. I suspect we would not go in this direction.


  1. Or crash, as would happen without the compiler’s checks.

  2. Total time for the safety check, that is. Optimizations and other things are sometimes inter-procedural.

  3. Interestingly, C++ does this when you have const fields.

Cameron KaiserTenFourFox FPR7b2 available

Despite being currently in the grip of my annual case of bronchitis, in which I sweat and secrete more foul cloudy phlegm than Jabba the Hutt, TenFourFox Feature Parity Release 7 beta 2 is now available for testing (downloads, hashes, release notes).

The reason for beta 2 was to fix a serious regression caused by issue 495. Since I was issuing a second beta anyway, I took the opportunity to make a few more fixes, including a glitch with image layout that manifested on the MorphOS site, fixing SourceForge's tendency to corrupt our pulldown menus with unrenderable icons, and adding support for TLS SHA-384 ciphers to fix an issue with HTTP/2 sites. There are also some security fixes for testing. Instead of changing the fx shorthand UA template to Firefox 60, I just added a new fx60 one, as well as a user agent for Internet Explorer 11. This means users whose sites incorrectly sniff them and are using the current Firefox 52 impersonation won't unexpectedly get Fx60 features enabled that the browser can't handle right now (this is particularly an issue for Gmail).

As a result, I'm going to rework the user agent support in TenFourFoxBox, meaning version 1.1 will not get an official release (I'll be going to 1.2). Yes, this will include allowing custom agent strings, though you'll need to do that from the command line. It will be left up at the old URL until the new one is completed.

Meanwhile, my excitement over the Talos systems finally shipping cannot be contained by my mucus. It looks like motherboard orders are being fulfilled and are already out in the public. The system depicted there, however, is not the Talos II as shipped. On the other hand, Phoronix actually did get access to a confirmed Talos II, pitted the POWER9 against AMD Epyc and Intel Xeon, and came away very impressed. Especially when we consider that the software packages in question were in no way optimized for POWER9 (and are for x86_64), the Talos has very competitive performance even pitted against non-libre systems. I haven't heard that the full workstations are shipping yet, but given that the rest of the components are all off-the-shelf, it shouldn't take long.

2005 called, folks. They want their PowerPC performance jokes back.

David LawrenceHappy BMO Push Day!

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1450283] JobQueue should treat “no jobs” as a trace-level message, and all other logs as info
  • [1450920] Instant Search doesn’t work when not logged in
  • [1447028] Add auth delegation test script
  • [1446431] Allow Baseline scan to ignore forms that dont need CSRF Tokens
  • [1450791] SES handler needs to support both “event” and “notification” messages (to handle complaint messages in production)
  • [1449282] Create an endpoint that will report back the number of jobs currently in the jobqueue
  • [1450679] Replace custom Sentry integration with Logging
  • [1328900] Create new group called ‘disableusers’ that can only edit the bugmail and disabledtext fields of a user
  • [1450990] Refactor a bunch of the logging config files
  • [1451416] Bugzilla sometimes sends emails to accounts when it shouldn’t
  • [1453126] Bugzilla::Bloomfilter should encourage preserving the input files for its filters
  • [1421110] Add REST API endpoint for /bug/possible_duplicates
  • [1453122] The phabbugz feed daemon should use IO::Async::Timer instead of using while loop
  • [1451960] Improve opengraph metadata for access-denied bugs
  • [1451573] Remove Firefox Marketplace custom forms
  • [1453681] Phabricator when searching for a specific project name can return more than one match
  • [1453697] ensure error_message is escaped in opengraph description

discuss these changes on

Mozilla VR BlogThis Week in Mixed Reality: Issue 2

We’ve made great progress this week in the three broad areas of Browsers, Social and the Content Ecosystem.


Browsers

As we’re building out Firefox Reality, the front-end is really coming together as we continue to integrate the UI/UX designs.

  • Added a more complete 2D browser chrome with tabs & dialogs
  • Fixed the display density and UI scale factors by adding a formula to improve the size calculation based on device DPI
  • Now we have a Gradle flavor to generate the Daydream build without flat mode
  • Added private browsing
  • Started building a virtual keyboard
  • UI specs completed: error pages, a virtual keyboard, the button tray, focus mode, and private browsing. There is also now a user flow demonstrating how Firefox Reality can be used in an educational setting

Check out our source code; it’s now public!


Social

We are working on a new social platform for Mixed Reality.

In the last week, we have:

  • First public VR standup! We will be doing these publicly going forward. Check out the public WebVR channel below on Friday mornings, US Pacific for details!
  • Navmesh work landed, which prevents teleporting and walking through walls
  • Three.js editor enhancements by Kevin Lee that allow laying out primitive-based colliders for physics

Join our public WebVR Slack #social channel to participate in the discussion!

Content ecosystem

We finished our Unity WebVR exporter v1.2.0 release! This release is heavily focused on input systems, and we added enhancements to make it easier for developers to integrate this package into their scenes.

Some highlights include:

  • Added keyboard and mouse controls for navigating scenes on desktop
  • Upgraded WebVR Polyfill, for improved mobile support, which supports new Sensor APIs in Chrome
  • Major refactor to camera controls in Unity, for ease of development and maintenance


We'd like to invite Unity game designers and developers to try it out and reach out to us on the public WebVR Slack #unity channel, where we have also recently moved our development discussion.

Don Martiworking post-creepy ads, and stuff

Post-creepy web ad sightings: What's next for web advertising after browser privacy improvements and regulatory changes make conventional adtech harder and harder?

The answer is probably something similar to what's already starting to pop up on niche sites. Here's a list of ad platforms that work more like print, less like spam: list of post-creepy web ad systems. Comments and suggestions welcome (mail me, or do a GitHub pull request from the link at the bottom).

Fun with bug futures: we're in Mozilla's Internet Health Report. Previous items in that series:

ICYMI: Mozilla experiment aims to reduce bias in code reviews

Bonus links

Lots of GDPR and next-generation web ads stories in the past few weeks. A few must-read ones.

Publishers Haven't Realized Just How Big a Deal GDPR is: My advice to you is rethink your approach to GDPR. This is your chance to be a part of the solution, rather than being part of the problem.

Brand Safety Is Not Driving Media Allocation Decisions in 2018/19

Mark Ritson: This is a critical point in marketers’ relationship with data privacy

What GDPR really means

Mozilla Localization (L10N)L10N Report: April Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.


New localizers

  • German: Felix came to us through the Common Voice project. He is now actively involved in Engagement and MDN projects.
  • For the new Ixil locale, we have 4 new l10n community members: Miguel Eliseo, Miguel, Manuela and Gerardo. Welcome to all!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Ixil (“ixl”) is a new community that just got added this week, and already started working on Focus for Android. Ixil is a Mayan language spoken in Guatemala, and you can learn more about it here. Welcome!

New content and projects

What’s new or coming up in Firefox desktop

In the past weeks we have completed the migration to Fluent of all XUL panes in Preferences. Today we landed one more major bug, migrating about 150 strings that cover the XUL portion of all the subdialogs (Fonts, Languages, Proxy, Colors, etc.). This leaves out only a few edge cases that require code changes in Fluent itself, and some strings in .properties files used also outside of Preferences. As of today, only 14 strings remain in DTD files, and 115 in .properties.

Given the extent of the changes we’re doing, make sure to test your localization on Nightly, and report any issue you find in migrated strings, or in the way Preferences work with Fluent.

In case you’ve missed it, this week we also published a blog post explaining what’s been done to integrate CLDR data into Firefox in the past months, and the next steps for 2018.

One final reminder: Firefox 60 is an ESR version, and it’s possible to localize strings only until April 25. Make sure to complete translations before this deadline, and give yourself enough time to test, otherwise they won’t be included in this release.

What’s new or coming up in mobile

This month has been packed with good stuff in mobile land. Firefox for iOS v11 just launched with RTL support, which means we are now shipping our Arab, Hebrew, Persian and Urdu localizations. We now have RTL support on all our mobile projects, which is a really great accomplishment. Congrats and THANKS to all those involved in this! You can also learn more about this latest update of Firefox iOS 11 on the Mozilla blog: Latest Firefox for iOS Now Available with Tracking Protection by Default plus iPad Features

We’re now shipping eight new locales on Firefox iOS with this new version: Aragonese (an), Arabic (ar), Persian (fa), Hebrew (he), Croatian (hr), Georgian (ka), Occitan (oc) and Urdu (ur). Congrats to all these teams!

Vietnamese (vi) is a new language that shipped on Firefox Android 59 last month, so congrats to the team on getting that going too.

On Focus Android side, we had five new locales ship with v4.1: an (Aragonese), gu-IN (Gujarati), hr (Croatian), oc (Occitan) and tt (Tatar). We are now at a total of 75 shipping locales on Focus for Android \o/

To conclude, just like for desktop, a friendly reminder that it’s only possible to localize strings for Firefox Android 60 until April 25.

What’s new or coming up in web projects

The CPG, or Community Participation Guidelines, has been published for a while. We have now made it a bit more discoverable by adding it to the Pontoon Terms page. Please give the document a read if you haven’t had a chance before. Whenever the guidelines are updated, you will be prompted to review the amendment before proceeding on Pontoon. We encourage you to periodically refer to these guidelines when collaborating with others from different regions and cultures, and especially when resolving differences.

The Common Voice project has brought quite a few new contributors to many communities. This is very exciting! These contributors are new to Pontoon, probably new to the localization process and to the way Mozilla localization community works. As a manager or a translator for enabled locales, please review the suggestions in a timely manner, provide constructive feedback, and re-evaluate the roles of these new localizers based on the quality of their work. Additionally, reach out to them, and get them signed up to the web project mailing list.

What’s new or coming up in Foundation projects

Facebook & Privacy campaign


Last month we reported that things were quiet on the campaign side of things. Well, it didn’t last long. All of you should be aware of the Facebook / Cambridge Analytica scandal by now. We launched a Rapid Response campaign, and this is the first time we’re localizing it, so here are some details of what happened over the past few weeks.

Here’s a rough timeline of events on the Policy/Advocacy side:

  • The news broke over the weekend of March 18th.
  • On Monday, we decided to launch a Rapid Response campaign and to localize it. First time we're doing that!
  • By Monday evening, we had a campaign strategy. An email & a petition were drafted and went through an expedited approval process. Many teams at Mozilla were all hands on deck, including top executives. We began localizing the email right after this.
  • On Tuesday morning we evaluated our options to localize the petition on our brand new website, which does not have localization support. We found a hacky way to publish multiple pages with the same petition, so we just did that for localized petitions. It’s not perfect, but we had to be creative!
  • By Tuesday evening, the email was translated into French, German, Spanish and Brazilian Portuguese. Translated emails were coded into our emailing platform and email proofs were sent to localizers for a formal approval. Translated petitions were also pushed live.
  • On Wednesday, localized emails were sent, covering the vast majority of Mozilla subscribers!

We were able to help launch the initial petition and email in five languages in less than 72 hours. It’s been incredibly helpful to be able to mobilize so many people in just a few hours. It turned out that the multiple initiatives launched by Mozilla and other organizations were noticed by Facebook, and they did what we asked: they updated their privacy settings.

Thanks to everyone who helped translate, review and approve these messages!

What’s next?

This is a first win towards a healthier Internet, but we won’t stop just yet. It’s actually a great way to engage people on these critical issues. The campaign will continue over the next few months. We will keep widening the debate to other aspects of online privacy where we can, rather than focusing exclusively on Facebook, and we will try to move the debate outside the U.S., because everyone is affected by these issues.

On the localization side, we’re not yet in an ideal position where we can scale our localization efforts, but this first Rapid Response campaign has been encouraging, and it will help shape the next steps of our work.

Internet Health Report

Mozilla released the Internet Health Report 2018, and you should check it out! It comes right on time, at a moment when data & privacy issues are in the headlines. There is also an interesting piece on Building a multilingual Internet. Feel free to report issues using this GitHub repository.

Newly published localizer facing documentation

Pontoon documentation has been updated to reflect the new search capabilities, and possibility to search and translate across all projects.


  • Read this blog post in case you want to know what’s cooking in regards to localization community events planning in 2018 (and more)!
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)


Friends of the Lion


Image by Elio Qoshi

Huge thank you to Guillermo Movia, Drashti, Irvin Chen, Cécile Bertin and Mozinet for reporting issues on the Internet Health Report. And to Cécile for also completing the French translation of the Talk plugin from the Coral Project, which is used in the report.

  • Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Air Mozilla: GDPR Expert Panel

GDPR Expert Panel GDPR was formally adopted by the EU Parliament in the spring of 2016. It will come into force on May 25, 2018 but as this...

The Firefox Frontier: Cross-site tracking: Let’s unpack that

If you’ve been following the Facebook hearings this week, there’s one term that has come up over and over: “cross-site tracking” — and for good reason. For something that’s getting … Read more

The post Cross-site tracking: Let’s unpack that appeared first on The Firefox Frontier.

Will Kahn-Greene: AWS Lambda dev with Python

A story of a pigeon

I work on Socorro which is the crash ingestion pipeline for Mozilla's products.

The pipeline starts at the collector which handles incoming HTTP POST requests, pulls out the payload, futzes with it a little, and then saves it to AWS S3. Socorro then processes some of those crashes in the processor. The part that connects the two is called Pigeon. It was intended as a short-term solution to bridge the collector and the processor, but it's still around a year later and the green grass grows all around all around and the green grass grows all around.

Pigeon is an AWS Lambda function that triggers on S3 ObjectCreated:Put events, looks at the filename, and then adds things to the processing queue depending on the filename structure. We called it Pigeon for various hilarious reasons that are too mundane to go into in this blog post.
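As a hedged illustration of that shape of function, a minimal S3-triggered handler in the spirit of Pigeon might look like this (the helper name `extract_crash_id` and the exact key-layout checks are assumptions for illustration, not Pigeon's actual code):

```python
# Minimal sketch of an S3 ObjectCreated:Put Lambda handler, in the spirit
# of Pigeon. Helper name and key layout are assumptions, not Pigeon's API.

def extract_crash_id(key):
    """Return the crash id for keys shaped like v2/raw_crash/<nnn>/<date>/<id>."""
    parts = key.split('/')
    if len(parts) == 5 and parts[:2] == ['v2', 'raw_crash']:
        return parts[-1]
    return None

def handler(event, context):
    """Look at each S3 key in the event and collect the ids worth queueing."""
    to_queue = []
    for record in event.get('Records', []):
        key = record['s3']['object']['key']
        crash_id = extract_crash_id(key)
        if crash_id is not None:
            to_queue.append(crash_id)  # the real function publishes to a queue
    return to_queue
```

The real Pigeon does more (logging, publishing to RabbitMQ), but the trigger-inspect-queue flow is the same.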

It's pretty basic. It doesn't do much. It was a short term solution we thought we'd throw away pretty quickly. I wrote some unit tests for the individual parts of it and a "client" that invoked the function in a faux AWS Lambda like way. That was good enough.

But then some problems

Pigeon was written with Python 2 because at the time AWS Lambda didn't have a Python 3 runtime. That changed--now there's one with Python 3.6.

In January, I decided to update Pigeon to work with Python 3.6. I tweaked the code, tweaked the unit tests, and voila--it was done! Then we deployed it to our -stage environment where it failed epically in technicolor glory (but no sound!) and we had to back it out and return to the Python 2 version.

What happened? I'll tell you what happened--we had a shit testing environment. Sure, we had tests, but they lacked several things:

  1. At no point do we test against the build artifact for Pigeon. The build artifact for AWS Lambda jobs in Python is a .zip file that includes the code and all the libraries that it uses.
  2. The tests "invoke" Pigeon with a "client", but it was pretty unlike the AWS Lambda Python 3.6 runtime.
  3. Turns out I had completely misunderstood how I should be doing exception handling in AWS Lambda.

So our tests tested some things, but missed some important things and a big bug didn't get caught before going to -stage.

It sucked. I felt chagrinned. I like to think I have a tolerance for failure since I do it a lot, but this felt particularly faily and some basic safeguards would have prevented it from happening.

Fleshing out AWS Lambda in Python project

We were thinking of converting another part of the Socorro pipeline to AWS Lambda, but I put that on hold until I had wrapped my head around how to build a development environment that included scaffolding for testing AWS Lambda functions in a real runtime.

Miles or Brian mentioned aws-sam-local. I looked into that. It's written in Go, they suggest installing it with npm, it does a bunch of things, and it has some event generation code. But for the things I needed, it seemed like it would just be a convenience cli for docker-lambda.

I had been aware of docker-lambda for a while, but hadn't looked at the project recently. They added support for passing events via stdin. Their docs have examples of invoking Lambda functions. That seemed like what I needed.

I took that and built the developer environment scaffolding that we've got in Pigeon now. Further, I decided to use this same model for future AWS Lambda function development.

How does it work?

Pigeon is a Python project, so it uses Python libraries. I maintain those requirements in a requirements.txt file.

I install the requirements into a ./build directory:

$ pip install --ignore-installed --no-cache-dir -r requirements.txt -t build/

I copy the Pigeon source into that directory, too:

$ cp build/

That's all I need for the runtime to use.

The tests are in the tests/ directory. I'm using pytest and in the file have this at the top:

import os
import sys

# Insert build/ directory in sys.path so we can import pigeon
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'build'))

I'm using Docker and docker-compose to aid development. I use a test container which is a python:3.6 image with the test requirements installed in it.

In this way, tests run against the ./build directory.

Now I want to be able to invoke Pigeon in an AWS Lambda runtime so I can debug issues and also write an integration test.

I set up a lambda-run container that uses the lambci/lambda:python3.6 image. I mount ./build as /var/task since that's where the AWS Lambda runtime expects things to be.
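A docker-compose service along those lines might look like the following sketch (the service name and exact fields are assumptions; the real configuration lives in the Pigeon repository):

```yaml
# Hypothetical docker-compose.yml fragment; not Pigeon's actual file.
version: '2'
services:
  lambda-run:
    image: lambci/lambda:python3.6
    environment:
      - DOCKER_LAMBDA_USE_STDIN=1
    volumes:
      - ./build:/var/task
```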

I created a shell script for invoking Pigeon:


#!/bin/bash

docker-compose run \
    --rm \
    -v "$PWD/build":/var/task \
    -e DOCKER_LAMBDA_USE_STDIN=1 \
    --service-ports \
    lambda-run pigeon.handler "$@"

That's based on the docker-lambda invoke examples.

Let's walk through that:

  1. It runs the lambda-run container with the services it depends on as defined in my docker-compose.yml file.
  2. It mounts the ./build directory as /var/task because that's where the runtime expects the code it's running to be.
  3. The DOCKER_LAMBDA_USE_STDIN=1 environment variable causes it to look at stdin for the event. That's pretty convenient.
  4. It invokes pigeon.handler, which is the handler function in the pigeon Python module.

I have another script that generates fake AWS S3 ObjectCreated:Put events. I cat the result of that into the invoke shell script. That runs everything nicely:

$ ./bin/ --key v2/raw_crash/000/20180313/00007bd0-2d1c-4865-af09-80bc00180313 > event.json
$ cat event.json | ./bin/
Starting socorropigeon_rabbitmq_1 ... done
START RequestId: 921b4ecf-6e3f-4bc1-adf6-7d58e4d41f47 Version: $LATEST
{"Timestamp": 1523588759480920064, "Type": "pigeon", "Logger": "antenna", "Hostname": "300fca32d996", "EnvVersion": "2.0", "Severity": 4, "Pid": 1, "Fields": {"msg": "Please set PIGEON_AWS_REGION. Returning original unencrypted data."}}
{"Timestamp": 1523588759481024512, "Type": "pigeon", "Logger": "antenna", "Hostname": "300fca32d996", "EnvVersion": "2.0", "Severity": 4, "Pid": 1, "Fields": {"msg": "Please set PIGEON_AWS_REGION. Returning original unencrypted data."}}
{"Timestamp": 1523588759481599232, "Type": "pigeon", "Logger": "antenna", "Hostname": "300fca32d996", "EnvVersion": "2.0", "Severity": 6, "Pid": 1, "Fields": {"msg": "number of records: 1"}}
{"Timestamp": 1523588759481796864, "Type": "pigeon", "Logger": "antenna", "Hostname": "300fca32d996", "EnvVersion": "2.0", "Severity": 6, "Pid": 1, "Fields": {"msg": "looking at key: v2/raw_crash/000/20180313/00007bd0-2d1c-4865-af09-80bc00180313"}}
{"Timestamp": 1523588759481933056, "Type": "pigeon", "Logger": "antenna", "Hostname": "300fca32d996", "EnvVersion": "2.0", "Severity": 6, "Pid": 1, "Fields": {"msg": "crash id: 00007bd0-2d1c-4865-af09-80bc00180313 in dev_bucket"}}
{"Timestamp": 1523588759497482240, "Type": "pigeon", "Logger": "antenna", "Hostname": "300fca32d996", "EnvVersion": "2.0", "Severity": 6, "Pid": 1, "Fields": {"msg": "00007bd0-2d1c-4865-af09-80bc00180313: publishing to socorrodev.normal"}}
END RequestId: 921b4ecf-6e3f-4bc1-adf6-7d58e4d41f47
REPORT RequestId: 921b4ecf-6e3f-4bc1-adf6-7d58e4d41f47 Duration: 101 ms Billed Duration: 200 ms Memory Size: 1536 MB Max Memory Used: 28 MB
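
The fake-event script used above can be sketched roughly as follows (the function name and the minimal set of event fields are assumptions, modeled on the S3 notification format; the real script emits whatever Pigeon inspects):

```python
import json

def fake_s3_put_event(bucket, key):
    """Build a minimal fake S3 ObjectCreated:Put event for piping into
    the invoke script. Only fields a handler typically reads are included."""
    return {
        'Records': [{
            'eventSource': 'aws:s3',
            'eventName': 'ObjectCreated:Put',
            's3': {
                'bucket': {'name': bucket},
                'object': {'key': key},
            },
        }]
    }

if __name__ == '__main__':
    event = fake_s3_put_event(
        'dev_bucket',
        'v2/raw_crash/000/20180313/00007bd0-2d1c-4865-af09-80bc00180313')
    print(json.dumps(event))
```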


Then I wrote an integration test that cleared the RabbitMQ queue, ran the invoke script with a bunch of different keys, and then checked what was in the processor queue.

Now I've got:

  • tests that test the individual bits of Pigeon
  • a way to run Pigeon in the same environment as -stage and -prod
  • an integration test that runs the whole setup

A thing I hadn't mentioned was that Pigeon's documentation is entirely in the README. The docs cover setup and development well enough that I can hand this off to normal people and future me. I like simple docs. Building scaffolding such that docs are simple makes me happy.

The Mozilla Blog: Latest Firefox for iOS Now Available with Tracking Protection by Default plus iPad Features

Today, we’re rolling out Tracking Protection by default for Firefox for iPhone and iPad users.  It’s been a little over six months since we added Tracking Protection as an option in regular browsing. With Tracking Protection now turned on by default in both regular and private browsing mode, you can easily control which sites you want to deploy Tracking Protection for — like retail sites — and which ones you don’t. It’s simple, just tap your menu button and you’ll see it listed there!

Tracking Protection uses the same ad blocking technology as Firefox Focus for Android and iOS, Firefox for Desktop and Firefox for Android.

At Mozilla we’ve always believed it’s important to respect people’s privacy and give them the control to decide which information they want to share, and what information they don’t. Now more than ever consumers are demanding this from the companies with whom they share their data. As an added bonus most people using tracking protection will notice that their web content loads faster for many websites, allowing them to save on data usage and providing optimized battery performance.

New iPad Features

Re-order your Tabs

When you’re on the web you might be checking out recipes for dinner, then checking email, or looking up the weather for your weekend. To help you get quickly to the places you want to visit, we’ve added support for organizing and prioritizing your tabs. For example, if you’re waiting for a work email from your boss, you can now move that tab to either the far left or the far right so you can easily access it. Simply long-press the specific tab and drag it into the order that works best for you.

Drag & Drop links between Firefox and other Apps

Do you find yourself on a website and you want to share the link with a friend, but find the task of cutting and pasting the link to an email or another app a bit tedious?  Well, we’ve made it easier for you. On your iPad, you can now drag and drop links to and from Firefox to any application, whether it’s an email or a tweet.

More iPad Keyboard Shortcuts

We’re all about making your web browsing experience more efficient, and in this case we’re talking about keyboard shortcuts. We’ve implemented standard navigation keys as well as several improvements for easier tab tray navigation, like Command + Option + Tab to get to and from the all-tabs view.

To learn more about our full list of shortcuts, visit here.

To get the latest version of Firefox for iOS, download it from the App Store.

The post Latest Firefox for iOS Now Available with Tracking Protection by Default plus iPad Features appeared first on The Mozilla Blog.

The Firefox Frontier: Protect Your Privacy With Firefox for iOS 11

Privacy is important on mobile devices. That’s why Firefox for iOS 11 is shipping with improved, always-on Tracking Protection. How Tracking Protection Works It’s pretty simple: Firefox for iOS Tracking … Read more

The post Protect Your Privacy With Firefox for iOS 11 appeared first on The Firefox Frontier.

Manish Goregaokar: Down a Rusty Rabbit Hole

Last week I fell down a rather interesting rabbit hole in Rust, which was basically me discovering a series of quirks of the Rust compiler/language, each one leading to the next when I asked “why?”.

It started when someone asked why autogenerated Debug impls use argument names like __arg_0 which start with a double underscore.

This happened to be my fault. The reason we used a double underscore was that while a single underscore tells rustc not to warn about a possibly-unused variable, there’s an off- by-default clippy lint that warns about variables that start with a single underscore that are used, which can be silenced with a double underscore. Now, the correct fix here is to make the lint ignore derive/macros (which I believe we did as well), but at the time we needed to add an underscore anyway so a double underscore didn’t seem worse.

Except of course, this double underscore appears in the docs. Oops.

Ideally the rustc derive infrastructure would have a way of specifying the argument name to use so that we can at least have descriptive things here, but that’s a bit more work (I’m willing to mentor this work though!). So I thought I’d fix this by at least removing the double underscore, and making the unused lint ignore #[derive()] output.

While going through the code to look for underscores I also discovered a hygiene issue. The following code throws a bunch of very weird type errors:

pub const __cmp: u8 = 1;

#[derive(PartialOrd, PartialEq)]
pub enum Foo {
    A(u8), B(u8)
}

error[E0308]: mismatched types
 --> src/
6 |     A(u8), B(u8)
  |       ^^^ expected enum `std::option::Option`, found u8
  = note: expected type `std::option::Option<std::cmp::Ordering>`
             found type `u8`

This is because the generated code for PartialOrd contains the following:

match foo.cmp(bar) {
    Some(Ordering::Equal) => .....,
    __cmp => __cmp,
}

__cmp can both be a binding to a wildcard pattern match as well as a match against a constant named __cmp, and in the presence of such a constant it resolves to the constant, causing type errors.

One way to fix this is to bind foo.cmp(bar) to some temporary variable x and use that directly in a _ => x branch.

I thought I could be clever and try cmp @ _ => cmp instead. match supports syntax where you can do foo @ <pattern>, where foo is bound to the entire matched variable. The cmp here is unambiguously a binding; it cannot be a pattern. So no conflicting with the const, problem solved!

So I made a PR for both removing the underscores and also fixing this. The change for __cmp is no longer in that PR, but you can find it here.

Except I hit a problem. With that PR, the following still breaks:

pub const cmp: u8 = 1;

#[derive(PartialOrd, PartialEq)]
pub enum Foo {
    A(u8), B(u8)
}

throwing a slightly cryptic error:

error[E0530]: match bindings cannot shadow constants
4 | pub const cmp: u8 = 1;
  | ---------------------- a constant `cmp` is defined here
9 |     B(u8)
  |       ^^^ cannot be named the same as a constant

You can see a reduced version of this error in the following code:

pub const cmp : u8 = 1;

fn main() {
    match 1 {
        cmp @ _ => ()
    }
}

Huh. Wat. Why? cmp @ _ seems to be pretty unambiguous, what’s wrong with it shadowing a constant?

Turns out bindings cannot shadow constants at all, for a rather subtle reason:

const A: u8 = ...; // A_const
let A @ _ = ...; // A_let
match .. {
    A => ...; // A_match
}

What happens here is that constants and variables occupy the same namespace. So A_let shadows A_const here, and when we attempt to match, A_match is resolved to A_let and rejected (since you can’t match against a variable), and A_match falls back to resolving as a fresh binding pattern, instead of resolving to a pattern that matches against A_const.

This is kinda weird, so we disallow shadowing constants with variables. This is rarely a problem because variables are lowercase and constants are uppercase. We could technically allow this language-wise, but it’s hard on the implementation (and irrelevant in practice) so we don’t.

So I dropped that fix. The temporary local variable approach is broken as well since you can also name a constant the same as the local variable and have a clash (so again, you need the underscores to avoid surprises).

But then I realized that we had an issue with removing the underscores from __arg_0 as well.

The following code is also broken:

pub const __arg_0: u8 = 1;

#[derive(Debug)]
struct Foo(u8);


error[E0308]: mismatched types
 --> src/
3 | #[derive(Debug)]
  |          ^^^^^ expected mutable reference, found u8
  = note: expected type `&mut std::fmt::Formatter<'_>`
             found type `u8`

You can see a reduced version of this error in the following code:

pub const __arg_0: u8 = 1;

fn foo(__arg_0: bool) {}

error[E0308]: mismatched types
 --> src/
3 | fn foo(__arg_0: bool) {}
  |        ^^^^^^^ expected bool, found u8


This breakage is not an issue with the current code because of the double underscores – there’s a very low chance someone will create a constant that is both lowercase and starts with a double underscore. But it’s a problem when I remove the underscores since that chance shoots up.

Anyway, this failure is even weirder. Why are we attempting to match against the constant in the first place? fn argument patterns [1] are irrefutable, i.e. all possible values of the type should match the argument. For example, fn foo(Some(foo): Option<u8>) {} will fail to compile with “refutable pattern in function argument: None not covered”.

There’s no point trying to match against constants here; because even if we find a constant it will be rejected later. Instead, we can unambiguously resolve identifiers as new bindings, yes?


Firm in my belief, I filed an issue.

I was wrong, it’s not going to always be rejected later. With zero-sized types this can totally still work:

struct S;

const C: S = S;

fn main() {
    let C = S;
}

Here because S has only one state, matching against a constant of the type is still irrefutable.

I argued that this doesn’t matter – since the type has a single value, it doesn’t matter whether we resolved to a new binding or the constant; the value and semantics are the same.

This is true.


Except for when destructors come in.

It was at this point that my table found itself in the perplexing state of being upside-down.

This is still really fine, zero-sized-constants-with-destructors is a pretty rare thing in Rust and I don’t really see folks relying on this behavior.

However I later realized that this entire detour was pointless because even if we fix this, we end up with a way for bindings to shadow constants. Which … which we already realized isn’t allowed by the compiler till we fix some bugs.


The actual fix to the macro stuff is to use hygienic generated variable names, which the current infrastructure supports. I plan to make a PR for this eventually.

But it was a very interesting dive into the nuances of pattern matching in Rust.

  1. Yes, function arguments in Rust are patterns. You can totally do things like (a, b): (u8, u8) in function arguments (like you can do in let)

Firefox Nightly: These Weeks in Firefox: Issue 36


  • Dark theme for the new tab page landed in Nightly!
    • The Activity Stream tab with dark colours.

      If Batman had a browser, this would be it.

Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

Project Updates


Activity Stream

  • The thumbnail service will now use tracking protection when taking screenshots.
  • Wide Layout Experiment early results are in – neutral to slightly positive preference to wide layout in Beta 60!

Browser Architecture


Policy Engine

  • Finished work on policies for 60: 45 from the MVP, and 14 extras! Everything targeted for 60 has been uplifted to beta.
  • Tweaked names and arguments for some policies.
  • We got our first request for a policy from someone in the wild!



Search and Navigation

Address Bar & Search


Sync / Firefox Accounts

Web Payments

  • Milestone 1 completed!
  • Milestone 2 is mainly focused on address/credit card add/edit screens that show in the PaymentRequest dialog
  • Team has completed 49% of the Milestone 3 commitment.

Michael Kelly: Data Collection at Mozilla: Browser Errors

I’ve spent the past few months working on a project involving data collection from users of Nightly, the pre-release channel of Firefox that updates twice a day. I’d like to share the process from conception to prototype to illustrate:

  1. One of the many ways ideas become reality at Mozilla, and
  2. How we care about and protect user privacy with regards to data collection.

Maybe JavaScript errors are a bad thing

The user interface of Firefox is written in JavaScript (along with XUL, HTML, and CSS). JavaScript powering the UI is “privileged” JavaScript, which is separate from JavaScript in a normal webpage, and can do things that normal webpages cannot do, such as read the filesystem.

When something goes wrong and an error occurs in this privileged JavaScript (let’s call them “browser errors”), it ends up logged to the Browser Console. Most users aren’t looking at the Browser Console, so these errors often go unnoticed.

While working on Shield, I found that our QA cycle [1] involved a lot of time noticing and reporting errors in the Browser Console. Our code would often land on the Nightly channel before QA review, so why couldn’t we just catch errors thrown from our code and report them somewhere? [2]

So let’s make a great plan

I told my boss a few times that browser error collection was a problem that I was interested in solving. I was pretty convinced that there was useful info to be gleaned from collecting these errors, but my beliefs aren’t really enough to justify building a production-quality error collection service. This was complicated by the fact that errors may contain info that can personally identify a user:

  • There are no limits or checks on what goes into an error message in Firefox, so we can’t guarantee that error messages don’t contain things like, say, an auth token for a URL that we couldn’t connect to.
  • Tracebacks for errors may signal that a user was using a specific feature in Firefox, like private browsing. It’s not clear whether “user was using private browsing” is private user data or not, but it’s gray enough to be concerning.

On top of all that, we didn’t even know how often these errors were occurring in the wild. Was this a raging fire of constant errors we were just ignoring, or was I getting all worried about nothing?

In the end, I proposed a 3-step research project:

  1. Run a study to measure the number of errors occurring in Nightly as well as the distribution of signatures.
  2. Estimate potential load using the study data, and build a prototype service. Grant access to the data to a limited set of employees and discover whether the data helps us find and diagnose errors.
  3. Shut down the prototype after 6 months or so and evaluate if we should build a production version of the system.

I wrote up this plan as a document that could be shared among people asking why this was an important project to solve. Eventually, my boss threw the idea past Firefox leadership, who agreed that it was a problem worth pursuing.

What even is happening out there

The first step was to find out how many errors we’d be collecting. One tool at our disposal at Mozilla is Shield, which lets us run small studies at targeted subsets of users. In this case, I wanted to collect data on how many errors were being logged on the Nightly channel.

To run the study, I had to fill out a Product Hypothesis Document (PHD) describing my experiment. The PHD is approved by a group in Mozilla with data science and experiment design experience. It’s an important step that checks multiple things:

  • Do you know how to interpret the results of your experiment? Is success vs failure clear?
  • Have you enumerated the user data you’ll need to collect? Mozilla has a classification system for user data that needs to be applied to prevent collection of sensitive data.
  • Are you sending your experiment to the minimally effective group? If we can make do with only collecting data from 3,000 users rather than 30,000, we should avoid the over-collection of data.

Once the PHD was approved, I implemented the code for my study and created a Bugzilla bug for final review. Mozilla has a group of “data stewards” who are responsible for reviewing data collection to ensure it complies with our policies. Studies are not allowed to go out until they’ve been reviewed, and the results of the review are, in most cases, public and available in Bugzilla.

In our case, we decided to compute hashes from the error stacktraces and submit those to Mozilla’s data analysis pipeline. That allowed us to count the number of errors and view the distribution of specific errors without accidentally collecting personal data that may be in file paths.

I am perfect and infallible

The last steps after passing review in the bug were to announce the study on a few mailing lists, both to solicit feedback from Firefox developers and to inform our release team that we intended to ship a new study to users. Once the release team approved our launch plan, we launched and started to collect data. Yay!

A few days after launching, Ekr, who had noticed the study on the mailing lists, reached out and voiced some concerns about it.

While we were hashing errors before sending them, an adversary could precompute the hashes by running Firefox, triggering bugs they were interested in, and generating their own hash using the same method we were using. This, paired with direct access to our telemetry data, would reveal that an individual user had run a specific piece of code.

It was unclear if knowing that a user had run a piece of code could be considered sensitive data. If, for example, the error came from code involved with private browsing mode, would that constitute knowing that the user had used private browsing mode for something? Was that sensitive enough for us to not want to collect?

We decided to turn the study off while we tried to address these concerns. By that point, we had collected two to three days’ worth of data, and decided that the risk wasn’t large enough to justify dropping the data we already had. I was able to perform a limited analysis on that data and determine that we were seeing tens of millions of errors per day, which was enough of an estimate for building the prototype. With that question answered, we opted to keep the study disabled and consider it finished rather than re-tool it based on Ekr’s feedback.

Can I collect the errors now

Mozilla already runs its own instance of Sentry for collecting and aggregating errors, and I had prior experience with it, so it seemed the obvious choice for the prototype.

With roughly 50 million errors per day, I figured we could sample sending them to the collection service at a rate of 0.1%, or about 50,000 per day. The operations team that ran our Sentry instance agreed that an extra 50,000 errors wasn’t an issue.
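The arithmetic above, as a quick sketch (the gating function is a hypothetical illustration of client-side sampling, not the actual patch):

```javascript
// Back-of-the-envelope check of the sampling plan described above.
const ERRORS_PER_DAY = 50_000_000;
const SAMPLE_RATE = 0.001; // 0.1%

// Hypothetical client-side gate: report an error only when a random
// draw falls below the sample rate.
function shouldReport(draw = Math.random()) {
  return draw < SAMPLE_RATE;
}

// Expected volume: one report per thousand errors.
console.log(ERRORS_PER_DAY / 1000); // 50000
console.log(shouldReport(0.0004)); // true
console.log(shouldReport(0.5)); // false
```

Sampling at the client keeps both the network cost to users and the load on the collection service proportional to the rate, which is why 0.1% was an easy sell to operations.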

I spent a few weeks writing up a Firefox patch that collected the errors, mangled them into a Sentry-compatible format, and sent them off. Once the patch was ready, I had to get a technical review from a Firefox peer and a privacy review from a data steward. The patch and review process can be seen in the Bugzilla bug.
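As a hedged sketch of the “mangling” step: Sentry’s public event schema nests exceptions under `exception.values`, each with a `type`, `value`, and `stacktrace.frames`. The reshaping looks roughly like this — a simplification for illustration, not the actual Firefox patch:

```javascript
// Rough sketch of reshaping a collected browser error into a
// Sentry-style event, following the shape of Sentry's public event
// schema (exception.values[].stacktrace.frames). Simplified; the real
// payload carries many more fields.
function toSentryEvent(error) {
  return {
    platform: "javascript",
    level: "error",
    exception: {
      values: [
        {
          type: error.name,
          value: error.message,
          stacktrace: { frames: error.frames },
        },
      ],
    },
  };
}

const event = toSentryEvent({
  name: "TypeError",
  message: "foo is undefined",
  frames: [{ function: "onClick", module: "Example.jsm", lineno: 42 }],
});
console.log(event.exception.values[0].type); // TypeError
```

Keeping the payload in Sentry’s native shape is what lets an off-the-shelf Sentry instance aggregate, deduplicate, and display the errors without custom server-side code.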

The process, as outlined on the Data Collection wiki page, involves three major steps:

Requesting Review

First, I had to fill out a form with several questions asking me to describe the data collection. I’m actually a huge fan of this form, because the questions force you to consider many aspects about data collection that are easy to ignore:

“Why does Mozilla need to answer these questions? Are there benefits for users? Do we need this information to address product or business requirements?”
It’s really easy to let curiosity or mild suspicion drive big chunks of work. The point of this question is to force you to think of a reason for doing the collection. Collecting data just because it is mildly convenient or interesting isn’t a good enough reason; it needs a purpose.
“What alternative methods did you consider to answer these questions? Why were they not sufficient?”
Data collection can’t simply be the first tool you reach for to answer your questions. If we want to be respectful of user privacy, we need to consider other ways of answering questions that don’t involve collecting data.
“List all proposed measurements and indicate the category of data collection for each measurement, using the Firefox data collection categories on the Mozilla wiki.”
The classification system we use for data makes it very clear how to apply our policies to the data you’re collecting. Browser errors, for example, are mostly category 2 data, but may potentially contain category 3 data and as such must be held to a higher standard.
“How long will this data be collected?”
If we can limit the time period in which we collect a piece of data, we can reduce the impact of data collection on users. I didn’t actually know time-limited collection was something to consider until I saw this question for the first time, but in fact several of our data collection systems enforce time limits by default.

Reviewing Request

Data stewards have their own form to fill out when reviewing a collection request. This form helps stewards be consistent in their judgement. Besides reviewing the answers to the review form from above, reviewers are asked to confirm a few other things:

Is the data collection documented in a publicly accessible place?
Sufficiently technical users should be able to see the schema for data being collected without having to read through the Firefox source code. Failing to provide this documentation mandates a failing review.
Is there a way for users to disable the collection?
There must be some way for users to disable the data collection. Missing this is also considered grounds for failure.

It’s important to note that this mechanism doesn’t need to be, say, a checkbox in the preferences UI. Depending on the context of the data collection, an about:config preference or some other mechanism may be good enough.

Rereing Viewquest?

In certain cases, requests may be escalated to Mozilla’s legal team if they involve changes to our privacy policy or other special circumstances. In the case of browser error collection, we wanted a legal review to double-check whether a user having used private browsing mode was considered category 2 or 3 data, as well as to approve our proposal for collecting category 3 data in error messages and file paths.

Our approach was to mimic what Mozilla already does with crashes; we collect the data and restrict access to a subset of employees who are individually approved. This helps make the data accessible only to people who need it, and their access is contingent on employment³. Legal approved the plan, which we implemented using Sentry’s built-in access control.

Welcome to errortown

With code and privacy review finished, I landed the patch and waited patiently for Sentry to start receiving errors. And it did!

Since we started receiving the data, I’ve spent most of my time recruiting Firefox developers who want to search through the errors we’re collecting, and refining the data to make it more useful to those developers. Of course, changes to the data collection require new privacy reviews, although the smaller the changes are, the easier it is to fill out and justify the data collection.

But from my standpoint as a Mozilla employee, these data reviews are the primary way I see Mozilla making good on its promise to respect user privacy and avoid needless data collection. A lot of thought has gone into this process, and I can personally attest to its effectiveness.

  1. Firefox uses tons of automated testing, but we also have manual testing for certain features. In Shield's case, the time being wasted was in the manual phase.

  2. Actually, we already do collect crashes as part of the Socorro project, which I currently work on. But Socorro does not collect any info about the browser errors in question.

  3. Only some parts of crash data are actually private, and certain contributors who sign an NDA are also allowed access to that private data. We use centralized authorization to control access.

The Firefox FrontierUse Firefox Focus to keep Facebook contained on your mobile device

Most of us signed up for Facebook to keep in touch with friends and family. We didn’t sign up to share our personal information with mysterious third-party organizations. That’s why … Read more

The post Use Firefox Focus to keep Facebook contained on your mobile device appeared first on The Firefox Frontier.

Air MozillaWeekly SUMO Community Meeting, 11 Apr 2018

Weekly SUMO Community Meeting This is the SUMO weekly call

Hacks.Mozilla.OrgSneak Peek at WebAssembly Studio

WebAssembly.Studio is an online IDE (integrated development environment) that helps you learn and teach others about WebAssembly. It’s also a Swiss Army knife that comes in handy whenever working with WebAssembly.

We started working on WebAssembly Studio in late December 2017, in an attempt to merge two existing tools that we had developed: WasmExplorer and WasmFiddle. Since then, thanks to several contributors who jumped into the project early, we’ve made quite a bit of progress. We’ve merged those two tools and added several new features. Our beta (more like an alpha) release is now live, and we are very interested in your feedback.

Quick Start

To get started with the example above, simply click Build and then Run. WebAssembly Studio first compiles main.c to out/main.wasm and then creates an iframe sandbox in which it loads main.html. The HTML file loads main.js, which loads and executes the WebAssembly module that ultimately prints “Hello World”. To understand exactly what’s going on, read the file included in the project. This is an example I put together to show how C programs interact with Web APIs. Our hope is that others will put together interesting examples and use WebAssembly Studio as a teaching tool.
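The loading step that main.js performs can be sketched with the plain WebAssembly JavaScript API. To keep the example self-contained, the bytes below are a tiny hand-assembled module exporting an `add` function, rather than the project’s actual out/main.wasm:

```javascript
// Sketch of what a generated loader like main.js does: take module
// bytes, compile and instantiate them, then call an export. A real
// loader would fetch("out/main.wasm") instead of inlining bytes.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // one func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, // body:
  0x6a, 0x0b,                                           // local.get 0/1, i32.add, end
]);

WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // 5
});
```

The same API runs unchanged in the browser sandbox the IDE creates and in Node.js, which is handy when experimenting outside the IDE.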

Overview of Features

C/C++/Rust Support

WebAssembly Studio has basic (very primitive) support for C, C++ and Rust out of the box. At the moment, compilation services run mostly server-side but we’re hoping to do more of this work on the client.

Editable Compiler Artifacts

WebAssembly binary modules (.wasm) as well as text files (.wat) are fully editable in WebAssembly Studio. Try opening out/main.wasm and you’ll see the disassembled .wat output. You can actually edit this text, and when you save, the original .wasm file will be reassembled.

<figcaption class="imageCaption">Note that you can hover over various keywords in the WebAssembly text format to gain insights into what they do. See below:</figcaption>
<figcaption class="imageCaption">WebAssembly Documentation</figcaption>

Easily Accessible Tools

Many of the interesting features in WebAssembly Studio are stashed away under context menus. For instance, if you right-click on the out/main.wasm file, you’ll see a pop-up menu appear with several commands:

<figcaption class="imageCaption">Context Menu for .wasm Files</figcaption>

You can use these context menu commands to apply various transformations on .wasm files:

  • Validate uses Binaryen to verify that a WebAssembly module is valid.
  • Optimize runs several Binaryen optimization passes over a WebAssembly module.
<figcaption class="imageCaption">Optimized with Binaryen</figcaption>
  • Disassemble uses Wabt to convert the file to WebAssembly text format. This can then be edited and reassembled back into a WebAssembly file.

Some of the commands generate new files. For example, “Firefox x86” will produce a .x86 file with the disassembled output from Firefox’s WebAssembly engine. While this may not be very useful (or actionable) to a JavaScript developer, I find it useful when teaching others about WebAssembly. (It’s proof that WebAssembly is low-level!)

<figcaption class="imageCaption">Firefox x86 Disassembly</figcaption>
  • Binary Explorer helps you understand how WebAssembly code is represented at a binary level.
<figcaption class="imageCaption">Binary Code Explorer</figcaption>
<figcaption class="imageCaption">Binary File View</figcaption>
  • Generate Call Graph plots the caller/callee relationships between functions (including imports and exports) to help you understand what’s included in a WebAssembly module.
<figcaption class="imageCaption">Call Graph</figcaption>

Some of the features in WebAssembly Studio need hosted back-end services (compilation), but many others run directly in the browser. Binaryen, Wabt, and Capstone.js are all compiled to WebAssembly and run in the browser. This has the added benefit that we can scale much more easily, with less load on the server.

For a dose of WebAssembly magic, right-click on main.c and select:

… that’s right, Clang Format is also compiled to WebAssembly, runs locally, and works great.

Interactive Embeddings

Interactive embeddings of WebAssembly Studio projects are now possible thanks to a system for embedding interactive content in a wide variety of web platforms. You can simply paste the link to a forked project into your post.


What’s Next

Over the next few months we’re going to:

  • Add better support for C/C++/Rust projects. For C/C++ applications we’re currently using the LLVM backend by itself, but we’re also hoping to add support for Emscripten using that backend so that you can use APIs like SDL and OpenGL. For Rust, we’d like to support Cargo.
  • Continue to add new features and integrate additional tools into WebAssembly Studio.
  • Make it possible to download and build WebAssembly Studio projects locally using familiar tools.
  • Improve UX and error reporting, and make general performance optimizations.

Want to learn more or get more involved in this project? Please share feedback, file issues, and add feature requests on the WebAssembly Studio GitHub repo. If you want to get more involved with WebAssembly check out the main repo to learn more about the project and its infrastructure.

Marco ZeheIntroducing the Accessibility Inspector in the Firefox Developer Tools

The built-in Firefox Developer Tools just received a new family member. The Accessibility Inspector allows you to inspect your website’s exposure to assistive technologies.


As a web developer, have you wondered what your web site might look like to a screen reader for the blind? Have you wondered why you get reports that some people with disabilities cannot use your web application? Or, as a blind user, have you been frustrated by your screen reader not reading something right on a web page you were visiting, or by pages that put up barriers that make it impossible for you to get anything done?

The accessibility team is proud to present the new Accessibility Inspector in the Firefox Developer Tools. It complements the inspection tools that are already there, and works directly with the accessibility engine that also delivers information to screen readers or other assistive technologies. It has several features:

  1. Allows you to inspect the hierarchical tree of objects that was created from the information provided by the web page. For each node, a separate panel gives you all the relevant information, such as role (What is this thing?), name (And what does it do?), states, and other information that assistive technologies use to create a suitable representation for their target audiences.
  2. A visual highlighter that accompanies the selection within that tree, giving a visual indication of which element is being referenced by this object.
  3. An ability to jump from the accessible object’s properties straight into the HTML inspector and inspect the element this accessible object was created from.
  4. The ability to jump from a selected HTML element in the HTML inspector to the corresponding accessibility object and inspect its properties.
  5. A context menu item in the main browser that allows you to inspect a given element’s accessibility. This will open the Accessibility Inspector with the relevant node already selected and its properties displaying.
  6. It is accessible for mouse, keyboards, and assistive technologies such as screen readers.

How do I get my hands on it?

The fully functional inspector debuted in the April 11, 2018, Firefox Nightly build and will make its way into Firefox 61, due for beta in May, and for release in July of 2018. You can download the Nightly build for desktop now and try it out, or wait for beta and dev edition to get it in May if you like.

How does it work?

Enable it first

For now, you need to manually enable the panel once so it will show up in the toolbox. Go to the Settings And Feedback menu button in the Developer Toolbox, select Settings, and tab to the Accessibility checkbox. Press Space to check it. This will add it to the toolbox, and also enable the relevant context menu items.

Enable the engine

If you already use a screen reader or other assistive technology with Firefox, you can most likely skip the following step. Everyone else needs to enable the engine.

Select the Accessibility panel. There is a button that indicates whether accessibility is enabled or not. To inspect a web page, you must enable it.

Inspect elements

Now that you’ve got it up and running, load any web page of your liking, and right-click an element to inspect its accessibility. You’ll notice that below the already familiar Inspect Element item, there is also an Inspect Accessibility item now.

This will open the Accessibility Inspector and drop you right into the node that is relevant to the element you right-clicked on. You can now see or listen to the information. Tab to the other panel and arrow up and down through the information, such as name, role, description, and states. Some of these can be expanded, like States, Actions, and Attributes. Shift-tab back to the objects tree and select another object. Or, focus the DOM Node element and press Enter to be taken right to the HTML Inspector tree, where you can inspect the actual HTML element and its vicinity.

Screenshot of the accessibility inspector panel showing properties of an object such as name, role, description etc.<figcaption class="wp-caption-text">Screenshot of the accessibility inspector panel showing properties of an object</figcaption>

If you are sighted, you can also observe that, as you navigate the accessibility object tree, a visual highlighter indicates which element this object corresponds to.

A visual highlighter indicating the actual on-screen object corresponding to the selection made in the accessible objects tree<figcaption class="wp-caption-text">The highlight that accompanies the selection in the accessibility inspector object tree.</figcaption>

Turning off the engine after you’re done

If you do not normally use an assistive technology in Firefox, after you’re done inspecting, you might want to turn the accessibility engine back off. The same button that initially started it will also turn it off for you.

If you are a screen reader or other assistive technology user, do not worry. If the Inspector detects that accessibility was already running when it was launched, it will not let you turn off the engine, so accessibility isn’t being pulled out from under you accidentally.

What this Inspector is not

This inspector is not meant as an evaluation tool. It is an inspection tool. It will not give you hints about low contrast ratios or other things that would tell you whether your site is WCAG compliant. It helps you inspect your code and understand how your web site is translated into objects for assistive technologies. It is a good tool to prepare for tools like Axe, Colour Contrast Analyzer, or whatever tools you might want to use to actually evaluate your site for accessibility.

An example

So, let’s say you encounter this little broken form:

<html><head><title>Testing a broken form</title></head>
<body>
<form action="post">
<label>E-mail address:</label><input id="email" type="email" /><br />
<label>Password:</label><input type="password" id="pw" /><br />
<input type="submit"/><input type="reset"/>
</form>
</body></html>

Those experienced with accessibility immediately see the two errors here, but let’s see. Load this page into a new tab. Focus the first input, press the context menu key and select Inspect Accessibility.

The inspector comes up, and when you inspect the properties of the entry object, you’ll notice that the name is given as “null”. “null” is never good, because it means it is not present, which means a screen reader will not automatically indicate what this field is doing, or what input is needed. There is a label, yes, but it is not properly associated with the input field.

You see the label object right above the entry field. Select that one, then go to the DOM Node item in the properties and press Enter.

You’ll be taken to the HTML inspector, and placed on the label element. To associate the label with the input, we need to add a for attribute to the label whose value is the ID of the input field we want to associate this label with. To do this:

  1. Press Enter to go into editing mode for the element.
  2. Tab to the Add New Attribute button and press Space.
  3. Enter for="email" and press Enter to save.
  4. Press Escape to leave editing mode for the element and return to the tree.

Now, arrow down to the input, press your context menu key, and select Inspect Accessibility Properties. This will take you back to the properties of the entry field in the Accessibility Inspector. Notice that the name has now changed from “null” to “E-mail address:”, which is exactly what we want it to be.

Go ahead and fix the password field as well! Just remember that the above actions will not save the changes back to your file, you’ll have to apply the changes there separately, and then you can reload it in Firefox to test whether the changes indeed fixed all the problems.

A note about right-clicking an item

In the above example and descriptions, I was purposely referring to right-clicking instead of using the context menu key when on a web page. The reason is that the context menu key only works on focusable elements, not on any node or text. So, to get the best results, even when you’re a screen reader user, use your mouse emulation to route the mouse pointer to the element you want to inspect, and use the right mouse button simulation to right-click. Refer to your screen reader’s documentation if you are unsure how this is done.

We hope to provide a better solution for this in the future.

Getting in touch

Naturally, we would like to hear what you think. We are super excited, and we hope you will be, too! But there may always be cases where things might not work. So to report that to us, or just share how much you like the inspector, you can find us in these locations:

Of course, you can also comment below this blog post.


The screenshots were provided by my teammate Yura Zenevich, who is also the author of the Accessibility Inspector.

What is this thing, and what does it do? (Youtube link) is a talk given by Karl Groves on several occasions, and is a great introduction to the accessibility tree. This is a worthwhile watch for both accessibility novices and experienced professionals.

Nick CameronRust all-hands (dev-tools stuff)

Last week (sigh, the week before last now) we held an 'all-hands' event in Berlin. It was a great event - fantastic to meet so many Rust people in real life and really energising to see how much is being planned and implemented. There is a blog post describing the whole event on the Rust blog.

In this post I want to summarise some of the important dev-tools stuff that happened. Our planning and notes from some meetings are in the dev-tools team repo.

2018 edition planning

We discussed how the edition transition would affect various tools and how the dev-tools team would ensure we coped with the transition. We also discussed how the Rustfix tool would make the transition easier for users.

Full notes

Clippy lint audit

We planned an audit of Clippy's lints in preparation for a 1.0 release. The expectation is that we'll keep the current categories of lints and add a new category for unstable lints. Mostly, lints would be moved to being off by default. The Clippy team was open to moving some compiler lints to Clippy, but we felt that moving all lints to Clippy would be going too far.

The Clippy team will begin auditing lints immediately and hope to have an RFC or other document prepared by September. Some work has already been done.

Full notes.

Custom test frameworks

See also the RFC.

We basically just discussed details of the planned implementation.

IDE planning

We discussed a whole bunch of things that we might consider doing post-RLS-1.0. We decided that a debug mode (probably before 1.0, actually), testing (of the RLS), and low-hanging feature fruit would be our priorities. Although we'd like compiler-backed code completion, it is probably not happening soon, and mostly Racer is good enough for now.

Full notes


The Cargo team discussed:

  • custom registries
  • profiles
  • public dependencies
  • build system and RLS integration (build plans)
  • templates
  • project-scoped custom commands


We discussed how to better support cross-compilation in Cargo without requiring Xargo. This probably means bringing Xargo entirely into Cargo.


We discussed moving toolchain management from Rustup into Cargo. We decided we probably wouldn't try to move all of Rustup into Cargo, but would go for an 80/20 split where most users wouldn't need to explicitly use Rustup.

Compiler consumers

Lots of tools and other consumers want access to the compiler's data. We enumerated them and discussed how to make this data available in a disciplined and more stable form than currently available. The API we decided on is an extension and replacement to the current save-analysis API which is used by the RLS and others. It will be based on the queries used internally in the compiler, but the public queries will generally be higher-level.

Libsyntax 2

A long-term proposal to improve tooling by providing an alternate AST library that would power tools and (probably) the compiler (maybe useful for procedural macros, etc.). It would be optimised for IDEs and similar tools. We decided there were some good goals and interesting ideas here, but that there is no short-term application, so this is low priority for now.

See also the RFC


Steve demo'ed Doxidize, a possible successor or alternative to Rustdoc. It supports richer documentation, including docs beyond API docs such as guides and examples.

See also the implementation

Firefox Test PilotNotes v4 with multi-note support

Multi-note support is now available in the new Test Pilot Notes v4 update. This was the most requested feature after going through all of the user research and feedback. You may also notice more UX changes to make Notes feel more like the rest of Firefox by following the Photon design system guidelines.

<figcaption>Multi-note feature</figcaption>

In the new release, each note has a menu bar that gives you access to export, delete, and extension feedback functionality.

By using the “Send to Notes” context button, you can either create new notes or add to an existing opened note:

We also fixed a few outstanding issues, such as the “downloads” extension permission. We found a different way to build the “Export as HTML” feature and will no longer need the extra permissions.

We would like to thank our QA team for investigating and resolving over 40 bugs that were in the path of this release. Don’t forget to provide feedback using the “Give Feedback” button in Notes and report any bugs you find in the Notes issue tracker.

Notes v4 with multi-note support was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mark SurmanA scandal, a napkin and the health of the internet

Today marks the launch of Mozilla’s first full edition of the Internet Health Report, an open source effort to explore the state of human life on the internet.

As we put our final touches on the report, the United States scrambled to prepare for testimony by Facebook’s Mark Zuckerberg, following revelations about user data obtained by Cambridge Analytica. The conversation: what should the Senate and Congress ask him?

The list of questions is long. What do we do about the data of up to 87 million people floating around, unrecoverable? Can artificial intelligence help address suspicious behaviour around elections? What are Facebook’s responsibilities to users and the public? Unsurprisingly, the list was also quite scattered. We do not yet have a collective mental map of how issues like these connect.

Creating such a map is a key goal of the Internet Health Report, in which people from across Mozilla’s extended community compile research and analysis about the internet.

There were moments when weaving together this year’s report felt just as scattered. We’re all mapping new territory, and we each only have part of the map.

As an example: we spent weeks wrestling over which angle to focus on in the report’s spotlight story on ‘fake news’ and misinformation. We went back and forth on which aspects of this complex topic we each thought were most important. And then, there was an “Aha!” moment: all of the issues we’d each been championing needed to be pieced together into a bigger story.

I sketched out a list on a napkin to help order our thoughts:

What the napkin said: Collecting all our data + precision targeted ads + bots and fake accounts + FB dominates news distribution + not enough web literacy = fuel for fraud and abuse, and very bad real world outcomes

As with the rest of the report, the back and forth process produced a solid piece: a piece that connected misinformation into the big picture of the technology and economics that underpin the internet.

This process also served as a reminder that the internet is a complex social, business and technical ecosystem: a living system made up of computers and data and humans. Grappling with things like the Cambridge Analytica scandal requires stepping back and looking at this whole ecosystem. The Internet Health Report is designed to help us do exactly this.

The report is also designed to inspire. It includes interviews with: volunteer cybersecurity first responders; activists building open-source, civic-minded bots; and engineers creating a truly multilingual internet. While the loudest news media headlines may be bad, there is a great deal of good news happening off most people’s radar. This is where we find hope and aspiration.

This sense of hope and aspiration is common across the broader movement that Mozilla is a part of. The open source movement. The internet freedom movement. The digital rights movement. Whatever you call it, there is a growing force that stands for the idea that we can build a digital world that is open, accessible and welcoming to all.

It is people from across this movement who compiled the Internet Health Report: smart and dedicated researchers, engineers, data scientists, policy analysts, even artists. Together, we had tough, collaborative conversations. We simplified complex ideas. We connected the dots. And we started to map the problems, opportunities and conundrums of human life online.

Of course, creating a map like this is not just something you publish and then it’s done. Just like the internet, it is something that we can only build together, over time. Which is why we encourage you to join us. The internet is an ecosystem that humans build. It’s people who decide what it is and isn’t. It’s up to all of us to envision – and create – something healthy and humane.

This article was cross-posted from

The post A scandal, a napkin and the health of the internet appeared first on Mark Surman.

Firefox NightlyImproving DevTools’ performance, one iteration at a time

Firefox Quantum, released last November, was a very important milestone for Firefox. Huge performance gains were achieved by replacing parts of the engine with bits of Servo, written in Rust. But the technology shift did not stop there: the developer tools – the inspector, the console, etc. – are being re-programmed using modern web technologies: JavaScript and React.

However, when the new developer tools were released with Firefox Quantum, some of you noticed that they were slower and not as good as the old ones performance-wise. This was somewhat expected, as they were initial versions – and they have been getting better in the months following Quantum’s release.

The DevTools team has made performance a priority and is actively working on it, and we – the web developers – can now start to reap some results.

What is faster now?

Improving performance is a work in progress and, as such, there have been some regressions, but here are some of the interactions that should feel faster and more responsive:

  • Viewing the CSS rules cascade in the Inspector, especially for big CSS files.
  • Updating the inspector when many DOM elements change at a time.
  • Opening the Layout panel in the inspector (so you can debug your CSS Grid layouts!)
  • Opening the Console with many logged messages.
  • Expanding Objects with a lot of attributes in the Console.
  • Updating requests in the Network Monitor for a large page with hundreds of requests; the UI should feel much more responsive now.
  • Stepping into a source-mapped file in the Debugger.
  • Loading pages with hundreds of source maps or very large files in the Debugger.

Here are some short clips showcasing some of these.

The following video shows how long it takes for the CSS rules to be displayed in the sidebar of the Inspector, when dealing with a large stylesheet:


And this video shows how long it takes for the Console to load when a large amount of messages have been logged:


How do we know?

There is a public dashboard in which we can follow the evolution of the performance gains. It’s important to keep in mind that we are sometimes talking in the order of milliseconds, so shaving time off gets harder in some cases.

For instance, this is the evolution over the last three months of how long it takes for the Network Monitor panel to open:

While these charts and tests are useful on a day-to-day basis, they are not so good for getting an overall, long-term view. Why? You can see some sharp changes in the chart above: this is because the tests used to measure performance are changing – or being replaced – over time as well. This happens because tests – like any other code base – need to evolve to better reflect real examples or common scenarios. The hardware on which those tests are run is also changed from time to time, so that affects the measurements too.

But in order to do a fair, more scientific comparison, we would need a benchmark that uses the same tests. So that is what I did: I ran the tests of Firefox 60 (Nightly at the time) and compared them against the same Firefox 60 tests running on Firefox Quantum (57).

It wasn’t straightforward, because just copying the new tests over the old ones doesn’t work –for instance, some tests were relying on events that were not being dispatched on older versions. But luckily for me, Yulia Startsev, a DevTools engineer, not only helped me a lot, but she also created a patch for Firefox 57 so I could run the new tests on it.

After running the test suite – which, roughly, runs the tests multiple times and averages the time they took – a JSON file is generated with the results. And with that, and a bit of JavaScript magic, we can compare the output between Firefox 57 and 60.
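The comparison itself can be sketched in a few lines. This is a rough illustration only: the JSON structure and test names below are invented, not the actual output of the DevTools test harness.

```python
import json

# Hypothetical per-test timings (ms) for each Firefox version; the real
# harness output has a different, richer structure.
fx57 = json.loads('{"console.open": 220.0, "inspector.open": 180.0}')
fx60 = json.loads('{"console.open": 110.0, "inspector.open": 171.0}')

def compare(old, new):
    """Return {test: % change in time from old to new} for shared tests."""
    return {name: 100.0 * (t - old[name]) / old[name]
            for name, t in new.items() if name in old}

# Negative numbers mean Firefox 60 is faster on that test.
for name, pct in sorted(compare(fx57, fx60).items()):
    print('%-16s %+.1f%%' % (name, pct))
```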

And although it is satisfying seeing all the “green” in those tests, we have taken note of what needs to improve, including changes on the tests themselves to better reflect real use cases.

Remember that the new DevTools are being developed using web technologies (HTML, CSS, JavaScript and React), and you are more than welcome to collaborate. A good place to start is joining the Devtools Slack community, and taking a look at the list of open bugs tagged as “Good First Bug”. There is also the official documentation with a Getting Started section, along with another section dedicated to performance.

Mozilla VR BlogThe Design of Firefox Reality

The Design of Firefox Reality

For web designers and developers, the arrival of a new platform and a new web browser can be fraught with excitement but also with anxiety. There are new technical constraints, new interaction techniques, and the visual patterns worn smooth by decades of print and web design must be revisited with new eyes. This post summarizes the excitement and anxiety that we face while bringing Firefox Reality to stand-alone augmented and virtual reality headsets.

The Design of Firefox Reality

John Maeda wrote, “Simplicity is about subtracting the obvious and adding the meaningful.” For the coming class of stand-alone headsets, we have revisited the web's core assumptions about what is obvious and what is meaningful. We have found a new set of constraints, goals, and criteria.

The Design of Firefox Reality

Constraints we have in bucketfuls. Headset hardware has limited battery power and computation. Each operating system has unique capabilities and quirks. Some headsets come with hand-held controllers, some recognize hand gestures, and some use gaze controls and touchpads. User interfaces must be designed to support relatively clumsy input devices (compared to mouse or touch) and so must be larger. Pass-through displays in augmented reality glasses require different rendering techniques than the opaque screens in virtual reality head mounted displays. With the loss of the keyboard and mouse, traditional approaches to core features such as text entry and window management completely fail. Unlike laptop or mobile web devices, VR headsets require users to find a safe location with enough room to move around a virtual scene.

The Design of Firefox Reality

Goals in stand-alone headsets are inherently different than when using a laptop or a mobile device. Headset manufacturers will continue to experiment with comfort, power management, and visual quality, but for the next few years headset sessions will be shorter than the 8 to 16 hours per day that people use laptops and mobiles. Designs for headset software must be oriented around short sessions with many breaks between intense interactions. User interfaces must be placed to prevent neck strain and repetitive injuries. Laptop and mobile web applications serve many purposes, but the initial wave of headsets will target a few specific domains like on-site industrial use and entertainment experiences like gaming. While immersive experiences are an important new aspect of the web, the vast majority of the web remains in 2D, so headsets must enable users to transition smoothly between 2D and 3D content.

The Design of Firefox Reality

Criteria for Firefox Reality are different because stand-alone headsets are essentially different from all other platforms. They are more intimately connected to our bodies, and so must be more personable. They are between our eyes and our environment, and so must be our trusted intermediaries. They can cause nausea and disorientation, and so must be visibly smooth and consistent.

With those constraints, goals, and criteria in mind, here are the major components for the first version of Firefox Reality:

The Design of Firefox Reality

In augmented reality headsets, the environment in which Firefox Reality is running is actual reality. In VR headsets, it's depressing to work in a vast black void so it's necessary to spruce up VR with virtual environments. While the environment will eventually be fully configurable by users, to get started we will have at least three choices.

The Design of Firefox Reality

Stand-alone headsets have no fallback screens like on a PC tethered VR system or the screen on a mobile device that can clip into a headset. Therefore, it is important that the existing 2D web work well in Firefox Reality. We've taken the decades of web compatibility investment in Firefox's Gecko renderer and wrapped it in a user interface that is both comfortably familiar and designed specifically for new types of input. We’re experimenting with the usual tabs, URL bars, full-screen mode, and windows that you expect from a browser, but each has been reworked for use in headsets and for both hand and voice controls.

The tray
The Design of Firefox Reality

Where desktop and mobile browsers have top bar menus, Firefox Reality has a button tray. Here you will find all of the application-wide actions, like opening a new window or editing your settings. The tray is like a workbench or a dashboard, providing you with a solid place to orient yourself as you move through virtual and actual environments.

The sigil
The Design of Firefox Reality

Unlike on desktop browsers, immersive web pages using WebVR can take over the entire visible environment. A malicious web coder could create something that looks and acts exactly like trusted browser UI and use it maliciously, for example to capture passwords or other private information. To prevent this, we’ve created a new concept: the security sigil. Its shape, color, and pattern are chosen by the user, and those choices are not shared with any web page. If the user sees their unique sigil on the tray, they know that they are interacting with the real Firefox Reality instead of a spoofed user interface.

New realities
The Design of Firefox Reality

Firefox was the first web browser to ship virtual reality for the desktop. With Firefox Reality, we're bringing the same capabilities to AR and VR headsets. You will paint and sculpt fantastic landscapes using A-Painter. You will be able to explore new worlds with the games and educational experiences from organizations you already know. All of these will be based on web standards that enable anything you make to work across the entire web.

Future work
The initial design of Firefox Reality focuses on what is possible today, but we are also working on what will be possible tomorrow. Our next-generation rendering engine, Servo, provides us with the ability to use computer vision, parallel processing, and experimental rendering techniques to prototype the future of the immersive web. The Mixed Reality social team is building new ways to bring distributed groups together for work and play. We’re also experimenting with new ways to share links and tabs between all of your devices so that you can move from mobile to desktop to headset in that uniquely web-ish way.

Augmented and virtual reality give us another chance to see better. Firefox Reality will succeed only when it enables us to better see ourselves, each other, and the world. If you would like to see better, you can download an experimental build. If you are a designer or developer, you can help us build Firefox Reality by visiting the GitHub repo or emailing us.

(Thanks to Nadja Haldimann (@thenadj) and Jenni Hoang (@jen_maicha) for their work on the design of Firefox Reality)

Mozilla Reps CommunityRep of the Month – March 2018

Please join us in congratulating Md Shahbaz Alam, our Rep of the Month for March 2018!

Shahbaz is a Full Stack Developer and Tech Evangelist by profession. He loves to talk about WebVR and Auth0, and he single-handedly created a Mozilla community in his home town of Ranchi once he moved there. He is a Reps Mentor, a Reps Resource member, and a community builder, and he loves traveling, cooking, and organizing and speaking at events and conferences.


Recently Shahbaz has been speaking at Virtual Reality @ Hackfest in Dhanbad as well as at FootPrintsX8. He also organized a Rust hack at MITS. You can also read more about his contributions in 2017 in his blog post.

Thank you Shahbaz, keep rocking the open Web! :tada: :tada:

Head over to Discourse to congratulate him!

The Mozilla BlogA Scandal, a Napkin and the Health of the Internet

Today marks the launch of Mozilla’s first full edition of the Internet Health Report, an open source effort to explore the state of human life on the internet.

As we put our final touches on the report, the United States scrambled to prepare for testimony by Facebook’s Mark Zuckerberg, following revelations about user data obtained by Cambridge Analytica. The conversation: what should the Senate and Congress ask him?

The list of questions is long. What do we do about the data of up to 87 million people floating around, unrecoverable? Can artificial intelligence help address suspicious behaviour around elections? What are Facebook’s responsibilities to users and the public? Unsurprisingly, it was also quite scattered. We do not yet have a collective mental map of how issues like these connect.

Creating such a map is a key goal of the Internet Health Report, in which people from across Mozilla’s extended community compile research and analysis about the internet.

There were moments when weaving together this year’s report felt just as scattered. We’re all mapping new territory, and we each only have part of the map.

As an example: we spent weeks wrestling over which angle to focus on in the report’s spotlight story on ‘fake news’ and misinformation. We went back and forth on which aspects of this complex topic we each thought were most important. And then, there was an “Aha!” moment: all of the issues we’d each been championing needed to be pieced together into a bigger story.

I sketched out a list on a napkin to help order our thoughts:

What the napkin said:

Collecting all our data
+ precision targeted ads
+ bots and fake accounts
+ FB dominates news distribution
+ not enough web literacy
= fuel for fraud and abuse,
and very bad real world outcomes

As with the rest of the report, the back and forth process produced a solid piece: a piece that connected misinformation into the big picture of the technology and economics that underpin the internet.

This process also served as a reminder that the internet is a complex social, business and technical ecosystem: a living system made up of computers and data and humans. Grappling with things like the Cambridge Analytica scandal requires stepping back and looking at this whole ecosystem. The Internet Health Report is designed to help us do exactly this.

The report is also designed to inspire. It includes interviews with: volunteer cybersecurity first responders; activists building open-source, civic-minded bots; and engineers creating a truly multilingual internet. While the loudest news media headlines may be bad, there is a great deal of good news happening off most people’s radar. This is where we find hope and aspiration.

This sense of hope and aspiration is common across the broader movement that Mozilla is a part of. The open source movement. The internet freedom movement. The digital rights movement. Whatever you call it, there is a growing force that stands for the idea that we can build a digital world that is open, accessible and welcoming to all.

It is people from across this movement who compiled the Internet Health Report: smart and dedicated researchers, engineers, data scientists, policy analysts, even artists. Together, we had tough, collaborative conversations. We simplified complex ideas. We connected the dots. And, we started to map the problems, opportunities and conundrums of human life online.

Of course, creating a map like this is not just something you publish and then it’s done. Just like the internet, it is something that we can only build together, over time. Which is why we encourage you to join us. The internet is an ecosystem that humans build. It’s people who decide what it is and isn’t. It’s up to all of us to envision – and create – something healthy and humane.

The post A Scandal, a Napkin and the Health of the Internet appeared first on The Mozilla Blog.

Mozilla Localization (L10N)CLDR as source of key internationalization data in Firefox: milestones achieved and next steps

In case you’re not familiar with the acronym, CLDR stands for Common Locale Data Repository: it’s a repository of locale data maintained by the Unicode Consortium, and used in several libraries that power internationalization (i18n) features in products developed by Mozilla, Apple, IBM, Microsoft, and many other companies. Firefox uses the data provided by CLDR mostly through the ICU library.

You can find an exhaustive list of the types of data provided on the CLDR home page. Within Firefox, these are currently the main focus areas:

  • Date and time formatting, calendar preferences.
  • Plural cases.
  • Translation of language and region names.

Date and time formatting, calendar preferences

Firefox 57 shipped with a native datetime picker that can be used in HTML forms. Here’s how it looks in Italian:

The localization data used to generate this dialog comes from CLDR:

  • Date formatting for the placeholder.
  • Month and day names.
  • First day of the week (e.g. Sunday for en-US, Monday for it).

The same data is also available to Firefox developers to properly internationalize other parts of the user interface. It’s provided via an API called ‘mozIntl’, which extends the standard JavaScript Internationalization library (ECMA-402).

Firefox 61 will also ship with a relative time format API (“in 5 seconds”, “5 seconds ago”), finally allowing front-end developers to use a more natural language in the interface.
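To illustrate the kind of CLDR data that powers such an API, here is a heavily simplified sketch in Python. The patterns and structure below are invented for illustration; real CLDR relative-time data covers many units, plural categories, and formality levels.

```python
# Simplified CLDR-style relative time patterns for English (invented
# structure; see CLDR's dateFields data for the real thing).
RELATIVE_PATTERNS = {
    'second': {'future': 'in {0} seconds', 'past': '{0} seconds ago'},
    'minute': {'future': 'in {0} minutes', 'past': '{0} minutes ago'},
}

def format_relative(value, unit):
    # Positive values refer to the future, negative ones to the past.
    direction = 'future' if value >= 0 else 'past'
    return RELATIVE_PATTERNS[unit][direction].format(abs(value))

print(format_relative(5, 'second'))   # in 5 seconds
print(format_relative(-5, 'second'))  # 5 seconds ago
```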

Plural cases

Currently, there are 3 completely different sources of truth for plurals:

  • Fluent uses CLDR to determine the number of plural forms for each language, and CLDR categories (zero, one, two, few, many, other).
  • Pontoon stores its own internal rules, using CLDR categories.
  • Gecko stores an internal plural rule, in the form of a localizable key with an integer value. Each rule maps to a different number of plural forms, and doesn’t have any relation to CLDR.
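To make those categories concrete, here is a simplified, integer-only rendering of CLDR’s cardinal plural rules for two languages (the full CLDR rules also handle fractions and additional categories):

```python
def plural_category_en(n):
    # English has two plural forms.
    return 'one' if n == 1 else 'other'

def plural_category_ru(n):
    # Russian integers use one/few/many; CLDR's 'other' covers fractions.
    if n % 10 == 1 and n % 100 != 11:
        return 'one'
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return 'few'
    return 'many'

print(plural_category_en(1), plural_category_en(2))    # one other
print(plural_category_ru(21), plural_category_ru(22))  # one few
```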

Suffice it to say that this fragmentation has generated a lot of inconsistencies over the years.

Given the renewed focus on Fluent, last December I started analyzing all Gecko plural forms to identify inconsistencies between our settings and CLDR. This led to correcting the plural form for 10 languages by aligning with the CLDR values. In a couple of cases, I also reported issues back to CLDR: for Macedonian our ticket was accepted and the changes included, while for Latvian it was rejected.

A significant amount of time was also invested in correcting errors in Gecko before starting to migrate strings to Fluent: several locales had a wrong number of plural forms, but weren’t aware of the issue, given the hacky nature of plural support in .properties. Starting from January, dashboards have been reporting this type of error, allowing localizers to quickly correct them. Soon, these errors will be reported directly in Pontoon when submitting a new translation.

Work is still underway to fix plurals in other projects in Pontoon, and minimize the impact on localizers: for example, if a string moves from 2 plural forms to 6, you need to invalidate existing translations, and possibly copy over one of the existing values to reduce the need for copy and paste.

Translation of language and region names

Localized names for languages and regions are used in Firefox preferences and other parts of the UI. They’re defined as localizable strings in toolkit, and currently consist of 203 language names and 272 region names.

Since CLDR provides this data as well, the plan is to start using it to localize Firefox UI. This poses a few challenges:

  • Can we replace the current list of country names from GENC with region names from CLDR? This proposal already received a green light.
  • What data is missing from CLDR? We ship languages that are missing from CLDR, so we’ll need to file tickets to get those language names added.
  • Since we already have localized names, can we compare them with data from CLDR and see how big the difference is? Strictly related: can the CLDR data be used directly?

Right now, the work is mostly focused on the last point, and tracked in this bug. I started analyzing the difference for a couple of languages, including my own (Italian):

  • 53 language names (26.11%) were translated differently between Mozilla and CLDR. After comparing the two translations for each name to identify the best one, in most cases conflicts were resolved by using the CLDR data. Only seven differences remain after this work (3.45%), with five improvements that need to be reported back to CLDR using their Survey Tool. Two more differences are expected, since they are caused by differences in the English source (Frysian vs Western Frysian, Rundi vs Kirundi).
  • 51 region names were translated differently (18.75%). After, only 11 differences remain (4.04%).
  • Language names are not usable directly: in Mozilla they’re uppercase, since they’re only used as stand-alone labels. In CLDR they’re all lowercase, since the language name is always lowercase when used in the middle of a sentence in Italian.

Analysis is now moving to other languages with a higher percentage of differences. The average difference for language names is 45.49%, while for region names it is 30.80%, but we have locales with up to 96% differences, and we need to figure out why that happens.

The full statistical analysis is available in this spreadsheet. If you’re interested in getting a list of the actual differences for your language, feel free to reach out. One thing to keep in mind is that there are differences for English itself, e.g. “Acoli” vs “Acholi”, or “Caribbean Netherlands” vs “Bonaire, Sint Eustatius, and Saba”, and this inevitably affects the data.
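The kind of comparison behind these percentages can be sketched as follows. The data below is invented for illustration; the real analysis compared the toolkit string lists against the CLDR export, and the case-insensitive match here mirrors the uppercase/lowercase difference noted above for Italian.

```python
def diff_stats(mozilla, cldr):
    """Count entries translated differently in the two datasets."""
    shared = set(mozilla) & set(cldr)
    # Compare case-insensitively: Mozilla capitalizes stand-alone labels,
    # CLDR keeps mid-sentence lowercase.
    differing = [k for k in shared if mozilla[k].lower() != cldr[k].lower()]
    return len(differing), 100.0 * len(differing) / len(shared)

# Invented Italian language names; not the actual datasets.
mozilla = {'fr': 'Francese', 'de': 'Tedesco', 'fy': 'Frisone'}
cldr = {'fr': 'francese', 'de': 'tedesco', 'fy': 'frisone occidentale'}

count, pct = diff_stats(mozilla, cldr)
print('%d names differ (%.2f%%)' % (count, pct))
```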

Next steps for 2018

Fluent represents the future of localization for Mozilla products, and it relies heavily on CLDR data. But that’s not the only reason to invest resources in improving the CLDR integration within Firefox:

  • Using CLDR means unifying our approach to internationalization data with the one used in products like Windows, macOS, Android, Twitter, Wikipedia, etc. It also means offering a consistent and more familiar experience to our users.
  • It lowers the burden on our localizers. What’s the point of translating hundreds of strings, if there is an established, high-quality dataset that could be safely reused? This data is a live archive, collected and maintained by a large body of linguistic experts cooperating on CLDR, and exposed on a daily basis to millions of users.
  • We can help extend CLDR support to minority languages that are not relevant for commercial software companies. For example, Firefox Nightly currently ships in 101 languages. While Microsoft covers about the same number of languages through Windows language packs, other browsers support half that number (or less).

As already seen, some parts of the UI already use CLDR data: if a locale is not available in the CLDR repository, it won’t have a localized datetime picker or properly localized dates, and it won’t pick the right plural form when using Fluent strings.

In the coming months we’re going to invest resources in building a pathway for locales to be included as seed locales in CLDR: it will likely be a stand-alone project in Pontoon, with Fluent as the storage format, used to collect information that will be converted and used to bootstrap locale data in CLDR. Kekoa, who will be back as an intern in the summer, will contribute to this project (among other things).

We also plan to extend the mozIntl API to provide localized language and region names. The current idea is to generate a local data source from CLDR, and integrate it with our own data for locales that are not yet available in the CLDR repository. In order to do that, we need to keep investigating the differences between our current translations and CLDR, and identify potential issues before fully switching to CLDR as the source for this data.

This Week In RustThis Week in Rust 229

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

From Rust All Hands in Berlin

Crate of the Week

This week's crate is shiplift, a rusty API for talking with docker daemons. Thanks to Matthias Beyer for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

200 (!) pull requests were merged in the last week

New Contributors

  • Chris Manchester
  • Dan Callaghan
  • Francis Gagné
  • lloydmeta
  • nabijaczleweli
  • Valentine Valyaeff
  • Wim Looman

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

No RFCs are currently in final comment period.

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Air MozillaMozilla Weekly Project Meeting, 09 Apr 2018

Mozilla Weekly Project Meeting The Monday Project Meeting

Gregory SzorcRelease of python-zstandard 0.9

I have just released python-zstandard 0.9.0. You can install the latest release by running pip install zstandard==0.9.0.

Zstandard is a highly tunable and therefore flexible compression algorithm with support for modern features such as multi-threaded compression and dictionaries. Its performance is remarkable and if you use it as a drop-in replacement for zlib, bzip2, or other common algorithms, you'll frequently see more than a doubling in performance.

python-zstandard provides rich bindings to the zstandard C library without sacrificing performance, safety, features, or a Pythonic feel. The bindings run on Python 2.7, 3.4, 3.5, 3.6, 3.7 using either a C extension or CFFI bindings, so it works with CPython and PyPy.

I can make a compelling argument that python-zstandard is one of the richest compression packages available to Python programmers. Using it, you will be able to leverage compression in ways you couldn't with other packages (especially those in the standard library) all while achieving ridiculous performance. Due to my focus on performance, python-zstandard is able to outperform Python bindings to other compression libraries that should be faster. This is because python-zstandard is very diligent about minimizing memory allocations and copying, minimizing Python object creation, reusing state, etc.

While python-zstandard is formally marked as a beta-level project and hasn't yet reached a 1.0 release, it is suitable for production usage. python-zstandard 0.8 shipped with Mercurial and is in active production use there. I'm also aware of other consumers using it in production, including at Facebook and Mozilla.

The sections below document some of the new features of python-zstandard 0.9.

File Object Interface for Reading

The 0.9 release contains a stream_reader() API on the compressor and decompressor objects that allows you to treat those objects as readable file objects. This means that you can pass a ZstdCompressor or ZstdDecompressor around to things that accept file objects and things generally just work. For example:

    import zstandard as zstd

    with open(compressed_file, 'rb') as ifh:
        dctx = zstd.ZstdDecompressor()
        with dctx.stream_reader(ifh) as reader:
            while True:
                chunk =
                if not chunk:
                    break
                # Do something with the decompressed chunk.
This is probably the most requested python-zstandard feature.

While the feature is usable, it isn't complete. Support for readline(), readinto(), and a few other APIs is not yet implemented. In addition, you can't use these reader objects for opening zstandard compressed tarball files because Python's tarfile package insists on doing backward seeks when reading. The current implementation doesn't support backwards seeking because that requires buffering decompressed output and that is not trivial to implement. I recognize that all these features are useful and I will try to work them into a subsequent release of 0.9.

Negative Compression Levels

The 1.3.4 release of zstandard (which python-zstandard 0.9 bundles) supports negative compression levels. I won't go into details, but negative compression levels disable extra compression features and allow you to trade compression ratio for more speed.

When compressing a 6,472,921,921 byte uncompressed bundle of the Firefox Mercurial repository, the previous fastest we could go with level 1 was ~510 MB/s (measured on the input side), yielding a 1,675,227,803 byte file (25.88% of original).

With level -1, we compress to 1,934,253,955 bytes (29.88% of original) at ~590 MB/s. With level -5, we compress to 2,339,110,873 bytes (36.14% of original) at ~720 MB/s.

On the decompress side, level 1 decompresses at ~1,150 MB/s (measured at the output side), -1 at ~1,320 MB/s, and -5 at ~1,350 MB/s (generally speaking, zstandard's decompression speeds are relatively similar - and fast - across compression levels).

And that's just with a single thread. zstandard supports using multiple threads to compress a single input and python-zstandard makes this feature easy to use. Using 8 threads on my 4+4 core i7-6700K, level 1 compresses at ~2,000 MB/s (3.9x speedup), -1 at ~2,300 MB/s (3.9x speedup), and -5 at ~2,700 MB/s (3.75x speedup).

That's with a large input. What about small inputs?

If we take 456,599 Mercurial commit objects spanning 298,609,254 bytes from the Firefox repository and compress them individually, at level 1 we yield a total of 133,457,198 bytes (44.7% of original) at ~112 MB/s. At level -1, we compress to 161,241,797 bytes (54.0% of original) at ~215 MB/s. And at level -5, we compress to 185,885,545 bytes (62.3% of original) at ~395 MB/s.

On the decompression side, level 1 decompresses at ~260 MB/s, -1 at ~1,000 MB/s, and -5 at ~1,150 MB/s.

Again, that's 456,599 operations on a single thread with Python.

python-zstandard has an experimental API where you can pass in a collection of inputs and it batch compresses or decompresses them in a single operation. It releases the GIL and uses multiple threads. It puts the results in shared buffers in order to minimize the overhead of memory allocations and Python object creation and garbage collection. Using this mode with 8 threads on my 4+4 core i7-6700K, level 1 compresses at ~525 MB/s, -1 at ~1,070 MB/s, and -5 at ~1,930 MB/s. On the decompression side, level 1 is ~1,320 MB/s, -1 at ~3,800 MB/s, and -5 at ~4,430 MB/s.

So, my consumer grade desktop i7-6700K is capable of emitting decompressed data at over 4 GB/s with Python. That's pretty good if you ask me. (Full disclosure: the timings were taken just around the compression operation itself; the overhead of loading data into memory was not taken into account. See the script in the source repository for more.)

Long Distance Matching Mode

Negative compression levels take zstandard into performance territory that has historically been reserved for compression formats like lz4 that are optimized for that domain. Long distance matching takes zstandard in the other direction, towards compression formats that aim to achieve optimal compression ratios at the expense of time and memory usage.

python-zstandard 0.9 supports long distance matching and all the configurable parameters exposed by the zstandard API.

I'm not going to capture many performance numbers here, because python-zstandard performs about the same as the C implementation: LDM mode spends most of its time in zstandard C code. If you are interested in numbers, I recommend reading the zstandard 1.3.2 and 1.3.4 release notes.

I will, however, underscore that zstandard can achieve close to lzma's compression ratios (what the xz utility uses) while completely smoking lzma on decompression speed. For a bundle of the Firefox Mercurial repository, zstandard level 19 with a long distance window size of 512 MB using 8 threads compresses to 1,033,633,309 bytes (16.0%) in ~260s wall, 1,730s CPU. xz -T8 -8 compresses to 1,009,233,160 (15.6%) in ~367s wall, ~2,790s CPU.

On the decompression side, zstandard takes ~4.8s and runs at ~1,350 MB/s as measured on the output side while xz takes ~54s and runs at ~114 MB/s. Zstandard, however, does use a lot more memory than xz for decompression, so that performance comes with a cost (512 MB versus 32 MB for this configuration).

Other Notable Changes

python-zstandard now uses the advanced compression and decompression APIs everywhere. All tunable compression and decompression parameters are available to python-zstandard. This includes support for disabling magic headers in frames (saves 4 bytes per frame - this can matter for very small inputs, especially when using dictionary compression).

The full dictionary training API is exposed. Dictionary training can now use multiple threads.

There are a handful of utility functions for inspecting zstandard frames, querying the state of compressors, etc.

Lots of work has gone into shoring up the code base. We now build with warnings as errors in CI. I performed a number of focused auditing passes to fix various classes of deficiencies in the C code. This includes use of the buffer protocol: python-zstandard is now able to accept any Python object that provides a view into its underlying raw data.

Decompression contexts can now be constructed with a max memory threshold, so attempts to decompress something that would require more memory will result in an error.

See the full release notes for more.


Since I last released a major version of python-zstandard, a lot has changed in the zstandard world. As I blogged last year, zstandard circa early 2017 was a very compelling compression format: it already outperformed popular compression formats like zlib and bzip2 across the board. As a general purpose compression format, it made a compelling case for itself. In my mind, snappy was its only real challenger.

As I wrote then, zstandard isn't perfect. (Nothing is.) But a year later, it is refreshing to see advancements.

A criticism one year ago was zstandard was pretty good as a general purpose compression format but it wasn't great if you live at the fringes. If you were a speed freak, you'd probably use lz4. If you cared about compression ratios, you'd probably use lzma. But recent releases of zstandard have made huge strides into the territory of these niche formats. Negative compression levels allow zstandard to flirt with lz4's performance. Long distance matching allows zstandard to achieve close to lzma's compression ratios. This is a big friggin deal because it makes it much, much harder to justify a domain-specific compression format over zstandard. I think lzma still has a significant edge for ultra compression ratios when memory utilization is a concern. But for many consumers, memory is readily available and it is easy to justify trading potentially hundreds of megabytes of memory to achieve a 10x speedup for decompression. Even if you aren't willing to sacrifice more memory, the ability to tweak compression parameters is huge. You can do things like store multiple versions of a compressed document and conditionally serve the one most appropriate for the client, all while running the same zstandard-only code on the client. That's huge.

A year later, zstandard continues to impress me for its set of features and its versatility. The library is continuing to evolve - all while maintaining backwards compatibility on the decoding side. (That's a sign of a good format design if you ask me.) I was honestly shocked to see that zstandard was able to change its compression settings in a way that allowed it to compete with lz4 and lzma without requiring a format change.

Snappy is another promising modern compression format. However, the project appears to have died off. There are only 5 commits so far in 2018. (zstandard has 357.) Competition is good and I was hoping Google/snappy would provide some competitive pressure for Facebook/zstandard. But who knows what's happening on the snappy side of the world. Zstandard continues to make meaningful improvements, so I can't complain about its pace of progress.

The more I use zstandard, the more I think that everyone should use this and that popular compression formats just aren't cut out for modern computing any more. Every time I download a zlib/gz or bzip2 compressed archive, I'm thinking: if only they used zstandard, this archive would be smaller, it would have decompressed already, and I wouldn't be thinking about how annoying it is to wait for compression operations to complete. In my mind, zstandard is such an obvious advancement over the status quo and is such a versatile format - now covering the gamut of super fast compression to ultra ratios - that it is bordering on negligent to not use zstandard. With the removal of the controversial patent rights grant license clause in zstandard 1.3.1, the last justifiable resistance to widespread adoption of zstandard has been eliminated. Zstandard is objectively superior for many workloads and I heavily encourage its use. I believe python-zstandard provides a high-quality interface to zstandard and I encourage you to give it and zstandard a try the next time you compress data.

If you run into any problems or want to get involved with development, python-zstandard lives at indygreg/python-zstandard on GitHub.

Alex VincentVerbosio is dead… but I have a new code name, Aluminium, with the same ambition

Perhaps the fastest evolution in the world today is knowledge, not software.  This has become painfully clear in the last few years.

The way I see it, Mozilla and I are both going through major turns in our respective life cycles:

  1. I turned 40 this past December.  That has all sorts of implications by itself, good and bad, in the current technology sector…
  2. I am roughly 18-24 months away from finishing a Bachelor of Science degree in Computer Science (with no college debt, thank you very much).
  3. Mozilla has ended support for their (admittedly proprietary) add-ons model, XBL is on the way out, and XUL will follow.  This means that the most specialized professional experience I have is not obsolete, but is dying a hard death, faster than I can keep up.  So the bachelor’s degree is coming around at just the right time.
  4. Rust has definitely arrived in the Mozilla toolchain, and Servo is on the way, as a wholly new layout engine.  (By the way, Lin Clark:  thank you so much for your articles on web engine design.  They’re priceless.)
  5. Emscripten is here, and new libraries to bridge between generated WebAssembly code and DOM/JavaScript API’s are under construction.
  6. Mozilla Firefox WebExtensions replaced the old add-ons model, which means a whole new API to learn – and later, when I work in compiled code on my own projects, customize and extend.  I’m really envious:  these are amazing technologies in their own right, but only Rust, WebExtensions and maybe Emscripten are “mature” at this time from this list…
  7. I’m still working on es-membrane (formerly es7-membrane), which I think is pretty interesting in its own right.  It’s maturing nicely as well, despite a lack of my available time.  (I’m still looking for help on that, by the way.)
  8. I’m a little surprised that the D programming language hasn’t received much traction… although until they become part of GCC and LLVM natively, I suppose I should be.  Supposedly, that’s in progress as well.

All of the above means that Verbosio, as a Mozilla Firefox-based XML editor with specific XML languages as add-ons to the editor, is truly and finally dead, and there’s no point trying to believe otherwise.  Similarly, the need for a XUL IDE is dead as well.  (Daniel Glazman and I need to get together to cry over a beer sometime.)

Enter a new code name, “Aluminium”.

I still want to build a stand-alone (but not too complex) web page editor supporting mathematics students at the high school, community college and university levels.  Amaya remains my inspiration.  I want to build a successor to that project, focusing on HTML5, MathML and SVG, with a conscious bias towards assisting students in doing their homework (but not doing the homework for them).

Of course, naming a future web page editor Aluminium, and basing it on arriving Mozilla technologies, leads to all sorts of bad puns:

  • By inheriting the concepts and some of the non-legacy code from Verbosio, I may be able to call Aluminium “powderware”, not “vaporware”.
  • When you mix powdered Aluminium with Rust, and a small spark of inspiration, you get some solid iron (very useful in repairing metal fatigue, I hear) and lots of heat to feed the fire (fox?)…
  • Apologies to all the Mozilla Firefox theme add-ons with a name of “Aluminum” or “Aluminium”.  Obviously, the name isn’t original.
  • I want to use the spelling and pronunciation common in scientific publishing and outside North America.  (This is again because I wish to place an emphasis on mathematics editing.)
  • Aluminium is the name of a metal, like a certain extremely popular web browser.
  • On the other hand, with all the features I want to implement, especially in-page version control for mathematics formulae, I’m not sure I can make it all that lightweight.  (Mozilla is no simple beast to build, either.)

I know, I know, that’s enough jokes for now.  But about the editor project itself, I’m actually quite serious.

Right now, one of the classes I’m taking at California State University, East Bay is titled “Graphical User Interface Programming Using a Rapid Application Development Tool”.  In short, it’s an introduction to building GUI windows (using Qt as a baseline and toolkit) as opposed to a command line application.  This is a course I am extremely lucky, and extremely happy, to get:  it doesn’t appear in the catalogs for any other CSU campus that I could find, much less go to, and the Computer Science department had told me repeatedly it wasn’t supposed to be in the CSUEB Catalog anymore.  All my programming experience and studies to date have either been for command-line applications, inside a web page, or with Mozilla’s platform code.  None of that taught me how to build GUI applications from scratch, or how to embed a web rendering engine like Servo.  That’s going to change…

Later down the line, I’m not planning on taking easy courses either:  I’m looking forward to classes on “Automata and Computation”, “Analysis of Algorithms”, “Numerical Analysis”, “Compiler Design”, and existing mathematics software.  All of these can only be force-multipliers on my computer programming experience going forward.

So yes, the old, non-standardized technologies of the last twenty years are being eliminated in Darwinian fashion… and a whole new generation of standards-based and safe-to-program-in computer languages are arriving.  The way I see it, I’m earning my Bachelor of Science degree at exactly the right time.  I hope future employers see it the same way, so that I can work on some truly ground-breaking software.

Thanks for reading!


Georg FritzscheKnow your limits

When building software systems, we usually deal with data from external sources. This can be user input, data coming from other systems, etc. My basic assumption on any external data is: don’t trust it!

Any data that we don’t know completely ahead of time can and will behave differently than we expect. A classic example of this is user input, say a text field. If we don’t limit its length and contents, somebody will eventually enter a book’s worth of data, or try to use it to attack the system.

But the same problem extends to data from systems we control and that we might have faith in. At Mozilla we have a variety of teams and products which are deployed to a large and diverse population of users across the world. We may think we know how those products and the data they generate behave, but practically we always find surprises.

A case study

Let’s consider a hypothetical mobile application. The mobile application has a basic system to collect product data, and makes it easy to collect new string values. To make it easier and more flexible for our teams to add something, we don’t impose a hard limit on the length of the string. We have documentation on the various instrumentation options available, making it easy to choose the best for each use-case.

Now, this works great and everybody can add their instrumentation easily. Then one day a team needs data on a specific view in the application to better understand how it gets used. Specifically they need to know how long it was visible to the user, which buttons were interacted with in which order and what the screen size of the device was. This doesn’t seem to directly fit into our existing instrumentation options, but our string recording is flexible enough to accommodate different needs.

So they put that data together in a string, making it structured so it’s reasonable to use later and we start sending it out in our JSON data packages:

"view_data": "{\"visible_ms\": 35953, \"buttons_used\": [\"change_name\", \"confirm\", \"help\"], \"screen_size\": \"960×540\"}",

The change gets tested and works fine so it gets shipped. Some time later we get a report that our product dashboards are not updated. An investigation shows that the jobs to update the dashboards were timing out, due to unusually large strings being submitted. It turns out that some users click buttons in the view 100 times or more.

What’s more, a detailed review shows that the user churn rate in our dashboard started to increase slightly, but permanently, around the time the change shipped. The favored hypothesis is that the increased data size for some users leads to lower chances of the data getting uploaded.

So, what went wrong?

To be clear, this is a deliberately bad example. There is a whole bunch of things that could be learnt from it: getting expert review, adding instrumentation options, building the data system for robustness on both ends. However, here I want to highlight how the lack of a limit on the string length propagated through the system.

No software component exists in isolation. Looking at a high-level data flow through a product analytics system, any component in this system has a set of important parameters with resulting trade-offs from our design choices. The flexibility of a component in an early stage puts fewer constraints on the data that flows through, which propagates through the system and enlarges the problem space for each component after it.

The unbound string length of the data collection system here means that we know less about the shape of data we collect, which impacts all components in the later stages. Choosing explicit limits on incoming data is critical and allows us to reason about the behavior of the whole system.
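The fix can be as small as a bound enforced where data enters the system. A minimal sketch (not Mozilla's actual collection code; the limit and the truncate-and-flag policy are assumptions for illustration):

```python
MAX_STRING_LEN = 256  # an assumed, documented limit

def record_string(value: str) -> str:
    """Enforce the limit at the entry point of the data pipeline."""
    if len(value) > MAX_STRING_LEN:
        # Truncate and flag, so downstream jobs stay bounded and
        # over-limit values remain detectable in the data.
        return value[:MAX_STRING_LEN] + "[truncated]"
    return value

assert record_string("ok") == "ok"
assert record_string("x" * 1000).endswith("[truncated]")
```

With a limit like this in place, every later component can reason about the maximum size of the data flowing through it.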

Find the right limit

Choosing a limit is important, but that doesn’t mean we should restrict our data input as much as we can. If we pick limits that are too strict, we end up blocking use-cases that are legitimate but not anticipated. For each system that we build, we have to make a design decision on the scale from most strict to arbitrary values and weigh the trade-offs.

For me, my take-away is: Have a limit. Reason about it. Document it. The right limit will come out of conversations and lessons learnt — as long as we have one.

Know your limits was originally published in Georg Fritzsche on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla VR BlogThis Week in Mixed Reality: Issue 1

This Week in Mixed Reality: Issue 1

In the spirit of This week in Firefox/Rust/Servo, we’ve decided to start sharing weekly updates on the progress of the Mozilla Mixed Reality team. Late last year, we brought together all of the people working on Virtual and Augmented Reality at Mozilla to work in our new Mixed Reality program.

As part of that program, we're working in three broad areas.


We recently announced Firefox Reality, a new browser for standalone AR/VR headsets based on the best pieces of Firefox with both GeckoView and Servo.


We are working on a new social platform for Mixed Reality.

In the last week, we have:

Content ecosystem

And of course without great experiences, there’s no reason to put on a headset in the first place! Making sure that developers - whether they are web-first or coming from tools such as Unity - are successful in targeting WebVR and later WebXR...

  • Created initial prototype adding WebGL2 support to three.js
  • Reached the end of the A-Frame / Sketchfab creation experiment

Mike ConleyFirefox Performance Update #5

And here we are with another Firefox Performance Update!

This performance update is brought to you by perf.html! perf.html is our web-based profile analysis tool. We use it to analyze profiles gathered with the Gecko Profiler Add-on which helps us figure out why Firefox is feeling slow or sluggish. It’s probably the most important performance tool in our toolbox.

Before we go into the list – have you seen any patches landing that should positively impact Firefox’s performance? Let me know about it so I can include it in the list, and give appropriate shout-outs to all of the great work going on! That link again!

And now, without further ado, our list of excellent performance work:

(🌟 indicates a volunteer contributor)

Thanks to all of you! Keep it coming!

Hacks.Mozilla.OrgMDN Changelog for March 2018

Editor’s note: A changelog is “a log or record of all notable changes made to a project. [It] usually includes records of changes such as bug fixes, new features, etc.”

Publishing a changelog is kind of a tradition in open source, and a long-time practice on the web. We thought readers of Hacks and folks who use and contribute to MDN Web Docs would be interested in learning more about the work of the MDN engineering team, and the impact they have in a given month. We’ll also introduce code contribution opportunities, interesting projects, and new ways to participate.

Done in March

Here’s what happened in March to the code, data, and tools that support MDN Web Docs:

Here’s the plan for April:

Improved compatibility data at the Hack on MDN event

In March, the MDN team focused on the Browser Compatibility Data project, meeting in Paris with dozens of collaborators for the “Hack on MDN” event. The work resulted in 221 Pull Requests in the BCD repo, as well as additional work in other repos. See Jean-Yves Perrier’s Hack on MDN article for details about the tools created and data updated during the week.

We reviewed and merged over 250 PRs in March. The compatibility data conversion jumped from 57% to 63% complete. Jeremie Patonnier led the effort to convert SVG data (BCD PR 1371). API data was also a focus, both converting the existing MDN tables and using other data sources to fill out missing APIs.

There are now 264 open PRs in the BCD repo, about a month’s worth, and the contributions keep coming in. BCD is one of the most active GitHub repositories at Mozilla this year by pull requests (880) and by authors (95), second only to rust (1268 PRs, 267 authors). The rate of contributions continues to increase, so BCD may be #1 for Q2 2018.

Graph that shows weekly commits to the browser-compat-data project growing from about 25 commits per week to over 60 per week in March.

Experimented with brotli compression

Brotli is a Google-developed compression algorithm designed for serving web assets, which can outperform the widely used gzip algorithm. By the end of 2017, all major browsers supported the br content-encoding method, and Florian requested a test of brotli on MDN in bug 1412891. Maton Anthony wrote a Python 2-compatible brotli middleware in PR 4686, with the default compression level of 11. This went live on March 7th.

Brotli does compress our content better than gzip. The homepage goes from 36k uncompressed to 9.5k with gzip to 7k with brotli. The results are better on wiki pages like CSS display page, which goes from 144k uncompressed to 20k with gzip and 15k with brotli.

However, brotli was a net-negative for MDN performance. Average response time measured at the server was slower, going from 75 ms to 120 ms, due to the increased load of the middleware. Google Analytics showed a 6% improvement in page download time (1 ms better), but a 23% decline in average server response time (100 ms worse). We saw no benefit in static assets, because CloudFront handles gzip compression and ignores brotli.

Anthony adjusted to compression level 5 (Kuma PR 4712), which a State of Brotli article tested to be comparable to gzip 9 in compression time while still producing smaller assets. When this went live on March 26th, we saw similar results, with our response time returning to pre-brotli levels, but with a slight improvement in HTML size when br was used.
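The level-11-versus-5 trade-off is the classic compression-level curve: higher levels buy smaller output at a steep CPU cost. A stand-in sketch using the stdlib's zlib (Python has no built-in brotli module), just to show the shape of the trade-off the team measured:

```python
import time
import zlib

# Illustrative payload standing in for an MDN HTML page.
payload = b"<div class='example'>hello mdn</div>" * 5000

for level in (1, 5, 9):
    start = time.perf_counter()
    out = zlib.compress(payload, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(out)} bytes in {elapsed * 1e3:.2f} ms")
```

Higher levels never produce larger output here, but the time per request grows, which is exactly the middleware overhead that showed up in the server response times.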

Graph of average server response time from New Relic, showing a doubling from about 50 ms to 100 ms around March 7, then back to 50 ms after March 21.

When we ship CloudFront in April, the CDN will take care of compression, and brotli will go away again. It looks like a promising technology, but requires a CDN that supports it, and works best with a “pre-compress” workflow rather than a “when requested” middleware workflow, which means it won’t be a good fit for MDN for a while.

Shipped tweaks and fixes

There were 370 PRs merged in March:

137 of these were from first-time contributors:

Other significant PRs:

Planned for April

We’ll be busy in April with the output of the Hack on MDN event, reviewing PRs and turning prototypes into production code. We’ll continue working on our other projects as well.

Get HTML interactive examples ready for production

The interactive examples are transitioning from rapid experimentation to a shipping feature. Will Bamberg published Bringing interactive examples to MDN in March, which details how this project went from idea to implementation.

The interactive examples team will firm up the code and processes, to build a better foundation for contributors and for future support. At the same time, they will firm up the design for HTML examples, which often require a mix of CSS and HTML to illustrate usage.

Ship the CDN and Django 1.11

Ryan Johnson finished up caching headers for the 60 or so MDN endpoints, and he and Dave Parfitt added CloudFront in front of the staging site. We’ll spend some time with automated and manual testing, and then reconfigure production to also be behind a CDN.

We’ve missed the April 1 deadline for the Django 1.11 update. We plan to focus on the minimum tasks needed to update in April, and will defer other tasks, like updating out-of-date but compatible 3rd-party libraries, until later.

Improve the front-end developer experience

Schalk Neethling has had some time to get familiar with the front-end code of MDN, using tools like Sonarwhal and Chrome devtools’ performance and network panels to find performance issues faced by Chrome visitors to MDN. He’s put together a list of changes that he think will improve development and performance.

Analyzing the performance of MDN with Chrome developer tools, showing how long it takes to download assets, run JavaScript, etc. JS starts running around 1 second, and the page is ready around 3 seconds.

One quick win is to replace FontAwesome’s custom font face with SVG icons. Instead of loading all the icons in a single file, only the icons used by a page will be loaded. The SVG will be included in the HTML, avoiding additional HTTP requests. SVG icons are automatically scaled, so they will look good on all devices. This should also improve the development experience. It’s easy to make mistakes when using character encodings like "\f108\0020\eae6", and much clearer to use names like "icon-ie icon-mobile".

Schalk is thinking about other challenges to front-end development, and how to bring in current tools and techniques. He’s cleaning up code to make it more consistent, and formalizing the rules in MDN Fiori, a front-end style guide and pattern library. This may be the first step to switching to a UI component structure, such as React.

A bigger improvement would be updating the front-end build pipeline. MDN’s build system was developed years ago (by Python developers) when JavaScript was less mature, and the system hasn’t kept up. Webpack is widely used to bundle code and assets for deployment, and a new pipeline could allow developers to use a broader ecosystem of tools like Babel to write modern JavaScript.

Finally, we continue to look for the right way to test JavaScript. We’re currently using Selenium to automate testing in a browser environment. We’ve had issues getting a stable set of tools for consistent Selenium testing, and it has proven to be too heavy for unit testing of JavaScript. Schalk has had good experiences with Jest, and wants to start testing MDN Javascript with Jest in April.

Support.Mozilla.OrgProposal: Knowledge Base Spring Cleaning at SUMO – June 2018

Hi everyone,

People say that spring is a good time to try new things and refresh one’s body and mind. While our site does not really have a body (unless you count the HTML tags…) and its collective mind is made up of all of us (including you, reading these words), it does need refreshing every now and then, mostly due to the nature of the open, living web we are all part of.

That said, let’s get to the details, some of which may sound like Rosana’s post from about 4 years ago.

What’s the proposal about?

The localization coordinators across Mozilla want to consolidate Mozillians and our resources around active locales. In the context of SUMO’s Knowledge Base, this means taking a closer look at the Knowledge Base, which at the moment is available for 76 locales (at least “on paper”).

The proposal is to redirect the mostly inactive locales of the SUMO Knowledge Base to English (or another best equivalent locale). You can find the proposed (draft) list of all the locales here.

  • 23 locales will remain active, with localizers providing good coverage both via Pontoon and SUMO’s Knowledge Base – and no action will be required for them.
  • In 30 cases, the existing (and historically active) localizers will be contacted to decide on reviving the localization effort for SUMO content. If there is enough interest, they will remain active (with a plan to update content). If there is not enough interest, they will be redirected at the end of June.
  • In 23 cases, the locales will be redirected at the end of June due to little or no localization activity over an extended period of time. These can also be reactivated at a later time, if need be.

It is important to note that no content created so far would be deleted.

Why do we want to do this?

There are several reasons behind this proposal:

  1. Fewer locales mean more administrator focus on actually active locales – more time for joint projects or experiments, more attention to the needs of localizers putting a lot of time and effort into making SUMO global on a continuous basis.
  2. Firefox and the main Mozilla pages have a higher priority than SUMO, so for many localizers it’s better to focus on these projects, rather than getting involved with Knowledge Base localization.
  3. The “long tail” locales on SUMO are accessed by a very small number of users each month, so there is little need for serving them at this point.
  4. Revisiting this initiative in 6 months will help us see progress made by local communities in building up their active localizer numbers.

What are the next steps?

The 23 locales counted as “no change” will keep functioning as usual, with more locally driven activities coming this year – check the last section of this L10n blog post for just one of the many possibilities.

During April and May, I will reach out to all the contributors listed in SUMO and Pontoon for the 30 locales that are listed as candidates for the clean up – using Private Messages on SUMO or emails listed in Pontoon. Depending on the answers received, we may be keeping some of these locales online, and coming up with a realistic (but ambitious) localization progress timeline for each of them.

At the end of June (after the All Hands), all the locales that are not active will be redirected to English (or another best equivalent locale).

After that, localization for the redirected locales will be focused on Firefox and other Mozilla properties. If there is interest in reactivating a locale, it will happen according to a “re/launch plan” – including creating or participating in a SUMO Knowledge Base sprint event aimed at localizing at least the 20 most popular articles in the Knowledge Base as the minimum requirement, and additional sprints to localize an additional 80 most popular articles.

Is anything else being cleaned up at this stage?

No, the Knowledge Base is a big enough project for now.

Still, this is just the start of this year’s clean up – we will also look into reviewing and reorganizing our contributor documentation, English Knowledge Base, and other properties containing content relevant to SUMO (for example our MozWiki pages).

Please let us know what you think about this in the comments or on our forums: SUMO / Discourse.

The Rust Programming Language BlogThe Rust Team All Hands in Berlin: a Recap

Last week we held an “All Hands” event in Berlin, which drew more than 50 people involved in 15 different Rust Teams or Working Groups, with a majority being volunteer contributors. This was the first such event, and its location reflects the current concentration of team members in Europe. The week was a smashing success which we plan to repeat on at least an annual basis.

The impetus for this get-together was, in part, our ambitious plans to ship Rust 2018 edition later this year. A week of work-focused facetime was a great way to kick off these efforts!

We’ve also asked attendees to blog and tweet about their experiences using the #RustAllHands hashtag; the Content Team will be gathering up and summarizing this content as well.

Highlights by team

Below we’ll go through the biggest items addressed last week. Note that, as always, reaching consensus in a meeting does not mean any final decision has been made. All major decisions will go through the usual RFC process.

Language design

  • Macros: put together a proposal for the 2018 edition.
    • Stabilize a forward-compatible subset of procedural macros that explicitly opts out of hygiene (by asking all names to be interpreted at the call site).
    • Work out a revised API surface for procedural macros based on what we’ve learned so far.
    • Stabilize importing macros through normal use statements.
    • Alex Crichton will drive the stabilization process.
  • Extern types: worked through remaining issues for stabilization.
  • Improvements to derive: a proposal to make derive more ergonomic in Rust 2018.
  • Best practices: set up a cross-cutting guidelines effort, encompassing the API guidelines but also including style, lints, and Lang Team recommendations.

Libraries

  • SIMD: talked through final steps of stabilization; we hope to stabilize in 1.27.
  • Allocators: developed a conservative proposal for stabilizing global allocators; Simon Sapin has set up a new tracking issue.

Compiler

  • Tool integration: extensive discussion and planning about the needs of compiler clients, composable plugins, and the compiler’s new query architecture.
  • Query compilation: a plan for end-to-end query compilation, i.e. fully-incremental compilation.
  • libsyntax: a long-run plan for a revamped libsyntax, shared amongst a variety of tools.
  • Contribution: improved contribution experience for the compiler.


Docs

  • Edition planning: determined resources needed for the 2018 edition, what that means for the Rust Bookshelf, and who will be responsible for each.
  • Team blog: “This week in Rust docs” is going to move to a new docs team blog!
  • Doxidize (aka rustdoc2): made initial public release; it’s like Sphinx, but for Rust.
  • Intermediate-level docs: contributed to idea generation.

Dev tools

  • Edition planning, especially for the rustfix tool.
  • Clippy lint audit: developed plan for reaching 1.0 on Clippy this year, based on categorizing lints into Correctness, Performance, Style, Complexity, Pedantry, and Nursery categories.
  • Custom test frameworks: reached consensus on most of the details for the RFC.
  • IDEs: discussed improvements to code completion, stability improvements, and new features like refactoring and auto-import support.

Cargo

  • Xargo integration: making a few more platforms tier 1 addresses the immediate need for embedded; otherwise, plan to go the “std-aware Cargo” route late in 2018. Key insight: will entirely obsolete the concept of “targets” in rustup.
  • Rustup integration: with Xargo integration we can simplify rustup; plan to expose new interface via Cargo late in 2018.
  • Custom registries: ready to stabilize.
  • Profiles: the v2 design is clear, and Aleksey Kladov plans to implement.
  • Public dependencies: significantly revised plan to have more conservative ecosystem impact. Aaron Turon will write a blog post.
  • Build system integration: determined two pieces of functionality to implement to decouple the RLS from Cargo.
  • Project templates: developed a minimal design; withoutboats will write an RFC.
  • Custom workflows: designed workflow customization, which is useful for frameworks; Aaron Turon has written a blog post.

Infrastructure

  • bors queue: hatched and resourced lots of ideas to reduce the PR queue for Rust 2018.
  • crater: pietroalbini is testing a bot for running crater!
  • Travis log bot: TimNN has written a bot to extract errors from Travis logs.

WG: CLI apps

  • WG overview slides.
  • Survey and strategy: dove into survey data and developed strategy from it; posts forthcoming.
  • Distribution: “distribution-friendly” badge on
  • Extensible Cargo install: wrote an RFC on-site!

WG: network services

  • WG overview slides.
  • Launching the WG: determined goals for the WG, including async/await, documentation, mid-level HTTP libraries, and the Tower ecosystem. Kickoff announcement coming soon!
  • Async/await: finalized design and stabilization approach for RFCs (blog post and links to RFCs here).

WG: embedded devices

  • WG overview slides
  • Embedded Rust on stable: addressed all known blockers and several mid-priority issues as well.
  • The Embedded Rust book: defined audience and basic outline.

WG: WebAssembly

  • WG overview slides.
  • 2018 edition planning, including scoping the toolchain and book efforts for the release.
  • JS integration: dove into integrating JS callbacks vs Rust functions in wasm-bindgen.
  • Ecosystem integration: wasm-bindgen now works with CommonJS!
  • Code bloat: reduced the footprint of panicking from 44k to 350 bytes.

Unsafe code guidelines

  • Restarted the WG: dug back into two competing approaches (“validity” and “access”-based), substantially iterated on both. Niko Matsakis and Ralf Jung will be writing blog posts about these ideas.
  • Pressing issues: tackled a few near-term decisions that need to be made, including union semantics, Pin semantics, thread::abort and more.

Web site

  • Domain WG sketching: over a couple of sessions, the four domain-centered working groups (listed above) developed some initial sketches of landing pages for each domain.

Rust reach

New working groups

In addition to the work by existing teams, we had critical mass to form two new working groups:

  • Verification: bringing together folks interested in testing, static analysis, and formal verification.
  • Codegen: work to improve the quality of code rustc generates, both in terms of size and runtime performance.

The Verification WG has been formally announced, and the Codegen WG should be announced soon!

General reflections and notes of appreciation

The combination of having a clear goal – Rust 2018 – and a solid majority of team members present made the All Hands extremely fun and productive. We strive to keep the Rust community open and inclusive through asynchronous online communication, but it’s important to occasionally come together in person. The mix of ambition and kindness at play last week really captured the spirit of the Rust Community.

Of course, an event like this is only possible with a lot of help, and I’d like to thank the co-organizers and Mozilla Berlin office staff:

  • Johann Hofmann
  • Jan-Erik Rediger
  • Florian Gilcher
  • Ashley Williams
  • Martyna Sobczak

as well as all the team leads who organized and led sessions!

The Firefox Frontier – uBlock Origin is Back-to-Back March Addonness Champion

It’s been three weeks and we’ve almost run out of sports metaphors. We’re happy to announce that after three rounds and thousands of votes you have crowned uBlock Origin March … Read more

The post uBlock Origin is Back-to-Back March Addonness Champion appeared first on The Firefox Frontier.

Mozilla VR Blog – Progressive WebXR

Progressive WebXR

Imagine you wanted to have your store’s web page work in 2D, and also take advantage of the full range of AR and VR devices. WebXR will provide the foundation you need to create pages that work everywhere, and let you focus on compelling User Experiences on each of the devices.

In a recent blog post, we touched on one aspect of progressive WebXR, showcasing a version of A-Painter that was adapted to handheld AR and immersive VR. In this post, we will dive a bit deeper into the idea of progressive WebXR apps that are accessible across a much wider range of XR-supported devices.

The WebXR Device API expands on the WebVR API to include a broader range of mixed reality devices (i.e., AR/VR, immersive/handheld). By supporting all mixed reality devices in one API, the Immersive Web community hopes to make it easier for web apps to respond to the capabilities of a user’s chosen device, and present an appropriate UI for AR, VR, or traditional 2D displays.

At Mozilla, this move aligns with experiments we started last fall, when we created a draft WebXR API proposal, a WebXR polyfill based on it, and published our WebXR Viewer experimental web browser application to the iOS App Store. Publishing the app for iOS allowed us (and others) to experiment with WebXR on iOS, and is one of the target platforms for the XR Store demo that is the focus of this article. This demo shows how future sites can support the WebXR API across many different devices.

Before introducing the example store we’ve created, I’ll give an overview of the spectrum of devices that a UX strategy for this kind of WebXR-compatible site might need to support.

The spectrum of WebXR displays/realities

Progressive WebXR

Broadly speaking, there are three categories of displays that need to be supported by a responsive WebXR application:

  • current non-WebXR “flat displays” on desktop and handheld devices,
  • “portal displays” where these same screens present the illusion of a portal into a 3D world by leveraging device motion and 3D sensing, and
  • “immersive displays” such as head-worn displays that encompass the user’s senses in the 3D world.
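
In code, this three-way split maps naturally onto a progressive-enhancement check made when the page loads. Below is a minimal sketch; the `isSessionSupported`-style capability check follows later drafts of the WebXR Device API (exact method names were still in flux when this was written), and `pickMode` is a helper name invented for this example:

```javascript
// Classify the current device into one of the three display categories:
// "flat" (non-WebXR), "portal" (handheld AR magic window), or
// "immersive" (head-worn display).
async function pickMode(xr) {
  if (!xr) {
    return "flat"; // no WebXR at all: serve the ordinary 2D page
  }
  if (await xr.isSessionSupported("immersive-vr")) {
    return "immersive"; // a head-worn display is available
  }
  if (await xr.isSessionSupported("immersive-ar")) {
    return "portal"; // an ARKit/ARCore-class handheld device
  }
  return "flat"; // WebXR present, but no XR session types supported
}

// In a page, something like: pickMode(navigator.xr).then(mode => { /* build UI */ });
```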

Non-WebXR Displays

Current flat displays, such as desktop monitors, phones, and tablets, may not have access to VR/AR capabilities via WebXR, although some will be able to simulate WebXR using a WebXR polyfill. Such desktop and mobile displays will remain the most common ways to consume web content for the foreseeable future.

Mobile devices with 3DoF orientation sensors (that use accelerometers, gyroscopes, and magnetometers to give 3 Degrees of Freedom for the device pose) can simulate monoscopic 3D VR (and AR, if they use getUserMedia to access the video camera on the device), by leveraging the deviceorientation or Orientation Sensor APIs to access the device orientation.
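
A sketch of the angle handling involved: `deviceorientation` events report `alpha`, `beta`, and `gamma` in degrees, while a 3D renderer typically wants radians. The naive yaw/pitch/roll mapping below ignores the screen-rotation and axis-convention corrections a real viewer must handle:

```javascript
// Convert deviceorientation angles (degrees) to radians for a 3DoF camera.
// Naive mapping; a real renderer must also account for screen orientation.
function orientationToRadians({ alpha, beta, gamma }) {
  const d2r = Math.PI / 180;
  return { yaw: alpha * d2r, pitch: beta * d2r, roll: gamma * d2r };
}

// In a page:
// window.addEventListener("deviceorientation", (e) => {
//   const { yaw, pitch, roll } = orientationToRadians(e);
//   // ...apply to the scene camera
// });
```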

Apps written for "Cardboard" display holders for these mobile devices (i.e., cases that use the phone's screen as their display, such as a Google Cardboard) use the same 3DoF sensors, but render stereoscopic VR on the phone display.

XR Displays

XR displays come in two sorts: AR and VR. The most common XR displays right now are Handheld or "Magic Window" AR, made possible by Apple’s ARKit for iOS (used by our WebXR Viewer) or Google’s ARCore for Android (used by the WebAROnARCore experimental browser). These give the user the illusion that the phone is a portal, or magic window, into an AR scene surrounding the user.

While currently less common, optically see-through headsets such as Microsoft’s Hololens provide an immersive 3D experience where virtual content can more convincingly surround the user. Other displays are in development or limited release, such as those from Magic Leap and Meta. We’ve shown the Meta 2 display working with WebVR in Firefox, and hope most upcoming AR displays will support WebXR eventually.

Thanks to WebVR, web-based VR is possible now on a range of immersive VR displays. The most common VR displays are 3DoF Headsets, such as Google Daydream or Samsung Gear VR. 6DoF Headsets (that provide position and orientation, giving 6 degrees of freedom), such as the HTC Vive, Oculus Rift, and Windows Mixed Reality-compatible headsets, can deliver fully immersive VR experiences where the user can move around.

The second category of VR displays are what we call Magic Window VR. These displays use flat 2D displays, but leverage 3D position and orientation tracking provided by the WebXR API to determine the pose of the camera and simulate a 3D scene around the user. Similar to Magic Window AR, these are possible now by leveraging the tracking capabilities of ARKit and ARCore, but not showing video of the world around the user.

In the table below we have listed the common OS, browsers, and devices for each category.

Progressive WebXR

A Case Study: XR Store

Creating responsive web apps that adapt to the full range of non-XR and WebXR devices will allow web developers to reach the widest audience. The challenge is to create a site that provides an appropriate interface for each of the modalities, rather than designing for one modality and simulating it on the others. To demonstrate this, we created a realistic XR Store demo, implementing a common scenario for WebXR: a product sheet on a simulated e-commerce site that allows the visitor to see a product in 3D on a traditional display, in a virtual scene in VR, or in the world around them in AR.

Progressive WebXR

Applying a Progressive Enhancement XR Design Strategy

We selected a representative set of current XR modes to let us experiment with a wide range of user experiences: Desktop / Mobile on Flat Displays, AR Handheld on Portal Displays, and 3DoF / 6DoF Headsets on Immersive Displays.

The image below shows some initial design sketches of the interfaces and interactions for the XR Store application, focusing on the different user experiences for each platform.

Progressive WebXR

Selecting the Best User Interface for Each Platform

In the XR Store demo we used four types of User Interfaces (UIs), borrowing from terminology commonly used in 3D UIs (including in video games):

  • Diegetic: UI elements exist within the 3D world.
  • Spatial: UI elements placed in a fixed position in the 3D world.
  • Non-Diegetic: UI elements in 2D over the 3D scene. These are sometimes known as HUD (Heads-Up Display) UI elements.
  • Page: UI elements in 2D in the DOM of the website.

We also considered using these two types as well:

  • Anchored (VR exclusive): A mixed version of spatial and diegetic where the UI elements are anchored within the user’s gaze, or to the interaction controllers’ positions.
  • Direct manipulation (AR exclusive): UIs to directly manipulate objects using the touch screen, as opposed to manipulating the scene and camera.

For Non-XR / Flat Displays we are using exclusively Page UI controls, and a Non-Diegetic element for a button to enter Fullscreen mode (a common pattern in UIs in 2D applications). We opted not to mix the page elements with diegetic controls embedded in the 3D scene, as page-based metaphors are what current web users would expect to find on a product detail page.

Progressive WebXR

For XR / AR, we start with a very similar interface to the one used on Flat Displays (page controls, a diegetic button to switch to AR mode), but once in AR use Non-Diegetic UI elements over the 3D scene to make it easier to change the product properties. We could also have used a Direct Manipulation UI to scale, rotate, or move the product in the projected reality (but decided not to, in this example, for simplicity).

Progressive WebXR

For XR / VR we are using Diegetic UI for interaction controllers and a Spatial UI to give the user a representation of the selectable part of the product sheet. We could have used an Anchored UI to make it easier to find this panel, as we did in the VR painting interface for A-Painter. We ended up using the same UI for 3DoF and 6DoF VR, but could have changed the UI slightly for these two different cases, such as (for example) moving the 2D panel to stay behind-and-to-the-right of the object as a 6DoF user walks around. Another option would have been to have the 2D panel slide into view whenever it goes offscreen, a common metaphor used in applications for Microsoft’s HoloLens. Each of these choices has its pros and cons; the important part is enabling designers to explore different options and use the one that is more appropriate for their particular use case.
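
The modality-specific choices above can be summarized as a small lookup. This is our own condensed restatement of the text, not code from the demo:

```javascript
// Which UI layer types the XR Store demo mixes per modality, per the
// descriptions above (the names are the UI types defined earlier).
function uiLayersFor(modality) {
  switch (modality) {
    case "flat": // page controls + a non-diegetic fullscreen button
      return ["page", "non-diegetic"];
    case "ar": // page controls, a diegetic AR-mode button, in-AR HUD elements
      return ["page", "diegetic", "non-diegetic"];
    case "vr": // diegetic controllers + a spatial 2D panel
      return ["diegetic", "spatial"];
    default:
      return ["page"];
  }
}
```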

Progressive WebXR

Technical Details

The source code for the XR Store demo currently uses a custom version of A-Frame combined with our aframe-xr WebXR binding for A-Frame. The aframe-xr bindings are based on our experimental webxr-polyfill and our three-xr WebXR bindings for three.js.

To try it out, visit XR Store on the appropriate device and browser. To try the 3DoF VR mode, you can use Google Chrome on an Android phone supporting Daydream, or the Samsung Internet/Oculus Browser on a phone supporting Gear VR. The demo supports any 6DoF VR device that works with Firefox, such as the HTC Vive or Oculus Rift, or Windows Mixed Reality headsets with Edge.

If you want to try the application in AR mode, you can use any iOS device that supports ARKit with our experimental WebXR Viewer app, or any Android device that supports ARCore with Google’s experimental WebARonARCore (which has some limitations around entering and leaving AR mode).

Future Challenges

This demo is an exploration of how the web could be accessed in the near future, where people will be able to connect from multiple devices with very diverse capabilities. Beyond the platforms implemented here, we will soon face an explosion of AR see-through head-worn displays that will offer new ways of interacting with our content. Such displays will likely support voice commands and body gestures as input, rather than 3D controllers like their immersive VR counterparts. One day, people may have multiple devices they use simultaneously: instead of visiting a web page on their phone and then doing AR or VR on that phone, they may visit on their phone and then send the page to their AR or VR headset, and expect the two devices to coordinate with each other.

One interface we intentionally didn't explore here is the idea of presenting the full 2D web page in 3D (AR or VR) and having the product "pop" out of the 2D page into the 3D world. As web browsers evolve toward displaying full HTML pages in 3D, such approaches might become possible, and desirable in some cases. One might imagine extensions to HTML that mark parts of the page as "3D capable" and provide 3D models that can be rendered instead of the 2D content.

Regardless of how the technology evolves, designers should continue to focus on offering visitors the best options for each modality, as they do with 2D pages, rather than offering every possible UI option in every modality. The emphasis should remain on the users’ experience, offering the best support for each of the displays a user might have, building on the Progressive Enhancement design strategy popular on today’s 2D web.

Today, the Web is a 2D platform accessible by all, and with the WebXR API, we will soon be using it to connect with one another in VR and AR. As we move toward this future, supporting the widest range of devices will continue to be a critical aspect of designing experiences for the web.

The Firefox Frontier – Facebook Container extension now includes Instagram and Facebook Messenger

To help you control the amount of data Facebook can gather about you, we have updated the Facebook Container extension to include Instagram and Facebook Messenger. This way, users of … Read more

The post Facebook Container extension now includes Instagram and Facebook Messenger appeared first on The Firefox Frontier.

Hacks.Mozilla.Org – What Makes a Great Extension?

We’re in the middle of our Firefox Quantum Extensions Challenge and we’ve been asking ourselves: What makes a great extension?

Great extensions add functionality and fun to Firefox, but there’s more to it than that. They’re easy to use, easy to understand, and easy to find. If you’re building one, here are some simple steps to help it shine.

Make It Dynamic

Firefox 57 added dynamic themes. What does that mean? They’re just like standard themes that change the look and feel of Firefox, but they can change over time. Create new themes for daytime, nighttime, private browsing, and all your favorite things.

Mozilla developer advocate Potch created a wonderful video explaining how dynamic themes work in Firefox:
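
Under the hood, a dynamic theme is just an ordinary theme object that the extension swaps at runtime with `browser.theme.update()`. A minimal sketch; the color values and the 07:00–19:00 daytime window are made up for illustration:

```javascript
// Two plain theme objects; the "dynamic" part is choosing between them.
// (accentcolor/textcolor are theme color keys from this era of the API.)
const dayTheme = { colors: { accentcolor: "#eef4ff", textcolor: "#222222" } };
const nightTheme = { colors: { accentcolor: "#1c1b22", textcolor: "#eeeeee" } };

// Daytime is 07:00-19:00 here, an arbitrary choice for the sketch.
function themeForHour(hour) {
  return hour >= 7 && hour < 19 ? dayTheme : nightTheme;
}

// In an extension's background script (browser context only):
// browser.theme.update(themeForHour(new Date().getHours()));
// Re-running that on a timer is what makes the theme "dynamic".
```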

Make It Fun

Browsing the web is fun, but it can be downright hilarious with an awesome extension. Firefox extensions support JavaScript, which means you can create and integrate full-featured games into the browser. Tab Invaders is a fun example. This remake of the arcade classic Space Invaders lets users blast open tabs into oblivion. It’s a cathartic way to clear your browsing history and start anew.

Tab Invaders is an extension game that harks back to Space Invaders, and lets you blast inactive open tabs.

But you don’t have to build a full-fledged game to have fun. Tabby Cat adds an interactive cartoon cat to every new tab. The cats nap, meow, and even let you pet them. Oh, and the cats can wear hats.

An image from Tabby Cat, a playful extension that puts a kitty cat on every tab.

Make It Functional

A fantastic extension helps users do everyday tasks faster and more easily. RememBear, from the makers of TunnelBear, remembers usernames and passwords (securely) and can generate new strong passwords. Tree Style Tab lets users order tabs in a collapsible tree structure instead of the traditional tab structure. The Grammarly extension integrates the entire Grammarly suite of writing and editing tools in any browser window. Excellent extensions deliver functionality. Think about ways to make browsing the web faster, easier, and more secure when you’re building your extension.

RememBear is an add-on companion to the RememBear app.

Make It Firefox

The Firefox UI is built on the Photon Design System. A good extension will fit seamlessly into the UI design language and seem to be a native part of the browser. Guidelines for typography, color, layout, and iconography are available to help you integrate your extension with the Firefox browser. Try to keep edgy or unique design elements apart from the main Firefox UI elements and stick to the Photon system when possible.

Make It Clear

When you upload an extension to (AMO), the Firefox add-ons site, pay close attention to its listing information. A clear, easy-to-read description and well-designed screenshots are key. The Notebook Web Clipper extension is a good example of an easy-to-read page with detailed descriptions and clear screenshots. Users know exactly what the extension does and how to use it. Make it easy for users to get started with your extension.

Make It Fresh

Firefox 60, now available in Firefox Beta, includes a host of brand-new APIs that let you do even more with your extensions. We’ve cracked open a cask of theme properties that let you control more parts of the Firefox browser than ever before, including tab color, toolbar icon color, frame color, and button colors.

The tabs API now supports a tabs.captureTab method that can be passed a tabId to capture the visible area of the specified tab. There are also new or improved APIs for proxies, network extensions, keyboard shortcuts, and messages.
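
As a hedged sketch of how an extension might use the new capture method (`snapshotTab` is a name invented here; `browser.tabs.captureTab` is the Firefox API):

```javascript
// Capture the visible area of a given tab as a data: URL (Firefox 60+).
// The tabs API is passed in so the helper can be exercised outside a browser.
async function snapshotTab(tabsApi, tabId) {
  // captureTab takes a tabId plus optional image details.
  return tabsApi.captureTab(tabId, { format: "png" });
}

// In an extension: const dataUrl = await snapshotTab(browser.tabs,;
```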

For a full breakdown of all the new improvements to extension APIs in Firefox 60, check out Firefox engineer Mike Conca’s excellent post on the Mozilla Add-ons Blog.

Submit Your Extension Today

The Quantum Extensions Challenge is running until April 15, 2018. Visit the Challenge homepage for rules, requirements, tips, tricks, and more. Prizes will be awarded to the top extensions in three categories: Games & Entertainment, Dynamic Themes, and Tab Manager/Organizer. Winners will be awarded an Apple iPad Pro 10.5” Wifi 256GB and be featured on Runners-up in each category will receive a $250 USD Amazon gift card. Enter today and keep making awesome extensions!

Eric Shepherd – Results of the MDN “Internal Link Optimization” SEO experiment

Our fourth and final SEO experiment for MDN, to optimize internal links within the open web documentation, is now finished. Optimizing internal links means ensuring that each page (in particular, each page whose search engine results page (SERP) position we want to improve) is easy to find.

This is done by ensuring that each page is linked to from as many topically relevant pages as possible. In addition, each page should in turn link outward to other relevant pages. The more quality links we have among related pages, the better our position is likely to be. The goal, from a user’s perspective, is to ensure that even if the first page they find doesn’t answer their question, it will link to a page that does (or at least might help them find the right one).

Creating links on MDN is technically pretty easy. There are several ways to do it, including:

  • Selecting the text to turn into a link and using the toolbar’s “add link” button
  • Using the “add link” hotkey (Ctrl-K or Cmd-K)
  • Any one of a large number of macros that generate properly-formatted links automatically, such as the domxref macro, which creates a link to a page within the MDN API reference; for example, {{domxref("RTCPeerConnection.createOffer()")}} creates a link to the RTCPeerConnection.createOffer() reference page, which looks like this: RTCPeerConnection.createOffer(). Many of the macros offer customization options, but the default is usually acceptable and is almost always better than trying to hand-make the links.

Our style guide talks about where links should be used. We even have a guide to creating links on MDN that covers the most common ways to do so. Start with these guidelines.

The content updates

Ten pages were selected for the internal link optimization experiment.

Changes made to the selected pages

In general, the changes were limited to adding links to pages; sometimes content had to be added to accomplish this, but the changes were kept relatively small.

  • Existing text that should have been a link but was not (such as mentions of terms that need definitions, or concepts for which relevant pages exist) was turned into links.
  • The uses of API names, element names, attributes, CSS properties, SVG element names, and so forth were turned into links when first used in a section, or when a number of paragraphs had elapsed since they were last linked. While repeated links to the same page don’t count for SEO purposes, they are good practice for usability.
  • Any phrase for which a more in-depth explanation is available was turned into a link.
  • Links to related concepts or topics were added where appropriate; for example, on the article about HTMLFormElement.elements, a note is provided with a link to the related Document.forms property.
  • Links to related functions, HTML elements, and similar items were added.
  • The “See also” section was reviewed and updated to include appropriate related content.
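
As a toy illustration of the first change (the real work was manual editing, not a script), one could flag term mentions that are not already link text:

```javascript
// Rough heuristic: given page HTML and a list of known terms, return the
// terms that appear in the text but never as the text of an <a> element.
function unlinkedMentions(html, terms) {
  const linked = new Set(
    [...html.matchAll(/<a [^>]*>(.*?)<\/a>/g)].map((m) => m[1])
  );
  return terms.filter((t) => html.includes(t) && !linked.has(t));
}
```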

Changes made to targeted pages

Targeted pages (pages to which links were added) in some cases received smaller changes, such as the addition of a link back to the original page; pages particularly in need of help also got new links to other relevant content.

Pages to be updated

The pages selected to be updated for this experiment are the ten listed in the results table below.

The results

The initial data was taken during the four weeks from December 29, 2017 through January 25, 2018. The “after” data was taken just over a month after the work was completed, covering the period of March 6 through April 2, 2018.

The results from this experiment were fascinating. Of all of the SEO experiments we’ve done, the results of this one were the most consistently positive.

Landing Page Unique Pageviews Organic Searches Entrances Hits
/en-us/docs/web/api/document_object_model/locating_dom_elements_using_selectors +46.38% +23.68% +29.33% +59.35%
/en-us/docs/web/api/htmlcollection/item +52.89% +35.71% +53.60% +38.56%
/en-us/docs/web/api/htmlformelement/elements +69.14% +57.83% +69.30% +70.74%
/en-us/docs/web/api/mediadevices/enumeratedevices +23.55% +14.53% +16.71% +15.67%
/en-us/docs/web/api/xmlhttprequest/html_in_xmlhttprequest +49.93% -3.50% +24.67% +59.63%
/en-us/docs/web/api/xmlserializer +36.24% +46.94% +31.50% +37.66%
/en-us/docs/web/css/all +46.15% +16.52% +23.51% +48.28%
/en-us/docs/web/css/inherit +22.55% +27.16% +20.58% +17.12%
/en-us/docs/web/css/object-position +102.78% +119.01% +105.56% +405.52%
/en-us/docs/web/css/unset +24.60% +18.45% +19.20% +35.01%
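
The percentages above are ordinary relative changes between the “before” and “after” four-week windows. For instance (with hypothetical counts), 10,000 unique pageviews before and 14,638 after would show as +46.38%:

```javascript
// Relative change between a "before" and "after" measurement, in percent.
function percentChange(before, after) {
  return ((after - before) / before) * 100;
}
```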

These results are amazing. Every single value is up, with the sole exception of a small decline in organic search views (that is, appearances in Google search result lists) for the article “HTML in XMLHttpRequest.” Most values are up substantially, with many being impressively improved.

Note: The data in the table above was updated on April 12, 2018 after realizing the “before” data set was inadvertently one day shorter than the “after” set. This reduced the improvements marginally, but did not affect the overall results.


Due to the implementation of the experiment and certain timing issues, there are uncertainties surrounding these results. Those include:

  • Ideally, much more time would have elapsed between completing the changes and collecting final data.
  • The experiment began during the winter holiday season, when overall site usage is at a low point.
  • There was overall site growth of MDN traffic over the time this experiment was underway.


Certain conclusions can be reached:

  1. The degree to which internal link improvements benefited traffic to these pages can’t be ignored, even after factoring in the uncertainties. This is easily the most benefit we got from any experiment, and on top of that, the amount of work required was often much lower. This should be a high priority portion of our SEO plans.
  2. The MDN meta-documentation will be further improved to enhance recommendations around linking among pages on MDN.
  3. We should consider enhancements to macros used to build links to make them easier to use, especially in cases where we commonly have to override default behavior to get the desired output. Simplifying the use of macros to create links will make linking faster and easier and therefore more common.
  4. We’ll re-evaluate the data after more time has passed to ensure the results are correct.


If you’d like to comment on this, or ask questions about the results or the work involved in making these changes, please feel free to follow up or comment on the thread I’ve created on our Discourse forum.