Mozilla Future Releases Blog: Moving Firefox to a faster 4-week release cycle

This article is cross-posted from Mozilla Hacks


We typically ship a major Firefox browser (Desktop and Android) release every 6 to 8 weeks. Building and releasing a browser is complicated and involves many players. To optimize the process, and make it more reliable for all users, over the years we’ve developed a phased release strategy that includes ‘pre-release’ channels: Firefox Nightly, Beta, and Developer Edition. With this approach, we can test and stabilize new features before delivering them to the majority of Firefox users via general release.

Today’s announcement

Today we’re excited to announce that we’re moving to a four-week release cycle! We’re adjusting our cadence to increase our agility and bring you new features more quickly. In recent quarters, we’ve had many requests to take features to market sooner. Feature teams are increasingly working in sprints that align better with shorter release cycles. Considering these factors, it is time we changed our release cadence.

Starting in Q1 2020, we plan to ship a major Firefox release every 4 weeks. The Firefox ESR (Extended Support Release, for enterprise users) cadence will remain the same. In the years to come, we anticipate a major ESR release every 12 months, with a 3-month support overlap between each new ESR and the end of life of the previous one. The next two major ESR releases will be ~June 2020 and ~June 2021.

Shorter release cycles provide greater flexibility to support product planning and priority changes due to business or market requirements. With four-week cycles, we can be more agile and ship features faster, while applying the same rigor and due diligence needed for a high-quality and stable release. We also put new features and implementations of new Web APIs into the hands of developers more quickly. (This is what we’ve been doing recently with CSS spec implementations and updates, for instance.)

In order to maintain quality and minimize risk in a shortened cycle, we must:

  • Ensure Firefox engineering productivity is not negatively impacted.
  • Speed up the regression feedback loop from rollout to detection to resolution.
  • Be able to control feature rollout based on release readiness.
  • Ensure adequate testing of larger features that span multiple release cycles.
  • Have clear, consistent mitigation and decision processes.

Firefox rollouts and feature experiments

Given a shorter Beta cycle, support for our pre-release channel users is essential, including developers using Firefox Beta or Developer Edition. We intend to roll out fixes to them as quickly as possible. Today, we produce two Beta builds per week. Going forward, we will move to more frequent Beta builds, similar to what we have today in Firefox Nightly.

Staged rollouts of features will be a continued best practice. This approach helps minimize unexpected (quality, stability or performance) disruptions to our release end-users. For instance, if a feature is deemed high-risk, we will plan for slow rollout to end-users and turn the feature off dynamically if needed.

We will continue to foster a culture of feature experimentation and A/B testing before rollout to release. Currently, the duration of experiments is not tied to the release cycle length and is therefore not impacted by this change. In fact, experiment length is predominantly a function of the time needed for user enrollment, the time to trigger the study or experiment and collect the necessary data, followed by the data analysis needed to make a go/no-go decision.

Despite the shorter release cycles, we will do our best to localize all new strings in all locales supported by Firefox. We value our end-users from all across the globe. And we will continue to delight you with localized versions of Firefox.

Firefox release schedule 2019 – 2020

Firefox engineering will deploy this change gradually, starting with Firefox 71. We aim to reach a 4-week release cadence by Q1 2020. The table below lists Firefox versions and planned launch dates. Note: these dates are subject to change for business reasons.

Table: release dates for Firefox GA and pre-release channels, 2019-2020.

Process and product quality metrics

As we gradually reduce our release cycle length from 7 weeks down to 6, 5, and finally 4 weeks, we will monitor closely. We’ll watch aspects like release scope change; developer productivity impact (tree closures, build failures); beta churn (uplifts, new regressions); and overall release stabilization and quality (stability, performance, carryover regressions). Our main goal is to identify bottlenecks that prevent us from being more agile in our release cadence. Should our metrics highlight an unexpected trend, we will put appropriate mitigations in place.

Finally, projects that consume Firefox mainline or ESR releases, such as SpiderMonkey and Tor, now have more frequent releases to choose from. These 4-week releases will be the most stable, fastest, and best quality Firefox builds.

In closing, we hope you’ll enjoy the new faster cadence of Firefox releases. You can always refer to the release schedule above for the latest release dates and other information. Got questions? Please reach out to us by email.

The post Moving Firefox to a faster 4-week release cycle appeared first on Future Releases.

The Firefox Frontier: What to do after a data breach

You saw the news alert. You got an email, either from Firefox Monitor or a company where you have an account. There’s been a security incident — a data breach. … Read more

The post What to do after a data breach appeared first on The Firefox Frontier.

Alexandre Poirot: Trabant Calculator - A data visualization of TreeHerder job durations

Link to this tool (its sources)

What is this tool about?

Its goal is to give a better sense of how much computation is going on in Mozilla’s automation. The current TreeHerder UI surfaces job durations, but only per job. To get a sense of how much we stress our automation, we have to click on each individual job and do the sum manually. This tool does that sum for you. It also tries to rank the jobs by their durations. I would like to open minds about the possible impact we may have on the environment here. For that, I am translating these durations into something fun that doesn’t necessarily make any sense.

What is that car GIF?

The car is a Trabant. This car is often seen as symbolic of the former East Germany and the collapse of the Eastern Bloc in general. This part of the tool is just a joke. You should only treat the durations as trustworthy data. Translating a worker’s duration into CO2 emissions is almost impossible to get right. And yet that’s what I do here: translate worker duration into a potential energy consumption, which I translate into a potential CO2 emission, before finally translating that CO2 emission into the equivalent emission of a Trabant driven over a given distance in kilometers.

Power consumption of an AWS worker per hour

Here is a really rough computation of Amazon AWS CO2 emissions for a t4.large worker. The power usage of the machines these workers run on could be 0.6 kW. Such a worker uses 25% of one of these machines. Then let’s say that Amazon’s Power Usage Effectiveness is 1.1. It means that one hour of a worker consumes 0.165 kWh (0.6 * 0.25 * 1.1).

CO2 emission of electricity per kWh

Based on US Environmental Protection Agency data (source), the average CO2 emission is 998.4 lb/MWh. So 998.4 * 453.59237 (g/lb) = 452,866 g/MWh, and 452,866 / 1000 ≈ 452 g of CO2 per kWh. Unfortunately, the data is already old: it comes from a 2018 report, which appears to cover 2017 data.

CO2 emission of a Trabant per km

A Trabant emits 170 g of CO2 per km (source). (Another source reports 140 g, but let’s assume the higher figure.)

Final computation

Trabant kilometers = "Hours of computation" * "Power consumption of a worker per hour"
                     * "CO2 emission of electricity per kWh"
                     / "CO2 emission of a Trabant per km"
Trabant kilometers = "Hours of computation" * 0.165 * 452 / 170
=> Trabant kilometers ≈ "Hours of computation" * 0.4387
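As a quick illustration, here is the same arithmetic as a small JavaScript sketch. It is not taken from the tool’s source code; the constant and function names are made up for this example.

// Illustrative sketch of the conversion above; the constants come from the
// estimates in this post and all names are hypothetical.
const KWH_PER_WORKER_HOUR = 0.6 * 0.25 * 1.1; // 0.165 kWh per worker-hour
const CO2_G_PER_KWH = 452;                    // US grid average (2017 EPA data)
const CO2_G_PER_TRABANT_KM = 170;             // high-end Trabant estimate

function trabantKilometers(hoursOfComputation) {
  return (hoursOfComputation * KWH_PER_WORKER_HOUR * CO2_G_PER_KWH) / CO2_G_PER_TRABANT_KM;
}

console.log(trabantKilometers(100).toFixed(1)); // "43.9" km for 100 hours of computation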

All of this must be wrong

Except the durations! Everything else is highly subject to debate.
Sources are here, and contributions or feedback are welcomed.

The Mozilla Blog: Examining AI’s Effect on Media and Truth

Mozilla is announcing its eight latest Creative Media Awards. These art and advocacy projects highlight how AI intersects with online media and truth — and impacts our everyday lives.


Today, one of the biggest issues facing the internet — and society — is misinformation.

It’s a complicated issue, but this much is certain: The artificial intelligence (AI) powering the internet is complicit. Platforms like YouTube and Facebook recommend and amplify content that will keep us clicking, even if it’s radical or flat out wrong.

Earlier this year, Mozilla called for art and advocacy projects that illuminate the role AI plays in spreading misinformation. And today, we’re announcing the winners: Eight projects that highlight how AI like machine learning impacts our understanding of the truth.

These eight projects will receive Mozilla Creative Media Awards totalling $200,000, and will launch to the public by May 2020. They include a Turing Test app; a YouTube recommendation simulator; educational deepfakes; and more. Awardees hail from Japan, the Netherlands, Uganda, and the U.S. Learn more about each awardee below.

Mozilla’s Creative Media Awards fuel the people and projects on the front lines of the internet health movement. Past Creative Media Award winners have built mock dating apps that highlight algorithmic discrimination; they’ve created games that simulate the inherent bias of automated hiring; and they’ve published clever tutorials that mix cosmetic advice with cybersecurity best practices.

These eight awards align with Mozilla’s focus on fostering more trustworthy AI.

The winners


[1] Truth-or-Dare Turing Test | by Foreign Objects in the U.S.

This project explores deceptive AI that mimics real humans. Users play truth-or-dare with another entity, and at the conclusion of the game, must guess if they were playing with a fellow human or an AI. (“Truths” are played out using text, and “dares” are played out using an online sketchpad.) The project also includes a website outlining the state of mimicry technology, its uses, and its dangers.


[2] Swap the Curators in the Tube | by Tomo Kihara in Japan

This project explores how recommendation engines present different realities to different people. Users will peruse the YouTube recommendations of five wildly different personas — including a conspiracist and a racist persona — to experience how their recommendations differ.


[3] An Interview with ALEX | by Carrie Wang in the U.S.

The project is a browser-based experience that simulates a job interview with an AI in a future of gamified work and total surveillance. As the interview progresses, users learn that this automated HR manager is covering up the truth of this job, and using facial and speech recognition to make assumptions and decisions about them.


[4] The Future of Memory | by Xiaowei Wang, Jasmine Wang, and Yang Yuting in the U.S.

This project explores algorithmic censorship, and the ways language can be made illegible to such algorithms. It reverse-engineers how automated censors work, to provide a toolkit of tactics using a new “machine resistant” language, composed of emoji, memes, steganography and homophones. The project will also archive censored materials on a distributed, physical network of offline modules.


[5] Choose Your Own Fake News | by Pollicy in Uganda

This project uses comics and audio to explore how misinformation spreads across the African continent. Users engage in a choose-your-own-adventure game that simulates how retweets, comments, and other digital actions can sow misinformation, and how that misinformation intersects with gender, religion, and ethnicity.



[6] Deep Reckonings | by Stephanie Lepp in the U.S.

This project uses deepfakes to address the issue of deepfakes. Three false videos will show public figures — like tech executives — reckoning with the dangers of synthetic media. Each video will be clearly watermarked and labeled as a deepfake to prevent misinformation.


[7] In Event of Moon Disaster | by Halsey Burgund, Francesca Panetta, Magnus Bjerg Mortensen, Jeff DelViscio and the MIT Center for Advanced Virtuality

This project uses the 1969 moon landing to explore the topic of modern misinformation. Real coverage of the landing will be presented on a website alongside deepfakes and other false content, to highlight the difficulty of telling the two apart. And by tracking viewers’ attention, the project will reveal which content captivated viewers more.


[8] Most FACE Ever | by Kyle McDonald in the U.S.

This project teaches users about computer vision and facial recognition technology through playful challenges. Users will enable their webcam, engage with a facial recognition AI, and try to “look” a certain way — say, “criminal,” or “white.” The game reveals how inaccurate and biased facial recognition can often be.

These eight awardees were selected based on quantitative scoring of their applications by a review committee, and a qualitative discussion at a review committee meeting. Committee members included Mozilla staff, current and alumni Mozilla Fellows and Awardees, and outside experts. The selection criteria are designed to evaluate the merits of the proposed approach. Diversity in applicant background, past work, and medium was also considered.

These awards are part of the NetGain Partnership, a collaboration between Mozilla, Ford Foundation, Knight Foundation, MacArthur Foundation, and the Open Society Foundation. The goal of this philanthropic collaboration is to advance the public interest in the digital age.

Also see (May 2019): Seeking Art that Explores AI, Media, and Truth

The post Examining AI’s Effect on Media and Truth appeared first on The Mozilla Blog.

Nick Fitzgerald: Flatulence, Crystals, and Happy Little Accidents

The recording of my Rust Conf talk on algorithmic art and pen plotters is up on YouTube!

Here is the abstract:

Sometimes programming Rust can feel like serious business. Let’s reject the absurdity of the real world and slip into solipsism with generative art. How does Rust hold up as a paint brush? And what can we learn when our fantasy worlds bleed back into reality?

I really enjoyed giving this talk, and I think it went well. I want more creative coding, joy, surprise, and silliness in the Rust community. This talk is a small attempt at contributing to that, and I hope folks left inspired.

Without further ado, here is the video:

And here are the slides. You can view them below, or open them in a new window. Navigate between slides with the arrow keys or space bar.


Mozilla Open Policy & Advocacy Blog: Governments should work to strengthen online security, not undermine it

On Friday, Mozilla filed comments in a case brought by Privacy International in the European Court of Human Rights involving government “computer network exploitation” (“CNE”)—or, as it is more colloquially known, government hacking.

While the case focuses on the direct privacy and freedom of expression implications of UK government hacking, Mozilla intervened in order to showcase the further, downstream risks to users and internet security inherent in state CNE. Our submission highlights the security and related privacy threats from government stockpiling and use of technology vulnerabilities and exploits.

Government CNE relies on the secret discovery or introduction of vulnerabilities—i.e., bugs in software, computers, networks, or other systems that create security weaknesses. “Exploits” are then built on top of the vulnerabilities. These exploits are essentially tools that take advantage of vulnerabilities in order to overcome the security of the software, hardware, or system for purposes of information gathering or disruption.

When such vulnerabilities are kept secret, they can’t be patched by companies, and the products containing the vulnerabilities continue to be distributed, leaving people at risk. The problem arises because no one—including government—can perfectly secure information about a vulnerability. Vulnerabilities can be and are independently discovered by third parties and inadvertently leaked or stolen from government. In these cases where companies haven’t had an opportunity to patch them before they get loose, vulnerabilities are ripe for exploitation by cybercriminals, other bad actors, and even other governments [1], putting users at immediate risk.

This isn’t a theoretical concern. For example, the findings of one study suggest that within a year, vulnerabilities undisclosed by a state intelligence agency may be rediscovered up to 15% of the time [2]. Also, one of the worst cyber attacks in history was caused by a vulnerability and exploit stolen from the NSA in 2017 that affected computers running Microsoft Windows [3]. The devastation wreaked through use of that tool continues apace today [4].

This example also shows how damaging it can be when vulnerabilities impact products that are in use by tens or hundreds of millions of people, even if the actual government exploit was only intended for use against one or a handful of targets.

As more and more of our lives are connected, governments and companies alike must commit to ensuring strong security. Yet state CNE significantly contributes to the prevalence of vulnerabilities that are ripe for exploitation by cybercriminals and other bad actors and can result in serious privacy and security risks and damage to citizens, enterprises, public services, and governments. Mozilla believes that governments can and should contribute to greater security and privacy for their citizens by minimizing their use of CNE and disclosing vulnerabilities to vendors as they find them.

[2] Rediscovery (belfer-revision).pdf

The post Governments should work to strengthen online security, not undermine it appeared first on Open Policy & Advocacy.

William Lachance: mozregression update: python 3 edition

For those who are still wondering, yup, I am still maintaining mozregression, though increasingly reluctantly. Given how important this project is to the development of Firefox (getting a regression window using mozregression is standard operating procedure whenever a new bug is reported in Firefox), it feels pretty vital, so I continue out of some sense of obligation — but really, someone more interested in Mozilla’s build, automation and testing systems would be better suited to this task: over the past few years, my interests and focus have shifted away from this area to building up Mozilla’s data storage and visualization platform.

This post will describe some of the things that have happened in the last year and where I see the project going. My hope is to attract some new blood to add some needed features to the project and maybe take on some of the maintainership duties.

python 3

The most important update is that, as of today, the command-line version of mozregression (v3.0.1) should work with python 3.5+. The modernize tool did most of the work for us, though there were some unit tests that needed updating: special thanks to @gloomy-ghost for helping with that.

For now, we will continue to support python 2.7 in parallel, mainly because the GUI has not yet been ported to python 3 (more on that later) and we have CI to make sure it doesn’t break.

other updates

The last year has mostly been one of maintenance. Thanks in particular to Ian Moody (:kwan) for his work throughout the year — including patches to adapt mozregression support to our new updates policy and shippable builds (bug 1532412), and Kartikaya Gupta (:kats) for adding support for bisecting the GeckoView example app (bug 1507225).

future work

There are a bunch of things I see us wanting to add or change with mozregression over the next year or so. I might get to some of these if I have some spare cycles, but probably best not to count on it:

  • Port the mozregression GUI to Python 3 (bug 1581633). As mentioned above, the command-line client works with python 3, but we have yet to port the GUI. We should do that. This probably also entails porting the GUI to use PyQt5 (which is pip-installable and thus much easier to integrate into a CI process), see bug 1426766.
  • Make self-contained GUI builds available for MacOS X (bug 1425105) and Linux (bug 1581643).
  • Improve our mechanism for producing a standalone version of the GUI in general. We’ve used cx_Freeze which mostly works ok, but has a number of problems (e.g. it pulls in a bunch of unnecessary dependencies, which bloats the size of the installer). Upgrading the GUI to use python 3 may alleviate some of these issues, but it might be worth considering other options in this space, like Gregory Szorc’s pyoxidizer.
  • Add some kind of telemetry to mozregression to measure usage of this tool (bug 1581647). My anecdotal experience is that this tool is pretty invaluable for Firefox development and QA, but this is not immediately apparent to Mozilla’s leadership and it’s thus very difficult to convince people to spend their cycles on maintaining and improving this tool. Field data may help change that story.
  • Supporting new Mozilla products which aren’t built (entirely) out of mozilla-central, most especially Fenix (bug 1556042) and Firefox Reality (bug 1568488). This is probably rather involved (mozregression has a big pile of assumptions about how the builds it pulls down are stored and organized) but that doesn’t mean that this work isn’t necessary.

If you’re interested in working on any of the above, please feel free to dive in on one of the above bugs. I can’t offer formal mentorship, but am happy to help out where I can.

William Lachance: Time for some project updates

I’ve been a bit bad about updating this blog over the past year or so, though this hasn’t meant there haven’t been things to talk about. For the next couple weeks, I’m going to try to give some updates on the projects I have been spending time on in the past year, both old and new. I’m going to begin with some of the less-loved things I’ve been working on, partially in an attempt to motivate some forward-motion on things that I believe are rather important to Mozilla.

More to come.

QMO: Firefox 70 Beta 6 Testday Results

Hello Mozillians!

As you may already know, on Friday, September 13th we held a new Testday event for Firefox 70 Beta 6.

Thank you all for helping us make Mozilla a better place: Gabriela (gaby2300), Dan Caseley (Fishbowler) and Aishwarya Narasimhan!

Result: Several test cases were executed for Protection Report and Privacy Panel UI Updates.

Thanks for another awesome testday, we appreciate your contribution! 🙂

We hope to see you all in our next events, keep an eye on QMO.

We will make announcements as soon as something shows up!


Onno Ekker: Checklist

Because I always seem to forget one step or another when creating a new version of my add-on, I decided this time to make myself a nice checklist. Let’s hope that when I find out I forgot something this time too, I remember to update the checklist 🙂

☑ Fix bug or implement a new feature.
☑ Update status of tickets on SourceForge.
☑ Git push the changes to SourceForge and GitHub.
☑ Document the changes in the release notes and in the version history.
☑ Upload alpha/beta version to BabelZilla and Crowdin so localizers can translate it.
☑ Create screenshots showing where new strings are used to help localizers and upload them to Crowdin.
☑ Verify that the new version works as expected on Windows, Linux and Mac.
☑ Verify that the new version works in the latest Thunderbird and is also compatible with older versions.
☑ Verify that the new version works in the latest SeaMonkey and is also compatible with older versions.
☑ Verify that I have downloaded and merged all language changes and synchronize them also on BabelZilla and Crowdin.
☑ Verify that all strings in all languages are localized and if not, urge localizers to do so.
☑ Decide on the version number of the new version x.y.z. Only bug fixes? Increase z. New function? Increase y.
☑ Finalize the release notes and version history and update the compatibility info.
☑ Build the new version for SourceForge and for
☑ Generate sha256 checksum and gpg signature for the new version for SourceForge.
☑ Git push the new version and gpg signature to SourceForge.
☑ Upload the new version to SourceForge and move old version to archive.
☑ Wait a bit until the new version has been copied to SourceForge’s mirrors and shows up as the latest version.
☑ Update the website with the release notes, version history, new download link, and auto update information.
☑ Verify that auto update from older versions of Mail Redirect to new version works.
☑ Register the new version on SourceForge’s ticket system, close the old version and create a new target version.
☑ Git push the website to SourceForge to save version history.
☑ Draft the release also on GitHub and make it available.
☑ Write a message to the mailing list to tell people about the new release, functions, and bug fixes.
☑ Do the same in a blog post.
☑ Wait a bit for reactions.
☑ Submit the new version of Mail Redirect to the review queue of
☑ Keep fingers crossed that it isn’t automatically rejected by the automatic validation process because of an error I made.
☑ Wait a couple of days before the add-on is reviewed and accepted by one of the volunteer reviewers.
☑ Wait a couple of more days and ping a volunteer reviewer on irc and ask him/her to review my add-on.
☑ Keep an eye on the download and usage statistics to see the uptake of the new version and wonder about how many people are still using an older version even though Mail Redirect is backwards compatible.
☐ Ask for donations, so users can support the continued development and show how happy they are to use this add-on.

Armen Zambrano: A web performance issue

Back in July and August, I was looking into a performance issue in Treeherder. Treeherder is a Django app running on Heroku with a MySQL database via RDS. This post will cover some knowledge gained while investigating the performance issue and the solutions for it.

NOTE: Some details have been skipped to help the readability of this post. It’s a long read as it is!


Treeherder is a public site mainly used by Mozilla staff. It’s used to determine if engineers have introduced code regressions in Firefox and other products. The performance issue that I investigated would make the site unusable for a long period of time (a few minutes to 20 minutes) multiple times per week. An outage like this would require blocking engineers from pushing new code, since it would be practically impossible to determine the health of the code tree during an outage. In other words, the outages would keep “the trees” closed for business. You can see the tracking bug for this work here.

The smoking gun

On June 18th during Mozilla’s All Hands conference, I received a performance alert and decided to investigate it. I decided to use New Relic, which was my first time using it and also my first time investigating a performance issue on a complex web site. New Relic made it easy and intuitive to get to what I wanted to see.

Figure: JobsViewSet API affected by the MySQL job selection slowdown

The UI slowdowns came from API slowdowns (and timeouts) due to database slowdowns. The API that was most affected was the JobsViewSet API, which is heavily used by the front-end developers. The spike shown on the graph above was rather anomalous. After some investigation I found that a developer unintentionally pushed code with a command that would trigger an absurd number of performance jobs. A developer normally would request one performance job per code push rather than ten. As these jobs finished (very close together in time) their performance data would be inserted into the database and make the DB crawl.

Figure: Normally you would see 1 letter per performance job instead of 10

Since I was new to the team and the code-base, I tried to get input from the rest of my coworkers. We discussed using Django’s bulk_create to reduce the impact on the DB. I was not completely satisfied with the solution because we did not yet understand the root issue. From my Release Engineering years I remembered that you need to find the root issue, or you’re just putting on a band-aid that will fall off sooner or later. Treeherder’s infrastructure had a limitation somewhere, and a code change might only solve the problem temporarily. We would hit a different performance issue down the road. A fix at the root of the problem was required.

Gaining insight

I knew I needed proper insight as to what was happening plus an understanding of how each part of the data ingestion pipeline worked together. In order to know these things I needed metrics, and New Relic helped me to create a custom dashboard.

Figure: A few graphs from the custom New Relic dashboard

Similar set-up to test fixes

I made sure that the Heroku and RDS set-up between production and stage were as similar as possible. This is important if you want to try changes on stage first, measure it, and compare it with production.

For instance, I requested EC2 instance type changes, including upgrading to the current EC2 M5 instance types. I can’t find the exact Heroku changes that I made, but I made the various ingestion workers similar in type and in number.

Consult others

I had a very primitive knowledge of MySql at scale and I knew that I would have to lean on others to understand the potential solution. I want to thank dividehex, coop and ckolos for all their time spent listening and all the knowledge they shared with me.

The cap you didn’t know you have

After reading a lot of documentation about Amazon’s RDS set-up, I determined that slowdowns in the database were related to IOPS spikes. Amazon gives you 3 IOPS per GB, and with 1 terabyte of storage we had 3,000 IOPS as our baseline. The graph below shows that at times we would get above that max baseline.

Figure: CloudWatch graph showing the sum of read and write IOPS for Treeherder’s database

To increase the IOPS baseline we could either increase the storage size or switch from General Purpose SSD to Provisioned IOPS storage. The cost of the different storage type was much higher, so we decided to double our storage, thus doubling our IOPS baseline. You can see in the graph below that we’re constantly above our previous baseline. This change helped Treeherder’s performance a lot.

Figure: Graph after doubling Treeherder’s IOPS limit

In order to prevent getting into such a state in the future, I also created a CloudWatch alert. We would get alerted if the combined IOPS is greater than 5,700 IOPS for 6 datapoints within 10 minutes.
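For anyone curious what an alert like this looks like in code, here is a rough sketch using the AWS SDK for JavaScript. This is not the actual Treeherder configuration: the alarm and instance names are made up, and while the real alert combines read and write IOPS, this simplified version watches a single metric.

// Sketch only: a CloudWatch alarm on RDS write IOPS (AWS SDK for JavaScript v2).
// All names are hypothetical; the real alert sums ReadIOPS and WriteIOPS.
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

cloudwatch.putMetricAlarm({
  AlarmName: 'treeherder-rds-iops-high',                                    // hypothetical
  Namespace: 'AWS/RDS',
  MetricName: 'WriteIOPS',
  Dimensions: [{ Name: 'DBInstanceIdentifier', Value: 'treeherder-prod' }], // hypothetical
  Statistic: 'Average',
  Period: 60,              // one datapoint per minute
  EvaluationPeriods: 10,   // look at the last 10 minutes
  DatapointsToAlarm: 6,    // alarm when 6 of those datapoints breach the threshold
  Threshold: 5700,
  ComparisonOperator: 'GreaterThanThreshold'
}, (err) => {
  if (err) console.error('Failed to create alarm', err);
});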

Auto Scaling

One of the problems with Treeherder’s UI is that it hits the backend quite heavily. The load on the backend depends on the number of users using the site, the number of pushes in view, and the number of jobs that each push has.

Fortunately, Heroku allows auto scaling for web nodes. This required upgrading from the Standard 2x nodes to the Performance nodes. Configuring the auto scaling is very simple as you can see in the screenshot below. All you have to do is define the minimum and maximum number of nodes, plus the threshold after which you want new nodes to be spun up.

Figure: Heroku’s auto scaling feature on display

Final words

Troubleshooting this problem was quite a learning experience. I learned a lot about the project, the monitoring tools available, the RDS set up, Treeherder’s data pipeline, the value of collaboration and the importance of measuring.

I don’t want to end this post without mentioning that this investigation was only bearable because of the great New Relic set-up. That is something Ed Morley accomplished while at Mozilla, and we should be very grateful that he did.

Mozilla VR Blog: Creating privacy-centric virtual spaces


We now live in a world with instantaneous communication unrestrained by geography. While a generation ago, we would be limited by the speed of the post, now we’re limited by the speed of information on the Internet. This has changed how we connect with other people.

As immersive devices become more affordable, social spaces in virtual reality (VR) will become more integrated into our daily lives and interactions with friends, family, and strangers. Social media has enabled rapid pseudonymous communication, which can be directed at both a single person and large groups. If social VR is the next evolution of this, what approaches will result in spaces that respect user identities, autonomy, and safety?

We need spaces that reflect how we interact with others on a daily basis.

Social spaces: IRL and IVR

Often, when people think about social VR, what tends to come to mind are visions from the worlds of science fiction stories: Snow Crash, Ready Player One, The Matrix - huge worlds that involve thousands of strangers interacting virtually on a day to day basis. In today’s social VR ecosystem, many applications take a similarly public approach: new users are often encouraged (or forced) by the system to interact with new people in the name of developing relationships with strangers who are also participating in the shared world. This can result in more dynamic and populated spaces, but in a way that isn’t inherently understood from our regular interactions.

This approach doesn’t mirror our usual day-to-day experiences—instead of spending time with strangers, we mostly interact with people we know. Whether we’re in a private, semi-public, or public space, we tend to stick to familiarity. We can define the privacy of space by thinking about who has access to a location, and the degree to which there is established trust among other people you encounter there.

Private: a controlled space where all individuals are known to each other. In the physical world, your home is an example of a private space—you know anyone invited into your home, whether they’re a close associate or a passing acquaintance (like a plumber).
Semi-public: a semi-controlled space where all individuals are associated with each other. For example, you might not know everyone in your workplace, but you’re all connected via your employer.
Public: a public space made up of a lot of different, separate groups of people who might not have established relationships or connections. In a restaurant, while you know the group you’re dining with, you likely don’t know anyone else.

Creating privacy-centric virtual spaces

While we might encounter strangers in public or semi-public spaces, most of our interactions are still with people we know. This should extend to the virtual world. However, VR devices haven’t been widely available until recently, so most companies building virtual worlds have designed their spaces in a way that prioritizes getting people in the same space, regardless of whether or not those users already know each other.

For many social VR systems, the platform hosting spaces often networks different environments and worlds together and provides a centralized directory of user-created content to go explore. While this type of discovery has benefits and values, in the physical world, we largely spend time with the same people from day to day. Why don’t we design a social platform around this?

Mozilla Hubs is a social VR platform created to provide spaces that more accurately emulate our IRL interactions. Instead of hosting a connected, open ecosystem, users create their own independent, private-by-default rooms. This creates a world where instead of wandering into others’ spaces, you intentionally invite people you know into your space.

Private by default

Communities and societies often establish their own cultural norms, signals, inside jokes, and unspoken (or written) rules — these carry over to online spaces. It can be difficult for people to be thrown into brand-new groups of users without this understanding, and there are often no guarantees that the people you’ll be interacting with in these public spaces will be receptive to other users who are joining. In contrast to these public-first platforms, we’ve designed our social VR platform, Hubs, to be private by default. This means that instead of being in an environment with strangers from the outset, Hubs rooms are designed to be private to the room owner, who can then choose who they invite into the space with the room access link.

Protecting public spaces

When we’re in public spaces, we have different sets of implied rules than the social norms that we might abide by when we’re in private. In virtual spaces, these rules aren’t always as clear and different people will behave differently in the absence of established rules or expectations. Hubs allows communities to set up their own public spaces, so that they can bring their own social norms into the spaces. When people are meeting virtually, it’s important to consider the types of interactions that you’re encouraging.

Because access to a Hubs room is predicated on having the invitation URL, the degree to which that link is shared by the room owner or visitors to the room will dictate how public or private a space is. If you know that the only people in a Hubs room are you and two of your closest friends, you probably have a pretty good sense of how the three of you interact together. If you’re hosting a meetup and expecting a crowd, those behaviors can be less clear. Without intentional community management practices, virtual spaces can turn ugly. Here are some things that you could consider to keep semi-public or public Hubs rooms safe and welcoming:

  • Keep the distribution of the Hubs link limited to known groups of trusted individuals and consider using a form or registration to handle larger events.
  • Use an integration like the Hubs Discord Bot to verify users against a known identity. Removing users from a linked Discord channel also removes their ability to enter an authenticated Hubs room.
  • Lock down room permissions in the room to limit who can create new objects in the room or draw with the pen tool.
  • Create a code of conduct or set of community guidelines that are posted in the Hubs room near the room entry point.
  • Assign trusted users to act as moderators, so someone is available in the space to help welcome visitors and enforce positive conduct.

We need social spaces that respect and empower participants. Here at Mozilla, we’re creating a platform that more closely reflects how we interact with others IRL. Our spaces are private by default, and Hubs allows users to control who enters their space and how visitors can behave.

Mozilla Hubs is an open source social VR platform—come try it out, or contribute.

Mozilla VR Blog: Multiview on WebXR


The WebGL multiview extension is already available in several browsers and 3D web engines, and it could easily help improve the performance of your WebXR application.

What is multiview?

When VR first arrived, many engines supported stereo rendering by running all the render stages twice, once for each camera/eye. While it works, it is highly inefficient.

for (eye in eyes)
	renderScene(eye)

Where renderScene will set up the viewport, shaders, and states every time it is called. This will double the cost of rendering every frame.

Later on, some optimizations started to appear in order to improve the performance and minimize the state changes.

for (object in scene) 
	for (eye in eyes)
		renderObject(object, eye)

Even if we reduce the number of state changes, by switching programs and grouping objects, the number of draw calls remains the same: two times the number of objects.

In order to minimize this bottleneck, the multiview extension was created. The TL;DR of this extension is: Using just one drawcall you can draw on multiple targets, reducing the overhead per view.


This is done by modifying your shader uniforms with the information for each view and accessing them with the gl_ViewID_OVR, similar to how the Instancing API works.

#version 300 es
#extension GL_OVR_multiview2 : require
layout(num_views = 2) in;
in vec4 inPos;
uniform mat4 u_viewMatrices[2];
void main() {
    gl_Position = u_viewMatrices[gl_ViewID_OVR] * inPos;
}

The resulting render loop with the multiview extension will look like:

for (object in scene)
    setUniformsForBothEyes() // Left/Right camera matrices
    renderObject(object)     // a single draw call renders to both views

This extension can be used to improve multiple tasks, such as cascaded shadow maps, rendering cubemaps, or rendering multiple viewports as in CAD software, although the most common use case is stereo rendering.

Stereo rendering is also our main target, as this will improve the VR rendering path performance with just a few modifications in a 3D engine. Currently, most headsets have two views, but there are prototypes of headsets with ultra-wide FOV using 4 views, which is currently the maximum number of views supported by multiview.

Multiview in WebGL

Once the OpenGL OVR_multiview2 specification was created, the WebGL working group started to make a WebGL version of this API.

It’s been a while since our first experiment supporting multiview on Servo and three.js. Back then it was quite a challenge to support WEBGL_multiview: it was based on opaque framebuffers and it was possible to use it with WebGL1, but the shaders needed to be compiled with GLSL 3.00 support, which was only available in WebGL2, so some hacks on the Servo side were needed in order to get it running. At that time the WebVR spec had a proposal to support multiview, but it was not approved.

Thanks to the work of the WebGL WG, the multiview situation has improved a lot in the last few months. The specification is already in the Community Approved status, which means that browsers could ship it enabled by default (as we do in Firefox Desktop 70 and Firefox Reality 1.4).

Some important restrictions of the final specification to notice:

  • It only supports WebGL2 contexts, as it needs GLSL 3.00 and texture arrays.
  • Currently there is no way to use multiview to render to a multisampled backbuffer, so you should create contexts with antialias: false. (The WebGL WG is working on a solution for this)

Web engines with multiview support

We have been working for a while on adding multiview support to three.js (PR). Currently it is possible to get the benefits of multiview automatically as long as the extension is available and you define a WebGL2 context without antialias:

var context = canvas.getContext( 'webgl2', { antialias: false } );
renderer = new THREE.WebGLRenderer( { canvas: canvas, context: context } );

You can see a three.js example using multiview here (source code).

A-Frame is based on three.js so they should get multiview support as soon as they update to the latest release.

Babylon.js has had support for OVR_multiview2 already for a while (more info).

For details on how to use multiview directly, without any third-party engine, you could take a look at the three.js implementation, see the specification’s sample code, or read this tutorial by Oculus.
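To give a rough idea of what the raw API looks like, here is a minimal sketch. It is not taken from any of the engines above; the sizes and variable names are made up, and error handling is omitted.

// Minimal sketch of using OVR_multiview2 directly; names and sizes are hypothetical.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl2', { antialias: false }); // WebGL2, no antialiasing
const ext = gl && gl.getExtension('OVR_multiview2');

if (ext) {
  const width = 1280, height = 720, views = 2;

  // One layer of a texture array per view (eye).
  const colorTex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, colorTex);
  gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, width, height, views);

  // Attach both layers to the framebuffer as multiview targets.
  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fbo);
  ext.framebufferTextureMultiviewOVR(
    gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, colorTex, 0, 0, views);

  // From here, bind a program whose vertex shader declares num_views = 2,
  // upload the per-view matrices, and issue a single draw call per object.
}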

Browsers with multiview support

The extension was approved by the community only recently, so we expect to see all the major browsers adding support for it by default soon:

  • Firefox Desktop: Firefox 71 will support multiview enabled by default. In the meantime, you can test it on Firefox Nightly by enabling draft extensions.
  • Firefox Reality: It’s already enabled by default since version 1.3.
  • Oculus Browser: It’s implemented but disabled by default, you must enable Draft WebGL Extension preference in order to use it.
  • Chrome: You can use it on Chrome Canary for Windows by running it with the following command line parameters: --use-cmd-decoder=passthrough --enable-webgl-draft-extensions

Performance improvements

Most WebGL or WebXR applications are CPU bound: the more objects you have in the scene, the more draw calls you will submit to the GPU. In our benchmarks for stereo rendering with two views, we got a consistent improvement of ~40% compared to traditional rendering.
As you can see on the following chart, the more cubes (draw calls) you have to render, the bigger the benefit of multiview.

What’s next?

The main drawback when using the current multiview extension is that there is no way to render to a multisampling backbuffer. In order to use it with WebXR you should set antialias: false when creating the context. However this is something the WebGL WG is working on.

As soon as they come up with a proposal and it is implemented by browsers, 3D engines should support it automatically. Hopefully, we will see new extensions arriving in the WebGL and WebXR ecosystem to improve the performance and quality of rendering, such as the ones exposed by Nvidia VRWorks (e.g. Variable Rate Shading and Lens Matched Shading).


* Header image by Nvidia VRWorks

Mozilla VR Blog: Firefox Reality 1.4


Firefox Reality 1.4 is now available for users in the Viveport and Oculus stores.

With this release, we’re excited to announce that users can enjoy browsing in multiple windows side-by-side. Each window can be set to the size and position of your choice, for a super customizable experience.


And, by popular demand, we’ve enabled local browsing history, so you can get back to sites you've visited before without typing. Sites in your history will also appear as you type in the search bar, so you can complete the address quickly and easily. You can clear your history or turn it off anytime from within Settings.

The Content Feed also has a new and improved menu of hand-curated “Best of WebVR” content for you to explore. You can look forward to monthly updates featuring a selection of new content across different categories including Animation, Extreme (sports/adrenaline/adventure), Music, Art & Experimental and our personal favorite way to wind down a day, 360 Chill.

Additional highlights

  • Movable keyboard, so you can place it where it’s most comfortable to type.
  • Tooltips on buttons and actions throughout the app.
  • Updated look and feel for the Bookmarks and History views so you can see and interact better at all window sizes.
  • An easy way to request the desktop version of a site that doesn’t display well in VR, right from the search bar.
  • Updated and reorganized settings to be easier to find and understand.
  • Added the ability to set a preferred website language order.

Full release notes can be found in our GitHub repo here.

Stay tuned as we keep improving Firefox Reality! We’re currently working on integrating your Firefox Account so you’ll be able to easily send tabs to and from VR from other devices. New languages and copy/paste are also coming soon, in addition to continued improvements in performance and stability.

Firefox Reality is available right now. Go and get it!
Download for Oculus Go
Download for Oculus Quest
Download for Viveport (Search for Firefox Reality in Viveport store)

Mike Hoye: Duty Of Care

A colleague asked me what I thought of this Medium article by Owen Bennett on the application of the UK’s Duty of Care laws to software. I’d had… quite a bit of coffee at that point, and this (lightly edited) was my answer:

I think the point Bennett makes early about the shortcomings of analogy is an important one, that however critical analogy is as a conceptual bridge it is not a valuable endpoint. To some extent analogies are all we have when something is new; this is true ever since the first person who saw fire had to explain to somebody else, it warms like the sun but it is not the sun, it stings like a spear but it is not a spear, it eats like an animal but it is not an animal. But after we have seen fire, once we know fire we can say, we can cage it like an animal, like so, we can warm ourselves by it like the sun like so. “Analogy” moves from conceptual, where it is only temporarily useful, to functional and structural where the utility endures.

I keep coming back to something Bryan Cantrill said in the beginning of an old DTrace talk (even before he gets into the dtrace implementation details, the first 10 minutes or so of this talk are amazing) – that analogies between software and literally everything else eventually break down. Is software an idea, or is it a machine? It’s both. Unlike almost everything else.

(Great line from that talk – “Does it bother you that none of this actually exists?”)

But: The UK has some concepts that really do have critical roles as functional- and structural-analogy endpoints for this transition. What is your duty of care here as a developer, and an organization? Is this software fit for purpose?

Given the enormous potential reach of software, those concepts absolutely do need to survive as analogies that are meaningful and enforceable in software-afflicted outcomes, even if the actual text of (the inevitable) regulation of software needs to recognize software as being its own, separate thing, that in the wrong context can be more dangerous than unconstrained fire.

With that in mind, and particularly bearing in mind that the other places the broad “duty of care” analogy extends go well beyond beyond immediate action, and covers stuff like industrial standards, food safety, water quality and the million other things that make modern society work at all, I think Bennett’s argument that “Unlike the situation for ‘offline’ spaces subject to a duty of care, it is rarely the case that the operator’s act or omission is the direct cause of harm accruing to a user — harm is almost always grounded in another user’s actions” is incorrectly omitting an enormous swath of industrial standards and societal norms that have already made the functional analogy leap so effectively as to be presently invisible.

Put differently, when Toyota recalls hundreds of thousands of cars for potential defects in which exactly zero people were harmed, we consider that responsible stewardship of their product. And when the people working at Uber straight up murder a person with an autonomous vehicle, they’re allowed to say “but software”. Because much of software as an industry, I think, has been pushing relentlessly against the notion that the industry and people in it can or should be held accountable for the consequences of their actions, which is another way of saying that we don’t have and desperately need a clear sense of what a “duty of care” means in the software context.

I think that the concluding paragraph – “To do so would twist the law of negligence in a wholly new direction; an extremely risky endeavour given the context and precedent-dependent nature of negligence and the fact that the ‘harms’ under consideration are so qualitatively different than those subject to ‘traditional’ duties.” – reflects a deep genuflection to present day conceptual structures, and their specific manifestations as text-on-the-page-today, that is (I suppose inevitably, in the presence of this Very New Thing) profoundly at odds with the larger – and far more noble than this article admits – social and societal goals of those structures.

But maybe that’s just a superficial reading; I’ll read it over a few times and give it some more thought.

Mozilla Reps CommunityRep of the Month – July 2019

Please join us in congratulating Bhuvana Meenakshi Koteeswaran, Rep of the Month for July 2019!

Bhuvana is from Salem, India. She joined the Reps program at the end of 2017 and since then she has been involved with Virtual and Augmented Reality projects.


Bhuvana has recently held talks about WebXR at FOSSCon India and BangPypers. In October she will be a Space Wrangler at the Mozilla Festival in London.

Congratulations and keep rocking the open web! :tada:

Daniel Stenbergcurl 7.66.0 – the parallel HTTP/3 future is here

I personally have not done this many commits to curl in a single month (August 2019) for over three years. This increased activity is of course primarily due to the merge of and work with the HTTP/3 code. And yet, that is still only in its infancy…

Download curl here.


the 185th release
6 changes
54 days (total: 7,845)

81 bug fixes (total: 5,347)
214 commits (total: 24,719)
1 new public libcurl function (total: 81)
1 new curl_easy_setopt() option (total: 269)

4 new curl command line options (total: 225)
46 contributors, 23 new (total: 2,014)
29 authors, 14 new (total: 718)
2 security fixes (total: 92)
450 USD paid in Bug Bounties

Two security advisories

TFTP small blocksize heap buffer overflow

(CVE-2019-5482) If you told curl to do TFTP transfers using a smaller than default “blocksize” (default being 512), curl could overflow a heap buffer used for the protocol exchange. Rewarded 250 USD from the curl bug bounty.

FTP-KRB double-free

(CVE-2019-5481) If you used FTP-kerberos with curl and the server maliciously or mistakenly responded with an overly large encrypted block, curl could end up doing a double-free in that exit path. This would happen in applications where allocating a large 32-bit max value (up to 4GB) is a problem. Rewarded 200 USD from the curl bug bounty.


The new features in 7.66.0 are…


HTTP/3

This experimental feature is disabled by default but can be enabled and works (by some definition of “works”). Daniel went through “HTTP/3 in curl” in this video from a few weeks ago:

Parallel transfers

You can now do parallel transfers with the curl tool’s new -Z / --parallel option. This is a huge change that could reshape a lot of use cases going forward!


Retry-After

There’s a standard HTTP header that some servers return when they can’t or won’t respond right now, indicating after how many seconds, or at what point in the future, the request might be fulfilled. libcurl can now return that number easily, and curl’s --retry option makes use of it (if present).


curl_multi_poll

curl_multi_poll is a new function that is very similar to curl_multi_wait, but with one major benefit: it solves the problem of what an application should do on the occasions when libcurl has no file descriptor at all to wait for. That has been a long-standing and perhaps far too little-known issue.

SASL authzid

When using SASL authentication, curl and libcurl now can provide the authzid field as well!


Some interesting bug-fixes included in this release…

.netrc and .curlrc on Windows

Starting now, curl and libcurl will check for and use the dot-prefixed versions of these files even on Windows, and only fall back to the underscore-prefixed versions for compatibility if the dotted one doesn’t exist. This unifies curl’s behavior across platforms.

asyn-thread: create a socketpair to wait on

With this perhaps innocuous-sounding change, libcurl on Linux and other Unix systems will now provide a file descriptor for the application to wait on while name resolving in a background thread. This lets applications know better when to call libcurl again and avoids having to just blindly wait and retry. A performance gain.

Credentials in URL when using HTTP proxy

We found and fixed a regression that made curl not use credentials properly from the URL when doing multi stage authentication (like HTTP Digest) with a proxy.

Move code into vssh for SSH backends

A mostly janitor-style fix that abstracts away more of the SSH-using code so that it no longer needs to know which particular SSH backend is being used, while at the same time making it easier to write and provide new SSH backends in the future. I’m personally working (a little slowly) on one, to be talked about at a later point.

Disable HTTP/0.9 by default

If you want libcurl to accept and deliver HTTP/0.9 responses to your application, you need to tell it to do that. Starting in this version, curl will consider those invalid HTTP responses by default.

alt-svc improvements

We introduced alt-svc support a while ago but as it is marked experimental and nobody felt a strong need to use it, it clearly hasn’t been used or tested much in real life. When we’ve worked on using alt-svc to bootstrap into HTTP/3 we found and fixed a whole range of little issues with the alt-svc support and it is now in a much better shape. However, it is still marked experimental.

IPv6 addresses in URLs

It was reported that the URL parser would accept malformatted IPv6 addresses that subsequently and counter-intuitively would get resolved as a host name internally! An example URL would be “https://[]/” – where all the letters and symbols within the brackets are individually allowed components of an IPv6 numerical address, but the whole thing still isn’t valid IPv6 syntax and is instead a legitimate and valid host name.

Going forward!

We recently ran a poll among users about what they feel are the more important things to work on, and with that the rough roadmap has been updated. Those are the things I want to work on next, but of course I won’t guarantee anything, and I will greatly appreciate all help and assistance that I can get. And sure, we can and will work on other things too!

Niko MatsakisAiC: Shepherds 3.0

I would like to describe an idea that’s been kicking around in my head. I’m calling this idea “shepherds 3.0” – the 3.0 is to distinguish it from the other places we’ve used the term in the past. This proposal actually supplants both of the previous uses of the term, replacing them with what I believe to be a preferred alternative (more on that later).


This is an idea that has been kicking around in my head for a while. It is not a polished plan and certainly not an accepted one. I’ve not talked it over with the rest of the lang team, for example. However, I wanted to put it out there for discussion, and I do think we should be taking some step in this direction soon-ish.


What I’m proposing, at its heart, is very simple. I want to better document the “agenda” of the lang-team. Specifically, if we are going to be moving a feature forward1, then it should have a shepherd (or multiple) who is in charge of doing that.

In order to avoid unbounded queues, the number of things that any individual can shepherd should be limited. Ideally, each person should only shepherd one thing at a time, though I don’t think we need to make a firm rule about it.

Becoming a shepherd is a commitment on the part of the shepherd. The first part of the lang team meeting should be to review the items that are being actively shepherded and get any updates. If we haven’t seen any movement in a while, we should consider changing the shepherd, or officially acknowledging that something is stalled and removing the shepherd altogether.

Assigning a shepherd is a commitment on the part of the rest of the lang-team as well. Before assigning a shepherd, we should discuss if this agenda item is a priority. In particular, if someone is shepherding something, that means we all agree to help that item move towards some kind of completion. This means giving feedback, when feedback is requested. It means doing the work to resolve concerns and conflicts. And, sometimes, it will mean giving way. I’ll talk more about this in a bit.

What was shepherds 1.0 and how is this different?

The initial use of the term shepherd, as I remember it, was actually quite close to the way I am using it here. The idea was that we would assign RFCs to a shepherd that should either drive to be accepted or to be closed. This policy was, by and large, a failure – RFCs got assigned, but people didn’t put in the time. (To be clear, sometimes they did, and in those cases the system worked reasonably well.)

My proposal here differs in a few key respects that I hope will make it more successful:

  • We limit how many things you can shepherd at once.
  • Assigning a shepherd is also a commitment from the lang team as a whole to review progress, resolve conflicts, and devote some time to the issue.
  • We don’t try to shepherd everything – in contrast, shepherding marks the things we are moving forward.
  • The shepherd role is not specific to an RFC; it refers to all kinds of “larger decisions”. For example, stabilization would be a shepherd activity as well.

What was shepherds 2.0 and how is this different?

We’ve also used the term shepherd to refer to a role that is moving towards full lang team membership. That’s different from this proposal in that it is not tied to a specific topic area. But there is also some interaction – for example, it’s not clear that shepherds need to be active lang team members.

I think it’d be great to allow shepherds to be any person who is sufficiently committed to help see something through. The main requirement for a shepherd should be that they are able to give us regular updates on the progress. Ideally, this would be done by attending the lang team meeting. But that doesn’t work for everyone – whether because of time zones, scheduling, or language barriers – and so I think that any form of regular, asynchronous report would work just fine.

I think I would prefer for this proposal – and this kind of “role-specific shepherding” – to entirely replace the “provisional member” role on the lang team. It seems strictly better to me. Among other things, it’s naturally time-limited. Once the work item completes, that gives us a chance to decide whether it makes sense for someone to become a full member of the lang team, or perhaps try shepherding another idea, or perhaps just part ways. I expect there are a lot of people who have interest in working through a specific feature but for whom there is little desire to be long-term members of the lang team.

How do I get a shepherd assigned to my work item?

Ultimately, I think this question is ill-posed: there is no way to “get” a shepherd assigned to your work. Having the expectation that a shepherd will be assigned runs smack into the problems of unbounded queues and was, I think, a crucial flaw in the Shepherds 1.0 system.

Basically, the way a shepherd gets assigned in this scheme is roughly the same as the way things “get done” today. You convince someone in the lang team that the item is a priority, and they become the shepherd. That convincing takes place through the existing channels: nominated issues, discord or zulip, etc. It’s not that I don’t think this is something else we should be thinking about, it’s just that it’s something of an orthogonal problem.

My model is that shepherds are how we quantify and manage the things we are doing. The question of “what happens to all the existing things” is more a question of how we select which things to do – and that’s ultimately a priority call.

OK, so, what happens to all the existing things?

That’s a very good question. And one I don’t intend to answer here, at least not in full. That said, I do think this is an important problem that we should think about. I would like to be exposing more “insight” into our overall priorities.

In my ideal world, we’d have a list of projects that we are not working on, grouped somewhat by how likely we are to work on them in the future. This might then indicate ideas that we do not want to pursue; ideas that we have mild interest in but which have a lot of unknowns. Ideas that we started working on but got blocked at some point (hopefully with a report of what’s blocking them). And so forth. But that’s all a topic for another post.

One other idea that I like is documenting on the website the “areas of interest” for each of the lang team members (and possibly other folks) who might be willing shepherds. This would help people figure out who to reach out to.

Isn’t there anything I can do to help move Topic X along?

This proposal does offer one additional option that hadn’t formally existed before. If you want to see something happen, you can offer to shepherd it yourself – or in conjunction with a member of the lang team. You could do this by pinging folks on discord, attending a lang team meeting, or nominating an issue to bring it to the lang team’s attention.

How many active shepherds can we have then?

It is important to emphasize that having a willing shepherd is not necessarily enough to unblock a project. This is because, as I noted above, assigning a shepherd is also a commitment on the part of the lang-team – a commitment to review progress, resolve conflicts, and keep up with things. That puts a kind of informal cap on how many active things can be occurring, even if there are shepherds to spare. This is particularly true for subtle things. This cap is probably somewhat fundamental – even increasing the size of the lang team wouldn’t necessarily change it that much.

I don’t know how many shepherds we should have at a time, I think we’ll have to work that out by experience, but I do think we should be starting small, with a handful of items at a time. I’d much rather we are consistently making progress on a few things than spreading ourselves too thin.

Expectations for a shepherd

I think the expectations for a shepherd are as follows.

First, they should prepare updates for the lang team meeting on a weekly basis. These don’t have to be long, detailed write-ups – even a “no update” suffices.

Second, when a design concern or conflict arises, they should help to see it resolved. This means a few things. First and foremost, they have to work to understand and document the considerations at play, and be prepared to summarize those. (Note: they don’t necessarily have to do all this work themselves! I would like to see us making more use of collaborative summary documents, which allow us to share the work of documenting concerns.)

They should also work to help resolve the conflict, possibly by scheduling one-off meetings or through other means. I won’t go into too much detail here because I think looking into how best to resolve design conflicts is worthy of a separate post.

Finally, while this is not a firm expectation, it is expected that shepherds will become experts in their area, and would thus be able to give useful advice about similar topics in the future (even if they are not actively shepherding that area anymore).

Expectations from the lang team

I want to emphasize this part of things. I think the lang team suffers from the problem of doing too many things at once. Part of agreeing that someone should shepherd topic X, I think, is agreeing that we should be making progress on topic X.

This implies that the team agrees to follow along with the status updates and give moderate amounts of feedback when requested.

Of course, as the design progresses, it is natural that lang team members will have concerns about various aspects. Just as today, we operate on a consensus basis, so resolving those concerns is needed to make progress. When an item has an active shepherd, though, that means it is a priority, and this implies that lang team members with blocking concerns should make time to work with the shepherd and get them resolved. (And, as is always the case, this may mean accepting an outcome that you don’t personally agree with, if the rest of the team is leaning the other way.)


So, that’s it! In the end, the specifics of what I propose are the following:

  • We’ll post on the lang team repository the list of active shepherds and their assigned areas.
  • In order for a formal decision to be made (e.g., stabilization proposal accepted, RFC accepted, etc), a shepherd must be assigned.
    • This happens at the lang team meeting. We should prepare a list of factors to take into account when making this decision, but one of the key ones is whether we agree as a team that this is something that is high enough priority that we can devote the required energy to seeing it progress.
  • Shepherds will keep the lang-team updated on major developments and help to resolve conflicts that arise, with the cooperation of the lang-team, as described above.
    • If a shepherd seems inactive for a long time, we’ll discuss if that’s a problem.


  1. I could not find any single page that documents Rust’s feature process from beginning to end. Seems like something we should fix. But what I mean by moving a feature forward is basically things like “accepting an RFC” or “stabilizing an unstable feature” – basically the formal decisions governed by the lang team. 

Mozilla VR BlogWebXR emulator extension


We are happy to announce the release of our WebXR emulator browser extension which helps WebXR content creation.

We understand that developing and debugging WebXR experiences is hard for many reasons:

  • You must own a physical XR device
  • Lack of support for XR devices on some platforms, such as macOS
  • Putting on and taking off the headset all the time is an uncomfortable task
  • In order to make your app responsive across form factors, you must own tons of devices: mobile, tethered, 3dof, 6dof, and so on

With this extension, we aim to soften most of these issues.

WebXR emulator extension emulates XR devices so that you can directly enter immersive (VR) mode from your desktop browser and test your WebXR application without the need for any XR devices. It emulates multiple XR devices, so you can select which one you want to test.

The extension is built on top of the WebExtensions API, so it works on Firefox, Chrome, and other browsers supporting the API.


How can I use it?

  1. Install the extension from the extension stores (Firefox, Chrome)
  2. Launch a WebXR application, for example the Three.js examples. You will notice that the application detects that you have a VR device (emulated) and it will let you enter the immersive (VR) mode (a typical detection call is sketched after this list).
  3. Open the “WebXR” tab in the browser’s developer tool (Firefox, Chrome) to control the emulated device. You can move the headset and controllers and trigger the controller buttons. You will see their transforms reflected in the WebXR application.
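The detection in step 2 typically amounts to the page asking the (now emulated) WebXR API whether an immersive session is supported. Here is a rough sketch; method names have shifted between WebXR spec drafts, so treat this as an approximation of the call rather than the extension's exact surface.

// Rough sketch: how a WebXR page detects a (possibly emulated) VR device.
async function checkForVRDevice() {
  const xr = (navigator as any).xr;  // provided by the browser, or here by the emulator extension
  if (!xr) {
    console.log("WebXR is not available here");
    return;
  }
  if (await xr.isSessionSupported("immersive-vr")) {
    // The page can now show an "Enter VR" button; xr.requestSession("immersive-vr")
    // must be called from a user gesture such as that button's click handler.
    console.log("Immersive VR is available, show the Enter VR button");
  }
}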

What’s next?

The development of this extension is still at an early stage. We have many awesome features planned, including:

  • Recording and replaying of actions and movements of your XR devices so you don’t have to replicate them every time you want to test your app and can share them with others.
  • Incorporate new XR devices
  • Control the headset and controllers using a standard gamepad like the Xbox or PS4 controllers or use your mobile as 3dof device
  • Something else?

We would love your feedback! What new features do you want next? Any problems with the extension on your WebXR application? Please join us on GitHub to discuss them.

Lastly, we would like to give a shout out to the WebVR API emulation Extension by Jaume Sanchez as it was a true inspiration for us when building this one.

The Firefox FrontierUnderstand how hackers work

Forget about those hackers in movies trying to crack the code on someone’s computer to get their top secret files. The hackers responsible for data breaches usually start by targeting … Read more

The post Understand how hackers work appeared first on The Firefox Frontier.

Mozilla Open Policy & Advocacy BlogCASE Act Threatens User Rights in the United States

This week, the House Judiciary Committee is expected to mark up the Copyright Alternative in Small Claims Enforcement (CASE) Act of 2019 (H.R. 2426). While the bill is designed to streamline the litigation process, it will impose severe costs upon users and the broader internet ecosystem. More specifically, the legislation would create a new administrative tribunal for claims with limited legal recourse for users, incentivizing copyright trolling and violating constitutional principles. Mozilla has always worked for copyright reform that supports businesses and internet users, and we believe that the CASE Act will stunt innovation and chill free expression online. With this in mind, we urge members to oppose passage of H.R. 2426.

First, the tribunal created by the legislation conflicts with well-established separation of powers principles and limits due process for potential defendants. Under the CASE Act, a new administrative board would be created within the Copyright Office to review claims of infringement. However, as Professor Pamela Samuelson and Kathryn Hashimoto of Berkeley Law point out, it is not clear that Congress has the authority under Article I of the Constitution to create this tribunal. Although Congress can create tribunals that adjudicate “public rights” matters between the government and others, the creation of a board to decide infringement disputes between two private parties would represent an overextension of its authority into an area traditionally governed by independent Article III courts.

Moreover, defendants subject to claims under the CASE Act will be funneled into this process with strictly limited avenues for appeal. The legislation establishes the tribunal as a default legal process for infringement claims–defendants will be forced into the process unless they explicitly opt-out. This implicitly places the burden on the user, and creates a more coercive model that will disadvantage defendants who are unfamiliar with the nuances of this new legal system. And if users have objections to the decision issued by the tribunal, the legislation severely restricts access to justice by limiting substantive court appeals to cases in which the board exceeded its authority; failed to render a final determination; or issued a determination as a result of fraud, corruption, or other misconduct.

While the board is supposed to be reserved for small claims, the tribunal is authorized to award damages of up to $30,000 per proceeding. For many people, this supposedly “small” amount would be enough to completely wipe out their household savings. Since the forum allows for statutory damages to be imposed, the plaintiff does not even have to show any actual harm before imposing potentially ruinous costs on the defendant.

These damages awards are completely out of place in what is being touted as a small claims tribunal. As Stan Adams of the Center for Democracy and Technology notes, awards as high as $30,000 exceed the maximum awards for small claims courts in 49 out of 50 states. In some cases, they would be ten times higher than the damages available in small claims court.

The bill also authorizes the Register of Copyrights to unilaterally establish a forum for claims of up to $5,000 to be decided by a singular Copyright Claims Officer, without any pre-established explicit due process protections for users. These amounts may seem negligible in the context of a copyright suit, where damages can reach up to $150,000, but nearly 40 percent of Americans cannot cover a $400 emergency today.

Finally, the CASE Act will give copyright trolls a favorable forum. In recent years, some unscrupulous actors made a business of threatening thousands of Internet users with copyright infringement suits. These suits are often based on flimsy, but potentially embarrassing, allegations of infringement of pornographic works. Courts have helped limit the worst impact of these campaigns by making sure the copyright owner presented evidence of a viable case before issuing subpoenas to identify Internet users. But the CASE Act will allow the Copyright Office to issue subpoenas with little to no process, potentially creating a cheap and easy way for copyright trolls to identify targets.

Ultimately, the CASE Act will create new problems for internet users and exacerbate existing challenges in the legal system. For these reasons, we ask members to oppose H.R. 2426.

The post CASE Act Threatens User Rights in the United States appeared first on Open Policy & Advocacy.

The Mozilla BlogFirefox’s Test Pilot Program Returns with Firefox Private Network Beta

Like a cat, the Test Pilot program has had many lives. It originally started as an Add-on before we relaunched it three years ago. Then in January, we announced that we were evolving our culture of experimentation, and as a result we closed the Test Pilot program to give us time to further explore what was next.

We learned a lot from the Test Pilot program. First, we had a loyal group of users who provided us feedback on projects that weren’t polished or ready for general consumption. Based on that input we refined and revamped various features and services, and in some cases shelved projects altogether because they didn’t meet the needs of our users. The feedback we received helped us evaluate a variety of potential Firefox features, some of which are in the Firefox browser today.

If you haven’t heard, third time’s the charm. We’re turning to our loyal and faithful users, specifically the ones who signed up for a Firefox account and opted in to hear about new product testing, and are giving them first crack at test-driving new, privacy-centric products as part of the relaunched Test Pilot program. The difference with the newly relaunched Test Pilot program is that these products and services may be outside the Firefox browser, will be far more polished, and will be just one step shy of general public release.

We’ve already earmarked a couple of new products that we plan to fine-tune before their official release as part of the relaunched Test Pilot program. Because of how much we learned from our users through the Test Pilot program, and our ongoing commitment to build our products and services to meet people’s online needs, we’re kicking off our relaunch of the Test Pilot program by beta testing our project code named Firefox Private Network.

Try our first beta – Firefox Private Network

One of the key learnings from recent events is that there is growing demand for privacy features. The Firefox Private Network is an extension which provides a secure, encrypted path to the web to protect your connection and your personal information anywhere and everywhere you use your Firefox browser.

There are many ways that your personal information and data are exposed: online threats are everywhere, whether it’s through phishing emails or data breaches. You may often find yourself taking advantage of the free WiFi at the doctor’s office, airport or a cafe. There can be dozens of people using the same network — casually checking the web and getting social media updates. This leaves your personal information vulnerable to those who may be lurking, waiting to take advantage of this situation to gain access to your personal info. Using the Firefox Private Network helps protect you from hackers lurking in plain sight on public connections.

Start testing the Firefox Private Network today; it’s currently available in the US on the Firefox desktop browser. A Firefox account allows you to be one of the first to test potential new products and services, and you can sign up directly from the extension.


Key features of Firefox Private Network are:

  • Protection when in public WiFi access points – Whether you are waiting at your doctor’s office, the airport or working from your favorite coffee shop, your connection to the internet is protected when you use the Firefox browser thanks to a secure tunnel to the web, protecting all your sensitive information like the web addresses you visit, personal and financial information.
  • Internet Protocol (IP) addresses are hidden so it’s harder to track you – Your IP address is like a home address for your computer. One of the reasons why you may want to keep it hidden is to keep advertising networks from tracking your browsing history. Firefox Private Network will mask your IP address providing protection from third party trackers around the web.
  • Toggle the switch on at any time – By clicking the browser extension, you will find an on/off toggle that shows whether you are currently protected. You can turn it on at any time if you’d like additional privacy protection, or off if it’s not needed at that moment.

Your feedback on Firefox Private Network beta is important

Over the next several months you will see a number of variations on our testing of the Firefox Private Network. This iterative process will give us much-needed feedback to explore technical and possible pricing options for the different online needs that the Firefox Private Network meets.

Your feedback will be essential in making sure that we offer a full complement of services that address the problems you face online with the right-priced service solutions. We depend on your feedback and we will send a survey to follow up. We hope you can spend a few minutes to complete it and let us know what you think. Please note this first Firefox Private Network Beta test will only be available to start in the United States for Firefox account holders using desktop devices. We’ll keep you updated on our eventual beta test roll-outs in other locales and platforms.

Sign up now for a Firefox account and join the fight to keep the internet open and accessible to all.

The post Firefox’s Test Pilot Program Returns with Firefox Private Network Beta appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 303

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is viu, a terminal image viewer.

Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

303 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

The Rust compiler is basically 30 years of trying to figure out how to teach a computer how to see the things we worry about as C developers.

James Munns (@bitshiftmask) on Twitter

Thanks to llogiq for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Hacks.Mozilla.OrgCaniuse and MDN compatibility data collaboration

Web developers spend a good amount of time making web compatibility decisions. Deciding whether or not to use a web platform feature often depends on its availability in web browsers.

A brief history of compatibility data

More than 10 years ago, @fyrd created the caniuse project, to help developers check feature availability across browsers. Over time, caniuse has evolved into the go-to resource to answer the question that comes up day to day: “Can I use this?”

About 2 years ago, the MDN team started re-doing its browser compatibility tables. The team was on a mission to take the guesswork out of web compatibility. Since then, the BCD project has become a large dataset with more than 10,000 data points. It stays up to date with the help of over 500 contributors on GitHub.

MDN compatibility data is available as open data on npm and has been integrated in a variety of projects including VS Code and auditing.
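For example (treating the package name and data shape, current as of 2019, as assumptions that may have changed since), a quick script can pull Firefox support information straight out of that npm dataset:

// Illustrative only: look up Firefox support for the CSS `grid` property
// using the mdn-browser-compat-data npm package (2019-era name and layout assumed).
const bcd = require("mdn-browser-compat-data"); // Node-style require

const grid = bcd.css.properties.grid.__compat;
console.log(grid.support.firefox.version_added); // first Firefox version with support
console.log(grid.mdn_url);                       // link back to the MDN reference page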

Two great data sources come together

Today we’re announcing the integration of MDN’s compat data into the caniuse website. Together, we’re bringing even more web compatibility information into the hands of web developers.

Caniuse table for Intl.RelativeTimeFormat. Data imported from mdn-compat-data.

Before we began our collaboration, the caniuse website only displayed results for features available in the caniuse database. Now all search results can include support tables for MDN compat data. This includes data types already found on caniuse, specifically the HTML, CSS, JavaScript, Web API, SVG, and HTTP categories. By adding MDN data, the caniuse support table count expands from roughly 500 to 10,500 tables! Developers’ caniuse queries on what’s supported where will now have significantly more results.

The new feature tables will look a little different. Because the MDN compat data project and caniuse have compatible yet somewhat different goals, the implementation is a little different too. While the new MDN-based tables don’t have matching fields for all the available metadata (such as links to resources and a full feature description), support notes and details such as bug information, prefixes, feature flags, etc. will be included.

The MDN compatibility data itself is converted under the hood to the same format used in caniuse compat tables. Thus, users can filter and arrange MDN-based data tables in the same way as any other caniuse table. This includes access to browser usage information, either by region or imported through Google Analytics to help you decide when a feature has enough support for your users. And the different view modes available via both datasets help visualize support information.

Differences in the datasets

We’ve been asked why the datasets are treated differently. Why didn’t we merge them in the first place? We discussed and considered this option. However, due to the intrinsic differences between our two projects, we decided not to. Here’s why:

MDN’s support data is very broad and covers feature support at a very granular level. This allows MDN to provide as much detailed information as possible across all web technologies, supplementing the reference information provided by MDN Web Docs.

Caniuse, on the other hand, often looks at larger features as a whole (e.g. CSS Grid, WebGL, specific file format support). The caniuse approach provides developers with higher level at-a-glance information on whether the feature’s supported. Sometimes detail is missing. Each individual feature is added manually to caniuse, with a primary focus on browser support coverage rather than on feature coverage overall.

Because of these and other differences in implementation, we don’t plan on merging the source data repositories or matching the data schema at this time. Instead, the integration works by matching the search query to the feature’s description. Then, caniuse generates an appropriate feature table and converts MDN support data to the caniuse format on the fly.

What’s next

We encourage community members of both repos, caniuse and mdn-compat-data, to work together to improve the underlying data. By sharing information and collaborating wherever possible, we can help web developers find answers to compatibility questions.

The post Caniuse and MDN compatibility data collaboration appeared first on Mozilla Hacks - the Web developer blog.

Wladimir PalantState of the art protection in Chrome Web Store

All of you certainly know already that Google is guarding its Chrome Web Store vigilantly and making sure that no bad apples get in. So when you hit “Report abuse” your report will certainly be read carefully by another human being and acted upon ASAP. Well, eventually… maybe… when it hits the news. If it doesn’t, then it probably wasn’t important anyway and these extensions might stay up despite being taken down by Mozilla three months ago.

Canons protecting an old fort
Image by Sheba_Also

As to your legitimate extensions, these will be occasionally taken down as collateral damage in this fierce fight. Like my extension which was taken down due to missing a screenshot because of not having any user interface whatsoever. It’s not possible to give an advance warning either, like asking the developer to upload a screenshot within a week. This kind of lax policies would only encourage the bad guys to upload more malicious extensions without screenshots of course.

And the short downtime of a few weeks and a few hours of developer time spent trying to find anybody capable of fixing the problem are surely a small price to pay for a legitimate extension in order to defend the privilege of staying in the exclusive club of Chrome extension developers. So I am actually proud that this time my other browser extension, PfP: Pain-free Passwords, was taken down by Google in its relentless fight against the bad actors.

Here is the email I’ve got:

Garbled text of Google's mail

Hard to read? That might be due to the fact that this plain text email was sent as text/html. A completely understandable mistake given how busy all Google employees are. We only need to copy the link to the policy here and we’ll get this in a nicely formatted document.

Policy requiring privacy policy to be added to the designated field

So there we go. All I need to do is to write a privacy policy document for the extension which isn’t collecting any data whatsoever, then link it from the appropriate field. Could it be so easy? Of course not, the bad guys would be able to figure it out as well otherwise. Very clever of Google not to specify which one the “designated field” is. I mean, if you publish extensions on Mozilla Add-ons, there is literally a field saying “Privacy Policy” there. But in Chrome Web Store you only get Title, Summary, Detailed Description, Category, Official Url, Homepage Url, Support Url, Google Analytics ID.

See what Google is doing here? There is really only one place where the bad guys could add their privacy policy, namely that crappy unformatted “Detailed Description” field. Since it’s so unreadable, users ignore it anyway, so they will just assume that the extension has no privacy policy and won’t trust it with any data. And as an additional bonus, “Detailed Description” isn’t the designated field for privacy policy, which gives Google a good reason to take bad guys’ extensions down at any time. Brilliant, isn’t it?

In the meantime, PfP takes a vacation from Chrome Web Store. I’ll let you know how this situation develops.

Update (2019-09-10): As commenter drawdrove points out, the field for the privacy policy actually exists. Instead of placing it under extension settings, Google put it in the overall developer settings. So all of the developer’s extensions share the same privacy policy, no matter how different. Genius!

PfP is now back in Chrome Web Store. But will the bad guys also manage to figure it out?

IRL (podcast)Privacy or Profit - Why Not Both?

Every day, our data hits the market when we sign online. It’s for sale, and we’re left to wonder if tech companies will ever choose to protect our privacy rather than reap large profits with our information. But, is the choice — profit or privacy — a false dilemma? Meet the people who have built profitable tech businesses while also respecting your privacy. Fact check if Facebook and Google have really found religion in privacy. And, imagine a world where you could actually get paid to share your data.

In this episode, Oli Frost recalls what happened when he auctioned his personal data on eBay. Jeremy Tillman from Ghostery reveals the scope of how much ad-tracking is really taking place online. Patrick Jackson at breaks down Big Tech’s privacy pivot. DuckDuckGo’s Gabriel Weinberg explains why his private search engine has been profitable. And Dana Budzyn walks us through how her company, UBDI, hopes to give consumers the ability to sell their data for cash.

IRL is an original podcast from Firefox. For more on the series, go to

The IRL production team would love your feedback. Take this short 2-minute survey.

Read about Patrick Jackson and Geoffrey Fowler's privacy experiment.

Learn more about DuckDuckGo, an alternative to Google search, at

And, we're pleased to add a little more about Firefox's business here as well — one that puts user privacy first and is also profitable. Mozilla was founded as a community open source project in 1998, and currently consists of two organizations: the 501(c)3 Mozilla Foundation, which backs emerging leaders and mobilizes citizens to create a global movement for the health of the internet; and its wholly owned subsidiary, the Mozilla Corporation, which creates Firefox products, advances public policy in support of internet user rights and explores new technologies that give people more control and privacy in their lives online. Firefox products have never — and never will — buy or sell user data. Because of its unique structure, Mozilla stands apart from its peers in the technology field as one of the most impactful and successful social enterprises in the world. Learn more about Mozilla and Firefox at

Henri SivonenIt’s Not Wrong that "🤦🏼‍♂️".length == 7

From time to time, someone shows that in JavaScript the .length of a string containing an emoji results in a number greater than 1 (typically 2) and then proceeds to the conclusion that haha JavaScript is so broken. In this post, I will try to convince you that ridiculing JavaScript for this is less insightful than it first appears.
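For the curious, here is where the 7 comes from: .length counts UTF-16 code units, and this single visible glyph is an emoji ZWJ sequence of five code points, two of which need surrogate pairs.

// The facepalm emoji with a skin-tone modifier, spelled out as its code points:
const s = "\u{1F926}\u{1F3FC}\u{200D}\u{2642}\u{FE0F}"; // 🤦🏼‍♂️

console.log(s.length);      // 7: UTF-16 code units (U+1F926 and U+1F3FC each take a surrogate pair)
console.log([...s].length); // 5: Unicode code points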

Mike HoyeForward Motion


This has been a while coming; thank you for your patience. I’m very happy to be able to share the final four candidates for Mozilla’s new community-facing synchronous messaging system.

These candidates were assessed on a variety of axes, most importantly Community Participation Guideline enforcement and accessibility, but also including team requirements from engineering, organizational-values alignment, usability, utility and cost. To close out, I’ll talk about the options we haven’t chosen and why, but for the moment let’s lead with the punchline.

Our candidates are:

We’ve been spoiled for choice here – there were a bunch of good-looking options that didn’t make it to the final four – but these are the choices that generally seem to meet our current institutional needs and organizational goals.

We haven’t stood up a test instance for Slack, on the theory that Mozilla already has a surprising number of volunteer-focused Slack instances running already – Common Voice, Devtools and A-Frame, for example, among many others – but we’re standing up official test instances of each of the other candidates shortly, and they’ll be available for open testing soon.

The trial period for these will last about a month. Once they’re spun up, we’ll be taking feedback in dedicated channels on each of those servers, as well as in #synchronicity on, and we’ll be creating a forum on Mozilla’s community Discourse instance as well. We’ll have the specifics for you at the same time as those servers will be opened up and, of course you can always email me.

I hope that if you’re interested in this stuff you can find some time to try out each of these options and see how well they fit your needs. Our timeline for this transition is:

  1. From September 12th through October 9th, we’ll be running the proof of concept trials and taking feedback.
  2. From October 9th through the 30th, we’re going to discuss that feedback, draft a proposed post-IRC plan and muster stakeholder approval.
  3. On December 1st, assuming we can gather that support, we will stand up the new service.
  4. And finally – allowing transition time for support tooling and developers – no later than March 1st 2020, IRC.m.o will be shut down.

In implementation terms, there are a few practical things I’d like to mention:

  • At the end of the trial period, all of these instances will be turned off and all the information in them will be deleted. The only way to win the temporary-permanent game is not to play; they’re all getting decommed and our eventual selection will get stood up properly afterwards.
  • The first-touch experiences here can be a bit rough; we’re learning how these things work at the same time as you’re trying to use them, so the experience might not be seamless. We definitely want to hear about it when setup or connection problems happen to you, but don’t be too surprised if they do.
  • Some of these instances have EULAs you’ll need to click through to get started. Those are there for the test instances, and you shouldn’t expect that in the final products.
  • We’ll be testing out administration and moderation tools during this process, so you can expect to see the occasional bot, or somebody getting bounced arbitrarily. The CPG will be in effect on these test instances, and as always if you see something, say something.
  • You’re welcome to connect with mobile or alternative clients where those are available; we expect results there to be uneven, and we’d be glad for feedback there as well. There will be links in the feedback document we’ll be sending out when the servers are opened up to collections of those clients.
  • Regardless of our choice of public-facing synchronous communications platform, our internal Slack instance will continue to be the “you are inside a Mozilla office” confidential forum. Internal Slack is not going away; that has never been on the table. Whatever the outcome of this process, if you work at Mozilla your manager will still need to be able to find you on Slack, and that is where internal discussions and critical incident management will take place.

… and a few words on some options we didn’t pick and why:

  • Zulip, Gitter.IM and Spectrum.Chat all look like strong candidates, but getting them working behind IAM turned out to be either very difficult or impossible given our resources.
  • Discord’s terms of service, particularly with respect to the rights they assert over participants’ data, are expansive and very grabby, effectively giving them unlimited rights to do anything they want with anything we put into their service. Coupling that with their active hostility towards interoperability and alternative clients has disqualified them as a community platform.
  • Telegram (and a few other mobile-first / chat-first products in that space) looked great for conversations, but not great for work.
  • IRCv3 is just not there yet as a protocol, much less in terms of standardization or having extensive, mature client support.

So here we are. It’s such a relief to be able to finally click send on this post. I’d like to thank everyone on Mozilla’s IT and Open Innovation teams for all the work they’ve done to get us this far, and everyone who’s expressed their support (and sympathy, we got lots of that too) for this process. We’re getting closer.

Mozilla Future Releases BlogWhat’s next in making Encrypted DNS-over-HTTPS the Default

In 2017, Mozilla began working on the DNS-over-HTTPS (DoH) protocol, and since June 2018 we’ve been running experiments in Firefox to ensure the performance and user experience are great. We’ve also been surprised and excited by the more than 70,000 users who have already chosen on their own to explicitly enable DoH in Firefox Release edition. We are close to releasing DoH in the USA, and we have a few updates to share.

After many experiments, we’ve demonstrated that we have a reliable service whose performance is good, that we can detect and mitigate key deployment problems, and that most of our users will benefit from the greater protections of encrypted DNS traffic. We feel confident that enabling DoH by default is the right next step. When DoH is enabled, users will be notified and given the opportunity to opt out.

This post includes results of our latest experiment, configuration recommendations for systems administrators and parental controls providers, and our plans for enabling DoH for some users in the USA.

Results of our Latest Experiment

Our latest DoH experiment was designed to help us determine how we could deploy DoH, honor enterprise configuration and respect user choice about parental controls.

We had a few key learnings from the experiment.

  • We found that OpenDNS’ parental controls and Google’s safe-search feature were rarely configured by Firefox users in the USA. In total, 4.3% of users in the study used OpenDNS’ parental controls or safe-search. Surprisingly, there was little overlap between users of safe-search and OpenDNS’ parental controls. As a result, we’re reaching out to parental controls operators to find out more about why this might be happening.
  • We found 9.2% of users triggered one of our split-horizon heuristics. The heuristics were triggered in two situations: when websites were accessed whose domains had non-public suffixes, and when domain lookups returned both public and private (RFC 1918) IP addresses (a quick sketch of that private-range check follows this list). There was also little overlap between users of our split-horizon heuristics, with only 1% of clients triggering both heuristics.
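For reference, RFC 1918 reserves,, and for private use; a minimal check along those lines (a sketch only, not Firefox's actual implementation) looks like this:

// Sketch only (not Firefox's implementation): is an IPv4 address in an RFC 1918 private range?
function isRfc1918(ip: string): boolean {
  const [a, b] = ip.split(".").map(Number);
  return (
    a === 10 ||                           //
    (a === 172 && b >= 16 && b <= 31) ||  //
    (a === 192 && b === 168)              //
  );
}

console.log(isRfc1918(""));   // true
console.log(isRfc1918(""));      // false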

Moving Forward

Now that we have these results, we want to tell you about the approach we have settled on to address managed networks and parental controls. At a high level, our plan is to:

  • Respect user choice for opt-in parental controls and disable DoH if we detect them;
  • Respect enterprise configuration and disable DoH unless explicitly enabled by enterprise configuration; and
  • Fall back to operating system defaults for DNS when split horizon configuration or other DNS issues cause lookup failures.

We’re planning to deploy DoH in “fallback” mode; that is, if domain name lookups using DoH fail or if our heuristics are triggered, Firefox will fall back and use the default operating system DNS. This means that for the minority of users whose DNS lookups might fail because of split horizon configuration, Firefox will attempt to find the correct address through the operating system DNS.

In addition, Firefox already detects that parental controls are enabled in the operating system, and if they are in effect, Firefox will disable DoH. Similarly, Firefox will detect whether enterprise policies have been set on the device and will disable DoH in those circumstances. If an enterprise policy explicitly enables DoH, which we think would be awesome, we will also respect that. If you’re a system administrator interested in how to configure enterprise policies, please find documentation here. If you find any bugs, please report them here.

Options for Providers of Parental Controls

We’re also working with providers of parental controls, including ISPs, to add a canary domain to their blocklists. This helps us in situations where the parental controls operate on the network rather than an individual computer. If Firefox determines that our canary domain is blocked, this will indicate that opt-in parental controls are in effect on the network, and Firefox will disable DoH automatically. If you are a provider of parental controls, details are available here. Please reach out to us for more information at We’re also interested in connecting with commercial blocklist providers, in the US and internationally.

This canary domain is intended for use in cases where users have opted in to parental controls. We plan to revisit the use of this heuristic over time, and we will be paying close attention to how the canary domain is adopted. If we find that it is being abused to disable DoH in situations where users have not explicitly opted in, we will revisit our approach.

Plans for Enabling DoH Protections by Default

We plan to gradually roll out DoH in the USA starting in late September. Our plan is to start slowly enabling DoH for a small percentage of users while monitoring for any issues before enabling for a larger audience. If this goes well, we will let you know when we’re ready for 100% deployment. For the moment, we encourage enterprise administrators and parental control providers to check out our config documentation and get in touch with any questions.


The post What’s next in making Encrypted DNS-over-HTTPS the Default appeared first on Future Releases.

Honza BambasVisual Studio Code auto-complete displays MDN reference for CSS and HTML tags

Mozilla Developer Network (now MDN Web Docs) is great, probably the best Web development reference site of them all. And therefore even Microsoft defaults to us now in Visual Studio Code.

Snippet from their Release Notes for 1.38.0:


MDN Reference for HTML and CSS

VS Code now displays a URL pointing to the relevant MDN Reference in completion and hover of HTML & CSS entities:

HTML & CSS MDN Reference

We thank the MDN documentation team for their effort in curating mdn-data / mdn-browser-compat-data and making MDN resources easily accessible by VS Code.

The post Visual Studio Code auto-complete displays MDN reference for CSS and HTML tags appeared first on mayhemer's blog.

Mozilla VR BlogSemantic Placement in Augmented Reality using MrEd


In this article we’re going to take a brief look at how we may want to think about placement of objects in Augmented Reality. We're going to use our recently released lightweight AR editing tool MrEd to make this easy to demonstrate.

Designers often express ideas in a domain appropriate language. For example a designer may say “place that chair on the floor” or “hang that photo at eye level on the wall”.

However when we finalize a virtual scene in 3d we often keep only the literal or absolute XYZ position of elements and throw out the original intent - the deeper reason why an object ended up in a certain position.

It turns out that it’s worth keeping the intention - so that when AR scenes are re-created for new participants or in new physical locations that the scenes still “work” - that they still are satisfying experiences - even if some aspects change.

In a sense this recognizes the Japanese term ‘Wabi-Sabi’: that aesthetic placement is always imperfect and contends with fickle forces. Describing placement in terms of semantic intent is also similar to responsive design on the web or the idea of design patterns as described by Christopher Alexander.

Let’s look at two simple examples of semantic placement in practice.

1. Relative to the Ground

When you’re placing objects in augmented reality you often want to specify that those objects should be relationally placed in a position relative to other objects. A typical, in fact ubiquitous, example of placement is that often you want an object to be positioned relative to “the ground”.

Sometimes the designer's intent is to select the highest relative surface underneath the object in question (such as placing a lamp on a table) or at other times to select the lowest relative surface underneath an object (such as say placing a kitten on the floor under a table). Often, as well, we may want to express a placement in the air - such as say a mailbox, or a bird.

In this very small example I’ve attached a ground detection script to a duck, and then sprinkled a few other passive objects around the scene. As the ground is detected the duck will pop down from a default position to be offset relative to the ground (although still in the air). See the GIF above for an example of the effect.

To try this scene out yourself you will need WebXR for iOS which is a preview of emerging WebXR standards using iOS ARKit to expose augmented reality features in a browser environment. This is the url for the scene above in play mode (on a WebXR capable device):

Here is what it should look like in edit mode:

Semantic Placement in Augmented Reality using MrEd

You can also clone the glitch and edit the scene yourself (you’ll want to remember to set a password in the .env file and then login from inside MrEd). See:!/painted-traffic

Here’s my script itself:

/// #title grounded
/// #description Stick to Floor/Ground - dynamically and constantly searching for low areas nearby
    start: function(evt) {
        this.sgp.startWorldInfo()  // start painting detected ground planes (receiver name approximated)
    },
    tick: function(e) {
        let floor = this.sgp.getFloorNear({})  // best guess at the floor near this object
        if(floor) {
            this.position.y = floor.y  // left-hand side approximated; the original line was truncated
        }
    }

This is relying on code baked into MrEd (specifically inside of findFloorNear() in XRWorldInfo.js if you really want to get detailed).

In the above example I begin by calling startWorldInfo() to start painting the ground planes (so that I can see them since it’s nice to have visual feedback). And, every tick, I call a floor finder subroutine which simply returns the best guess as to the floor in that area. The floor finder logic in this case is pre-defined but one could easily imagine other kinds of floor finding strategies that were more flexible.

2. Follow the player

Another common designer intent is to make sure that some content is always visible to the player. As designers in virtual or augmented reality it can be more challenging to direct a user’s attention to virtual objects. These are immersive 3D worlds; the player can be looking in any direction. Some kind of mechanic is needed to help make sure that the player sees what they need to see.

One common simple solution is to build an object that stays in front of the user. This can itself be a combination of multiple simpler behaviors. An object can be ordered to seek a position in front of the user, stay at a certain height, and ideally be billboarded so that any signage or message is always legible.

In this example a sign is decorated with two separate scripts, one to keep the sign in front of the player, and another to billboard the sign to face the player.
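
To make that concrete, here is a rough sketch of what such a behavior could look like, written in the same start/tick style as the grounded script above, with the two concerns folded into a single tick handler for brevity. This is a hypothetical illustration only; getPlayerPose, forward, position and lookAt are stand-in names, not MrEd's actual API:

    tick: function(e) {
        // Hypothetical helper: where is the player, and which way are they facing?
        let player = this.getPlayerPose()

        // Seek a point a couple of meters in front of the player, at eye height.
        let target = {
            x: player.position.x + player.forward.x * 2,
            y: player.position.y,
            z: player.position.z + player.forward.z * 2
        }

        // Ease toward the target instead of snapping, so the motion feels natural.
        this.position.x += (target.x - this.position.x) * 0.1
        this.position.y += (target.y - this.position.y) * 0.1
        this.position.z += (target.z - this.position.z) * 0.1

        // Billboard: turn the sign so its face is always toward the player.
        this.lookAt(player.position)
    }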

Closing thoughts

We’ve only scratched the surface of the kinds of intent that could be expressed or combined. If you want to dive deeper there is a longer list in a separate article, Laundry List of UX Patterns. I also invite you to help extend the industry; think both about what high level intentions you mean when you place objects and also about how you'd communicate those intentions.

The key insight here is that preserving semantic intent means thinking of objects as intelligent, able to respond to simple high level goals. Virtual objects are more than just statues or art at a fixed position, but can be entities that can do your bidding, and follow high level rules.

Ultimately future 3d tools will almost certainly provide these kinds of services - much in the way CSS provides layout directives. We should also expect to see conventions emerge as more designers begin to work in this space. As a call to action, it's worth noticing the high level intentions you have when you place objects, and getting the developers of the tools that you use to incorporate those intentions as primitives.

Hacks.Mozilla.OrgDebugging TypeScript in Firefox DevTools

Firefox Debugger has evolved into a fast and reliable tool chain over the past several months and it now supports many cool features. It’s primarily used to debug JavaScript, but did you know that you can also use Firefox to debug your TypeScript applications?

Before we jump into real world examples, note that today’s browsers can’t run TypeScript code directly. It’s important to understand that TypeScript needs to be compiled into JavaScript before it’s included in an HTML page.

Also, debugging TypeScript is done through a source-map, and so we need to instruct the compiler to produce a source-map for us as well.

You’ll learn the following in this post:

  • Compiling TypeScript to JavaScript
  • Generating a source map
  • Debugging TypeScript

Let’s get started with a simple TypeScript example.

TypeScript Example

The following code snippet shows a simple TypeScript hello world page.

// hello.ts
interface Person {
  firstName: string;
  lastName: string;
}

function hello(person: Person) {
  return "Hello, " + person.firstName + " " + person.lastName;
}

function sayHello() {
  let user = { firstName: "John", lastName: "Doe" };
  document.getElementById("output").innerText = hello(user);
}

TypeScript (TS) is very similar to JavaScript and the example should be understandable even for JS developers unfamiliar with TypeScript.

The corresponding HTML page looks like this:

// hello.html
<!DOCTYPE html>
  <script src="hello.js"></script>
  <button onclick="sayHello()">Say Hello!</button>
  <div id="output"></div>

Note that we are including the hello.js, not the hello.ts file, in the HTML file. Today’s browsers can’t run TS directly, and so we need to compile our hello.ts file into regular JavaScript.

The rest of the HTML file should be clear. There is one button that executes the sayHello() function and <div id="output"> that is used to show the output (hello message).

The next step is to compile our TypeScript into JavaScript.

Compiling TypeScript To JavaScript

To compile TypeScript into JavaScript you need to have a TypeScript compiler installed. This can be done through NPM (Node Package Manager).

npm install -g typescript

Using the following command, we can compile our hello.ts file. It should produce a JavaScript version of the file with the *.js extension.

tsc hello.ts

In order to produce a source-map that describes the relationship between the original code (TypeScript) and the generated code (JavaScript), you need to use an additional --sourceMap argument. It generates a corresponding *.map file.

tsc hello.ts --sourceMap

Yes, it’s that simple.
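
If everything went well, you should now have three files side by side: hello.ts (the original source), hello.js (the JavaScript the browser will actually run), and hello.js.map (the source map connecting the two).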

You can read more about other compiler options if you are interested.

The generated JS file should look like this:

function hello(person) {
  return "Hello, " + person.firstName + " " + person.lastName;
}
function sayHello() {
  var user = {
    firstName: "John",
    lastName: "Doe"
  };
  document.getElementById("output").innerText = hello(user);
}
//# sourceMappingURL=hello.js.map

The most interesting thing is probably the comment at the end of the generated file. The syntax comes from old Firebug times and refers to a source map file containing all information about the original source.

Are you curious what the source map file looks like? Here it is.


It contains information (including location) about the generated file (hello.js), the original file (hello.ts), and, most importantly, mappings between those two. With this information, the debugger knows how to interpret the TypeScript code even if it doesn’t know anything about TypeScript.

The original language could be anything (Rust, C++, etc.) and with a proper source-map, the debugger knows what to do. Isn’t that magic?

We are all set now. The next step is loading our little app into the Debugger.

Debugging TypeScript

The debugging experience is no different from how you’d go about debugging standard JS. You’re actually debugging the generated JavaScript, but since a source map is available, the debugger knows how to show you the original TypeScript instead.

This example is available online, so if you are running Firefox you can try it right now.

Let’s start with creating a breakpoint on line 9 in our original TypeScript file. To hit the breakpoint you just need to click on the Say Hello! button introduced earlier.

Debugging TypeScript

See, it’s TypeScript there!

Note the Call stack panel on the right side; it properly shows frames coming from the hello.ts file.

One more thing: If you are interested in seeing the generated JavaScript code you can use the context menu and jump right into it.

This action should navigate you to the hello.js file and you can continue debugging from the same location.

You can see that the Sources tree (on the left side) shows both these files at the same time.

Map Scopes

Let’s take a look at another neat feature that allows inspection of variables in both original and generated scopes.

Here is a more complex glitch example.

  1. Load the example
  2. Open DevTools Toolbox and select the Debugger panel
  3. Create a breakpoint in Webpack/src/index.tsx file on line 45
  4. The breakpoint should pause JS execution immediately

Screenshot of the DevTools debugger panel allowing inspection of variables in both original and generated scopes

Notice the Scopes panel on the right side. It shows variables coming from the generated (and also minified) code, which doesn’t correspond to the original TSX (TypeScript with JSX) code that you see in the Debugger panel.

There is a weird e variable instead of localeTime, which is actually used in the source code.

This is where the Map scopes feature comes in handy. In order to see the original variables (used in the original TypeScript code) just click the Map checkbox.

Debugger panel in Firefox DevTools, using the Map checkbox to see original TypeScript variables

See, the Scopes panel shows the localeTime variable now (and yes, the magic comes from the source map).

Finally, if you are interested in where the e variable comes from, jump into the generated location using the context menu (like we just did in the previous example).

DevTools showing Debugger panel using the context menu to locate the e variable

Stay tuned for more upcoming Debugger features!

Jan ‘Honza’ Odvarko

The post Debugging TypeScript in Firefox DevTools appeared first on Mozilla Hacks - the Web developer blog.

Georg FritzscheIntroducing Glean — Telemetry for humans

Introducing Glean — Telemetry for humans

Glean logo — subtitled “telemetry for humans”

When Firefox Preview shipped, it was also the official launch of Glean, our new mobile product analytics & telemetry solution true to Mozilla’s values. This post goes into how we got there and what its design principles are.


In the last few years, Firefox development has become increasingly data-driven. Mozilla’s larger data engineering team builds & maintains most of the technical infrastructure that makes this possible, from the Firefox telemetry code to the Firefox data platform and hosted analysis tools. While data about our products is crucial, Mozilla has a rare approach to data collection, following our privacy principles. This includes requiring data review for every new piece of data collection to ensure we are upholding our principles — even when it makes our jobs harder.

One great success story for us is having the Firefox telemetry data described in machine-readable and clearly structured form. This encourages best practices like mandatory documentation, steering towards lean data practices and enables automatic data processing — from generating tables to powering tools like our measurement dashboard or the Firefox probe dictionary.

However, we also learned lessons about what didn’t work so well. While the data types we used were flexible, they were hard to interpret. For example, we use plain numbers to store counts, generic histograms to store multiple timespan measures and allow for custom JSON submissions for uncovered use-cases. The flexibility of these data types means it takes work to understand how to use them for different use-cases & leaves room for accidental error on the instrumentation side. Furthermore, it requires manual effort in interpreting & analysing these data points. We noticed that we could benefit from introducing higher-level data types that are closer to what we want to measure — like data types for “counters” and “timing distributions”.

What about our mobile telemetry?

Another factor was that our mobile product infrastructure was not yet well integrated with the Firefox telemetry infrastructure described above. Different products used different analytics solutions & different versions of our own mobile telemetry code, across Android & iOS. Also, our own mobile telemetry code did not describe its metrics in machine-readable form. This meant analysis was potentially different for each product & new instrumentations were higher effort. Integrating new products into the Firefox telemetry infrastructure meant substantial manual effort.

From reviewing the situation, one main question came up: What if we could provide one consistent telemetry SDK for our mobile products, bringing the benefits of our Firefox telemetry infrastructure but without the above mentioned drawbacks?

Introducing Glean

In 2018, we looked at how we could integrate Mozilla’s mobile products better. Putting together what we learned from our existing Firefox Telemetry system, feedback from various user interviews and what we found mattered for our mobile teams, we decided to reboot our telemetry and product analytics solution for mobile. We took input from a cross-functional set of people, data science, engineering, product management, QA and others to form a complete picture of what was required.

From that, we set out to build an end-to-end solution called Glean, consisting of different pieces:

  • Product-side tools — The data enters our system here through the Glean SDK, which is what products integrate and record data into. It provides mobile APIs and aims to hide away the complexities of reliable data collection.
  • Services — This is where the data is stored and made available for analysis, building on our Firefox data platform.
  • Data Tools — Here our users are able to look at the data, performing analysis and setting up dashboards. This goes from running SQL queries, visualizing core product analytics to data scientists digging deep into the raw data.

Our main goal was to support our typical mobile analytics & engineering use-cases efficiently, which came down to the following principles:

  • Basic product analytics are collected out-of-the-box in a standardized way. A baseline of analysis is important for all our mobile applications, from counting active users to retention and session times. This is supported out-of-the-box by our SDK and works consistently across mobile products that integrate it.
  • No custom code is required for adding new metrics to a product. To make our engineers more productive, the SDK keeps the amount of instrumentation code required for metrics as small as possible. Engineers only need to specify what they want to instrument, with which semantics and then record the data using the Glean SDK.
  • New metrics should be available for basic analysis without additional effort. Once a released product is enabled for Glean, getting access to newly added metrics shouldn’t require a time-consuming process. Instead they should show up automatically, both for end-to-end validation and basic analysis through SQL.

To make sure that what we build is true to Mozilla’s values, encourages best practices and is sustainable to work with, we added these principles:

  • Lean data practices are encouraged through SDK design choices. It’s easy to limit data collection to only what’s necessary and documentation can be generated easily, aiding both transparency & understanding for analysis.
  • Use of standardized data types & registering them in machine-readable files. By having collected data described in machine-readable files, our various data tools can read them and support metrics automatically, without manual work, including schema generation, etc.
  • Introduce high-level metric types, so APIs & data tools can better match the use-cases. To make the choice easier for which metric type to use, we introduced higher-level data types that offer clear and understandable semantics — for example, when you want to count something, you use the “counter” type. This also gives us opportunities to offer better tooling for the data, both on the client and for data tooling.
  • Basic semantics on how the data is collected are clearly defined by the library. To make it easier to understand the general semantics of our data, the Glean SDK will define and document when which kind of data will get sent. This makes data analysis easier through consistent semantics.

One crucial design choice here was to use higher-level metric types for the collected metrics, while not supporting free-form submissions. This choice allows us to focus the Glean end-to-end solution on clearly structured, well-understood & automatable data and enables us to scale analytics capabilities more efficiently for the whole organization.

Let’s count something

So how does this work out in practice? To have a more concrete example, let’s say we want to introduce a new metric to understand how many times new tabs are opened in a browser.

In Glean, this starts from declaring that metric in a YAML file. In this case we’ll add a new “counter” metric:

type: counter
description: Count how often a new tab is opened. …

Now from here, an API is automatically generated that the product code can use to record when something happens:

import org.mozilla.yourApplication.GleanMetrics.BrowserUsage

override fun tabOpened() {
    BrowserUsage.tabOpened.add()  // counter accessor name inferred from the metric declared above
}


That’s it, everything else is handled internally by the SDK — from storing the data, packaging it up correctly and sending it out.

This new metric can then be unit-tested or verified in real-time, using a web interface to confirm the data is coming in. Once the product change is live, data starts coming in and shows up in standard data sets. From there it is available to query using SQL through Redash, our generic go-to data analysis tool. Other tools can also later integrate it, like the measurement dashboard or Amplitude.

Of course there is a set of other metric types available, including events, dates & times and other typical use cases.

Want to see how this looks in code? You can take a look at the Glean Android sample app, especially the metrics.yaml file and its main activity.

What’s next?

The first version of the Glean solution went live to support the launch of Firefox Preview, with an initial SDK support for Android applications & a priority set of data tools. iOS support for the SDK is already planned for 2019, as is improved & expanded integration with different analysis tools. We are also actively considering support for desktop platforms, to make Glean a true cross-platform analytics SDK.

If you’re interested in learning more, you can check out:

We’ll certainly expand on more technical details in upcoming blog posts.

Special thanks

While this project took contributions from a lot of people, I especially want to call out Frank Bertsch (data engineering lead), Alessio Placitelli (Glean SDK lead) and Michael Droettboom (data engineer & SDK engineer). Without their substantial contributions to design & implementation, this project would not have been possible.

Introducing Glean — Telemetry for humans was originally published in Georg Fritzsche on Medium, where people are continuing the conversation by highlighting and responding to this story.

David HumphreySome Assembly Required

In my open source courses, I spend a lot of time working with new developers who are trying to make sense of issues on GitHub and figure out how to begin.  When it comes to how people write their issues, I see all kinds of styles.  Some people write for themselves, using issues like a TODO list: "I need to fix X and Y."  Other people log notes from a call or meeting, relying on the collective memory of those who attended: "We agreed that so-and-so is going to do such-and-such."  Still others write issues that come from outside the project, recording a bug or some other problem: "Here is what is happening to me..."

Because I'm getting ready to take another cohort of students into the wilds of GitHub, I've been thinking once more about ways to make this process better.  Recently I spent a number of days assembling furniture from IKEA with my wife.  Spending that much time with Allen keys got me thinking about what we could learn from IKEA's work to enable contribution from customers.

I am not a furniture maker.  Not even close.  While I own some power tools, most were gifts from my father, who actually knows how to wield them.  I'm fearless when it comes to altering bits in a computer; but anything that puts holes in wood, metal, or concrete terrifies me.  And yet, like so many other people around the world, I've "built" all kinds of furniture in our house--at least I've assembled it.

In case you haven't bought furniture from IKEA, they are famous for designing not only the furniture itself, but also the materials, packaging, and saving cost by offloading most of the assmbly to the customer.  Each piece comes with instructions, showing the parts manifest, tools you'll need (often simple ones are included), and pictorial, step-wise instructions for assembling the piece.

IKEA's model is amazing: amazing that people will do it, amazing that it's doable at all by the general public!  You're asking people to do a task that they a) probably have never done before; b) probably won't do again.  Sometimes you'll buy 4 of some piece, a chair, and through repeated trial and error, get to the point where you can assemble it intuitively.  But this isn't the normal use case.  For the most part, we buy something we don't have, assemble it, and then we have it.  This means that the process has to work during the initial attempt, without training.  IKEA is keen that it work because they don't want you to return it, or worse, never come back again.

Last week I assembled all kinds of things for a few rooms in our basement: chairs, a couch, tables, etc.  I spent hours looking at, and working my way through IKEA instructions.  Take another look at the Billy instructions I included above.  Here's some of what I notice:

  • It starts with the end-goal: here is how things should look when you're done
  • It tells you what tools you'll need in order to make this happen and, importantly, imposes strict limits on the sorts of tools that might be required.  An expert could probably make use of more advanced tools; but this isn't for experts.
  • It gives you a few GOTCHAs to avoid up front.  "Be careful to do it this way, not that way." This repeats throughout the rest of the steps.  Like this, not that.
  • It itemizes and names (via part number) all the various pieces you'll need.  There should be 16 of these, 18 of these, etc.
  • It takes you step-by-step through manipulating the parts on the floor into the product you saw in the store, all without words.
  • Now look at how short this thing is.  The information density is high even though the complexity is low.

It got me thinking about lessons we could learn when filing issues in open source projects.  I realize that there isn't a perfect analogy between assembling furniture and fixing a bug.  IKEA mass produces the same bookshelf, chairs, and tables, and these instructions work on all of them.  Meanwhile, a bug (hopefully) vanishes as soon as it's fixed.  We can't put the same effort into our instructions for a one-off experience as we can for a mass produced one.  However, in both cases, I would stress that the experience is similar for the person working through the "assembly": it's often their first time following these steps.

When filing a GitHub issue, what could we learn from IKEA instructions?

  1. Show the end goal of the work.  "This issue is about moving this button to the right.  Currently it looks like this and we want it to look like this."  A lot of people do this, especially with visual, UI related bugs.  However, we could do a version of it on lots of non-visual bugs too.  Here is what you're trying to achieve with this work.  When we file bugs, we assume this is always clear.  But imagine it needs to be clear based solely on these "instructions."
  2. List the tools you'll need to accomplish this, and include any that are not common.  We do this sometimes. "Go read the page."  That might be enough.  But we could put more effort into calling out specific things you'll need that might not be obvious, URLs to things, command invocation examples, etc.  I think a lot of people bristle at the idea of using issues to teach people "common knowledge."  I agree that there's a limit to what is reasonable in an issue (recall how short IKEA's was).  But we often err on the side of not-enough, and assume that our knowledge is the same as our reader's.  It almost certainly won't be if this is for a new contributor.
  3. Call out the obstacles in the way of accomplishing this work.  Probably there are some things you should know about how the tests run when we change this part of the code.  Or maybe you need to be aware that we need to run some script after we update things in this directory.  Any mistakes that people have made in the past, and which haven't been dealt with through automation, are probably in scope here.  Even better, put them in a more sticky location like the official docs, and link to them from here.
  4. Include a manifest of the small parts involved.  For example, see the lines of code here, here, and here.  You'll have to update this file, this file, and that file.  This is the domain of the change you're going to need to make.  Be clear about what's involved.  I've done this a lot, and it often doesn't take much time when you know the code well.  However, for the new contributor, this is a lifesaver.
  5. Include a set of steps that one could follow on the way to making this fix.  This is especially important in the case that changes need to happen in a sequence.

These steps aren't always possible or practical.  But it takes less work than you might think, and the quality of the contributions you get as a result is worth the upfront investment.  In reality, you'll likely end up having to do it in reviews after the fact, when people get it wrong.  Try doing it at the beginning.

Here's a fantastic example of how to do it well.  I've tweeted about this in the past, but Devon Abbott's issue in the Lona repo is fantastic.  Here we see many of the things outlined above.  As a result of this initial work, one of my students was able to jump in.

I want to be careful to not assume that everyone has time to do all this when filing bugs.  Not all projects are meant for external contributors (GitHub actually needs some kind of signal so that people know when to engage and when to avoid certain repos), and not all developers on GitHub are looking to mentor or work with new contributors.  Regardless, I think we could all improve our issues if we thought back to these IKEA instructions from time to time.  A lot of code fixes and regular maintenance tasks should really feel more like assembling furniture vs. hand carving a table leg.  There's so much to do to keep all this code working, we are going to have to find ways to engage and involve new generations of developers who need a hand getting started.

The Firefox FrontierStop video autoplay with Firefox

You know that thing where you go to a website and a video starts playing automatically? Sometimes it’s a video created by the site, and sometimes it’s an ad. That … Read more

The post Stop video autoplay with Firefox appeared first on The Firefox Frontier.

Hacks.Mozilla.OrgDebugging WebAssembly Outside of the Browser

WebAssembly has begun to establish itself outside of the browser via dedicated runtimes like Mozilla’s Wasmtime and Fastly’s Lucet. While the promise of a new, universal format for programs is appealing, it also comes with new challenges. For instance, how do you debug .wasm binaries?

At Mozilla, we’ve been prototyping ways to enable source-level debugging of .wasm files using traditional tools like GDB and LLDB.

The screencast below shows an example debugging session. Specifically, it demonstrates using Wasmtime and LLDB to inspect a program originally written in Rust, but compiled to WebAssembly.

This type of source-level debugging was previously impossible. And while the implementation details are subject to change, the developer experience—attaching a normal debugger to Wasmtime—will remain the same.

By allowing developers to examine programs in the same execution environment as a production WebAssembly program, Wasmtime’s debugging support makes it easier to catch and diagnose bugs that may not arise in a native build of the same code. For example, the WebAssembly System Interface (WASI) treats filesystem access more strictly than traditional Unix-style permissions. This could create issues that only manifest in WebAssembly runtimes.

Mozilla is proactively working to ensure that WebAssembly’s development tools are capable, complete, and ready to go as WebAssembly expands beyond the browser.

Please try it out and let us know what you think.

Note: Debugging using Wasmtime and LLDB should work out of the box on Linux with Rust programs, or with C/C++ projects built via the WASI SDK.

Debugging on macOS currently requires building and signing a more recent version of LLDB.

Unfortunately, LLDB for Windows does not yet support JIT debugging.

Thanks to Lin Clark, Till Schneidereit, and Yury Delendik for their assistance on this post, and for their work on WebAssembly debugging.

The post Debugging WebAssembly Outside of the Browser appeared first on Mozilla Hacks - the Web developer blog.

Daniel StenbergFIPS ready with curl

Download wolfSSL FIPS Ready (in my case I got wolfSSL 4.1.0)

Unzip the source code somewhere suitable

$ cd $HOME/src
$ unzip
$ cd wolfssl-4.1.0-gplv3-fips-ready

Build the fips-ready wolfSSL and install it somewhere suitable

$ ./configure --prefix=$HOME/wolfssl-fips --enable-harden --enable-all
$ make -sj
$ make install

Download curl, the normal curl package. (in my case I got curl 7.65.3)

Unzip the source code somewhere suitable

$ cd $HOME/src
$ unzip
$ cd curl-7.65.3

Build curl with the just recently built and installed fips ready wolfSSL version.

$ LD_LIBRARY_PATH=$HOME/wolfssl-fips/lib ./configure --with-wolfssl=$HOME/wolfssl-fips --without-ssl
$ make -sj

Now, verify that your new build matches your expectations by:

$ ./src/curl -V

It should show that it uses wolfSSL and that all the protocols and features you want are enabled and present. If not, iterate until it does!

“FIPS Ready means that you have included the FIPS code into your build and that you are operating according to the FIPS enforced best practices of default entry point, and Power On Self Test (POST).”

Nick FitzgeraldCombining Coverage-Guided and Generation-Based Fuzzing

Coverage-guided fuzzing and generation-based fuzzing are two powerful approaches to fuzzing. It can be tempting to think that you must either use one approach or the other at a time, and that they can’t be combined. However, this is not the case. In this blog post I’ll describe a method for combining coverage-guided fuzzing with structure-aware generators that I’ve found to be both effective and practical.

What is Generation-Based Fuzzing?

Generation-based fuzzing leverages a generator to create random instances of the fuzz target’s input type. The csmith program, which generates random C source text, is one example generator. Another example is any implementation of the Arbitrary trait from the quickcheck property-testing framework. The fuzzing engine repeatedly uses the generator to create new inputs and feeds them into the fuzz target.

// Generation-based fuzzing algorithm.

fn generate_input(rng: &mut impl Rng) -> MyInputType {
    // Generator provided by the user...
}

loop {
    let input = generate_input(rng);
    let result = run_fuzz_target(input);

    // If the fuzz target crashed/panicked/etc report
    // that.
    if result.is_interesting() {
        report(input);
    }
}
The generator can be made structure aware, leveraging knowledge about the fuzz target to generate inputs that are more likely to be interesting. They can generate valid C programs for fuzzing a C compiler. They can make inputs with the correct checksums and length prefixes. They can create instances of typed structures in memory, not just byte buffers or strings. But naïve generation-based fuzzing can’t learn from the fuzz target’s execution to evolve its inputs. The generator starts from square one each time it is invoked.

What is Coverage-Guided Fuzzing?

Rather than throwing purely random inputs at the fuzz target, coverage-guided fuzzers instrument the fuzz target to collect code coverage. The fuzzer then leverages this coverage information as feedback to mutate existing inputs into new ones, and tries to maximize the amount of code covered by the total input corpus. Two popular coverage-guided fuzzers are libFuzzer and AFL.

// Coverage-guided fuzzing algorithm.

let corpus = initial_set_of_inputs();
loop {
    let old_input = choose_one(&corpus);
    let new_input = mutate(old_input);
    let result = run_fuzz_target(new_input);

    // If the fuzz target crashed/panicked/etc report
    // that.
    if result.is_interesting() {
        report(new_input);
    }

    // If executing the input hit new code paths, add
    // it to our corpus.
    if result.executed_new_code_paths() {
        corpus.insert(new_input);
    }
}
The coverage-guided approach is great at improving a fuzzer’s efficiency at creating new inputs that poke at new parts of the program, rather than testing the same code paths repeatedly. However, in its naïve form, it is easily tripped up by the presence of things like checksums in the input format: mutating a byte here means that a checksum elsewhere must be updated as well, or else execution will never proceed beyond validating the checksum to reach more interesting parts of the program.

The Problem

Coverage-based fuzzing lacks a generator’s understanding of which inputs are well-formed, while generation-based fuzzing lacks coverage-guided fuzzing’s ability to mutate inputs strategically. We would like to combine coverage-guided and generation-based fuzzing to get the benefits of both without the weaknesses of either.

So how do we implement a fuzz target?

When writing a fuzz target for use with coverage-guided fuzzers, you’re typically given some byte buffer, and you feed that into your parser or process it in whatever way is relevant.

// Implementation of a fuzz target for a coverage
// -guided fuzzer.
fn my_coverage_guided_fuzz_target(input: &[u8]) {
    // Do stuff with `input`...
}

However, when writing a fuzz target for use with a generation-based fuzzer, instead of getting a byte buffer, you’re given a structure-aware input that was created by your generator.

// Implementation of a fuzz target for a generation-
// based fuzzer using `csmith`.
fn my_c_smith_fuzz_target(input: MyCsmithOutput) {
    // Compile the C source text that was created
    // by `csmith`...
}

// Implementation of a generator and fuzz target for
// a generation-based fuzzer using the `quickcheck`
// property-testing framework.
impl quickcheck::Arbitrary for MyType {
    fn arbitrary(g: &mut impl quickcheck::Gen) -> MyType {
        // Generate a random instance of `MyType`...
    }
}

fn my_quickcheck_fuzz_target(input: MyType) {
    // Do stuff with the `MyType` instance that was
    // created by `MyType::arbitrary`...
}

The signatures for coverage-guided and generation-based fuzz targets have different input types, and it isn’t obvious how we can reuse a single fuzz target with each approach to fuzzing separately, let alone combine both approaches at the same time.

A Solution

As a running example, let’s imagine we are authoring a crate that converts back and forth between RGB and HSL colors.

/// A color represented with RGB.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct Rgb {
    pub r: u8,
    pub g: u8,
    pub b: u8,
}

impl Rgb {
    pub fn to_hsl(&self) -> Hsl {
        // ...
    }
}

/// A color represented with HSL.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Hsl {
    pub h: f64,
    pub s: f64,
    pub l: f64,
}

impl Hsl {
    pub fn to_rgb(&self) -> Rgb {
        // ...
    }
}

Now, let’s start by writing a couple of structure-aware test case generators by implementing quickcheck::Arbitrary for our color types, and then using quickcheck’s default test runner to get generation-based fuzzing up and running.

Here are our Arbitrary implementations:

use rand::prelude::*;
use quickcheck::{Arbitrary, Gen};

impl Arbitrary for Rgb {
    fn arbitrary<G: Gen>(g: &mut G) -> Self {
        Rgb {
            r: g.gen(),
            g: g.gen(),
            b: g.gen(),
        }
    }
}

impl Arbitrary for Hsl {
    fn arbitrary<G: Gen>(g: &mut G) -> Self {
        Hsl {
            h: g.gen_range(0.0, 360.0),
            s: g.gen_range(0.0, 1.0),
            l: g.gen_range(0.0, 1.0),
        }
    }
}
Now we need to define what kinds of properties we want to check and what oracles we want to implement to check those properties for us. One of the simplest things we can assert is "doing operation X does not panic or crash", but we might also assert that converting RGB into HSL and back into RGB again is the identity function.

pub fn rgb_to_hsl_doesnt_panic(rgb: Rgb) {
    let _ = rgb.to_hsl();
}

pub fn rgb_to_hsl_to_rgb_is_identity(rgb: Rgb) {
    assert_eq!(rgb, rgb.to_hsl().to_rgb());
}

mod tests {
    use super::*;

    quickcheck::quickcheck! {
        fn rgb_to_hsl_doesnt_panic(rgb: Rgb) -> bool {
            super::rgb_to_hsl_doesnt_panic(rgb);
            true
        }
    }

    quickcheck::quickcheck! {
        fn rgb_to_hsl_to_rgb_is_identity(rgb: Rgb) -> bool {
            super::rgb_to_hsl_to_rgb_is_identity(rgb);
            true
        }
    }
}
Now we can run cargo test and quickcheck will do some generation-based fuzzing for our RGB and HSL conversion crate — great!

However, so far we have just done plain old generation-based fuzzing. We also want to get the advantages of coverage-guided fuzzing without giving up our nice structure-aware generators.

An easy and practical — yet still effective — way to add coverage-guided fuzzing into the mix, is to treat the raw byte buffer input provided by the coverage-guided fuzzer as a sequence of random values and implement a “random” number generator around it. If our generators only use “random” values from this RNG, never from any other source, then the coverage-guided fuzzer can mutate and evolve its inputs to directly control the structure-aware fuzzer and its generated test cases!

The BufRng type from my bufrng crate is exactly this “random” number generator implementation:

// Snippet from the `bufrng` crate.

use rand_core::{Error, RngCore};
use std::iter::{self, Chain, Repeat};
use std::slice;

/// A "random" number generator that yields values
/// from a given buffer (and then zeros after the
/// buffer is exhausted).
pub struct BufRng<'a> {
    iter: Chain<slice::Iter<'a, u8>, Repeat<&'a u8>>,
}

impl BufRng<'_> {
    /// Construct a new `BufRng` that yields values
    /// from the given `data` buffer.
    pub fn new(data: &[u8]) -> BufRng {
        BufRng {
            iter: data.iter().chain(iter::repeat(&0)),
        }
    }
}

impl RngCore for BufRng<'_> {
    fn next_u32(&mut self) -> u32 {
        // Take the next four bytes from the buffer (or the repeated 0
        // once it is exhausted) and pack them into a u32.
        let a = *self.iter.next().unwrap() as u32;
        let b = *self.iter.next().unwrap() as u32;
        let c = *self.iter.next().unwrap() as u32;
        let d = *self.iter.next().unwrap() as u32;
        (a << 24) | (b << 16) | (c << 8) | d
    }

    // ...
}
BufRng will generate “random” values, which are copied from its given buffer. Once the buffer is exhausted, it will keep yielding zero.

let mut rng = BufRng::new(&[1, 2, 3, 4]);

assert_eq!(
    rng.gen::<u32>(),
    (1 << 24) | (2 << 16) | (3 << 8) | 4,
);

assert_eq!(rng.gen::<u32>(), 0);
assert_eq!(rng.gen::<u32>(), 0);
assert_eq!(rng.gen::<u32>(), 0);

Because there is a blanket implementation of quickcheck::Gen for all types that implement rand_core::RngCore, we can use BufRng with any quickcheck::Arbitrary implementation.

Finally, here is a fuzz target definition for a coverage-guided fuzzer that uses BufRng to reinterpret the input into something that our Arbitrary implementations can use:

fn my_fuzz_target(input: &[u8]) {
    // Create a `BufRng` from the raw input byte buffer
    // given to us by the fuzzer.
    let mut rng = BufRng::new(input);

    // Generate an `Rgb` instance with it.
    let rgb = Rgb::arbitrary(&mut rng);

    // Assert our properties!
    rgb_to_hsl_doesnt_panic(rgb);
    rgb_to_hsl_to_rgb_is_identity(rgb);
}
With BufRng, going from quickcheck property tests to combined structure-aware test case generation and coverage-guided fuzzing only required minimal changes!


We can get the benefits of both structure-aware test case generators and coverage-guided fuzzing in an easy, practical way by interpreting the fuzzer’s raw input as a sequence of random values that our generator uses. Because BufRng can be used with quickcheck::Arbitrary implementations directly, it is easy to make the leap from generation-based fuzzing with quickcheck property tests to structure-aware, coverage-guided fuzzing with libFuzzer or AFL. With this setup, the fuzzer can both learn from program execution feedback to create new inputs that are more effective, and it can avoid checksum-style pitfalls.

If you’d like to learn more, check out these resources:

  • Structure-Aware Fuzzing with libFuzzer. A great tutorial describing a few different ways to do exactly what it says on the tin.

  • Write Fuzzable Code by John Regehr. A nice collection of things you can do to make your code easier to fuzz and get the most out of your fuzzing.

  • cargo fuzz. A tool that makes using libFuzzer with Rust a breeze.

  • quickcheck. A port of the popular property-testing framework to Rust.

  • bufrng. A “random” number generator that yields pre-determined values from a buffer (e.g. the raw input generated by libFuzzer or AFL).

Finally, thanks to Jim Blandy and Jason Orendorff for reading an early draft of this blog post. This text is better because of their feedback, and any errors that remain are my own.

Post Script

After I originally shared this blog post, a few people made a few good points that I think are worth preserving and signal boosting here!

Rohan Padhye et al implemented this same idea for Java, and they wrote a paper about it: JQF: Coverage-Guided Property-Based Testing in Java

Manish Goregaokar pointed out that cargo-fuzz uses the arbitrary crate, which has its own Arbitrary trait that is different from quickcheck::Arbitrary. Notably, its paradigm is “generate an instance of Self from this buffer of bytes” rather than “generate an instance of Self from this RNG” like quickcheck::Arbitrary does. If you are starting from scratch, rather than reusing existing quickcheck tests, it likely makes sense to implement the arbitrary::Arbitrary trait instead of (or in addition to!) the quickcheck::Arbitrary trait, since using it with a coverage-guided fuzzer doesn’t require the BufRng trick.

John Regehr pointed out that this pattern works well when small mutations to the input string result in small changes to the generated structure. It works less well when a small change (e.g. at the start of the string) guides the generator to a completely different path, generating a completely different structure, which leads the fuzz target down an unrelated code path, and ultimately we aren’t really doing “proper” coverage-guided, mutation-based fuzzing anymore. Rohan Padhye reported that he experimented with ways to avoid this pitfall, but found that the medicine was worse than the affliction: the overhead of avoiding small changes in the prefix vastly changing the generated structure was greater than just running the fuzz target on the potentially very different generated structure.

One thing that I think would be neat to explore would be a way to avoid this problem by design: design an Arbitrary-style trait that instead of generating an arbitrary instance of Self, takes &mut self and makes an arbitrary mutation to itself as guided by an RNG. Maybe we would allow the caller to specify if the mutation should grow or shrink self, and then you could build test case reduction “for free” as well. Here is a quick sketch of what this might look like:

pub enum GrowOrShrink {
    // Make `self` bigger.
    Grow,
    // Make `self` smaller.
    Shrink,
}

trait ArbitraryMutation {
    // Mutate `self` with a random mutation to be either bigger
    // or smaller. Return `Ok` if successfully mutated, `Err`
    // if `self` can't get any bigger/smaller.
    fn arbitrary_mutation(
        &mut self,
        rng: &mut impl Rng,
        grow_or_shrink: GrowOrShrink,
    ) -> Result<(), ()>;
}

// Example implementation for `u64`.
impl ArbitraryMutation for u64 {
    fn arbitrary_mutation(
        &mut self,
        rng: &mut impl Rng,
        grow_or_shrink: GrowOrShrink,
    ) -> Result<(), ()> {
        match grow_or_shrink {
            GrowOrShrink::Grow if *self == u64::MAX => Err(()),
            GrowOrShrink::Grow => {
                *self = rng.gen_range(*self + 1, u64::MAX);
                Ok(())
            }
            GrowOrShrink::Shrink if *self == 0 => Err(()),
            GrowOrShrink::Shrink => {
                *self = rng.gen_range(0, *self - 1);
                Ok(())
            }
        }
    }
}

The Firefox FrontierRecommended Extensions program—where to find the safest, highest quality extensions for Firefox

Extensions can add powerful customization features to Firefox—everything from ad blockers and tab organizers to enhanced privacy tools, password managers, and more. With thousands of extensions to choose from—either those … Read more

The post Recommended Extensions program—where to find the safest, highest quality extensions for Firefox appeared first on The Firefox Frontier.

Mozilla Addons BlogMozilla’s Manifest v3 FAQ

What is Manifest v3?

Chrome versions the APIs they provide to extensions, and the current format is version 2. The Firefox WebExtensions API is nearly 100% compatible with version 2, allowing extension developers to easily target both browsers.

In November 2018, Google proposed an update to their API, which they called Manifest v3. This update includes a number of changes that are not backwards-compatible and will require extension developers to take action to remain compatible.

A number of extension developers have reached out to ask how Mozilla plans to respond to the changes proposed in v3. Following are answers to some of the frequently asked questions.

Why do these changes negatively affect content blocking extensions?

One of the proposed changes in v3 is to deprecate a very powerful API called blocking webRequest. This API gives extensions the ability to intercept all inbound and outbound traffic from the browser, and then block, redirect or modify that traffic.

In its place, Google has proposed an API called declarativeNetRequest. This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves. Extensions would still be able to use webRequest but only to observe requests, not to modify or block them.
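
To make the difference concrete, here is a rough sketch (not from the FAQ) of the kind of blocking listener a content blocker registers today under Manifest v2. It assumes the webRequest and webRequestBlocking permissions are declared, and looksLikeAnAd() is a hypothetical stand-in for a real blocker's filtering logic:

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Run the extension's own filtering logic on every request...
    if (looksLikeAnAd(details.url)) {
      return { cancel: true };  // ...and block the request outright.
    }
    return {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"]  // the "blocking" option is what Manifest v3 proposes to remove
);

Under declarativeNetRequest, that per-request callback goes away: the extension instead declares a bounded set of rules up front and the browser evaluates them itself.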

As a result, some content blocking extension developers have stated they can no longer maintain their add-on if Google decides to follow through with their plans. Those who do continue development may not be able to provide the same level of capability for their users.

Will Mozilla follow Google with these changes?

In the absence of a true standard for browser extensions, maintaining compatibility with Chrome is important for Firefox developers and users. Firefox is not, however, obligated to implement every part of v3, and our WebExtensions API already departs in several areas under v2 where we think it makes sense.

Content blocking: We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them.

Background service workers: Manifest v3 proposes the implementation of service workers for background processes to improve performance. We are currently investigating the impact of this change, what it would mean for developers, and whether there is a benefit in continuing to maintain background pages.

Runtime host permissions: We are evaluating the proposal in Manifest v3 to give users more granular control over the sites they give permissions to, and investigating ways to do so without too much interruption and confusion.

Cross-origin communication: In Manifest v3, content scripts will have the same permissions as the page they are injected in. We are planning to implement this change.

Remotely hosted code: Firefox already does not allow remote code as a policy. Manifest v3 includes a proposal for additional technical enforcement measures, which we are currently evaluating and intend to also enforce.

Will my extensions continue to work in Manifest v3?

Google’s proposed changes, such as the use of service workers in the background process, are not backwards-compatible. Developers will have to adapt their add-ons to accommodate these changes.

That said, the changes Google has proposed are not yet stabilized. Therefore, it is too early to provide specific guidance on what to change and how to do so. Mozilla is waiting for more clarity and has begun investigating the effort needed to adapt.

We will provide ongoing updates about changes necessary on the add-ons blog.

What is the timeline for these changes?

Given Manifest v3 is still in the draft and design phase, it is too early to provide a specific timeline. We are currently investigating what level of effort is required to make the changes Google is proposing, and identifying where we may depart from their plans.

Later this year we will begin experimenting with the changes we feel have a high chance of being part of the final version of Manifest v3, and that we think make sense for our users. Early adopters will have a chance to test our changes in the Firefox Nightly and Beta channels.

Once Google has finalized their v3 changes and Firefox has implemented the parts that make sense for our developers and users, we will provide ample time and documentation for extension developers to adapt. We do not intend to deprecate the v2 API before we are certain that developers have a viable path forward to migrate to v3.

Keep your eyes on the add-ons blog for updates regarding Manifest v3 and some of the other work our team is up to. We welcome your feedback on our community forum.


Hacks.Mozilla.OrgFirefox 69 — a tale of Resize Observer, microtasks, CSS, and DevTools

For our latest excellent adventure, we’ve gone and cooked up a new Firefox release. Version 69 features a number of nice new additions including JavaScript public instance fields, the Resize Observer and Microtask APIs, CSS logical overflow properties (e.g. overflow-block), and @supports for selectors.

We will also look at highlights from the raft of new debugging features in the Firefox 69 DevTools, including console message grouping, event listener breakpoints, and text label checks.

This blog post provides merely a set of highlights; for all the details, check out the following:

The new CSS

Firefox 69 supports a number of new CSS features; the most interesting are as follows.

New logical properties for overflow

69 sees support for some new logical properties — overflow-block and overflow-inline — which control the overflow of an element’s content in the block or inline dimension respectively.

These properties map to overflow-x or overflow-y, depending on the content’s writing-mode. Using these new logical properties instead of overflow-x and overflow-y makes your content easier to localize, especially when adapting it to languages using a different writing direction. They can also take the same values — visible, hidden, scroll, auto, etc.

Note: Look at Handling different text directions if you want to read up on these concepts.

@supports for selectors

The @supports at-rule has long been very useful for selectively applying CSS only when a browser supports a particular property, or doesn’t support it.

Recently this functionality has been extended so that you can apply CSS only if a particular selector is or isn’t supported. The syntax looks like this:

@supports selector(selector-to-test) {
  /* insert rules here */
}
We are supporting this functionality by default in Firefox 69 onwards. Find some usage examples here.

JavaScript gets public instance fields

The most interesting addition we’ve had to the JavaScript language in Firefox 69 is support for public instance fields in JavaScript classes. This allows you to specify properties you want the class to have up front, making the code more logical and self-documenting, and the constructor cleaner. For example:

class Product {
  tax = 0.2;
  basePrice = 0;

  constructor(name, basePrice) {
    this.name = name;
    this.basePrice = basePrice;
    this.price = (basePrice * (1 + this.tax));
  }
}
Notice that you can include default values if wished. The class can then be used as you’d expect:

let bakedBeans = new Product('Baked Beans', 0.59);
console.log(`${bakedBeans.name} cost $${bakedBeans.price}.`);

Private instance fields (which can’t be set or referenced outside the class definition) are very close to being supported in Firefox, and also look to be very useful. For example, we might want to hide the details of the tax and base price. Private fields are indicated by a hash symbol in front of the name:

#tax = 0.2;
#basePrice = 0;
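
As a quick illustration (not from the original article), a private field can be read and written freely inside the class body, but any reference from outside it is a syntax error:

class Product {
  #tax = 0.2;

  priceWithTax(basePrice) {
    return basePrice * (1 + this.#tax);  // fine: accessed inside the class body
  }
}

new Product().#tax;  // SyntaxError: private fields can't be referenced outside their class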

The wonder of WebAPIs

There are a couple of new WebAPIs enabled by default in Firefox 69. Let’s take a look.

Resize Observer

Put simply, the Resize Observer API allows you to easily observe and respond to changes in the size of an element’s content or border box. It provides a JavaScript solution to the often-discussed lack of “element queries” in the web platform.

A simple, trivial example might be something like the following (resize-observer-border-radius.html, see the source also), which adjusts the border-radius of a <div> as it gets smaller or bigger:

const resizeObserver = new ResizeObserver(entries => {
  for (let entry of entries) {
    if(entry.contentBoxSize) {
      entry.target.style.borderRadius = Math.min(100, (entry.contentBoxSize.inlineSize/10) +
                                                       (entry.contentBoxSize.blockSize/10)) + 'px';
    } else {
      entry.target.style.borderRadius = Math.min(100, (entry.contentRect.width/10) +
                                                       (entry.contentRect.height/10)) + 'px';
    }
  }
});


“But you can just use border-radius with a percentage”, I hear you cry. Well, sort of. But that quickly leads to ugly-looking elliptical corners, whereas the above solution gives you nice square corners that scale with the box size.

Another, slightly less trivial example is the following (resize-observer-text.html, see the source also):

if(window.ResizeObserver) {
  const h1Elem = document.querySelector('h1');
  const pElem = document.querySelector('p');
  const divElem = document.querySelector('body > div');
  const slider = document.querySelector('input');

  // note: the left-hand sides below are reconstructed (the div's width
  // and the h1/p font sizes); they were lost from the original snippet
  divElem.style.width = '600px';

  slider.addEventListener('input', () => {
    divElem.style.width = slider.value + 'px';
  });

  const resizeObserver = new ResizeObserver(entries => {
    for (let entry of entries) {
        if(entry.contentBoxSize) {
            h1Elem.style.fontSize = Math.max(1.5, entry.contentBoxSize.inlineSize/200) + 'rem';
            pElem.style.fontSize = Math.max(1, entry.contentBoxSize.inlineSize/600) + 'rem';
        } else {
            h1Elem.style.fontSize = Math.max(1.5, entry.contentRect.width/200) + 'rem';
            pElem.style.fontSize = Math.max(1, entry.contentRect.width/600) + 'rem';
        }
    }
  });

  resizeObserver.observe(divElem);
}

Here we use the resize observer to change the font-size of a header and paragraph as a slider’s value is changed, causing the containing <div> to change its width. This shows that you can respond to changes in an element’s size, even if they have nothing to do with the viewport size changing.

So to summarise, Resize Observer opens up a wealth of new responsive design work that was difficult with CSS features alone. We’re even using it to implement a new responsive version of our new DevTools JavaScript console!


The Microtasks API provides a single method — queueMicrotask(). This is a low-level method that enables us to directly schedule a callback on the microtask queue. This schedules code to be run immediately before control returns to the event loop, so you are assured a reliable running order (using setTimeout(() => {}, 0), for example, can give unreliable results).

The syntax is as simple to use as other timing functions:

self.queueMicrotask(() => {
  // function contents here
});
The use cases are subtle, but make sense when you read the explainer section in the spec. The biggest benefactors here are framework vendors, who like lower-level access to scheduling. Using this will reduce hacks and make frameworks more predictable cross-browser.
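
As a quick illustration (not from the original article) of that ordering guarantee, a microtask runs after the currently executing script finishes but before anything queued with setTimeout():

console.log('script');

setTimeout(() => console.log('timeout'), 0);          // a task: runs on a later turn of the event loop

self.queueMicrotask(() => console.log('microtask'));  // runs before control returns to the event loop

// Logs: "script", then "microtask", then "timeout".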

Developer tools updates in 69

There are various interesting additions to the DevTools in 69, so be sure to go and check them out!

Event breakpoints and async functions in the JS debugger

The JavaScript debugger has some cool new features for stepping through and examining code:

New remote debugging

In the new shiny about:debugging page, you’ll find a grouping of options for remotely debugging devices, with more to follow in the future. In 69, we’ve enabled a new mechanism for allowing you to remotely debug other versions of Firefox, on the same machine or other machines on the same network (see Network Location).

Console message grouping

In the console, we now group together similar error messages, with the aim of making the console tidier, spamming developers less, and making them more likely to pay attention to the messages. In turn, this can have a positive effect on security/privacy.

The new console message grouping looks like this, when in its initial closed state:

When you click the arrow to open the message list, it shows you all the individual messages that are grouped:

Initially the grouping occurs on CSP, CORS, and tracking protection errors, with more categories to follow in the future.

Flex information in the picker infobar

Next up, we’ll have a look at the Page inspector. When using the picker, or hovering over an element in the HTML pane, the infobar for the element now shows when it is a flex container, item, or both.

website nav menu with infobar pointing out that it is a flex item

See this page for more details.

Text Label Checks in the Accessibility Inspector

A final great feature to mention is the new text label checks feature of the Accessibility Inspector.

When you choose Check for issues > Text Labels from the dropdown box at the top of the accessibility inspector, it marks all the nodes in the accessibility tree with a warning sign if it is missing a descriptive text label. The Checks pane on the right hand side then gives a description of the problem, along with a Learn more link that takes you to more detailed information available on MDN.

WebExtensions updates

Last but not least, let’s give a mention to WebExtensions! The main feature to make it into Firefox 69 is User Scripts — these are a special kind of extension content script that, when registered, instruct the browser to insert the given scripts into pages that match the given URL patterns.

See also

In this post we’ve reviewed the main web platform features added in Firefox 69. You can also read up on the main new features of the Firefox browser — see the Firefox 69 Release Notes.

The post Firefox 69 — a tale of Resize Observer, microtasks, CSS, and DevTools appeared first on Mozilla Hacks - the Web developer blog.

Botond BalloA case study in analyzing C++ compiler errors: why is the compiler trying to copy my move-only object?

Recently a coworker came across a C++ compiler error message that seemed baffling, as they sometimes tend to be.

We figured it out together, and in the hope of perhaps saving some others from being stuck on it too long, I thought I’d describe it.

The code pattern that triggers the error can be distilled down into the following:

#include <utility>  // for std::move

// A type that's move-only.
struct MoveOnly {
  MoveOnly() = default;
  // copy constructor deleted
  MoveOnly(const MoveOnly&) = delete;  
  // move constructor defaulted or defined
  MoveOnly(MoveOnly&&) = default;
};

// A class with a MoveOnly field.
struct S {
  MoveOnly field;
};

// A function that tries to move objects of type S
// in a few contexts.
void foo() {
  S obj;
  // move it into a lambda
  [obj = std::move(obj)]() {    
    // move it again inside the lambda
    S moved = std::move(obj);
  };
}

The error is:

test.cpp: In lambda function:
test.cpp:19:28: error: use of deleted function ‘S::S(const S&)’
   19 |     S moved = std::move(obj);
      |                            ^
test.cpp:11:8: note: ‘S::S(const S&)’ is implicitly deleted because the default definition would be ill-formed:
   11 | struct S {
      |        ^
test.cpp:11:8: error: use of deleted function ‘MoveOnly::MoveOnly(const MoveOnly&)’
test.cpp:6:3: note: declared here
    6 |   MoveOnly(const MoveOnly&) = delete;
      |   ^~~~~~~~

The reason the error is baffling is that we’re trying to move an object, but getting an error about a copy constructor being deleted. The natural reaction is: “Silly compiler. Of course the copy constructor is deleted; that’s by design. Why aren’t you using the move constructor?”

The first thing to remember here is that deleting a function using = delete does not affect overload resolution. Deleted functions are candidates in overload resolution just like non-deleted functions, and if a deleted function is chosen by overload resolution, you get a hard error.

Any time you see an error of the form “use of deleted function F”, it means overload resolution has already determined that F is the best candidate.
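A tiny, self-contained illustration of that rule (the Widget type and its members are invented for this example):

struct Widget {
  void f(int) {}
  void f(double) = delete;
};

void demo(Widget w) {
  w.f(42);      // fine: f(int) is the best match
  // w.f(3.14); // error if uncommented: overload resolution still picks
  //            // f(double), and selecting a deleted function is a hard error
}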

In this case, the error suggests S’s copy constructor is a better candidate than S’s move constructor, for the initialization S moved = std::move(obj);. Why might that be?

To reason about the overload resolution process, we need to know the type of the argument, std::move(obj). In turn, to reason about that, we need to know the type of obj.

That’s easy, right? The type of obj is S. It’s right there: S obj;.

Not quite! There are actually two variables named obj here. S obj; declares one in the local scope of foo(), and the capture obj = std::move(obj) declares a second one, which becomes a field of the closure type the compiler generates for the lambda expression. Let’s rename this second variable to make things clearer and avoid the shadowing:

// A function that tries to move objects of type S in a few contexts.
void foo() {
  S obj;
  // move it into a lambda
  [capturedObj = std::move(obj)]() {    
    // move it again inside the lambda
    S moved = std::move(capturedObj);
  };
}

We can see more clearly now, that in std::move(capturedObj) we are referring to the captured variable, not the original.

So what is the type of capturedObj? Surely, it’s the same as the type of obj, i.e. S?

The type of the closure type’s field is indeed S, but there’s an important subtlety here: by default, the closure type’s call operator is const. The lambda’s body becomes the body of the closure’s call operator, so inside it, since we’re in a const method, the type of capturedObj is const S!
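To make that concrete, here is a rough, hand-written sketch of the kind of closure type the compiler generates for this lambda (the name Closure is invented; the real generated type is unnamed):

struct Closure {
  S capturedObj;

  // No 'mutable' on the lambda, so the generated call operator is const.
  void operator()() const {
    // Inside a const member function, capturedObj behaves as a const S,
    // so std::move(capturedObj) yields a const S&&.
    S moved = std::move(capturedObj);  // error: the deleted copy constructor is chosen
  }
};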

At this point, people usually ask, “If the type of capturedObj is const S, why didn’t I get a different compiler error about trying to std::move() a const object?”

The answer to this is that std::move() is somewhat unfortunately named. It doesn’t actually move the object; it just casts it to a type that will match the move constructor.

Indeed, if we look at the standard library’s implementation of std::move(), it’s something like this:

template <typename T>
typename std::remove_reference<T>::type&& move(T&& t) {
    return static_cast<typename std::remove_reference<T>::type&&>(t);
}

As you can see, all it does is cast its argument to an rvalue reference type.

So what happens if we call std::move() on a const object? Let’s substitute T = const S into that return type to see what we get: const S&&. It works, we just get an rvalue reference to a const.

Thus, const S&& is the argument type that gets used as the input to choosing a constructor for S, and this is why the copy constructor is chosen (the move constructor is not a match at all, because binding a const S&& argument to an S&& parameter would violate const-correctness).
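You can see the same binding behaviour in isolation with a small test type that reports which constructor runs (Probe is invented for this demo):

#include <cstdio>
#include <utility>

struct Probe {
  Probe() {}
  Probe(const Probe&) { std::puts("copy constructor"); }
  Probe(Probe&&) { std::puts("move constructor"); }
};

int main() {
  const Probe p;
  // std::move(p) has type const Probe&&, which cannot bind to Probe&&,
  // so overload resolution falls back to the copy constructor.
  Probe a = std::move(p);  // prints "copy constructor"
}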

An interesting question to ask is, why is std::move() written in a way that it compiles when passed a const argument? The answer is that it’s meant to be usable in generic code, where you don’t know the type of the argument, and want it to behave on a best-effort basis: move if it can, otherwise copy. Perhaps there is room in the language for another utility function, std::must_move() or something like that, which only compiles if the argument is actually movable.
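Nothing like that exists in the standard library today, but a rough sketch of the idea (must_move is a hypothetical name, and this version only rejects the const case discussed here) could look like this:

#include <type_traits>

template <typename T>
constexpr std::remove_reference_t<T>&& must_move(T&& t) noexcept {
  // Refuse to compile when the argument is const, where a "move" would
  // silently degrade into a copy. (A fuller version might also check
  // that the type is actually move-constructible.)
  static_assert(!std::is_const_v<std::remove_reference_t<T>>,
                "must_move() called on a const object; it would be copied, not moved");
  return static_cast<std::remove_reference_t<T>&&>(t);
}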

Finally, how do we solve our error? The root of our problem is that capturedObj is const because the lambda’s call operator is const. We can get around this by declaring the lambda as mutable:

void foo() {
  S obj;
  [capturedObj = std::move(obj)]() mutable {
    S moved = std::move(capturedObj);
  };
}

which makes the lambda’s call operator not be const, and all is well.

The Mozilla BlogToday’s Firefox Blocks Third-Party Tracking Cookies and Cryptomining by Default

Today, Firefox on desktop and Android will — by default — empower and protect all our users by blocking third-party tracking cookies and cryptominers. This milestone marks a major step in our multi-year effort to bring stronger, usable privacy protections to everyone using Firefox.

Firefox’s Enhanced Tracking Protection gives users more control

For today’s release, Enhanced Tracking Protection will automatically be turned on by default for all users worldwide as part of the ‘Standard’ setting in the Firefox browser and will block known “third-party tracking cookies” according to the Disconnect list. We first enabled this default feature for new users in June 2019. As part of this journey we rigorously tested, refined, and ultimately landed on a new approach to anti-tracking that is core to delivering on our promise of privacy and security as central aspects of your Firefox experience.

Currently over 20% of Firefox users have Enhanced Tracking Protection on. With today’s release, we expect to provide protection for 100% of our users by default. Enhanced Tracking Protection works behind the scenes to keep a company from forming a profile of you based on their tracking of your browsing behavior across websites — often without your knowledge or consent. Those profiles and the information they contain may then be sold and used for purposes you never knew or intended. Enhanced Tracking Protection helps to mitigate this threat and puts you back in control of your online experience.

You’ll know when Enhanced Tracking Protection is working when you visit a site and see a shield icon in the address bar:


When you see the shield icon, you should feel safe that Firefox is blocking thousands of companies from your online activity.

For those who want to see which companies we block, you can click on the shield icon, go to the Content Blocking section, then Cookies. It should read Blocking Tracking Cookies. Then, click on the arrow on the right hand side, and you’ll see the companies listed as third party cookies that Firefox has blocked:

If you want to turn off blocking for a specific site, click on the Turn off Blocking for this Site button.

Protecting users’ privacy beyond tracking cookies

Cookies are not the only entities that follow you around on the web, trying to use what’s yours without your knowledge or consent. Cryptominers, for example, access your computer’s CPU, ultimately slowing it down and draining your battery, in order to generate cryptocurrency — not for your benefit but for someone else’s. We introduced the option to block cryptominers in previous versions of Firefox Nightly and Beta and are including it in the ‘Standard Mode’ of your Content Blocking preferences as of today.

Another type of script that you may not want to run in your browser is the fingerprinting script. Fingerprinting scripts harvest a snapshot of your computer’s configuration when you visit a website. The snapshot can then also be used to track you across the web, an issue that has been present for years. To get protection from fingerprinting scripts, Firefox users can turn on ‘Strict Mode.’ In a future release, we plan to turn fingerprinting protections on by default.

Also in today’s Firefox release

To see what else is new or what we’ve changed in today’s release, you can check out our release notes.

Check out and download the latest version of Firefox available here.

The post Today’s Firefox Blocks Third-Party Tracking Cookies and Cryptomining by Default appeared first on The Mozilla Blog.

Mozilla Cloud Services BlogA New Policy for Mozilla Location Service

Several years ago we started a geolocation experiment called the Mozilla Location Service (MLS) to create a location service built on open-source software and powered through crowdsourced location data. MLS provides geolocation lookups based on publicly observable cell tower and WiFi access point information. MLS has served the public interest by providing location information to open-source operating systems, research projects, and developers.

Today Mozilla is announcing a policy change regarding MLS. Our new policy will impose limits on commercial use of MLS. Mozilla has not made this change by choice. Skyhook Holdings, Inc. contacted Mozilla some time ago and alleged that MLS infringed a number of its patents. We subsequently reached an agreement with Skyhook that avoids litigation. While the terms of the agreement are confidential, we can tell you that the agreement exists and that our MLS policy change relates to it. We can also confirm that this agreement does not change the privacy properties of our service: Skyhook does not receive location data from Mozilla or our users.

Our new policy preserves the public interest heart of the MLS project. Mozilla has never offered any commercial plans for MLS and had no intention to do so. Only a handful of entities have made use of MLS API Query keys for commercial ventures. Nevertheless, we regret having to impose new limits on MLS. Sometimes companies have to make difficult choices that balance the massive cost and uncertainty of patent litigation against other priorities.

Mozilla has long argued that patents can work to inhibit, rather than promote, innovation. We continue to believe that software development, and especially open-source software, is ill-served by the patent system. Mozilla endeavors to be a good citizen with respect to patents. We offer a free license to our own patents under the Mozilla Open Software Patent License Agreement. We will also continue our advocacy for a better patent system.

Under our new policy, all users of MLS API Query keys must apply.  Non-commercial users (such as academic, public interest, research, or open-source projects) can request an MLS API Query key capped at a daily usage limit of 100,000. This limit may be increased on request. Commercial users can request an MLS API Query key capped at a daily usage limit of 100,000. The daily limit cannot be increased for commercial uses and those keys will expire after 3 months. In effect, commercial use of MLS will now be of limited duration and restricted in volume.

Existing keys will expire on March 1, 2020. We encourage non-commercial users to re-apply for continued use of the service. Keys for a small number of commercial users that have been exceeding request limits will expire sooner. We will reach out to those users directly.

Location data and services are incredibly valuable in today’s connected world. We will continue to provide an open-source and privacy respecting location service in the public interest. You can help us crowdsource data by opting-in to the contribution option in our Android mobile browser.

This Week In RustThis Week in Rust 302

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is cargo-udeps, a cargo subcommand to find unused dependencies.

Thanks to Christopher Durham for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

214 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.


No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Threads are for working in parallel, async is for waiting in parallel.

ssokolow on /r/rust

Thanks to Philipp Oppermann for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Cameron KaiserThe deformed yet thoughtful offspring of AppleScript and Greasemonkey

Ah, AppleScript. I can't be the only person who's thinking Apple plans to replace AppleScript with Swift because it's not new and sexy anymore. And it certainly has its many rough edges and Apple really hasn't done much to improve this, which are clear signs it's headed for a room-temperature feet-first exit.

But, hey! If you're using TenFourFox, you're immune to Apple's latest self-stimulatory bright ideas. And while I'm trying to make progress on TenFourFox's various deficiencies, you still have the power to make sites work the way you want thanks to TenFourFox's AppleScript-to-JavaScript "bridge." The bridge lets you run JavaScript within the page and sample or expose data back to AppleScript. With AppleScript's other great powers, like even running arbitrary shell scripts, you can connect TenFourFox to anything else on the other end with AppleScript.

Here's a trivial example. Go to any Github wiki page, like, I dunno, the one for TenFourFox's AppleScript support. If there's a link there for more wiki entries, go ahead and click on it. It doesn't work (because of issue 521). Let's fix that!

You can either cut and paste the below examples directly into Script Editor and click the Run button to run them, or you can cut and paste them into a text file and run them from the command line with osascript filename, or you can be a totally lazy git and just download them from SourceForge. Unzip them and double click the file to open them in Script Editor.

In the below examples, change TenFourFoxWhatever to the name of your TenFourFox executable (TenFourFoxG5, etc.). Every line highlighted in the same colour is a continuous line. Don't forget the double quotes!

Here's the script for Github's wiki.

tell application "TenFourFoxWhatever"
    tell current tab of front browser window to run JavaScript "
    // Don't run if not on github wiki.
    if (!window.location.href.match(/\\/\\/github.com\\//i) ||
        !window.location.href.match(/\\/wiki\\//)) {
        window.alert('not a Github wiki page');
    } else {
        // Display the hidden links.
        let nwiki=document.getElementById('wiki-pages-box').getElementsByClassName('wiki-more-pages');
        while (nwiki.length > 0) {
            // The collection is live, so unhiding an entry also removes it
            // from the collection and the loop eventually ends.
            nwiki.item(0).classList.remove('wiki-more-pages');
        }
        // Remove the 'more pages' link (should be only one)
        let jwiki=document.getElementById('wiki-pages-box').getElementsByClassName('wiki-more-pages-link');
        if (jwiki.length > 0)
            jwiki.item(0).style.display = 'none';
    }
    "
end tell

Now, with the current tab on any Github wiki page, run the script. Poof! More links! (If you run it on a page that isn't Github, it will give you an error box.)

Most of you didn't care about that. Some of you use your Power Macs for extremely serious business like YouTube videos. I ain't judging. Let me help you get rid of the crap, starting with Weird Al's anthem to alumin(i)um foil.

With comments in the five figures from every egoist fruitbat on the interwebs with an opinion on Weird Al, that's gonna take your poor Power Mac a while to process. Plus all those suggested videos! Let's make those go away!

tell application "TenFourFoxWhatever"
    tell current tab of front browser window to run JavaScript "
    // Don't run if not on youtube.
    if (!window.location.href.match(/youtube\\.com\\//i) ||
        !window.location.href.match(/\\/watch/)) {
        window.alert('not a YouTube video page');
    } else {
        // Remove secondary column and comments.
        // Wrap in try blocks in case the elements don't exist yet.
        try {
            document.getElementById('secondary').innerHTML = '';
            document.getElementById('secondary').style.display = 'none';
        } catch(e) { }
        try {
            document.getElementById('comments').innerHTML = '';
            document.getElementById('comments').style.display = 'none';
        } catch(e) { }
    }
    "
end tell

This script not only makes those areas invisible, it even nukes their internal contents. This persists from video to video unless you reload the page.

As an interesting side effect, you'll notice that the video area scaled to meet the new width of the browser, but the actual video didn't. I consider this a feature rather than a bug because the browser isn't trying to enable a higher-resolution stream or scale up the video for display, so the video "just plays better." Just make sure you keep the mouse pointer out of this larger area or the browser will now have to composite the playback controls.

You can add things to a page, too, instead of just taking things away. Issue 533 has been one of our long-playing issues which has been a particular challenge because it requires a large parser update to fix. Fortunately, Webpack has been moving away from uglify and as sites upgrade their support (Citibank recently did so), this problem should start disappearing. Unfortunately UPS doesn't upgrade as diligently, so right now you can't track packages with TenFourFox from the web interface; you just get this:

Let's fix it! This script is a little longer, so you will need to download it. Here are the major guts though:

    // Attempt to extract tracking number.
    let results = window.location.href.match(
    if (!results || results.length != 3) {
        window.alert('Unable to determine UPS tracking number.');
    // Construct payload.
    let locale = results[1];
    let tn = results[2];
    let payload = JSON.stringify({'Locale':locale,'TrackingNumber':[tn]});

A bit of snooping on UPS's site from the Network tab in Firefox 69 on my Talos II shows that it uses an internal JSON API. We can inject script to complete the request that TenFourFox can't yet make. Best of all, it will look to UPS like it's coming from inside the house ... er, from inside the browser ... because it is. Even the cookies are passed. When we get the JSON response back, we can process that and display it:

    // For each element, display delivery date and status.
    // You can add more fields here from the JSON.
    output.innerHTML = '';
    data.trackDetails.forEach(function(pkg) {
        output.innerHTML += (pkg.trackingNumber+' '+
            pkg.packageStatus+' '+

So we just hit Run on the script, and ...

... my package arrives tomorrow.

Some of you will remember a related concept in Classilla called stelae, which were scraps of JavaScript that would automatically execute when you browse to a site the stela covers. I chose not to implement those in precisely that fashion in TenFourFox because the check imposes its own overhead on the browser on every request, and manipulating the DOM is a lot easier (and in TenFourFox, a lot more profitable) than manipulating the actual raw HTML and CSS that a stela does (and Classilla, because of its poorer DOM support, must). Plus, by being AppleScript, you can run them from anywhere at any time (even from the command line), including the very-convenient ever-present-if-you-enable-it Script menu, and they run only when you actually run them.

The next question I bet some of you will ask is, that's all fine for you because you're a super genius™, but how do us hoi polloi know the magic JavaScript incantations to chant? I wrote these examples to give you general templates. If you want to make a portion of the page disappear, you can work with the YouTube example and have a look with TenFourFox's built-in Inspector to find the ID or class of the element to nuke. Then, getElementById('x') will find the element with id="x", or getElementsByClassName('y') will find all elements with y in their class="..." (see the Github example). Make those changes and you can basically make it work. Remove the block limiting it to certain URLs if you don't care about it. If you do it wrong, look at the Browser Console window for the actual error message from JavaScript if you get an error back.

For adding functionality, though, this requires looking at what Firefox does on a later system. On my Talos II I had the Network tab in the Inspector open and ran a query for the tracking number and was able to see what came back, and then compared it with what TenFourFox was doing to find what was missing. I then simulated the missing request. This took about 15 minutes to do, granted given that I understood what was going on, but the script will still give you a template for how to do these kinds of specialized requests. (Be careful, though, about importing data from iffy sites that could be hacked or violating the same-origin policy. The script bridge has special privileges and assumes you know what you're doing.) Or, if you need more fields than the UPS script is providing, just look at the request the AppleScript sends and study the JSON object the response passes back, then add the missing fields you want to the block above. Tinker with the formatting. Sex it up a bit. It's your browser!

One last note. You will have noticed the scripts in the screen shot (and the ones you download) look a little different. That's because they use a little magic to figure out what TenFourFox you're actually running. It looks like this:

set tenfourfox to do shell script "ps x | perl -ne '/(TenFourFox[^.]+)\.app/ && print($1) && exit 0'"
if {tenfourfox starts with "TenFourFox"} then
    tell application tenfourfox
        tell «class pCTb» of front «class BWin» to «event 104FxrJS» given «class jscp»:"

This group of commands runs a quick script through Perl to find the first TenFourFox instance running (another reason to start TenFourFox before running foxboxes). However, because we dynamically decide the application we'll send AppleEvents to (i.e., "tell-by-variable"), the Script Editor doesn't have a dictionary available, so we have to actually provide the raw classes and events the dictionary would ordinarily map to. Otherwise it is exactly identical to tell current tab of front browser window to run JavaScript " and this is actually the true underlying AppleEvent that gets sent to TenFourFox. If TenFourFox isn't actually found, then we can give you a nice error message instead of the annoying "Where is ?" window that AppleScript will give you for orphaned events. Again, if you don't want to type these scripts in, grab them here.

No, I'm not interested in porting this to mainline Firefox, but the source code is in our tree if someone else wants to. At least until Apple decides that all other scripting languages than the One True Swift Language, even AppleScript, must die.

Cameron KaiserTenFourFox FPR16 available

TenFourFox Feature Parity Release 16 final is now available for testing (downloads, hashes, release notes). This final version has a correctness fix to the VMX text fragment scanner found while upstreaming it to mainline Firefox for the Talos II, as well as minor outstanding security updates. Assuming no issues, it will become live on Monday afternoon-evening Pacific time (because I'm working on Labor Day).

Firefox NightlyThese Weeks in Firefox: Issue 63


Mockup of the new warning UI on the about:addons page

  • The new DOM Mutation Breakpoint feature allows the user to pause JS execution when the DOM structure changes. It’s now available behind a flag and ready for testing in Nightly! You need to flip the two following prefs to enable it:
    • devtools.debugger.features.dom-mutation-breakpoints
    • devtools.markup.mutationBreakpoints.enabled
Mockup of the new context menu entry to create DOM breakpoint

Use the Inspector panel context menu to create new DOM Mutation Breakpoints

Mockup of the DOM breakpoint list in a side pane of the debugger

Use the debugger side panel to see existing DOM Mutation Breakpoints.

  • Inline Previews in the Debugger – showing actual variable values inline. This feature is also available behind a flag and ready for testing. You need to flip the following pref to enable it:
    • devtools.debugger.features.inline-preview
Inline preview of variable values in the debugger showcased on example code.

Note the inline boxes showing actual variable values


Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug

  • Arun Kumar Mohan
  • Dennis van der Schagt [:dennisschagt]
  • Dhyey Thakore [:dhyey35]
  • Florens Verschelde :fvsch
  • Heng Yeow (:tanhengyeow)
  • Itiel
  • Krishnal Ciccolella
  • Megha [ :meghaaa ]
  • Priyank Singh

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Mockup of the extension storage view in the storage tab of the devtools

Developer Tools

  • Disable debugger; statement – It is now possible to disable a debugger statement via a false-conditional breakpoint created at the same location (bug).
  • WebSocket Inspection (GSoC project) – This feature is enabled on Nightly by default (devtools.netmonitor.features.webSockets)
    • Supported protocols: Socket.IO, SockJS, plain JSON (and MQTT soon)
    • Filtering of WS frames is supported (by type & by free text)
    • A test page

Mockup of the new view in the devtools network tab showing websocket messages

  • Heads Up: Search across all files/cookies/headers in the Network panel coming soon! (Outreachy)
Perf Tools
  • JavaScript Allocations – tracking for JS allocations has been added. It’s still experimental, but you can enable it by checking “JS Allocations” in the recording panel’s feature flags.

JavaScript allocations shown in a tree view in the Devtools Performance tab.


Accessibility Panel
  • A new option to auto-scroll to the selected node has been added to the panel tree

Screenshot showing a new "Scroll into view" option in the settings context menu of the Accessibility Panel of the Devtools.



  • ESLint has been updated to 6.1.0 (from 5.16.0). This helped us with a few things:
    • Running --fix over the entire tree should now work for pretty much everything that’s auto-fixable. Previously it would stop working at the `js/` directories, and then you’d have to run it manually on the remaining sub-directories (e.g. services, toolkit, etc.).
    • If you have an /* global foo */ statement for ESLint and foo is unused, it will now throw an error.
    • There’s also now a new rule enabled on most of the tree: no-async-promise-executor. This aims to avoid issues with having asynchronous functions within new Promise()
  • ESLint will also be upgraded to 6.2.2 within the next couple of days. The main addition here is support for BigInt and dynamic imports.

New Tab Page

  • In Firefox 69 we’re launching “Discovery Stream”, a newtab experience with more Pocket content. We’re doing this with a rollout on Sept 10th.

Password Manager




Search and Navigation

User Journey

Uplifted to 69 beta: Moments page, New Monitor snippet, Extended Triplets

About:CommunityFirefox 69 new contributors

With the release of Firefox 69, we are pleased to welcome the 50 developers who contributed their first code change to Firefox in this release, 39 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Hacks.Mozilla.OrgThe Baseline Interpreter: a faster JS interpreter in Firefox 70


Modern web applications load and execute a lot more JavaScript code than they did just a few years ago. While JIT (just-in-time) compilers have been very successful in making JavaScript performant, we needed a better solution to deal with these new workloads.

To address this, we’ve added a new, generated JavaScript bytecode interpreter to the JavaScript engine in Firefox 70. The interpreter is available now in the Firefox Nightly channel, and will go to general release in October. Instead of writing or generating a new interpreter from scratch, we found a way to do this by sharing most code with our existing Baseline JIT.

The new Baseline Interpreter has resulted in performance improvements, memory usage reductions and code simplifications. Here’s how we got there:

Execution tiers

In modern JavaScript engines, each function is initially executed in a bytecode interpreter. Functions that are called a lot (or perform many loop iterations) are compiled to native machine code. (This is called JIT compilation.)

Firefox has an interpreter written in C++ and multiple JIT tiers:

  • The Baseline JIT. Each bytecode instruction is compiled directly to a small piece of machine code. It uses Inline Caches (ICs) both as performance optimization and to collect type information for Ion.
  • IonMonkey (or just Ion), the optimizing JIT. It uses advanced compiler optimizations to generate fast code for hot functions (at the expense of slower compile times).

Ion JIT code for a function can be ‘deoptimized’ and thrown away for various reasons, for example when the function is called with a new argument type. This is called a bailout. When a bailout happens, execution continues in the Baseline code until the next Ion compilation.

Until Firefox 70, the execution pipeline for a very hot function looked like this:

Timeline showing C++ Interpreter, Baseline Compilation, Baseline JIT Code, Prepare for Ion, Ion JIT Code with an arrow (called bailout) from Ion JIT Code back to Baseline JIT Code


Although this works pretty well, we ran into the following problems with the first part of the pipeline (C++ Interpreter and Baseline JIT):

  1. Baseline JIT compilation is fast, but modern web applications like Google Docs or Gmail execute so much JavaScript code that we could spend quite some time in the Baseline compiler, compiling thousands of functions.
  2. Because the C++ interpreter is so slow and doesn’t collect type information, delaying Baseline compilation or moving it off-thread would have been a performance risk.
  3. As you can see in the diagram above, optimized Ion JIT code was only able to bail out to the Baseline JIT. To make this work, Baseline JIT code required extra metadata (the machine code offset corresponding to each bytecode instruction).
  4. The Baseline JIT had some complicated code for bailouts, debugger support, and exception handling. This was especially true where these features intersect!

Solution: generate a faster interpreter

We needed type information from the Baseline JIT to enable the more optimized tiers, and we wanted to use JIT compilation for runtime speed. However, the modern web has such large codebases that even the relatively fast Baseline JIT Compiler spent a lot of time compiling. To address this, Firefox 70 adds a new tier called the Baseline Interpreter to the pipeline:

Same timeline of execution tiers as before but now has the 'Baseline Interpreter' between C++ interpreter and Baseline compilation. The bailout arrow points to Baseline Interpreter instead of Baseline JIT Code.

The Baseline Interpreter sits between the C++ interpreter and the Baseline JIT and has elements from both. It executes all bytecode instructions with a fixed interpreter loop (like the C++ interpreter). In addition, it uses Inline Caches to improve performance and collect type information (like the Baseline JIT).

Generating an interpreter isn’t a new idea. However, we found a nice new way to do it by reusing most of the Baseline JIT Compiler code. The Baseline JIT is a template JIT, meaning each bytecode instruction is compiled to a mostly fixed sequence of machine instructions. We generate those sequences into an interpreter loop instead.

Sharing Inline Caches and profiling data

As mentioned above, the Baseline JIT uses Inline Caches (ICs) both to make it fast and to help Ion compilation. To get type information, the Ion JIT compiler can inspect the Baseline ICs.

Because we wanted the Baseline Interpreter to use exactly the same Inline Caches and type information as the Baseline JIT, we added a new data structure called JitScript. JitScript contains all type information and IC data structures used by both the Baseline Interpreter and JIT.

The diagram below shows what this looks like in memory. Each arrow is a pointer in C++. Initially, the function just has a JSScript with the bytecode that can be interpreted by the C++ interpreter. After a few calls/iterations we create the JitScript, attach it to the JSScript and can now run the script in the Baseline Interpreter.

As the code gets warmer we may also create the BaselineScript (Baseline JIT code) and then the IonScript (Ion JIT code).
JSScript (bytecode) points to JitScript (IC and profiling data). JitScript points to BaselineScript (Baseline JIT Code) and IonScript (Ion JIT code).

Note that the Baseline JIT data for a function is now just the machine code. We’ve moved all the inline caches and profiling data into JitScript.
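Sketched as plain C++ structs, the pointer structure in the diagram looks roughly like this (the types and fields are simplified stand-ins, not the actual SpiderMonkey definitions):

struct BaselineScript;  // Baseline JIT machine code
struct IonScript;       // Ion JIT machine code

struct JitScript {
  // Inline Cache and type/profiling data, shared by the Baseline
  // Interpreter and the Baseline JIT.
  BaselineScript* baselineScript = nullptr;  // created once the code gets warm
  IonScript* ionScript = nullptr;            // created for very hot code
};

struct JSScript {
  // Bytecode, runnable by the C++ interpreter from the start.
  JitScript* jitScript = nullptr;  // attached after a few calls/iterations
};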

Sharing the frame layout

The Baseline Interpreter uses the same frame layout as the Baseline JIT, but we’ve added some interpreter-specific fields to the frame. For example, the bytecode PC (program counter), a pointer to the bytecode instruction we are currently executing, is not updated explicitly in Baseline JIT code. It can be determined from the return address if needed, but the Baseline Interpreter has to store it in the frame.

Sharing the frame layout like this has a lot of advantages. We’ve made almost no changes to C++ and IC code to support Baseline Interpreter frames—they’re just like Baseline JIT frames. Furthermore, when the script is warm enough for Baseline JIT compilation, switching from Baseline Interpreter code to Baseline JIT code is a matter of jumping from the interpreter code into JIT code.

Sharing code generation

Because the Baseline Interpreter and JIT are so similar, a lot of the code generation code can be shared too. To do this, we added a templated BaselineCodeGen base class with two derived classes: BaselineCompiler, which generates Baseline JIT code, and BaselineInterpreterGenerator, which generates the Baseline Interpreter.

The base class has a Handler C++ template argument that can be used to specialize behavior for either the Baseline Interpreter or JIT. A lot of Baseline JIT code can be shared this way. For example, the implementation of the JSOP_GETPROP bytecode instruction (for a property access like obj.foo in JavaScript code) is shared code. It calls the emitNextIC helper method that’s specialized for either Interpreter or JIT mode.
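As a deliberately simplified illustration of that pattern (not the actual SpiderMonkey implementation; apart from the class names mentioned above, everything here is invented), the shared generator might be shaped like this:

struct CompilerHandler {
  static constexpr bool isInterpreter = false;
};
struct InterpreterHandler {
  static constexpr bool isInterpreter = true;
};

template <typename Handler>
class BaselineCodeGen {
 protected:
  // Emit the next Inline Cache call; how the IC is reached differs
  // between the two modes, so the behavior is keyed off the Handler.
  void emitNextIC() {
    if constexpr (Handler::isInterpreter) {
      // interpreter-mode IC emission (details omitted)
    } else {
      // JIT-mode IC emission (details omitted)
    }
  }

 public:
  // Shared implementation of a bytecode instruction such as JSOP_GETPROP.
  void emit_GetProp() {
    // ... shared code generation ...
    emitNextIC();
  }
};

class BaselineCompiler : public BaselineCodeGen<CompilerHandler> {};
class BaselineInterpreterGenerator : public BaselineCodeGen<InterpreterHandler> {};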

Generating the Interpreter

With all these pieces in place, we were able to implement the BaselineInterpreterGenerator class to generate the Baseline Interpreter! It generates a threaded interpreter loop: The code for each bytecode instruction is followed by an indirect jump to the next bytecode instruction.

For example, on x64 we currently generate the following machine code to interpret JSOP_ZERO (bytecode instruction to push a zero value on the stack):

// Push Int32Value(0).
movabsq $-0x7800000000000, %r11
pushq  %r11
// Increment bytecode pc register.
addq   $0x1, %r14
// Patchable NOP for debugger support.
nopl   (%rax,%rax)
// Load the next opcode.
movzbl (%r14), %ecx
// Jump to interpreter code for the next instruction.
leaq   0x432e(%rip), %rbx
jmpq   *(%rbx,%rcx,8)

When we enabled the Baseline Interpreter in Firefox Nightly (version 70) back in July, we increased the Baseline JIT warm-up threshold from 10 to 100. The warm-up count is determined by counting the number of calls to the function + the number of loop iterations so far. The Baseline Interpreter has a threshold of 10, same as the old Baseline JIT threshold. This means that the Baseline JIT has a lot less code to compile.


Performance and memory usage

After this landed in Firefox Nightly our performance testing infrastructure detected several improvements:

  • Various 2-8% page load improvements. A lot happens during page load in addition to JS execution (parsing, style, layout, graphics). Improvements like this are quite significant.
  • Many devtools performance tests improved by 2-10%.
  • Some small memory usage wins.

Note that we’ve landed more performance improvements since this first landed.

To measure how the Baseline Interpreter’s performance compares to the C++ Interpreter and the Baseline JIT, I ran Speedometer and Google Docs on Windows 10 64-bit on Mozilla’s Try server and enabled the tiers one by one (the following numbers reflect the best of 7 runs):
C++ Interpreter 901 ms, + Baseline Interpreter 676 ms, + Baseline JIT 633 ms
On Google Docs we see that the Baseline Interpreter is much faster than just the C++ Interpreter. Enabling the Baseline JIT too makes the page load only a little bit faster.

On the Speedometer benchmark we get noticeably better results when we enable the Baseline JIT tier. The Baseline Interpreter does again much better than just the C++ Interpreter:
C++ Interpreter 31 points, + Baseline Interpreter 52 points, + Baseline JIT 69 points
We think these numbers are great: the Baseline Interpreter is much faster than the C++ Interpreter and its start-up time (JitScript allocation) is much faster than Baseline JIT compilation (at least 10 times faster).


After this all landed and stuck, we were able to simplify the Baseline JIT and Ion code by taking advantage of the Baseline Interpreter.

For example, deoptimization bailouts from Ion now resume in the Baseline Interpreter instead of in the Baseline JIT. The interpreter can re-enter Baseline JIT code at the next loop iteration in the JS code. Resuming in the interpreter is much easier than resuming in the middle of Baseline JIT code. We now have to record less metadata for Baseline JIT code, so Baseline JIT compilation got faster too. Similarly, we were able to remove a lot of complicated code for debugger support and exception handling.

What’s next?

With the Baseline Interpreter in place, it should now be possible to move Baseline JIT compilation off-thread. We will be working on that in the coming months, and we anticipate more performance improvements in this area.


Although I did most of the Baseline Interpreter work, many others contributed to this project. In particular Ted Campbell and Kannan Vijayan reviewed most of the code changes and had great design feedback.

Also thanks to Steven DeTar, Chris Fallin, Havi Hoffman, Yulia Startsev, and Luke Wagner for their feedback on this blog post.

The post The Baseline Interpreter: a faster JS interpreter in Firefox 70 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogThank you, Chris

Thank you, Chris.

Chris Beard has been Mozilla Corporation’s CEO for 5 and a half years. Chris has announced 2019 will be his last year in this role. I want to thank Chris from the bottom of my heart for everything he has done for Mozilla. He has brought Mozilla enormous benefits — new ideas, new capabilities, new organizational approaches. As CEO Chris has put us on a new and better path. Chris’ tenure has seen the development of important organization capabilities and given us a much stronger foundation on which to build. This includes reinvigorating our flagship web browser Firefox to be once again a best-in-class product. It includes recharging our focus on meeting the online security and privacy needs facing people today. And it includes expanding our product offerings beyond the browser to include a suite of privacy and security-focused products and services from Facebook Container and Enhanced Tracking Protection to Firefox Monitor.

Chris will remain an advisor to the board. We recognize some people may think these words are a formula and have no deep meaning. We think differently. Chris is a true “Mozillian.” He has been devoted to Mozilla for the last 15 years, and has brought this dedication to many different roles at Mozilla. When Chris left Mozilla to join Greylock as an “executive-in-residence” in 2013, he remained an advisor to Mozilla Corporation. That was an important relationship, and Chris and I were in contact when it started to become clear that Chris could be the right CEO for MoCo. So over the coming years I expect to work with Chris on mission-related topics. And I’ll consider myself lucky to do so.

One of the accomplishments of Chris’ tenure is the strength and depth of Mozilla Corporation today. The team is strong. Our organization is strong, and our future full of opportunities. It is precisely the challenges of today’s world, and Mozilla’s opportunities to improve online life, that bring so many of us to Mozilla. I personally remain deeply focused on Mozilla. I’ll be here during Chris’ tenure, and I’ll be here after his tenure ends. I’m committed to Mozilla, and to making serious contributions to improving online life and developing new technical capabilities that are good for people.

Chris will remain as CEO during the transition. We will continue to develop the work on our new engagement model, our focus on privacy and user agency. To ensure continuity, I will increase my involvement in Mozilla Corporation’s internal activities. I will be ready to step in as interim CEO should the need arise.

The Board has retained Tuck Rickards of the recruiting firm Russell Reynolds for this search. We are confident that Tuck and team understand that Mozilla products and technology bring our mission to life, and that we are deeply different than other technology companies. We’ll say more about the search as we progress.

The internet stands at an inflection point today. Mozilla has the opportunity to make significant contributions to a better internet. This is why we exist, and it’s a key time to keep doing more. We offer heartfelt thanks to Chris for leading us to this spot, and for leading the rejuvenation of Mozilla Corporation so that we are fit for this purpose, and determined to address big issues.

Mozilla’s greatest strength is the people who respond to our mission and step forward to take on the challenge of building a better internet and online life. Chris is a shining example of this. I wish Chris the absolute best in all things.

I’ll close by stating a renewed determination to find ways for everyone who seeks safe, humane and exciting online experiences to help create this better world.

The post Thank you, Chris appeared first on The Mozilla Blog.

The Mozilla BlogMy Next Chapter

Earlier this morning I shared the news internally that – while I’ve been a Mozillian for 15 years so far, and plan to be for many more years – this will be my last year as CEO.

When I returned to Mozilla just over five years ago, it was during a particularly tumultuous time in our history. Looking back it’s amazing to reflect on how far we’ve come, and I am so incredibly proud of all that our teams have accomplished over the years.

Today our products, technology and policy efforts are stronger and more resonant in the market than ever, and we have built significant new organizational capabilities and financial strength to fuel our work. From our new privacy-forward product strategy to initiatives like the State of the Internet we’re ready to seize the tremendous opportunity and challenges ahead to ensure we’re doing even more to put people in control of their connected lives and shape the future of the internet for the public good.

In short, Mozilla is an exceptionally better place today, and we have all the fundamentals in place for continued positive momentum for years to come.

It’s with that backdrop that I made the decision that it’s time for me to take a step back and start my own next chapter. This is a good place to recruit our next CEO and for me to take a meaningful break and recharge before considering what’s next for me. It may be a cliché — but I’ll embrace it — as I’m also looking forward to spending more time with my family after a particularly intense but gratifying tour of duty.

However, I’m not wrapping up today or tomorrow, but at year’s end. I’m absolutely committed to ensuring that we sustain the positive momentum we have built. Mitchell Baker and I are working closely together with our Board of Directors to ensure leadership continuity and a smooth transition. We are conducting a search for my successor and I will continue to serve as CEO through that transition. If the search goes beyond the end of the year, Mitchell will step in as interim CEO if necessary, to ensure we don’t miss a beat. And I will stay engaged for the long-term as advisor to both Mitchell and the Board, as I’ve done before.

I am grateful to have had this opportunity to serve Mozilla again, and to Mitchell for her trust and partnership in building this foundation for the future.

Over the coming months I’m excited to share with you the new products, technology and policy work that’s in development now. I am more confident than ever that Mozilla’s brightest days are yet to come.

The post My Next Chapter appeared first on The Mozilla Blog.

Mozilla ThunderbirdWhat’s New in Thunderbird 68

Our newest release, Thunderbird version 68, is now available! Users on version 60, the last major release, will not be immediately updated – but will receive the update in the coming weeks. In this blog post, we’ll take a look at the features that are most noteworthy in the newest version. If you’d like to see all the changes in version 68, you can check out the release notes.

Thunderbird 68 focuses on polish and setting the stage for future releases. There was a lot of work that we had to do below the surface that has made Thunderbird more future-proof and has made it a solid base to continue to build upon. But we also managed to create some great features you can touch today.

New App Menu

Thunderbird 68 features a big update to the App Menu. The new menu is single pane with icons and separators that make it easier to navigate and reduce clutter. Animation when cycling through menu items produces a more engaging experience and results in the menu feeling more responsive and modern.

New Thunderbird Menu

Thunderbird’s New App Menu

Options/Preferences in a Tab

Thunderbird’s Options/Preferences have been converted from a dialog window into their own dedicated tab. The new Preferences tab provides more space, which allows for better organized content and is more consistent with the look and feel of the rest of Thunderbird. The new Preferences tab also makes it easier to multitask without the problem of losing track of where your preferences are when switching between windows.

Preferences in a Tab

Preferences in a Tab

Full Color Support

Thunderbird now features full color support across the app. This means changing the color of the text of your email to any color you want or setting tags to any shade your heart desires.

New Full Color Picker

Full Color Support

Better Dark Theme

The dark theme available in Thunderbird has been enhanced with a dark message thread pane as well as many other small improvements.

Thunderbird Dark Theme

Thunderbird Dark Theme

Attachment Management

There are now more options available for managing attachments. You can “detach” an attachment to store it in a different folder while maintaining a link from the email to the new location. You can also open the folder containing a detached file via the “Open Containing Folder” option.

Attachment options for detached files.

Attachment options for detached files.

Filelink Improved

Filelink attachments that have already been uploaded can now be linked to again instead of having to re-upload them. Also, an account is no longer required to use the default Filelink provider – WeTransfer.

Other Filelink providers like Box and Dropbox are not included by default but can be added by grabbing the Dropbox and Box add-ons.

Other Notable Changes

There are many other smaller changes that make Thunderbird 68 feel polished and powerful, including an updated To/CC/BCC selector in the compose window, filters that can be set to run periodically, and feed articles that now show external attachments as links.

There are many other updates in this release, you can see a list of all of them in the Thunderbird 68 release notes. If you would like to try the newest Thunderbird, head to our website and download it today!

QMOFirefox 69 Beta 14 Testday Results

Hello Mozillians!

As you may already know, Friday August 16th – we held a new Testday event, for Firefox 69 Beta 14.

Thank you all for helping us make Mozilla a better place: Fernando, noelonassis!

Results: several test cases were executed for Anti-tracking.

Thanks for another awesome testday, we appreciate your contribution! 🙂

We hope to see you all in our next events, keep an eye on QMO.
We will make announcements as soon as something shows up!


Mozilla VR BlogNew Avatar Features in Hubs

New Avatar Features in Hubs

It is now easier than ever to customize avatars for Hubs! Choosing the way that you represent yourself in a 3D space is an important part of interacting in a virtual world, and we want to make it possible for anyone to have creative control over how they choose to show up in their communities. With the new avatar remixing update, members of the Hubs community can publish avatars that they create under a remixable, Creative Commons license, and grant others the ability to derive new works from those avatars. We’ve also added more options for creating custom avatars.

When you change your avatar in Hubs, you will now have the opportunity to browse through 'Featured' avatars and ‘Newest’ avatars. Avatars that are remixable will have an icon on them that allows you to save a version of that avatar to your own ‘My Avatars’ library, where you can customize the textures on the avatar to create your own spin on the original work. The ‘Red Panda’ avatar below is a remix of the original Panda Bot.

New Avatar Features in Hubs

In addition to remixing avatars, you can create avatars by uploading a binary glTF file (or selecting one of the base bots that Hubs provides) and uploading up to four texture maps to the model. We have a number of resources available on GitHub for creating custom textures, as well as sets to get you started. You can also make your own designs with a 2D image editor or a 3D texture painting program.

The Hubs base avatar is a glTF model that has four texture maps and supports physically-based rendering (PBR) materials. This allows a great deal of flexibility in what kind of avatars can be created while still providing a quick way to create custom base color maps. For users who are familiar with 3D modeling, you can also create your own new avatar style from scratch, or by using the provided .blend files in the avatar-pipelines GitHub repo.

New Avatar Features in Hubs

We’ve also made it easier to share avatars with one another inside a Hubs room. A tab for ‘Avatars’ now appears in the Create menu, and you can place thumbnails for avatars in the room you’re in to quickly swap between them. This will also allow others in the room to easily change to a specific avatar, which is a fun way to share avatars with a group.

New Avatar Features in Hubs

These improvements to our avatar tools are just the start of what we’re working on to increase opportunities that Hubs users have available to express themselves on the platform. Making it easy to change your avatar - over and over again - allows users to have flexibility over how much personal information they want their avatar to reveal, and easily change from one digital body to another depending on how they’re using Hubs at a given time. While, at Mozilla, we find Panda Robots to be perfectly suited to company meetings, other communities and groups will have their own established social norms for professional activities. We want to support a rich, creative ecosystem for our users, and we can't wait to see what you create!

Nick FitzgeraldAsync Stack Traces in Rust

One neat result of Rust’s futures and async/await design is that all of the async callers are on the stack below the async callees. In most other languages, only the youngest async callee is on the stack, and none of the async callers. Because the youngest frame is most often not where a bug’s root cause lies, this extra context makes debugging async code easier in Rust.

How others do it

First, before we look at the extra context Rust gives you in its async stack traces, let’s take a look at a stack trace from async code in another language. We’ll walk through an example in JavaScript, but this applies equally well to any async model that is callback-driven under the hood.

Here’s our async JavaScript snippet. foo waits for a short timeout asynchronously, then calls bar, which also waits for a short timeout before calling blowUp, which again waits before finally throwing an error. This example, while simple, is nonetheless representative of an async task that involves multiple async functions with multiple await suspension points.

// A `Promise`-returning, `async`/`await`-compatible
// wrapper around `setTimeout`.
function sleep(n) {
  return new Promise(resolve => setTimeout(resolve, n));
}

async function foo() {
  await sleep(100);
  await bar();
}

async function bar() {
  await sleep(200);
  await blowUp();
}

async function blowUp() {
  await sleep(300);
  throw new Error("nested error in async code");
}

If we run foo, we get an error with a stack trace like this:
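The exact text depends on the JavaScript engine, but it is roughly a single frame like the following (the file name and line number here are illustrative, not the original post's output):

Error: nested error in async code
    at blowUp (example.js:17:9)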


That’s the complete stack: just a single stack frame. It doesn’t give us any help with identifying who called blowUp.

Why do we have just this lonely stack frame? Because what async/await boils down to in JavaScript is, roughly speaking[0], a suspendable generator function and a series of Promise callbacks that keep pumping the generator function to completion. The async function’s desugared generator suspends at await points, yielding the Promise it is waiting on. The runtime takes that Promise and enqueues a callback to resume the generator (via calling next) once the promise is resolved:

// Enqueue a callback on the yielded promise.
yieldedPromise.then(x => {
  // Resume our suspended function with the result.
  generator.next(x);
});

Whenever the Promise is resolved, its queued callbacks are run on a new tick of the event loop[1], without any caller on the stack.

If we visualize the call stack over time, with youngest frames on top and oldest frames on bottom, it looks like this:

                  ╱╱            ╱╱               ╱╱
  ^               ╲╲   ┌─────┐  ╲╲   ┌────────┐  ╲╲
  │               ╱╱   │ bar │  ╱╱   │ blowUp │  ╱╱
  │      ┌─────┐  ╲╲  ┌┴─────┤  ╲╲  ┌┴────────┤  ╲╲  ┌────────┐
Stack    │ foo │  ╱╱  │ foo  │  ╱╱  │ bar     │  ╱╱  │ blowUp │
  │      └─────┘  ╲╲  └──────┘  ╲╲  └─────────┘  ╲╲  └────────┘
  │               ╱╱            ╱╱               ╱╱
  │             +100 ms       +200 ms          +300 ms


After blowUp sleeps for 300 milliseconds and throws an error, it is the only frame on the stack, and there is nothing left pointing to its async callers foo and bar.

How Rust does it

Now let’s look at some async Rust code that is roughly equivalent to our previous JavaScript snippet:

use async_std::task;
use std::time::Duration;

fn main() {
    let future = foo();
    // Drive the future to completion; `block_on` is one straightforward way.
    task::block_on(future);
}

async fn foo() {
    task::sleep(Duration::from_millis(100)).await;
    bar().await;
}

async fn bar() {
    task::sleep(Duration::from_millis(200)).await;
    blow_up().await;
}

async fn blow_up() {
    task::sleep(Duration::from_millis(300)).await;
    panic!("nested async panic!");
}

When we compile and run this Rust program with RUST_BACKTRACE=1 to capture a stack trace on panic, we get the following stack trace[2], which has the async callers foo and bar in addition to the youngest async callee blow_up:

thread 'async-task-driver' panicked at 'nested async panic!', src/
stack backtrace:
   8: async_stack_traces::blow_up::{{closure}}
             at src/
  15: async_stack_traces::bar::{{closure}}
             at src/
  22: async_stack_traces::foo::{{closure}}
             at src/

In Rust, async functions desugar into suspendable generator functions, which in turn desugar into state machine enums. Rather than yielding futures, like how JavaScript’s desugared async functions yield promises, Rust’s desugared async functions’ state machines implement the Future trait themselves[3]. Just as futures nest (it is common to have MyOuterFuture::poll calling MyInnerFuture::poll), awaiting a future desugars into the generator polling the nested future.

For example, if an activation of our async fn foo is awaiting a task::sleep future, it will poll that future inside its own poll implementation, and if it is awaiting an activation of async fn bar, then it will poll its bar activation inside its own poll implementation:

// Lightly edited output of `cargo rustc -- -Z unpretty=hir`
// for our `async fn foo`.
fn foo() -> impl Future<Output = ()> {
    use std::future;
    use std::pin::Pin;
    use std::task::Poll;

    future::from_generator(move || {
        // Awaiting `task::sleep`.
        {
            let mut pinned = task::sleep(<Duration>::from_millis(100));
            loop {
                match future::poll_with_tls_context(unsafe {
                    Pin::new_unchecked(&mut pinned)
                }) {
                    Poll::Ready(()) => break,
                    Poll::Pending => yield (),
                }
            }
        }

        // Awaiting `async fn bar`.
        {
            let mut pinned = bar();
            loop {
                match future::poll_with_tls_context(unsafe {
                    Pin::new_unchecked(&mut pinned)
                }) {
                    Poll::Ready(()) => break,
                    Poll::Pending => yield (),
                }
            }
        }
    })
}
With this nested polling, we have the async caller on the stack below its async callee.
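To make that nesting concrete outside of the compiler’s desugaring, here is a minimal hand-written sketch (mine, not from the post; the Outer type and its field are illustrative) of an outer future whose poll calls an inner future’s poll, so the caller’s frame is on the stack while the callee runs:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// An illustrative wrapper future: polling `Outer` polls its inner future.
struct Outer<F> {
    inner: F,
}

impl<F: Future + Unpin> Future for Outer<F> {
    type Output = F::Output;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // While this call runs, `Outer::poll` (the async caller) sits on the
        // stack below the inner future's `poll` (the async callee).
        Pin::new(&mut self.inner).poll(cx)
    }
}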

To make it visual, here is the stack of Future::poll trait method calls over time for our Rust example:

                     ╱╱               ╱╱                ╱╱
  ^                  ╲╲               ╲╲    ┌────────┐  ╲╲
  │                  ╱╱               ╱╱    │ sleep  │  ╱╱
  │                  ╲╲    ┌───────┐  ╲╲   ┌┴────────┤  ╲╲  ┌─────────┐
  │                  ╱╱    │ sleep │  ╱╱   │ blow_up │  ╱╱  │ blow_up │
  │       ┌───────┐  ╲╲   ┌┴───────┤  ╲╲  ┌┴─────────┤  ╲╲  ├─────────┤
Stack     │ sleep │  ╱╱   │ bar    │  ╱╱  │ bar      │  ╱╱  │ bar     │
  │      ┌┴───────┤  ╲╲  ┌┴────────┤  ╲╲  ├──────────┤  ╲╲  ├─────────┤
  │      │ foo    │  ╱╱  │ foo     │  ╱╱  │ foo      │  ╱╱  │ foo     │
  │      └────────┘  ╲╲  └─────────┘  ╲╲  └──────────┘  ╲╲  └─────────┘
  │                  ╱╱               ╱╱                ╱╱
  │                +100 ms          +200 ms           +300 ms


After blow_up has slept for 300 milliseconds and panics, we still have its async callers bar and foo on the stack, which is super useful for debugging.


Rust’s readiness-based polling model for asynchronous code means that, during polling, the async caller that is waiting on an async callee’s completion is always on the stack. Therefore async callers show up in stack traces for async Rust code, not just the youngest async callee as in most other languages. In turn, this property makes debugging async code that much easier in Rust.

I wasn’t the first to notice this property of Rust’s futures and async/await design, nor was I involved in its design process. However, I don’t think most folks have discovered this property yet, so I’m doing my part to help explain it and spread the knowledge.

Thanks to Alex Crichton, Jason Orendorff, and Jim Blandy for providing feedback on an early draft of this blog post. Their feedback only improved this text, and any errors that remain are my own.


What about “async stacks” in JavaScript?

To help users debug asynchronous code in JavaScript, engines introduced “async stacks”. The way this works, at least in SpiderMonkey, is to capture the stack when a promise is allocated or when some async operation begins, and then, when the async callbacks run, append that captured stack to any new stacks that are captured.

Async stacks come with both time and space overheads. These aren’t small costs, and there has been significant engineering work put into making the captured stack traces compact in SpiderMonkey. There are also limits on how many frames and how often async stacks are captured because of these costs.

Async stacks are also not a zero-cost abstraction, since the engine must capture the async portions and save them in memory regardless of whether any execution actually ends up using them. On the other hand, since Rust’s async callers appear on the normal call stack, nothing needs to be captured eagerly and stored in memory.

What if I want even more context for my async Rust code?

The tracing facade looks like a promising framework for annotating async tasks, creating a hierarchy between tasks, and recording structured data about program execution.
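To give a rough feel for it, here is a small sketch of annotating an async task with tracing; this is my own example, not from the post, and it assumes the tracing, tracing-subscriber, and async-std crates:

use async_std::task;
use std::time::Duration;
use tracing::{info, instrument};

// `#[instrument]` records a span (including the function's arguments) each
// time `fetch` runs, attaching structured context to the async task.
#[instrument]
async fn fetch(id: u32) {
    info!("starting work");
    task::sleep(Duration::from_millis(100)).await;
    info!("done");
}

fn main() {
    // Print spans and events to stdout.
    tracing_subscriber::fmt::init();
    task::block_on(fetch(42));
}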

[0] This is not the actual desugaring, but is close enough to get the point across for our purposes. Read the actual ECMAScript standard if you want to know what is really going on under the hood of async/await in JavaScript functions. The exact details are too nitpicky for (and largely irrelevant to) this blog post.

[1] I’m ignoring micro vs. macro task queues in this blog post, because (again) those details are irrelevant here.

[2] I edited this stack trace to remove extra stack frames that come from the async runtime library and the stack unwinding library. In Rust, we have the opposite problem compared to other languages: we have too many stack frames! 🤣

[3] Technically they are wrapped in a std::future::GenFuture via std::future::from_generator, and there is this Future implementation for GenFuture wrappers:

impl<T: Generator<Yield = ()>> Future for GenFuture<T> {
    type Output = T::Return;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // ...
    }
}

But this is an implementation detail that’s mostly irrelevant to this discussion.

This Week In RustThis Week in Rust 301

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is include_flate, a variant of include_bytes!/include_str! with compile-time DEFLATE compression and runtime lazy decompression.

Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

221 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Asia Pacific
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Just as Bruce Lee practiced Jeet Kune Do, the style of all styles, Rust is not bound to any one paradigm. Instead of trying to put it into an existing box, it's best to just feel it out. Rust isn't Haskell and it's not C. It has aspects in common with each and it has traits unique to itself.

Alexander Nye on rust-users

Thanks to Louis Cloete for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

IRL (podcast)Making Privacy Law

The word “regulation” gets tossed around a lot. And it’s often aimed at the internet’s Big Tech companies. Some worry that the size of these companies and the influence they wield is too much. On the other side, there’s the argument that any regulation is overreach — leave it to the market, and everything will sort itself out. But over the last year, in the midst of this regulation debate, a funny thing happened. Tech companies got regulated. And our right to privacy got a little easier to exercise.

Gabriela Zanfir-Fortuna gives us the highlights of Europe’s sweeping GDPR privacy law, and explains how the law netted a huge fine against Spain’s National Football League. Twitter’s Data Protection Officer, Damien Kieran explains how regulation has shaped his new job and is changing how Twitter works with our personal data. Julie Brill at Microsoft says the company wants legislators to go further, and bring a federal privacy law to the U.S. And Manoush chats with Alastair MacTaggart, the California resident whose work led to the passing of the California Consumer Privacy Act.

IRL is an original podcast from Firefox. For more on the series go to

Learn more about consumer rights under the GDPR, and for a top-level look at what the GDPR does for you, check out our GDPR summary.

Here’s more about the California Consumer Privacy Act and Alastair MacTaggart.

And, get commentary and analysis on data privacy from Julie Brill, Gabriela Zanfir-Fortuna, and Damien Kieran.

Firefox has a department dedicated to open policy and advocacy. We believe that privacy is a right, not a privilege. Follow our blog for more.

Cameron KaiserTenFourFox FPR16b1 available

TenFourFox Feature Parity Release 16 beta 1 is now available (downloads, hashes, release notes). In addition, the official FAQ has been updated, along with the tech notes.

FPR16 got delayed because I really tried very hard to make some progress on our two biggest JavaScript deficiencies, the infamous issues 521 (async and await) and 533 (this is undefined). Unfortunately, not only did I make little progress on either, but the speculative fix I tried for issue 533 turned out to be the patch that unsettled the optimized build and had to be backed out. There is some partial work on issue 521, though, including a fully working parser patch. The problem is plumbing this into the browser runtime which is ripe for all kinds of regressions and is not currently implemented (instead, for compatibility, async functions get turned into a bytecode of null throw null return, essentially making any call to an async function throw an exception because it wouldn't have worked in the first place).

This wouldn't seem very useful except that effectively what the whole shebang does is convert a compile-time error into a runtime warning, such that other functions that previously might not have been able to load because of the error can now be parsed and hopefully run. With luck this should improve the functionality of sites using these functions even if everything still doesn't fully work, as a down payment hopefully on a future implementation. It may not be technically possible but it's a start.

Which reminds me, and since this blog is syndicated on Planet Mozilla: hey, front end devs, if you don't have to minify your source, how about you don't? Issue 533, in fact, is entirely caused because uglify took some fast and loose shortcuts that barf on older parsers, and it is nearly impossible to unwind errors that occur in minified code (this is now changing as sites slowly update, so perhaps this will be self-limited in the end, but in the meantime it's as annoying as Andy Samberg on crack). This is particularly acute given that the only thing fixing it in the regression range is a 2.5 megabyte patch that I'm only a small amount of the way through reading. On the flip side, I was able to find and fix several parser edge cases because Bugzilla itself was triggering them and the source file that was involved was not minified. That means I could actually read it and get error reports that made sense! Help support us lonely independent browser makers by keeping our lives a little less complicated. Thank you for your consideration!

Meanwhile, I have the parser changes on by default to see if it induces any regressions. Sites may or may not work any differently, but they should not work worse. If you find a site that seems to be behaving adversely in the beta, please toggle javascript.options.asyncfuncs to false and restart the browser, which will turn the warning back into an error. If even that doesn't fix it, make sure nothing on the site changed (like, I dunno, checking it in FPR15) before reporting it in the comments.

This version also "repairs" Firefox Sync support by connecting the browser back up to the right endpoints. You are reminded, however, that like add-on support Firefox Sync is only supported at a "best effort" level because I have no control over the backend server. I'll make reasonable attempts to keep it working, but things can break at any time, and it is possible that it will stay broken for good (and be removed from the UI) if data structures or the protocol change in a way I can't control for. There's a new FAQ entry for this I suggest you read.

Finally, there are performance improvements for HTML5 and URL parsing from later versions of Firefox as well as a minor update to same-site cookie support, plus a fix for a stupid bug with SVG backgrounds that I caused and Olga found, updates to basic adblock with new bad hosts, updates to the font blacklist with new bad fonts, and the usual security and stability updates from the ESRs.

I realize the delay means there won't be a great deal of time to test this, so let me know deficiencies as quickly as possible so they can be addressed before it goes live on or about September 2 Pacific time.

Joel MaherDigging into regressions

Whenever a patch lands on autoland, it runs many builds and tests to make sure there are no regressions.  Unfortunately, many times we find a regression, and 99% of the time we back out the changes so they can be fixed.  This work is done by the Sheriff team at Mozilla: they monitor the trees and, when something is wrong, they work to fix it (sometimes with a quick fix, usually with a backout).  A quick fact: there were 1228 regressions in H1 (January-June) 2019.

My goal in writing is not to recommend change, but instead to start conversations and figure out what data we should be collecting in order to have data driven discussions.  Only then would I expect that recommendations for changes would come forth.

What got me started looking at regressions was trying to answer a question: “How many regressions did X catch?”  That alone is a tough question; instead, I think the question should be “If we were not running X, how many regressions would our end users see?”  This is a much different question and has a few distinct parts:

  • Unique Regressions: Only look at regressions found that only X found, not found on both X and Y
  • Product Fixes: did the regression result in changing code that we ship to users? (i.e. not editing the test)
  • Final Fix: many times a patch [set] lands and is backed out multiple times, in this case do we look at each time it was backed out, or only the change from initial landing to final landing?

These can be more difficult to answer.  Take Product Fixes, for example: maybe by editing the test case we are preventing a future regression because the test is now more accurate.

In addition, we need to understand how accurate the data we are using is.  While the sheriffs do a great job, they are human, and humans make judgement calls.  In this case, once a job is marked as “fixed_by_commit” we cannot go back in and edit it, so a typo or bad entry will result in incorrect data.  To add to it, oftentimes multiple patches are backed out at the same time, so is it correct to say that changes from both bug A and bug B should be considered?

This year I have looked at this data many times to answer a number of questions.

This data is important to harvest because if we were to turn off a set of jobs, or run them as tier-2, we would end up missing regressions.  But if all we would miss is edits to manifests that disable failing tests, then we are getting no value from those test jobs, so it is important to look at what the regression outcome was.

In fact, every time I did this I would run an active-data recipe (the fbc recipe in my repo) and end up with a large pile of data I needed to sort through and manually check.  I spent some time every day for a few weeks looking at regressions, and I have now looked at 700 (bugs/changesets).  In manually checking these regressions, I found that the end results fell into buckets:

Outcome    Count  Percentage
test         196      28.00%
product      272      38.86%
manifest     134      19.14%
unknown       48       6.86%
backout       27       3.86%
infra         23       3.29%

Keep in mind that many of the changes which end up in mozilla-central are not only product bugs, but infrastructure bugs, test editing, etc.

After looking at many of these bugs, I found that ~80% of the time things are straightforward (single patch [set] landed, backed out once, relanded with clear comments).  Data I would like to have easily available via a query:

  • Files that are changed between backout and relanding (even if it is a new patch).
  • A ‘reason’ field in Phabricator, so that when we reland it is required to fill in a few pre-canned fields

Ideally this set of data would exist for not only backouts, but for anything that is landed to fix a regression (linting, build, manifest, typo).

Mozilla Open Policy & Advocacy BlogMozilla Mornings on the future of EU content regulation

On 10 September, Mozilla will host the next installment of our EU Mozilla Mornings series – regular breakfast meetings where we bring together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

The next installment will focus on the future of EU content regulation. We’re bringing together a high-level panel to discuss how the European Commission should approach the mooted Digital Services Act, and to lay out a vision for a sustainable and rights-protective content regulation framework in Europe.


Werner Stengg
Head of Unit, E-Commerce & Platforms, European Commission DG CNECT
Alan Davidson
Vice President of Global Policy, Trust & Security, Mozilla
Liz Carolan
Executive Director, Digital Action
Eliska Pirkova
Europe Policy Analyst, Access Now

Moderated by Brian Maguire, EURACTIV

Logistical information

10 September 2019
L42 Business Centre, rue de la Loi 42, Brussels 1040


Register your attendance here

The post Mozilla Mornings on the future of EU content regulation appeared first on Open Policy & Advocacy.

Dustin J. MitchellOutreachy Round 20

Outreachy is a program that provides paid internships working on FOSS (Free and Open Source Software) to applicants from around the world. Internships are three months long and involve deep, technical work on a mentor-selected project, guided by mentors and other developers working on the FOSS application. At Mozilla, projects include work on Firefox itself, development of associated services and sites like Taskcluster and Treeherder, and analysis of Firefox telemetry data from a data-science perspective.

The program has an explicit focus on diversity: “Anyone who faces under-representation, systemic bias, or discrimination in the technology industry of their country is invited to apply.” It’s a small but very effective step in achieving better representation in this field. One of the interesting side-effects is that the program sees a number of career-changing participants. These people bring a wealth of interesting and valuable perspectives, but face challenges in a field where many have been programming since they were young.

Round 20 will involve full-time, remote work from mid-December 2019 to mid-March 2020. Initial applications for this round are now open.

New this year, applicants will need to make an “initial application” to determine eligibility before September 24. During this time, applicants can only see the titles of potential internship projects – not the details. On October 1, all applicants who have been deemed eligible will be able to see the full project descriptions. At that time, they’ll start communicating with project mentors and making small open-source contributions, and eventually prepare applications to one or more projects.

So, here’s the call to action:

  • If you are or know people who might benefit from this program, encourage them to apply or to talk to one of the Mozilla coordinators (Kelsey Witthauer and myself) at
  • If you would like to mentor for the program, there’s still time! Get in touch with us and we’ll figure it out.

Armen ZambranoFrontend security — thoughts on Snyk

Frontend security — thoughts on Snyk

I can’t remember why, but a few months ago I started looking into keeping my various React projects secure. Here’s some of what I discovered (more to come). I hope some of it will be valuable to you.

A while ago I discovered Snyk and hooked my various projects up with it. Snyk sends me a weekly security summary with a breakdown of the various security issues across all of my projects.

This is part of Snyk’s weekly report I receive in my inbox

Snyk also gives me context about the particular security issues found:

This is extremely useful if you want to understand the security issue

It also analyzes my dependencies on a per-PR level:

Safety? Check! This is a GitHub PR check (like Travis)

Other features that I’ve tried from Snyk:

  1. It sends you an email when there’s a vulnerable package (no need to wait for the weekly report)
  2. Open PRs upgrading vulnerable packages when possible
  3. Patch your code while there’s no published package with a fix
This is a summary for your project: it shows that a PR can be opened

The above features I have tried and I decided not to use them for the following reasons (listed in the same order as above):

  1. As a developer I already get enough interruptions in a week. I don’t need to be notified for every single security issue in my dependency tree. My projects don’t deal with anything sensitive, so I’m OK with waiting and dealing with them at the beginning of the week
  2. The PRs opened by Snyk do not work well with Yarn since they do not update the yarn.lock file, requiring me to fetch the PR, run yarn install, and push it back (this wastes my time)
  3. The feature to patch your code (runtime protection, or snyk protect) adds a very high setup cost (1–2 minutes) every time you need to run yarn install. This is because it analyzes all your dependencies and patches your code in situ. This gets in the way of my development workflow.

Overall I’m very satisfied with Snyk and I highly recommend using it.

In follow-up posts I’m thinking of covering:

  • How Renovate can help reduce the burden of keeping your projects up-to-date (reducing security work later on)
  • Differences between GitHub’s security tab (DependaBot) and Snyk
  • npm audit, yarn audit & snyk test

NOTE: This post is not sponsored by Snyk. I love what they do, I root for them and I hope they soon fix the issues I mention above.

Support.Mozilla.OrgIntroducing Bryce and Brady

Hello SUMO Community,

I’m thrilled to share this update with you today. Bryce and Brady joined us last week and will be able to help out on Support for some of the new efforts Mozilla is working on towards creating a connected and integrated Firefox experience.

They are going to be involved with new products, but they also won’t forget to put extra effort into providing support on the forums, as well as serving as an escalation point for hard-to-solve issues.

Here is a short introduction to Brady and Bryce:

Hi! My name is Brady, and I am one of the new members of the SUMO team. I am originally from Boise, Idaho and am currently going to school for a Computer Science degree at Boise State. In my free time, I’m normally playing video games, writing, drawing, or enjoying the Sawtooths. I will be providing support for Mozilla products and for the SUMO team.

Hello! My name is Bryce. I was born and raised in San Diego and I reside in Boise, Idaho. Growing up, I spent a good portion of my life trying to be the best sponger (boogie boarder) and longboarder in North County San Diego. While out in the ocean I had all sorts of run-ins with sea creatures, but nothing too scary. I am also an IN-N-Out fan, and you may find me sporting their merchandise with boardshorts and the like. I am truly excited to be part of this amazing group of fun-loving folks and I am looking forward to getting to know everyone.

Please welcome them warmly!

Hacks.Mozilla.OrgWebAssembly Interface Types: Interoperate with All the Things!

People are excited about running WebAssembly outside the browser.

That excitement isn’t just about WebAssembly running in its own standalone runtime. People are also excited about running WebAssembly from languages like Python, Ruby, and Rust.

Why would you want to do that? A few reasons:

  • Make “native” modules less complicated
    Runtimes like Node or Python’s CPython often allow you to write modules in low-level languages like C++, too. That’s because these low-level languages are often much faster. So you can use native modules in Node, or extension modules in Python. But these modules are often hard to use because they need to be compiled on the user’s device. With a WebAssembly “native” module, you can get most of the speed without the complication.
  • Make it easier to sandbox native code
    On the other hand, low-level languages like Rust wouldn’t use WebAssembly for speed. But they could use it for security. As we talked about in the WASI announcement, WebAssembly gives you lightweight sandboxing by default. So a language like Rust could use WebAssembly to sandbox native code modules.
  • Share native code across platforms
    Developers can save time and reduce maintenance costs if they can share the same codebase across different platforms (e.g. between the web and a desktop app). This is true for both scripting and low-level languages. And WebAssembly gives you a way to do that without making things slower on these platforms.

Scripting languages like Python and Ruby saying 'We like WebAssembly's speed', low-level languages like Rust and C++ saying, 'and we like the security it could give us' and all of them saying 'and we all want to make developers more effective'

So WebAssembly could really help other languages with important problems.

But with today’s WebAssembly, you wouldn’t want to use it in this way. You can run WebAssembly in all of these places, but that’s not enough.

Right now, WebAssembly only talks in numbers. This means the two languages can call each other’s functions.

But if a function takes or returns anything besides numbers, things get complicated. You can either:

  • Ship one module that has a really hard-to-use API that only speaks in numbers… making life hard for the module’s user.
  • Add glue code for every single environment you want this module to run in… making life hard for the module’s developer.

But this doesn’t have to be the case.

It should be possible to ship a single WebAssembly module and have it run anywhere… without making life hard for either the module’s user or developer.

user saying 'what even is this API?' vs developer saying 'ugh, so much glue code to worry about' vs both saying 'wait, it just works?'

So the same WebAssembly module could use rich APIs, using complex types, to talk to:

  • Modules running in their own native runtime (e.g. Python modules running in a Python runtime)
  • Other WebAssembly modules written in different source languages (e.g. a Rust module and a Go module running together in the browser)
  • The host system itself (e.g. a WASI module providing the system interface to an operating system or the browser’s APIs)

A wasm file with arrows pointing to and from: logos for different runtimes (Ruby, php, and Python), other wasm files compiled from Rust and Go, and host systems like the OS or browser

And with a new, early-stage proposal, we’re seeing how we can make this Just Work™, as you can see in this demo.

So let’s take a look at how this will work. But first, let’s look at where we are today and the problems that we’re trying to solve.

WebAssembly talking to JS

WebAssembly isn’t limited to the web. But up to now, most of WebAssembly’s development has focused on the Web.

That’s because you can make better designs when you focus on solving concrete use cases. The language was definitely going to have to run on the Web, so that was a good use case to start with.

This gave the MVP a nicely contained scope. WebAssembly only needed to be able to talk to one language—JavaScript.

And this was relatively easy to do. In the browser, WebAssembly and JS both run in the same engine, so that engine can help them efficiently talk to each other.

A js file asking the engine to call a WebAssembly function

The engine asking the WebAssembly file to run the function

But there is one problem when JS and WebAssembly try to talk to each other… they use different types.

Currently, WebAssembly can only talk in numbers. JavaScript has numbers, but also quite a few more types.

And even the numbers aren’t the same. WebAssembly has 4 different kinds of numbers: int32, int64, float32, and float64. JavaScript currently only has Number (though it will soon have another number type, BigInt).
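For a quick sense of the gap, here is a small Rust example (mine, not from the article) of a 64-bit integer value that a JavaScript Number, which is an IEEE-754 double, cannot represent exactly:

fn main() {
    // 2^53 + 1: representable as an i64, but not as a 64-bit float,
    // which is what a JavaScript Number is under the hood.
    let big: i64 = 9_007_199_254_740_993;
    let as_double = big as f64;
    println!("{} vs {}", big, as_double as i64); // the double rounds to 2^53
}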

The difference isn’t just in the names for these types. The values are also stored differently in memory.

First off, in JavaScript any value, no matter the type, is put in something called a box (and I explained boxing more in another article).

WebAssembly, in contrast, has static types for its numbers. Because of this, it doesn’t need (or understand) JS boxes.

This difference makes it hard to communicate with each other.

JS asking wasm to add 5 and 7, and Wasm responding with 9.2368828e+18

But if you want to convert a value from one number type to the other, there are pretty straightforward rules.

Because it’s so simple, it’s easy to write down. And you can find this written down in WebAssembly’s JS API spec.

A large book that has mappings between the wasm number types and the JS number types

This mapping is hardcoded in the engines.

It’s kind of like the engine has a reference book. Whenever the engine has to pass parameters or return values between JS and WebAssembly, it pulls this reference book off the shelf to see how to convert these values.

JS asking the engine to call wasm's add function with 5 and 7, and the engine looking up how to do conversions in the book

Having such a limited set of types (just numbers) made this mapping pretty easy. That was great for an MVP. It limited how many tough design decisions needed to be made.

But it made things more complicated for the developers using WebAssembly. To pass strings between JS and WebAssembly, you had to find a way to turn the strings into an array of numbers, and then turn an array of numbers back into a string. I explained this in a previous post.
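As a rough illustration of that pattern (my own sketch, not code from the article; the function name is made up), here is what a wasm-targeted Rust export looks like when the only vocabulary available is numbers:

// The JS glue code encodes the string into this module's linear memory and
// then calls the export with a pointer and a length -- just two numbers.
#[no_mangle]
pub extern "C" fn count_chars(ptr: *const u8, len: usize) -> usize {
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    // Decode the bytes back into a string on the wasm side.
    match std::str::from_utf8(bytes) {
        Ok(s) => s.chars().count(),
        Err(_) => 0,
    }
}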

JS putting numbers into WebAssembly's memory

This isn’t difficult, but it is tedious. So tools were built to abstract this away.

For example, tools like Rust’s wasm-bindgen and Emscripten’s Embind automatically wrap the WebAssembly module with JS glue code that does this translation from strings to numbers.
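For comparison, here is roughly what the same idea looks like with wasm-bindgen doing the work (a sketch on my part; the greet function is illustrative): the generated JS glue code handles turning strings into numbers and back.

use wasm_bindgen::prelude::*;

// wasm-bindgen generates the JS glue that copies the string into and out of
// linear memory, so the Rust code can just use `&str` and `String`.
#[wasm_bindgen]
pub fn greet(name: &str) -> String {
    format!("Hello, {}!", name)
}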

JS file complaining about having to pass a string to Wasm, and the JS glue code offering to do all the work

And these tools can do these kinds of transformations for other high-level types, too, such as complex objects with properties.

This works, but there are some pretty obvious use cases where it doesn’t work very well.

For example, sometimes you just want to pass a string through WebAssembly. You want a JavaScript function to pass a string to a WebAssembly function, and then have WebAssembly pass it to another JavaScript function.

Here’s what needs to happen for that to work:

  1. the first JavaScript function passes the string to the JS glue code

  2. the JS glue code turns that string object into numbers and then puts those numbers into linear memory

  3. then passes a number (a pointer to the start of the string) to WebAssembly

  4. the WebAssembly function passes that number over to the JS glue code on the other side

  5. the second JavaScript function pulls all of those numbers out of linear memory and then decodes them back into a string object

  6. which it gives to the second JS function

JS file passing string 'Hello' to JS glue code
JS glue code turning string into numbers and putting that in linear memory
JS glue code telling engine to pass 2 to wasm
Wasm telling engine to pass 2 to JS glue code
JS glue code taking bytes from linear memory and turning them back into a string
JS glue code passing string to JS file

So the JS glue code on one side is just reversing the work it did on the other side. That’s a lot of work to recreate what’s basically the same object.

If the string could just pass straight through WebAssembly without any transformations, that would be way easier.

WebAssembly wouldn’t be able to do anything with this string—it doesn’t understand that type. We wouldn’t be solving that problem.

But it could just pass the string object back and forth between the two JS functions, since they do understand the type.

So this is one of the reasons for the WebAssembly reference types proposal. That proposal adds a new basic WebAssembly type called anyref.

With an anyref, JavaScript just gives WebAssembly a reference object (basically a pointer that doesn’t disclose the memory address). This reference points to the object on the JS heap. Then WebAssembly can pass it to other JS functions, which know exactly how to use it.
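A hedged sketch of what “just pass it through” can look like from Rust with wasm-bindgen today (my example, not the article’s; wasm-bindgen represents the opaque JS reference as a JsValue, whether or not the engine’s anyref support is used underneath):

use wasm_bindgen::prelude::*;

// The module never inspects the string (or any other JS object); it only
// holds an opaque reference and hands it back to JavaScript.
#[wasm_bindgen]
pub fn pass_through(value: JsValue) -> JsValue {
    value
}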

JS passing a string to Wasm and the engine turning it into a pointer
Wasm passing the string to a different JS file, and the engine just passes the pointer on

So that solves one of the most annoying interoperability problems with JavaScript. But that’s not the only interoperability problem to solve in the browser.

There’s another, much larger, set of types in the browser. WebAssembly needs to be able to interoperate with these types if we’re going to have good performance.

WebAssembly talking directly to the browser

JS is only one part of the browser. The browser also has a lot of other functions, called Web APIs, that you can use.

Behind the scenes, these Web API functions are usually written in C++ or Rust. And they have their own way of storing objects in memory.

Web APIs’ parameters and return values can be lots of different types. It would be hard to manually create mappings for each of these types. So to simplify things, there’s a standard way to talk about the structure of these types—Web IDL.

When you’re using these functions, you’re usually using them from JavaScript. This means you are passing in values that use JS types. How does a JS type get converted to a Web IDL type?

Just as there is a mapping from WebAssembly types to JavaScript types, there is a mapping from JavaScript types to Web IDL types.

So it’s like the engine has another reference book, showing how to get from JS to Web IDL. And this mapping is also hardcoded in the engine.

A book that has mappings between the JS types and Web IDL types

For many types, this mapping between JavaScript and Web IDL is pretty straightforward. For example, types like DOMString and JS’s String are compatible and can be mapped directly to each other.

Now, what happens when you’re trying to call a Web API from WebAssembly? Here’s where we get to the problem.

Currently, there is no mapping between WebAssembly types and Web IDL types. This means that, even for simple types like numbers, your call has to go through JavaScript.

Here’s how this works:

  1. WebAssembly passes the value to JS.
  2. In the process, the engine converts this value into a JavaScript type, and puts it in the JS heap in memory
  3. Then, that JS value is passed to the Web API function. In the process, the engine converts the JS value into a Web IDL type, and puts it in a different part of memory, the renderer’s heap.

Wasm passing number to JS
Engine converting the int32 to a Number and putting it in the JS heap
Engine converting the Number to a double, and putting that in the renderer heap

This takes more work than it needs to, and also uses up more memory.

There’s an obvious solution to this—create a mapping from WebAssembly directly to Web IDL. But that’s not as straightforward as it might seem.

For simple Web IDL types like boolean and unsigned long (which is a number), there are clear mappings from WebAssembly to Web IDL.

But for the most part, Web API parameters are more complex types. For example, an API might take a dictionary, which is basically an object with properties, or a sequence, which is like an array.

To have a straightforward mapping between WebAssembly types and Web IDL types, we’d need to add some higher-level types. And we are doing that—with the GC proposal. With that, WebAssembly modules will be able to create GC objects—things like structs and arrays—that could be mapped to complicated Web IDL types.

But if the only way to interoperate with Web APIs is through GC objects, that makes life harder for languages like C++ and Rust that wouldn’t use GC objects otherwise. Whenever the code interacts with a Web API, it would have to create a new GC object and copy values from its linear memory into that object.

That’s only slightly better than what we have today with JS glue code.

We don’t want JS glue code to have to build up GC objects—that’s a waste of time and space. And we don’t want the WebAssembly module to do that either, for the same reasons.

We want it to be just as easy for languages that use linear memory (like Rust and C++) to call Web APIs as it is for languages that use the engine’s built-in GC. So we need a way to create a mapping between objects in linear memory and Web IDL types, too.

There’s a problem here, though. Each of these languages represents things in linear memory in different ways. And we can’t just pick one language’s representation. That would make all the other languages less efficient.

someone standing between the names of linear memory languages like C, C++, and Rust, pointing to Rust and saying 'I pick... that one!'. A red arrow points to the person saying 'bad idea'

But even though the exact layout in memory for these things is often different, there are some abstract concepts that they usually share in common.

For example, for strings the language often has a pointer to the start of the string in memory, and the length of the string. And even if the string has a more complicated internal representation, it usually needs to convert strings into this format when calling external APIs anyways.

This means we can reduce this string down to a type that WebAssembly understands… two i32s.

The string Hello in linear memory, with an offset of 2 and length of 5. Red arrows point to offset and length and say 'types that WebAssembly understands!'

We could hardcode a mapping like this in the engine. So the engine would have yet another reference book, this time for WebAssembly to Web IDL mappings.

But there’s a problem here. WebAssembly is a type-checked language. To keep things secure, the engine has to check that the calling code passes in types that match what the callee asks for.

This is because there are ways for attackers to exploit type mismatches and make the engine do things it’s not supposed to do.

If you’re calling something that takes a string, but you try to pass the function an integer, the engine will yell at you. And it should yell at you.

So we need a way for the module to explicitly tell the engine something like: “I know Document.createElement() takes a string. But when I call it, I’m going to pass you two integers. Use these to create a DOMString from data in my linear memory. Use the first integer as the starting address of the string and the second as the length.”

This is what the Web IDL proposal does. It gives a WebAssembly module a way to map between the types that it uses and Web IDL’s types.

These mappings aren’t hardcoded in the engine. Instead, a module comes with its own little booklet of mappings.

Wasm file handing a booklet to the engine and saying `Here's a little guidebook. It will tell you how to translate my types to interface types`

So this gives the engine a way to say “For this function, do the type checking as if these two integers are a string.”

The fact that this booklet comes with the module is useful for another reason, though.

Sometimes a module that would usually store its strings in linear memory will want to use an anyref or a GC type in a particular case… for example, if the module is just passing an object that it got from a JS function, like a DOM node, to a Web API.

So modules need to be able to choose on a function-by-function (or even argument-by-argument) basis how different types should be handled. And since the mapping is provided by the module, it can be custom-tailored for that module.

Wasm telling engine 'Read carefully... For some functions that take DOMStrings, I'll give you two numbers. For others, I'll just give you the DOMString that JS gave to me.'

How do you generate this booklet?

The compiler takes care of this information for you. It adds a custom section to the WebAssembly module. So for many language toolchains, the programmer doesn’t have to do much work.

For example, let’s look at how the Rust toolchain handles this for one of the simplest cases: passing a string into the alert function.

extern "C" {
    fn alert(s: &str);

The programmer just has to tell the compiler to include this function in the booklet using the annotation #[wasm_bindgen]. By default, the compiler will treat this as a linear memory string and add the right mapping for us. If we needed it to be handled differently (for example, as an anyref) we’d have to tell the compiler using a second annotation.
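And calling the imported function from Rust then looks like ordinary code (a small sketch assuming the usual wasm-bindgen setup; the run function is illustrative):

// The compiler records in the custom section that `alert` takes a string,
// so the engine (or a polyfill) knows how to pass it across the boundary.
#[wasm_bindgen]
pub fn run() {
    alert("Hello from WebAssembly!");
}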

So with that, we can cut out the JS in the middle. That makes passing values between WebAssembly and Web APIs faster. Plus, it means we don’t need to ship down as much JS.

And we didn’t have to make any compromises on what kinds of languages we support. It’s possible to have all different kinds of languages that compile to WebAssembly. And these languages can all map their types to Web IDL types—whether the language uses linear memory, or GC objects, or both.

Once we stepped back and looked at this solution, we realized it solved a much bigger problem.

WebAssembly talking to All The Things

Here’s where we get back to the promise in the intro.

Is there a feasible way for WebAssembly to talk to all of these different things, using all these different type systems?

A wasm file with arrows pointing to and from: logos for different runtimes (Ruby, php, and Python), other wasm files compiled from Rust and Go, and host systems like the OS or browser

Let’s look at the options.

You could try to create mappings that are hardcoded in the engine, like WebAssembly to JS and JS to Web IDL are.

But to do that, for each language you’d have to create a specific mapping. And the engine would have to explicitly support each of these mappings, and update them as the language on either side changes. This creates a real mess.

This is kind of how early compilers were designed. There was a pipeline for each source language to each machine code language. I talked about this more in my first posts on WebAssembly.

We don’t want something this complicated. We want it to be possible for all these different languages and platforms to talk to each other. But we need it to be scalable, too.

So we need a different way to do this… more like modern day compiler architectures. These have a split between front-end and back-end. The front-end goes from the source language to an abstract intermediate representation (IR). The back-end goes from that IR to the target machine code.

This is where the insight from Web IDL comes in. When you squint at it, Web IDL kind of looks like an IR.

Now, Web IDL is pretty specific to the Web. And there are lots of use cases for WebAssembly outside the web. So Web IDL itself isn’t a great IR to use.

But what if you just use Web IDL as inspiration and create a new set of abstract types?

This is how we got to the WebAssembly interface types proposal.

Diagram showing WebAssembly interface types in the middle. On the left is a wasm module, which could be compiled from Rust, Go, C, etc. Arrows point from these options to the types in the middle. On the right are host languages like JS, Python, and Ruby; host platforms like .NET, Node, and operating systems, and more wasm modules. Arrows point from these options to the types in the middle.

These types aren’t concrete types. They aren’t like the int32 or float64 types in WebAssembly today. There are no operations on them in WebAssembly.

For example, there won’t be any string concatenation operations added to WebAssembly. Instead, all operations are performed on the concrete types on either end.

There’s one key point that makes this possible: with interface types, the two sides aren’t trying to share a representation. Instead, the default is to copy values between one side and the other.

A wasm module saying 'since this is a string in linear memory, I know how to manipulate it' and browser saying 'since this is a DOMString, I know how to manipulate it'

There is one case that would seem like an exception to this rule: the new reference values (like anyref) that I mentioned before. In this case, what is copied between the two sides is the pointer to the object. So both pointers point to the same thing. In theory, this could mean they need to share a representation.

In cases where the reference is just passing through the WebAssembly module (like the anyref example I gave above), the two sides still don’t need to share a representation. The module isn’t expected to understand that type anyway… just pass it along to other functions.

But there are times where the two sides will want to share a representation. For example, the GC proposal adds a way to create type definitions so that the two sides can share representations. In these cases, the choice of how much of the representation to share is up to the developers designing the APIs.

This makes it a lot easier for a single module to talk to many different languages.

In some cases, like the browser, the mapping from the interface types to the host’s concrete types will be baked into the engine.

So one set of mappings is baked in at compile time and the other is handed to the engine at load time.

Engine holding Wasm's mapping booklet and its own mapping reference book for Wasm Interface Types to Web IDL, saying 'So this maps to a string? Ok, I can take it from here to the DOMString that the function is asking for using my hardcoded bindings'

But in other cases, like when two WebAssembly modules are talking to each other, they both send down their own little booklet. They each map their functions’ types to the abstract types.

Engine reaching for mapping booklets from two wasm files, saying 'Ok, let's see how these map to each other'

This isn’t the only thing you need to enable modules written in different source languages to talk to each other (and we’ll write more about this in the future) but it is a big step in that direction.

So now that you understand why, let’s look at how.

What do these interface types actually look like?

Before we look at the details, I should say again: this proposal is still under development. So the final proposal may look very different.

Two construction workers with a sign that says 'Use caution'

Also, this is all handled by the compiler. So even when the proposal is finalized, you’ll only need to know what annotations your toolchain expects you to put in your code (like in the wasm-bindgen example above). You won’t really need to know how this all works under the covers.

But the details of the proposal are pretty neat, so let’s dig into the current thinking.

The problem to solve

The problem we need to solve is translating values between different types when a module is talking to another module (or directly to a host, like the browser).

There are four places where we may need to do a translation:

For exported functions

  • accepting parameters from the caller
  • returning values to the caller

For imported functions

  • passing parameters to the function
  • accepting return values from the function

And you can think about each of these as going in one of two directions:

  • Lifting, for values leaving the module. These go from a concrete type to an interface type.
  • Lowering, for values coming into the module. These go from an interface type to a concrete type.

Telling the engine how to transform between concrete types and interface types

So we need a way to tell the engine which transformations to apply to a function’s parameters and return values. How do we do this?

By defining an interface adapter.

For example, let’s say we have a Rust module compiled to WebAssembly. It exports a greeting_ function that can be called without any parameters and returns a greeting.

Here’s what it would look like (in WebAssembly text format) today.

a Wasm module that exports a function that returns two numbers. See proposal linked above for details.

So right now, this function returns two integers.

But we want it to return the string interface type. So we add something called an interface adapter.

If an engine understands interface types, then when it sees this interface adapter, it will wrap the original module with this interface.

an interface adapter that returns a string. See proposal linked above for details.

It won’t export the greeting_ function anymore… just the greeting function that wraps the original. This new greeting function returns a string, not two numbers.

This provides backwards compatibility because engines that don’t understand interface types will just export the original greeting_ function (the one that returns two integers).

How does the interface adapter tell the engine to turn the two integers into a string?

It uses a sequence of adapter instructions.

Two adapter instructions inside of the adapter function. See proposal linked above for details.

The adapter instructions above are two from a small set of new instructions that the proposal specifies.

Here’s what the instructions above do:

  1. Use the call-export adapter instruction to call the original greeting_ function. This is the one that the original module exported, which returned two numbers. These numbers get put on the stack.
  2. Use the memory-to-string adapter instruction to convert the numbers into the sequence of bytes that make up the string. We have to specify “mem” here because a WebAssembly module could one day have multiple memories. This tells the engine which memory to look in. Then the engine takes the two integers from the top of the stack (which are the pointer and the length) and uses those to figure out which bytes to use.

This might look like a full-fledged programming language. But there is no control flow here—you don’t have loops or branches. So it’s still declarative even though we’re giving the engine instructions.

What would it look like if our function also took a string as a parameter (for example, the name of the person to greet)?

Very similar. We just change the interface of the adapter function to add the parameter. Then we add two new adapter instructions.

Here’s what these new instructions do:

  1. Use the arg.get instruction to take a reference to the string object and put it on the stack.
  2. Use the string-to-memory instruction to take the bytes from that object and put them in linear memory. Once again, we have to tell it which memory to put the bytes into. We also have to tell it how to allocate the bytes. We do this by giving it an allocator function (which would be an export provided by the original module; a rough sketch of such an export follows below).
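Here is a rough sketch (mine, not from the proposal) of the kind of allocator export a module might provide so the engine has somewhere to put those bytes; the function name and strategy are illustrative:

// An exported allocator: the engine calls this to reserve `len` bytes of
// linear memory, then copies the string's bytes to the returned address.
#[no_mangle]
pub extern "C" fn alloc(len: usize) -> *mut u8 {
    let mut buf: Vec<u8> = Vec::with_capacity(len);
    let ptr = buf.as_mut_ptr();
    // Hand ownership of the buffer to the caller; a real module would pair
    // this with a matching deallocation function.
    std::mem::forget(buf);
    ptr
}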

One nice thing about using instructions like this: we can extend them in the future… just as we can extend the instructions in WebAssembly core. We think the instructions we’re defining are a good set, but we aren’t committing to these being the only instructions for all time.

If you’re interested in understanding more about how this all works, the explainer goes into much more detail.

Sending these instructions to the engine

Now how do we send this to the engine?

These annotations get added to the binary file in a custom section.

A file split in two. The top part is labeled 'known sections, e.g. code, data'. The bottom part is labeled 'custom sections, e.g. interface adapter'

If an engine knows about interface types, it can use the custom section. If not, the engine can just ignore it, and you can use a polyfill which will read the custom section and create glue code.

How is this different than CORBA, Protocol Buffers, etc?

There are other standards that seem like they solve the same problem—for example CORBA, Protocol Buffers, and Cap’n Proto.

How are those different? They are solving a much harder problem.

They are all designed so that you can interact with a system that you don’t share memory with—either because it’s running in a different process or because it’s on a totally different machine across the network.

This means that you have to be able to send the thing in the middle—the “intermediate representation” of the objects—across that boundary.

So these standards need to define a serialization format that can efficiently go across the boundary. That’s a big part of what they are standardizing.

Two computers with wasm files on them and multiple lines flowing into a single line connecting them. The single line represents serialization and is labelled 'IR'

Even though this looks like a similar problem, it’s actually almost the exact inverse.

With interface types, this “IR” never needs to leave the engine. It’s not even visible to the modules themselves.

The modules only see what the engine spits out for them at the end of the process—what’s been copied to their linear memory or given to them as a reference. So we don’t have to tell the engine what layout to give these types—that doesn’t need to be specified.

What is specified is the way that you talk to the engine. It’s the declarative language for this booklet that you’re sending to the engine.

Two wasm files with arrows pointing to the word 'IR' with no line between, because there is no serialization happening.

This has a nice side effect: because this is all declarative, the engine can see when a translation is unnecessary—like when the two modules on either side are using the same type—and skip the translation work altogether.

The engine looking at the booklets for a Rust module and a Go module and saying 'Ooh, you’re both using linear memory for this string... I’ll just do a quick copy between your memories, then'

How can you play with this today?

As I mentioned above, this is an early stage proposal. That means things will be changing rapidly, and you don’t want to depend on this in production.

But if you want to start playing with it, we’ve implemented this across the toolchain, from production to consumption:

  • the Rust toolchain
  • wasm-bindgen
  • the Wasmtime WebAssembly runtime

And since we maintain all these tools, and since we’re working on the standard itself, we can keep up with the standard as it develops.

Even though all these parts will continue changing, we’re making sure to synchronize our changes to them. So as long as you use up-to-date versions of all of these, things shouldn’t break too much.

Construction worker saying 'Just be careful and stay on the path'

So here are the many ways you can play with this today. For the most up-to-date version, check out this repo of demos.
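For instance, on the Rust side a string-returning export can be written as an ordinary function and run through wasm-bindgen. Here is a minimal sketch; the exact build flags for emitting the interface-types custom section are still evolving along with the proposal, so check the demo repo above for the current invocation.

use wasm_bindgen::prelude::*;

// With interface types, this can be exposed to other modules and hosts as a
// function that genuinely returns a string, instead of a pointer/length pair
// into linear memory plus hand-written glue code.
#[wasm_bindgen]
pub fn greeting() -> String {
    "Hello, world!".to_string()
}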

Thank you

  • Thank you to the team who brought all of the pieces together across all of these languages and runtimes: Alex Crichton, Yury Delendik, Nick Fitzgerald, Dan Gohman, and Till Schneidereit
  • Thank you to the proposal co-champions and their colleagues for their work on the proposal: Luke Wagner, Francis McCabe, Jacob Gravelle, Alex Crichton, and Nick Fitzgerald
  • Thank you to my fantastic collaborators, Luke Wagner and Till Schneidereit, for their invaluable input and feedback on this article

The post WebAssembly Interface Types: Interoperate with All the Things! appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogMozilla takes action to protect users in Kazakhstan

Today, Mozilla and Google took action to protect the online security and privacy of individuals in Kazakhstan. Together the companies deployed technical solutions within Firefox and Chrome to block the Kazakhstan government’s ability to intercept internet traffic within the country.

The response comes after credible reports that internet service providers in Kazakhstan have required people in the country to download and install a government-issued certificate on all devices and in every browser in order to access the internet. This certificate is not trusted by either of the companies, and once installed, it allowed the government to decrypt and read anything a user typed or posted, including their account information and passwords. This targeted people visiting popular sites such as Facebook, Twitter and Google, among others.

“People around the world trust Firefox to protect them as they navigate the internet, especially when it comes to keeping them safe from attacks like this that undermine their security. We don’t take actions like this lightly, but protecting our users and the integrity of the web is the reason Firefox exists.” — Marshall Erwin, Senior Director of Trust and Security, Mozilla

“We will never tolerate any attempt, by any organization—government or otherwise—to compromise Chrome users’ data. We have implemented protections from this specific issue, and will always take action to secure our users around the world.” — Parisa Tabriz, Senior Engineering Director, Chrome

This is not the first attempt by the Kazakhstan government to intercept the internet traffic of everyone in the country. In 2015, the Kazakhstan government attempted to have a root certificate included in Mozilla’s trusted root store program. After it was discovered that they were intending to use the certificate to intercept user data, Mozilla denied the request. Shortly after, the government forced citizens to manually install its certificate but that attempt failed after organizations took legal action.

Each company will deploy a technical solution unique to its browser. For additional information on those solutions please see the below links.


Russian: If you would like to read this text in Russian, click here.

Kazakh: Read this post in Kazakh here.





The post Mozilla takes action to protect users in Kazakhstan appeared first on The Mozilla Blog.

Mozilla Security BlogProtecting our Users in Kazakhstan

Russian translation: If you would like to read this text in Russian, click here.

Kazakh translation: Read this post in Kazakh here.

In July, a Firefox user informed Mozilla of a security issue impacting Firefox users in Kazakhstan: They stated that Internet Service Providers (ISPs) in Kazakhstan had begun telling their customers that they must install a government-issued root certificate on their devices. What the ISPs didn’t tell their customers was that the certificate was being used to intercept network communications. Other users and researchers confirmed these claims, and listed 3 dozen popular social media and communications sites that were affected.

The security and privacy of HTTPS encrypted communications in Firefox and other browsers relies on trusted Certificate Authorities (CAs) to issue website certificates only to someone who controls the domain name or website. For example, you and I can’t obtain a trusted certificate for a domain we don’t control, because Mozilla has strict policies for all CAs trusted by Firefox which only allow an authorized person to get a certificate for that domain. However, when a user in Kazakhstan installs the root certificate provided by their ISP, they are choosing to trust a CA that doesn’t have to follow any rules and can issue a certificate for any website to anyone. This enables the interception and decryption of network communications between Firefox and the website, sometimes referred to as a Monster-in-the-Middle (MITM) attack.

We believe this act undermines the security of our users and the web, and it directly contradicts Principle 4 of the Mozilla Manifesto that states, “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.”

To protect our users, Firefox, together with Chrome, will block the use of the Kazakhstan root CA certificate. This means that it will not be trusted by Firefox even if the user has installed it. We believe this is the appropriate response because users in Kazakhstan are not being given a meaningful choice over whether to install the certificate and because this attack undermines the integrity of a critical network security mechanism.  When attempting to access a website that responds with this certificate, Firefox users will see an error message stating that the certificate should not be trusted.

We encourage users in Kazakhstan affected by this change to research the use of virtual private network (VPN) software, or the Tor Browser, to access the Web. We also strongly encourage anyone who followed the steps to install the Kazakhstan government root certificate to remove it from your devices and to immediately change your passwords, using a strong, unique password for each of your online accounts.

The post Protecting our Users in Kazakhstan appeared first on Mozilla Security Blog.

Cameron KaiserFPR16 delays

FPR16 was supposed to reach you in beta sometime tomorrow but I found a reproducible crash in the optimized build, probably due to one of my vain attempts to fix JavaScript bugs. I'm still investigating exactly which change(s) were responsible. We should still make the deadline (September 3) to be concurrent with the 60.9/68.1 ESRs, but there will not be much of a beta testing period and I don't anticipate it being available until probably at least Friday or Saturday. More later.

While you're waiting, read about today's big OpenPOWER announcement. Isn't it about time for a modern PowerPC under your desk?

This Week In RustThis Week in Rust 300

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is async-std, a library with async variants of the standard library's IO etc.

Thanks to mmmmib for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

268 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.


No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Asia Pacific
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

C++ being memory safe is like saying riding a motorcycle is crash safe.

It totally is, if you happen to have the knowledge and experience to realize this is only true if you remember to put on body-armor, a helmet, a full set of leathers including gloves and reinforced boots, and then remember to operate the motorcycle correctly afterwards. In C/C++ though, that armor is completely 100% optional.

cyrusm on /r/rust

Thanks to Dmitry Kashitsyn for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mozilla GFXmoz://gfx newsletter #47

Hi there! Time for another mozilla graphics newsletter. In the comments section of the previous newsletter, Michael asked about the relation between WebRender and WebGL, so I’ll try to give a short answer here.

Both WebRender and WebGL need access to the GPU to do their work. At the moment both of them use the OpenGL API, either directly or through ANGLE which emulates OpenGL on top of D3D11. They, however, each work with their own OpenGL context. Frames produced with WebGL are sent to WebRender as texture handles. WebRender, at the API level, has a single entry point for images, video frames, canvases, in short for every grid of pixels in some flavor of RGB format, be them CPU-side buffers or already in GPU memory as is normally the case for WebGL. In order to share textures between separate OpenGL contexts we rely on platform-specific APIs such as EGLImage and DXGI.

Beyond that there isn’t any fancy interaction between WebGL and WebRender. The latter sees the former as an image producer, just like 2D canvases, video decoders and plain static images.

What’s new in gfx

Wayland and hidpi improvements on Linux

  • Martin Stransky made a proof of concept implementation of DMABuf textures in Gecko’s IPC mechanism. This dmabuf EGL texture backend on Wayland is similar to what we have on Android/Mac. Dmabuf buffers can be shared with the main/compositor process, can be bound as a render target or texture, and can live in GPU memory. The same dmabuf buffer can also be used as a hardware overlay when it’s attached to a wl_surface/wl_subsurface as a wl_buffer.
  • Jan Horak fixed a bug that prevented tabs from rendering after restoring a minimized window.
  • Jan Horak fixed the window parenting hierarchy with Wayland.
  • Jan Horak fixed a bug with hidpi that was causing select popups to render incorrectly after scrolling.

WebGL multiview rendering

WebGL’s multiview rendering extension has been approved by the working group, and its implementation by Jeff Gilbert will be shipping in Firefox 70.
This extension allows more efficient rendering into multiple viewports, which is most commonly used by VR/AR for rendering both eyes at the same time.

Better high dynamic range support

Jean Yves landed the first part of his HDR work (a set of 14 patches). While we can’t yet output HDR content to HDR screen, this work greatly improved the correctness of the conversion from various HDR formats to low dynamic range sRGB.

You can follow progress on the color space meta bug.

What’s new in WebRender

WebRender is a GPU-based 2D rendering engine for the web written in Rust, currently powering Firefox’s rendering engine as well as the research web browser Servo.

If you are curious about the state of WebRender on a particular platform, up to date information is available at

Speaking of which, darkspirit enabled WebRender for Linux users on Nvidia+Nouveau drivers in Firefox Nightly.

More filters in WebRender

When we run into a primitive that isn’t supported by WebRender, we make it go through the software fallback implementation, which can be slow for some things. SVG filters are a good example of primitives that perform much better if implemented on the GPU in WebRender.
Connor Brewster has been working on implementing a number of SVG filters in WebRender:

See the SVG filters in WebRender meta bug.

Texture swizzling and allocation

WebRender previously only worked with BGRA for color textures. Unfortunately this format is optimal on some platforms but sub-optimal (or even unsupported) on others. So a conversion sometimes has to happen and this conversion if done by the driver can be very costly.
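For reference, the conversion itself is just a channel swap; a CPU-side sketch of BGRA to RGBA is shown below, although in practice the whole point is to avoid doing this work on every texture upload, or to let the GPU swizzle instead.

// Swap the blue and red channels of a tightly packed 4-bytes-per-pixel buffer.
fn bgra_to_rgba(pixels: &mut [u8]) {
    for px in pixels.chunks_exact_mut(4) {
        px.swap(0, 2); // B <-> R; green and alpha stay where they are
    }
}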

Kvark reworked the texture caching logic to support using and swizzling between different formats (for example RGBA and BGRA).
A document that landed with the implementation provides more details about the approach and context.

Kvark also improved the texture cache allocation behavior.

Kvark also landed various refactorings (1), (2), (3), (4).

Android improvements

Jamie fixed emoji rendering on webrender on android and continues investigating driver issues on Adreno 3xx devices.

Displaylist serialization

Dan replaced bincode in our DL IPC code with a new bespoke and private serialization library (peek-poke), ending the terrible reign of the secret serde deserialize_in_place hacks and our fork of serde_derive.

Picture caching improvements

Glenn landed several improvements and fixes to the picture caching infrastructure:

  • Bug 1566712 – Fix quality issues with picture caching when the transform has fractional offsets.
  • Bug 1572197 – Fix world clip region for preserve-3d items with picture caching.
  • Bug 1566901 – Make picture caching more robust to float issues.
  • Bug 1567472 – Fix bug in preserve-3d batching code in WebRender.

Font rendering improvements

Lee landed quite a few font related patches:

  • Bug 1569950 – Only partially clear WR glyph caches if it is not necessary to fully clear.
  • Bug 1569174 – Disable embedded bitmaps if ClearType rendering mode is forced.
  • Bug 1568841 – Force GDI parameters for GDI render mode.
  • Bug 1568858 – Always stretch box shadows except for Cairo.
  • Bug 1568841 – Don’t use enhanced contrast on GDI fonts.
  • Bug 1553818 – Use GDI ClearType contrast for GDI font gamma.
  • Bug 1565158 – Allow forcing DWrite symmetric rendering mode.
  • Bug 1563133 – Limit the GlyphBuffer capacity.
  • Bug 1560520 – Limit the size of WebRender’s glyph cache.
  • Bug 1566449 – Don’t reverse glyphs in GlyphBuffer for RTL.
  • Bug 1566528 – Clamp the amount of synthetic bold extra strikes.
  • Bug 1553228 – Don’t free the result of FcPatternGetString.

Various fixes and improvements

  • Gankra fixed an issue with addon popups and document splitting.
  • Sotaro prevented some unnecessary composites on out-of-viewport external image updates.
  • Nical fixed an integer overflow causing the browser to freeze.
  • Nical improved the overlay profiler by showing more relevant numbers when the timings are noisy.
  • Nical fixed corrupted rendering of progressively loaded images.
  • Nical added a fast path when none of the primitives of an image batch need anti-aliasing or repetition.

Wladimir PalantKaspersky in the Middle - what could possibly go wrong?

Roughly a decade ago I read an article that asked antivirus vendors to stop intercepting encrypted HTTPS connections, this practice actively hurting security and privacy. As you can certainly imagine, antivirus vendors agreed with the sensible argument and today no reasonable antivirus product would even consider intercepting HTTPS traffic. Just kidding… Of course they kept going, and so two years ago a study was published detailing the security issues introduced by interception of HTTPS connections. Google and Mozilla once again urged antivirus vendors to stop. Surely this time it worked?

Of course not. So when I decided to look into Kaspersky Internet Security in December last year, I found it breaking up HTTPS connections so that it would get between the server and your browser in order to “protect” you. Expecting some deeply technical details about HTTPS protocol misimplementations now? Don’t worry, I don’t know enough myself to inspect Kaspersky software on this level. The vulnerabilities I found were far more mundane.

Kaspersky Internet Security getting between browser and server

I reported eight vulnerabilities to Kaspersky Lab between 2018-12-13 and 2018-12-21. This article will only describe three vulnerabilities which have been fixed in April this year. This includes two vulnerabilities that weren’t deemed a security risk by Kaspersky, it’s up to you to decide whether you agree with this assessment. The remaining five vulnerabilities have only been fixed in July, and I agreed to wait until November with the disclosure to give users enough time to upgrade.

Edit (2019-08-22): In order to disable this functionality you have to go into Settings, select “Additional” on the left side, then click “Network.” There you will see a section called “Encrypted connections scanning” where you need to choose “Do not scan encrypted connections.”


The underappreciated certificate warning pages

There is an important edge case with HTTPS connections: what if a connection is established but the other side uses an invalid certificate? Current browsers will generally show you a certificate warning page in this scenario. In Firefox it looks like this:

Certificate warning page in Firefox

This page has seen a surprising amount of changes over the years. The browser vendors recognized that asking users to make a decision isn’t a good idea here. Most of the time, getting out is the best course of action, and ignoring the warning only a viable option for very technical users. So the text here is very clear, low on technical details, and the recommended solution is highlighted. The option to ignore the warning is well-hidden on the other hand, to prevent people from using it without understanding the implications. While the page looks different in other browsers, the main design considerations are the same.

But with Kaspersky Internet Security in the middle, the browser is no longer talking to the server, Kaspersky is. The way HTTPS is designed, it means that Kaspersky is responsible for validating the server’s certificate and producing a certificate warning page. And that’s what the certificate warning page looks like then:

Certificate warning page when Kaspersky is installed

There is a considerable amount of technical detail here, supposedly to allow users to make an informed decision, but it usually just confuses them instead. Oh, and the URL it lists isn’t what I typed into the address bar; it’s actually what this site claims to be (the name has been extracted from the site’s invalid certificate). That’s a tiny security issue, but it wasn’t worth reporting, as this only affects sites accessed by IP address, which should never be the case with HTTPS.

The bigger issue: what is the user supposed to do here? There is “leave this website” in the text, but experience shows that people usually won’t read when hitting a roadblock like this. And the highlighted action here is “I understand the risks and wish to continue” which is what most users can be expected to hit.

Using clickjacking to override certificate warnings

Let’s say that we hijacked some user’s web traffic, e.g. by tricking them into connecting to our malicious WiFi hotspot. Now we want to do something evil with that, such as collecting their Google searches or hijacking their Google account. Unfortunately, HTTPS won’t let us do it. If we place ourselves between the user and the Google server, we have to use our own certificate for the connection to the user. With our certificate being invalid, this will trigger a certificate warning however.

So the goal is to make the user click “I understand the risks and wish to continue” on the certificate warning page. We could just ask nicely, and given how this page is built we’ll probably succeed in a fair share of cases. Or we could use a trick called clickjacking – let the user click it without realizing what they are clicking.

There is only one complication. When the link is clicked there will be an additional confirmation pop-up:

Warning displayed by Kaspersky when overriding a certificate

But don’t despair just yet! That warning is merely generic text, it would apply to any remotely insecure action. We would only need to convince the user that the warning is expected and they will happily click “Continue.” For example, we could give them the following page when they first connect to the network, similar to those captive portals:

Fake Kaspersky warning page

This looks like a legitimate Kaspersky warning page but isn’t; the text here was written by me. The only “real” thing here is the “I understand the risks and wish to continue” link, which actually belongs to an embedded frame. That frame contains Kaspersky’s certificate warning for the site we want to attack and has been positioned in such a way that only the link is visible. When the user clicks it, they will get the generic warning from above and will no doubt confirm ignoring the invalid certificate. We won; now we can do our evil thing without triggering any warnings!

How do browser vendors deal with this kind of attack? They require at least two clicks on different spots of the certificate warning page in order to add an exception for an invalid certificate, which makes clickjacking attacks impracticable. Kaspersky on the other hand felt very confident about their warning prompt, so they opted for adding more information to it. This message will now show you the name of the site you are adding the exception for. Let’s just hope that accessing a site by IP address is the only scenario where attackers can manipulate that name…

Something you probably don’t know about HSTS

There is a slightly less obvious detail to the attack described above: it shouldn’t have worked at all. See, if you reroute traffic to a malicious server and navigate to the site then, neither Firefox nor Chrome will give you the option to override the certificate warning. Getting out will be the only option available, meaning no way whatsoever to exploit the certificate warning page. What is this magic? Did browsers implement some special behavior only for Google?

Firefox certificate warning for

They didn’t. What you see here is a side-effect of the HTTP Strict-Transport-Security (HSTS) mechanism, which Google and many other websites happen to use. When you visit Google it will send the HTTP header Strict-Transport-Security: max-age=31536000 with the response. This tells the browser: “This is an HTTPS-only website, don’t ever try to create an unencrypted connection to it. Keep that in mind for the next year.”

So when the browser later encounters a certificate error on a site using HSTS, it knows: the website owner promised to keep HTTPS functional. There is no way that an invalid certificate is ok here, so allowing users to override the certificate would be wrong.
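As an aside, serving the header is trivial for a site owner. A toy sketch follows (plain Rust with no web framework; the port, body, and max-age value are just examples, and in a real deployment the header only has an effect when it is delivered over HTTPS):

use std::io::{Read, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        let mut buf = [0u8; 1024];
        let _ = stream.read(&mut buf); // ignore the request details in this sketch
        let body = "hello";
        // The Strict-Transport-Security header is what opts the site into HSTS.
        let response = format!(
            "HTTP/1.1 200 OK\r\nStrict-Transport-Security: max-age=31536000\r\nContent-Type: text/plain\r\nContent-Length: {}\r\n\r\n{}",
            body.len(),
            body
        );
        stream.write_all(response.as_bytes())?;
    }
    Ok(())
}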

Unless you have Kaspersky in the middle of course, because Kaspersky completely ignores HSTS and still allows users to override the certificate. When I reported this issue, the vendor response was that this isn’t a security risk because the warning displayed is sufficient. Somehow they decided to add support for HSTS nevertheless, so that current versions will no longer allow overriding certificates here.

It’s no doubt that there are more scenarios where Kaspersky software weakens the security precautions made by browsers. For example, if a certificate is revoked (usually because it has been compromised), browsers will normally recognize that thanks to OCSP stapling and prevent the connection. But I noticed recently that Kaspersky Internet Security doesn’t support OCSP stapling, so if this application is active it will happily allow you to connect to a likely malicious server.

Using injected content for Universal XSS

Kaspersky Internet Security isn’t merely listening in on connections to HTTPS sites, it is also actively modifying those. In some cases it will generate a response of its own, such as the certificate warning page we saw above. In others it will modify the response sent by the server.

For example, if you didn’t install the Kaspersky browser extension, it will fall back to injecting a script into server responses which is then responsible for “protecting” you. This protection does things like showing a green checkmark next to Google search results that are considered safe. As Heise Online wrote merely a few days ago, this also used to leak a unique user ID which allowed tracking users regardless of any protective measures on their side. Oops…

There is a bit more to this feature called URL Advisor. When you put the mouse cursor above the checkmark icon, a message appears stating that you have a safe site there. That message is a frame displaying url_advisor_balloon.html. Where does this file load from? If you have the Kaspersky browser extension, it will be part of that browser extension. If you don’t, it will load from a special Kaspersky-controlled host name (a different one in Firefox and in Chrome) – Kaspersky software will intercept requests to these servers and answer them locally. I noticed however that things were different in Microsoft Edge: here this file would load directly from the website being visited (or any other website, if you changed the host name).

URL Advisor frame showing up when the checkmark icon is hovered

Certainly, when injecting their own web page into every domain on the web Kaspersky developers thought about making it very secure? Let’s have a look at the code running there:

var policeLink = document.createElement("a");
policeLink.href = IsDefined(UrlAdvisorLinkPoliceDecision) ? UrlAdvisorLinkPoliceDecision : locales["UrlAdvisorLinkPoliceDecision"]; = "_blank";

This creates a link inside the frame dynamically. Where does the link target come from? It’s part of the data received from the parent document, with no validation performed. In particular, javascript: links will be happily accepted. So a malicious website needs to figure out the location of url_advisor_balloon.html and embed it in a frame using the host name of the website they want to attack. Then they send a message to it:

  command: "init",
  data: {
    verdict: {
      url: "",
      categories: [
    locales: {
      UrlAdvisorLinkPoliceDecision: "javascript:alert('Hi, this JavaScript code is running on ' + document.domain)",
      CAT_21: "click here"
}), "*");

What you get is a link labeled “click here” which will run arbitrary JavaScript code in the context of the attacked domain when clicked. And once again, the attackers could ask the user nicely to click it. Or they could use clickjacking, so whenever the user clicks anywhere on their site, the click goes to this link inside an invisible frame.

Injected JavaScript code running in context of the Google domain

And here you have it: a malicious website taking over your Google or social media accounts, all because Kaspersky considered it a good idea to have their content injected into secure traffic of other people’s domains. But at least this particular issue was limited to Microsoft Edge.


  • 2018-12-13: Sent report via Kaspersky bug bounty program: Lack of HSTS support facilitating MiTM attacks.
  • 2018-12-17: Sent reports via Kaspersky bug bounty program: Certificate warning pages susceptible to clickjacking and Universal XSS in Microsoft Edge.
  • 2018-12-20: Response from Kaspersky: HSTS and clickjacking reports are not considered security issues.
  • 2018-12-20: Requested disclosure of the HSTS and clickjacking reports.
  • 2018-12-24: Disclosure denied due to similarity with one of my other reports.
  • 2019-04-29: Kaspersky notifies me about the three issues here being fixed (KIS 2019 Patch E, actually released three weeks earlier).
  • 2019-04-29: Requested disclosure of these three issues, no response.
  • 2019-07-29: With five remaining issues reported by me fixed (KIS 2019 Patch F and KIS 2020), requested disclosure on all reports.
  • 2019-08-04: Disclosure denied on HSTS report because “You’ve requested too many tickets for disclosure at the same time.”
  • 2019-08-05: Disclosure denied on five not yet disclosed reports, asking for time until November for users to update.
  • 2019-08-06: Notified Kaspersky about my intention to publish an article about the three issues here on 2019-08-19, no response.
  • 2019-08-12: Reminded Kaspersky that I will publish an article on these three issues on 2019-08-19.
  • 2019-08-12: Kaspersky requesting an extension of the timeline until 2019-08-22, citing that they need more time to prepare.
  • 2019-08-16: Security advisory published by Kaspersky without notifying me.

Cameron KaiserChrome murders FTP like Jeffrey Epstein

What is it with these people? Why can't things that are working be allowed to still go on working? (Blah blah insecure blah blah unused blah blah maintenance blah blah web everything.)

This leaves an interesting situation where Google has, in its very own search index, HTML pages served over FTP that its own browser won't be able to view:

At the top of the search results, even!

Obviously those FTP HTML pages load just fine in mainline Firefox, at least as of this writing, and of course TenFourFox. (UPDATE: This won't work in Firefox either after Fx70, though FTP in general will still be accessible. Note that it references Chrome's announcements; as usual, these kinds of distributed firing squads tend to be self-reinforcing.)

Is it a little ridiculous to serve pages that way? Okay, I'll buy that. But it works fine and wasn't bothering anyone, and they must have some relevance to be accessible because Google even indexed them.

Why is everything old suddenly so bad?

Tantek ÇelikIndieWebCamps Timeline 2011-2019: Amsterdam to Utrecht

At the beginning of IndieWeb Summit 2019, I gave a brief talk on State of the IndieWeb and mentioned that:

We've scheduled lots of IndieWebCamps this year and are on track to schedule a record number of different cities as well.

I had conceived of a graphical representation of the growth of IndieWebCamps over the past nine years, both in number and across the world, but with everything else involved in setting up and running the Summit, I ran out of time. However, the idea persisted, and finally this past week, with a little help from Aaron Parecki re-implementing Dopplr’s algorithm for turning city names into colors, I was able to put together something pretty close to what I’d envisioned:

[Timeline graphic: IndieWebCamps by city and year, 2011–2019; the row labels visible in this excerpt include New Haven, New York, Los Angeles, and San Francisco.]

I don’t know of any tools to take something like this kind of locations vs years data and graph it as such. So I built an HTML table with a cell for each IndieWebCamp, as well as cells for the colspans of empty space. Each colored cell is hyperlinked to the IndieWebCamp for that city for that year.

2011-2018 and over half of 2019 are IndieWebCamps (and Summits) that have already happened. 2019 includes bars for four upcoming IndieWebCamps, which are fully scheduled and open for sign-ups.

The table markup is copy pasted from the IndieWebCamp wiki template where I built it, and you can see the template working live in the context of the IndieWebCamp Cities page. I’m sure the markup could be improved, suggestions welcome!

Julien VehentThe cost of micro-services complexity

It has long been recognized by the security industry that complex systems are impossible to secure, and that pushing for simplicity helps increase trust by reducing assumptions and increasing our ability to audit. This is often captured under the acronym KISS, for "keep it stupid simple", a design principle popularized by the US Navy back in the 60s. For a long time, we thought the enemy was application monoliths that burden our infrastructure with years of unpatched vulnerabilities.

So we split them up. We took them apart. We created micro-services where each function, each logical component, is its own individual service, designed, developed, operated and monitored in complete isolation from the rest of the infrastructure. And we composed them ad vitam æternam. Want to send an email? Call the rest API of micro-service X. Want to run a batch job? Invoke lambda function Y. Want to update a database entry? Post it to A which sends an event to B consumed by C stored in D transformed by E and inserted by F. We all love micro-services architecture. It’s like watching dominoes fall down. When it works, it’s visceral. It’s when it doesn’t that things get interesting. After nearly a decade of operating them, let me share some downsides and caveats encountered in large-scale production environments.

High operational cost

The first problem is operational cost. Even in a devops cloud automated world, each micro-service, serverless or not, needs setup, maintenance and deployment. We never fully got to the holy grail of completely automated everything, so humans are still involved with these things. Perhaps someone sold you on the idea devs could do the ops work on their free time, but let’s face it, that’s a lie, and you need dedicated teams of specialists to run the stuff the right way. And those folks don’t come cheap.

The more services you have, the harder it is to keep up with them. First you’ll start noticing delays in getting new services deployed. A week. Two weeks. A month. What do you mean you need a three months notice to get a new service setup?

Then, it’s the deployments that start to take time. And as a result, services that don’t absolutely need to be deployed, well, aren’t. Soon they’ll become outdated, vulnerable, running on the old version of everything, and deploying a new version means a week’s worth of work to get it back to the current standard.

QA uncertainty

A second problem is quality assurance. Deploying anything in a micro-services world means verifying everything still works. Got a chain of 10 services? Each one probably has its own dev team, QA specialists, ops people that need to get involved, or at least notified, with every deployment of any service in the chain. I know it’s not supposed to be this way. We’re supposed to have automated QA, integration tests, and synthetic end-to-end monitoring that can confirm that a butterfly flapping its wings in us-west-2 triggers a KPI update on the leadership dashboard. But in the real world, nothing’s ever perfect and things tend to break in mysterious ways all the time. So you warn everybody when you deploy anything, and require each intermediate service to rerun their own QA until the pain of getting 20 people involved with a one-line change really makes you wish you had a monolith.

The alternative is that you don’t get those people involved, because, well, they’re busy, and everything is fine until a minor change goes out, all testing passes, until two days later in a different part of the world someone’s product is badly broken. It takes another 8 hours for them to track it back to your change, another 2 to roll it back, and 4 to test everything by hand. The post-mortem of that incident has 37 invitees, including 4 senior directors. Bonus points if you were on vacation when that happened.

Huge attack surface

And finally, there’s security. We sure love auditing micro-services, with their tiny codebases that are always neat and clean. We love reviewing their infrastructure too, with those dynamic security groups and clean dataflows and dedicated databases and IAM controlled permissions. There’s a lot of security benefits to micro-services, so we’ve been heavily advocating for them for several years now.

And then, one day, someone gets fed up with having to manage API keys for three dozen services in flat YAML files and suggests using OAuth for service-to-service authentication. Or perhaps Jean-Kevin drank the mTLS Kool-Aid at the FoolNix conference and made a PKI prototype on the flight back (side note: do you know how hard it is to securely run a PKI over 5 or 10 years? It’s hard). Or perhaps compliance mandates that every server, no matter how small, must run a security agent.

Even when you keep everything simple, this vast network of tiny services quickly becomes a nightmare to reason about. It’s just too big, and it’s everywhere. Your cross-IAM role assumptions keep you up at night. 73% of services are behind on updates and no one dares touch them. One day, you ask if anyone has a diagram of all the network flows and Jean-Kevin sends you a dot graph he generated using some hacky python. Your browser crashes trying to open it, the damn thing is 158MB of SVG.

Most vulnerabilities happen in the seams of things. API credentials will leak. Firewalls will open. Access controls will get mismanaged. The more of them you have, the harder it is to keep everything locked down.

Everything in moderation

I’m not anti micro-services. I do believe they are great, and that you should use them, but, like a good bottle of Lagavulin, in moderation. It’s probably OK to let your monolith do more than one thing, and it’s certainly OK to extract the one functionality that several applications need into a micro-service. We did this with autograph, because it was obvious that handling cryptographic operations should be done by a dedicated micro-service, but we don’t do it for everything. My advice is to wait until at least three services want a given thing before turning it into a micro-service. And if the dependency chain becomes too large, consider going back to a well-managed monolith, because in many cases, it actually is the simpler approach.