Air Mozilla: Bay Area Rust Meetup March 2018 (Algorithms and Procedural Macros)

Bay Area Rust Meetup. This month we have: Jeffrey Seyfried on Procedural Macros, and Tristan Hume on Algorithms in Rust.

Mozilla Localization (L10N): compare-locales 3.0 – GSOC

There’s something magic about compare-locales 3.0. It comes with Python 3 support.

It took me quite a while to get to it, but the writing is on the wall that I had to add support for Python 3. That’s just been out for 10 years, too. Well, more like 9ish.

We’re testing against Python 2.7, 3.5, and 3.6 now.

Thanks to Emin Mastizada for the reviews of this patch.

Slightly related, we’re having two l10n-tooling related proposals out for this year’s Google Summer of Code. Check out Google’s student guide for how to pick a project. Adrian is mentoring a project to improve the experience of first-time users of Pontoon. I’m mentoring a project to support Android’s localization platform as a first-class citizen. You’d write translation quality checks for compare-locales and add support for the XML dialect to Pontoon.

Zibi Braniecki: Multilingual Gecko Status Update 2018.1

As promised in my previous post, I’d like to do a better job at delivering status updates on Internationalization and Localization technologies in Gecko at shorter intervals than once per year.

In the previous post we covered recent history up to Firefox 58, which was released in January 2018. Since then we finished and shipped Firefox 59 and also finished all major work on Firefox 60, so this post will cover those two releases.

Firefox 59 (March)

Firefox 58 shipped with a single string localized using Fluent. In 59 we made the next step and migrated 5 strings from an old localization system to use Fluent. This allowed us to test all of our migration code to ensure that as we port Firefox to the new API we preserve all the hard work of hundreds of localizers.

In 59 we also made several improvements to the performance of Fluent, all while waiting for stylo-chrome to land in Firefox 60.

In LocaleService, we made another important switch. We replaced the old general.useragent.locale and intl.locale.matchOS prefs with a single new pref, intl.locale.requested.

This change accomplished several goals:

  • The name represents the function better. Previously it was pretty confusing for people why Gecko didn’t react immediately when they set the pref. Now it is clearer that this is just a requested locale, and there’s some language negotiation that, depending on the available locales, will switch to it or not.
  • The new pref is optional. Since by default it matches the defaultLocale, we can now skip it and treat its absence as the default mode in which we follow the default locale. That allowed us to remove some code.
  • The new pref allows us to store locale lists. It is meant to store a comma-separated list of requested locales like "fr-CA, fr-FR, en-US", in line with our model of handling locale lists rather than single locales.
  • If the pref is defined and the value is empty, we’ll look to the OS for the locale to use, making it a replacement for the matchOS pref. (A sketch of this lookup order follows the list.)
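
To make that lookup order concrete, here is a minimal TypeScript sketch. The names are hypothetical and this is not the actual LocaleService code, which lives inside Gecko; it only illustrates the three cases described above.

// Hypothetical sketch of how a requested-locales pref such as
// intl.locale.requested could be interpreted. Not Gecko source code.
function getRequestedLocales(
  prefValue: string | undefined,  // value of intl.locale.requested, if the pref is set
  defaultLocale: string,          // the packaged default, e.g. "en-US"
  osLocales: () => string[],      // callback that queries the operating system
): string[] {
  if (prefValue === undefined) {
    // Pref absent: follow the default locale (the common case).
    return [defaultLocale];
  }
  if (prefValue.trim() === "") {
    // Pref present but empty: defer to the OS (replaces intl.locale.matchOS).
    return osLocales();
  }
  // Otherwise: a comma-separated list of requested locales.
  return prefValue.split(",").map(l => l.trim()).filter(l => l.length > 0);
}

// Example: "fr-CA, fr-FR, en-US" becomes ["fr-CA", "fr-FR", "en-US"].
console.log(getRequestedLocales("fr-CA, fr-FR, en-US", "en-US", () => ["de-DE"]));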

This is particularly important because it took us a very long time to unify all uses of the pref, remove it from all around the code, and finally be able to switch to the new one, which should serve us much better.

Next comes the usual set of updates, including an update to ICU 60 by Andre, and cleanups by Makoto Kato – we’re riding the wave of removing old APIs and unifying code around ICU and encoding_rs.

Lastly, as we start looking more at aligning our language resources with CLDR, Francesco started sorting out the differences in our plural rules and in our language and region names. This is the first step on the path to upstreaming our work to CLDR and reusing it in place of our custom data.

Notable changes [my work] [intl]:

Firefox 60 (May)

Although Firefox 60 has not yet been released as of today, the major work cycle on it has finished, and it is currently in the beta channel for stabilization.

In it, we’ve completed another milestone for Fluent, migrating not just a couple but over 100 strings in Firefox Preferences from the old API. This marks the first release where Fluent is actually used to localize a visible portion of the Firefox UI!

As part of that work, we pushed our first significant update of Fluent in Gecko, and landed a special chrome-only API to get Fluent’s performance on par with the old system.

With an increase in the use of Fluent, we also covered it with Mozilla eslint rules, improved missing strings reporting, and wrote an Introduction to Fluent for Firefox Developers.

On the Locale Management side, we separated out mozilla::intl::Locale class and further aligned it with BCP47.

But the big change here is the switch of the source of available locales from the old ChromeRegistry to L10nRegistry.

This is the last symbolic hand-over from the old model to the new, meaning that from that moment the locales registered in L10nRegistry will be used to negotiate language selection for Firefox, and ChromeRegistry becomes a regular customer rather than a provider of the language selection.

We’re very close to finalizing the LocaleService model after over a year of refactoring Gecko!

The regular healthy number of cleanups happened as well. Henri switched more code to use encoding_rs and updated encoding_rs to 0.7.2, Jeff Walden performed a major cleanup of our SpiderMonkey Intl source code, Andre added caches for many Intl APIs to speed them up, and Axel updated compare-locales to 2.7.

We also encountered two interesting bugs – Andre dove deep into ICU to fix `Intl.NumberFormat` breaking on roundings in Persian, and I had to disable some of our bidirectionality features in Fluent due to a bug in Windows API.

Notable changes [my work] [intl]:


With all that work in, we’re slowly settling down the new APIs and finalizing the cleanups, and the bulk of the work now goes directly into switching away from DTD and .properties to Fluent.

As Firefox 60 is getting ready for its stable release, we’re accelerating the migration of Preferences to Fluent, hoping to accomplish it in the 61 or 62 release. Once that is completed, we’ll evaluate the experience and make recommendations for the rest of Firefox.

Stay tuned!

Mark Surman: Enough is enough. Let’s tell Facebook what we want fixed.

I had one big loud thought pounding in my head as I read the Cambridge Analytica headlines this past weekend: it’s time for Facebook users to say ‘enough is enough‘.

Many people have said we need to regulate Facebook and other platforms. Maybe. What’s clear is we need the platforms to work differently.

A faster route to this outcome — or at least a first big step forward — could be for millions of us who use Facebook to tell the company what we want ‘differently’ to look like. And to ask them to make it happen. Now.

There is a long history of this sort of direct consumer-to-company conversation outside the tech world.

People who care about fair work pushed Nike to raise wages and improve factory conditions. People who care about our forests got Kimberly Clark to stop cutting down old growth. People concerned with human health convinced McDonalds to stop buying antibiotic-ridden chicken.

The surprising thing: we have yet to see internet users start a conversation with a company en masse to say: hey, we want things to work differently. Until now.

The concerns people have raised about Facebook and other platforms are wide ranging — and most often tie back to the fact that the ‘big five‘ are near monopolies in key aspects of the tech business.

Yet, many of the problems and harms that people have been pointing to in recent weeks are quite specific. App permissions that allow third-party businesses to access the private information of our friends. Third-party data profiling that shows where each of us stands on issues. And advertising services that allow companies, politicians and trolls to micro-target ads at each of us individually based on these profiles. These are all very specific features or services that the companies involved can change — or stop offering altogether.

As a citizen of the internet and a long time Facebook user, I feel like it’s on me to start talking to the company about the specific changes I’d like to see — and to find others who want to do the same.

With this goal in mind, Mozilla launched a campaign today to get users to band together and ask Facebook to change its app permissions and make sure our privacy is protected by default. This is one small, specific thing that could make a difference.

Of course, there is also a bigger ambition for this campaign: to spark a conversation between the people who make Facebook and the people who use it about how we can make a digital world that is safer and saner and that we all want to live in. I hope that is a conversation they will welcome.

The post Enough is enough. Let’s tell Facebook what we want fixed. appeared first on Mark Surman.

Firefox Test Pilot: New features in Notes v3

Today we are updating TestPilot Notes to v3.1! We have several new user-facing features and behind the scenes changes in this v3 release. The focus of this release was discoverability, speed and a bit of codebase cleanup.

We heard your feedback about “Exporting notes…” and with this release we have added the first export-related feature. You can now export the notepad as HTML using the menu. We are still playing around with Markdown and other exporting features.

<figcaption>Export your Notes as HTML today!</figcaption>

A lot of users also had trouble finding and opening Notes via the sidebar. This is why we added new ways to open the notepad. The first way is by using the new “Send to Notes” button in the context menu. This new button will open the notepad and copy the text from the webpage into it.

<figcaption>Use “Send to Notes” to open Notes and insert text into the notepad</figcaption>
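
For readers curious about the mechanics, a context-menu entry like this only needs a few lines of WebExtension glue. The following is a rough sketch with invented message names, not the actual Notes source code:

// Rough sketch of a "Send to Notes"-style context menu item (illustrative, not Notes' real code).
// Assumes a background script with the "menus" permission and a sidebar page listening for messages.
declare const browser: any; // WebExtension API namespace provided by Firefox

browser.menus.create({
  id: "send-to-notes",
  title: "Send to Notes",
  contexts: ["selection"], // show the item when text is selected on the page
});

browser.menus.onClicked.addListener((info: { menuItemId: string; selectionText?: string }) => {
  if (info.menuItemId !== "send-to-notes") {
    return;
  }
  // Opening the sidebar is allowed here because we are inside a user-action handler.
  browser.sidebarAction.open();
  // Hand the selected text to the sidebar page, which appends it to the notepad.
  browser.runtime.sendMessage({ action: "paste-text", text: info.selectionText ?? "" });
});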

The second path to discoverability is by using the new toolbar extension button. This will quickly open the Notes sidebar for you.

<figcaption>The Notes toolbar button helps you open the notepad quickly</figcaption>

The Notes team would like to thank long-time contributor Cedric Amaya for helping out with these new features.

We have also started migrating the codebase to React and Redux. Thanks to our new developer Sébastien we have landed the first pieces of the React refactor. The React changes make the Notes extension faster and make it easier to maintain the codebase. Besides the code changes there are also new UI design changes that make Notes look more like other parts of Firefox. For example the new menu looks a lot more like the Firefox browser menu:

There are other upcoming design changes to make Notes follow the Photon Design System. In future releases we are also planning to pick up the latest updates from CKEditor and introduce multi-note support.

Stay tuned!

New features in Notes v3 was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Hacks.Mozilla.Org: Bringing interactive examples to MDN

“This is scoped to be a pretty small change.” (me, January 2017)

Over the last year and a bit, the MDN Web Docs team has been designing, building, and implementing interactive examples for our reference pages. The motivation for this was the idea that MDN should do more to help “action-oriented” users: people who like to learn by seeing and playing around with example code, rather than by reading about it.

We’ve just finished adding interactive examples for the JavaScript and CSS reference pages. This post looks back at the project to see how we got here and what we learned on the way.

First prototypes

The project was first outlined in the MDN product strategy, published at the end of 2016. We discussed some ideas on the MDN mailing list, and developed some prototypes.

The JS editor looked like this:

Early prototype of JavaScript editor

The CSS editor looked like this:

Screenshot of CSS editor after first user testing

We wanted the examples – especially the CSS examples – to show users the different kinds of syntax that an item could accept. In the early prototypes, we did this using autocomplete. When the user deleted the value assigned to a CSS property, we showed an autocomplete popup listing different syntax variations:

First round of user testing

In March 2017 Kadir Topal and I attended the first round of user testing, which was run by Mark Hurst. We learned a great deal about user testing, about our prototypes, and about what users wanted to see. We learned that users wanted examples and appreciated them being quick to find. Users liked interactive examples, too.

But autocomplete was not successful as a way to show different syntax forms. It just wasn’t discoverable, and even people who did accidentally trigger it didn’t seem to understand what it was for.

Especially for CSS, though, we still wanted a way to show readers the different kinds of syntax that an item could accept. For the CSS pages, we already had a code block in the pages that lists syntax options, like this:

transform: matrix(1.0, 2.0, 3.0, 4.0, 5.0, 6.0);
transform: translate(12px, 50%);
transform: translateX(2em);
transform: translateY(3in);
transform: scale(2, 0.5);
transform: scaleX(2);
transform: scaleY(0.5);
transform: rotate(0.5turn);
transform: skew(30deg, 20deg);

One user interaction we saw, and really liked, was readers copying lines from this code block into the editor to see the effect. So we thought of combining this block with the editor.

In this next version, you can select a line from the block underneath, and the style is applied to the element above:

Looking back at this prototype now, two things stand out: first, the basic interaction model that we would eventually ship was already in place. Second, although the changes we would make after this point were essentially about styling, they had a dramatic effect on the editor’s usability.

Building a foundation

After that not much happened for a while, because our front-end developers were busy on other projects. Stephanie Hobson helped improve the editor design, but she was also engaged in a full-scale redesign of MDN’s article pages. In June Schalk Neethling joined the team, dedicated to this project. He built a solid foundation for the editors and a whole new contribution workflow. This would be the basis of the final implementation.

In this implementation, interactive examples are maintained in the interactive-examples GitHub repository. Once an interactive example is merged to the repo, it is built automatically as a standalone web page which is then served from the “” domain. To include the example in an MDN page, we then embed the interactive example’s document using an iframe.
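
The embedding step itself is simple. Here is a minimal sketch of the idea in TypeScript; the host URL and page path are placeholders, not the real domain or file layout:

// Illustrative only: embed a standalone interactive-example page in a host document via an iframe.
// "EXAMPLE_HOST" and the page path are placeholders, not the real MDN locations.
const EXAMPLE_HOST = "https://interactive-examples.example.invalid";

function embedInteractiveExample(container: HTMLElement, pagePath: string): void {
  const frame = document.createElement("iframe");
  frame.src = `${EXAMPLE_HOST}/${pagePath}`;
  frame.title = "Interactive example";
  frame.className = "interactive";
  frame.height = "410"; // in practice the height depends on the kind of editor being embedded
  container.appendChild(frame);
}

embedInteractiveExample(document.body, "pages/js/array-slice.html"); // hypothetical path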

UX work and more user testing

At the end of June, we showed the editors to Jen Simmons and Dan Callahan, who provided us some very useful feedback. The JavaScript editor seemed pretty good, but we were still having problems with the CSS editor. At this point it looked like this:

Early prototype of CSS editor in June 2017

People didn’t understand that they could edit the CSS, or even that the left-hand side consisted of a list of separate choices rather than a single block.

Stephanie and Schalk did a full UX review of both editors. We also had an independent UX review from Julia Lopez-Mobilia from The Brigade. After all this work, the editors looked like this in static screenshots:

JS editor for the final user test

CSS editor for the final user test

Then we had another round of user testing. This time we ran remote user tests over video, with participants recruited through MDN itself. This gave us a tight feedback loop for the editors: we could quickly make and test adjustments based on user feedback.

This time user testing was very positive, and we decided we were ready for beta.

Beta testing

The beta test started at the end of August and lasted for two weeks. We embedded editors on three JavaScript and three CSS pages, added a survey, and asked for feedback. Danielle Vincent mentioned it in the Mozilla Developer Newsletter, which drove thousands of people to our Discourse announcement post.

Feedback was overwhelmingly positive: 156/159 people who took the survey voted to see the editor on more pages, and the free-form text feedback was very encouraging. We were confident that we had a good UX.

JavaScript examples and page load optimization

Now we had an editor but very few actual examples. We asked Mark Boas to write examples for the JavaScript reference pages, and in a couple of months he had written about 400 beautiful concise examples.

See the example for Array.slice().
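
If you can’t click through, here is a sketch of the kind of short, self-contained snippet those examples contain, shown for Array.prototype.slice (illustrative, not necessarily the exact code on the page):

// slice() returns a shallow copy of a portion of an array, without modifying the original.
const animals = ["ant", "bison", "camel", "duck", "elephant"];

console.log(animals.slice(2));    // ["camel", "duck", "elephant"]
console.log(animals.slice(2, 4)); // ["camel", "duck"]
console.log(animals.slice(-2));   // ["duck", "elephant"]
console.log(animals);             // unchanged: ["ant", "bison", "camel", "duck", "elephant"]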

We had another problem, though: the editors regressed page load time too much. Schalk and Stephanie worked to wring every last millisecond of performance optimization out of the architecture, and finally, in December 2017, we decided to ship.

We have some extra tricks we plan to implement this year to continue improving page load performance; the fact is, we’re still not happy with the current performance on interactive pages.

CSS examples

In the first three weeks of 2018, Schalk and I updated 400 JavaScript pages to include Mark’s examples, and then we turned to getting examples written for the CSS pages.

We asked for help, Jen Simmons tweeted about it, and three weeks later our community had contributed more than 150 examples, with over a hundred coming from a single volunteer, mfluehr.

See the example for rotate3d().

After that Rachel Andrew and Daniel Beck started working with us, and they took care of the rest.

See the example for clip-path.

What’s next?

Right now we’re working on implementing interactive examples for the HTML reference. We have just finished a round of user testing, with encouraging results, and hope to start writing examples soon.

As I hope this post makes clear, this project has been shaped by many people contributing a wide range of different skills. If you’d like to help out with the project, please check out the interactive-examples repo and the MDN Discourse forum, where we regularly announce updates.

The Firefox Frontier: March Add(on)ness: Ghostery (2) Vs Decentraleyes (3)

It’s the last battle of the first round of March Add(on)ness. Closing out the privacy bracket we have… Ghostery (Privacy): Ghostery is a powerful privacy extension. Block ads, stop trackers… Read more

The post March Add(on)ness: Ghostery (2) Vs Decentraleyes (3) appeared first on The Firefox Frontier.

The Mozilla Blog: Mozilla Statement, Petition: Facebook and Cambridge Analytica

The headlines speak for themselves: Up to 50 million Facebook users had their information used by Cambridge Analytica, a private company, without their knowledge or consent. That’s not okay.

Facebook is facing a lot of questions right now, but one thing is clear: Facebook needs to act to make sure this doesn’t happen again.

Mozilla is asking Facebook to change its app permissions and ensure users’ privacy is protected by default. And we’re asking users to stand with us by signing our petition.

Facebook’s current app permissions leave billions of its users vulnerable without knowing it. If you play games, read news or take quizzes on Facebook, chances are you are doing those activities through third-party apps and not through Facebook itself. The default permissions that Facebook gives to those third parties currently include data from your education and work, current city and posts on your timeline.

We’re asking Facebook to change its policies to ensure third parties can’t access the information of the friends of people who use an app.

At Mozilla, our approach to data is simple: no surprises, and user choice is critical. We believe in that not just because it makes for good products, but because trust is a key factor in keeping the internet healthy.

The internet is transformative because it’s a place to explore, transact, connect, and create. Trust is key to that. We’re pushing Facebook to improve its privacy practices not just because of its 2 billion users, but also for the health of the internet broadly.

Ashley Boyd is Mozilla’s VP, Advocacy

The post Mozilla Statement, Petition: Facebook and Cambridge Analytica appeared first on The Mozilla Blog.

Daniel Pocock: Can a GSoC project beat Cambridge Analytica at their own game?

A few weeks ago, I proposed a GSoC project on the topic of Firefox and Thunderbird plugins for Free Software Habits.

At first glance, this topic may seem innocent and mundane. After all, we all know what habits are, don't we? There are already plugins that help people avoid visiting Facebook too many times in one day, what difference will another one make?

Yet the success of companies like Facebook and those that prey on their users, like Cambridge Analytica (who are facing the prospect of a search warrant today), is down to habits: in other words, the things that users do over and over again without consciously thinking about it. That is exactly why this plugin is relevant.

Many students have expressed interest and I'm keen to find out if any other people may want to act as co-mentors (more information or email me).

One Facebook whistleblower recently spoke about his abhorrence of the dopamine-driven feedback loops that keep users under a spell.

The game changer

Can we use the transparency of free software to help users re-wire those feedback loops for the benefit of themselves and society at large? In other words, instead of letting their minds be hacked by Facebook and Cambridge Analytica, can we give users the power to hack themselves?

In his book The Power of Habit, Charles Duhigg lays bare the psychology and neuroscience behind habits. While reading the book, I frequently came across concepts that appeared immediately relevant to the habits of software engineers and also the field of computer security, even though neither of these topics is discussed in the book.

where is my cookie?

Most significantly, Duhigg finishes with an appendix on how to identify and re-wire your habits and he has made it available online. In other words, a quickstart guide to hack yourself: could Duhigg's formula help the proposed plugin succeed where others have failed?

If you could change one habit, you could change your life

The book starts with examples of people who changed a single habit and completely reinvented themselves. For example, an overweight alcoholic and smoker who became a super-fit marathon runner. In each case, they show how the person changed a single keystone habit and everything else fell into place. Wouldn't you like to have that power in your own life?

Wouldn't it be even better to share that opportunity with your friends and family?

One of the challenges we face in developing and promoting free software is that every day, with every new cloud service, the average person in the street, including our friends, families and co-workers, is ingesting habits carefully engineered for the benefit of somebody else. Do you feel that asking your friends and co-workers not to engage you in these services has become a game of whack-a-mole?

Providing a simple and concise solution, such as a plugin, can help people to find their keystone habits and then help them change them without stress or criticism. Many people want to do the right thing: if it can be made easier for them, with the right messages, at the right time, delivered in a positive manner, people feel good about taking back control. For example, if somebody has spent 15 minutes creating a Doodle poll and sending the link to 50 people, is there any easy way to communicate your concerns about Doodle? If a plugin could highlight an alternative before they invest their time in Doodle, won't they feel better?

If you would like to provide feedback or even help this project go ahead, you can subscribe here and post feedback to the thread or just email me.

cat plays whack-a-mole

Mozilla Release Management Team: Crash-Stop, an extension to help handle crashes on Bugzilla

Crash-stop is a webextension I wrote for Bugzilla to display crash stats by build, together with patch information.

The goal is to have enough information to be able to decide if a patch helped (hence its name) and, if needed, uplift it to the Beta/ESR/Release trains as appropriate.

This project was initially meant to assist release-managers but it’s been useful for developers who fix/monitor crashes or for folks doing bug triage.

A screen snapshot of crash-stop from bug 1432409 (in the “Details” section):

Crash stop table

How to read the data in the table above?

  • The patches landed in beta on 2018-02-20 at 23:40.
  • The buildid of b12 is 20180222170353 and the buildid of b11 is 20180219114835.
  • The first beta build containing the patches is b12.
  • The builds which don’t contain the patches are shown in pink.
  • The builds that contain the patches are shown in green.

As you can see from the example above, the patches had a very positive effect for the first 2 signatures.

For the release channel, the builds are shown in light yellow because no patches were found for that channel (the addon reads all the comments to try to find the push urls). As is obvious in this example, the reassuring data from the Beta channel makes for a strong case to request an uplift to the release channel.

Recently, I added stuff to show startup crashes, for example in bug 1435779:

Crash stop table

Recent updates:

  • The cells are colored in red when more than 50% of the crashes have the flag startup_crash set to true (each number in the Crashes rows has a tooltip with the percentage of startup_crash == true); see the sketch after this list.
  • I added icons for impacted platforms.
  • Click on signatures or versions to get more information from Socorro.
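
A tiny sketch of that first rule, with invented names (the real logic lives in the extension’s source on GitHub):

// Sketch of the "red cell" rule with invented names; not crash-stop's actual code.
interface CrashCounts {
  total: number;          // crashes for this signature in a given build
  startupCrashes: number; // crashes that have the startup_crash flag set to true
}

function startupCrashPercent(c: CrashCounts): number {
  return c.total === 0 ? 0 : (100 * c.startupCrashes) / c.total;
}

function cellColor(c: CrashCounts): "red" | "default" {
  // The cell turns red when more than 50% of the crashes are startup crashes.
  return startupCrashPercent(c) > 50 ? "red" : "default";
}

// The tooltip shows the percentage, e.g. "83.3% startup_crash == true".
const counts = { total: 12, startupCrashes: 10 };
console.log(`${startupCrashPercent(counts).toFixed(1)}% startup_crash == true`, cellColor(counts));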

All feedback is welcome and appreciated! If you want to request features or more data, or report an error, please feel free to file a bug on GitHub.

Source Code and extension download

The extension can be installed from AMO and the development is done on GitHub, pull requests are also welcome!

This Week In Rust: This Week in Rust 226

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is noisy_float, a crate with surprisingly useful floating point types that would rather panic than be Not a Number. Thanks to Ayose Cazorla for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

145 pull requests were merged in the last week

New Contributors

  • Alan Du
  • Alexandre Martin
  • Alex Butler
  • Boris-Chengbiao Zhou
  • Dileep Bapat
  • dragan.mladjenovic
  • Eric Huss
  • snf
  • Yukio Siraichi

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Imagine going back in time and telling the reporter “this bug will get fixed 16 years from now, and the code will be written in a systems programming language that doesn’t exist yet”.

Nicholas Nethercote.

Thanks to jleedev!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Daniel Stenberg: Twenty years, 1998 – 2018

Do you remember this exact day, twenty years ago? March 20, 1998. What exactly happened that day? I’ll tell you what I did then.

First a quick reminder of the state of popular culture at the time: three days later, on the 23rd, the movie Titanic would tie the record and win eleven Academy Awards. Its theme song “My heart will go on” was at the top of the music charts around this time.

I was 27 years old and I worked full-time as a software engineer, mostly with embedded systems. I had already been developing software as a profession for several years then. At this moment in time I was involved as a consultant in a (rather boring) project for Ericsson Telecom ETX, in Nacka Strand in the south eastern general Stockholm area.

At some point during that Friday (I don’t remember the details, but presumably it happened during the late evening), I packaged up the source code of the URL transfer tool we were working on and uploaded it to my personal web site to share it with the world. It was the first release ever of the project under its new name: curl. The tool already supported HTTP, FTP and GOPHER – including uploads for the first two protocols.

It would take more than a year after this day until we started hosting the curl project on its own dedicated web site. The site went live in August 1999, and the URL was changed again in June the following year, to the URL and name we’ve kept since.

(this is the first curl logo we used, made in 1998 by Henrik Hellerstedt)

In my flat in Solna (just north of Stockholm, Sweden) I already spent a lot of spare time, mostly late nights, in front of my computer. Back then, that was an Intel Pentium 120 MHz desktop PC with a huge 19″ Nokia CRT monitor, from which I dialed up to my work’s modem pool to access the Internet and to log in to the Unix machines there, on which I did a lot of the early curl development. On SunOS, Solaris and Linux.

In Stockholm, that Friday started out with sub-zero degrees Celsius but the temperature climbed up to a few positive degrees during the day and there was no snow on the ground. Pretty standard March weather in Stockholm. This is usually a period when the light is slowly coming back (winters are really dark here) but the temperatures remind us that spring still isn’t quite here.

curl 4.0 was just a little more than 2000 lines of C code. It featured 23 command line options. curl 4.0 introduced support for the FTP PORT command, and it could now do FTP uploads that append to the remote file. The version number was bumped up from 3.12, which was the last version number used by the tool under its old name, urlget.

<figcaption class="wp-caption-text">This is what the web site looked like in December 1998, the oldest capture I could find. Extracted from so unfortunately two graphical elements are missing!</figcaption>

It was far from an immediate success. An old note mentions how curl 4.8 (released in the summer of 1998) was downloaded more than 300 times from the site. In August 1999, we counted 1300 weekly visits on the web site. It took time to get people to discover curl and make it into the tool users wanted. By September 1999 curl had already grown to 15K lines of code.

In August 2000 we shipped the first version of libcurl: all the networking transfer powers of curl in a library, ready to be used by your applications. PHP was one of the absolutely first users of libcurl and that certainly helped to drive the early use.

A year later, in August 2001, when Apple started shipping curl by default in Mac OS X 10.1, curl was already widely available in Linux and BSD package collections.

By June 2002, we counted 13000 weekly visits on the site and we had grown to 35K lines of code. And it would not stop there…

Twenty years is both nothing at all and at the same time what feels like an eternity. Just three weeks before curl 4.0 shipped, Mozilla was founded. Google wasn’t founded until six months after. This was long before Facebook or Twitter had even been considered. Certainly a different era. Even the term open source was coined just a month prior to this curl release.

Growth factors over 20 years in the project:

  • Supported protocols: 7.67x
  • Command line options: 9x
  • Lines of code: 75x
  • Contributors: 100x
  • Weekly web site visitors: 1,400x
  • End users using (something that runs) the code: 4,000,000x
  • Stickers with the curl logo: infinity

Twenty years since the first ever curl release. Of course, it took time to make that first release too so the work is older. curl is the third name or incarnation of the project that I first got involved with already in late 1996…


Emma Irwin: What we learned about gender identity in Open Source

In research the Open Innovation team ran in 2017, we learned that ‘Women’ was often being used as a catch-all for non-male and non-binary people, and that this often results in people feeling excluded or invisible inside open source communities.

“This goes into the gender thing — a lot of the time I see non-binary people get lumped in with “women” in diversity things — which is very dysphoria-inducing, as someone who was assigned female but is definitely *not*.” — community interview

To learn more, we launched a Diversity & Inclusion in Open Source survey earlier this year, which sought to better understand how people identify, including gender-identity.

Our gender spectrum question was purposely long, to experiment with the value people found in seeing their identity represented in a question. People from over 200 open projects participated. Amazingly, each of the 17 choices was selected by at least one survey participant.

7.9%** of all respondents selected something other than male or female; for those under the age of 40, that number was higher, at 9.1%.

In some regions, many of the gender choices felt unfamiliar or confusing — but the idea that there be more than two options was not. For example, we know that India already recognizes a ‘third gender’.

Through this experience and other feedback, we settled on a 1.0 standard for gender questions and gender pronouns for surveys and systems.

One way your community can act on these findings is to ensure that people can express their pronouns on profile pages and communication channels. After our given names, pronouns are the most frequently used way of referring to each other, and when we get people’s pronouns wrong, it’s no different than calling someone by the wrong name.

It’s also super-important for binary folks to take this step: by creating norms of sharing pronouns, we make it easier and safer for others.

One other way to act on this research is to ensure that if you create identity groups for women but you mean women and non-binary people, you say so; invite people in through their expressed identity.

** Responses that were deemed not to be sincere were filtered out

Join our next Diversity & Inclusion in Open Source Call — April 4th. Details in our wiki.


Chris Ilias: Why we participate in support

Why do you participate in user support?
Have you ever wondered why any of the people who answer support questions, and write documentation take the time to do it?

This is a followup to a post I wrote about dealing with disgruntled users.

Firefox is a tool Mozilla uses to influence an industry toward open standards, and against software silos. When Firefox has enough market share in the browser world, web developers are forced to support open standards.
Users will not use Firefox if they don’t know how to use it, or if it is not working as expected. Support exists to retain users. If their experience of using Firefox is bad, we’re here to make it good, so they continue to use Firefox.

That experience includes user support. The goal is not only to help users with their problems, but also to remove any negative feeling they may have had. That should be the priority of every person participating in support.

Dealing with disgruntled users is an inherent part of user support. In those cases, it’s important to remind ourselves what the user wants to achieve, and what it takes to make their experience a pleasant one.

In the end, users will be more willing to forgive individual issues out of fondness for the company. That passion for helping users will attract others, and the community will grow.

Mozilla GFX: WebRender newsletter #16

Greetings! 16th newsletter inbound, with some exciting fixes. Oh yeah, fixes again and again, and spoiler alert: this will remain the main topic for a little while.
So what’s exciting about it this time around? For one, Heftig, Martin Stránský and Snorp figured out what was causing rendering to be so broken with nvidia GPUs on Linux (and fixed it). The problem was that when creating a GLX visual, Gdk by default tries to select one that does not have a depth buffer. However, WebRender relies on the depth buffer for rendering opaque primitives.
The other fix that I am particularly excited about is brought to you by Kvark, who finally ended the content flickering saga on Windows after a series of fixes and workarounds in our own code and upstream in ANGLE.
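
As an aside, the dependency on a depth buffer is easy to see outside of Gecko too: a GL context created without one simply cannot do the depth testing that WebRender’s opaque pass relies on. A purely illustrative WebGL sketch of the same pitfall (unrelated to the actual GTK/GLX fix):

// Illustration only: a context without a depth buffer (analogous to the depth-less
// GLX visual Gdk picked by default) makes depth testing a no-op.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl", { depth: false });
if (gl) {
  gl.enable(gl.DEPTH_TEST);                      // nothing to test against...
  console.log(gl.getContextAttributes()?.depth); // false: no depth buffer was allocated
  // An opaque-first renderer needs the equivalent of { depth: true } (the WebGL default).
}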

Notable WebRender changes

  • Simon added support for rendering with ANGLE in wrench on Windows. This will let us run tests in a configuration that better matches what users run.
  • Kvark fixed a division by zero in the brush_blend shader.
  • Glenn fixed some box-shadow artifacts.
  • Lee fixed the way we clear font data when shutting down.
  • Glenn avoided attempting to render a scene if the requested window dimensions are unreasonably large.
  • Nical avoided re-building the scene when updating dynamic properties (it had already been done by Glenn, but accidentally backed out).
  • Glenn refactored the way pictures and brushes are stored to allow more flexibility.
  • Kats and Nical updated the tidy CI script, and fixed an avalanche of followup issues (2), (3), (4).
  • Martin simplified the clipping API.
  • Glenn fixed text-shadow primitives during batch merging.
  • Glenn ported intermediate blits to the brush_image shader.
  • Nical decoupled the tiled image decomposition from the scene building code (in preparation for moving it to the frame building phase).
  • Kvark refactored the shader management code.
  • Glenn ported blurs to use brush_image instead of the composite shader.
  • Simon implemented an example that uses DirectComposition.
  • Martin fixed a clipping issue in blurred and shadowed text.
  • Kvark worked around an ANGLE bug after backing out another attempt at working around the same dreaded ANGLE flickering bug.
  • Jeff avoided performing divisons and modulos on unsigned integers in the shaders.
  • Glenn optimized the brush_image shader.
  • Glenn changed box-shadow to be a clip source instead of a picture, providing some simplifications and better batching.
  • Glenn reduced the number of clip store allocations.
  • Martin removed inverse matrix computations from the shaders.
  • Kvark fixed a bug with zero-sized render tasks.

Notable Gecko changes

  • Snorp, Heftig and Martin Stránský fixed broken rendering with nvidia graphics cards on Linux. WebRender is now usable with nvidia GPUs on Linux.
  • Kvark fixed flickering issues with ANGLE on Windows.
  • Sotaro fixed a crash, and another one.
  • Andrew made background SVGs use blob images instead of the basic layer manager fallback, yielding nice perf improvements.
  • Nical made gradients rely on WebRender’s pixel snapping instead of snapping incorrectly during display list building.
  • Sotaro fixed a bug when moving a tab containing a canvas to a different window.
  • Sotaro fixed a jiggling issue on Windows when resizing the browser window.
  • Nical fixed a race condition in the frame throttling logic causing windows to not paint intermittently.
  • Andrew avoided using the fallback logic for images during decoding.
  • Sotaro fixed an FFI bug causing images to not render on 32-bit Windows.
  • Jeff simplified the memory management of WebRenderUserData.

Enabling WebRender in Firefox Nightly

In about:config, just set “gfx.webrender.all” to true and restart the browser. No need to toggle any other prefs.

Note that WebRender can only be enabled in Firefox Nightly. We will make it possible to enable it on other release channels as soon as we consider it stable enough to reach a broader audience.

The Firefox Frontier: March Add(on)ness: Tab Centre Redux (2) vs Tabby Cat (3)

Do you like your tabs on the side, or with a side of cats? Tell us in today’s March Add(on)ness… Tab Center Redux (Customization): Move your tabs to the side … Read more

The post March Add(on)ness: Tab Centre Redux (2) vs Tabby Cat (3) appeared first on The Firefox Frontier.

Don Marti: A good question, from Twitter

Good question on Twitter, but one that might take more than, what is it now, 280 characters? to answer.

Why do I pay attention to Internet advertising? Why not just block it and forget about it? By now, web ad revenue per user is so small that it only makes sense if you're running a platform with billions of users, so sites are busy figuring out other ways to get paid anyway.

To the generation that never had a print magazine subscription, advertising is just a subset of "creepy shit on the Internet." Who wants to do that for a living? According to Charlotte Rogers at Marketing Week, the lack of information out there explaining the diverse opportunities of a career in marketing puts the industry at a distinct disadvantage in the minds of young people. Marketing also has to contend with a perception problem among the younger generation that it is intrinsically linked with advertising, which Generation Z notoriously either distrust or dislike.

Like the man says, Where Did It All Go Wrong?

The answer is that I'm interested in Internet advertising for two reasons.

  • First, because I'm a Kurt Vonnegut fan and have worked for a magazine. Some kinds of advertising can have positive externalities. Vonnegut was able to quit his job at a car dealership, and write full time, because advertising paid for original fiction in Collier's magazine. How did advertising lose its ability to pay for news and cultural works? Can advertising reclaim that ability?

  • Second, because most of the economic role of advertising is in an area that Internet advertising hasn't been able to get a piece of. While Internet advertising plays a game of haha, look what I tricked you into clicking on for chump change, the real money is in signal-carrying advertising that helps build brand reputation. Is it possible to make Internet advertising into a medium that can get a piece of the action?

Maybe make that three reasons. As long as Internet advertising fails to pull its weight in either supporting news and cultural works or helping to send a credible economic signal for brands then the scams, malware and mental manipulation will only continue. More: World's last web advertising optimist tells all!

The Servo Blog: This Week In Servo 108

In the last week, we merged 89 PRs in the Servo organization’s repositories.

We have been working on adding automated performance tests for the Alexa top pages, and thanks to contributions from the Servo community we are now regularly tracking the performance of the top 10 websites.

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Notable Additions

  • UK992 embedded the Servo icon in Windows nightly builds.
  • kwonoj added a hotkey to perform a WebRender capture.
  • Xanewok removed all traces of an unsafe API from the JS bindings.
  • jdm tracked down an intermittent build problem that was interfering with CI.
  • nox fixed a panic that could occur when navigating away from pages that use promises.
  • lsalzman fixed a font-related memory leak in WebRender.
  • Xanewok implemented APIs for storing typed arrays on the heap.
  • alexrs extracted parts of homu’s command parsing routine to add automated tests.
  • Xanewok implemented support for generating bindings for WebIDL APIs that use typed arrays.
  • kvark simplified the management of shaders in WebRender.
  • oOIgnitionOo added Windows support for running nightlies through the mach tool.
  • paul added more typed units to APIs related to the compositor.
  • mrobinson made binary capture recording work again.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

The Firefox Frontier: March Add(on)ness: Reverse Image Search (2) Vs Unpaywall (3)

Two strong competitors in today’s March Add(on)ness that will help us learn if finding out where an image is from is better than being able to access millions of open-access … Read more

The post March Add(on)ness: Reverse Image Search (2) Vs Unpaywall (3) appeared first on The Firefox Frontier.

The Firefox Frontier: March Add(on)ness: Momentum (2) vs Grammarly (3)

The pen is mightier than the sword, but is personal organization more powerful than having to worry about grammar? You tell us in today’s March Add(on)ness. Momentum (Optimization): With Momentum, … Read more

The post March Add(on)ness: Momentum (2) vs Grammarly (3) appeared first on The Firefox Frontier.

The Firefox Frontier: March Add(on)ness: uBlock (1) vs Kimetrack (4)

Decide who will be the ultimate privacy extension in today’s Add-on Madness… uBlock Origin (Privacy, tracking): uBlock Origin is an efficient blocker. Easy on CPU and memory. Nobody likes to … Read more

The post March Add(on)ness: uBlock (1) vs Kimetrack (4) appeared first on The Firefox Frontier.

Daniel Pocock: OSCAL'18, call for speakers, radio hams, hackers & sponsors reminder

The OSCAL organizers have given a reminder about their call for papers, booths and sponsors (ask questions here). The deadline is imminent but you may not be too late.

OSCAL is the Open Source Conference of Albania. OSCAL attracts visitors from far beyond Albania (OpenStreetmap). As the biggest Free Software conference in the Balkans, it draws people from many neighboring countries including Kosovo, Montenegro, Macedonia, Greece and Italy. OSCAL has a unique character unlike any other event I've visited in Europe, and many international guests keep returning every year.

A bigger ham radio presence in 2018?

My ham radio / SDR demo worked there in 2017 and was very popular. This year I submitted a fresh proposal for a ham radio / SDR booth and sought out local radio hams in the region with an aim of producing an even more elaborate demo for OSCAL'18.

If you are a ham and would like to participate please get in touch using this forum topic or email me personally.

Why go?

There are many reasons to go to OSCAL:

  • We can all learn from their success with diversity. One of the finalists for Red Hat's Women in Open Source Award, Jona Azizaj, is a key part of their team: if she is announced the winner at Red Hat Summit the week before OSCAL, wouldn't you want to be in Tirana when she arrives back home for the party?
  • Warm weather to help people from northern Europe to thaw out.
  • For many young people in the region, their only opportunity to learn from people in the free software community is when we visit them. Many people from the region can't travel to major events like FOSDEM due to the ongoing outbreak of immigration bureaucracy and the travel costs. Many Balkan countries are not EU members and incomes are comparatively low.
  • Due to the low living costs in the region and the proximity to larger European countries, many companies are finding compelling opportunities to work with local developers there and OSCAL is a great place to make contacts informally.

Sponsors sought

Like many free software communities, Open Labs is a registered non-profit organization.

Anybody interested in helping can contact the team and ask them for whatever details you need. The Open Labs Manifesto expresses a strong commitment to transparency which hopefully makes it easy for other organizations to contribute and understand their impact.

Due to the low costs in Albania, even a small sponsorship or donation makes a big impact there.

If you can't make a direct payment to Open Labs, you could also potentially help them with benefits in kind or by contributing money to one of the larger organizations supporting OSCAL.

Getting there without direct service from Ryanair or Easyjet

These notes about budget airline routes might help you plan your journey. It is particularly easy to get there from major airports in Italy. If you will also have a vacation at another location in the region it may be easier and cheaper to fly to that location and then use a bus to Tirana.

Making it a vacation

For people who like to combine conferences with their vacations, the Balkans (WikiTravel) offer many opportunities, including beaches, mountains, cities and even a pyramid (in Tirana itself).

It is very easy to reach neighboring countries like Montenegro and Kosovo by coach in just 3-4 hours. For example, there is the historic city of Prizren in Kosovo and many beach resorts in Montenegro.

If you go to Kosovo, don't miss the Prishtina hackerspace.

Tirana Pyramid: a future hackerspace?

Cameron Kaiser: TenFourFox FPR6 SPR1 coming

Stand by for FPR6 Security Parity Release 1, due to the usual turmoil following Pwn2Own, in which the mighty typically fall, and this year Firefox did. We track these advisories and always plan to have a patched build of TenFourFox ready in parallel with Mozilla's official chemspill release; I have already backported the patch and tested it internally.

The bug in question would require a TenFourFox-specific exploit to be useful, but is definitely exploitable, and fortunately was easily repaired. The G5 will chug overnight and have builds tomorrow and heat the rear of the house all at the same time.

Michael Comella: Addressing GitHub Problems: "What PRs are open for this issue?"

When looking at a GitHub issue, I often need to know, “What PRs are open for this issue?” I wrote the GitHub Issue Hoister add-on to address my problem.

It hoists those “mcomella added a commit that references this issue” links to the top of an issue page to make them easier to access and see at a glance:

An example of the Issue Hoister in use

Check out the brief tutorial for caveats and more details, or just download it off AMO. For bugs/issues, file them on GitHub.

The Mozilla Blog: Prepare to be Creeped Out

Mozilla Fellow Hang Do Thi Duc joins us to share her Data Selfie art project. It collects the same basic info you provide to Facebook. Sharing this kind of data about yourself isn’t something we’d normally recommend. But, if you want to know what’s happening behind the scenes when you scroll through your Facebook feed, installing Data Selfie is worth considering. Use at your own risk. If you do, you might be surprised by what you see.

Hi everyone, I’m Hang,

Ever wonder what Facebook knows about you? Why did that ad for motorcycle insurance pop up when you don’t own a motorcycle? Why did that ad for foot cream pop up right after you talked about your foot itching?

I wondered. So I created something to help me find out. I call it Data Selfie. It’s an add-on–a little piece of software you download to use with your web browser–that works in both Firefox and Chrome.

How does it work? Every time you like, click, read, or post something on Facebook, Facebook knows. Even if you don’t comment or share much, Facebook learns about you as you scroll through your feed.

My add-on does something similar. It’s here to help you understand how your actions online can be tracked. It does this by collecting the same information you provide to Facebook, while still respecting your privacy.

NOTE: The add-on is available in Firefox too.

Want to see what your Data Selfie looks like? Here’s how:

  1. Go here:
  2. Download the Firefox or Chrome add-on
  3. Check out my privacy policy if you want to know more about how this works.
  4. You’ll see an eye icon in the upper right corner of your browser. Click on it.
  5. From the list, click “Your Data Selfie.”

You’ll see there’s not much to your Data Selfie yet. Just browse Facebook as you normally do. It takes about a week of regular Facebook use for your Data Selfie to gather enough information to give you a good idea of what Facebook might know about you.

Thanks! I hope you enjoy your Data Selfie.

Hang Do Thi Duc
Mozilla Fellow

PS. My Data Selfie says I’m a laid-back, liberal man who isn’t likely to have a gym membership and prefers style when buying clothes. Pretty accurate, actually.

The post Prepare to be Creeped Out appeared first on The Mozilla Blog.

Air Mozilla: Reps Weekly Meeting, 15 Mar 2018

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla VR Blog: Building Mixed Reality spaces for the web

One of the primary goals of our Social Mixed Reality team is to enable and accelerate access to Mixed Reality-based communication. As mentioned in our announcement blog post, we feel meeting with others around the world in Mixed Reality should be as easy as sharing a link, and creating a virtual space to spend time in should be as easy as building your first website. In this post, we wanted to share an early look at some work we are doing to help achieve the second goal, making it easy for newcomers to create compelling 3D spaces suited for meeting in Mixed Reality.

Anyone who has gone through the A-Frame tutorials and learned the basics of creating boxes, spheres, and other entities soon finds themselves wanting to build out a full 3D environment. Components such as the a-frame environment component can be a good start to adding life to the initial black void of an empty virtual space, but that mostly takes care of ‘background’ aspects to the space such as the sky, ground surface, and far-off objects like trees and clouds.
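
As a reminder of what those basics look like, here is a tiny scene built from script; the same thing is usually written as declarative A-Frame markup, the values are just hypothetical examples, and the snippet assumes the A-Frame library is already loaded on the page.

// A hypothetical minimal A-Frame scene created dynamically; equivalent to a few lines of <a-scene> markup.
const scene = document.createElement("a-scene");

const box = document.createElement("a-box");
box.setAttribute("position", "-1 0.5 -3");
box.setAttribute("color", "#4CC3D9");

const sphere = document.createElement("a-sphere");
sphere.setAttribute("position", "0 1.25 -5");
sphere.setAttribute("radius", "1.25");
sphere.setAttribute("color", "#EF2D5E");

scene.appendChild(box);
scene.appendChild(sphere);
document.body.appendChild(scene);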

Beyond that, people quickly find themselves facing a roadblock: the kind of space they want to make is often more ambitious than what can be done with a few simple shapes, and needs to be more architectural and grounded in reality. To build such a space today requires a wide variety of knowledge and skills, from the obvious ones like modelling and texturing, to those more specific to Mixed Reality such as optimizing rendering performance and properly designing the architecture for scale and comfort in a headset.

If we want building your first space to be as easy as building your first website, there is clearly a lot of work to be done! So, what can we do to make it easier?

Modular by Design

What do Lego and IKEA have in common? Well, aside from originating from Scandinavian countries, they both make products whose designs embrace modularity to great effect. Through this modularity, just about anyone can put together a desk from IKEA or a spaceship from Lego, and a wide variety of products can be made due to the versatility of the parts. Why not apply these same ideas to building virtual spaces?

We are working on a system, all of which will be open sourced and freely available, which will allow anyone to create virtual spaces using a set of premade architectural elements that can be combined in countless ways. We’re not the first to come up with such a system, it’s been an approach growing in popularity and sophistication within game studios for building large, continuous worlds. In our case, the pieces in our system all follow a strict set of metrics that make the construction process as simple as possible and remove the guesswork involved in assembling a scene. The result is that a person with basic knowledge can quickly put together a virtual space that feels more like a real place and less like a world made up of simple shapes. For more experienced creators, the system can be used for rapid prototyping, allowing them to realize their ideas more quickly.

The most exciting part is that, combined with our other efforts, you’ll soon be able to visit the spaces you build with this system with anyone around the world, all from within Mixed Reality, by simply sharing a link.

Building Mixed Reality spaces for the web

Optimized for Mixed Reality

Creating experiences for Mixed Reality poses a unique set of challenges, such as the need to deliver high frame rates and a comfortable, immersive experience. Things can quickly fall apart when using assets that are too demanding for mobile devices or even lower-end PC hardware. Unfortunately, many assets you might obtain from various asset stores are often not optimized or designed for Mixed Reality experiences.

Our architectural modules are being built for Mixed Reality from the start. Vertex counts, texel density, and draw calls are just a few of the metrics we use to validate performance and to ensure that these assets can be used to build virtual spaces that run well on a wide range of devices. We have designed a grid system and an approach to composability that will ensure your space not only runs well, but also has proper scale and looks good within a Mixed Reality headset.

Our team’s mission is to enable access to Mixed Reality for communication, and this means we are committed to cross-platform compatibility, including lower-end devices. Our performance targets are chosen so that spaces designed with our system should have minimal rendering costs for even the lowest end mobile VR devices. This is hard work, but is necessary if we expect to allow everyone in the world to connect within this new medium, not just those who have access to high-end hardware.

Ultimately, the benefit of these efforts is that you, the creator, will spend less time worrying about graphics performance and basic design needs such as proper scale and proportion. Instead, you’ll be able to focus on what’s important: creating an amazing virtual space that people will want to spend many hours in together!

Building Mixed Reality spaces for the web

This interior, like the one above, was made in just a few hours.


Next steps

We hope this work will help end the feeling of being overwhelmed by the ‘blank canvas’ when starting your first virtual space and instead, empower you to create, iterate, and share your creations quickly while reaching as many people as possible. You can expect to see more announcements from us soon on how we’ll be releasing this and other work in trying to deliver on this promise. You can follow our progress at @mozillareality or join the conversation in the #social channel on the WebXR slack. We’ll see you there!

Mozilla Addons BlogEnter the Firefox Quantum Extensions Challenge

Firefox users love using extensions to personalize their browsing experience. Now, it’s easier than ever for developers with working knowledge of JavaScript, HTML, and CSS to create extensions for Firefox using the WebExtensions API. New and improved WebExtensions APIs land with each new Firefox release, giving developers the freedom to create new features and fine-tune their extensions.

You’re invited  to use your skill, savvy, and creativity to create great new extensions for the Firefox Quantum Extensions Challenge. Between March 15 and April 15, 2018, use Firefox Developer Edition to create extensions that make full use of available WebExtensions APIs for one of the prize categories. (Legacy extensions that have been updated to WebExtensions APIs, or Chrome extensions that have been ported to Firefox on or after January 1, 2018, are also eligible for this challenge.)

A panel of judges will select three to four finalists in each category, and the community will be invited to vote for the winners. We’ll announce the winners with the release of Firefox 60 in May 2018. Winners in each category will receive an iPad Pro and promotion of their extensions to Firefox users. Runners-up will receive a $250 USD Amazon gift card.

Ready to get started? Visit the challenge site for more information (including the official rules) and download Firefox Developer Edition.

Winners will be notified by the end of April 2018 and will be announced with the release of Firefox 60 in May 2018.

Good luck!

The post Enter the Firefox Quantum Extensions Challenge appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.OrgFirefox Quantum Extensions Challenge

Firefox users love using extensions to personalize their browsing experience. Now, it’s easier than ever for developers with working knowledge of JavaScript, HTML, and CSS to create extensions for Firefox using the WebExtensions API . New and improved WebExtensions APIs land with each new Firefox release, giving developers the freedom to create new features and fine-tune their extensions.

You’re invited to use your skill, savvy, and creativity to create great new extensions for the Firefox Quantum Extensions Challenge . Between March 15 and April 15, 2018, use Firefox Developer Edition to create extensions that make full use of available WebExtensions APIs for one of the prize categories. (Legacy extensions that have been updated to WebExtensions APIs, or Chrome extensions that have been ported to Firefox on or after January 1, 2018, are also eligible for this challenge.)

A panel of judges will select three to four finalists in each category, and the community will be invited to vote for the winners. We’ll announce the winners with the release of Firefox 60 in May 2018. Winners in each category will receive an iPad Pro and promotion of their extensions to Firefox users. Runners-up will receive a $250 USD Amazon gift card.


Best in Tab Management & Organization

Firefox users love customizing their browser tabs. Create the next generation of user-friendly extensions to style, organize, and manage tabs.

Best Dynamic Themes

With the new theme API, developers can create beautiful and responsive dynamic themes to customize Firefox’s appearance and make them interactive. We’re looking for a dynamite combination of aesthetics and utility.

Best in Games & Entertainment

Extensions aren’t just for improving productivity — they’re also great for adding whimsy and fun to your day. We’re looking for high-performing, original ideas that will bring delight to Firefox users.

New & Improved APIs

So many new WebExtensions APIs have landed in the last few Firefox releases, and Firefox 60 will add even more. Let’s start with themes.

The current Theme API supports nearly 20 different visual elements that developers can customize. In Firefox 60, the list will grow to include the following items now in development:

But remember, your goal isn’t just to come up with a nice looking set of UI elements. Wow us with an extension that uses the Theme API to dynamically modify UI elements in order to create something that is visually stunning and equally useful.
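As a minimal sketch of what “dynamic” can mean here (the colors are arbitrary and the exact theme property names vary between Firefox versions, so treat this as illustrative rather than definitive):

// Shift the browser chrome between a light and a dark look based on the hour.
// Requires the "theme" permission in the extension's manifest.
function applyThemeForHour(hour) {
  const night = hour < 7 || hour >= 19;
  browser.theme.update({
    colors: {
      accentcolor: night ? "#1a1a2e" : "#f9f9fa",
      textcolor: night ? "#f9f9fa" : "#0c0c0d"
    }
  });
}

applyThemeForHour(new Date().getHours());
// Re-check once an hour so the theme follows the time of day.
setInterval(() => applyThemeForHour(new Date().getHours()), 60 * 60 * 1000);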

For tabs, several new APIs have been added, including:

The contextualIdentities API is not new, but it is unique to Firefox and may provide developers with some interesting tools for separating online identities. The same goes for the sidebar API, another unique feature of Firefox that allows developers to get creative with alternate user interface models.

Get Started

Winners will be notified by the end of April 2018 and will be announced with the release of Firefox 60 in May 2018.

Good luck!

The Firefox FrontierMarch Add(on)ness: Tree Style Tab (1) Vs Don’t Touch My Tabs (4)

It’s a head-to-head match up of tab customization for March Add(on)ness… Tree Style Tab Customization Tree Style Tabs opens new tabs as organized “children” of the current tab. Such “branches” … Read more

The post March Add(on)ness: Tree Style Tab (1) Vs Don’t Touch My Tabs (4) appeared first on The Firefox Frontier.

Gervase MarkhamPoetic License

I found this when going through old documents. It looks like I wrote it and never posted it. Perhaps I didn’t consider it finished at the time. But looking at it now, I think it’s good enough to share. It’s a redrafting of the BSD licence, in poetic form. Maybe I had plans to do other licences one day; I can’t remember.

I’ve interleaved it with the original license text so you can see how true, or otherwise, I’ve been to it. Enjoy :-)

Copyright (c) <YEAR>, <OWNER>
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions 
are met:

You may redistribute and use –
as source or binary, as you choose,
and with some changes or without –
this software; let there be no doubt.
But you must meet conditions three,
if in compliance you wish to be.

1. Redistributions of source code must retain the above copyright 
   notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright 
  notice, this list of conditions and the following disclaimer in the 
  documentation and/or other materials provided with the distribution.
3. Neither the name of the  nor the names of its 
   contributors may be used to endorse or promote products derived 
   from this software without specific prior written permission.

The first is obvious, of course –
To keep this text within the source.
The second is for binaries
Place in the docs a copy, please.
A moral lesson from this ode –
Don’t strip the copyright on code.

The third applies when you promote:
You must not take, from us who wrote,
our names and make it seem as true
we like or love your version too.
(Unless, of course, you contact us
And get our written assensus.)


One final point to be laid out
(You must forgive my need to shout):


When all is told, we sum up thus –
Do what you like, just don’t sue us.

Firefox NightlyThese Weeks in Firefox: Issue 34


Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

Project Updates


Activity Stream

Browser Architecture


Policy Engine

  • Marketing push for Firefox Quantum for ESR (aka Firefox 60) starting soon, which will be talking about this feature
  • YUKI “Piro” (from Tree Style Tabs) contributing to Policy Engine, which is great! Thank you!



Search and Navigation

Address Bar & Search

Sync / Firefox Accounts

Test Pilot

Web Payments

Air MozillaThe Joy of Coding - Episode 132

The Joy of Coding - Episode 132 mconley livehacks on real Firefox bugs while thinking aloud.

Hacks.Mozilla.OrgMaking WebAssembly better for Rust & for all languages

One big 2018 goal for the Rust community is to become a web language. By targeting WebAssembly, Rust can run on the web just like JavaScript. But what does this mean? Does it mean that Rust is trying to replace JavaScript?

The answer to that question is no. We don’t expect Rust WebAssembly apps to be written completely in Rust. In fact, we expect the bulk of application code will still be JS, even in most Rust WebAssembly applications.

This is because JS is a good choice for most things. It’s quick and easy to get up and running with JavaScript. On top of that, there’s a vibrant ecosystem full of JavaScript developers who have created incredibly innovative approaches to different problems on the web.

Rust logo and JS logo with a heart in between

But sometimes for specific parts of an application, Rust+WebAssembly is the right tool for the job… like when you’re parsing source maps, or figuring out what changes to make to the DOM, like Ember does.

So for Rust+WebAssembly, the path forward doesn’t stop at compiling Rust to WebAssembly. We need to make sure that WebAssembly fits into the JavaScript ecosystem. Web developers need to be able to use WebAssembly as if it were JavaScript.

But WebAssembly isn’t there yet. To make this happen, we need to build tools to make WebAssembly easier to load, and easier to interact with from JS. This work will help Rust. But it will also help all other languages that target WebAssembly.

Pipeline from compiling, to generating bindings, to packaging, to bundling

What WebAssembly usability challenges are we tackling? Here are a few:

  1. How do you make it easy to pass objects between WebAssembly and JS?
  2. How do you package it all up for npm?
  3. How do developers easily combine JS and WASM packages, whether in bundlers or browsers?

But first, what are we making possible in Rust?

Rust will be able to call JavaScript functions. JavaScript will be able to call Rust functions. Rust will be able to call functions from the host platform, like alert. Rust crates will be able to have dependencies on npm packages. And throughout all of this, Rust and JavaScript will be passing objects around in a way that makes sense to both of them.

Rust crate graph

So that’s what we are making possible in Rust. Now let’s look at the WebAssembly usability challenges that we need to tackle.

Q. How do you make it easy to pass objects between WebAssembly and JS?

A. wasm-bindgen

One of the hardest parts of working with WebAssembly is getting different kinds of values into and out of functions. That’s because WebAssembly currently only has two types: integers and floating point numbers.

This means you can’t just pass a string into a WebAssembly function. Instead, you have to go through a bunch of steps (sketched in code after this list):

  1. On the JS side, encode the string into numbers (using something like the TextEncoder API)
    Encoder ring encoding Hello into number equivalent
  2. Put those numbers into WebAssembly’s memory, which is basically an array of numbers
    JS putting numbers into WebAssembly's memory
  3. Pass the array index for the first letter of the string to the WebAssembly function
  4. On the WebAssembly side, use that integer as a pointer to pull out the numbers

And that’s only what’s required for strings. If you have more complex types, then you’re going to have a more convoluted process to get the data back and forth.

If you’re using a lot of WebAssembly code, you’ll probably abstract this kind of glue code out into a library. Wouldn’t it be nice if you didn’t have to write all that glue code, though? If you could just pass complex values across the language boundary and have them magically work?

That’s what wasm-bindgen does. If you add a few annotations to your Rust code, it will automatically create the code that’s needed (on both sides) to make more complex types work.

JS passing the string Hello to wasm-bindgen, which does all of the other work

This means calling JS functions from Rust using whatever types those functions expect:

#[wasm_bindgen]
extern {
    type console;

    #[wasm_bindgen(static = console)]
    fn log(s: &str);
}

#[wasm_bindgen]
pub fn foo() {
    console::log("Hello from Rust!");
}

… Or using structs in Rust and having them work as classes in JS:

// Rust
#[wasm_bindgen]
pub struct Foo {
    contents: u32,
}

#[wasm_bindgen]
impl Foo {
    pub fn new() -> Foo {
        Foo { contents: 0 }
    }

    pub fn add(&mut self, amt: u32) -> u32 {
        self.contents += amt;
        return self.contents
    }
}

// JS
import { Foo } from "./js_hello_world";
let foo = Foo.new();
assertEq(foo.add(10), 10);
foo.free();

… Or many other niceties.

Under the hood, wasm-bindgen is designed to be language-independent. This means that as the tool stabilizes it should be possible to expand support for constructs in other languages, like C/C++.

Alex Crichton will be writing more about wasm-bindgen in a couple of weeks, so watch for that post.

Q. How do you package it all up for npm?

A. wasm-pack

Once we put it all together, we have a bunch of files. There’s the compiled WebAssembly file. Then there’s all of the JavaScript — both dependencies and the JS generated by wasm-bindgen. We need a way to package them all up. Plus, if we’ve added any npm dependencies, we need to put those into the package.json manifest file.

multiple files being packaged up and published to npm

Again, it would be nice if this could be done for us. And that’s what wasm-pack does. It is a one-stop shop for going from a compiled WebAssembly file to an npm package.

It will run wasm-bindgen for you. Then, it will take all of the files and package them up. It will pop a package.json on top, filling in all of the npm dependencies from your Rust code. Then, all you need to do is npm publish.

Again, the foundations of this tool are language-independent, so we expect it to support multiple language ecosystems.

Ashley Williams will be writing more about wasm-pack next month, so that’s another post to watch for.

Q. How do developers easily combine JS and WASM, whether in bundlers, browsers, or Node?

A. ES modules

Now that we’ve published our WebAssembly to npm, how do we make it easy to use that WebAssembly in a JS application?

Make it easy to add the WebAssembly package as a dependency… to include it in JS module dependency graphs.

module graph with JS and WASM modules

Currently, WebAssembly has an imperative JS API for creating modules. You have to write code to do every step, from fetching the file to preparing the dependencies. It’s hard work.
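For a sense of what that imperative path involves, here is a minimal sketch (the import object and the myFunction export name are placeholders, not from any particular module):

// Fetch the file, compile it, and instantiate it with its dependencies by hand.
async function loadWasm(url) {
  const response = await fetch(url);
  const bytes = await response.arrayBuffer();
  const { instance } = await WebAssembly.instantiate(bytes, { /* imports go here */ });
  return instance.exports.myFunction;
}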

But now that native module support is in browsers, we can add a declarative API. Specifically, we can use the ES module API. With this, working with WebAssembly modules should be as easy as importing them.

import {myFunction} from "myModule.wasm"

We’re working with TC39 and the WebAssembly community group to standardize this.

But we don’t just need to standardize ES module support. Even once browsers and Node support ES modules, developers will still likely use bundlers. That’s because bundlers reduce the number of requests that you have to make for module files, which means it takes less time to download your code.

Bundlers do this by combining a bunch of modules from different files into a single file, and then adding a little bit of a runtime to the top to load them.

a module graph being combined into a single file

Bundlers will still need to use the JS API to create the modules, at least in the short term. But users will be authoring with ES module syntax. Those users will expect their modules to act as if they were ES modules. We’ll need to add some features to WebAssembly to make it easier for bundlers to emulate ES modules.

I will be writing more about the effort to add ES module integration to the WebAssembly spec. I’ll also be diving into bundlers and their support for WebAssembly over the coming months.


To be useful as a web language, Rust needs to work well with the JavaScript ecosystem. We have some work to do to get there, and fortunately that work will help other languages, too. Do you want to help make WebAssembly better for every language? Join us! We’re happy to help you get started :)


The Firefox FrontierMarch Add(on)ness: Video Download Helper (1) Vs Cookie AD (4)

It’s battle two of March Add(on)ness today and we have… Video DownloadHelper Media Video DownloadHelper is the easy way to download and convert Web videos from hundreds of YouTube-like sites. … Read more

The post March Add(on)ness: Video Download Helper (1) Vs Cookie AD (4) appeared first on The Firefox Frontier.

Daniel StenbergGAAAAAH

That’s the thought that ran through my head when I read the email I had just received.


You know the feeling when the realization hits you that you did something really stupid? And you did it hours ago and people already noticed so it’s too late to pretend it didn’t happen or try to cover it up and whistle innocently. Nope, none of those options were available anymore. The truth was out there.

I had messed up royally.

What triggered this sudden journey of emotions and sharp sense of pain in my soul, was an email I received at 10:18, Friday March 9 2018. The encrypted email pointed out to me in clear terms that there was information available publicly on the curl web site about the security vulnerabilities that we intended to announce in association with the next curl release, on March 21. (The person who emailed me is a member of a group that was informed by me about these issues ahead of time.)

In the curl project, we never reveal nor show any information about known security flaws until we ship fixes for them and publish their corresponding security advisories that explain the flaws, the risks, the fixes and work-arounds in detail. This of course in the name of keeping users safe. We don’t want bad guys to learn about problems and flaws until we also offer fixes for them. That is, unless you screw up like me.

It took me a few minutes until I paused the work I was doing at the moment and actually read the email, but once I did I acted immediately and at 10:24 I had reverted the change on the web site and purged the URL from the CDN so the information was no longer publicly visible there.

The entire curl web site is however kept in a public git repository, so while the sensitive information was no longer immediately notable on the site, it was still out of the bag and there was just no taking it back. Not to mention that we don’t know how many people that already updated their git clones etc.

I pushed the particular file containing the “extra information” to the web site’s git repository at 01:26 CET the same early morning and since the web site updates itself in a cronjob every 20 minutes we know the information became available just after 01:40. At which time I had already gone to bed.

The sensitive information was displayed on the site for 8 hours and 44 minutes. The security page table showed these lines at the top:

# Vulnerability Date First Last CVE CWE
78 RTSP RTP buffer over-read February 20, 2018 7.20.0 7.58.0 CVE-2018-1000122 CWE-126: Buffer Over-read
77 LDAP NULL pointer dereference March 06, 2018 7.21.0 7.58.0 CVE-2018-1000121 CWE-476: NULL Pointer Dereference
76 FTP path trickery leads to NIL byte out of bounds write March 21, 2018 7.12.3 7.58.0 CVE-2018-1000120 CWE-122: Heap-based Buffer Overflow

I only revealed the names of the flaws and their corresponding CWE (Common Weakness Enumeration) numbers; the full advisories were thankfully not exposed, as the links to them were broken. (Oh, and the date column shows the dates we got the reports, not the date of the fixed release which is the intention.) We still fear that the names alone plus the CWE descriptions might be enough for intelligent attackers to figure out the rest.

As a direct result of me having revealed information about these three security vulnerabilities, we decided to change the release date of the pending release curl 7.59.0 to happen one week sooner than previously planned. To reduce the time bad actors would be able to abuse this information for malicious purposes.

How exactly did it happen?

When approaching a release day, I always create local git branches  called next-release in both the source and the web site git repositories. In the web site’s next-release branch I add the security advisories we’re working on and I add/update meta-data about these vulnerabilities etc. I prepare things in that branch that should go public on the release moment.

We’ve added CWE numbers to our vulnerabilities for the first time (we are now required to provide them when we ask for CVEs). Figuring out these numbers for the new issues made me think that I should also go back and add relevant CWE numbers to our old vulnerabilities as well and I started to go back to old issues and one by one dig up which numbers to use.

After having worked on that for a while, for some of the issues it is really tricky to figure out which CWE to use, I realized the time was rather late.

– I better get to bed and get some sleep so that I can get some work done tomorrow as well.

Then I realized I had been editing the old advisory documents while still being in the checked out next-release branch. Oops, that was a mistake. I thus wanted to check out the master branch again to push the update from there. git then pointed out that the file couldn’t get moved over because of reasons. (I forget the exact message but it happened because I had already committed changes to the file in the new branch that weren’t present in the master branch.)

So, as I wanted to get to bed and not fight my tools, I saved the current (edited) file under a different name, checked out the old file version from git again, changed branch and moved the renamed file back again (without a single thought that this file now contained three lines too many that should only be present in the next-release branch), committed all the edited files and pushed them all to the remote git repository… boom.

You’d think I would…

  1. know how to use git correctly
  2. know how to push what to public repos
  3. not try to do things like this at 01:26 in the morning

curl 7.59.0 and these mentioned security vulnerabilities were made public this morning.

Daniel StenbergHere’s curl 7.59.0

We ship curl 7.59.0 exactly 49 days since the previous release (a week shorter than planned because of reasons). Download it from here. Full changelog is here.

In these 49 days, we have done and had..

6 changes(*)
78 bug fixes (total: 4337)
149 commits (total: 22,952)
45 contributors, 20 new (total: 1,702)
29 authors (total: 552)
3 security fixes (total: 78)

This time we’ve fixed no less than three separate security vulnerabilities:

  1. FTP path trickery security issue
  2. LDAP NULL dereference
  3. RTSP RTP buffer over-read

(*) = changes are things that don’t fix existing functionality but actually add something new to curl/libcurl. New features mostly.

The new things this time probably won’t be considered earth shattering, but they’re still a bunch of useful stuff:


The ability to specify public key pinning has been around for a while for regular servers, and libcurl has had the ability to pin proxies’ keys as well. This change makes sure that users of the command line tool also get that ability. Make sure your HTTPS proxy isn’t MITMed!


As part of our effort to clean up our use of ‘long’ variables internally and make sure we don’t have year-2038 problems, this new option was added.


This popular libcurl option, which allows applications to populate curl’s DNS cache with custom IP addresses for host names, was improved so that you can now add multiple addresses per host name. This lets transfers that use it behave even more like they would with normal name resolution.


As a true HTTP swiss-army knife tool and library, curl lets you toggle and tweak almost all aspects, timers and options that are used. This libcurl option has a new corresponding curl command line option, and allows the user to set how long after the initial (IPv6) connect call the second (IPv4) connect is invoked in the happy eyeballs connect procedure. The default is 200 milliseconds.

Bug fixes!

As usual we fixed things all over. Big and small. Some of the ones that I think stuck out a little were the fix for building with OpenSSL 0.9.7 (because you’d think that portion of users should be extinct by now) and the fix to make configure correctly detect OpenSSL 1.1.1 (there are beta releases out there).

Some application authors will appreciate that libcurl now for the most part detects if it gets called from within one of its own callbacks and returns an error about it. This is mostly to save these users from themselves, as doing this already risked damaging things before. There are some functions that are still allowed to get called from within callbacks.

Ehsan AkhgariAn overview of online ad fraud

I have researched various aspects of the online advertisement industry for a while, and one of the fascinating topics that I have come across which I didn’t know too much about before is ad fraud.  You may have heard that this is a huge problem as this topic hits the news often, and after learning more about it, I think of it as one of the major threats to the health of the Web, so it’s important for us to be more familiar with the problem.

People have done a lot of research on the topic but most of the material uses the jargon of the ad industry so they may be inaccessible to those who aren’t familiar with it (I’m learning my way through it myself!) and also you’d need to study a lot to put a broad picture of what’s wrong together, so I decided to summarize what I have learned so far, expressed in simple terms avoiding jargon, in the hopes that it’s helpful.  Needless to say, none of this should be taken as official Mozilla policy, but rather this is a hopefully objective summary plus some of my opinions after doing this research at the end.

How ad fraud works

Fraudsters have always existed in all walks of life, looking for easy ways of making money.  Online ad fraud provides an appealing avenue for fraudsters for two reasons.  One is that once they have a working system capable of generating revenue, they can easily scale it up with almost no extra effort involved, so this gives them the ability to generate a lot of revenue.  And we’re talking a lot here.  To give you a sense of the scale, the infamous Methbot operation, which has been well documented, was generating $3-5 million USD per day at some point.  The other reason is that there is relatively low risk associated with online ad fraud, since depending on the jurisdiction, online ad fraud falls into a legal gray area, and also doesn’t involve physical risk as opposed to many other types of fraudulent activities.

Ad fraud has been made possible through abusing the quality metrics the ad industry uses to assess the effectiveness of marketing campaigns.  For example, historically metrics such as time spent on page, or how often people clicked on an ad (click-through rate) were used, which were trivial to game programmatically.  Even when more sophisticated metrics such as percentage of customers achieving a specific marketing goal, such as buying something or signing up for a newsletter were employed, these were implemented through mechanisms such as invisible tracking pixels (1×1 invisible GIFs sending some tracking cookies to the server) which again is trivial to game.  These metrics in practice are gamed so much that high rates on these metrics are more associated with bot traffic than actual human customers!

A typical ad fraud scenario today works by automating the process of generating traffic designed to game one of these metrics, and run that on bots across a botnet.  These are bots that attempt to act like a human to avoid being detected as a bot (and being block listed or punished by ad networks).  These bots also usually aren’t simple scripts.  They are usually full browser environments, which are either controlled from the outside environment (e.g., through sending the browser  mouse/keyboard events, or through embedding APIs) or even by modifying an open source browser!  This allows the bot to perform actions on the page, such as add items to a shopping cart, or click on an ad, etc.

It’s worth explaining how these botnets are typically run.  Botnets usually consist of many hijacked computers connected to the Internet around the world, typically taken over by malware.  In fact, a large part of the malware distributed on the Internet is for delivering ad fraud.  Hijacking the computer allows the fraudster access to the unique IPs of real users, which is helpful for the bot to masquerade as a real human.  Malware is usually installed in one of three ways: through Flash vulnerabilities, browser exploits and social engineering.  Thankfully Flash is on its path to demise.  Browser exploits are a continued challenge which we can directly impact.  Social engineering works by tricking the user into downloading software, e.g. through downloading games, pirated Photoshop copies, etc.  It’s important to note how there is no absolute path toward closing all the loopholes in the ways in which people’s machines get infected by bots.

Botnets can perform things other than ad fraud, such as distributed denial of service (DDoS) attacks, online banking fraud, stealing credit card information, sending spam, etc.  But let’s only focus on ad fraud.  Typically an end-to-end pipeline for ad fraud looks like this:

  • User’s machine gets infected by malware and bot engine gets installed
  • Bots are instructed to visit high quality sites to pick up the desired tracking cookies (payout opportunity #1)
  • Bots are then instructed to visit fake site setup by the botnet operator to display ads (payout opportunity #2)

The first payout opportunity for the botnet operator is selling bot traffic to website operators.  When website operators are looking for ways to increase traffic to their site, a lot of them resort to purchasing traffic.  Unfortunately, a lot of the purchased traffic sources available are either partly or completely bot traffic that come from ad fraud bots.  (In some cases the sites end up purchasing this bot traffic unknowingly.)  The second payout opportunity for botnet operators is when their bots achieve the goal of the marketing metric they’re gaming (e.g., display an ad, or click on it, etc.).

One way to think of ad fraud is finding ad models where a user is tracked from point A to B, where some action at point B achieves a payout (such as displaying an ad on a website, otherwise known as an ad impression), and automating this process and scaling it up across a botnet.  A botnet is typically a network of machines compromised through malware; these could be anyone’s computers at home or at work.  There are also botnets that run in data centers; that’s the preferred method if the bot doesn’t get detected when run inside the data center through simple checks such as IP address range checks.

A popular example is targeting the ad retargeting campaigns where a business buys ads from an ad network for products that customers have tried to buy on online stores.  The way that this works is the bot pretends to be a customer by visiting online store websites, searching for products, placing items into the shopping cart, then going to fake sites that have been specifically set up to serve ads from the same ad network and click on the retargeting campaign ads that business has bought.  There is a detailed explanation of this setup with graphics available here which I recommend checking out.

Of course there are other types of ad fraud that don’t target tracking based models of advertisement.  Examples include:

  • Ad stacking: the practice of loading several ads on top of each other so that only one of them is visible to the user but the fraudster gets paid for displaying all of them
  • Pixel stuffing: the practice of loading one web page as a 1×1 pixel iframe in another web page.  Typically the embedder web page is a shady website which is embedding the fraudster’s high quality website to drive up the ad revenue from the ads displayed there.
  • Domain spoofing: some online advertising involves an auction phase before displaying an ad, and during this phase the fraudster can use the domain name of a high quality site to bid for ads and then display them on a shady site
  • Location fraud: spoofing the real user’s location to trick marketing campaigns specific to geographic locations

There are other ad fraud methods and fraudsters are continually coming up with newer ways of defrauding the online advertisers.

How big of a problem is online ad fraud

A lot of research has been done to try to estimate the total size of the online ad fraud revenue.  This is interesting to know for some advertisers since money spent on bots viewing and clicking on ads is money spent on ineffective advertisement.  Typically the way this research is performed is by measuring the size of the fraud in one specific part of the ad industry and then extrapolating based on that.  Based on that, latest estimates for last year (2017) have been raised to around $16.4 billion.  To give you a sense of the scale of this number, the IAB estimated the revenue of Internet advertising in the US in the first half of 2017 to be $40.1 billion.  This is also a growing problem, and the more recent growth has been seen in mobile, using technologies such as Android test automation software to spawn botnets running on thousands of virtual devices running inside emulators.

Furthermore, as explained above, the characteristics of botnets mean that ad fraud impacts more than the ad industry.  This problem impacts consumer device security as it incentivizes malware authors to target normal users to be able to hijack their machines, and it also is harmful to the performance of web pages (see tricks like ad stacking or pixel stuffing which incurs extra needless load on web pages).

What can we do about online ad fraud?

If you have read this far through the post, you should probably be asking yourself, what can be done about online ad fraud, if anything?  And what if anything can a web browser do to help with this problem?

A few years ago, the ad industry started to wake up to the existence of this massive issue and has started some countermeasures against the different common fraud types that exist.  One common technique among almost all the deployed fraud detection mechanisms is trying to identify human traffic vs. bot traffic.  There are a variety of approaches for this.  The simplest ones only look at the trail left by the traffic at the network level, such as by analyzing HTTP or TCP/IP traffic logs.  This has of course proven insufficient as bots have become more advanced, so fraud detection technologies have moved to running their diagnostics code in JavaScript as part of the code responsible for serving advertisements on web pages.  Such code looks at many different data sources, such as things that the browser exposes to the programmatic environment the JavaScript code is running in, to detect whether the code is running on a real browser or on a modified browser used for a bot, or doing more advanced analyses such as listening for mouse movements on the page and analyzing the coordinates to see whether they follow typical bot-generated patterns (bots are very good at moving the mouse in precise straight lines, humans not so much!).  Even more sophisticated approaches use various anomaly detection algorithms to try to find some bit of information from the traffic that is “unusual” and classify human vs. bot traffic based on that.
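As a toy illustration of the mouse-movement idea (this is not any vendor’s actual detection code, just a sketch of the concept), a page script could flag pointer motion that stays suspiciously close to a perfect straight line:

// Toy heuristic: collect recent mouse positions and flag motion that is "too straight".
const points = [];
document.addEventListener("mousemove", (e) => {
  points.push({ x: e.clientX, y: e.clientY });
  if (points.length > 50) points.shift();
});

function looksLikeABot() {
  if (points.length < 3) return false;
  const a = points[0];
  const b = points[points.length - 1];
  const len = Math.hypot(b.x - a.x, b.y - a.y) || 1;
  // Distance of each sample from the straight line between the first and last
  // point; humans wobble, simple bots often do not.
  const maxDeviation = Math.max(...points.map((p) =>
    Math.abs((b.y - a.y) * p.x - (b.x - a.x) * p.y + b.x * a.y - b.y * a.x) / len
  ));
  return maxDeviation < 1;
}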

The more you read about ad fraud detection and prevention technologies, the more interesting and advanced techniques you’ll find that are being deployed against these bots all the time.  That may bring up the following question in your mind: with all this great anti-bot technology, why do we still see so much bot generated online ad fraud, and why is that an increasing and not a decreasing trend?  The reasons are… depressingly simple:

  • Such technology is deployed too late, so a bot developer finds a new technique that no fraud detection software catches, and in a matter of weeks to a couple of months they could have made hundreds of millions of dollars with it.  Once fraud detection software catches up, they’d move on to the next technique.
  • Such technology is deployed in the wrong places.  The advertising ecosystem is massive, with many countries and many companies involved in it, and not all these actors are using the same anti-fraud technology.  A bot that is detected and blocked in one place of the ecosystem may work well elsewhere.

Such is the nature of all cat and mouse games like this.  The bad actors just move on to find the next weak link in your chain once you find them in one place, and keep doing there what they were doing before.  A game of whack-a-mole without an end in sight.

There are some types of online ad fraud that the browser, through different ways of bending the rules of the Web Platform, could potentially null out.  For example cheap tricks like ad stacking and pixel stuffing are at least in theory within the realm of the control of web browsers.  But again, we are playing a game of whack-a-mole.  If browsers only move to close those vectors, the fraudsters will move to other existing possibilities of committing online ad fraud, since the other doors would be left wide open for them.

Can the rules of the game be changed?

Without being able to detect the bots at the right places at the right times, it seems pretty hopeless to try to address this issue in the long run.  But before giving up all hope and declaring defeat, let’s look at the bot-generated online ad fraud issue again and this time break the problem down to its fundamental building blocks:

  • Advertising networks track a user’s online browsing history to be able to show them, on a less expensive website, an ad that would otherwise be served on a more expensive website.
  • This tracking is typically done by setting a third-party cookie and saving it on their computer, or computing a fingerprint of their browser and saving it on a remote server (typically also tied to a computer).
  • This setup is used to represent human users by advertising networks.
  • Since there is nothing here that actually ties any of this to a real human, specialized software (bots) can simulate this all outside of the normal web browser, get ads served to them and make money based on that.

This is the how, but it’s also important to remember why the fraudsters do this: they want an easy way to make money.

Note the combination of the why and the how: we have a situation where a group of people (fraudsters) are incentivized for financial gain to leverage a huge design flaw (usage of cookies to represent a token tied to a real human) for gaming how the ad industry is set up to serve advertisements.

But what if we lived in a world where we used a different model of online advertisement, such as, a signal-based advertising model, where the value proposition of advertising comes from the advertiser communicating their commitment to the product for a long time by spending money advertising it (this is a nice post countering this model of advertisement against the tracking-based model).  That would take away the incentive for the fraudsters to continue to develop new fraud bot technology, by making it financially worthless to view ads or click on them using a bot.  The reason is that in such a world, ads that would show up on high quality sites would be more expensive and ads that would show up on low-quality sites that only the bots would visit would in fact not be something that anyone would be paying for!  So even if some fraudster would spend the time and money required to develop a new one of these bots in such a world, they wouldn’t be able to make any money from it — and they would need to go find some other industry to defraud.

How to make the online ad industry switch to a different advertisement model?  Well, as mentioned before the ad industry is an ecosystem with many players, but this is an opportunity.  Gradually, consumers have demanded more control over their personal data online.  We’ve seen this have some impact on the legal scene with the European Union about to enforce the GDPR, and at the Web consumer level the market demand has turned into many privacy extensions and browser features.  It’s only reasonable to expect more in this space as long as there is clear consumer demand for more control over sharing of private data.  This means that gradually, it will become harder and harder for these advertising networks to continue with the current practices of constructing the user’s browsing history on their servers and target them to serve ads.  And the continued existence of online ad fraud means that advertisers who actually pay for online advertisement will continue to bleed marketing budgets going to fraudulent bot traffic.  As we expect these trends to continue on their current trajectories, perhaps some day soon marketers will start to put 2 and 2 together and incrementally switch to advertising models that are more compatible with the consumer demands around sharing of data, which also happen to be the right models if you’re more interested to target humans with your advertisement than bots.

Key Take-Aways

This was a long article, and I do hope you’ve made it this far through.  Here is a TL;DR section highlighting some of the key points discussed hopefully serving as a useful summary.

  • Online ad fraud is a massive and growing problem.
  • Bots commit much of online ad fraud.  Some run in data centers, some on real users’ computers (laptops/desktops/phones).
  • More online ad fraud is shifting to real users’ computers to avoid fraud detection, and this is typically done through malware distribution.
  • Online ad fraud is a big reason behind security vulnerabilities exploited on real users’ computers due to the usage of malware to deliver the bots.
  • While some online ad fraud happens inside the classic browser environment, most of it happens outside of it.
  • Online ad fraud is very profitable, and runs low risks for the fraudsters.
  • A large part of online ad fraud is incentivized via online tracking of users browsing history by advertisement networks.
  • Ad fraud detection and prevention techniques focusing on bot traffic analysis have been tried and have mostly been unsuccessful at dealing with the large-scale problem.
  • To address this issue long-term, it seems necessary to start to focus on the incentives behind ad fraud to some extent.

K Lars LohnThings Gateway, Part 7 - IKEA TRÅDFRI

In this series of postings, I've been setting up, configuring, and playing with IoT devices through the experimental Things Gateway from Mozilla.  I've covered the generic Zigbee and Z-Wave devices, the Philips Hue devices, and the TP-Link WiFi devices.  Today, I add IKEA TRÅDFRI to this circus.

Of course, in this series, I've also been doing a bit of editorializing.  I was critical of the TP-Link devices because their security model requires the end user to just trust them.  I'm critical of the IKEA TRÅDFRI for a physical safety reason.  What does the word TRÅDFRI mean?  I'm assuming it is a Swedish word that means "severe blood loss from slashed wrists" because that is what is likely to happen when opening the package.  The clamshell plastic that entombs their products is difficult to open with anything short of a chainsaw.  My kitchen scissors wouldn't do the job and I had to resort to garden pruning shears and that left dangerously sharp pieces that drew blood.  Be careful.

However, the products themselves have a lot of positive aspects once you manage to liberate them from their packaging.  IKEA's decision to not implement their own method of remote access from outside the home is great.  The Android and iOS apps cannot operate the IKEA devices remotely.  That is a big plus for data security.  It also means that the IKEA corporation is apparently not monitoring the use of your light bulbs.

Another advantage to IKEA TRÅDFRI is affordability.  These are currently the least expensive Zigbee compatible lights out there.

For this demonstration, I'm only going to use the TRÅDFRI light bulbs.  Because of an idiosyncrasy in how the dimmers, switches and motion detectors work, they are not currently compatible with the Things Gateway.  I'm assuming that will change in the future.

Goal: demonstrate the use of IKEA TRÅDFRI bulbs with the Things Gateway.

Item | What’s it for? | Where I got it
The Raspberry Pi and associated hardware from Part 2 of this series | This is the base platform that we’ll be adding onto | From Part 2 of this series
DIGI XStick | This allows the Raspberry Pi to talk the ZigBee protocol; there are several models, make sure you get the XU-Z11 model | The only place that I could find this was Mouser Electronics
IKEA TRÅDFRI 980 lumen bulb | To demonstrate use of the bulb without the TRÅDFRI gateway, dimmer or switch | IKEA TRÅDFRI 980 lumen

Step 1: setup the Raspberry Pi and the DIGI XStick in the manner specified in Part 2 of this series.

Step 2:  Plug in your IKEA TRÅDFRI bulbs.  If they came with a kit, like the one shown in the unpackaging photo above, they need to be factory reset.  Factory reset is fairly easy: using a manual power switch, turn the bulb on and off rapidly at least six times (more seems ok). It will do no harm to do the factory reset even if the bulbs did not come in a bundled package.  Once they've reset, they wink once in acknowledgement.  You can see that wink at the end of the video.

Step 3: Pair the bulbs with the Things Gateway by pressing the "+" button on the Things screen.  Then apply power to the bulbs.  I found that the IKEA bulbs take a bit longer to be recognized than other Zigbee compatible bulbs.  Select Save on each bulb and then press "Done".  That is all there is to it.

While these IKEA bulbs are the least expensive Zigbee bulbs that I have found, there may be a reason behind that.  I noticed that the bulb that I've labeled I02 seems to have a problem.  After being on for about five minutes, it'll just spontaneously blink out.  Thereafter it will not respond to the Things Gateway until the bulb is power cycled.  I factory reset the bulb, paired it with the IKEA dimmer and repeated the test.  The bulb just fails after about five minutes.  These may be inexpensive because they are cheaply made.  I'll report later after seeing how long these bulbs last.

Air MozillaMartes Mozilleros, 13 Mar 2018

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Mike ConleyFirefox Performance Update #3

Hi! I’ve got another slew of Firefox performance work to report today.

Special thanks to the folks who submitted things through this form to let me know about performance work that’s taken place recently! If you’ve seen something fixed lately that’ll likely have a positive impact on Firefox performance, let me know about it!

So, without further ado, here are some of the folks who have made some nice improvements to Firefox performance lately. Thanks for making Firefox faster and better!

Firefox Test PilotSo, How’s Screenshots Doing?

It’s been a bit over five months since we launched Firefox Screenshots in Firefox 56, and I wanted to take a moment to reflect on what’s happened so far and to look forward to what’s coming next.

So far, our users have taken more than 67 million screenshots. This is a big number that makes my manager happy, but more interesting is how we got here.

The changing shape of Firefox

We launched Firefox Screenshots in Firefox 56 in late September of 2017. This was one release before the widely hailed Firefox Quantum release, back when Firefox still had curvy tabs.

When we launched, the screenshot button appeared in the browser toolbar with a little badge highlighting the new feature.

Firefox 56 UI with Screenshots appearing in the toolbar

In Firefox Quantum, actions such as bookmarking, sending a tab to a mobile device, or saving to Pocket were all moved into a contextual menu. The Firefox Screenshots control moved to this new home as well.

Firefox Quantum with a hidden Screenshots control

As a Firefox user, I really like this new design: it’s cleaner, more consistent than what came before. As the Product Manager for Screenshots, I was definitely worried about how the change would affect our numbers.

We did take a pretty sizable hit in the short term. Firefox Quantum launched on November 14th and rolled out over the following week. In the four weeks that followed, 23.2% fewer shots were taken than prior to the Firefox Quantum launch.

The dark purple line shows shots taken in the 28 days after Quantum, while the lighter line shows shots taken in the 28 days prior.

Taking a step back, the logic of Firefox’s redesign starts to show. While the graph above measures shots actually taken, the one below shows total shots initiated during the same period. Shots are initiated when someone clicks the screenshots button or right-clicks to trigger the screenshots UI.

Users started to take a lot more shots in the month before Quantum.

These charts show that while users started to take a lot more shots before Firefox Quantum, they didn’t actually wind up taking that many more shots. This difference really shows in the relative rates of shots canceled before and after Firefox Quantum. Canceled shots just mean that a user escapes the Screenshot UI without capturing a screenshot by refreshing the page, hitting escape, or clicking the cancel button. As the graph below shows, these events fell off a cliff after Firefox Quantum.

After Quantum, canceled shots fell drastically.

So, yes, we lost users with the Firefox Quantum launch, but the change was actually quite positive for us because it made engagements with Firefox Screenshots a lot more likely to end in a shot actually being taken.

The chart at left shows all shots initiated, canceled and taken from September 28th, 2017 through March 1st, 2018, split by the Firefox Quantum release. The change in ratio between taken and canceled is pretty impressive. Before Firefox Quantum there was 1 shot taken for every 2 shots canceled. Since Firefox Quantum there have been 2 shots taken for every shot canceled. It seems that users who engage with Screenshots in Firefox Quantum do so intentionally whereas before people might have simply clicked the new button to see what happened.

Another noteworthy feature of Firefox Quantum is that right-clicking (aka context clicking) to start a shot has long-since become our users’ preferred way to begin engagement with Screenshots. This makes sense given that people take screenshots in order to complete tasks such as sending along flight information or grabbing a chat bubble for later reference. In the chart below (which really shows the effect Firefox Quantum had on starting shots) context clicking surpasses the toolbar menu just after Firefox Quantum launched.

Most users now start shots through a context click.

What kind of shots are people taking?

When we launched Screenshots in Firefox 56 there were two different kinds of shots our users could take. They could drag select a region of the page and either download the highlighted region or upload it and get a URL copied to their clipboard.

Over the last several releases, we’ve added different ways of capturing shots so that now there are 9 different methods of taking and saving shots. As of Firefox Quantum users could capture regions, visible parts of a page, or an entire page. In the Firefox release after Quantum we added the ability to copy images directly to the clipboard.

Users have lots of different options when it comes to saving shots.

So, how do all of these different shot types compare? Well, the most popular shot type is downloading a region, followed by saving a region and getting a URL. This has been the case pretty much throughout the life of Screenshots. Interestingly, this ranking was reversed when we were testing Screenshots in Test Pilot back when it was called Page Shot.

<figcaption>The view since Quantum: downloading and uploading regions are still the most popular options, but copying a region to clipboard has grown a lot since it launched in late January</figcaption>

Since we started launching more shot options in Firefox Quantum, copying directly to clipboard (above in aqua) has grown particularly popular (it’s how I added charts to this post). The chart below shows all shots taken in the last month, the relative changes in each category, and shot totals. There are a few highlights here:

  1. Clipboard shots are growing like crazy.
  2. They appear to be cannibalizing shots that might have otherwise been saved to a URL.
  3. We’re growing!
<figcaption>All the ways people took shots from February 5th to March 6th 2018</figcaption>

And we’ve been growing. Since the new year, a time when we expect to see a seasonal decrease in users, we’ve grown every single week in 2018. We’re not quite back to the raw weekly shot numbers we had before Firefox Quantum, but as the chart below shows, we’re getting awfully close, and these shots are far more likely to be taken intentionally than prior to the Firefox Quantum launch.

Up next

<figcaption>Hey look, annotations!</figcaption>

We’re still actively improving Firefox Screenshots all the time. In fact, we’re launching an annotations feature today! This feature will let users draw on or re-crop any saved shot.

Beyond this, we’ve got a few other ideas up our sleeves to make Firefox Screenshots even cooler than it is now. If you’re interested in helping out, please check out our repo on GitHub and feel free to contribute! 🏄‍♀️

So, How’s Screenshots Doing? was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla VR BlogA Truly Responsive WebXR Experiment: A-Painter XR

A Truly Responsive WebXR Experiment: A-Painter XR

In our posts announcing our Mixed Reality program last year, we talked about some of the reasons we were excited to expand WebVR to include AR technology. In the post about our experimental WebXR Polyfill and WebXR Viewer, we mentioned that the WebVR Community Group has shifted to become the Immersive Web Community Group and the WebVR API proposal is becoming the WebXR Device API proposal. As the community works through the details of these changes, this is a great time to step back and think about the requirements and implications of mixing AR and VR in one API.

In this post, I want to illustrate one of the key opportunities enabled by integrating AR and VR in the same API: the ability to build responsive applications that work across the full range of Mixed Reality devices.

Some recent web articles have explored the idea of building responsive AR or responsive VR web apps, such as web pages that support a range of VR devices (from 2D to immersive) or the challenges of creating web pages that support both 2D and AR-capable browsers. The approaches in these articles have focused on creating a 3D user-interface that targets either AR or VR, and falls back to progressively simpler UIs on less capable platforms.

In contrast, WebXR gives us the opportunity to have a single web app respond to the full range of MR platforms (AR and VR, immersive and flat-screen). This will require developers to consider targeting a range of both AR and VR devices with different sorts of interfaces, and falling back to lesser capabilities for both, in a single web app.

Over the past few months, we have been experimenting with this idea by extending the WebVR paint program A-Painter to support handheld AR when loaded on an appropriate web browser, such as our WebXR viewer, but also Google’s WebARonARCore and WebARonARKit browsers. We will dig deeper into this idea of building apps that adapt to the full diversity of WebXR platforms, beyond just these two, in a future blog post.

An Adaptive UI: A-Painter XR

Let’s start with a video of some samples we created for the WebXR Viewer, our iOS app.

This video ends with a clip of the WebXR version of A-Painter. Rather than simply port A-Painter to support handheld AR, we extended it so that both the VR and handheld AR UIs are integrated in the same app, with the appropriate UI selected based on the capabilities of the user’s device. This project was undertaken to explore the idea of creating an “AR Graffiti” experience where users could paint in AR using the WebXR Viewer, and required us to create an interface designed for the 2D touch screens on these phones. The result is shown in this video.

This version of A-Painter XR currently resides in the “xr” branch of the A-Painter GitHub repository and is hosted at (There is a direct link to the page, along with links that load pre-created content, on the startup page for the WebXR Viewer.) I’d encourage you to give it a try. The same URL works on all the platforms highlighted in the video.

There were two major steps to making A-Painter XR work in AR. First, the underlying technology needed to be ported from WebVR to our WebXR polyfill. We updated three.js and aframe.js to work with WebXR instead of only supporting WebVR. This was necessary because A-Painter is built on A-Frame, which in turn is built on three.js. (You can use these libraries yourself to explore using WebXR with your own apps, until the WebXR Device API is finalized and WebXR support is added to the official versions of each of them.)

The second step was creating an appropriate user interface. A-Painter was designed for fully immersive WebVR, on head-worn displays like the HTC Vive, and uses 6DOF tracked controllers for painting and all UI actions, as shown in this image of a user painting with Windows MR controllers. The user can paint with either controller, and all menus are attached to the controllers.
But interacting with a touch-based handheld display is quite different than interacting in fully immersive VR. While the WebXR Viewer supports 6DOF movement (courtesy of ARKit), there are no controllers (or reliable hand or finger tracking) available yet on these platforms. Even if tracked controllers were available, most users would not have them, so we focused on creating a 2D touch UI for painting.

UI Walkthrough for Touchscreen AR Painting

The first change to the user experience is not shown in the video, but is required for any web app that wants to bridge the gap between AR and VR: the XR application needs to start with the user or system defining a place in the world (an “anchor”) for the content.

When a user installs and sets up a VR platform, like an HTC Vive or a Windows MR headset, they define a fixed area in the room in which all VR experiences happen. When A-Painter is run on a VR platform, the center of that fixed area of the room is automatically used as the starting point, or anchor, for the painting.

AR platforms like ARKit and ARCore do not define a fixed area for the user to work in, nor do they provide any well-defined location that could be used automatically by the system as the anchor for content. That is why most current AR experiences on these platforms start by having the user pick a place in the world, often on a surface detected by the system, as an anchor. The AR interface for A-Painter XR starts this way.
After selecting an anchor for the painting, the user is presented with an interface to paint in the world. The interface is analogous to the VR interface, but made up of 2D buttons arranged around the edges of the screen, rather than widgets attached to one of the VR controllers.
Paint mode (see below) and undo/redo are in the lower left, a save button is in the upper right, and a slider to change the brush size is in the center bottom under a button to change the brush selection. To keep the screen uncluttered, the brush selection UI is normally hidden, being brought up on request.
Much of the code and visual elements for this interface are shared with the VR version (e.g., icons and palettes), and the application state is similar. But the user experience is tuned for the two different platforms.

Digging Deeper into Touch Screen Painting

While the UI described above, and shown in the video, works reasonably well, there is a subtle problem that may not be obvious until you try to create a 3D painting yourself. In immersive VR, the graphics are drawn from the viewpoint of your eyes (or as close to them as the system can get) and you paint with the VR controller. The controller is in your hand in 3D space, and defines a single 3D location, so there is no ambiguity as to where the strokes should go: they are placed where the tip of the controller was when you pressed the button. Painting in immersive AR while wearing a see-through head-worn display would be similar.

In handheld AR, you interact with a view that seems to be a transparent window onto the physical world. But while so-called “video-mixed AR” gives the illusion that the screen is transparent, with graphics overlaid on it, that illusion is slightly misleading. The video being shown on the display is captured from a camera on the back of the phone or tablet, so the viewpoint of the scene is the position of that camera, not the user’s eyes (as it is with head-worn displays). This is illustrated in this image.
The camera on my iPhone 7 Plus is in the upper right of the phone in the image; the red lines show the field of view of the camera. Notice, for example, that the front corner of the book appears near the center of the phone screen, but is actually near the left of the phone in this image; similarly, the phone appears to be looking along the left edge of my laptop, but the view from the camera is showing a much more extreme view of the side of the laptop.

In most of the AR demos coming out for ARKit and ARCore, this doesn’t appear to matter (and, even if it creates problems for the user, these problems aren’t obvious in the videos posted to YouTube and Twitter!). But it matters here because when a user draws on the screen, it isn’t at all clear where they expect the stroke they are drawing to actually end up.

Perhaps they imagine they are drawing on the screen itself? I might hold the phone and draw on it, expecting that the phone is like a piece of glass, and my drawing will be in 3D space exactly where the screen is. In some cases, this might be exactly what the user wants. But that is difficult to implement because the view I’m seeing is actually video images of the world behind and slightly to the left of the phone (the area between the red lines in the image above): if we put the strokes where the screen is, they wouldn’t be visible until the user stepped backwards away from where they drew them, and they wouldn’t be where the video was showing them to be when they drew.

Perhaps they imagine they are drawing on the physical world they see on the screen? But, if so, where? In the image above, when I touch the screen where the yellow/black dot is, the dot actually corresponds to a line shooting out of the camera in the direction shown. There are an infinite number of points on this line that someone might reasonably expect the paint stroke to appear on. I might intend for the paint to be placed in front of the corner of the book (point A), on the corner of the book (point B), or somewhere behind the book (point C). There’s no way we can know what the user wants in this case.

In our interface, we opted to place all points at a fixed distance of ½ meter behind the phone. While not the right choice for all situations, this distance is at least predictable. However, it is hard for users to visualize where a stroke might be placed before they place it. To help, we experimented with an alternative painting mode with more visual feedback, so the user can better understand where strokes will appear. Selecting the “Paint Mode” button in the lower left allows the user to select between these two possible paint modes.
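To make the fixed-distance placement concrete, here is a small hypothetical sketch of the math in Python (the actual A-Painter code is JavaScript, so this is an illustration only): the touch defines a ray starting at the camera, and the stroke point is simply placed half a meter along that ray.

import math

def touch_to_world_point(camera_pos, ray_dir, distance=0.5):
    # Place the paint point `distance` meters along the normalized ray
    # that the touch defines in world space.
    length = math.sqrt(sum(c * c for c in ray_dir))
    return tuple(p + distance * (c / length)
                 for p, c in zip(camera_pos, ray_dir))

# Example: camera 1.6 m above the floor, looking straight ahead (down -Z);
# the stroke lands half a meter in front of it: (0.0, 1.6, -0.5).
print(touch_to_world_point((0.0, 1.6, 0.0), (0.0, 0.0, -1.0)))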
Turning on “Paint with Helpers” mode shows a translucent rectangle positioned at the painting distance behind the screen, representing the plane that paint will be drawn on. The image below has some dark blue paint in the world near the user.
Any paint behind the drawing plane is rendered in a transparent style behind the rectangle; any paint in front of the plane occludes the rectangle. Looking around with the phone or tablet causes the rectangle to slice through the painting, helping the user anticipate where paint will end up.

We considered, but didn't implement, other paint modes (some of which you might be thinking of right now), such as “Paint on World” where strokes would be projected only on the surfaces detected by ARKit. Or perhaps “Paint on a Virtual Prop”, where a 3D scene is placed in the world, painted on, and then removed.

What’s Next for WebXR?

In this post, I introduced the idea of having a single WebXR web app be responsive to both AR and VR, using A-Painter XR as an example. I walked through the AR UI we created for A-Painter XR, emphasizing how the different context of “touch screen video-mixed AR” required a very different UI, but that much of the application code and logic remained the same.

While many WebXR apps might be best suited to either AR or VR, immersive or touch-screen, the great diversity of devices people will use to experience web content makes it essential that future WebXR app developers seriously consider supporting both AR and VR whenever possible. The true power of the web is its openness; we've come to expect that information and applications are instantly available to everyone regardless of the platform they choose to use, and keeping this true in the age of AR and VR will in turn keep the web open, alive and healthy.

We’d love to hear what you think about this; reach out to us at @mozillareality on Twitter!

(Thanks to Trevor Smith (@trevorfsmith), Arturo Paracuellos (@arturitu) and Fernando Serrano (@fernandojsg) for their work on A-Painter XR, the webxr polyfill, three.xr.js, and aframe-xr. Thanks to Trevor and Roberto Garrido (@che1404) for their work on the WebXR Viewer iOS app. Arturo also created the A-Painter XR video.)

Mozilla Open Policy & Advocacy BlogMozilla files response to European Commission ‘Fake news and online disinformation’ public consultation

The rising phenomenon of so-called ‘fake news’ and online misinformation has become a global political issue in recent times. We believe that the complex and multi-factor nature of the phenomenon – in terms of its causes and impact – make one-size-fits-all regulatory solutions inappropriate. Rather, as our just-filed response to the European Commission public consultation on ‘Fake News and Online Disinformation’ argues, the true solutions lie in greater investment in media literacy, trust, and a multi-stakeholder approach.

As a mission-driven organisation promoting openness, innovation, and opportunity on the Web, we see online misinformation as cutting to the heart of our vision. Our consultation response – and broader engagement around this issue with lawmakers around the globe – thus seeks to provide an accurate problem definition and a series of balanced, actionable insights to mitigate online misinformation.

In any conversation around political and social issues, proper framing is essential. To that end, we advise European lawmakers to avoid sweeping terms such as ‘fake news’ and instead adopt a more nuanced definition that captures the design intent, legality, and purposeful nature of misinformation content on the Web.

Linked to this, to make meaningful progress against the spread of misinformation online, it is necessary to understand that this is a constantly evolving threat, which manifests in different ways and is the result of a range of causes. From interaction with a broad variety of stakeholders across the Internet community, we have identified a mix of technological, economic, literacy, and psychological factors which can contribute to the phenomenon.

The fluid and interdependent nature of these contributory factors means that counter-actions must be targeted, proportionate, and multi-stakeholder in nature. In that context, we have used the consultation response to advise against sweeping one-size-fits-all platform regulation and government regulation of legal speech, and instead stress the importance of media literacy education, trust-building exercises, and continuous dialogue between all stakeholders involved.

As the European Union considers measures to tackle online misinformation, we will continue to provide thought-leadership to keep the Internet healthy and empowering for its users and creators. Our ongoing Mozilla Information Trust Initiative (MITI) and our leadership in developing the final report of the European Commission’s High-level Expert Group on Fake News and Online Misinformation (HLEG) are just two examples of how we seek to support an open and thriving online news ecosystem. And of course, we’ll continue to build products like Pocket and build out the Coral Project, which help online news empower democratic societies.

Read our full consultation submission here, and stay tuned for updates on our work on this through the European Commission’s HLEG and around the world.


The post Mozilla files response to European Commission ‘Fake news and online disinformation’ public consultation appeared first on Open Policy & Advocacy.

The Mozilla BlogLatest Firefox available to users where they browse the web — laptop, Fire TV and the office. Plus, a chance to help with the next Firefox release!

This week, we’re happy to roll out not one, but three Firefox releases to our users. Now available in more of the places where they browse, Firefox users can access the web whether they’re relaxing at home with their laptop, in front of their TV with Amazon Fire TV, or at the office. Additionally, we’re running a contest (with prizes!) for users who want to help with the next Firefox Quantum release in May. So, without further ado, here’s information on this week’s Firefox releases:

  • Latest Firefox Quantum release for Desktop

Today, March 13, the latest release of Firefox Quantum for desktop users is now available. We’ve improved privacy for those who use Private Browsing mode. To learn more about the technical details of how that works, you can visit this blog post. And we made changes under the hood that users may notice as faster page load times. The latest version of Firefox Quantum is available for Desktop and Mobile – iOS and Android.

  • Latest Firefox for Amazon Fire TV Available this Week

With this latest release, we’ve included a fresh new look to help you easily navigate the web on your Fire TV. No more typing in long URLs that you like to visit frequently. Users can now save their preferred websites by pinning them to the Firefox home screen. By using the menu button, you can easily remove any pinned websites at any time.

Add your favorite websites to Firefox on Fire TV


  • Firefox Quantum for Enterprise Available Wednesday in Beta

Starting on Wednesday, Firefox Quantum for Enterprise enters Beta, as a final step towards bringing a release version of Firefox Quantum to enterprise users. Needless to say, we’re all super excited to give millions of additional users an update to Firefox Quantum, as everyone deserves to have a super fast and well designed browser. To learn more about how we’re making it easier for IT professionals to install the new Firefox Quantum for their employees, visit our blog post and sign up for the beta of Firefox Quantum for Enterprise.


Want to help with the next Firefox Quantum release?

Did you know that back in 2008, Pocket won our Extend Firefox 3 contest? We’re bringing back the tradition of Firefox Extensions contests with our first Firefox Quantum Extensions Challenge this month! Whether you’re a developer or someone who likes to create fun, cool things, like one-woman Firefox theme machine, MaDonna, we’re looking for the next generation of Extensions. Since the next release of Firefox Quantum supports new WebExtension APIs, we’re on the hunt for new Extensions to make our users’ browsing experience productive, fast, and fun. The winners will be crowned by the next Firefox Quantum release in May. For more details about the contest and prizes, visit our site today and the Hacks blog on Thursday, March 15.


And in related Extensions/Add-on news, we’re holding our annual March Add(on)ness. There are thousands of ways you can customize Firefox to make it your own web experience. So, we’re playing off the top Add-ons to find out which will walk away with the title of “the must-have, must-install extension” in our annual tournament. Learn more on the Firefox Frontier.


If you haven’t yet switched to the new Firefox Quantum browser, we invite you to download the latest version.


The post Latest Firefox available to users where they browse the web — laptop, Fire TV and the office. Plus, a chance to help with the next Firefox release! appeared first on The Mozilla Blog.

Mozilla Future Releases BlogIT Pros and CIOs: sign up to try Firefox Quantum for Enterprise

A few months ago we announced our plan to build enterprise administrative controls (i.e. a “policy engine”) for Firefox Quantum. These new administrative controls will allow IT professionals to easily deploy a pre-configured installation of the new Firefox to employees’ Windows, Mac, and Linux PCs. Administrators can, for example, set up a default proxy, disable certain features, or package Firefox Quantum along with a collection of Add-Ons or bookmarks.

As we gear up for the release of these administrative controls, we’d like to get feedback from IT professionals interested in deploying Firefox Quantum. Today we invite IT pros to sign up to try the beta of Firefox Quantum for Enterprise.

If you’re an IT professional, why should you provide your employees with Firefox Quantum?

Modern business demands a modern browser

Over the past decade, many businesses have adopted on-demand applications (SaaS) for seemingly everything: tasks like word processing, accounting, file sharing, marketing, and sales tracking. Rather than installing these applications on users’ computers, employees access these applications simply by loading them via a web browser.

This trend has made the web browser the most frequently used and arguably the most critical application that is installed on employees’ computers. The web browser has, in effect, quietly become the operating system for modern business software.

Legacy web browsers (e.g. Internet Explorer and old versions of Firefox) run many of these web applications slowly. Even worse, sometimes legacy web browsers can’t run modern web apps because older browsers don’t support newer web standards these apps rely on. That’s why IT professionals should ensure that their employees have a modern web browser capable of quickly running today’s web apps.

Speed up your business with Firefox Quantum

It’s often said that time is money, and in business this adage certainly rings true. With this in mind, consider the unique impact of your web browser. A browser that loads pages and switches tabs just seconds faster can save users more than fifteen minutes over the course of the day.

So why is it that some browsers are faster than others? And what’s special about Firefox Quantum?

While browsers might seem simple and similar to each other on the surface, they are remarkably different and complex under the hood. Much like cars, browsers have engines with unique performance characteristics.

Firefox Quantum is the result of a years-long effort to dramatically reinvent the quintessential open source browser. Inside Firefox Quantum is an all-new, cutting-edge engine made to harness the power of today’s multi-core computers. Above all things, Firefox Quantum is FAST.

Mozilla, the organization that makes Firefox, helped pioneer a whole new systems programming language – Rust – and coded major parts of the browser with it. For example, Firefox Quantum uses an algorithm written in Rust to match CSS to HTML. This breakthrough algorithm runs super fast, in parallel across multiple CPU cores, instead of in a sequence on one CPU core.

Firefox Quantum’s unique architecture translates to real user benefit, as it’s often faster than Chrome and Edge, while typically using less memory. With Firefox Quantum, users can open numerous tabs to run web apps, while still having enough RAM available to run traditional desktop apps like Microsoft Word and Adobe Photoshop.

Easy administration, powerful controls

IT professionals can configure and deploy Firefox Quantum for Enterprise through familiar tools. Windows administrators can quickly set policies using Windows Group Policy. Administrators can then deploy the managed Firefox Quantum browser to users’ Windows PCs.

For Mac, Linux, and Windows, administrators can simply include a JSON configuration file inside of Firefox’s installation directory.
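To make that concrete, here is a hypothetical sketch (in Python) of a deployment step that writes such a configuration file. The “distribution” folder location, the policies.json file name, and the policy names shown are assumptions based on the description above and may not match the final shipped schema.

import json
import os

# Illustrative policies only; the real policy names and shapes may differ.
policies = {
    "policies": {
        "DisableAppUpdate": True,                         # disable a feature
        "Proxy": {"Mode": "manual",                       # set a default proxy
                  "HTTPProxy": "proxy.example.com:8080"},
        "Homepage": {"URL": "https://intranet.example.com", "Locked": True},
    }
}

# Assumed location: a "distribution" folder inside the Firefox installation directory.
install_dir = "/Applications/Firefox.app/Contents/Resources"  # adjust per platform
target = os.path.join(install_dir, "distribution", "policies.json")
os.makedirs(os.path.dirname(target), exist_ok=True)
with open(target, "w") as f:
    json.dump(policies, f, indent=2)

An IT department would typically bake a file like this into its deployment image or push it out with its usual configuration management tooling.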

From an organization you can trust

Mozilla is unique among browser makers in that it is motivated not by profit, but by its mission to ensure that the Internet remains a global public resource, open and accessible to all. One of Mozilla’s core principles is that users’ privacy and security are fundamental.

Out of respect for privacy, Firefox does not track user activity to target advertising as other browsers do. To further protect privacy, administrators and users can turn on Tracking Protection, which disables many invisible scripts that follow users from site to site. Tracking Protection also makes browsing the web significantly faster – cutting page load times in half on many sites. To help ensure security, Firefox Quantum sandboxes web page content, creating a boundary that protects your computer’s files and hardware from malicious websites.

Unlike other browsers, Firefox has always been open source.

Firefox Quantum is ready for enterprise testing

Thousands of organizations have deployed Firefox to their employees for years, but Firefox Quantum kicks the browser into a whole new gear. Numerous technology publications have written kind remarks about the new Firefox, but perhaps Wired put it best:

“Ciao, Chrome: Firefox Quantum is the browser built for 2017.” – Wired

If you’re an IT professional interested in providing your employees with Firefox Quantum, we ask that you sign up at so that we can better understand your business and its requirements. We’ll then provide you with information about how to try out Firefox Quantum for Enterprise. To share feedback and engage with other IT professionals regarding Firefox Quantum, please join our enterprise mailing list.

The post IT Pros and CIOs: sign up to try Firefox Quantum for Enterprise appeared first on Future Releases.

Wladimir PalantCan Chrome Sync or Firefox Sync be trusted with sensitive data?

A few days ago I wrote about insufficient protection of locally saved passwords in Firefox. As some readers correctly noted however, somebody gaining physical access to your device isn’t the biggest risk out there. All the more reason to take a look at how browser vendors protect your passwords when they upload them to the cloud. Both Chrome and Firefox provide a sync service that can upload not just all the stored passwords, but also your cookies and browsing history which are almost as sensitive. Is it a good idea to use that service?

TL;DR: The answer is currently “no,” both services have weaknesses in their protection. Some of these weaknesses are worse than others however.

Chrome Sync

I’ll start with Chrome Sync first, where the answer is less surprising. After all, there are several signs that this service is built for convenience rather than privacy. For example, the passphrase meant to protect your data from Google’s eyes is optional. There is no setup step where it asks you “Hey, do you mind if we can peek into your data? Then choose a passphrase.” Instead, you have to become active on your own. Another sign is that Google lets you access your passwords via a web page. The idea is probably that you could open up that webpage on a computer that doesn’t belong to you, e.g. in an internet café. Is it a good idea? Hardly.

Either way, what happens if you set a passphrase? That passphrase will be used to derive (among other things) an encryption key, and your data will be encrypted with it. And the big question of course is: if somebody gets hold of your encrypted data on Google’s servers, is translating the passphrase into an encryption key slow enough to prevent them from guessing your passphrase? As it turns out, Chrome uses PBKDF2-HMAC-SHA1 with 1003 iterations.

To give you an idea of what that means, I’ll again use the numbers from this article as a reference: with that iteration count, a single Nvidia GTX 1080 graphics card could turn out 3.2 million PBKDF2-HMAC-SHA1 hashes per second. That’s 3.2 million password guesses tested per second. 1.5 billion passwords known from various website leaks? Less than 8 minutes. A 40 bits strong password that this article considers to be the average chosen by humans? That article probably overestimates humans’ capabilities for choosing good passwords, but on average such a password will be guessed within two days as well.

It’s actually worse than that. The salt that Chrome uses for key derivation here is a constant. That means the same password will result in the same encryption key for every Chrome user, which in turn means that an attacker who got hold of the data for a multitude of users could test each password guess against all accounts at once. So they would only need to spend four days, and the data of any account protected by a password of up to 40 bits would be decrypted. Mind you, Google themselves have enough hardware to do the job within minutes if not seconds. I am talking about somebody not willing to invest more than $1000 into hardware.
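To illustrate just how cheap this is for an attacker, here is a rough Python sketch, not Chrome’s actual code: the salt value and key length below are placeholders, but the iteration count is the one mentioned above. With a constant salt, one derivation per guess is enough to attack every captured account at once.

import hashlib

CONSTANT_SALT = b"constant salt"   # placeholder; the point is that it never changes
ITERATIONS = 1003                  # the iteration count mentioned above

def derive_key(passphrase: str) -> bytes:
    # One cheap PBKDF2-HMAC-SHA1 call turns a guess into a candidate key.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode("utf-8"),
                               CONSTANT_SALT, ITERATIONS, dklen=16)

for guess in ["password", "letmein", "hunter2"]:
    candidate_key = derive_key(guess)
    # ... try candidate_key against the captured sync data of *all* users ...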

I reported this as issue 820976, stay tuned.

Side note: style points to Chrome for creative waste of CPU time. The function in question manages to run PBKDF2 four times where one run would have been sufficient. The first run derives the salt from host name and username (both happen to be constants in the case of Chrome Sync). This is pretty pointless: a salt doesn’t have to be a secret, it merely needs to be unique, so concatenating the values or running SHA-256 on them would do just as well. The next three runs derive three different keys from identical input, using different iteration counts. A single PBKDF2 call producing the data for all three keys clearly would have been a better idea.

Firefox Sync

Firefox Sync relies on the well-documented Firefox Accounts protocol to establish encryption keys. While all the various parameters and operations performed there can be quite confusing, it appears to be a well-designed approach. If somebody gets hold of the data stored on the server they will have to deal with password derivation based on scrypt. Speeding up scrypt with specialized hardware is a whole lot harder than PBKDF2, not least because each scrypt call requires 64 MB of memory given the parameters used by Mozilla.

There is an important weakness here nevertheless: scrypt runs on the Firefox Accounts server, not on the client side. On the client side, this protocol uses PBKDF2-HMAC-SHA256 with merely 1000 iterations. And while the resulting password hash isn’t stored on the server, anybody who can read it out while it is being transmitted to the server will be able to guess the corresponding password comparatively easily. Here, a single Nvidia GTX 1080 graphics card could validate 1.2 million guesses per second. While the effort would have to be repeated for each user account, testing 1.5 billion known passwords would be done within twenty minutes. And a 40 bits strong password would fall within five days on average. Depending on what’s in the account, spending this time (or adding more hardware) might be worth it.
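For comparison, here is a similarly rough sketch of that client-side stretching step, again not the actual Firefox Accounts code: the real protocol builds its salt from the account email with a protocol-specific prefix, so the bare email used below is a simplification. The point is that 1000 iterations of PBKDF2-HMAC-SHA256 are quick enough that an intercepted value can be brute-forced offline.

import hashlib

def quick_stretch(email: str, password: str) -> bytes:
    # Simplified salt; the real protocol prefixes the email with a fixed string.
    salt = email.encode("utf-8")
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 1000)

observed = quick_stretch("user@example.com", "correct horse")  # value seen in transit
for guess in ["password", "correct horse", "hunter2"]:
    if quick_stretch("user@example.com", guess) == observed:
        print("password guessed:", guess)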

The remarkable part of this story: Mozilla paid a security audit of Firefox Accounts, and that audit pointed out the client-side key derivation as a key weakness. So Mozilla has been aware of this issue for at least 18 months, and 8 months ago they even published this information. What happened? Nothing so far, the issue didn’t receive the necessary priority it seems. This might have been partly due to the auditor misjudging the risk:

Further, this attack assumes a very strong malicious adversary who is capable of bypassing TLS

Sure, somebody getting a valid certificate for the Firefox Accounts server and rerouting traffic destined for it to their own server would be one possible way to exploit this issue. It’s more likely however that the integrity of the real server is compromised. Even if the server isn’t hacked, there is always a chance that a Mozilla or Amazon employee (the service is hosted on AWS) decides to take a look at somebody’s data. Or what if the U.S. authorities knock at Mozilla’s door?

I originally reported this as bug 1444866. It has now been marked as a duplicate of bug 1320222 – I couldn’t find it because it was marked as security sensitive despite not containing any information that wasn’t public already.

Mark SurmanMozilla Foundation is seeking a VP, Leadership Programs

One of Mozilla’s biggest strengths is the people — a global community of engineers, designers, educators, lawyers, scientists, researchers, artists, activists and everyday users brought together with the common goal of making the internet healthier.

A big part of Mozilla Foundation’s focus over the past few years has been increasing both the size and diversity of this community and the broader movement. In particular, we’ve run a series of initiatives — the Internet Health Report, MozFest, our fellowships and awards — aimed at connecting and supporting people who want to take a leadership role in this community. Our global community is the lynchpin in our strategy to grow a global movement to create a healthier digital world.

Over the next couple of months, we are looking for a new VP, Leadership Programs (click for job spec) to drive this aspect of Mozilla Foundation’s work. This role was formerly held by Chris Lawrence, who built an incredible team and foundational set of programs. Chris left Mozilla last November. We are seeking someone to step into this role and to help us increase the impact and global reach of these leadership programs.

It’s worth lingering on this one point: we want to grow the global reach of our leadership development programs — and, in turn, increase the global scope and diversity of our community. That is one of the first priorities we will ask this new VP to tackle. Right now, the majority of our staff and much of our community are in North America. Certainly, this has improved in the last few years. For example, our 2018 cohort of fellows has people based in Brazil, Canada, Chile, Germany, India, Kenya, Mexico, Netherlands, South Africa, Tunisia, and the USA. However, this is just a start. This new VP will lead the effort to go further.

With this in mind, the VP, Leadership Programs will be based in Mozilla’s Berlin office. Berlin is our biggest office outside of North America. It is well placed to work with people based in African, Middle Eastern and South Asian time zones. And, Berlin as a city is a cosmopolitan hub of open tech work — attracting people from all around the world. While putting one person in Berlin won’t immediately change things, it should help us shift our attention across the Atlantic and further eastward over time.

Who are we looking for? Someone quite rare and special. Ideally, the new VP will be someone with both deep experience working on some aspect of internet health and a proven track record building high-impact organizations and teams. They will need the vision to hone our leadership development and community building programs, working with our teams to take the Internet Health Report, our fellowships and awards program, and the annual Mozilla Festival to the next level of excellence. They will also need to look outwards, growing our community of partner orgs and foundations to build a movement for a healthier digital world. A full job spec is posted here.

Our aim is to  make the process as open as we possibly can — knowing this is hard when you’re recruiting for a senior role and most of the people you want are in existing jobs. The first step is this blog post letting everyone know what’s up. If you have names to suggest or suggestions on other factors to consider, please reach out to myself or to Anna Sauter, the recruiter we are working with at i-potentials in Berlin. I will work with Anna to screen and shortlist a diverse slate of candidates. From there, these candidates will be interviewed by both our other VPs, directors in the leadership programs team and a handful of other staff and community members. Following this there will be an AMA with the final candidate for all staff and vouched Mozillians. I will post at least one more update here on my blog during the course of the process.

Again: the job spec for the Mozilla Foundation, VP Leadership Programs is here. If you have candidates to suggest or feedback to offer, please contact Anna at i-potentials or myself.

The post Mozilla Foundation is seeking a VP, Leadership Programs appeared first on Mark Surman.

This Week In RustThis Week in Rust 225

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is cursive, a library for easy text-user interface applications. Thanks to Wangshan Lu for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

124 pull requests were merged in the last week

New Contributors

  • 1011X
  • Kurtis Nusbaum
  • Maxim Nazarenko
  • Peter Lyons
  • Songbird0

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Captain's log, day 21

We have sailed on Reddit and Twitter for three weeks now, searching far and wide, yet the only thing we found was a barren landscape, with no end in sight. The supplies are shrinking, the men are growing impatient and hungry, and I fear we will have a mutiny soon. But I am stubborn and optimistic, and urge them to hold on and keep waiting until we find a quote of the week.

u/SelfDistinction on reddit.

Thanks to u/nasa42 for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Will Kahn-GreeneSide projects and swag-driven development


I work at Mozilla. I work on a lot of stuff:

  • a main project I do a ton of work on and maintain: Socorro
  • a bunch of projects related to that project which I work on and maintain: Antenna, Everett, Markus
  • some projects that I work on occasionally but don't maintain: mozilla-django-oidc
  • one project that many Mozilla sites use that somehow I ended up with but has no relation to my main project: Bleach
  • some projects I'm probably forgetting about
  • a side-project that isn't related to anything else I do that I "maintain": Standups

For most of those projects, they're either part of my main job or I like working on them or I get some recognition for owning them. Whatever the reason, I don't work on them because I feel bad. Then there's Standups which I work on solely because I feel bad.

This blog post talks about me and Standups, pontificates about some options I've talked with others about, and then lays out the concept of swag-driven development.

Read more… (8 mins to read)

Mozilla Security BlogDistrust of Symantec TLS Certificates

A Certification Authority (CA) is an organization that browser vendors (like Mozilla) trust to issue certificates to websites. Last year, Mozilla published and discussed a set of issues with one of the oldest and largest CAs run by Symantec. The discussion resulted in the adoption of a consensus proposal to gradually remove trust in all Symantec TLS/SSL certificates from Firefox. The proposal includes a number of phases designed to minimize the impact of the change to Firefox users:

  • January 2018 (Firefox 58): Notices in the Browser Console warn about Symantec certificates issued before 2016-06-01, to encourage site owners to replace their TLS certificates.
  • May 2018 (Firefox 60): Websites will show an untrusted connection error if they use a TLS certificate issued before 2016-06-01 that chains up to a Symantec root certificate.
  • October 2018 (Firefox 63): Distrust of Symantec root certificates for website server TLS authentication.

After the consensus proposal was adopted, the Symantec CA was acquired by DigiCert; however, that fact has not changed Mozilla’s commitment to implement the proposal.

Firefox 60 is expected to enter Beta on March 13th carrying with it the removal of trust for Symantec certificates issued prior to June 1st, 2016, with the exception of certificates issued by a few subordinate CAs that are controlled by Apple and Google. This change affects all Symantec brands including GeoTrust, RapidSSL, Thawte, and VeriSign. The change is already in effect in Firefox Nightly.

Mozilla telemetry currently shows that a significant number of sites – roughly 1% of the top one million – are still using TLS certificates that are no longer trusted in Firefox 60. While the number of affected sites has been declining steadily, we do not expect every website to be updated prior to the Beta release of Firefox 60. We strongly encourage operators of affected sites to take immediate action to replace these certificates.

If you attempt to visit a site that is using a TLS certificate that is no longer trusted in Firefox 60, you will encounter the following error:

Clicking on the “Advanced” button will allow you to bypass the error and reach the site:

These changes are expected to be included in the final version of Firefox 60, which is planned to be released on May 9th, 2018.

In Firefox 63, trust will be removed for all Symantec TLS certificates regardless of the date issued (with the exception of certificates issued by Apple and Google subordinate CAs as described above).

Wayne Thayer
Kathleen Wilson

The post Distrust of Symantec TLS Certificates appeared first on Mozilla Security Blog.

The Firefox FrontierMarch Add(on)ness is here

Winter’s icy hand is releasing its grip, birds are returning from southern migration which means it’s that time of year where people everywhere rank things, put them in brackets and … Read more

The post March Add(on)ness is here appeared first on The Firefox Frontier.

Gervase MarkhamTo Planet Mozilla Readers

This is a quick note addressed to those reading this blog via a subscription to Planet Mozilla. Following my stepping back from the Mozilla project, posts to this blog are unlikely to feature Mozilla-related content in the future, and will instead be about, well, what it’s like to be dying :-) I therefore won’t be syndicating them. If you wish to keep reading what I write, you may want to take a direct subscription. Here’s my direct feed.

The Servo BlogThis Week In Servo 107

In the last week, we merged 85 PRs in the Servo organization’s repositories.

Congratulations to waywardmonkeys for their new mandate to review and maintain the low-level harfbuzz bindings, and their work to create safe higher-level bindings!

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Notable Additions

  • emilio made some Linux environments not crash on startup.
  • jdm created a tool to chart memory usage over time.
  • emilio reordered some style system checks for better performance.
  • mrobinson improved the clipping behaviour of blurred text shadows.
  • mbrubeck added the resize API to SmallVec.
  • nox expanded the set of CSS types that can use derived serialization.
  • gw reduced the number of allocations necessary on most pages.
  • SimonSapin replaced the angle crate with a fork maintained by Mozilla.
  • mrobinson removed some redundant GPU matrix math calculations.
  • Beta-Alf improved the performance of parsing CSS keyframes.
  • gw simplified the rendering for box shadows.
  • mkollaro implemented the glGetTexParameter API.
  • fabricedesre added the pageshow event when navigating a page.
  • SimonSapin demonstrated how to integrate the DirectComposition API in WebRender.
  • waywardmonkeys added a higher-level crate for using the harfbuzz library.
  • paulrouget switched Servo to use the upstream glutin crate instead of an outdated fork.
  • oOIgnitionOo added a command line flag to download and run a nightly build of Servo.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

The Rust Programming Language BlogRust's 2018 roadmap

Each year the Rust community comes together to set out a roadmap. This year, in addition to the survey, we put out a call for blog posts in December, which resulted in 100 blog posts written over the span of a few weeks. The end result is the recently-merged 2018 roadmap RFC.

Rust: 2018 edition

This year, we will deliver Rust 2018, marking the first major new edition of Rust since 1.0 (aka Rust 2015).

We will continue to publish releases every six weeks as usual. But we will designate a release in the latter third of the year (Rust 1.29 - 1.31) as Rust 2018. This new “edition” of Rust will be the culmination of feature stabilization throughout the year, and will ship with polished documentation, tooling, and libraries that tie in to those features.

The idea of editions is to signify major steps in Rust’s evolution, where a collection of new features or idioms, taken as a whole, changes the experience of using Rust. They’re a chance, every few years, to take stock of the work we’ve delivered in six-week increments. To tell a bigger story about where Rust is going. And to ship the whole stack as a polished product.

We expect that each edition will have a core theme or focus. Thinking of 1.0 as “Rust 2015”, we have:

  • Rust 2015: stability
  • Rust 2018: productivity

What will be in Rust 2018?

The roadmap doesn’t say for certain what will ship in Rust 2018, but we have a pretty good idea, and we’ll cover the major suspects below.

Documentation improvements

Part of the goal with the Rust 2018 release is to provide high quality documentation for the full set of new and improved features and the idioms they give rise to. The Rust Programming Language book has been completely re-written over the last 18 months, and will be updated throughout the year as features reach the stable compiler. Rust By Example will likewise undergo a revamp this year. And there are numerous third party books, like Programming Rust, reaching print as well.

Language improvements

The most prominent language work in the pipeline stems from 2017’s ergonomics initiative. Almost all of the accepted RFCs from the initiative are available on nightly today, and will be polished and stabilized over the next several months. Among these productivity improvements are a few “headliners” that will form the backbone of the release:

  • Ownership system improvements, including making borrowing more flexible via “non-lexical lifetimes”, improved pattern matching integration, and more.
  • Trait system improvements, including the long-awaited impl Trait syntax for dealing with types abstractly.
  • Module system improvements, focused on increasing clarity and reducing complexity.
  • Generators/async/await: work is rapidly progressing on first-class async programming support.

In addition, we anticipate a few more major features to stabilize prior to the Rust 2018 release, including SIMD, custom allocators, and macros 2.0.

Compiler improvements

As of Rust 1.24, incremental recompilation is available and enabled by default on the stable compiler. This feature already makes rebuilds significantly faster than fresh builds, but over the course of the year we expect continued improvements for both fresh and re-builds. Compiler performance should not be an obstacle to productivity in Rust 2018.

Tooling improvements

Rust 2018 will see high quality 1.0 releases of the Rust Language Server (“RLS”, which underlies much of our IDE integration story) and rustfmt (a standard formatting tool for Rust code). We will continue to improve Cargo by stabilizing custom registries, public dependencies, and a revised profile system. We’re also expecting further work on Cargo build system integration, Xargo integration, and custom test frameworks, though it’s unclear as yet how many of these will be complete prior to Rust 2018.

Library improvements

Building on our work from last year, we will publish a 1.0 version of the Rust API guidelines book, continue pushing important libraries to 1.0 status, improve discoverability through a revamped cookbook effort, and make heavy investments in libraries in specific domains—as we’ll see below.

Web site improvements

As part of Rust 2018, we will completely overhaul the Rust web site, making it useful for CTOs and engineers alike. It should be far easier to find information to help evaluate Rust for your use case, and to stay up to date with the latest tooling and ecosystem improvements.

Four target domains

Part of our goal with Rust 2018 is to demonstrate Rust’s productivity in specific domains of use. We’ve selected four such domains to invest in and highlight this year:

  • Network services. Rust’s reliability and low footprint make it an excellent match for network services and infrastructure, especially at high scale.
  • Command-line apps (CLI). Rust’s portability, reliability, ergonomics, and ability to produce static binaries come together to great effect for writing CLI apps.
  • WebAssembly. The “wasm” web standard allows shipping native-like binaries to all major browsers, but GC support is still years away. Rust is extremely well positioned to target this domain, and provides a reasonable on-ramp for programmers coming from JS.
  • Embedded devices. Rust has the potential to make programming resource-constrained devices much more productive—and fun! We want embedded programming to reach first-class status this year.

Each of these domains has a dedicated working group for the year. These WGs will work in a cross-cutting fashion, interfacing with language, tooling, library, and documentation work.

Compatibility across editions

TL;DR: Rust will continue its stability guarantee of hassle-free updates to new versions.

Editions will have a meaning for the compiler. You will be able to write:

edition = "2018"

in your Cargo.toml to opt in to the new edition for your crate. Doing so may introduce new keywords or otherwise require adjustments to code. However:

  • You can use old editions indefinitely on new compilers; editions are opt-in.
  • Editions are set on a per-crate basis and can be mixed and matched; you can be on a different edition from your dependencies.
  • Warning-free code in one edition must compile, and have the same behavior, on the next.
  • Edition-related warnings, e.g. that an identifier will become a keyword in the next edition, must be easily fixable via an automated migration tool (rustfix). Only a small minority of crates should require any manual work to opt in to a new edition, and that manual work must be minimal.
  • Most new features are edition-independent, and will be usable on new compilers even when an older edition is selected.

In other words, the progression of new compiler versions is independent from editions; you can migrate at your leisure, and don’t have to worry about ecosystem compatibility; and edition migration is normally trivial.
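For illustration, opting a crate in might look something like the minimal Cargo.toml sketch below. Treat it as an assumption about the eventual syntax, in particular that the key lives in the [package] section, since the exact mechanism was still being finalized when this roadmap was written.

# Sketch only; the final syntax may differ.
[package]
name = "my-crate"
version = "0.1.0"
edition = "2018"   # opt this crate in to Rust 2018

[dependencies]
# Dependencies can stay on the 2015 edition; editions may be mixed and matched.
serde = "1.0"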

Additional 2018 goals

While the Rust 2018 release is our major focus this year, there are some additional ongoing concerns that we want to give attention to.

Better serving intermediate Rustaceans

One of the strongest messages we’ve heard from production users, and the 2017 survey, is that people need more resources to take them from understanding Rust’s concepts to knowing how to use them effectively. The roadmap does not stipulate exactly what these resources should look like — probably there should be several kinds — but commits us as a community to putting significant work into this space, and ending the year with some solid new material.


Connect and empower Rust’s global community. We will pursue internationalization as a first-class concern, and proactively work to build ties between Rust subcommunities currently separated by language, geography, or culture. We will spin up and support Rust events worldwide, including further growth of the RustBridge program.

Grow Rust’s teams and new leaders within them. We will refactor the Rust team structure to support more scale, agility, and leadership growth. We will systematically invest in mentoring, both by creating more on-ramp resources and through direct mentorship relationships.

A call to action

As always in the Rust world, the goals laid out here will ultimately be the result of a community-wide effort—maybe one including you! Here are some of the teams where we could use the most help. Note that all IRC channels refer to the network.

  • WebAssembly WG. Compiling Rust to WebAssembly should be the best choice for fast code on the Web. Check out rust-lang-nursery/rust-wasm to learn more and get involved!
  • CLI WG. Writing CLI apps in Rust should be a frictionless experience–from finding the right libraries and writing concise integration tests up to cross-platform distribution. Join us at rust-lang-nursery/cli-wg and help us reach that goal!
  • Embedded Devices WG. Quality, productivity, accessibility: Rust can change the embedded industry for the better. Let’s get this process started in 2018! Join us at
  • Ecosystem WG. We’ll be providing guidance and support to important crates throughout the ecosystem. Drop into the WG-ecosystem room and we’ll guide you to places that need help!
  • Dev Tools Team. There are always interesting things to tackle with developer tools (IDEs, Cargo, rustdoc, Clippy, Rustfmt, custom test frameworks, and more). Drop in to #rust-dev-tools and have a chat with the team!
  • Rustdoc Team. With your help, we can make documentation better for everyone. Come join us in #rustdoc on IRC, and we can help you get started!
  • Release Team. Drop by #rust-release on IRC to get involved with regression triage and release production!
  • Community Team. We’ve kicked off several new Teams within the Community Team and are eager to add new members: Events, Content, Switchboard, RustBridge, Survey, and Localization! Check out our team repo or stop by our IRC channel, #rust-community, to learn more and get involved!

Cameron KaiserTenFourFox FPR6 available

TenFourFox Feature Parity Release 6 is now available for testing (downloads, hashes, release notes). Other than finishing the security patches and adding a couple more entries to the basic adblock, there are no other changes in this release. Assuming no issues, it will become live Monday evening Pacific time as usual.

The backend for the main download page at Floodgap has been altered such that the Downloader is now only offered to browsers that do not support TLS 1.2 (this is detected by checking for a particular JavaScript math function Math.hypot, the presence of which I discovered roughly correlates with TLS 1.2 support in Google Chrome, Microsoft Edge, Safari and Firefox/TenFourFox). This is to save bandwidth on our main server since those browsers are perfectly capable of downloading directly from SourceForge and don't need the Downloader to help them. This is also true of Leopard WebKit, assuming the Security framework update is also installed.
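For readers curious what that check can look like, here is a minimal client-side sketch of the idea in TypeScript (not the actual Floodgap backend code; the URLs are illustrative):

// Sketch of the detection described above: browsers that implement Math.hypot
// roughly correlate with TLS 1.2 support, so they can be pointed straight at
// SourceForge instead of being offered the Downloader.
function probablySupportsTls12(): boolean {
  return typeof Math.hypot === "function";
}

// Illustrative URLs only.
const downloadUrl = probablySupportsTls12()
  ? "https://sourceforge.net/projects/tenfourfox/files/" // direct download
  : "/downloader";                                       // legacy Downloader page

console.log(`Offering download via ${downloadUrl}`);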

For FPR7, I have already exposed basic adblock in the TenFourFox preferences pane, and am looking at some efficiency updates as well as updates to the supported TLS ciphers and hopefully date pickers if there is still time. Also, the limited profiling tools I have at my disposal suggest that some of the browser's occasional choppiness is at least partially associated with improperly scheduled garbage collection slices. I'm experimenting with retuning the runtime environment to see if we can stave off some types of collection to preserve CPU cycles and not bloat peak memory usage too much. So far, 24 hours into testing with some guesswork numbers, it doesn't seem to be exploding. More on that later.

Wladimir PalantMaster password in Firefox or Thunderbird? Do not bother!

There is a weakness common to any software letting you protect a piece of data with a password: how does that password translate into an encryption key? If that conversion is a fast one, then you had better not expect the encryption to hold. Somebody who gets hold of that encrypted data will try to guess the password you used to protect it. And modern hardware is very good at validating guesses.

Case in point: the Firefox and Thunderbird password manager. It is common knowledge that storing passwords there without defining a master password is equivalent to storing them in plain text. While they will still be encrypted in the logins.json file, the encryption key is stored in the key3.db file without any protection whatsoever. On the other hand, it is commonly believed that with a master password your data is safe. Quite remarkably, I haven’t seen any articles stating the opposite.

However, when I looked into the source code, I eventually found the sftkdb_passwordToKey() function that converts a password into an encryption key by means of applying SHA-1 hashing to a string consisting of a random salt and your actual master password. Anybody who ever designed a login function on a website will likely see the red flag here. This article sums it up nicely:

Out of the roughly 320 million hashes, we were able to recover all but 116 of the SHA-1 hashes, a roughly 99.9999% success rate.

The problem here is: GPUs are extremely good at calculating SHA-1 hashes. Judging by the numbers from this article, a single Nvidia GTX 1080 graphics card can calculate 8.5 billion SHA-1 hashes per second. That means testing 8.5 billion password guesses per second. And humans are remarkably bad at choosing strong passwords. This article estimates that the average password is merely 40 bits strong, and that estimate is already higher than some of the others. In order to guess a 40-bit password you will need to test 2³⁹ guesses on average. Do the math (2³⁹ ≈ 5.5 × 10¹¹ guesses at 8.5 billion guesses per second), and cracking an average password takes merely a minute. Sure, you could choose a stronger password. But finding a considerably stronger password that you can still remember will be awfully hard.

Turns out that the corresponding NSS bug has been sitting around for the past 9 (nine!) years. That’s also at least how long software to crack password manager protection has been available to anybody interested. So, is this issue so hard to address? Not really. The NSS library implements the PBKDF2 algorithm, which would slow down brute-forcing attacks considerably if used with at least 100,000 iterations. Of course, it would be nice to see NSS implement a more resilient algorithm like Argon2, but that’s wishful thinking given that this fundamental bug couldn’t even find an owner in nine years.
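To make the difference concrete, here is a small Node.js (TypeScript) sketch contrasting a single SHA-1 pass over salt plus password, roughly the scheme criticized above, with PBKDF2 at 100,000 iterations. It only illustrates the relative key-derivation cost; it is not NSS code, and the password and parameters are made up.

import { createHash, pbkdf2Sync, randomBytes } from "crypto";

const salt = randomBytes(16);
const masterPassword = "correct horse battery staple"; // example only

// Weak: one SHA-1 pass over salt + password. A single GPU can try billions
// of such guesses per second.
const weakKey = createHash("sha1").update(salt).update(masterPassword).digest();

// Better: PBKDF2 with a high iteration count makes every guess roughly
// 100,000 times more expensive for an attacker.
const strongerKey = pbkdf2Sync(masterPassword, salt, 100_000, 32, "sha256");

console.log("SHA-1 derived key:  ", weakKey.toString("hex"));
console.log("PBKDF2 derived key: ", strongerKey.toString("hex"));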

But before anybody says that I am unfair to Mozilla and NSS here, other products often don’t do any better. For example, if you want to encrypt a file you might be inclined to use OpenSSL command line tools. However, the password-to-key conversion performed by the openssl enc command is even worse than what Firefox password manager does: it’s essentially a single MD5 hash operation. OpenSSL developers are aware of this issue but:

At the end of the day, OpenSSL is a library, not an end-user product, and enc(1) and friends are developer utilities and “demo” tools.

News flash: there are plenty of users out there not realizing that OpenSSL command line tools are insecure and not actually meant to be used.

Chris H-CTIL: Feature Detection in Windows using GetProcAddress

In JavaScript, if you want to use a function that was introduced only in certain versions of browsers, you use Feature Detection. For example, you can ask “Hey, browser, do you have a function called `includes` on Array?” If the browser has it, you use it; and if it doesn’t, you either get along without it or load your own implementation.
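As a quick illustration (a generic TypeScript sketch of the pattern, not code from any particular product):

// Feature detection: ask whether the capability exists, then use it or fall back.
function contains<T>(arr: T[], value: T): boolean {
  if (typeof Array.prototype.includes === "function") {
    return arr.includes(value);        // native implementation is available
  }
  return arr.indexOf(value) !== -1;    // fallback for engines without it
}

console.log(contains([1, 2, 3], 2)); // true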

It turns out that this same concept can be (and, in Firefox, is) done with Windows APIs.

Firefox for Windows is built against the Windows 10 SDK. This means the compiler knows the API calls and type definitions for all sorts of wondrous modern features like toast notifications and enumerating graphics adapters in a specific order.

However, as of writing, Firefox for Windows supports Windows 7 and up. What would happen if Firefox tried to use those fancy new Windows 10 features when running on Windows 7?

Well, at compile time (when Mozilla builds Firefox), it knows everything it needs to about the sizes and names of things used in the new features thanks to the SDK. At runtime (when a user runs Firefox), it needs to ask Windows at what address exactly all of those fancy new features live so that it can use them.

If Firefox can’t find a feature it expects to be there, it won’t start. We want Firefox to start, though, and we want to use the new features when available. So how do we both use the new feature (if it’s there) and not (if it’s not)?

Windows provides an API called GetProcAddress that allows the running program to perform some Feature Detection. It is asking Windows “Hey, so I’m looking for the address of this fancy new feature named FancyNewAPI. Do you know where that is?”. Windows will either reply “No, sorry”, at which point you work around it, or “Yes, it’s over at address X”, at which point you convert address X into a function pointer that takes the same number and types of arguments that the documentation says it takes, and then instruct your program to jump into it and start executing.

We use this in Firefox to detect gamepad input modules, cancelable synchronous IO, display density measurements, and a whole bunch of graphics and media acceleration stuff.

And today (well, yesterday at this point) I learned about it. And now so have you.


–edited to remove incorrect note that GetProcAddress started in WinXP– :aklotz noted that GetProcAddress has been around since ancient times, MSDN just periodically updates its “Minimum Supported Release” fields to drop older versions.

Nicholas NethercoteA New Preferences Parser for Firefox

Firefox’s preferences system uses data files to store information about default preferences within Firefox, and user preferences in a user’s profile (such as prefs.js, which records changes to preference values, and user.js, which allows users to override default preference values).

A new parser

These data files use a custom format, and therefore Firefox has a custom parser for them. I recently rewrote the parser. The new parser has the following benefits over the old parser.

  • It is faster (raw parsing speed is close to 2x faster).
  • It is safer (because it’s written in Rust rather than C++).
  • It is more correct and better tested (the old one got various obscure edge cases wrong).
  • It is more readable, and easier to modify.
  • It issues no warnings, only errors.
  • It is slightly stricter (e.g. doesn’t allow any malformed input, and it catches integer overflow).
  • It has error recovery and better error messages (including correct line numbers).


Modifiability was the prime motivation for the change. I wanted to make some adjustments to the preferences file grammar, but this would have been very difficult in the old parser, because it was written in an awkward style.

It was essentially a single loop containing a giant switch statement on a state variable. This switch was executed for every single char in a file. The states held by the state variable had names like PREF_PARSE_QUOTED_STRING, PREF_PARSE_UNTIL_OPEN_PAREN, PREF_PARSE_COMMENT_BLOCK_MAYBE_END. It also had a second state variable, because in some places a single one wasn’t enough; the parser had to return to the previous state after exiting the current state. Furthermore, lexing and parsing were not separate, so code to handle comments and whitespace was spread around in various places.

The new parser is a recursive descent parser — even though the grammar doesn’t actually have any recursion — in which the structure of the code reflects the structure of the grammar. Lexing is distinct from parsing. As a result, the new parser is much easier to read and modify. In particular, after landing it I added error recovery without too much effort; that would have been almost impossible in the old parser.
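To illustrate the shape of such a parser (a TypeScript sketch of the idea covering only a toy subset of the grammar, not the actual Rust code), note how lexing is kept separate and how each parse function corresponds to one grammar rule:

// Toy sketch: a lexer plus a recursive descent parser for lines like
//   pref("some.pref.name", true);
type Token = { kind: "ident" | "string" | "int" | "punct"; text: string };
type PrefValue = string | number | boolean;

function lex(src: string): Token[] {
  // Whitespace and comments are consumed here, not scattered through the parser.
  const re = /\s+|\/\/[^\n]*|([A-Za-z_][\w.]*)|"([^"]*)"|(-?\d+)|([(),;])/gy;
  const tokens: Token[] = [];
  for (let m: RegExpExecArray | null; (m = re.exec(src)); ) {
    if (m[1]) tokens.push({ kind: "ident", text: m[1] });
    else if (m[2] !== undefined) tokens.push({ kind: "string", text: m[2] });
    else if (m[3]) tokens.push({ kind: "int", text: m[3] });
    else if (m[4]) tokens.push({ kind: "punct", text: m[4] });
  }
  return tokens;
}

function parseFile(tokens: Token[]): Array<{ name: string; value: PrefValue }> {
  let pos = 0;
  const expect = (kind: Token["kind"], text?: string): Token => {
    const t = tokens[pos++];
    if (!t || t.kind !== kind || (text !== undefined && t.text !== text)) {
      throw new Error(`parse error near token ${pos - 1}`);
    }
    return t;
  };

  // value := string | int | "true" | "false"
  const parseValue = (): PrefValue => {
    const t = tokens[pos++];
    if (t?.kind === "string") return t.text;
    if (t?.kind === "int") return Number(t.text);
    if (t?.kind === "ident" && (t.text === "true" || t.text === "false")) {
      return t.text === "true";
    }
    throw new Error(`bad value near token ${pos - 1}`);
  };

  // pref := "pref" "(" string "," value ")" ";"
  const parsePref = () => {
    expect("ident", "pref");
    expect("punct", "(");
    const name = expect("string").text;
    expect("punct", ",");
    const value = parseValue();
    expect("punct", ")");
    expect("punct", ";");
    return { name, value };
  };

  const prefs: Array<{ name: string; value: PrefValue }> = [];
  while (pos < tokens.length) prefs.push(parsePref()); // file := pref*
  return prefs;
}

console.log(parseFile(lex('pref("browser.example.enabled", true);')));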

Note that the idea of error recovery for preferences parsing was first proposed in bug 107264, filed in 2001! After landing it, I tweeted the following.

Amazingly enough, the original reporter is on Twitter and responded!


The new parser is slightly stricter and rejects some malformed input that the old parser accepted.

Junk chars

Disconcertingly, the old parser allowed arbitrary junk between preferences (including at the start and end of the prefs file) so long as that junk didn’t include any of the following chars: ‘/’, ‘#’, ‘u’, ‘s’, ‘p’. This means that lines like these:

!foo@bar&pref("prefname", true);
ticky_pref("prefname", true);    // missing 's' at start
User_pref("prefname", true);     // should be 'u' at start

would all be treated the same as this:

pref("prefname", true);

The new parser disallows such junk because it isn’t necessary and seems like an unintentional botch by the old parser. In practice, this caught a couple of prefs that accidentally had an extra ‘;’ at the end.

SUB char

The old parser allowed the SUB (0x1a) character between tokens and treated it like ‘\n’.

The new parser does not allow this character. SUB was used to indicate end-of-file (not end-of-line) in some old operating systems such as MS-DOS, but this doesn’t seem necessary today.

Invalid escapes

The old parser tolerated (with a warning) invalid escape sequences within string literals — such as “\q” (not a valid escape) and “\x1” and “\u12” (both of which have insufficient hex digits) — accepting them literally.

The new parser does not tolerate invalid escape sequences because it doesn’t seem necessary and would complicate things.

NUL char

The old parser tolerated the NUL character (0x00) within string literals; this is
dangerous because C++ code that manipulates string values with embedded NULs will almost certainly consider those chars as end-of-string markers.

The new parser treats the NUL character as end-of-file, to avoid this danger. (The escape sequences “\x00” and “\u0000” are also disallowed.)

Integer overflow

The old parser allowed integer literals to overflow, silently wrapping them.

The new parser treats integer overflow as a parse error. This seems better,
and it caught overflows of several existing prefs.


Error recovery minimizes the risk of data loss caused by the increased strictness because malformed pref lines in prefs.js will be removed but well-formed pref lines afterwards are preserved.

Nonetheless, please keep an eye out for any other problems that might arise from this change.


I mentioned before that I wanted to make some adjustments to the preferences file grammar. Specifically, I changed the grammar used by default preference files (but not user preference files) to support annotating each preference with one or more boolean attributes. The attributes supported so far are ‘sticky’ and ‘locked’. For example:

pref("sticky.pref", true, sticky);
pref("locked.pref", 123, locked);
pref("sticky-and-locked-pref", "blah", sticky, locked);

Note that the addition of the ‘locked’ attribute fixed a 10 year old bug.

When will this ship?

All of these changes are on track to ship in Firefox 60, which is due to release on May 9th.

Firefox Test PilotFun with Themes in Firefox

TL;DR: Last year, I started work on a new Test Pilot experiment playing with themes in Firefox.

New theme APIs are fun

At the core of this experiment are new theme APIs for add-ons shipping with Firefox.

These APIs take inspiration from static themes in Google Chrome, building from there to enable the creation of dynamic themes.

For example, Quantum Lights changes based on the time of day.

VivaldiFox reflects the sites you’re visiting.

You could even build themes that use data from external HTTP services — e.g. to change based on the weather.

To explore these new APIs, Firefox Themer consists of a website and a companion add-on for Firefox. The website offers a theme editor with a paper doll preview — you can click on parts of a simulated browser interface and dress it up however you like. The add-on grants special powers to the website, applying changes from the theme in the editor onto the browser itself.

Editing themes on the web

The site is built using Webpack, React, and Redux. React offers a solid foundation for composing the editor. Personally, I really like working with stateless functional components — they’re kind of what tipped me over into becoming a React convert a few years ago. I’m also a terrible visual designer with weak CSS-fu — but using Webpack to bundle assets from per-component directories makes it easier for teammates to step in where I fall short.

Further under the hood, Redux offers a clean way to manage theme data and UI state. Adding undo & redo buttons is easy, thanks to redux-undo. And, by way of some simple Redux middleware, I was able to easily add a hook to push every theme change into the browser via the add-on.

The website is just a static page — there’s no real server-side application. When you save a theme, it ends up in your browser’s localStorage. Though we plan to move Themer to a proper production server when we launch in Test Pilot, I’ve been deploying builds to GitHub Pages during development.

Another interesting feature of the website is that we encode themes as a parameter in the URL. Rather than come up with a bespoke scheme, I use this json-url module to compress JSON and encode it as Base64, which makes for a long URL but not unreasonably so. This approach enables folks to simply copy & paste a URL to share a theme they’ve made. You can even link to themes from a blog post, if you wanted to!
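Purely to illustrate the round trip (omitting json-url’s compression step, and with an invented page URL), the idea is roughly:

// Sketch of the share-a-theme-in-the-URL idea. The real editor uses the
// json-url module, which also compresses; this only shows the basic round trip.
const theme = { colors: { toolbar: "#4a4a4f", accentcolor: "#0c0c0d" } };

// Encode: theme object -> JSON -> base64 -> query parameter.
const encoded = encodeURIComponent(btoa(JSON.stringify(theme)));
const shareUrl = `https://themer.example.com/?theme=${encoded}`;

// Decode: query parameter -> base64 -> JSON -> back into the editor's store.
const param = new URL(shareUrl).searchParams.get("theme");
const restored = param ? JSON.parse(atob(param)) : null;

console.log(restored?.colors.toolbar); // "#4a4a4f"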

When the page loads and sees the ?theme URL, it unpacks the data and loads it into editor’s Redux store. I’ve also been able to work this into the location bar with the HTML5 History API and Redux middleware. The browser location represents the current theme, while back & forward buttons double as undo & redo.

Add-ons can be expansion cartridges

The companion add-on is also built using Webpack. It acts as an expansion cartridge for the theme editor on the website.

(Can you tell I’ve had retro computers on the mind, lately?)

Add-ons in Firefox can install content scripts that access content and data on web pages. Content scripts can communicate with the parent add-on by way of a message port. They can also communicate with a web page by way of synthetic events. Put the two together, and you’ve got a messaging channel between a web page and an add-on in Firefox.

Here’s the heart of that messaging bridge:
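The snippet itself is embedded in the original post and doesn’t survive syndication here, so the following is only a sketch of what such a bridge can look like: a content script relaying synthetic DOM events from the page to the add-on over a runtime port and back again. The event names and message shapes are invented for illustration.

// content-script.ts (illustrative sketch, not the actual Themer source)
declare const browser: any; // WebExtensions API namespace (or use webextension-polyfill)

const port = browser.runtime.connect({ name: "themer-bridge" });

// Page -> add-on: the site dispatches a CustomEvent carrying theme data,
// and the content script forwards it over the port.
window.addEventListener("ThemerUpdate", event => {
  const detail = (event as CustomEvent).detail;
  port.postMessage({ type: "applyTheme", theme: detail });
});

// Add-on -> page: replies come back over the port and are re-dispatched
// to the page as another synthetic event.
port.onMessage.addListener((message: unknown) => {
  window.dispatchEvent(new CustomEvent("ThemerResponse", { detail: message }));
});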

With this approach, the web page doesn’t actually gain access to any Firefox APIs. The add-on can decide what to do with messages it receives. If the page sends invalid data or asks to do something not supported — nothing happens. Here’s a snippet of that logic from the extension:
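Again, the snippet is an embed in the original post; as a hedged sketch, the extension-side logic amounts to a whitelist of known message types, with everything else ignored (the message names and validation below are invented for illustration):

// background.ts (illustrative sketch): only act on messages we recognize.
declare const browser: any; // WebExtensions API namespace (or use webextension-polyfill)

type ThemeData = { colors?: Record<string, string> };

function isValidTheme(theme: unknown): theme is ThemeData {
  // Real validation would check every field against an allowed schema.
  return typeof theme === "object" && theme !== null && "colors" in theme;
}

browser.runtime.onConnect.addListener((port: any) => {
  port.onMessage.addListener((message: { type?: string; theme?: unknown }) => {
    switch (message.type) {
      case "applyTheme":
        if (isValidTheme(message.theme)) {
          browser.theme.update(message.theme); // WebExtensions theme API
        }
        break;
      case "resetTheme":
        browser.theme.reset();
        break;
      default:
        // Unknown or malformed request: do nothing.
        break;
    }
  });
});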

And here’s a peek at that Redux middleware I mentioned earlier which updates the add-on from the web:
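In the same spirit, since the embedded gist isn’t reproduced here, a minimal sketch of such middleware (assuming a `theme` slice in the store; names are illustrative) might look like:

import { Middleware } from "redux";

// After every action, if the theme slice changed, notify the add-on by firing
// the synthetic event the content script listens for.
const themeSyncMiddleware: Middleware = store => next => action => {
  const before = store.getState().theme;
  const result = next(action);
  const after = store.getState().theme;

  if (after !== before) {
    window.dispatchEvent(new CustomEvent("ThemerUpdate", { detail: after }));
  }
  return result;
};

export default themeSyncMiddleware;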

The add-on can also restrict the set of pages from which it will accept messages: We hardcode the URL for the theme editor into the add-on’s content script configuration at build time, which means no other web page should be able to ask the add-on to alter the theme in Firefox.

Add-on detection is hard

There is a wrinkle to the relationship between website and add-on, though: A normal web page cannot detect whether or not a particular add-on has been installed. All the page can do is send a message. If the add-on responds, then we know the add-on is available.

Proving a negative, however, is impossible: the web page can’t know for sure that the add-on is not available. Responses to asynchronous messages take time — not necessarily a long time, but more than zero time.

If the page sends a message and doesn’t get a response, that doesn’t mean the add-on is missing. It could just mean that the add-on is taking a while to respond. So, we have to render the theme editor such that it starts off by assuming the add-on is not installed. If the add-on shows up, minutes or milliseconds later, the page can update itself to reflect the new state of things.

Left as-is, you’d see several flashes of color and elements on the page move as things settle. That seems unpleasant and possibly confusing, so we came up with a loading spinner:

When the page loads, it displays the spinner and a timer starts. If that timer expires, we consider things ready and reveal the editor. But, if there’s any change to the Redux store while that timer is running, we restart the clock.

This is the gist of what that code does:
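Since the original embed isn’t shown here, this is a rough sketch of that settle-timer logic (the delay and function names are illustrative):

import { Store } from "redux";

const SETTLE_DELAY_MS = 500; // arbitrary guess, tuned by feel

// Reveal the editor only once the store has been quiet for SETTLE_DELAY_MS.
function revealWhenSettled(store: Store, reveal: () => void): void {
  let timer: number | undefined;

  const done = () => {
    unsubscribe();
    reveal();
  };
  const restart = () => {
    window.clearTimeout(timer);
    timer = window.setTimeout(done, SETTLE_DELAY_MS);
  };

  // Any store change (decoding a shared theme, add-on responses, ...)
  // restarts the clock.
  const unsubscribe = store.subscribe(restart);
  restart();
}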

Early changes to the store are driven by things like decoding a shared theme and responses from the add-on. Again, these are asynchronous and unpredictable. The timer duration is an arbitrary guess I made that seems to feel right. It’s a dirty hack, but it seems like a good enough effort for now.

Using npm scripts and multiple Webpack configs

One of the things that has worked nicely on this project is building everything in parallel with a single npm command. You can clone the repo and kick things off for development with a simple npm install && npm start dance.

The add-on and the site both use Webpack. There’s a shared config as a base and then specific configurations with tweaks for the site and the add-on. So, we want to run two separate instances of Webpack to build everything, watch files, and host the dev server.

This is where npm-run-all comes in: It’s a CLI tool that lets you run multiple npm scripts. I used to use gulp to orchestrate this sort of thing, but npm-run-all lets me arrange it all in package.json. It would be fine if this just enabled running scripts in series. But, npm-run-all also lets you run scripts in parallel. The cherry on top is that this parallelization works on Linux, OS X, and Windows.

In past years, Windows support might have been an abstract novelty for me. But, in recent months, I’ve switched from Apple hardware to a PC laptop. I’ve found the new Windows Subsystem for Linux to be essential to that switch. But, sometimes it’s nice to just fire up a Node.js dev environment directly in PowerShell — npm-run-all lets me (and you) do that!

So, the start script in our package.json is able to fire up both Webpack processes for the site and add-on. It can also start a file watcher to run linting and tests (when we have them) alongside. That simplifies using everything in a single shell window across platforms.

I used to lean on Vagrant or Docker to offer something “simple” to folks interested in contributing to a project. But, though virtual machines and containers can hide apparent complexity in development, it’s hard to beat just running things in node on the native OS.

Help us make themes more fun!

We’re launching this experiment soon. And, though it only makes limited use of the new theme APIs for now, we’re hoping that the web-based editor and ease of sharing makes it fun & worth playing with. We’ve got some ideas on what to add over the course of the experiment and hope to get more from the community.

Whether you can offer code, give feedback, participate in discussions, or just let us watch how you use something — everyone has something valuable to offer. In fact, one of the overarching goals of Test Pilot is to expand channels of contribution for folks interested in helping us build Firefox.

As with all Test Pilot experiments, we’ll be watching how folks use this stuff as input for what happens next. We also encourage participation in our Discourse forums. And finally, the project itself is open source on GitHub and open to pull requests.

In the meantime, start collecting color swatches for your own theme. Personally, I might try my hand at a Dracula theme or maybe raid my Vim config directory for some inspiration.

Originally published at on March 1, 2018.

Fun with Themes in Firefox was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Servo BlogMozilla’s Servo team joining Mixed Reality

Servo had an amazing year in 2017. We saw the style system ship and deliver performance improvements as a flagship element of the highly regarded Firefox Quantum release. And we’ve continued to build out the engine platform and experiment with new embedding APIs, innovations in graphics and font rendering, and graduate subsystems to production readiness for inclusion in Firefox. Consistently throughout those efforts, we saw work in Servo demonstrate breakthrough advances in parallelism, graphics rendering, and robustness.

Coming in to 2018, we see virtual and augmented reality devices transitioning from something just for hardcore gamers and enterprises into broad consumer adoption. These platforms will transform the way that users create and consume content on the internet. As part of the Emerging Technologies and Mozilla Research missions to enable the web platform on these new systems, we will be adopting the Mozilla Servo team as part of the Mixed Reality team and doubling down on our investigations in virtual and augmented reality. Servo is already the platform where we first implemented support for mobile VR, extensions such as WebGL MultiView, and even our sneak peek running on the Qualcomm Snapdragon 835 developer kit and compatible AR glasses from last September. Servo’s lean, modern code base and leading-edge strengths in parallelism and graphics are ideal for prototyping new technology for the web and growing the results into production code usable both inside and outside of Servo.

Servo 6DOF A-Blast from September 2017

What does this look like concretely? The first thing we will do is get Servo implementing the GeckoView API, working inside one of our existing mobile browser shell apps, and working with a ton of VR and AR devices so that it can run hand-in-hand with our existing use of Gecko in our Mixed Reality Browser. Like our WebXR iOS Viewer, this will give us a platform where we can experiment, drive standards forward, and build compelling pilot experiences.

Some of the experiments we’re looking to invest more in during 2018:

  • Declarative VR. We have libraries like Three.js, Babylon.js, A-Frame, and ReactVR and tools like PlayCanvas and Unity to produce 3D content, but there are no standards yet for how traditional web pages should behave when loaded into a headset.

  • We will continue to experiment with things like DOM to texture. It is still difficult to allow web content to be part of a 3D scene.

  • Higher quality text rendering with WebRender and Pathfinder, originally designed for desktop but now tuned for VR and AR hardware.

  • Experiment with new AR APIs and computer vision.

  • Experiment with new WebGL extensions (multiview, lens-matched shading, etc.)

  • Experiments with device & voice APIs (WebBluetooth, Physical Web/Beacon successors, etc.)

Keep tuned here and to the Mozilla Mixed Reality blog for more updates! It’s going to be a thrilling year.

Hacks.Mozilla.OrgHands-On Web Security: Capture the Flag with OWASP Juice Shop

As a developer, are you confident that you know what you need to know about web security? Wait, maybe you work in infosec. As a security specialist, are you confident that the developers you work with know enough to do the right thing?

Screenshot of OWASP Juice Shop

Often, these aren’t easy questions to answer, even for seasoned security professionals working with world class software engineers as we do at Mozilla.

OK, you can watch tutorial videos and take a variety of online tests, but it’s always more fun to try things in real life with a group of friends or colleagues. Our recent Mozilla all-hands was one of those opportunities.

A Capture the Flag (CTF) event offers a sociable, hands-on way to learn about security, and such events are often a tradition at security conferences.

I’m part of the Mozilla Firefox Operations Security team and we work closely with all Mozilla developers to make sure that the core services Mozilla relies on to build, ship, and run Firefox are as secure as possible.

In this retrospective, I’ll show how you can easily set up a CTF event using free and open source software, as the Security team did back in December, when we gathered in Austin for the Mozilla All Hands event.

Customizing OWASP Juice Shop

We chose OWASP Juice Shop, a web app intentionally designed to be insecure for training purposes. Juice Shop uses modern technologies like Node.js, Express and AngularJS, and provides a wide range of security challenges ranging from the simple to the complex. This was important for us since our participants had a wide range of skills, from developers with little formal security training to professional penetration testers.

Juice Shop is a “single user application,” but it comes with a CTF mode and detailed instructions for Hosting a CTF Event. When this is turned on, the application generates “CTF-tokens” anytime someone solves one of the challenges. These can then be uploaded to a central scoring server. The CTF mode also disables the hints which might have made some of the challenges too easy for our more advanced players.

Juice Shop can be run in a wide variety of ways, but to make it easy for your participants I recommend using a docker image, as this has only one dependency: docker.

You can find the official Juice Shop docker image on Docker Hub, or you can build your own if you want to customize it. You can find customization instructions online.

We enabled the built-in CTF mode and changed the application name and the example products in order to make it feel more Firefox-y and to hide its origin (as solutions for the Juice Shop challenges are easily found on the internet).

Once we were happy with our changes we uploaded our image to dockerhub: mozilla/ctf-austin

Screenshot of Mozilla-customized OWASP Juice Shop

Setting Up a Scoring Server

You’ll want to set up a scoring server, to allow participants to upload their CTF-tokens and compare their scores with everyone else. It definitely helped encourage competition among our participants!

A scoring server should also provide a summary of each of the challenges and the points each challenge is worth. For this we used CTFd – it’s easy to install and there’s an officially supported tool for importing the Juice Shop challenges into CTFd which can be run using:

npm install -g juice-shop-ctf-cli

You’re then presented with a set of questions that allow you to tune the setup to your requirements.

Running the CTF

To get your CTF event underway you just need to tell participants the URL of your CTFd server and how to get Juice Shop running locally. If you are using the official image, here’s how to go about running Juice Shop locally:

docker pull bkimminich/juice-shop
docker run -d -e "NODE_ENV=ctf" -p 3000:3000 bkimminich/juice-shop

If you’re using your own image then change the image name, and if you have the CTF option enabled then your run command won’t need the -e "NODE_ENV=ctf" part:

docker pull mozilla/ctf-austin
docker run -d -p 3000:3000 mozilla/ctf-austin

In either case, participants will now be able to access their own local copy of Juice Shop via http://localhost:3000/

Although some of the Juice Shop security challenges can be solved just by using Firefox, a security tool that proxies your browser will really help.

A good option for this is OWASP ZAP (for which I’m the project leader), a free and open source security tool specifically designed to find security vulnerabilities in web applications.

ZAP sits between your browser and the application you want to test and shows all of the traffic that flows between them. It also allows you to intercept and change that traffic and provides a wide range of automated and manual features that can be used to test the application. If you use ZAP you won’t need to change your browser settings, as ZAP can launch Firefox (or any other locally installed browser) preconfigured to proxy through ZAP.


Remind all participants to explore Juice Shop as thoroughly as they can – you can’t find all the issues if there are features that you are not aware of. Suggest that they start with the easiest challenges (the ones with the fewest points) and work upwards, as the challenges are designed to get progressively harder.

A graph of the top 10 teams and their results on the challenges

If you are running the CTF over several days (as we did), it’s a good idea to be available for help and advice. We set up a private irc channel, a Google group, and held daily check-in sessions where anyone could come along and ask us questions about the event, and get help on solving the challenges.

Graphs and charts from CTFd showing Score Charts, Key Percentages, Category Breakdowns

On the last day of our event, we held a final session to congratulate the winners, revealed the app’s origin and handed out Juice Shop stickers kindly provided by Björn Kimminich (the JuiceShop project lead).

Outcomes and Next Steps

Running a Capture the Flag event is a great way to raise security awareness and knowledge within a team, a company, or an organization.

Juice Shop is an ideal application for a CTF as it’s based on modern web technologies and includes a wide range of challenges. It’s very well thought out and well supported. The fact that it’s a real application with realistic vulnerabilities, rather than a set of convoluted tasks, makes it ideal for learning about application security.

Our Mozilla/Firefox custom Juice Shop app is available on Docker Hub as mozilla/ctf-austin. Unless you particularly want to use a Mozilla-branded version, we recommend the original Juice Shop app. (Note: it has already been updated since we forked our copy.)
And if you haven’t played with it yet, then I strongly recommend doing so. It’s a lot of fun and you’ll almost certainly learn something.

In the end, over 20 people registered for our event and their feedback was very positive:

“The cookie / JWT stuff is the most illuminating part of this.”

“This whole thing is excellent thanks for putting it together.”

“I hate the fact I can’t focus on my things because I’d like to solve more ctf tasks and learn something.”

“It’s awesome because I’m planning to improve my sec skills.”

“This has been a lot of fun – thanks for setting it up.”

Photo of Mozilla Y'All-Hands CTF participants

Not surprisingly, two of our pen testers who took part did very well, but they were given a run for their money by one of our operations staff who clearly knows a lot about security!

Do you have a knack for uncovering security vulnerabilities? At Mozilla, we have a Web and Services Bug Bounty Program. We welcome your help in making Mozilla even more secure. You could even earn some bounty rewards for your efforts. And we’re always looking for contributors to help us make ZAP better, so if that sounds interesting, have a look at Contributing to OWASP ZAP.

Daniel PocockBug Squashing and Diversity

Over the weekend, I was fortunate enough to visit Tirana again for their first Debian Bug Squashing Party.

Every time I go there, female developers (this is a hotspot of diversity) ask me if they can host the next Mini DebConf for Women. There have already been two of these very successful events, in Barcelona and Bucharest. It is not my decision to make though: anybody can host a MiniDebConf of any kind, anywhere, at any time. I've encouraged the women in Tirana to reach out to some of the previous speakers personally to scope potential dates and contact the DPL directly about funding for necessary expenses like travel.

The confession

If you have read Elena's blog post today, you might have seen my name and picture and assumed that I did a lot of the work. As it is International Women's Day, it seems like an opportune time to admit that isn't true and that as in many of the events in the Balkans, the bulk of the work was done by women. In fact, I only bought my ticket to go there at the last minute.

When I arrived, Izabela Bakollari and Anisa Kuci were already at the venue getting everything ready. They looked busy, so I asked them if they would like a bonus responsibility: presenting some slides about bug squashing that they had never seen before while translating them into Albanian in real-time. They delivered the presentation superbly; it was more entertaining than any TED talk I've ever seen.

The bugs that won't let you sleep

The event was boosted by a large contingent of Kosovans, including 15 more women. They had all pried themselves out of bed at 03:00 am to take the first bus to Tirana. It's rare to see such enthusiasm for bugs amongst developers anywhere but it was no surprise to me: most of them had been at the hackathon for girls in Prizren last year, where many of them encountered free software development processes for the first time, working long hours throughout the weekend in the summer heat.

and a celebrity guest

A major highlight of the event was the presence of Jona Azizaj, a Fedora contributor who is very proactive in supporting all the communities who engage with people in the Balkans, including all the recent Debian events there. Jona is one of the finalists for Red Hat's Women in Open Source Award. Jona was a virtual speaker at DebConf17 last year, helping me demonstrate a call from the Fedora community WebRTC service to the Debian equivalent. At Mini DebConf Prishtina, where fifty percent of talks were delivered by women, I invited Jona on stage and challenged her to contemplate being a speaker at Red Hat Summit. Giving a talk there seemed like little more than a pipe dream just a few months ago in Prishtina: as a finalist for this prestigious award, her odds have shortened dramatically. It is so inspiring that a collaboration between free software communities helps build such fantastic leaders.

With results like this in the Balkans, you may think the diversity problem has been solved there. In reality, while the ratio of female participants may be more natural, they still face problems that are familiar to women anywhere.

One of the greatest highlights of my own visits to the region has been listening to some of the challenges these women have faced, things that I never encountered or even imagined as the stereotypical privileged white male. Yet despite enormous social, cultural and economic differences, while I was sitting in the heat of the summer in Prizren last year, it was not unlike my own time as a student in Australia, and the enthusiasm and motivation of these young women discovering new technologies was just as familiar to me as the climate.

Hopefully more people will be able to listen to what they have to say if Jona wins the Red Hat award or if a Mini DebConf for Women goes ahead in the Balkans (subscribe before posting).

The Firefox FrontierCelebrating 24 incredible women on International Women’s Day

This International Women’s Day Mozilla is celebrating 24 remarkable women who are using the web to change the world. We’re recognizing them throughout the day on the Mozilla twitter feed. … Read more

The post Celebrating 24 incredible women on International Women’s Day appeared first on The Firefox Frontier.

Mozilla Localization (L10N)L10n Report: March Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.


New localizers:

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

We enabled several new locales on Pontoon in the past weeks; get in touch if you speak the language and want to contribute:

New content and projects

What’s new or coming up in Firefox desktop

Firefox 59 closed down for localization on February 28, while 60 will remain open until April 25. Firefox 60 is an ESR release, so it’s particularly important to catch up with localization and ensure good quality in the shipping build.

From a localization point of view, the focus remains on migrating existing Preferences strings to Fluent. We have recently passed the 100 messages milestone, with the migration of the XUL portion of the General pane, and started introducing some of the cool features available in the new localization system.

use-current-pages =
    .label =
        { $tabCount ->
            [1] Use Current Page
           *[other] Use Current Pages
        }
    .accesskey = C

For more details about this migration, see the announcement on dev-l10n. Also make sure to familiarize yourself with both Pontoon’s UI and Fluent syntax.

More migrations are planned for the 61 Nightly cycle, starting on March 12.

Activity Stream (New Tab) is also planning to integrate its settings in Firefox main preferences for 60: new strings have already landed in the Activity Stream project, a few more will land in mozilla-central.

What’s new or coming up in mobile

There’s a lot going on around mobile this month, so hang on tight!

First of all, Firefox for iOS is soon launching its v11. The l10n deadline for this is Tuesday, March 13th. This release includes many cool new features and improvements, such as:

  • Improved tracking protection
  • iPad specific improvements in navigation, drag and drop
  • Keyboard support
  • Performance telemetry

On the Firefox for Android front, merge day is the same as for Desktop: March 12th. The official release is the next day. Locales supported by the Play Store should expect to get an updated What’s New string for Beta 60 this week. Please remember to check your appstores folder on Pontoon for anything new! 🙂

Firefox for Fire TV v2 is also right around the corner, and the l10n deadline for completion is Mon March 12. Firefox for Fire TV currently supports 8 languages but have no fear – an in-app locale switcher is in the works for future versions. This means we will be able to provide as many languages as we want (well… almost! 😉 ).

Focus for Android is regaining a bit of activity this month, with a few new strings exposed. Something important to note is that the mobile team stopped using Buddybuild for testing on device. The APKs are now on TaskCluster, which is where you will need to go to test your l10n work on a device (you may need to uninstall your current Buddybuild build first, though). A v4.1 is expected at the end of the month – more details to come soon.

Focus for iOS continues to be on pause for now. Expect spot releases here and there. As always, stay tuned on the dev-l10n mailing list for any updates!

On a more or less related topic – Mozilla is participating in the Google Summer of Code project, and we’re looking to mentor students to work on some really cool projects… come check them out (especially the ones related to Pontoon, wink wink)!

What’s new or coming up in web projects

The Common Voice project is Mozilla’s initiative to help teach machines how real people speak. The team first came to speak to the localization communities at the Berlin l10n workshop last September. The discussion generated a lot of interest and shed some light on the technical challenges as well. The project is made up of two parts: the web page and the recordings.

Fast forward, and the web part of the project is now open to localization. The team has been overwhelmed with enthusiasm and responses. The website is being localized in 17 languages. The project has attracted quite a following: new languages have been added, and new contributors have been introduced to established communities. Thanks to all who helped with onboarding the new contributors, and to our new contributors who have joined the Mozillian community.

To the locale managers: please review new suggestions and provide constructive feedback to the new localizers in a timely manner. Share your glossary and style guide if they are available. Suggest that they sign up for the web project mailing lists for future updates, and create an account!

A few words on where the project stands:

  • The website on production is in English for now. Localized websites are on staging only.
  • The sync between Pontoon and GitHub is every 40 minutes.
  • Recording is in English only. However, all speakers, native and non-native, are welcome to create a profile and contribute to the recordings.
  • The first non-English recordings will be in Macedonian.
  • For additional questions, visit the FAQ page.

What’s new or coming up in Foundation projects

Not a whole lot is happening on localization this month on the Foundation side; we are mostly focusing on collecting feedback from our donors to improve our campaigns and running different campaigns in the U.S.
But the Advocacy team is still prepping a copyright push before the JURI vote, which has been delayed once again. They are also exploring campaign ideas around connected toys that could be launched in certain European markets, but the toys themselves are still being investigated. More to come soon!

What’s new or coming up in Pontoon

In February, a team of Pontoon contributors met in Munich to drill into Translate.Next, our effort at rewriting the translate view of Pontoon and building the foundations of the future of Pontoon as a whole. One of the topics we discussed was how to involve the community in the development of Pontoon more closely. As the first step we created a Pontoon category on Discourse, where we’re hoping to get feedback on proposed developments as well as ideation and problem reports.

Additional way to review localized emails

Reviewing Mozilla localized emails can be challenging, especially when you can’t easily display all the strings at once, strip the HTML code and read it in a format close to what will be sent to subscribers.

Sometimes, inconsistencies or typos are hard to notice before receiving an actual test send proof. While it’s still possible to fix them at this point, it’s much more error prone and requires further edits. And usually, only major issues can be fixed.

This new review mode aims to provide a quick preview of localized emails, even if it can’t replicate everything from the final email template, and it does not intend to replace a final review and approval of the test send.

Once you’re done with a first translation, you can proofread the email in its entirety, and adjust your translation if necessary. Translations are pulled automatically from Git every 15 minutes.

If you want to quickly check the source of a string, just hover your cursor over the translation, the English string will appear in a tooltip.

How do I access the review mode?

Simply open this page (also linked in the Engagement project info on Pontoon), find the file you just translated, then click the REVIEW button. You can use the top menu to jump to your locale.

We hope you will find it handy! And if not, let us know how we can improve it.

Newly published localizer facing documentation

This was mentioned last month, but it’s always good to have a reminder. Come check out what Kekoa (our l10n intern from last summer) wrote out to help l10n communities (yes, that’s YOU!) write simple – but thorough – style guides.

Make sure to start out by reading the Mozilla General Style Guide first, since it provides the basic guidelines for translating Mozilla products. This guide should be used in coordination with a locale-specific style guide for your language.

Then look at some general guidelines that will help you create your own locale style guide.

We encourage communities to participate actively and create or modify their existing style guides here. Everything is explained in the documentation above, so don’t be shy and give it a try! (or ask anyone from l10n-drivers to help out if you’re not sure how to proceed).

Now, are you interested in learning more about Fluent from a localizer perspective? Then this is just what you need!

Friends of the Lion

Image by Elio Qoshi

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Air MozillaReps Weekly Meeting, 08 Mar 2018

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

The Mozilla BlogMozilla experiment aims to reduce bias in code reviews

Mozilla is kicking off a new experiment for International Women’s Day, looking at ways to make open source software projects friendlier to women and racial minorities. Its first target? The code review process.

The experiment has two parts: there’s an effort to build an extension for Firefox that gives programmers a way to anonymize pull requests, so reviewers will see the code itself, but not necessarily the identity of the person who wrote it. The second part is gathering data about how sites like Bugzilla and GitHub work, to see how “blind reviews” might fit into established workflows.

The idea behind the experiment is a simple one: If the identity of a coder is shielded, there’s less opportunity for unconscious gender or racial bias to creep into decision-making processes. It’s similar to an experiment that began in the 1970s, when U.S. symphonies began using blind auditions to hire musicians. Instead of hand-picking known proteges, juries listened to candidates playing behind a screen. That change gave women an edge: They were 50 percent more likely to make it past the first audition if their gender wasn’t known. Over the decades, women gained ground, going from 10 percent representation in orchestras to 35 percent in the 1990s.

Mozilla is hoping to use a similar mechanism – anonymity – to make the code review process more egalitarian, especially in open source projects that rely on volunteers. Female programmers are underrepresented in the tech industry overall, and much less likely to participate in open source projects. Women account for 22 percent of computer programmers working in the U.S., but only 11 percent of them contribute to open source projects. A 2016 study of more than 50 GitHub repositories revealed that, in fact, women’s pull requests were approved more often than their male counterparts’ – nearly 3% more often. However, if their gender was known, female coders were 0.8% less likely to have their code accepted.

What’s going on? There are two possible answers. One is that people have an unconscious bias against women who write code. If that’s the case, there’s a test you can take to find out: Do I have trouble associating women with scientific and technical roles?

Then there is a darker interpretation: that men are acting deliberately to keep computer programming a boy’s club, rather than accepting high-quality input from women, racial minorities, transgender individuals, and economically underprivileged folks.

A Commitment to Diversity

What does it mean to be inclusive and welcoming to female software engineers? It means, first of all, taking stock of what kind of people we think will do the best job creating software.

“When we talk about diversity and inclusion, it helps to understand the “default persona” that we’re dealing with,” said Emma Humphries, an engineering program manager and bugmaster at Mozilla. “We think of a typical software programmer as a white male with a college education and full-time job that affords him the opportunity to do open source work, either as a hobby or as part of a job that directly supports open source projects.”

This default group comes with a lot of assumptions, Humphries said. They have access to high-bandwidth internet and computers that can run a compiler and development tools, as opposed to a smartphone or a Chromebook. “When we talk about including people outside of this idealized group, we get pushback based on those assumptions,” she said.

For decades, white men have dominated the ranks of software developers in the U.S. But that’s starting to change. The question is, how can we deal with biases that have been years in the making?

Inventing a Solution

Mozilla’s Don Marti, a strategist for Mozilla’s Open Innovation group, decided to take on the challenge. Marti’s hypothesis was: If I don’t know who requested the code review, then I won’t have any preconceived notions about how good or bad the code might be. Marti recruited Tomislav Jovanovic, a ten-year veteran of Mozilla’s open source projects, to create a blinding mechanism for code repositories like GitHub. That way, reviewers can’t see the gender, location, user name, icon, or avatar associated with a particular code submission.

Jovanovic was eager to contribute. “I have been following tech industry diversity efforts for a long time, so the idea of using a browser extension to help with that seemed intriguing,” he said. “Even if we are explicitly trying to be fair, most of us still have some unconscious bias that may influence our reviews based on the author’s perceived gender, race, and/or authority.”

Bias goes the other way as well, in that reviewers might be less critical of work by their peers and colleagues. “Our mind often tricks us into skimming code submitted by known and trusted contributors,” Jovanovic said. “So hiding their identities can lead to more thorough reviews, and ultimately better code overall.”

Test and Measure

An early prototype of a Firefox Quantum add-on can redact the identity of a review requestor on Bugzilla and the Pull Request author on GitHub. It also provides the ability to uncover that identity, if you prefer to get a first look at code without author info, then greet a new contributor or refer to a familiar contributor by name in your comments. Early users can also flag the final review as performed in “blind mode”, helping gather information about who is getting their code accepted and measuring how long the process takes.
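The extension’s internals aren’t described in detail here, but conceptually the redaction can be as simple as a content script hiding author information until the reviewer opts to reveal it. The sketch below is purely illustrative: the selectors are invented and do not correspond to real Bugzilla or GitHub markup.

// Illustrative only: hide author names and avatars on a review page.
const AUTHOR_SELECTORS = [".pr-author-name", ".pr-author-avatar"]; // made-up selectors

function setAuthorsHidden(hidden: boolean): void {
  for (const selector of AUTHOR_SELECTORS) {
    document.querySelectorAll<HTMLElement>(selector).forEach(el => {
      el.style.visibility = hidden ? "hidden" : "visible";
    });
  }
}

setAuthorsHidden(true);    // blind mode on by default
// setAuthorsHidden(false) would be wired to a "reveal author" control.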

Jovanovic is also gathering user input about what types of reviews could be blind by default and how to use a browser extension to streamline common workflows in GitHub. It’s still early days, but so far, feedback on the tests has been overwhelmingly positive.

Having a tool that can protect coders, no matter who they are, is a great first step to building a meritocracy in a rough-and-tumble programmer culture. In recent years, there have been a number of high-profile cases of harassment at companies like Google, GitHub, Facebook, and others. An even better step would be if companies, projects, and code repositories would adopt blind reviews as a mandatory part of their code review processes.

For folks who are committed to open source software development, the GitHub study was something of a downer. “I thought open source was this great democratizing force in the world,” said Larissa Shapiro, Head of Global Diversity and Inclusion at Mozilla. “But it does seem that there is a pervasive pattern of gender bias in tech, and it’s even worse in the open source culture.”

Small Bias, Big Impact

Bias in any context adds up to a whole lot more than hurt feelings. There are far-reaching consequences to having gender and racial bias in peer reviews of code. For the programmers, completing software projects – including review and approval of their code – is the way to be productive and therefore valued. If a woman is not able to merge her code into a project for whatever reason, it imperils her job.

“In the software world, code review is a primary tool that we use to communicate, to assign value to our work, and to establish the pecking order at work in our industry,” Shapiro said.

Ironically, demand for programming talent is high and expected to go higher. Businesses need programmers to help them build new applications, create and deliver quality content, and offer novel ways to communicate and share experiences online. According to the group Women Who Code, companies could see a massive shortfall of technical talent just two years from now, with as many as a million jobs going unfilled. At 59% of the U.S. workforce, women could help with that shortfall. However, they make up just 30% of workers in the tech industry today, and are leaving it faster than any other sector. So we’re not really heading in the right direction, in terms of encouraging women and other underrepresented groups to take on technical roles.

Maybe a clever bit of browser code can start to turn the tide. At the very least, we should all be invested in making open source more open to all, and accept high-quality contributions, no matter who or where they come from. The upside is there: Eliminate bias. Build better communities. Cultivate talent. Get better code, and complete projects faster. What’s not to like about that?

You can sign up for an email alert when the final version of the Blind Reviews Experiment browser extension becomes available later this year, and we’ll ask for your feedback on how to make the extensions as efficient and effective as possible.


The post Mozilla experiment aims to reduce bias in code reviews appeared first on The Mozilla Blog.

Mozilla Cloud Services BlogChanging your primary email in Firefox Accounts

The Firefox Accounts team recently introduced the ability for a user to change their primary email address. Being one of the main developers to work this feature, I wanted to share my experience and give a summary on what it took to get this feature to our users.

Our motivation

Based on user feedback, the most common scenario for changing your primary email was losing access to that email account. This email was often associated with work or an organization they were no longer a part of.

Most account systems would simply allow the user to continue logging in with their old email. However, because your Firefox Account can contain sensitive information, we needed to have an extra layer of security. This came in the form of us running heuristics on the login attempt and prompting you to verify that email address. For example, logging in from a device that has not had a login in over 3 days would require an email confirmation.

If you can no longer access that email address, you are locked out of your account and the data it contains. This errs on the side of security over user experience. The most common workaround was to create a new account and sync your existing data. This method meant that you could lose data on the old account if you were syncing from a new device.

Design decisions

Once we decided to move forward with the feature, we created a high level plan on how it was going to be done. Exploratory work was already done a few years ago that outlined the risks and a possible solution. We used this as a basis for our initial design.

One of the complexities of changing your Firefox Account email is that our login procedure combines email and password to derive a strong encryption key. This original design decision was driven by a security requirement and meant that we could not perform an email change in one operation, because we would lose part of the key.

Considering these factors, we opted to create an intermediate feature, adding a secondary email address, that would solve a few of the original problems while being designed to allow easy changes to the user’s primary email. Secondary email addresses also receive security notifications and can verify login requests.

While implementing secondary emails, we migrated from a single email on the account database table to supporting multiple emails in a separate emails table. Each email has a couple of flags to signify whether it is the primary and whether it is verified. Additionally, we wrote several migration scripts that populated our new emails table, while reads fell back to the account table if there weren’t any emails yet. This phased approach allowed us to safely roll back if any issues were found.
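A sketch of that phased read path might look like the following; the query helper and the exact table and column names are assumptions for illustration.

```javascript
// Hypothetical read path: prefer the new emails table, fall back to the
// legacy single-email column on the account table.
async function getAccountEmails(db, uid) {
  const rows = await db.query(
    "SELECT email, isPrimary, isVerified FROM emails WHERE uid = ?",
    [uid]
  );
  if (rows.length > 0) {
    return rows;
  }
  // Accounts created before the migration only have the legacy column;
  // synthesize a primary record matching the shape of the emails table.
  const [account] = await db.query(
    "SELECT email, emailVerified FROM accounts WHERE uid = ?",
    [uid]
  );
  return [{ email: account.email, isPrimary: true, isVerified: account.emailVerified }];
}
```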

After adding the secondary email feature, we were able to simplify our database so that the actual email change is simply flipping the isPrimary flag on an email. After that, our quality assurance team made sure there were no regressions and everything worked as expected.
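The email change itself then reduces to something like this sketch (again with a hypothetical database client): demote the current primary and promote the verified secondary in one transaction.

```javascript
// Hypothetical sketch of "flipping the isPrimary flag" in a single transaction.
async function changePrimaryEmail(db, uid, newPrimaryEmail) {
  await db.transaction(async (tx) => {
    await tx.query(
      "UPDATE emails SET isPrimary = 0 WHERE uid = ? AND isPrimary = 1",
      [uid]
    );
    await tx.query(
      "UPDATE emails SET isPrimary = 1 WHERE uid = ? AND email = ? AND isVerified = 1",
      [uid, newPrimaryEmail]
    );
  });
}
```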

Updating browsers and our services

Once the secondary emails feature landed, we then set our focus on updating all of our clients and services to support changing the primary email. In addition to the server side changes, updates were needed for each browser and service that uses a Firefox Account.

Before any of the browsers would pick up the email change, they needed to be updated to properly detect and fetch the updated profile information. The Desktop, Android, iOS, Pocket, AMO and Basket teams each had unique problems while trying to add support for this feature. If interested, you can check out the complete bug tree. Each one of the updates could be worthy of its own blog post.

After adding and verifying a secondary email, you now have the option to make it your primary!

Turning it on

While the Firefox Account team’s development schedule is fairly fixed, we could not risk turning this feature on until all of the clients and services were updated. This meant that we had to wait on external teams to finish testing and integrating the changes. Each browser and team could have a different schedule and timeline for getting fixes in.

While the complete feature rollout took several months, we were able to test the majority of the change email feature by putting it behind a feature flag and having users opt into it. Several bugs were found this way, as it gave our QA team a way to access the feature in production.
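In code, such a gate can be as small as the following sketch; the flag and field names are made up for illustration.

```javascript
// Hypothetical opt-in gate used while the feature was behind a flag.
function canChangePrimaryEmail(config, user) {
  return config.changeEmailEnabled === true || user.optedIntoChangeEmail === true;
}
```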

The final bug to remove the feature flag was merged in February, which turned it on for everyone.

Final thoughts

Our team kept putting this feature off because of the complexity and all the components involved. While the final verdict on how well this retains users is not in yet, I am happy that we were able to push through these challenges and deliver a long-requested feature to our user base. Below is a usage graph that shows that users are already changing their address and keeping their account updated.

Thanks to everyone and every team that helped review, develop, and push the changes needed for this feature!

The Mozilla BlogSetting the stage for our next chapter

2017 was a great year for Mozilla. From new and revitalized product releases across our expanding portfolio to significant progress in advocating for and advancing the open web with new capabilities and approaches, to ramping up support for our allies in the broader community, to establishing new strategic partnerships with global search providers — we now have a much stronger foundation from which we can grow our impact in the world.

Building on this momentum, we are making two important changes to our leadership team to ensure we’re positioned for even greater impact in the years to come.  I’m pleased to announce that Denelle Dixon has been promoted to Chief Operating Officer and Mark Mayo has been promoted to Chief Product Officer.

As Chief Operating Officer, Denelle will be responsible for our overall operating business, leading the strategic and operational teams that work across Mozilla to ensure we’re scaling our impact as a robust open source organization. Aligning these groups under Denelle’s leadership will ensure a holistic approach to business growth, development and operating efficiency by integrating the best of commercial and open innovation practices across all that we do.

As Chief Product Officer, Mark will oversee existing and new product development as we deepen and expand our product portfolio. In his new role, Mark will oversee Firefox, Pocket, and our Emerging Markets teams. Having all our product groups in one organization means we can more effectively execute against a single, clear vision and roadmap to ultimately give people more agency in every part of their connected lives.

Our mission is more important and urgent than ever, our goals are ambitious and I’m confident that together we will achieve them.


The post Setting the stage for our next chapter appeared first on The Mozilla Blog.

Wladimir PalantImplementing safe sync functionality in a server-less extension

The major change in PfP: Pain-free Passwords 2.1.0 is the new sync functionality. Given that this password manager is explicitly not supposed to rely on any server, how does this work? I chose to use existing cloud storage like Dropbox or Google Drive for this: PfP will upload its encrypted backup file there.

This would be pretty trivial, but the sync functionality is also supposed to merge records if data is modified by multiple clients concurrently. Not just that: sync has to work even when passwords are locked, meaning without the ability to decrypt data. The latter is addressed by uploading local data without any modifications. Records are encrypted in the same way both locally and remotely, so decrypting them is unnecessary.

Merging changes without access to decrypted data is more complicated. This is done by using record identifiers that are both deterministic (same site and password name result in the same record identifier on all devices) and opaque (don’t allow any conclusions about site and password name). PfP uses HMAC to create record identifiers, with the HMAC secret being a random byte sequence that is stored encrypted. When sync is set up for a device, its HMAC secret is replaced to make it match the HMAC secret of other devices connected to the same storage. After that a particular site/password combination is guaranteed to be stored with the same record identifier on all devices.
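A minimal sketch of such an identifier using the Web Crypto API, assuming the shared HMAC secret is available as raw bytes; the way site and password name are combined below is an assumption, not necessarily PfP's exact encoding.

```javascript
// Deterministic but opaque record identifier: HMAC-SHA256 over site + name.
// The "site\0name" input encoding is an assumption for illustration.
async function recordId(hmacSecretBytes, site, name) {
  const key = await crypto.subtle.importKey(
    "raw", hmacSecretBytes, { name: "HMAC", hash: "SHA-256" }, false, ["sign"]
  );
  const data = new TextEncoder().encode(`${site}\0${name}`);
  const mac = await crypto.subtle.sign("HMAC", key, data);
  // Hex-encode so the identifier can be used as a plain storage key.
  return Array.from(new Uint8Array(mac))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```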

The merge operation itself is then comparatively easy: PfP downloads the remote data and replaces any records (by record identifier) that changed locally since the previous sync with the local versions. It then needs to make sure that no conflicting changes by two clients are uploaded at the same time. This is fairly straightforward for Dropbox: you can always specify the file version you want to replace — if the file changed in the meantime, the operation fails and sync is restarted. The Google Drive API makes it more complicated: you have to use under-documented ETag functionality and cannot avoid conflicts when creating a new file. Worse yet, this feature only exists in the v2 API, whereas the newer v3 API has no conflict resolution whatsoever. One has to hope that Google doesn’t decide to deprecate the v2 API soon.
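Putting the pieces together, one sync pass looks roughly like the sketch below; the storage client and its revision handling are generic stand-ins for the Dropbox and Google Drive specifics described above.

```javascript
// Rough sketch of one sync pass; "storage" is a hypothetical client that
// supports conditional uploads keyed on the revision that was downloaded.
async function syncOnce(storage, local) {
  const { records: remoteRecords, revision } = await storage.download();

  // Start from the remote state and overlay records (by record identifier)
  // that changed locally since the previous successful sync.
  const merged = { ...remoteRecords };
  for (const id of local.changedSinceLastSync) {
    merged[id] = local.records[id];
  }

  try {
    // Replace only the revision we downloaded; if another client uploaded
    // in the meantime, this fails and the whole sync pass is restarted.
    await storage.upload({ records: merged }, { ifRevision: revision });
  } catch (error) {
    if (error.code === "conflict") {
      return syncOnce(storage, local);
    }
    throw error;
  }
}
```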

Altogether, the sync functionality required more effort than I imagined, but it works really well. And what about the Edge version that I promised before? Stuck in traffic. I figured out everything necessary with the 2.0.2 release a month ago already. However, it turned out that uploading Edge extensions to the Windows Store requires a special permission. I requested this permission and that’s where we still are. Microsoft is making this ridiculously complicated; I suspect that they don’t really want people to create Edge extensions.

Mozilla Addons BlogTheme API Update

This article is written by Michael de Boer, Mozilla Engineering Manager working on the Firefox Frontend team. Mike has been actively involved in themes for a long time and is excited to share the improvements coming to the WebExtensions Theme API.

Last spring we announced our plans to improve themes in Firefox and today I’d like to share our progress and what you can expect in the next few releases!

We started off by laying the groundwork to get a new type of theme supported; a new ‘theme’ WebExtension namespace was created and we made the Addon Manager aware of WebExtension Themes.

Our first milestone was to completely support the Lightweight Theme (LWT) features, because they’re so simple. This way we very quickly had our first new-style themes working, able to change the background image, background color, and foreground text color. We continued to implement more properties on top of this solid base and are moving toward Chrome compatibility at a good pace.

If you’ve created an extension before, writing your new Theme will be a walk in the park; you can use about:debugging and an extensive toolbox to load up and inspect your manifest-based theme or WebExtension that uses the ‘theme’ JavaScript API and has obtained the ‘theme’ permission.

What you can use today

Since Firefox 55, extension developers have been able to create extensions that can request permission to change the theme that’s currently active and use a number of JavaScript APIs provided by the `browser.theme` namespace.

We fondly call them ‘dynamic themes’, because you can mix and match WebExtension APIs to create wholly unique browser experiences that may reactively update parts of the browser theme.

In Firefox Quantum 57 you can use the following methods:

  • theme.update([windowId]), with which you can update the browser’s theme and optionally do that only for a specific window (a short sketch follows this list).
  • theme.reset([windowId]), which removes any theme updates made in a call to `theme.update()`. The optional windowId argument allows you to reset a specific window.
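Here is a minimal sketch of those two calls; the colors are arbitrary, the extension needs the ‘theme’ permission in its manifest, and depending on your Firefox version a header image may also be required.

```javascript
// Tint only the current window, and undo it again later.
async function tintCurrentWindow() {
  const win = await browser.windows.getCurrent();
  await browser.theme.update(win.id, {
    colors: { accentcolor: "#0c0c0d", textcolor: "#ffffff" }
  });
  return win.id;
}

function removeTint(windowId) {
  // Only removes updates previously applied via theme.update() to that window.
  browser.theme.reset(windowId);
}
```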

And in Firefox 58 you can use these:

As you might have noticed, the theme.update() method is where the magic happens. But it only does something pretty when you feed it a bag of properties that you want it to change.

These properties are defined in the schema, and the current set is listed below; a combined example follows the list:

  • images
    • additional_backgrounds: This is a list (JavaScript Array) of image URLs that will be used as the background of a browser window. Each image will be tiled relative to its predecessor.
    • headerURL: Some of you might recognise this property from LWTs; it may be used to set the topmost background image of a browser window.
    • theme_frame: Alias for the ‘headerURL’ property, which is there for compatibility with Chrome themes.
  • colors
    • accentcolor: Use this property to change the background color of a browser window. Usually, this will be set to look pretty next to the ‘headerURL’ you provide. This is also a property that comes from LWTs.
    • tab_background_text: Alias for the ‘textcolor’ property, providing compatibility with Chrome Themes. (Since Firefox 59.)
    • bookmark_text: Alias for the ‘toolbar_text’ property, providing compatibility with Chrome themes. (Since Firefox 58.)
    • frame: Alias for the ‘accentcolor’ property, providing compatibility with Chrome themes.
    • tab_text: Alias for the ‘textcolor’ property, providing compatibility with Chrome themes.
    • textcolor: Use this property to change the foreground color of a browser window; it is used for the tab titles, for example. This is also a property that comes from LWTs.
    • toolbar: Change the background color of browser toolbars using this property. This property is also supported by Chrome themes.
    • toolbar_text: And this property can be used to change the foreground color inside browser toolbars, like button captions.
    • toolbar_field: Use this property to change the background color of the Awesomebar. This property is also supported by Chrome themes.
    • toolbar_field_text: Use this property to change the foreground color of the Awesomebar, thus the text you type in it. This property is also supported by Chrome themes.
    • toolbar_field_border: Use this property to change the color of border of textboxes other than the Awesomebar inside toolbars. (Since Firefox 59.)
    • toolbar_top_separator: Use this property to change the color of the top border of a toolbar that visually separates it from other toolbars above it. (Since Firefox 58.)
    • toolbar_bottom_separator: Use this property to change the color of the bottom border of a toolbar, that visually separates it from other toolbars below it. (Since Firefox 58.)
    • toolbar_vertical_separator: The color of the separator next to the application menu icon. (Since Firefox 58.)
    • toolbar_field_separator: The color of separators inside the URL bar.
  • properties
    • additional_backgrounds_alignment: Paired with the ‘additional_backgrounds’ property, you can specify the preferred position for each of the images you specified, in a JavaScript Array.
    • additional_backgrounds_tiling: Paired with the ‘additional_backgrounds’ property, you can specify the preferred tiling mode for each of the images you specified, in a JavaScript Array.
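To tie the list together, here is a sketch of a single theme.update() call combining several of the properties above; the image paths and colors are placeholder values.

```javascript
// Placeholder images and colors; requires the "theme" permission.
browser.theme.update({
  images: {
    headerURL: "images/header.png",
    additional_backgrounds: ["images/left.png", "images/right.png"]
  },
  colors: {
    accentcolor: "#1d1133",
    textcolor: "#ffffff",
    toolbar: "#2b2052",
    toolbar_text: "#f9f9fa",
    toolbar_field: "#ffffff",
    toolbar_field_text: "#0c0c0d"
  },
  properties: {
    additional_backgrounds_alignment: ["left top", "right top"],
    additional_backgrounds_tiling: ["no-repeat", "no-repeat"]
  }
});
```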

You can find a fantastic primer on browser themes and plenty of examples on MDN.

I would like to highlight the VivaldiFox extension in particular, created by Tim Nguyen; he not only worked on this extension but also stepped up to implement some of the missing properties that he needed! Also read his excellent Mozilla Hacks article.

Do you want a more playful introduction to all this tech talk and wish you could fiddle with these properties interactively? Your wish may be granted by our very own John Gruen.

What you can use in the future

Static themes, which only contain a manifest and image resources, will be supported on addons.mozilla.org (AMO) early this year. These will replace Lightweight Themes in a backward- and forward-compatible way. Until that time comes, you will be able to experiment locally using about:debugging and by submitting dynamic themes that use the JavaScript API as explained above.

At the time of writing, we’ve implemented most of the properties that Chrome supports to theme the browser. Going beyond that, we’re looking to add support for about:home custom themes and possibly for other in-content pages as well.

A group of students from Michigan State University (MSU) will be working on adding the remainder of Chrome properties and more.

One of the most exciting things to come in the near future is the ‘Theme Experiments’ API: on the Nightly channel, you can write an extension that is able to style a part of the browser UI that we haven’t implemented yet, using CSS selectors to make up your properties for it. This way it’s possible to propose new manifest properties to be added to the current set and have them become a part of Firefox, so every theme author can use it and every user may enjoy it!

Our hope is that with this in place the Theming API will evolve continuously to adapt to your needs, because we know that a good API is never ‘done’.

Get involved

If you want to help us out – yes please! – the easiest thing to start with is to file bug reports for the things you think are missing today, blocking this bug:

Don’t worry too much about duplicate bug reports; we’re actively pruning the list and would rather have too many reports than an incomplete API.

If you want to implement the next property or API method yourself, please join us on IRC in the #teamaddons channel or send a message to the dev-addons mailing list!

The post Theme API Update appeared first on Mozilla Add-ons Blog.