Mozilla Web Development: Beer and Tell – August 2016

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Osmose: PyJEXL

First up was Osmose (that’s me!), who shared PyJEXL, an implementation of JEXL in Python. JEXL is an expression language based on JavaScript that computes the value of an expression when applied to a given context. For example, the expression 2 + foo, when applied to a context where foo is 7, evaluates to 9. JEXL also allows for defining custom functions and operators to be used in expressions.

JEXL is implemented as a JavaScript library; PyJEXL allows you to parse and evaluate JEXL expressions in Python, as well as analyze and validate expressions.
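To make the evaluation model concrete, here’s a toy, self-contained sketch of evaluating an expression against a context, in the spirit of the 2 + foo example above. To be clear, this is not PyJEXL’s actual API, and it parses Python’s own expression grammar rather than JEXL’s; it only illustrates the idea of resolving names from a context during evaluation.

```python
import ast
import operator

# Toy evaluator: computes the value of a small arithmetic expression,
# resolving free names (like "foo") from a context dictionary.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expression, context):
    """Evaluate a small arithmetic expression against a context."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return context[node.id]  # names come from the context
        raise ValueError("unsupported syntax: %r" % node)
    return walk(ast.parse(expression, mode="eval"))

evaluate("2 + foo", {"foo": 7})  # → 9
```

The real library additionally parses JEXL’s own grammar and supports custom functions and operators, which a sketch like this omits.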


Next up was pmac, who shared the website for “The Really Terrible Orchestra of the Triangle”. It’s a static site for pmac’s local orchestra built using Lektor, an easy-to-use static site generator. One of the more interesting features pmac highlighted was Lektor’s data model. He showed a Lektor plugin he wrote that adds support for CSV fields as part of the data model, and used it to describe the orchestra’s rehearsal schedule.

The site is deployed as a Docker container on a Dokku instance. Dokku replicates the Heroku style of deployments by accepting Git pushes to trigger deploys. Dokku also has a great Let’s Encrypt plugin for setting up and renewing HTTPS certificates for your apps.

groovecoder: Scary/Tracky JS

Next was groovecoder, who previewed a lightning talk he’s been working on called “Scary JavaScript (and other Tech) that Tracks You Online”. The talk gives an overview of various methods that sites can use to track what sites you’ve visited without your consent. Some examples given include:

  • Reading the color of a link to a site to see if it is the “visited link” color (this has been fixed in most browsers).
  • Using requestAnimationFrame to time how long it took the browser to draw a link; visited links will take longer as the browser has to change their color and then remove it to avoid the previous vulnerability.
  • Embedding a resource from another website as a video and timing how long the browser takes to attempt, and fail, to parse it; a short time indicates the resource was served from the cache and you’ve seen it before.
  • Cookie Syncing

groovecoder plans to give the talk at a few local conferences, leading up to the Hawaii All Hands where he will give the talk to Mozilla.

rdalal: Therapist

rdalal was next with Therapist, a tool to help create robust, easily-configurable pre-commit hooks for Git. It is configured via a YAML file at your project root, and can run any command you want to run before committing code, such as code linting or build commands. Therapist is also able to detect which files were changed by the commit and only run commands on those to help save time.
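As a sketch of what such a configuration might look like, here is a hypothetical `.therapist.yml`. The action names and field names (`run`, `include`) are illustrative assumptions, not confirmed from Therapist’s documentation, so check the project’s README for the real schema:

```yaml
# Hypothetical pre-commit configuration; field names here are
# illustrative assumptions, not Therapist's documented schema.
actions:
  lint:
    run: flake8 {files}   # run the linter only on files in the commit
    include: "*.py"
  tests:
    run: python -m pytest
```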

gregtatum: River – Paths Over Time

Last up was gregtatum, who shared River – Paths Over Time. Inspired by the sight of rivers from above during the flight back from the London All Hands, River is a simulation of rivers flowing and changing over time. The animation is drawn on a 2D canvas using rectangles that are drawn and then slowly faded over time; this slow fade creates the illusion of “drips” seeping into the ground as the rivers shift position.

The source code for the simulation is available on GitHub, and exposes several parameters that can be tweaked to change the behavior of the rivers.

If you’re interested in attending the next Beer and Tell, sign up for the mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

hacks.mozilla.org: View Source Conference Berlin 2016

An overview

View Source is an intimate, single-track conference for web developers, now in its second year.

View Source 2016 takes place in Berlin, Germany, September 12-14, beginning with Ignite lightning talks on Monday evening, followed by two full days of great presenters, curated conversations, and sociable evenings. Tickets are still on sale.

Here’s a quick look at our lineup of main stage speakers — 16 great reasons to go! Discounted tickets are still available if you register now. (Note: this link applies the MOZHACKS discount. Bring a friend!). We’d love to meet you there.


View Source Conference at the Gerding Theater in Portland, OR 2015. (© Photo by Jakub Mosur Photography)

Sixteen speakers, and much, much more

Belén Albeza – engineer and game developer on the Mozilla Developer Relations Team. “Coding like a girl since 1996.”

Rachel Andrew – web developer, speaker and author. Co-founder of the really little CMS Perch.

Hadley Beeman – Open data, open standards & technology policy. “Mission: Using open data, open standards and online collaboration to improve government and daily life.”

  • View Source keynote: State of the Web

View Source Conference was the inaugural Mozilla-hosted web developer conference, held at the Gerding Theater in Portland, OR from November 2–4, 2015. (© Photo by Jakub Mosur Photography)

Myles Borins – “musician, artist, developer and inventor. He works for IBM spending most of his time contributing to the node.js ecosystem.”

Ola Gasidlo – “JavaScript && daughter driven development. Lead developer.”

Dominique Hazael-Massieux – “W3C Staff, working on next generation of Web technologies (incl JS APIs and WebRTC), with specific mobile focus.”

Helen Holmes – coder, author, and all round client-side wonk. “@firefoxdevtools @mozilla▫️ design, type, IoT, feminist, swift, javascript, español, white ally”

Jeremy Keith – “An Irish web developer working with @Clearleft curating @dConstruct, and more.”

Robert Nyman – “Global Lead for Developer Feedback & Communities, Web Platform, at Google. Helps developers in creating great things!”

  • View Source keynote: The Future of the Web – Progressive Web Apps and Beyond
  • Website: Robert Nyman

Tracy Osborn – “Author of @HelloWebApp and creator of @WeddingLovely. Designer-developer-entreprenerd who loves being outside and climbing mountains.”

Lena Reinhard – “Team Lead @TravisCI, Speaker, Photographer, Feminist Killjoy.”

Dan Shappir – “My job is to make 85 million websites… load and execute faster #perfmatters”

Jen Simmons – “Designer Advocate at Mozilla. Host & executive producer of The Web Ahead podcast. Excited about new CSS for web page layout & revolutionizing editorial design.”

Mike Taylor – “Web Compat at Mozilla. I mostly tweet about crappy code.”

Estelle Weyl – “0.10X Engineer. Snuggler of dogs. Trainer of slugs. Never has an opinion. Always has many.”

Chris Wilson – “World Wide Web Shaman. Freethinker.”

It’s a brilliant lineup.

And that’s just a rundown of the main stage speakers for Tuesday and Wednesday. The View Source experience begins Monday evening with a collection of Ignite talks, and the schedule continues with workshops, demos, discussion areas, and evening social events through Wednesday. Join us!



If you can’t be in Berlin next month…

Rest assured that all the View Source speaker talks will be recorded and made available after the event. We will let you know where to find them as they are released. Want to have a look at last year’s conference talks? Check out the View Source 2015 channel on Air Mozilla, Mozilla’s video platform. Got questions, comments, concerns? Please tweet to @viewsourceconf and we will respond.

Air Mozilla: Outreachy Participant Presentations Summer 2016

Outreachy program participants from the summer 2016 cohort present their contributions and learnings. Mozilla has hosted 15 Outreachy participants this summer, working on technical projects...

Web Application Security: Mitigating MIME Confusion Attacks in Firefox

Scanning the content of a file allows web browsers to detect its format regardless of the Content-Type specified by the web server. For example, if Firefox requests a script from a web server and that server sends it with a Content-Type of “image/jpg”, Firefox will detect the actual format and execute the script anyway. This technique, colloquially known as “MIME sniffing”, compensates for incorrect metadata, or even its complete absence, that browsers need to interpret the contents of a page resource. Firefox uses contextual clues (the HTML element that triggered the fetch) and also inspects the initial bytes of media type loads to determine the correct content type. While MIME sniffing improves the web experience for the majority of users, it also opens up an attack vector known as a MIME confusion attack.

Consider a web application which allows users to upload image files but does not verify that the user actually uploaded a valid image; e.g., the web application just checks for a valid file extension. This lack of verification allows an attacker to craft and upload an image which contains scripting content. The browser then renders the content as HTML, opening the possibility of a Cross-Site Scripting (XSS) attack. Even worse, some files can be polyglots, meaning their content satisfies two content types at once. For example, a GIF can be crafted to be both a valid image and valid JavaScript, and the correct interpretation of the file depends solely on the context.

Starting with Firefox 50, Firefox will reject stylesheets, images, and scripts whose MIME type does not match the context in which the file is loaded, provided the server sends the response header “X-Content-Type-Options: nosniff” (view specification). More precisely, if the Content-Type of a file does not match the context (see the detailed list of accepted Content-Types for each format below), Firefox will block the file, preventing such MIME confusion attacks, and will display the following message in the console:

The resource from “” was blocked due to MIME type mismatch (X-Content-Type-Options: nosniff).

Valid Content-Types for Stylesheets:
– “text/css”

Valid Content-Types for Images:
– have to start with “image/”

Valid Content-Types for Scripts:
– “application/javascript”
– “application/x-javascript”
– “application/ecmascript”
– “application/json”
– “text/ecmascript”
– “text/javascript”
– “text/json”
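The blocking rule above can be sketched as a small function. This is an illustrative model of the behavior described in this post, not Firefox’s actual implementation:

```python
# Illustrative model of the nosniff blocking rule described above
# (not Firefox's actual implementation).

VALID_SCRIPT_TYPES = {
    "application/javascript", "application/x-javascript",
    "application/ecmascript", "application/json",
    "text/ecmascript", "text/javascript", "text/json",
}

def should_block(context, content_type):
    """Return True if a response with this Content-Type should be blocked
    when loaded in the given context and the server sent
    'X-Content-Type-Options: nosniff'."""
    # Strip any parameters (e.g. "; charset=utf-8") and normalize case.
    mime = content_type.split(";")[0].strip().lower()
    if context == "stylesheet":
        return mime != "text/css"
    if context == "image":
        return not mime.startswith("image/")
    if context == "script":
        return mime not in VALID_SCRIPT_TYPES
    return False  # other load contexts are not covered by this sketch

should_block("script", "image/jpg")  # True: blocked, as in the example above
```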

SUMO Blog: What’s Up with SUMO – 25th August

Hello, SUMO Nation!

Another hot week behind us, and you kept us going – thank you for that! Here is a portion of the latest and greatest news from the world of SUMO – your world :-)

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 31st of August!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



Support Forum

  • All quiet, keep up the awesome work, people :-)

Knowledge Base & L10n


  • for Android
    • Version 49 will offer offline caching for some web pages. Take a bit of the net with you outside of the network’s range!
  • for iOS
    • Nothing new to report about iOS for now… Stay tuned for news in the future!

… and we’re done! We hope you have a great week(end) and that you come back soon to keep rocking the helpful web with us!

P.S. Just in case you missed it, here’s a great read about the way we want you to make Mozilla look better in the future.

Calendar: GSoC 2016: Where Things Stand

The clock has run out on Google Summer of Code 2016.  In this post I’ll summarize the feedback we received on the new UI design and the work I’ve been doing since my last post.

Feedback on the New UI Design

A number of people shared their feedback on the new UI design by posting comments on the previous blog post.  The response was generally positive.  Here’s a brief summary:

  • One commenter advocated for keeping the current date/time picker design, while another just wanted to be sure to keep quick and easy text entry.
  • A question about how attendees’ availability would be shown (same as it is currently).
  • A request to consider following Google Calendar’s reminders UI.
  • A question about preserving the vertical scroll position across different tabs (this should not be a problem).
  • A concern about how the design would scale for very large numbers (say hundreds) of attendees, categories, reminders, etc.  (See my reply.)

Thanks to everyone who took the time to share their thoughts.  It is helpful to hear different views and get user input.  If you have not weighed in yet, feel free to do so, as more feedback is always welcome.  See the previous blog post for more details.

Coding the Summer Away

A lot has happened over the last couple of months.  The big news is that I finished porting the current UI from the window dialog to a tab.  Here’s a screenshot of this XUL-based implementation of the UI in a tab:


Getting this working in a really polished way took more time than I anticipated, largely because the code had to be refactored so that the majority of the UI lives inside an iframe.  This entailed using asynchronous message passing for communication between the iframe’s contents and its outer parent context (e.g. toolbars, menus, statusbar, etc.), whether that context is a tab or a dialog window.  While this is not a visible change, it was necessary to prepare the way for the new HTML-based design, where an HTML file will be loaded in the iframe instead of a XUL file.

Along with the iframe refactoring, there are also just a lot of details that go into providing an ideal user experience, all the little things we tend to take for granted when using software.  Here’s a list of some of these things that I worked on over the last months for the XUL implementation:

  • when switching tabs, update the toolbar and statusbar to reflect the current tab
  • persist open tabs across application restarts (which requires serializing the tab state)
  • ask the user about saving changes before closing a tab, before closing the application window, and before quitting the application
  • allow customizing toolbars with the new iframe setup
  • provide a default window dialog height and width with the new iframe setup
  • display icons for tabs and related CSS/style work
  • get the relevant ‘Events and Tasks’ menu items to work for a task in a tab
  • allow hiding and showing the toolbar from the view > toolbars menu
  • if the user has customized their toolbar for the window dialog, migrate those settings to the tab toolbar on upgrade
  • fix existing mozmill tests so they work with the new iframe setup
  • test for regressions in SeaMonkey

In the next two posts I’ll describe how to try out this new feature in a development version of Thunderbird, discuss the HTML implementation of the new UI design, and share some thoughts on using React for the HTML implementation.

— Paul Morris

Air Mozilla: Connected Devices Weekly Program Update, 25 Aug 2016

Weekly project updates from the Mozilla Connected Devices team.

Mozilla Add-ons Blog: WebExtensions in Firefox 50

Firefox 50 landed in Developer Edition this week, so we have another update on WebExtensions for you! Please use the WebExtensions API for any new add-on development, and consider porting your existing add-ons as soon as possible.

It’s also a great time to port because WebExtensions is compatible with multiprocess Firefox, which began rolling out in Firefox 48 to people without add-ons installed. When Firefox 49 reaches the release channel in September, we will begin testing multiprocess Firefox with add-ons. The goal is to turn it on for everyone in January 2017 with the release of Firefox 51.

If you need help porting to WebExtensions, please start with the compatibility checker, and check out these resources.

Since the last release, more than 79 bugs were closed on WebExtensions alone.

API Changes

In Firefox 50, a few more history APIs landed: the getVisits function, and two events – onVisited and onVisitRemoved.

Content scripts in WebExtensions now gain access to a few export helpers that existed in SDK add-ons: cloneInto, createObjectIn and exportFunction.

The webNavigation API has gained event filtering. This allows users of the webNavigation API to filter events based on some criteria. Details on the URL Filtering option are available here.

There’s been a change to debugging WebExtensions. If you go to about:debugging and click on debug you now get all the Firefox Developer Tools features that are available to you on a regular webpage.

Why is this significant? Besides providing more developer features, this will work across add-on reloads and allows the debugging of more parts of WebExtensions. More importantly, it means that we are now using the same debugger that the rest of the Firefox Dev Tools team is using. Reducing duplicated code is a good thing.

As mentioned in an earlier blog post, native messaging is now available. This allows you to communicate with other processes on the host’s operating system. It’s a commonly used API for password managers and security software, which need to communicate with external processes.


The documentation for WebExtensions has been updated with some amazing resources over the last few months, including the addition of a few new areas.

The documentation is hosted on MDN and updates or improvements to the documentation are always welcome.

There are now 17 example WebExtensions on GitHub. Recently added are history-deleter and cookie-bg-picker.

What’s coming

We are currently working on the proxy API. The intent is to ship a slightly different API than the one Chrome provides because we have access to better APIs in Firefox.

The ability to write WebExtensions APIs in an add-on has now landed in Firefox 51 through the implementation of WebExtensions Experiments. This means that you don’t need to build and compile all of Firefox in order to add in new APIs and get involved in WebExtensions. The policy for this functionality is currently under discussion and we’ll have more details soon.

There are also lots of other ways to get involved with WebExtensions, so please check them out!

Air Mozilla: Web QA Team Meeting, 25 Aug 2016

They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Air Mozilla: Reps weekly, 25 Aug 2016

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

hacks.mozilla.org: A Web for Everyone: Interviews with Web Practitioners — Rachel Andrew

A recent article on Mozilla Hacks, “Make the Web Work for Everyone,” explored challenges and opportunities in browser compatibility. In that post we urged developers to build cross-browser compatible web experiences in order to maximize exposure and market size; prevent interface bugs that drive users away forever; and demonstrate professional mastery.

Today we’re kicking off a series of interviews with web developers who have earned widespread admiration for their work on the web. Do these professionals at the top of their field take web compatibility as seriously as we do? Why or why not? And how do they go about achieving it?

We’ll start by talking with Rachel Andrew (@rachelandrew).

Rachel Andrew

Rachel is a founder and managing director of the company that builds Perch CMS. She has worked on the web since 1996; in that time she’s authored numerous books about CSS and HTML5. She is a frequent speaker at web development conferences (you can see her talking about CSS layouts at View Source 2016 in Berlin, September 12-14). She blogs at

So Rachel, what does cross-browser compatibility mean to you?

Doing my job properly! That’s it really.

How often do you have to think about cross-browser compatibility? Have you found ways to work that allow you to reduce the amount of time you think about it compared to when you were less experienced?

I think about it a lot less now than I used to. The main core things we want to do work cross-browser in a consistent way. We don’t have wildly buggy behaviour as we had in the past. New developers should go look at to see the stuff we used to have to deal with!

Where I do need to think about it is when I am using a new specification, something that isn’t fully implemented perhaps, or that hasn’t yet been implemented in all browsers. In that case I need to be sure that my use of that technique in supporting browsers does not cause a problem for people in a browser that hasn’t got that feature yet.

What motivates you to make the extra effort to build a cross-browser compatible site or product?

I’ve always worked from the assumption that the web is for everyone. I’m also old enough to understand how fast this stuff changes. As far as I can see the absolute best way to ensure that things work well for people now and a year down the line is to get them working in the biggest possible number of browsers and devices today.

Could anything convince you not to make that effort?

Not really, because it is so ingrained into how I work. I also know from experience that when someone says the site only needs to work in a certain browser, look a year down the line and that might change.

Also, in a world of evergreen browsers, working cross-browser can actually mean improving the future experience for the people using the browser with the biggest market share. Here’s an example: position: sticky, enabling sticky headers on tables and navigation has been in Firefox for a good while. It is currently behind a flag in Chrome. Over 50% of my users are in Chrome, however the sticky headers on a table are a nice enhancement so I might use position: sticky to add that little touch for Firefox users who have support.

When Chrome ship their support, suddenly all those users will get that nice enhancement. The site will get better for them without me having to ship any code. So being aware of what is shipping in different browsers means you can take advantage of new stuff and leave these enhancements in your site for other browsers to grow into.

We think of cross-browser work as dull grunt work, battling with old browsers and weird bugs. However, it can also be pretty fun and liberating, especially right now. There is new stuff shipping all the time; you miss out if you don’t take a look at who is shipping what.

Can you think of a particularly vexing or interesting compatibility bug you’ve encountered?

Not really, of course I’ve had them. No matter how much testing you do you can be sure that something subtle or not so subtle will happen at some point. What you find is that the more time you have invested into understanding why things work as they do, the more these issues just become a bit annoying rather than giant showstoppers. Actually practicing working cross-browser in non-stressful situations means that when something baffling does occur you have the ability to take a step back and debug it, figure out how that user could possibly be seeing what they are seeing and get it fixed.

Have you ever had to convince a client or boss that building a cross-browser compatible site was important? How’d you do it?

Even back in the days of the browser wars I never mentioned it, I just did it. It’s how we work.

Did you ever have a specific experience that caused you to take cross-browser compatibility more seriously with your next project?

Not really, I’ve been doing this for a long time! However I used to do a lot of troubleshooting of work done by other people. Rather than starting from a baseline of solid experience and enhancing from there, they would have all started with something that only worked in one browser and then tried to retrofit compatibility for the others. Fixing the mess often meant going right back to basics, figuring out the core experience that was needed and then building back up the desired result in those browsers that supported the full experience.

In your post, “Unfashionably Profitable,” you say, “Everything should be possible without JavaScript.” Do you think over-reliance on JavaScript has made the web less cross-browser compatible?

I think that people will often reach for JavaScript earlier in the process than they need to. Even if they are not building the whole thing as a JavaScript application there is a tendency to assume JavaScript is required for all sorts of things that can be achieved without it, or where a baseline experience can be developed without JavaScript to be enhanced later.

I’m not saying “don’t use JavaScript” but the fact is that as soon as you bring JavaScript into the mix you have a whole host of new potential compatibility problems. A development practice that brings these things in one at a time and encourages testing at each stage makes the whole process much much easier.

Are there parts of your process/toolchain/etc. that make it easy for you to incorporate or test for compatibility that you would recommend every web developer incorporate into their own?

I’m pretty old-school when it comes to tools, however I am a big fan of BrowserStack, mostly because I travel a lot and can’t fill that laptop with virtual machines or drag several devices around with me.

What would you tell a brand new developer graduating from a coding bootcamp about cross-browser compatibility?

Everything gets easier if you start by figuring out what the core thing the site or application or feature you are building needs to do. Build that, make sure it works in a few browsers. Then, and only then start to add the bells and whistles. Don’t get everything working in one browser and then a day before launch think, “hmm maybe I should test this in another browser”. That is when cross-browser becomes difficult, when you try and do the retro-fitting.

Start out right and you probably won’t have to think about it too much.

Tips from Rachel’s interview

  • Experiment with upcoming or partially-implemented browser features, but use them only to enhance basic functionality, not to deliver basic functionality.
  • Add new features one at a time and test their compatibility as you go. Don’t wait to test at the end.
  • If you don’t have access to all the machines and devices you need to test on, test in one of the online browser testing tools.

The Mozilla Blog: EU Copyright Law Undermines Innovation and Creativity on the Internet. Mozilla is Fighting for Reform

Mozilla has launched a petition — and will be releasing public education videos — to reform outdated copyright law in the EU


The internet is an unprecedented platform for innovation, opportunity and creativity. It’s where artists create; where coders and entrepreneurs build game-changing technology; where educators and researchers unlock progress; and where everyday people live their lives.

The internet brings new ideas to life every day, and helps make existing ideas better. As a result, we need laws that protect and enshrine the internet as an open, collaborative platform.

But in the EU, certain laws haven’t caught up with the internet. The current copyright legal framework is outdated. It stifles opportunity and prevents — and in many cases, legally prohibits — artists, coders and everyone else from creating and innovating online. This framework was enacted before the internet changed the way we live. As a result, these laws clash with life in the 21st century.


Here are just a few examples of outdated copyright law in the EU:

  • It’s illegal to share a picture of the Eiffel Tower light display at night. The display is copyrighted — and tourists don’t have the artists’ express permission.
  • In some parts of the EU, making a meme is technically unlawful. There is no EU-wide fair use exception.
  • In some parts of the EU, educators can’t screen films or share teaching materials in the classroom due to restrictive copyright law.

It’s time our laws caught up with our technology. Now is the time to make a difference: This fall, the European Commission plans to reform the EU copyright framework.

Mozilla is calling on the EU Commission to enact reform. And we’re rallying and educating citizens to do the same. Today, Mozilla is launching a campaign to bring copyright law into the 21st century. Citizens can read and sign our petition. When you add your name, you’re supporting three big reforms:

1. Update EU copyright law for the 21st century.

Copyright can be valuable in promoting education, research, and creativity — if it’s not out of date and excessively restrictive. The EU’s current copyright laws were passed in 2001, before most of us had smartphones. We need to update and harmonise the rules so we can tinker, create, share, and learn on the internet. Education, parody, panorama, remix and analysis shouldn’t be unlawful.

2. Build in openness and flexibility to foster innovation and creativity.

Technology advances at a rapid pace, and laws can’t keep up. That’s why our laws must be future-proof: designed so they remain relevant in 5, 10 or even 15 years. We need to allow new uses of copyrighted works in order to expand growth and innovation. We need to build into the law flexibility — through a User Generated Content (UGC) exception and a clause like an open norm, fair dealing, or fair use — to empower everyday people to shape and improve the internet.

3. Don’t break the internet.

A key part of what makes the internet awesome is the principle of innovation without permission — that anyone, anywhere, can create and reach an audience without anyone standing in the way. But that key principle is under threat. Some people are calling for licensing fees and restrictions on internet companies for basic things like creating hyperlinks or uploading content. Others are calling for new laws that would mandate monitoring and filtering online. These changes would establish gatekeepers and barriers to entry online, and would risk undermining the internet as a platform for economic growth and free expression.

At Mozilla, we’re committed to an exceptional internet. That means fighting for laws that make sense in the 21st century. Are you with us? Voice your support for modern copyright law in the EU and sign the petition today.

Mozilla Add-ons Blog: Add-ons Update – Week of 2016/08/24

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 1228 listed add-on submissions were reviewed:

  • 1224 (92%) were reviewed in fewer than 5 days.
  • 57 (4%) were reviewed between 5 and 10 days.
  • 46 (3%) were reviewed after more than 10 days.

There are 203 listed add-ons awaiting review.

You can read about the improvements we’ve made in the review queues here.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.


Compatibility

The compatibility blog post for Firefox 49 is up, and the bulk validation was run. The blog post for Firefox 50 should be published in the next week.

Going back to Firefox 48, there are a couple of changes that are worth keeping in mind: (1) release and beta builds no longer have a preference to deactivate signing enforcement, and (2) multiprocess Firefox is now enabled for users without add-ons, and add-ons will be gradually phased in, so make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.


Recognition

We would like to thank fiveNinePlusR for their recent contributions to the add-ons world. You can read more about their contributions on our recognition page.

Mozilla ServicesSending VAPID identified WebPush Notifications via Mozilla’s Push Service


The Web Push API provides the ability to deliver real time events (including data) from application servers (app servers) to their client-side counterparts (applications), without any interaction from the user. In other parts of our Push documentation we provide a general reference for the application API and a basic client usage tutorial. This document addresses the app server side portion in detail, including integrating VAPID and Push message encryption into your server effectively, and how to avoid common issues.

Note: Much of this document presumes that you’re familiar with programming and have done some light work in cryptography. Unfortunately, since this is new technology, there aren’t many libraries available that make sending messages painless and easy. As new libraries come out, we’ll add pointers to them, but for now, we’re going to spend time talking about how to do the encryption so that folks who need it, or who want to build those libraries, can understand enough to be productive.

Bear in mind that Push is not meant to replace richer messaging technologies like Google Cloud Messaging (GCM), Apple Push Notification service (APNs), or Microsoft’s Windows Notification System (WNS). Each has its benefits and costs, and it’s up to you as developers or architects to determine which system solves your particular set of problems. Push is simply a low cost, easy means to send data to your application.

Push Summary

The Push system looks like:
A diagram of the push process flow

Application — The user facing part of the program that interacts with the browser in order to request a Push Subscription, and receive Subscription Updates.
Application Server — The back-end service that generates Subscription Updates for delivery across the Push Server.
Push — The system responsible for delivery of events from the Application Server to the Application.
Push Server — The server that handles the events and delivers them to the correct Subscriber. Each browser vendor has their own Push Server to handle subscription management. For instance, Mozilla uses autopush.
Subscription — A user request for timely information about a given topic or interest, which involves the creation of an Endpoint to deliver Subscription Updates to. Sometimes also referred to as a “channel”.
Endpoint — A specific URL that can be used to send a Push Message to a specific Subscriber.
Subscriber — The Application that subscribes to Push in order to receive updates, or the user who instructs the Application to subscribe to Push, e.g. by clicking a “Subscribe” button.
Subscription Update — An event sent to Push that results in a Push Message being received from the Push Server.
Push Message — A message sent from the Application Server to the Application, via a Push Server. This message can contain a data payload.

The main parts that are important to Push from a server-side perspective are as follows — we’ll cover all of these points below in detail:

Identifying Yourself

Mozilla goes to great lengths to respect privacy, but sometimes, identifying your feed can be useful.

Mozilla offers the ability for you to identify your feed content, which is done using the Voluntary Application Server Identification (VAPID) for Web Push specification. This is a set of header values you pass with every subscription update. One value is a VAPID key that validates your VAPID claim, and the other is the VAPID claim itself — a set of metadata describing and defining the current subscription and where it has originated from.

VAPID is only useful between your servers and our push servers. If we notice something unusual about your feed, VAPID gives us a way to contact you so that things can go back to running smoothly. In the future, VAPID may also offer additional benefits like reports about your feeds, automated debugging help, or other features.

In short, VAPID is a bit of JSON that contains an email address to contact you, an optional URL that’s meaningful about the subscription, and a timestamp. I’ll talk about the timestamp later, but really, think of VAPID as the information you’d want us to have to help you figure out what went wrong.

It may be that you only send one feed, and just need a way for us to tell you if there’s a problem. It may be that you have several feeds you’re handling for customers of your own, and it’d be useful to know if maybe there’s a problem with one of them.

Generating your VAPID key

The easiest way to do this is to use an existing library for your language. VAPID is a new specification, so not all languages may have existing libraries; we’ve collected several, and are very happy to learn about more.

Fortunately, the method to generate a key is fairly easy, so you could implement your own library without too much trouble.

The first requirement is an Elliptic Curve Diffie Hellman (ECDH) library capable of working with Prime 256v1 (also known as “p256” or similar) keys. For many systems, the OpenSSL package provides this feature. OpenSSL is available for many systems. You should check that your version supports ECDH and Prime 256v1. If not, you may need to download, compile and link the library yourself.

At this point you should generate an EC key for your VAPID identification. Please remember that you should NEVER reuse the VAPID key as the data encryption key you’ll need later. To generate an ECDH key using openssl, enter the following command in your Terminal:

openssl ecparam -name prime256v1 -genkey -noout -out vapid_private.pem

This will create an EC private key and write it into vapid_private.pem. It is important to safeguard this private key. While you can always generate a replacement key that will work, Push (or any other service that uses VAPID) will recognize the different key as a completely different user.

You’ll need to send the public key as one of the headers. This can be extracted from the private key with the following terminal command:

openssl ec -in vapid_private.pem -pubout -out vapid_public.pem

Creating your VAPID claim

VAPID uses JWT to contain a set of information (or “claims”) that describe the sender of the data. JWTs (or JSON Web Tokens) are a pair of JSON objects, turned into base64 strings, and signed with the private ECDH key you just made. A JWT element contains three parts separated by “.”, and may look like:


  1. The first element is a “header” describing the JWT object. This JWT header is always the same — the static string {"typ":"JWT","alg":"ES256"} — which is URL safe base64 encoded to eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9. For VAPID, this string should always be the same value.
  2. The second element is a JSON dictionary containing a set of claims. For our example, we’ll use the following claims:
        "sub": "",
        "exp": "1463001340"

    The required claims are as follows:

    1. sub : The “Subscriber” — a mailto link for the administrative contact for this feed. It’s best if this email is not a personal email address, but rather a group email so that if a person leaves an organization, is unavailable for an extended period, or otherwise can’t respond, someone else on the list can. Mozilla will only use this if we notice a problem with your feed and need to contact you.
    2. exp : “Expires” — this is an integer that is the date and time that this VAPID header should remain valid until. It doesn’t reflect how long your VAPID signature key should be valid, just this specific update. Normally this value is fairly short, usually the current UTC time + no more than 24 hours. A long lived “VAPID” header does introduce a potential “replay” attack risk, since the VAPID headers could be reused for a different subscription update with potentially different content.


    Feel free to add additional items to your claims. This info really should be the sort of thing you want to get at 3AM when your server starts acting funny. For instance, you may run many AWS EC2 instances, and one might be acting up. It might be a good idea to include the AMI-ID of that instance (e.g. "aws_id":"i-5caba953"). You might be acting as a proxy for some other customer, so adding a customer ID could be handy. Just remember that you should respect privacy and should use an ID like “abcd-12345” rather than “Mr. Johnson’s Embarrassing Bodily Function Assistance Service”. Also remember to keep the data fairly short so that there aren’t problems with intermediate services rejecting it because the headers are too big.
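    As a sketch of how the exp value and an extended claim set might be built (the mailto address and the aws_id field below are illustrative values, not real ones):

    ```python
    import time

    # "exp" is an integer timestamp: the current UTC time plus at most 24 hours
    exp = str(int(time.time()) + 24 * 60 * 60)

    # A hypothetical claim set with one optional debugging field
    claims = {"sub": "mailto:push-admin@example.com",
              "exp": exp,
              "aws_id": "i-5caba953"}
    ```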

Once you’ve composed your claims, you need to convert them to a JSON formatted string, ideally with no padding space between elements [1], for example:


Then convert this string to a URL-safe base64-encoded string, with the padding ‘=’ removed. For example, if we were to use python:

   import base64
   import json

   # These are the claims
   claims = {"sub": "",
             "exp": "1463001340"}
   # convert the claims to compact JSON, then encode to URL-safe base64
   body = base64.urlsafe_b64encode(
       json.dumps(claims, separators=(",", ":")).encode()).strip(b"=")
   print(body)

would give us


This is the “body” of the JWT base string.

The header and the body are separated with a ‘.’ making the JWT base string.
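To make this concrete, here is a minimal Python sketch that builds the JWT base string from the static header and an example claim set (the mailto address is illustrative, and to_b64url is a hypothetical helper name):

```python
import base64
import json

def to_b64url(data):
    # URL-safe base64 with the trailing "=" padding removed
    return base64.urlsafe_b64encode(data).decode().rstrip("=")

header = to_b64url(b'{"typ":"JWT","alg":"ES256"}')
claims = {"sub": "mailto:admin@example.com", "exp": "1463001340"}
body = to_b64url(json.dumps(claims, separators=(",", ":")).encode())
jwt_base = header + "." + body
```

Signing jwt_base with your VAPID private key, then appending the URL safe base64 signature after another “.”, yields the complete JWT.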


  • The final element is the signature. This is an ECDSA signature of the JWT base string created using your VAPID private key. This signature is URL safe base64 encoded, “=” padding removed, and again joined to the base string with a ‘.’ delimiter.

    Generating the signature depends on your language and library, but is done by the ECDSA algorithm using your private key. If you’re interested in how it’s done in Python or JavaScript, you can look at the code in the libraries mentioned above.

    Since your private key will not match the one we’ve generated, the signature you see in the last part of the following example will be different.


  • Forming your headers

    The VAPID claim you assembled in the previous section needs to be sent along with your Subscription Update as an Authorization header Bearer token — the complete token should look like so:

    Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0.uyVNHws2F3k5jamdpsH2RTfhI3M3OncskHnTHnmdo0hr1ZHZFn3dOnA-42YTZ-u8_KYHOOQm8tUm-1qKi39ppA

    Note: the header should not contain line breaks. Those have been added here to aid readability.
    Important! The Authorization header ID will be changing soon from “Bearer” to “WebPush”. Some Push Servers may accept both, but if your request is rejected, you may want to try changing the tag. This, of course, is part of the fun of working with draft specifications.

    You’ll also need to send a Crypto-Key header along with your Subscription Update — this includes a p256ecdsa element that takes the VAPID public key as its value — formatted as a URL safe, base64 encoded DER formatted string of the raw public key.

    An example follows:
    Crypto-Key: p256ecdsa=BA5vkyMXVfaKuehJuecNh30-NiC7mT9gM97Op5d8LiAKzfIezLzCZMwrY7OypBBNwEnusGkdg9F84WqW1j5Ymjk

    Note: If you like, you can cheat here and use the content of “vapid_public.pem”. You’ll need to remove the “-----BEGIN PUBLIC KEY-----” and “-----END PUBLIC KEY-----” lines, remove the newline characters, and convert all “+” to “-” and “/” to “_”.
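    That conversion can be sketched in Python like so (pem_to_crypto_key is a hypothetical helper name):

    ```python
    def pem_to_crypto_key(pem):
        # Drop the BEGIN/END PUBLIC KEY lines, join the remaining base64,
        # and make it URL safe by swapping "+" for "-" and "/" for "_".
        lines = [line for line in pem.strip().splitlines()
                 if "PUBLIC KEY" not in line]
        return "".join(lines).replace("+", "-").replace("/", "_")
    ```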

    You can validate your work against the VAPID test page — this will tell you if your headers are properly encoded. In addition, the VAPID repo contains libraries for JavaScript and Python to handle this process for you.

    We’re happy to consider PRs to add libraries covering additional languages.

    Receiving Subscription Information

    Your Application will receive an endpoint and key retrieval functions that contain all the info you’ll need to successfully send a Push message. See
    Using the Push API for details about this. Your application should send this information, along with whatever additional information is required, securely to the Application Server as a JSON object.

    Such a post back to the Application Server might look like this:

        {
            "customerid": "123456",
            "subscription": {
                "endpoint": "…",
                "keys": {
                    "p256dh": "BOrnIslXrUow2VAzKCUAE4sIbK00daEZCswOcf8m3TF8V…",
                    "auth": "k8JV6sjdbhAi1n3_LDBLvA"
                }
            },
            "favoritedrink": "warm milk"
        }

    In this example, the “subscription” field contains the elements returned from a fulfilled PushSubscription. The other elements represent additional data you may wish to exchange.

    How you decide to exchange this information is completely up to your organization. You are strongly advised to protect this information. If an unauthorized party gained this information, they could send messages pretending to be you. This can be made more difficult by using a “Restricted Subscription”, where your application passes along your VAPID public key as part of the subscription request. A restricted subscription can only be used if the subscription carries your VAPID information signed with the corresponding VAPID private key. (See the previous section for how to generate VAPID signatures.)

    Subscription information is subject to change and should be considered “opaque”. You should consider the data to be a “whole” value and associate it with your user. For instance, attempting to retain only a portion of the endpoint URL may lead to future problems if the endpoint URL structure changes. Key data is also subject to change. The app may receive an update that changes the endpoint URL or key data. This update will need to be reflected back to your server, and your server should use the new subscription information as soon as possible.

    Sending a Subscription Update Without Data

    Subscription Updates come in two varieties: data free and data bearing. We’ll look at these separately, as they have differing requirements.

    Data Free Subscription Updates

    Data free updates require no additional App Server processing, however your Application will have to do additional work in order to act on them. Your application will simply get a “push” message containing no further information, and it may have to connect back to your server to find out what the update is. It is useful to think of Data Free updates like a doorbell — “Something wants your attention.”

    To send a Data Free Subscription, you POST to the subscription endpoint. In the following example, we’ll include the VAPID header information. Values have been truncated for presentation readability.

    curl -v -X POST\
      -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJhdWQiOiJodHR…"\
      -H "Crypto-Key: p256ecdsa=BA5vkyMXVfaKuehJuecNh30-NiC7mT9gM97Op5d8LiAKzfIezLzC…"\
      -H "TTL: 0"\
      "<subscription endpoint URL>"
    This should result in an Application getting a “push” message, with no data associated with it.

    To see how to store the Push Subscription data and send a Push Message using a simple server, see our Using the Push API article.

    Data Bearing Subscription Updates

    Data Bearing updates are a lot more interesting, but do require significantly more work. This is because we treat our own servers as potentially hostile and require “end-to-end” encryption of the data. The message you send across the Mozilla Push Server cannot be read. To ensure privacy, however, your application will not receive data that cannot be decoded by the User Agent.

    There are libraries available for several languages, and we’re happy to accept or link to more.

    The encryption method used by Push is Elliptic Curve Diffie Hellman (ECDH) encryption, which uses a key derived from two pairs of EC keys. If you’re not familiar with encryption, or the brain twisting math that can be involved in this sort of thing, it may be best to wait for an encryption library to become available. Encryption is often complicated to understand, but it can be interesting to see how things work.

    If you’re familiar with Python, you may want to just read the code for the http_ece package. If you’d rather read the original specification, that is available. While the code is not commented, it’s reasonably simple to follow.

    Data encryption summary

    • Octet — An 8 bit byte of data (between \x00 and \xFF)
    • Subscription data — The subscription data to encode and deliver to the Application.
    • Endpoint — the Push service endpoint URL, received as part of the Subscription data.
    • Receiver key — The p256dh key received as part of the Subscription data.
    • Auth key — The auth key received as part of the Subscription data.
    • Payload — The data to encrypt, which can be any streamable content between 2 and 4096 octets.
    • Salt — 16 octet array of random octets, unique per subscription update.
    • Sender key — A new ECDH key pair, unique per subscription update.

    Web Push limits the size of the data you can send to between 2 and 4096 octets. You can send larger data as multiple segments, however that can be very complicated. It’s better to keep segments smaller. Data, whatever the original content may be, is also turned into octets for processing.

    Each subscription update requires two unique items — a salt and a sender key. The salt is a 16 octet array of random octets. The sender key is a ECDH key pair generated for this subscription update. It’s important that neither the salt nor sender key be reused for future encrypted data payloads. This does mean that each Push message needs to be uniquely encrypted.

    The receiver key is the public key from the client’s ECDH pair. It is base64, URL safe encoded and will need to be converted back into an octet array before it can be used. The auth key is a shared “nonce”, or bit of random data like the salt.
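    Restoring that stripped padding before decoding can be sketched in Python (b64url_decode is a hypothetical helper name, applied here to the example auth key from the subscription object above):

    ```python
    import base64

    def b64url_decode(value):
        # Add back the "=" padding that URL-safe base64 values arrive without
        return base64.urlsafe_b64decode(value + "=" * (-len(value) % 4))

    # The 16 octet auth key from the earlier subscription example
    auth = b64url_decode("k8JV6sjdbhAi1n3_LDBLvA")
    ```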

    Emoji Based Diagram
    Subscription Data:
    • 🎯 Endpoint
    • 🔒 Receiver key (‘p256dh’)
    • 💩 Auth key (‘auth’)

    Per Update Info:
    • 🔑 Private Server Key
    • 🗝 Public Server Key
    • 📎 Salt
    • 🔐 Private Sender Key
    • ✒️ Public Sender Key

    Update:
    • 📄 Payload

    🏭 Build using / derive
    🔓 message encryption key
    🎲 message nonce

    Encryption uses a fabricated key and nonce. We’ll discuss how the actual encryption is done later, but for now, let’s just create these items.

    Creating the Encryption Key and Nonce

    The encryption makes heavy use of HKDF (the HMAC-based Key Derivation Function) with a SHA-256 hash.

    Creating the secret

    The first HKDF function you’ll need will generate the common secret (🙊), a 32 octet value derived using the auth key (💩) as the salt and the string “Content-Encoding: auth\x00” as the info.
    So, in emoji =>
    🔐 = 🔑(🔒);
    🙊 = HKDF(💩, “Content-Encoding: auth\x00”).🏭(🔐)

    An example function in Python could look like so:

    # HKDF here is assumed to be the cryptography package's implementation
    import base64
    import pyelliptic
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # receiver_key must have "=" padding added back before it can be decoded.
    # How that's done is an exercise for the reader.
    # 🔒
    receiver_key = subscription['keys']['p256dh']
    # 🔑
    server_key = pyelliptic.ECC(curve="prime256v1")
    # 🔐
    sender_key = server_key.get_ecdh_key(base64.urlsafe_b64decode(receiver_key))
    # 🙊
    secret = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=auth,
        info=b"Content-Encoding: auth\x00",
        backend=default_backend()).derive(sender_key)

    The encryption key and encryption nonce

    The next items you’ll need to create are the encryption key and encryption nonce.

    An important component of these is the context, which is:

    • A string comprised of ‘P-256’
    • Followed by a NULL (“\x00”)
    • Followed by a network ordered, two octet integer of the length of the decoded receiver key
    • Followed by the decoded receiver key
    • Followed by a network ordered, two octet integer of the length of the public half of the sender key
    • Followed by the public half of the sender key.

    As an example, if we have an example, decoded, completely invalid public receiver key of ‘RECEIVER’ and a sample sender key public key example value of ‘sender’, then the context would look like:

    # ⚓ (because it's the base and because there are only so many emoji)
    root = "P-256\x00\x00\x08RECEIVER\x00\x06sender"

    The “\x00\x08” is the length of the bogus “RECEIVER” key, likewise the “\x00\x06” is the length of the stand-in “sender” key. For real, 32 octet keys, these values will most likely be “\x00\x20” (32), but it’s always a good idea to measure the actual key rather than use a static value.
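    The context construction above can be sketched in Python (make_context is a hypothetical helper name):

    ```python
    import struct

    def make_context(receiver_key, sender_pub_key):
        # "P-256", a NULL byte, then each key prefixed with its two octet,
        # network ordered (big-endian) length
        return (b"P-256\x00"
                + struct.pack("!H", len(receiver_key)) + receiver_key
                + struct.pack("!H", len(sender_pub_key)) + sender_pub_key)
    ```

    With the bogus stand-in keys from the text, make_context(b"RECEIVER", b"sender") reproduces the root value shown above.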

    The context string is used as the base for two more HKDF derived values, one for the encryption key, and one for the encryption nonce. In python, these functions could look like so:

    In emoji:
    🔓 = HKDF(📎 , “Content-Encoding: aesgcm\x00” + ⚓).🏭(🙊)
    🎲 = HKDF(📎 , “Content-Encoding: nonce\x00” + ⚓).🏭(🙊)

    # 🔓
    encryption_key = HKDF(
        algorithm=hashes.SHA256(), length=16, salt=salt,
        info=b"Content-Encoding: aesgcm\x00" + context,
        backend=default_backend()).derive(secret)
    # 🎲
    encryption_nonce = HKDF(
        algorithm=hashes.SHA256(), length=12, salt=salt,
        info=b"Content-Encoding: nonce\x00" + context,
        backend=default_backend()).derive(secret)

    Note that the encryption_key is 16 octets and the encryption_nonce is 12 octets. Also note the null (\x00) character between the “Content-Encoding” string and the context.

    At this point, you can start working your way through encrypting the data 📄, using your secret 🙊, encryption_key 🔓, and encryption_nonce 🎲.

    Encrypting the Data

    The function that does the encryption (encryptor) uses the encryption_key 🔓 to initialize the Advanced Encryption Standard (AES) function, and derives the Galois/Counter Mode (GCM) Initialization Vector (IV) from the encryption_nonce 🎲, plus the data chunk counter. (If you didn’t follow that, don’t worry. There’s a code snippet below that shows how to do it in Python.) For simplicity, we’ll presume your data is less than 4096 octets (4K bytes) and can fit within one chunk.
    The IV takes the encryption_nonce and XOR’s the chunk counter against the final 8 octets.

    def generate_iv(nonce, counter):
        (mask,) = struct.unpack("!Q", nonce[4:])  # get the final 8 octets of the nonce
        iv = nonce[:4] + struct.pack("!Q", counter ^ mask)  # the first 4 octets of nonce,
                                                         # plus the XOR'd counter
        return iv

    The encryptor prefixes a “\x00\x00” to the data chunk, processes it completely, and then concatenates its encryption tag to the end of the completed chunk. The encryption tag is a fixed-length value produced by the encryptor. See your language’s documentation for AES encryption for further information.

    # Cipher, algorithms and modes come from cryptography.hazmat.primitives.ciphers
    def encrypt_chunk(chunk, counter, encryption_nonce, encryption_key):
        encryptor = Cipher(algorithms.AES(encryption_key),
                           modes.GCM(generate_iv(encryption_nonce, counter)),
                           backend=default_backend()).encryptor()
        return (encryptor.update(b"\x00\x00" + chunk) +
                encryptor.finalize() +
                encryptor.tag)

    def encrypt(payload, encryption_nonce, encryption_key):
        result = b""
        counter = 0
        for i in list(range(0, len(payload) + 2, 4096)):
            result += encrypt_chunk(
                payload[i:i + 4096], counter, encryption_nonce, encryption_key)
            counter += 1
        return result

    Sending the Data

    Encrypted payloads need several headers in order to be accepted.

    The Crypto-Key header is a composite field, meaning that different things can store data here. There are some rules about how things should be stored, but we can simplify and just separate each item with a semicolon “;”. In our case, we’re going to store three things, a “keyid”, “p256ecdsa” and “dh”.

    “keyid” is the string “p256dh”. Normally, “keyid” is used to link keys in the Crypto-Key header with the Encryption header. It’s not strictly required, but some push servers may expect it and reject subscription updates that do not include it. The value of “keyid” isn’t important, but it must match between the headers. Again, there are complex rules about these that we’re safely ignoring, so if you want or need to do something complex, you may have to dig into the Encrypted Content Encoding specification a bit.

    “p256ecdsa” is the public key used to sign the VAPID header (See Forming your Headers). If you don’t want to include the optional VAPID header, you can skip this.

    The “dh” value is the public half of the sender key we used to encrypt the data. It’s the same value contained in the context string, so we’ll use the same fake, stand-in value of “sender”, which has been encoded as a base64, URL safe value. For our example, the base64 encoded version of the string ‘sender’ is ‘c2VuZGVy’.

    Crypto-Key: p256ecdsa=BA5v…;dh=c2VuZGVy;keyid=p256dh

    The Encryption Header contains the salt value we used for encryption, which is a random 16 byte array converted into a base64, URL safe value.

    Encryption: keyid=p256dh;salt=cm5kIDE2IGJ5dGUgc2FsdA
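    Assembling both headers in Python might look like this sketch (the p256ecdsa value is truncated as above, b"sender" is the same stand-in sender public key, and b64url is a hypothetical helper name):

    ```python
    import base64
    import os

    def b64url(data):
        # URL-safe base64 with the "=" padding removed
        return base64.urlsafe_b64encode(data).decode().rstrip("=")

    salt = os.urandom(16)  # 16 random octets, unique per subscription update
    dh = b64url(b"sender")  # stand-in for the real public sender key

    crypto_key = "p256ecdsa=BA5v…;dh=" + dh + ";keyid=p256dh"
    encryption = "keyid=p256dh;salt=" + b64url(salt)
    ```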

    The TTL Header is the number of seconds the notification should stay in storage if the remote user agent isn’t actively connected. “0” (Zed/Zero) means that the notification is discarded immediately if the remote user agent is not connected; this is the default. This header must be specified, even if the value is “0”.

    TTL: 0

    Finally, the Content-Encoding Header specifies that this content is encoded to the aesgcm standard.

    Content-Encoding: aesgcm

    The encrypted data is set as the Body of the POST request to the endpoint contained in the subscription info. If you have requested that this be a restricted subscription and passed your VAPID public key as part of the request, you must include your VAPID information in the POST.

    As an example, in Python:

    headers = {
        'crypto-key': 'p256ecdsa=BA5v…;dh=c2VuZGVy;keyid=p256dh',
        'content-encoding': 'aesgcm',
        'encryption': 'keyid=p256dh;salt=cm5kIDE2IGJ5dGUgc2FsdA',
        'ttl': 0,
    }
    A successful POST will return a response of 201, however, if the User Agent cannot decrypt the message, your application will not get a “push” message. This is because the Push Server cannot decrypt the message so it has no idea if it is properly encoded. You can see if this is the case by:

    • Going to about:config in Firefox
    • Setting the dom.push.loglevel pref to debug
    • Opening the Browser Console (located under the Tools > Web Developer > Browser Console menu)

    When your message fails to decrypt, you’ll see a message similar to the following:
    The debugging console displaying “The service worker for scope encountered an error decrypting the push message”, with a message and where to look for more info

    You can use values displayed in the Web Push Data Encryption Page to audit the values you’re generating to see if they’re similar. You can also send messages to that test page and see if you get a proper notification pop-up, since all the key values are displayed for your use.

    You can find out what errors and error responses we return, and their meanings by consulting our server documentation.

    Subscription Updates

    Nothing (other than entropy) lasts forever. There may come a point where, for various reasons, you will need to update your user’s subscription endpoint. There are any number of reasons for this, but your code should be prepared to handle them.

    Your application’s service worker will get an onpushsubscriptionchange event. At this point, the previous endpoint for your user is now invalid and a new endpoint will need to be requested. Basically, you will need to re-invoke the method for requesting a subscription endpoint. The user should not be alerted of this, and a new endpoint will be returned to your app.

    Again, how your app identifies the customer, joins the new endpoint to the customer ID, and securely transmits this change request to your server is left as an exercise for the reader. It’s worth noting that the Push server may return an error of 410 with an errno of 103 when the push subscription expires or is otherwise made invalid. (If a push subscription has expired several months ago, the server may return a different errno value.)


    Push Data Encryption can be very challenging, but worthwhile. Stronger encryption means that it is more difficult for someone to impersonate you, or for your data to be read by unintended parties. Eventually, we hope that much of this pain will be buried in libraries that allow you to simply call a function, and as this specification is more widely adopted, it’s fair to expect multiple libraries to become available for every language.

    See also:

    1. WebPush Libraries: A set of libraries to help encrypt and send push messages.
    2. The VAPID libraries for Python or JavaScript can help you understand how to encode VAPID header data.


    1. Technically, you don’t need to strip the whitespace from JWS tokens. In some cases, JWS libraries take care of that for you. If you’re not using a JWS library, it’s still a very good idea to make headers and header lines as compact as possible. Not only does it save bandwidth, but some systems will reject overly lengthy header lines. For instance, the library that autopush uses limits header line length to 16,384 bytes. Minus things like the header, signature, base64 conversion and JSON overhead, you’ve got about 10K to work with. While that seems like a lot, it’s easy to run out if you do things like add lots of extra fields to your VAPID claim set.

    hacks.mozilla.orgA few HTML tips

    A while ago I wrote an article with some CSS tips, now it’s time to give some polish to our HTML! In this article I’ll share some tips and advice about HTML code. Some of this guidance will be best suited for beginners – how to properly build paragraphs, use headings, or improve forms, but we will also discuss SVG sprites for icons, a somewhat more advanced topic.



    Most of our writing is structured in paragraphs, and there is an HTML element for that: <p>. Do not use the line break tag <br> to separate blocks of text into pseudo-paragraphs, since line breaks are not meant for that.


    Cupcake ipsum dolor sit. Amet chupa chups chupa chups sesame snaps. Ice cream pie jelly
    beans muffin donut marzipan oat cake.
    <br><br>
    Gummi bears tart cotton candy icing. Muffin bear claw carrot cake jelly jujubes pudding
    chocolate cake cheesecake toffee.


    <p>Cupcake ipsum dolor sit. Amet chupa chups chupa chups sesame snaps. Ice cream
    pie jelly beans muffin donut marzipan oat cake.</p>
    <p>Gummi bears tart cotton candy icing. Muffin bear claw carrot cake jelly jujubes
    pudding chocolate cake cheesecake toffee.</p>

    A legit use for line breaks would be, for instance, to break verses of a poem or song:

    <p>So close, no matter how far<br>
    Couldn’t be much more from the heart<br>
    Forever trusting who we are<br>
    And nothing else matters</p>


    Heading tags, from <h1> to <h6>, have an implicit rank assigned to them, from 1 (most important) to 6 (least important).

    To handle semantics properly, pick your heading ranks in sequential order, not because of the size the browser will use to render the heading. If you want a different size, you can – and should! – use CSS for that, and pick a suitable rank instead.


        <h1>Monkey Island</h1>
        <h4>Look behind you! A three-headed monkey!</h4>
        <!-- ... -->


        <h1>Monkey Island</h1>
        <h2>Look behind you! A three-headed monkey!</h2>
        <!-- ... -->
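With CSS handling the presentation, a lower-rank heading can keep its semantic rank and still be rendered at whatever size the design calls for. A minimal sketch (the class name is illustrative):

```css
/* Keep the semantic <h2>, but render it smaller than the browser default */
h2.tagline {
  font-size: 1rem;
  font-weight: normal;
}
```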

    Another thing to take into account is how to create subheadings or tag lines to accompany headings. The W3C recommendation is to use regular text markup rather than a lower-rank heading.


        <h1>Star Wars VII</h1>
        <h2>The Force Awakens</h2>


        <h1>Star Wars VII</h1>
        <p>The Force Awakens</p>



    The placeholder attribute in <input> form elements will let you show an example value to the user that is automatically erased once the user types anything in the field. Placeholders are meant to show examples of formatting valid for a field.

    Unfortunately, in the wild there are a lot of placeholders acting as <label> elements, informing of what the field is instead of serving as an example of a valid input value. This practice is not accessible, and you should avoid it.


    <input type="email" placeholder="Your e-mail" name="mail">


        <label>Your e-mail:
        <input type="email" placeholder="user@example.com" name="mail"></label>

    Keyboards in mobile devices

    It is crucial to provide typing hints for people browsing from a mobile device, like a phone or a tablet. We can easily achieve this by picking the correct type for our <input> elements.

    For instance, type="number" will make a mobile phone display the numeric keypad instead of the regular alphanumeric keyboard. The same goes for type="email", type="tel", etc.


    <label>Phone number: <input type="text" name="mobile"></label>


    <label>Phone number: <input type="tel" name="mobile"></label>

    Here is a comparison: on the left, the keyboard that shows up when using type="text"; on the right, the keyboard for type="tel".

    keyboard comparison


    Say hi to SVG files! Not only can you use vector graphics in <img> tags like this:

    <img src="acolyte_cartoon.svg" alt="acolyte">

    You can also use SVG sprites to implement vector icons in your website, instead of using a Web Font – which is a hack, and might not yield perfect results. This is because browsers treat Web Font icons as text, and not as images. And there are other potential problems, like content/ad blockers disabling the download of Web Fonts. If you would like to learn more about this, watch this talk by Sarah Semark about why using SVG for icons is better than using a Web Font. You can also read more about this technique on CSS-Tricks.

    The idea of SVG sprites is very similar to CSS sprites. The implementation consists of merging all your SVG assets in a single image file. In the case of SVG, every asset is wrapped in a <symbol> tag, like this:

        <symbol id="social-twitter" viewBox="...">
            <!-- actual image data here -->
        </symbol>

    Then, the icon can be used in your HTML with an <svg> tag like this, pointing to the symbol ID inside the SVG file:

    <svg class="social-icon">
        <use xlink:href="icons.svg#social-twitter" />
    </svg>

    Does creating an SVG spritesheet seem tedious? Well, that’s why there are tools like gulp-svgstore to automate the process and generate a spritesheet from your individual asset files.
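To make the merging step concrete, here is a hand-rolled sketch of what such a tool does (the function name, the hard-coded viewBox, and the icon markup are all illustrative assumptions, not gulp-svgstore's API):

```javascript
// Sketch of a spritesheet generator: wrap each icon's markup in a <symbol>
// carrying its id, then concatenate everything into one hidden <svg>.
function buildSprite(icons) {
  const symbols = Object.entries(icons)
    .map(([id, markup]) => `<symbol id="${id}" viewBox="0 0 24 24">${markup}</symbol>`)
    .join('');
  return `<svg xmlns="http://www.w3.org/2000/svg" style="display:none">${symbols}</svg>`;
}
```

Each entry then becomes addressable as `sprite.svg#icon-id` from a `<use>` tag.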

    And remember, since we are using an <svg> tag instead of an <img> to include the picture, we can then use CSS to apply styles. So all the cool things you can do with Web Font icons can be done with these SVG icons as well!

    .social-icon {
        fill: #000;
        transition: all 0.2s;
    }
    .social-icon:hover {
        fill: #00f;
    }
    There are some CSS limitations though: when using SVG this way, with <use> linking to a <symbol>, the image gets injected in Shadow DOM and we lose some CSS capabilities. In this case, we can’t cherry-pick which elements of the SVG to apply the styling to, and some properties (e.g., fill) will only be applied to those elements that have them undefined. But hey, you can’t do this with Web Font icons either!

    In the demo below, you can see an example of an SVG sprite in action. When you mouse over the image, the torch’s fire will change its color via CSS.

    See the Pen SVG acolyte demo by ladybenko (@ladybenko) on CodePen.

    I hope that these tips are helpful. If you have any questions, or would like to share your own tip, please leave a comment!

    QMO: Firefox 49 Beta 7 Testday, August 26th

    Hello Mozillians,

    We are happy to announce that Friday, August 26th, we are organizing Firefox 49 Beta 7 Testday. We will be focusing our testing on WebGL Compatibility and Exploratory Testing. Check out the detailed instructions via this etherpad.

    No previous testing experience is required, so feel free to join us in the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

    Join us and help us make Firefox better! See you on Friday!

    Air Mozilla: Webdev Beer and Tell: August 2016

    Webdev Beer and Tell: August 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

    Air Mozilla: Improving Pytest-HTML: My Outreachy Project

    Improving Pytest-HTML: My Outreachy Project Ana Ribero of the Outreachy Summer 2016 program cohort describes her experience in the program and what she did to improve Mozilla's Pytest-HTML QA tools.

    Mozilla Add-ons Blog: A Simpler Add-on Review Process

    In 2011, we introduced the concept of “preliminary review” on AMO. Developers who wanted to list add-ons that were still being tested or were experimental in nature could opt for a more lenient review with the understanding that they would have reduced visibility. However, having two review levels added unnecessary complexity for the developers submitting add-ons, and the reviewers evaluating them. As such, we have implemented a simpler approach.

    Starting on August 22nd, there will be one review level for all add-ons listed on AMO. Developers who want to reduce the visibility of their add-ons will be able to set an “experimental” add-on flag in the AMO developer tools. This flag won’t have any effect on how an add-on is reviewed or updated.

    All listed add-on submissions will either get approved or rejected based on the updated review policy. For unlisted add-ons, we’re also unifying the policies into a single set of criteria. They will still be automatically signed and post-reviewed at our discretion.

    We believe this will make it easier to submit, manage, and review add-ons on AMO. Review waiting times have been consistently good this year, and we don’t expect this change to have a significant impact on them. It should also make it easier to work on AMO code, setting up a simpler codebase for future improvements. We hope this makes the lives of our developers and reviewers easier, and we thank you for your continued support.

    Arabic Mozilla: “Firefox Accounts” now in Arabic

    The localization of the “Firefox Accounts” project is now complete, and most of the RTL issues in its user interface have been fixed. You can now try the final version of “Firefox Accounts”.

    Feel free to report and correct any mistakes you find – this is an open-source project and it needs your feedback and contributions.
    Note: you need the Arabic version of Firefox to see the new release of the Firefox Accounts project.
    Important links:
    To follow the translations and contribute to localizing the project:
    To report RTL issues in the user interface:

    Air Mozilla: Intern Presentations 2016, 18 Aug 2016

    Intern Presentations 2016 Group 5 of Mozilla's 2016 Interns presenting what they worked on this summer. Click the Chapters Tab for a topic list. Nathanael Alcock- MV Dimitar...

    SUMO Blog: What’s Up with SUMO – 18th August

    Hello, SUMO Nation!

    It’s good to be back and know you’re reading these words :-) A lot more is happening this week (have you heard about Activate Mozilla?), so go through the updates if you have not attended all our meetings – and do let us know in the comments if there’s anything else you want to see in these blog posts!

    Welcome, new contributors!

    If you just joined us, don’t hesitate – come over and say “hi” in the forums!

    Contributors of the week

    Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

    Most recent SUMO Community meeting

    The next SUMO Community meeting

    • …is happening on the 24th of August!
    • If you want to add a discussion topic to the upcoming meeting agenda:
      • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
      • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
      • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



    Support Forum

    • The SUMO Firefox 48 Release Report is open for feedback: please add your links, tweets, bugs, threads and anything else that you would like to have highlighted in the report.
    • More details about the audit for the Support Forum:
      • The audit will only be happening this week and next
      • It will determine the forum contents that will be kept and/or refreshed
      • The Get Involved page will be rewritten and redesigned as it goes through a “Think-Feel-Do” exercise.
      • Please take a few minutes this week to read through the document and make a comment or edit.
      • One of the main questions is “what are the things that we cannot live without in the new forum?” – if you have an answer, write more in the thread!
    • Join Rachel in the SUMO Vidyo room on Friday between noon and 14:00 PST for answering forum threads and general hanging out!

    Knowledge Base & L10n


    • Firefox for Android
      • Version 49 will not have many features, but will include bug and security fixes.
    • Firefox for iOS
      • Version 49 will not have many features, but will include bug and security fixes.

    … and that’s it for now, fellow Mozillians! We hope you’re looking forward to a great weekend and we hope to see you soon – online or offline! Keep rocking the helpful web!

    Air Mozilla: Connected Devices Weekly Program Update, 18 Aug 2016

    Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

    Air Mozilla: Reps weekly, 18 Aug 2016

    Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

    Air Mozilla: 360/VR Meet-Up

    360/VR Meet-Up We explore the potential of this evolving medium and update you on the best new tools and workflows, covering: -Preproduction: VR pre-visualization, Budgeting, VR Storytelling...

    Air Mozilla: The Joy of Coding – Episode 68

    The Joy of Coding - Episode 68 mconley livehacks on real Firefox bugs while thinking aloud.

    QMO: Firefox 49 Beta 3 Testday Results

    Hello Mozillians!

    As you may already know, last Friday – August 12th – we held a new Testday event, for Firefox 49 Beta 3.

    Thank you all for helping us make Mozilla a better place – Logicoma, Julie Myers, Moin Shaikh, Ilse Macías, Iryna Thompson.

    From Bangladesh: Rezaul Huque Nayeem, Raihan Ali, Md. Rahimul Islam, Rabiul Hossain Bablu, Hossain Al Ikram, Azmina Akter Papeya, Saddam Hossain, Sufi Ahmed Hamim, Fahim, Maruf Rahman, Hossain Ahmed Sadi, Tariqul Islam Chowdhury, Sajal Ahmed, Md.Majedul islam, Amir Hossain Rhidoy, Toki Yasir, Jobayer Ahmed Mickey, Sayed Ibn Masud, kazi Ashraf hossain, Sahab Ibn Mamun, Kazi Nuzhat Tasnem, Sourov Arko, Sauradeep Dutta, Samad Talukder, Kazi Sakib Ahmad, Sajedul Islam, Forhad hossain, Syed Nayeem Roman, Md. Faysal Alam Riyad, Tanvir Rahman, Oly Roy, Akash, Fatin Shahazad.

    From India: Paarttipaabhalaji, Surentharan, Bhuvana Meenakshi.K, Nagaraj V, Md Shahbaz Alam, prasanthp96, Selva Makilan, Jayesh Ram, Dhinesh Kumar M, B.AISHWARYA, Ashly Rose, Kamlesh Vilpura, Pavithra.

    A big thank you goes out to all our active moderators too!


    Keep an eye on QMO for upcoming events! 😉

    hacks.mozilla.org: Using Feature Queries in CSS

    There’s a tool in CSS that you might not have heard of yet. It’s powerful. It’s been there for a while. And it’ll likely become one of your favorite new things about CSS.

    Behold, the @supports rule. Also known as Feature Queries.

    With @supports, you can write a small test in your CSS to see whether or not a particular “feature” (CSS property or value) is supported, and apply a block of code (or not) based on the answer. Like this:

    @supports (display: grid) {
       /* code that will only run if CSS Grid is supported by the browser */
    }

    If the browser understands display: grid, then all of the styling inside the brackets will be applied. Otherwise all of that styling will be skipped.

    Now, there seems to be a bit of confusion about what Feature Queries are for. This is not some kind of external verification that analyzes whether or not a browser has properly implemented a CSS property. If you are looking for that, look elsewhere. Feature Queries ask the browser to self-report on whether or not a certain CSS property/value is supported, and use the answer to decide whether or not to apply a block of CSS. If a browser has implemented a feature improperly or incompletely, @supports won’t help you. If the browser is misreporting what CSS it supports, @supports won’t help you. It’s not a magic wand for making browser bugs disappear.

    That said, I’ve found @supports to be incredibly helpful. The @supports rule has repeatedly let me use new CSS far earlier than I could without it.

    For years, developers have used Modernizr to do what Feature Queries do — but Modernizr requires JavaScript. While the scripts might be tiny, CSS architected with Modernizr requires the JavaScript file to download, to execute, and to complete before the CSS is applied. Involving JavaScript will always be slower than only using CSS. Requiring JavaScript opens up the possibility of failure — what happens if the JavaScript doesn’t execute? Plus, Modernizr requires an additional layer of complexity that many projects just can’t handle. Feature Queries are faster, more robust, and much simpler to use.
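Feature detection is also exposed to JavaScript through the CSS.supports() API. As a small defensive sketch (the helper name is ours, not a standard API), treating a missing CSS.supports() the same as "not supported":

```javascript
// Feature-detect CSS Grid from JavaScript. If the CSS.supports() API
// itself is absent, report the feature as unsupported.
function supportsGrid(cssApi) {
  return !!(cssApi &&
            typeof cssApi.supports === 'function' &&
            cssApi.supports('display', 'grid'));
}
// In a browser you would call supportsGrid(window.CSS).
```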

    You might notice the syntax of a Feature Query is a lot like a Media Query. I think of them as cousins.

    @supports (display: grid) {
      main {
        display: grid;
        grid-template-columns: repeat(auto-fit, minmax(280px, 1fr));
      }
    }

    Now most of the time, you do not need such a test in your CSS. For example, you can write this code without testing for support:

    aside {
      border: 1px solid black;
      border-radius: 1em;
    }

    If a browser understands border-radius, then it will put rounded corners on the aside box. If it doesn’t, it will skip that line of code and move on, leaving the edges of the box to be square. There is no reason to run a test or use a Feature Query. This is just how CSS works. It’s a fundamental principle in architecting solid, progressively-enhanced CSS. Browsers simply skip over code they don’t understand, without throwing an error.

    a screenshot of the border-radius effect in old vs. new browsers

    Most browsers will display border-radius: 1em as the result on the right. Internet Explorer 6, 7 and 8, however, will not round the corners, and you’ll see the result on the left. Check out this example at

    You do not need a Feature Query for this.

    So when do you want to use @supports? A Feature Query is a tool for bundling together CSS declarations so that they’ll run as a group under certain conditions. Use a Feature Query when you want to apply a mix of old and new CSS, but only when the new CSS is supported.

    Let’s look at an example using the Initial Letter property. This new property initial-letter tells the browser to make the element in question bigger — like for a drop cap. Here, the first letter of the first word in a paragraph is being told to be the size of four lines of text. Fabulous. Oh, but I would also like to make that letter bold, and put a bit of margin on its right side, and hey, let’s make it a nice orange color. Cool.

      p::first-letter {
         -webkit-initial-letter: 4;
         initial-letter: 4;
         color: #FE742F;
         font-weight: bold;
         margin-right: 0.5em;
      }

    a screenshot of this example Initial Letter in Safari 9

    Here’s what our initial-letter example looks like in Safari 9.

    Now let’s see what will happen in all the other browsers…

    a screenshot of this Initial Letter example in other browsers

    Oh, no. This looks terrible in all the other browsers.

    Well, that’s not acceptable. We don’t want to change the color of the letter, or add a margin, or make it bold unless it’s also going to be made bigger by the Initial Letter property. We need a way to test and see whether or not the browser understands initial-letter, and only apply the change to color, weight, and margin if it does. Enter the Feature Query.

    @supports (initial-letter: 4) or (-webkit-initial-letter: 4) {
      p::first-letter {
         -webkit-initial-letter: 4;
         initial-letter: 4;
         color: #FE742F;
         font-weight: bold;
         margin-right: 0.5em;
      }
    }

    Notice, you do need to test a full string with both the property and value. This confused me at first. Why am I testing initial-letter: 4 ? Is the value of 4 important? What if I put 17? Does it need to match the value that is further down in my code?

    The @supports rule tests a string that contains both the property and value because sometimes it’s the property that needs the test, and sometimes it’s the value. For the initial-letter example, it doesn’t really matter what you put for the value. But consider @supports (display: grid) and you’ll see the need for both. Every browser understands display. Only experimental browsers understand display: grid (at the moment).

    Back to our example: Currently initial-letter is only supported in Safari 9, and it requires a prefix. So I’ve written the prefix, making sure to also include the unprefixed version, and I’ve written the test to look for one or the other. Yes, you can have or, and, and not statements in your Feature Queries.
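For example, combinations like these are valid (the properties tested here are chosen purely for illustration):

```css
/* Browsers with Flexbox but no Grid */
@supports (display: flex) and (not (display: grid)) {
  /* flexbox-only layout */
}

/* Browsers with neither */
@supports not ((display: flex) or (display: grid)) {
  /* float-based fallback */
}
```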

    Here’s the new result. The browsers that understand initial-letter show a giant bolded, orange drop-cap. The other browsers act like the drop cap doesn’t exist — the same way they would if I’d waited to use this feature until more browsers had support for it. (We are currently implementing Initial Letter in Firefox, by the way.)

    a before and after comparison

    The screenshot on the left is from Safari 9. All other browsers show the result on the right. You can see this code in action at

    Organizing Your Code

    Now you might be tempted to use this tool to cleanly fork your code into two branches. “Hey browser, if you understand Viewport Units, do this, and if you do not understand them, do this other thing.” That feels nice and tidy.

    @supports (height: 100vh) {
      /* my layout that uses viewport height */
    }
    @supports not (height: 100vh) {
      /* the alternative layout for older browsers */
    }

    This is not a good idea — at least not yet. Do you see the problem?

    Well, not all browsers support Feature Queries. And the browsers that do not understand @supports will skip over both blocks of code. That’s probably bad.

    Does this mean we can’t use Feature Queries until 100% of browsers support them? Nope. We can, and we should use Feature Queries today. Simply do not write your code like the last example.

    How do we do this right? Well, in much the same way we used Media Queries before they were 100% supported. And well, actually it’s easier to use Feature Queries in this transitional period than it was to use Media Queries. You just have to be smart about it.

    You want to structure your code knowing that the oldest browsers won’t support Feature Queries or the feature you are testing for. I’ll show you how.

    (Of course, sometime in the far future, once 100% of the browsers have Feature Queries, we can make heavier use of @supports not and organize our code in that way. But it’ll be many years until we get there.)

    Support for Feature Queries

    So how far back are Feature Queries supported?

    Well @supports has worked in Firefox, Chrome, and Opera since mid–2013. It also works in every version of Edge. Safari shipped it in Fall 2015, in Safari 9. Feature Queries are not supported in any version of Internet Explorer, Opera Mini, Blackberry Browser, or UC Browser.

    a screenshot from Can I Use showing support for Feature Queries

    Looking up support for Feature Queries on Can I Use

    You might think the fact Internet Explorer doesn’t have support for Feature Queries is a big problem. Actually, it’s usually not. I’ll show you why in a moment. I believe the biggest hurdle is Safari 8. We need to keep an eye out for what happens there.

    Let’s look at another example. Say we have some layout code we want to apply that requires using object-fit: cover in order to work properly. For the browsers that don’t understand object-fit, we want to apply different layout CSS.

    a screenshot from Can I Use showing support for Object-fit

    Looking up support for Object Fit on Can I Use

    So let’s write:

    div {
      width: 300px;
      background: yellow;
      /* some complex code for a fallback layout */
    }
    @supports (object-fit: cover) {
      img {
        object-fit: cover;
      }
      div {
        width: auto;
        background: green;
        /* some other complex code for the fancy new layout */
      }
    }

    So what happens? Feature Queries are either supported or not, and the new feature object-fit: cover is either supported or not. Combine those, and we get four possibilities:

    1. Supports Feature Queries, and supports the feature in question
    2. Supports Feature Queries, but does not support the feature
    3. Does not support Feature Queries, and does not support the feature
    4. Does not support Feature Queries, yet supports the feature in question

    Situation 1: Browsers that support Feature Queries, and support the feature in question

    Firefox, Chrome, Opera, and Safari 9 all support object-fit and support @supports, so this test will run just fine, and the code inside this block will be applied. Our image will be cropped using object-fit: cover, and our div background will be green.

    Situation 2: Browsers that support Feature Queries, and do not support the feature in question

    Edge does not support object-fit, but it does support @supports, so this test will run and fail, preventing the code block from being applied. The image will not have object-fit applied, and the div will have a yellow background.

    This is what we want.

    Situation 3: Browsers that do not support Feature Queries, and do not support the feature in question

    This is where our classic nemesis Internet Explorer appears. IE does not understand @supports and it does not understand object-fit. You might think this means we can’t use a Feature Query — but that’s not true.

    Think about the result we want. We want IE to skip over this entire block of code. And that’s exactly what will happen. Why? Because when it reaches @supports, it doesn’t recognize the syntax, and it skips to the end.

    It might be skipping the code “for the wrong reasons” — it skips over the code because it doesn’t understand @supports, instead of because it doesn’t understand object-fit — but who cares! We still get exactly the result we want.

    This same thing happens with the Blackberry Browser and UC Browser for Android. They don’t understand object-fit, nor @supports, so we are all set. It works out great.

    The bottom line — anytime you use a Feature Query in a browser that does not support Feature Queries, it’s fine as long as that browser also does not support the feature you are testing.

    Think through the logic of your code. Ask yourself, what happens when the browser skips over this code? If that’s what you want, you are all set.

    Situation 4: Browsers that do not support Feature Queries, yet do support the feature in question

    The problem is this fourth combination — when the test proposed by a Feature Query doesn’t run, but the browser does support that feature and should run that code.

    For example, object-fit is supported by Safari 7.1 (on Mac) and 8 (Mac and iOS) — but neither browser supports Feature Queries. The same is true for Opera Mini — it will support object-fit but not @supports.

    What happens? Those browsers get to this block of code, and instead of using the code, applying object-fit:cover to the image and turning the background color of the div green, it skips the whole block of code, leaving yellow as the background color.

    And this is not really what we want.

    Feature Query support?   | Feature support?     | What happens?      | Is this what we want?
    Supports Feature Queries | Supports the feature | CSS is applied     | Yes
    Supports Feature Queries | Does not support it  | CSS is not applied | Yes
    Does not support them    | Does not support it  | CSS is not applied | Yes
    Does not support them    | Supports the feature | CSS is not applied | No, likely not

    Of course, it depends on the particular use case. Perhaps this is a result we can live with. The older browser gets an experience planned for older browsers. The web page still works.

    But much of the time, we will want that browser to be able to use any feature that it does support. This is why Safari 8 is likely the biggest problem when it comes to Feature Queries, not Internet Explorer. There are many newer properties that Safari 8 does support — like Flexbox. You probably don’t want to block Safari 8 from these properties. That’s why I rarely use @supports with Flexbox, or when I have, I’ve written at least three forks in my code, one with a not. (Which gets complicated fast, so I’m not even going to try to explain it here.)

    If you are using a feature that has better support in older browsers than Feature Queries, then think through all of the combinations as you write your code. Be sure not to exclude browsers from getting something you want them to get.

    Meanwhile, it’s easy to use @supports with the newest CSS features — CSS Grid for example, and Initial Letter. No browser will ever support CSS Grid without supporting Feature Queries. We don’t have to worry about our fourth, problematic combination with the newest features, which makes Feature Queries incredibly useful as we go forward.

    All of this means that while IE11 will likely be around for many years to come, we can use Feature Queries liberally with the newest advances in CSS.

    Best Practices

    So now we realize why we can’t write our code like this:

    @supports not (display: grid) {
        /* code for older browsers */
    }
    @supports (display: grid) {
        /* code for newer browsers */
    }

    If we do, we’ll stop the older browsers from getting the code they need.

    Instead, structure your code like this:

    /* fallback code for older browsers */
    @supports (display: grid) {
        /* code for newer browsers */
        /* including overrides of the code above, if needed */
    }

    This is exactly the strategy we applied to using Media Queries when supporting old versions of IE. This strategy is what gave rise to the phrase “mobile first”.

    I expect CSS Grid to land in browsers in 2017, and I bet we will use Feature Queries quite a lot when implementing future layouts. It’s going to be much less of a hassle, and much faster, than involving JavaScript. And @supports will let us do interesting and complex things for browsers that support CSS Grid, while providing layout options for the browsers that don’t.

    Feature Queries have been around since mid–2013. With the imminent release of Safari 10, I believe it’s past time for us to add @supports to our toolbox.

    Air Mozilla: 2016 Intern Presentations

    2016 Intern Presentations Group 4 of the interns will be presenting on what they worked on this summer. Andrew Comminos- TOR Benton Case- PDX Josephine Kao- SF Steven...

    Mozilla Add-ons Blog: “Restart Required” Badge on AMO

    When add-ons were first introduced as a way to personalize Firefox, they required a restart of Firefox upon installation. Then came “restartless” extensions, which made the experience of installing an add-on much smoother. Every iteration of extensions APIs since then has similarly supported restartless add-ons, up to WebExtensions.

    To indicate that an add-on was restartless, we added “No Restart” badges next to them on addons.mozilla.org (AMO). This helped people see which add-ons would be smoother to install, and encouraged developers to implement them for their own add-ons. However, two things happened recently that prompted us to reverse this badge. Now, rather than using a “No Restart” badge to indicate that an add-on is restartless, we will use a “Restart Required” badge to indicate that an add-on requires a restart.

    One reason for this change is because we reached a tipping point: now that restartless add-ons are more common, and the number of WebExtensions add-ons is increasing, there are now more extensions that do not require a restart than those that do.

    Another reason is that we encountered an unexpected issue with the recent introduction of multiprocess Firefox. In Firefox 48, multiprocess capability was only enabled for people with no add-ons installed. If you are one of these people and you now install an add-on, you’ll be asked to restart Firefox even if the add-on is restartless. This forced restart will only occur over the next few versions as multiprocess Firefox is gradually rolled out. This is not because of the add-on, but because Firefox needs to turn multiprocess off in order to satisfy the temporary rule that only people without add-ons installed have multiprocess Firefox enabled. So a “No Restart” badge may be confusing to people.

    Restartless add-ons becoming the norm is a great milestone and a huge improvement in the add-on experience, and one we couldn’t have reached without all our add-on developers—thank you!

    hacks.mozilla.org: What’s new in Web Audio?

    The Web Audio API is still under development, which means there are new methods and properties being added, renamed, shuffled around or simply removed!

    In this article, we look at what’s happened since our last update in early 2015, both in the Web Audio specification and in Firefox’s implementation. The demos all work in Firefox Nightly, but some of the latest changes might not be present in Firefox release or Developer Edition yet.

    API changes

    Breaking change

    The reduction attribute in DynamicsCompressorNode is now a float instead of an AudioParam. You can read the value with compressor.reduction instead of compressor.reduction.value.

    This value shows the amount of gain reduction that the compressor is applying to the signal, and it had been read-only anyway, so it makes sense to have it as a float and not an AudioParam (since no changes can be scheduled).

    To detect whether the browser your code is running on supports the AudioParam or float data type, you can check for the existence of the .value attribute on reduction:

    if (compressor.reduction.value !== undefined) {
      // old style: reduction is an AudioParam
    } else {
      // new style: reduction is a float
    }

    Take a look at this example where the reduction attribute is accessed while a drum loop is playing. Notice how the value changes to react to the loudness in the track, and how we’re detecting which API version the browser supports before reading the attribute value.

    New properties and methods

    New life cycle management methods in AudioContext

    With AudioContexts being rather expensive, three new methods have been added: suspend(), resume() and close().

    These allow developers to suspend the processing for some time until it is needed again, and to free some resources with close() when the AudioContext isn’t required anymore.

    Essentially, when an AudioContext is suspended, no sounds will be played, and when it resumes, it will continue playing where it left off. The description for suspend() in the specification has all the details.

    This example demonstrates the usage of these methods.
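    As a rough sketch of how this flows in code (the AudioContext is stubbed here so the control flow can run anywhere; in a page you would simply use new AudioContext()):

```javascript
// All three lifecycle methods return Promises and move the context through
// its states: "running" -> "suspended" -> "running" -> "closed".
// StubAudioContext mimics just that surface for illustration; in a browser,
// use `new AudioContext()` directly.
class StubAudioContext {
  constructor() { this.state = 'running'; }
  suspend() { this.state = 'suspended'; return Promise.resolve(); }
  resume()  { this.state = 'running';   return Promise.resolve(); }
  close()   { this.state = 'closed';    return Promise.resolve(); }
}

const ctx = new StubAudioContext();
ctx.suspend()                    // processing stops, the audio clock freezes
  .then(() => ctx.resume())      // playback continues where it left off
  .then(() => ctx.close());      // frees resources; ctx is unusable afterwards
```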

    Precise AudioNode disconnect() methods

    In the past, you could not disconnect nodes selectively: if you ran disconnect() on a given node, it would disconnect it from all other nodes.

    Thankfully, the disconnect() method is now overloaded, giving you much finer control over which connections you remove:

    • disconnect() – disconnects the node from every node (this is the existing behaviour)
    • disconnect(outputNumber) – disconnects all connections from the node’s outputNumber output
    • disconnect(anotherNode) – disconnects all connections to anotherNode
    • disconnect(anotherNode, outputNumber) – disconnects the connections from the node’s outputNumber output to anotherNode
    • disconnect(anotherNode, outputNumber, inputNumber) – disconnects the connection from the node’s outputNumber output to anotherNode’s inputNumber input
    • disconnect(audioParam) – disconnects connections from this node to audioParam
    • disconnect(audioParam, outputNumber) – disconnects connections from this node’s outputNumber output to audioParam

    I strongly recommend you read the specification for AudioNode to understand all the details on the effects of these disconnections. You can also read the original discussion to find out about the motivation for this change.

    New length attribute in OfflineAudioContext

    This new attribute reflects the value that was passed to the constructor when the OfflineAudioContext was initialised, so developers don’t have to keep track of it on a separate variable:

    var oac = new OfflineAudioContext(1, 1000, 44100);
    oac.length;
    >> 1000

    Here’s an example that demonstrates using that attribute and also rendering a sound wave with a gain envelope.

    New detune attribute in AudioBufferSourceNode

    This is similar to the detune attribute in OscillatorNode, and lets you fine-tune samples with more accuracy than the existing playbackRate property allows.
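    The relationship between the two is worth spelling out: detune is measured in cents (hundredths of a semitone), while playbackRate is a plain frequency ratio. A quick sketch of the conversion (the source variable in the comments is hypothetical):

```javascript
// detune is in cents: 100 cents = 1 semitone, 1200 cents = 1 octave.
// playbackRate is a ratio, so the two are related exponentially.
const centsToPlaybackRate = (cents) => Math.pow(2, cents / 1200);
const playbackRateToCents = (rate) => 1200 * Math.log2(rate);

// Shifting a sample up a perfect fifth (700 cents):
// source.detune.value = 700;                            // precise, in cents
// source.playbackRate.value = centsToPlaybackRate(700); // ≈ 1.498
```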

    New AudioParam-typed position and orientation attributes in PannerNode

    These new attributes are AudioParams, which means you can use automation to modify them instead of continuously calling the setPosition() or setOrientation() methods in a loop.

    The StereoPannerNode pan attribute was already an AudioParam, so all the nodes that let you pan sounds in space also allow you to automate their spatial properties. Great stuff for modular synthesis!

    That said, we still lack the ability to automate the position and orientation properties in AudioListener, which means that if you want to update these periodically you have to use setPosition() and setOrientation() methods on the AudioListener for now. (Bug #1283029 tracks this).

    Passing values to set initial parameters for PeriodicWave instances

    You can now pass an options object when creating instances of PeriodicWave:

    var wave = audioContext.createPeriodicWave(real, imag, { disableNormalization: false });

    Compare with the previous syntax:

    var wave = audioContext.createPeriodicWave(real, imag);
    wave.disableNormalization = false;

    In the future, all node creation methods will allow developers to pass objects to set their initial parameters, and nodes will also be constructible, so we’ll be able to write things such as new GainNode(anAudioContext, {gain: 0.5});. This will make Web Audio code far more succinct when it comes to initialising nodes. Less code to maintain is always good news!
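    As an illustration of what the real and imag arrays actually contain, here is a sketch (not from the article) that computes the Fourier coefficients of a square wave, which you could then hand to createPeriodicWave:

```javascript
// A square wave is built from odd harmonics with amplitude 4/(k*pi);
// the real (cosine) terms are all zero because the wave is odd-symmetric.
function squareWaveCoefficients(harmonics) {
  const real = new Float32Array(harmonics + 1); // index 0 is the DC offset
  const imag = new Float32Array(harmonics + 1);
  for (let k = 1; k <= harmonics; k++) {
    if (k % 2 === 1) imag[k] = 4 / (k * Math.PI);
  }
  return { real, imag };
}

const { real, imag } = squareWaveCoefficients(8);
// In a browser:
// var wave = audioContext.createPeriodicWave(real, imag, { disableNormalization: false });
```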

    New node: IIRFilterNode

    Building an IIRFilter with Digital Filter Design

    If BiquadFilterNode is not enough for your needs, IIRFilterNode will allow you to build your own custom filter.

    You can create instances by calling the createIIRFilter() method on an AudioContext, passing in two arrays of coefficients representing the feedforward and feedback values that define the filter:

    var customFilter = audioContext.createIIRFilter([ 0.1, 0.2, ...], [0.4, 0.3, ...]);

    This type of filter node is not automatable, which means that once created, you cannot change its parameters. If you want to use automation, you’ll have to keep using the existing BiquadFilter nodes, and alter their Q, detune, frequency and gain attributes which are all AudioParams.

    The spec has more data on these differences, and you can use the Digital Filter Design resource to design and visualise filters and get ready-to-use Web Audio code with prepopulated feedforward and feedback arrays.
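    For instance, here is a sketch of how you might compute the feedforward/feedback arrays before handing them to createIIRFilter. This is not from the article; it’s the textbook one-pole lowpass design, shown here only to make the coefficient convention concrete:

```javascript
// One-pole lowpass: y[n] = b0 * x[n] + p * y[n-1], where p = e^(-2*pi*fc/fs).
// In Web Audio's convention, feedforward holds the b coefficients and
// feedback holds the a coefficients (with a0 = 1).
function onePoleLowpass(cutoffHz, sampleRate) {
  const p = Math.exp((-2 * Math.PI * cutoffHz) / sampleRate);
  return {
    feedforward: [1 - p], // b0, chosen so the gain at DC is exactly 1
    feedback: [1, -p],    // a0, a1
  };
}

const { feedforward, feedback } = onePoleLowpass(1000, 44100);
// In a browser:
// var customFilter = audioContext.createIIRFilter(feedforward, feedback);
```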

    Chaining methods

    Some more “syntactic sugar” to improve developers’ ergonomics:

    The connect() methods return the node they connect to, so you can chain multiple nodes faster. Compare:

    // Without chaining:
    source.connect(filter);
    filter.connect(gain);
    gain.connect(audioContext.destination);

    // With chaining:
    source.connect(filter)
      .connect(gain)
      .connect(audioContext.destination);
    And the AudioParam automation methods can be chained as well, as each method returns the object it was called on. For example, you could use it to define envelopes faster:


    // Without chaining:
    gain.setValueAtTime(0, ac.currentTime);
    gain.linearRampToValueAtTime(1, ac.currentTime + attackTime);

    // With chaining:
    gain.setValueAtTime(0, ac.currentTime)
      .linearRampToValueAtTime(1, ac.currentTime + attackTime);

    Coming up in the future

    The Web Audio Working Group is almost finished writing the specification for AudioWorklets, the new name for AudioWorkers. These will replace ScriptProcessorNode, which also lets you write your own processing code, but runs on the main thread, so it’s not the best idea performance-wise.

    The pull request defining AudioWorklets and associated objects on the specification must be merged first, and once that’s done vendors can start implementing support for AudioWorklets on their browsers.

    Firefox changes: performance and debugging improvements

    Three hard-working engineers (Karl Tomlinson, Daniel Minor and Paul Adenot) spent at least six months improving the performance of Web Audio in Firefox. In practical terms, this means audio code now takes less time to run, and Firefox is as fast as, or faster than, Chrome. The only exception is when working with AudioParams, where Firefox’s performance is not as good… yet.

    Similarly, ScriptProcessorNodes are now less prone to introducing delay when the main thread is very busy. This is great for applications such as console emulators: low latency means a more faithful emulation, which in turn makes for lots of fun playing games!

    Going even deeper, assembly level optimisations for computing DSP kernels have been introduced. These take advantage of SIMD instructions on ARM and x86 to compute multiple values in parallel, for simple features such as panning, adjusting gain, etc. This means faster and more efficient code, which uses less battery—especially important on mobile devices.

    Additionally, cross-origin errors involving MediaElement nodes are now reported to the developer tools console instead of failing silently. This helps developers identify the exact issue, instead of wondering why they are getting only silence.

    There were many more fixed bugs—probably too many to list here! But have a look at the bug list if you’re really curious.

    Mozilla Add-ons BlogWebExtensions Taking Root

    Stencil and its 700,000+ royalty-free images are now available for Firefox users, thanks to WebExtensions.


    From enhanced security for users to cross-browser interoperability and long-term compatibility with Firefox—including compatibility with multiprocess Firefox—there are many reasons why WebExtensions are becoming the future of add-on development.

    So it’s awesome to see so many developers already embracing WebExtensions. To date, there are more than 700 listed on AMO. In celebration of their efforts to modernize their add-ons, I wanted to share a few interesting ones I recently stumbled upon…

    musicfm has an impressively vast and free music library, plus an intuitive layout for simple browsing. However, I’m more of a SoundCloud music consumer myself, so I was intrigued to find SCDL SoundCloud Downloader, which is built for downloading not just music files, but related artwork and other metadata.

    The popular Chrome add-on Stencil is now available for Firefox, thanks to WebExtensions. It’s a diverse creativity tool that allows you to combine text and imagery in all sorts of imaginative ways.


    musicfm offers unlimited free music and the ability to create your own playlists and online stations.

    I’m enjoying Dark Purple YouTube Theme. I think video resolution reads better against a dark background.

    Keepa is one of the finest Amazon price trackers out there that also supports various international versions of the online bazaar (UK, Germany, Japan, plus many others).

    Googley Eyes elegantly informs you which of the sites you visit send information about you to Google.

    Search Engine Ad Remover is a perfectly titled extension. But arguably even better than removing ads is replacing them with cat pics.

    Thanks for your continued support as we push ahead with a new model of extension development. If you need help porting your add-on to WebExtensions, check out the resources we’ve compiled. If you’re interested in writing your first add-on with WebExtensions, here’s how to get started.

    SUMO BlogWhat’s Up with SUMO – 11th August

    Hello, SUMO Nation!

    How have you been? We missed you! Some of you have gone on holidays and already came back (to the inaudible – but huge – relief of the hundreds of users who ask questions in the forums and the millions of visitors who read the Knowledge Base). Let’s move on to the updates, shall we?

    Welcome, new contributors!

    • … who seem to be enjoying summer away from computers… The way they should! So, no major greeting party for anyone this week, since you’ve been fairly quiet… But, if you just joined us, don’t hesitate – come over and say “hi” in the forums!

    Contributors of the week

    Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

    Most recent SUMO Community meeting

    The next SUMO Community meeting

    • …is happening on the 17th of August!
    • If you want to add a discussion topic to the upcoming meeting agenda:
      • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
      • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
      • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



    Support Forum

    Knowledge Base & L10n



    …what a quiet ending to this post, I hope you did not fall asleep. Then again, a siesta on a hot summer day is the best thing ever, trust me :-). Keep rocking (quietly, at least in the summer) the helpful web!

    Air MozillaConnected Devices Weekly Program Update, 11 Aug 2016

    Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

    Arabic MozillaThe Best Add-ons We Recommend for Firefox

    What can you do with Firefox? I think we should call Firefox an operating system even though it is a browser, because it can do so many things that you may no longer need many other applications: it plays MP3 files, video files and PDF files; you can turn it into a video-download platform, program inside it, write in it, and much more. But is that the end? No. With add-ons, you can push Firefox to its full potential. In this article I will review the best add-ons that I use.

    The Firefox add-ons site

    The anonymox add-on

    Sometimes you need to change your IP address (your address on the internet, i.e. where you appear to be browsing from), perhaps to access services that are not available in your country, or because there is a problem with the servers of the local company providing your internet service. This add-on solves those problems: you can choose to relocate your online presence to one of several countries. The free version lets you switch to the US, the UK and the Netherlands, and the paid version offers more.
    To get the add-on: click here


    The DownThemAll add-on

    This add-on is a fully free alternative to the paid IDM program. It is a great download manager, and it can also download everything a page contains at the press of a button, all for free and without the usual hunt for a crack or a free serial number.
    To get the add-on: click here


    The Lightbeam add-on from Mozilla
    Mozilla is a pioneer in privacy protection and anti-tracking, and Lightbeam was developed for exactly that: it lets you see which sites are tracking you through a graphical interface based on simple geometric shapes. For example, if you visit a site that carries Google AdSense ads, that site is being tracked by Google, so the add-on draws the site you visited as a circle, puts Google in a triangle, and connects the two with a line. You can also view the same information as a data table. I think this is the smartest tracking-visualisation add-on: you can stop the tracking, and there is a classification identifying third parties. For instance, if the site you visited is your actual destination, then Google, whose ads are placed on that site, counts as a third party, and you can block it.
    To get the add-on: click here


    The Privacy Settings add-on, for complete privacy control

    Firefox lets you control the browser completely, but that can be difficult for beginners. With this add-on you can control almost everything, simply, through straightforward on/off switches. The great thing is that it is available on desktop and on Android.
    To get the add-on: click here


    The uBlock Origin add-on, for blocking ads

    Adblock Plus lost some users after leaks suggested it had signed deals with certain advertising companies, meaning the add-on would let some ads through. With this add-on, all ads disappear, so it is considered an alternative to the famous Adblock Plus.
    To get the add-on: click here

    The Safe preview add-on

    Want to check whether the site you are about to visit carries malware? This add-on lets you verify that by scanning the site through several digital-security services.
    To get the add-on: click here


    The Stylish add-on

    Does Facebook’s layout annoy you? This add-on lets you change the look of any site you want to other designs available on its official site, or build your own vision of it. For specialists: the add-on edits a site’s CSS, forcing the properties you set for display and pushing the site’s own aside.
    To get the add-on: click here


    The Stylish Sync add-on

    This add-on is the previous one’s sibling: it exists to synchronise your styles with your other devices running desktop Firefox. In other words, if you restyle a site and want to carry those changes over to another machine, this add-on integrates with Firefox Accounts and syncs the data across.
    To get the add-on: click here


    The Video DownloadHelper add-on

    Want to download any video from any site (except YouTube)? This add-on gives you that power: when a page loads, it tracks the media on it, and if it finds a video it adds it to its list. (A technical aside, just for learning: when you watch a video, it is actually downloaded to your device and then played, which is why it takes time to load.)
    To get the add-on: click here


    The 1-Click YouTube Video Downloader add-on

    We said that YouTube is the one site the previous add-on does not work with, but here is the solution to that problem: with this add-on, you can download any YouTube video, in any available format, with a single click.
    To get the add-on: click here


    The Adblock Plus Pop-up Addon

    A sibling of Adblock Plus, this add-on specialises in stopping pop-up windows from appearing. You can also enable a built-in Firefox setting that warns you when sites try to redirect you elsewhere, at this path:
    • Firefox > Options/Preferences > Advanced > General : Accessibility : [ ] “Warn me when web sites try to redirect or reload the page”
    To get the add-on: click here


    The BetterPrivacy add-on

    This add-on was named add-on of the month for July 2016. It gives you full control over cookies (cookies are small files containing information that you share with sites for a better user experience).
    To get the add-on: click here


    Have a problem? Need help? Leave a comment and we will help you.

    Arabic MozillaReport on the Second Regular Meeting of the Arabic Mozilla Community, 23 April 2016

    Meeting 2
    On 23 April 2016, from 7 to 8 p.m. Algeria/Tunisia time, a meeting brought together many members of the Arabic Mozilla community. As usual, several people from Algeria, Tunisia, Egypt and Jordan attended, while the Palestinian community was absent. The meeting was held on Google Hangouts and streamed live on YouTube.

    Mahmoud Qudah from Jordan chaired the meeting, and Amine from Tunisia opened the series of topics. Amine talked about what is new in the Mozilla Reps program, namely that Reps who are inactive for three months will be moved to former-Rep status. He then added another topic, Mozilla’s teams, as promised earlier, and gave me several links that I will put at the bottom of this post. Mozilla’s teams matter enormously within Mozilla itself, because the community currently focuses only on localization and marketing, which leaves individuals, and the community in general, confused by sudden decisions such as the withdrawal of Firefox OS, since we have no members inside Mozilla’s other teams. This topic steered the meeting toward that very point and prompted the question: “Are we doing enough to keep up with what is new at Mozilla?” The attendees exchanged several answers, and Amine gave an example of how he follows the news regularly. After that, the attendees discussed motivation: does Mozilla do much to motivate individuals, especially in the Arabic community, and does it help them with their projects? Admittedly, I did not join that discussion, because I considered it a departure from the points set out for discussion, and because if this habit of digressing continues, some people will raise points that should never be raised at all, the kind that previously made the community collapse as a group. That was my opinion.
    The meeting lasted more than an hour, and the points raised in the meeting document were not discussed; they were postponed to the next meeting. One very good thing we see is this: when people from different countries who speak Arabic gather in one place for nothing but technology and volunteering, I think that is a real achievement. Given that, in our countries, gathering has come to mean only destruction, this group stands as a model of young people who want to change the dark image that has settled over our region lately, planting a seed of hope for a better reality, people who want to protect others’ privacy and make the internet easier and better to use.
    A recording of the meeting for those who missed it:

    An important site for following Mozilla news:
    The open file for the meeting poster (PSD format):

    Air MozillaThe Joy of Coding - Episode 67

    The Joy of Coding - Episode 67 mconley livehacks on real Firefox bugs while thinking aloud.

    Air MozillaPrometheus and Grafana Presentation - Speaker Ben Kochie

    Prometheus and Grafana Presentation - Speaker Ben Kochie Ben Kochie, Site Reliability Engineer and Prometheus maintainer at SoundCloud, will present the basics of Prometheus monitoring and Grafana reporting.

    Air MozillaConnected Devices Meetup

    Connected Devices Meetup The Connected Devices team at Mozilla is an effort to apply our guiding principles to the Internet of Things. We have a lot to learn...

    Air MozillaIntern Presentations 2016, 09 Aug 2016

    Intern Presentations 2016 Group 3 of the interns will be presenting what they worked on this summer.

    hacks.mozilla.orgDeveloper Edition 50: Console, Memory Tool, Net Monitor and more

    Firefox Developer Edition 50 is here. It has numerous improvements that will help you work with script-initiated network requests, tweak indexedDB data, and much more. It also introduces something special we’ve all been really wanting for a while, so let’s get right to it:


    A long-awaited feature is finally coming to the dev tools, but we need your help in this final phase of testing. The source maps feature is currently preffed off by default while we test it before shipping to everyone.

    If you’re curious as to why this has been such a challenging issue, James Long wrote an excellent post on the matter: On the Road to Better Sourcemaps in the Firefox Developer Tools

    Curious how the solution came about? I’ll paraphrase our own Joe Walker,

    “Interns often don’t have all the background on how difficult bugs are, and sometimes jump into really challenging bugs—which is to say, yay interns! ”

    So, a big thanks to Firefox Developer Tools’ intern Jaideep Bhoosreddy for figuring it out.

    Source maps let you combine all your JavaScript files into one script to save download time for your users, or compile from another language (like TypeScript or CoffeeScript) to JavaScript, while maintaining a reference to the original files, so debugging isn’t a nightmare.
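    For reference, a source map is itself a small JSON file that the compiled script points to via a //# sourceMappingURL= comment. A minimal one looks roughly like this (the file names are made up for illustration, and the mappings string, a base64-VLQ-encoded lookup table, is abbreviated here):

```json
{
  "version": 3,
  "file": "app.min.js",
  "sources": ["src/app.coffee"],
  "names": ["greet", "name"],
  "mappings": "AAAA,SAASA,..."
}
```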

    Source maps were supported in the debugger, but until now, not in the console. This meant that any logged message had its location (the file and line the log was emitted from) pointing to the compiled JavaScript file; if that file was long and/or minified, the location info was barely usable.

    Those times are over. The console will now show the original source, and not the compiled one anymore. You can view it in action in the gif below, with a CoffeeScript file:

    Gif demonstrating source map support in console with a CoffeeScript file

    Source map support in the console

    Source map support is currently off by default and can be activated through a preference. Because there are various implementations in the wild depending on the tool used to build the source map file, we want to get some initial testing of the different variations. That’s where you come in.

    Here’s how you can help:

    To activate source map support in the console, turn the preference on:

    • Go to about:config
    • Search for devtools.sourcemap.locations.enabled
    • Double-click the line to toggle the value to true
    • Close and re-open the web console

    about:config screenshot for enabling source map support

    If you see anything that looks wrong, shout out to @firefoxdevtools on Twitter or let us know on the #devtools channel on IRC.

    Network Stack Trace

    In Firefox Developer Edition 50, the console now shows the stack trace that led to a network request in HTTP log messages. This is on by default.

    Screenshot of an HTTP log's stack trace in the console

    Memory Tool

    The Memory Tool is also now enabled by default. This is a must-have tool for debugging and maintaining top-notch app performance. It helps you to find and fix memory leaks in your application. If you want to learn more about it, check out the article on MDN or go read the Hacks post on Firefox’s New Memory Tool.

    Network Monitor

    In Firefox 49, the “Cause” column was added. It shows how a given network request was initiated: its type and, if available, the stack trace that led to it. The stack trace bubble now shows each frame’s asynchronous cause (XHR, Promise, setTimeout, etc.), similar to the debugger’s stack trace panel.

    Screenshot of the Network Monitor panel showing a stack trace with an asynchronous cause

    Furthermore, entries can be sorted by their cause by clicking on the column header. This can be helpful for quickly finding all the network requests that were initiated by fetch, for example.

    JSON Viewer

    The JSON Viewer was refined and now shows data in a smarter manner.

    Storage Inspector

    Following the global effort by Mike Ratcliffe and Jarda Snajdr to improve the Storage Inspector, it is now possible to remove a single indexedDB entry through the context menu.

    Screenshot of the context menu to remove an IndexedDB entry in Storage Editor


    Service Workers are definitely the next big thing in Web development, providing a whole set of tools you can use to build progressive web apps that match native apps in functionality, with offline capabilities and push notifications.
    Did you know that you can manage registered Service Workers in the about:debugging#workers page? This page now also shows push subscription endpoints and allows you to send a test notification with no more effort than the click of a button.

    Screenshot of Push subscription endpoints in the about:debugging page

    Other Notes

    Icons: Icons across all the developer tools got even better in Firefox 50. They are now more consistent and look sharp as a knife.

    Devtools tab icons in Firefox 49

    Tab icons in Firefox 49

    New devtools tab icons in Firefox 50

    New tab icons in Firefox 50

    WebAssembly: As Luke Wagner said in a previous blog post:

    “WebAssembly is an emerging standard whose goal is to define a safe, portable, size- and load-time efficient binary compiler target which offers near-native performance”

    WebAssembly files were already supported in the debugger, and they are now syntax-highlighted, which makes them look much nicer.

    Screenshot of WebAssembly file syntax highlighting in debugger

    WebAssembly file syntax highlighting

    And to wrap up, a minor but useful change: the Style Editor can now be disabled to save some space if you don’t use it.

    With that, we’ve completed the overview of Developer Edition 50. Download the latest update now and let us know what you think. One last thing, though. A lot of the improvements we covered in this post were made possible by awesome contributors. Big thanks to all of you.

    The dev tools are written using standard HTML, JavaScript and CSS, so if you have any front-end development experience, you can contribute too. If you want to help make the dev tools even better, you can find easy bugs to start with. Everyone is welcome!

    Air MozillaConnected Devices Meetup - Project Haiku

    Connected Devices Meetup - Project Haiku Mozilla's Project Haiku team will share some of the things we've learned in the past 6 months working with a young audience and ambient communication.

    Air MozillaConnected Devices Meetup - Nut Technology

    Connected Devices Meetup - Nut Technology Based in San Jose, Luke Ma is the VP of Business Development at Nut Technology focusing on marketing and overall company vision and strategy. Luke...

    Air MozillaConnected Devices Meetup - Apache Mynewt

    Connected Devices Meetup - Apache Mynewt James Pace will speak about Apache Mynewt, a community-driven, permissively licensed open source initiative for constrained, embedded devices and applications. The emergence of the...

    WebmakerCommunity Spotlight: Hive Manchester

    So far in 2016, we’ve had the privilege of spotlighting six exceptional community members that are making a difference in their local communities. This month, we are highlighting Hive Manchester in England and how they are helping to create more digital making opportunities for young people through mentorships and events in Greater Manchester.

    Photo provided by Hive Manchester.

    Photo provided by Hive Manchester.

    We spoke with Damian Payton, co-founder of Hive Manchester, to learn more about the collaboration happening in their community. Here is what he had to say.

    How did Hive Manchester get its start?

    Hive Manchester was inspired by the other Hive communities we met over several years at Mozfest. Steven Flower and I were already working in digital access and skills in Manchester and shared many of Mozilla’s aims, so we decided to found Hive Manchester together. At Mozfest 2014, a Hive was officially set up, and just before Mozfest 2015 we gained funding from Manchester City Council to run Hive as a ‘pilot project’ for one year. That funding has since been extended by the local authority for another six months, and we are actively seeking partnerships with other local organizations.

    Manchester already had a vibrant community of organisations like Code Club, Coderdojo and others, teaching digital literacy to young people, educators and people in the community. Starting a Hive was a way to bring this together more closely and collaboratively, building on a strong existing scene.

    What is Hive Manchester’s most noteworthy #teachtheweb accomplishment?

    Our Youth Hacks have proven a big success – two-day coding events for youth, which we’ve started running every couple of months in different venues, attracting 30–100 people each time. Feedback from the young people attending has confirmed that they love the events and gain a great deal of learning that is not available in school. Employers in the area have proved extremely keen to partner with us at the events, offering mentors, sponsorships and challenges (e.g. “Design an app that has a positive social impact – you decide what this means to you” or “Use a Raspberry Pi to create a robot that can pick up and move an egg one metre – without breaking it”). The employers help because they strongly believe in the ‘hack’ way of learning, want to contribute to the community, and are keen to nurture growth while also finding new talent for their companies.

    To see more Hack Manchester Junior 2015 interviews, watch the full playlist here.


    What is one stand-out project that has made a positive impact on the local community?

    Hive Manchester was contracted to run Picademy, the Raspberry Pi (RPi) Foundation’s flagship educator training programme. It’s an intensive two-day course, with Day 1 focusing on five key RPi learning resources and Day 2 devoted to a hack. We ran six of these sessions for 30 educators each time, reaching not only our city but educators from around Europe, the U.S., Australia and Singapore.

    Read about one Picademy attendee’s experience here.

    Photo provided by Hive Manchester.

    Photo provided by Hive Manchester.

    How is the organization inspiring others to #teachtheweb?

    Our aim is not just to run digital making events ourselves – we want to provide an example and inspiration that others can follow. We are sharing our methods and insights through the Hive Manchester Playbook, an open resource that any educator, community organisation or tech worker can use as a guide for creating their own events – available to share soon!

    What’s next for Hive Manchester?

    Hive Manchester hopes to work more closely with ‘formal education’ to help them integrate methods like hack days into the school day. To get this right, we need tech experts from local businesses to join us in working directly with young people in schools.

    Want to learn more about Hive Manchester? Check out their website and follow them on Twitter.

    Web Application SecurityMWoS 2015: Let’s Encrypt Automation Tooling

    The Mozilla Winter of Security 2015 has ended, and the participating teams of students are completing their projects.

    The Certificate Automation tooling for Let’s Encrypt project wrapped up this month, having produced an experimental proof-of-concept patch for the Nginx webserver to tightly integrate the ACME automated certificate management protocol into the server operation.

    The MWoS team, my co-mentor Richard Barnes, and I would like to thank Klaus Krapfenbauer, his advisor Martin Schmiedecker, and the Technical University of Vienna for all the excellent research and work on this project.

    Below is Klaus’ end-of-project presentation on AirMozilla, as well as further details on the project.

    MWoS Let’s Encrypt Certificate Automation presentation on AirMozilla

    Developing an ACME Module for Nginx

    Author: Klaus Krapfenbauer

    Note: The module is an incomplete proof-of-concept, available at

    The 2015-2016 Mozilla Winter of Security included a project to implement an ACME client within a well-known web server, to show the value of automated HTTPS configuration when used with Let’s Encrypt. Projects like Caddy Server showed the tremendous ease-of-use that could be attained, so for this project we sought to patch-in such automation to a mainstay web server: Nginx.

    The goal of the project is to build a module for a web server to make securing your web site even easier. Instead of configuring the web server, getting the certificate (e.g. with the Let’s Encrypt Certbot) and installing the certificate on the web server, you just need to configure your web server. The rest of the work is done by the built-in ACME module in the web server.


    This project didn’t specify which particular web server we should develop on. We evaluated several, including Apache, Nginx, and Stunnel. Since the goal is to help as many people as possible secure their web sites, we narrowed the choice to the two most widely used: Nginx and Apache. Ultimately, we decided to work with Nginx, since it has a younger code base to develop against.

    Nginx has a module system with different types of modules for different purposes. There are load-balancer modules, which pass traffic to multiple backend servers; filter modules, which transform a website’s data (for example, encrypting it, as the SSL/TLS module does); and handler modules, which create the content of a web request (e.g. the HTTP handler loads an HTML file from disk and serves it). Beyond their purpose, the module types also differ in how they hook into the server core, which makes the choice crucial when you start to implement an Nginx module. In our case none of the types was suitable, which introduced some difficulties, discussed later.

    The ACME module

    The Nginx module should be a replacement for the traditional workflow involving the ACME Certbot, so its feature set should resemble Certbot’s. This includes:

    • Generate and store a key-pair
    • Register an account on an ACME server
    • Create an authorization for a domain
    • Solve the HTTP challenge for the domain authorization
      • At a later date, support the other challenge types
    • Retrieve the certificate from the ACME server
    • Renew a certificate
    • Configure the Nginx SSL/TLS module to use the certificate

    To provide the necessary information for all the steps in the ACME protocol, we introduced new Nginx configuration directives:

    • A directive to activate the module
    • Directive(s) for the recovery contact of the ACME account (optional)
      • An array of URIs like “mailto:” or “tel:”

    Everything else is gathered from the default Nginx directives. For example, the domain for which the certificate is issued is taken from the Nginx configuration directive “server_name”.

    An architecture diagram showing the different resources available to Nginx, and their relationships with the ACME module developed, as well as the ACME server: Let's Encrypt.

    Architecture of the ACME Module for Nginx

    As the ACME module is an extension of the Nginx server itself, it’s a part of the software and therefore uses the Nginx config file for its own configuration and stores the certificates in the Nginx config directory. The ACME module communicates with the ACME server (e.g. Let’s Encrypt, but it could be any other server speaking the ACME protocol) for gathering the certificate, then configures the SSL/TLS module to use this certificate. The SSL/TLS module then does the encryption work for the website’s communication to the end user’s browser.

    Let’s look at the workflow of setting up a secure website. In a world without ACME, anyone who wanted to setup an encrypted website had to:

    1. Create a CSR (certificate signing request) with all the information needed
    2. Send the CSR over to a CA (certificate authority)
    3. Pay the CA for getting a signed certificate
    4. Wait for their reply containing the certificate (this could take hours)
    5. Download the certificate and put it in the right place on the server
    6. Configure the server to use the certificate

    With the ACME protocol and the Let’s Encrypt CA you just have to:

    1. Install an ACME client
    2. Use the ACME client to:
      1. Enter all your information for the certificate
      2. Get a certificate
      3. Automatically configure the server

    That’s already a huge improvement, but with the ACME module for Nginx it’s even simpler. You just have to:

    1. Activate the ACME module in the server’s configuration

    Pretty much everything else is handled by the ACME module. So it does all the steps the Let’s Encrypt client does, but fully automated during the server startup. This is how easy it can and should be to encourage website admins to secure their sites.

    The minimal configuration work for the ACME module is to just add the “acme” directive to the server context in the Nginx config for which you would like to activate it. For example:

    http {
      server {
        listen 443 ssl;
        acme;
        <recommended SSL hardening config directives>
        location / {
          ...
        }
      }
    }
    Experienced challenges

    Designing and developing the ACME module was quite challenging.

    As mentioned earlier, there are different types of modules which enhance different portions of the Nginx core server. The default Nginx module types are: handler modules (which create content on their own), filter modules (which transform website data, as the SSL/TLS module does) and load-balancer modules (which route requests to backend servers). Unfortunately, the ACME module and its inherent workflow do not fit any of these types. Our module breaks these conventions: it has its own configuration directives, and requires hooks into both the core and other modules. Nginx’s module system was not designed to accommodate our module’s needs; as a result, we had very limited choice about when we could perform the ACME protocol communication.

    The ACME module serves to configure the existing SSL/TLS module, which performs the actual encryption of the website. Our module needs to control the SSL/TLS module to some degree in order to provide the ACME-retrieved encryption certificates. Unfortunately, the SSL/TLS module does a check for the existence and the validity of the certificates during the Nginx configuration parsing phase while the server is starting. This means the ACME module must complete its tasks before the configuration is parsed. Our decision, due to those limitations, was to handle all the certificate gathering at the time when the “acme” configuration directive is parsed in the configuration during server startup. After getting the certificates, the ACME module then updates the in-memory configuration of the SSL/TLS module to use those new certificates.

    Another architectural problem arose when implementing the ACME HTTP challenge-response. To authorize a domain using the ACME HTTP challenge, the server needs to respond with a particular token at a well-known URL path in its domain. Basically, it must publish this token like a web server publishes any other site. Unfortunately, at the time the ACME module is processing, Nginx has not yet started: there’s no web server. If the ACME module exits, permitting web server functions to begin (and keeping in mind the SSL/TLS module certificate limitations from before), there’s no simple mechanism to resume the ACME functions later. Architecturally, this makes sense for Nginx, but it is inconvenient for this project. Faced with this dilemma, for the purposes of this proof-of-concept, we decided to launch an independent, tiny web server to service the ACME challenge before Nginx itself properly starts.
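
    For context, the HTTP challenge has a fixed shape: the ACME server fetches /.well-known/acme-challenge/&lt;token&gt; and expects back the token joined to a base64url-encoded SHA-256 thumbprint of the account key. A rough Python sketch of such a standalone responder follows (illustrative only: the actual MWoS module is C code inside Nginx, and a real client must build the thumbprint from only the required JWK members per RFC 7638):

```python
import base64
import hashlib
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def key_authorization(token: str, account_jwk: dict) -> str:
    """Build the HTTP-01 key authorization: token + "." + base64url(thumbprint).

    Note: this sketch canonicalizes the whole dict it is given; a real
    implementation restricts the thumbprint input to the required JWK members.
    """
    canonical = json.dumps(account_jwk, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("ascii")).digest()
    thumbprint = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return f"{token}.{thumbprint}"

class ChallengeHandler(BaseHTTPRequestHandler):
    """Tiny one-purpose responder for /.well-known/acme-challenge/<token>."""
    responses: dict = {}  # token -> key authorization, filled before serving

    def do_GET(self):
        prefix = "/.well-known/acme-challenge/"
        token = self.path[len(prefix):] if self.path.startswith(prefix) else None
        if token in self.responses:
            body = self.responses[token].encode("ascii")
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()
```

    Serving for the validation window would then be a matter of `HTTPServer(("", 80), ChallengeHandler).serve_forever()` before handing the port back to Nginx.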


    As discussed, the limitations of the Nginx module system prompted some suboptimal architectural design decisions. As in many software projects, the problem is that we wanted something from the underlying framework which it wasn’t designed to do. The current architectural design of the ACME module should be considered a proof-of-concept.

    There are potential changes that would improve the architecture of the module and the communication between the Nginx core, the SSL/TLS module and the ACME module. These changes, of course, have pros and cons which merit discussion.

    One change would be deferring the retrieval of the certificate to a time after the configuration is parsed. This would require spoofing the SSL/TLS module with a temporary certificate until the newly retrieved certificate is ready. This is a corner-case issue that arises just for the first server start when there is no previously retrieved certificate already stored.

    Another change concerns the challenge-response: a web server inside a web server (whether built on a third-party library or not) is not clean. Perhaps the TLS-SNI challenge or another challenge type in the ACME protocol would be more suitable, or perhaps there is some way to start Nginx while still permitting the module to continue its work.

    Finally, the communication with the SSL/TLS module (directly rewriting its in-memory configuration) is very hacky and would benefit from a cleaner interface.

    Current status of the project & future plans

    The current status of the module can be roughly described as a proof-of-concept in a late development stage. The module creates an ephemeral key-pair, registers with the ACME server, requests the authentication challenge for the domain and starts to answer the challenge. As the proof of concept isn’t finished yet, we intend to carry on with the project.

    Many thanks

    This project was an exciting opportunity to help realize the vision of securing the whole web. Personally, I’d like to give special thanks to J.C. Jones and Richard Barnes from the Mozilla Security Engineering Team who accompanied and supported me during the project. Also special thanks to Martin Schmiedecker, my professor and mentor at SBA Research at the Vienna University of Technology, Austria. Of course I also want to thank the whole Mozilla organization for holding the Mozilla Winter of Security and enabling students around the world to participate in some great IT projects. Last but not least, many thanks to the Let’s Encrypt project for allowing me to participate and play a tiny part in such a great security project.

    hacks.mozilla.orgjs13kGames: Code golf for game devs

    How much is 13 kB? These days a couple of kilobytes seem like a drop in the ocean. Rewind back to the dawn of video game history, however, and you’ll soon realise that early pioneers had to work with crazy limitations.

    The beloved Atari 2600, for example, had a measly 128 bytes of RAM with cartridges supplying an additional 4 kilobytes. As the saying goes: constraints inspire creativity. The annual js13kGames competition channels creativity by challenging aspiring game developers to create a game using just 13,312 bytes, zipped.

    A coding competition for HTML5 game developers

    Js13kGames is an annual online JavaScript competition for HTML5 game developers that began in 2012. The fun part is the file-size limit, set to 13 kilobytes. Participants have a whole month (August 13th – September 13th) to build a game on the given theme – in 2015, the theme was Reversed.

    js13kgames banner

    Thanks to our friends and sponsors, this competition offers plenty of prizes judged by a panel of experts, plus free t-shirts and other goodies, shipped worldwide for free. But winning is only one of the benefits of participation. There’s lots to be gained from being a part of the js13kGames community. People help each other if they’re stuck on something, and share their tools, workflows, tips, and tricks. Plus, the constraint of a limited time frame helps you finish a game, and trains your skills in the process.

    Last year’s winners

    Thirteen kilobytes is not enough even for a low-resolution image. The small screenshots on the entries pages are usually bigger than the games themselves! And yet, you may be surprised by what can be achieved in such a small size. Take a look at some of last year’s winners for inspiration.

    Wondering how such features are implemented? I’ve interviewed the winners, asking them to share some of their secrets to success. They share tooling and techniques for game development with extreme constraints. And if you’re craving more details: all games are on GitHub, so you can dig through the source code yourself.



    Eoin McGrath describes some aspects of his workflow for RoboFlip:

    “The final entries can be zipped. Zip compression works much better on a single file than multiple files, so the best thing to do is inline all images, concatenate files, minify your JavaScript and remove any white space. Thanks to task runners like Grunt and Gulp this process can be largely automated. Check out the Gulp file that I used. A simple gulp build command takes care of all the heavy lifting and lets me know how much valuable space I have left.”

    Gulp build task in action
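
    The budget check itself is easy to reproduce with any scripting language. As a hypothetical sketch (not part of Eoin’s actual Gulp setup), here is the idea in a few lines of Python:

```python
import zipfile
from io import BytesIO

JS13K_LIMIT = 13 * 1024  # 13,312 bytes: the competition's zipped-size budget

def zipped_size(payload: bytes, name: str = "index.html") -> int:
    """Size in bytes of `payload` once deflated into a single-file zip."""
    buf = BytesIO()
    with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as archive:
        archive.writestr(name, payload)
    return buf.getbuffer().nbytes

def budget_report(payload: bytes) -> str:
    """Human-readable report of how much of the 13 kB budget remains."""
    size = zipped_size(payload)
    return f"{size} bytes zipped, {JS13K_LIMIT - size} bytes of budget left"
```

    Note that this stores everything as a single zip entry, which matches the advice above: one concatenated file compresses better than many small ones.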


    “First off, forget about high resolution sprite sheets with lots of animation frames. Simplicity is the key. A lot can be achieved with procedural generation or SVGs. I personally went for a retro-style pixellated look. First, all images were created at tiny resolutions (from about 6×6 pixels) in GIMP. I then encoded them in base64 and used the Canvas API to redraw them at a larger scale.”

    Scaled up sprites
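
    The inlining step he describes (images shipped as base64 text inside the bundle rather than as separate files) commonly boils down to building a data: URI. A hypothetical Python sketch of that encoding step:

```python
import base64

def to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    """Inline an image as a data: URI so it ships inside the HTML/JS bundle
    instead of as a separate file in the zip."""
    payload = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{payload}"
```

    In the game, a string like this can be assigned straight to an Image object’s src, after which the Canvas API can redraw the tiny sprite scaled up.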

    “Another handy trick I used was to run all images through a function that replaced non-transparent color values with white.”

    Damage frame for crate sprite

    “This proved a cheap and effective way to have damage frames available for all sprites.”
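
    The damage-frame trick is simple to express in any language. A hypothetical Python sketch, treating a sprite as a flat list of RGBA tuples (the original is JavaScript operating on canvas image data):

```python
def damage_frame(pixels):
    """Return a copy of a sprite where every visible pixel is turned white.

    `pixels` is a flat list of (r, g, b, a) tuples; alpha 0 means fully
    transparent. Transparent pixels are left alone, so the sprite's
    silhouette is preserved in the white "flash" frame.
    """
    return [
        (255, 255, 255, a) if a > 0 else (r, g, b, a)
        for (r, g, b, a) in pixels
    ]
```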


    “A game with no sound effects is like coffee without the caffeine. Obviously, there is no way you can fit a single .mp3 or .ogg file into your game with the given limitations. Thankfully, there is jsfxr, which is a very nice library you can use to create some 8bit styled beeps.

    “For the musically inclined, you can also have a stab at creating a soundtrack using the Sonant-X library – check out this awesome playable example.” (You may need to hit “Play Song” to initiate.)

    Road Blocks

    “One of the things I love about the js13kGames competition is the artificial limitation it places on you in terms of what you can achieve,” writes Ash Kyd, a game developer from Australia.

    “I find as an indie dev, with more open-ended projects it’s possible to get bogged down by all the possibilities and end up with nothing to show at the end, whereas setting some hard limits makes you more creative in terms of what you can accomplish.”

    Road Blocks

    “Thanks to the filesize limitation, Road Blocks is a fundamentally simple game and didn’t require a massive amount of coding work. As a result, a lot of my time was spent polishing the gameplay and smoothing rough edges during the competition, which resulted in a higher quality product at the end of the month.”

    Behind Asteroids – The Dark Side

    “Js13kGames is a great opportunity to discover and experiment with cool technologies like WebGL or Web Audio — and improve your skills. With a 13 kB limit, you can’t afford to hide behind a framework. Also, obviously, you shouldn’t use images but try to procedurally generate them. That said, it’s up to you to find your style and use the tricks that suit you. Don’t fall into doing all the tricks right away – prototype first and compress your code at the very end,” advises veteran game developer and js13kGames winner Gaëtan Renaudeau.

    Behind Asteroids

    “One of the interesting tricks I’ve found to save bytes is to avoid object-oriented style. Instead, I just write functions and use Arrays as tuple data type – I’ve used this technique in the past for a previous js1k entry.

    This is the third year I’ve participated in the js13kGames competition and the third time I’ve had fun with WebGL. My 2015 submission is a remake of Asteroids where you don’t actually control the spaceship but you instead send the asteroids. This is my take on the Reversed theme.

    On desktop, the game is implemented by typing – a letter is displayed on each asteroid and you have to type it at the right moment. On mobile, it simply turns into a touch-enabled game.

    The game is paced with different levels from beginners to more experienced players who control the spaceship which you must destroy with the asteroids. The spaceship controls are driven by an AI algorithm.”

    How the game is rendered

    “The game uses hybrid rendering techniques: it is first rendered on Canvas using basic 2D shapes and is then piped into multiple WebGL post-processing effects.

    The 2D Canvas drawing involves circles for particles and bullets, polygons for asteroids and spaceships, and lines for the procedural font as the path of each letter is hardcoded. Game shapes are drawn exclusively with one of the 3 color channels (red, blue and green) to split objects into different groups that the post-processing can filter – for instance, bullets are drawn in blue so we can apply specific glare effects for them. This is an interesting technique to optimize and store different things into a single texture as the game is in monochrome.

    The different effects involved in the WebGL post-processing are detailed in this post-mortem. The important goal of this final step is to graphically reproduce the great vector graphics of the original arcade machine.

    A background environment map where you see a player reflecting into the arcade game screen is also added in the background. It is entirely procedurally generated in a Fragment Shader.”
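
    The channel-splitting technique Gaëtan describes can be sketched in a few lines. A hypothetical Python illustration (in the real game this happens on a Canvas texture fed to WebGL, and the channel-to-group assignment here is made up for the example, apart from bullets on blue):

```python
def pack_channels(red_group, green_group, blue_group):
    """Pack three monochrome intensity lists (0-255) into one RGB pixel list,
    so a post-processing pass can address each object group via one channel."""
    return list(zip(red_group, green_group, blue_group))

def extract_channel(pixels, channel):
    """Recover one group, e.g. the blue channel where the bullets were drawn."""
    index = {"red": 0, "green": 1, "blue": 2}[channel]
    return [pixel[index] for pixel in pixels]
```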


    You can employ a myriad of approaches to shave precious bytes off your code base. These range from the fairly well known to the more obscure. There’s an article on How to minify your HTML5 game over at Tuts+ Game Development, and you will also find a nicely curated list of tools and other materials on the js13kGames Resources page.

    I hope you’ve enjoyed this brief tour of the js13kGames landscape and some of the code golf tricks past winners recommend. Tempted to give it a go this year? The 2016 competition starts in just a few days, on August 13th. Join us! It’s not too late to start coding.

    Mozilla Web DevelopmentExtravaganza – August 2016

    Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

    You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

    Shipping Celebration

    The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

    Normandy Control Interface Release

    First up was mythmon, who mentioned the new release of Normandy, the backend server for the SHIELD system that powers surveys and feature studies in Firefox. This release includes a new admin interface built using React and Redux, as well as a switch to client-side targeting that is powered by JEXL expressions.

    Google Sign-In on Socorro

    Next was peterbe, who mentioned that Socorro, the crash-report service for Firefox, has added Google-based sign-in ahead of the planned shut-down of Persona. It’s planned to land in production sometime within the next week, and involves some extra work around triggering automatic sign-out of users who have been signed in for a certain amount of time.

    DXR: The Ballad of Peter Elmers

    ErikRose was next, and shared yet another list of new features developed by DXR intern new_one:

    • Description column in file listings
    • Better handling of whitespace in paths
    • Modification dates are pulled from the VCS instead of the filesystem
    • Per-line blame links
    • Badges in the filter dropdown showing what languages support each filter

    Open-source Citizenship

    Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.


    Next was pmac in absentia, who wanted to share django-jinja-markdown, a fork of jingo-markdown. It adds support for rendering Markdown strings to HTML in templates rendered with django-jinja via a markdown filter, as well as a similarly-named template function. It also includes a block-level template tag that can be enabled by adding the library as a Jinja extension.


    Back to peterbe, who shared json-schema-reducer. The Python-based library takes in a JSON Schema and a JSON object or dict, and returns the JSON object with only fields that are specified in the schema. The main use case for the library is taking Socorro crash reports, and whitelisting data that is appropriate to be sent to Mozilla’s Telemetry platform for analysis, removing sensitive data that isn’t meant to leave the crash report system.
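
    The reduction the library performs can be illustrated with a short sketch (this is not json-schema-reducer’s actual API, just a hypothetical rendering of the behaviour described above):

```python
def reduce_to_schema(schema: dict, document: dict) -> dict:
    """Keep only the keys of `document` that appear in the schema's
    `properties`, recursing into nested object schemas. Anything the
    schema does not whitelist (e.g. sensitive crash-report fields)
    is dropped."""
    reduced = {}
    for key, subschema in schema.get("properties", {}).items():
        if key not in document:
            continue
        value = document[key]
        if subschema.get("type") == "object" and isinstance(value, dict):
            reduced[key] = reduce_to_schema(subschema, value)
        else:
            reduced[key] = value
    return reduced
```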


    The Roundtable is the home for discussions that don’t fit anywhere else.


    Last up was ErikRose, who brought up the Getting Things Done methodology and how he has recently adopted it for his personal and professional time management. The video recording contains an extended discussion of time-management strategies, but useful tools highlighted during the discussion include Things (OS X only), Org-Mode, and good old-fashioned sticky notes.

    If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

    See you next month!

    QMOFirefox 49 Beta 3 Testday, August 12th

    Hello Mozillians,

    We are happy to announce that Friday, August 12th, we are organizing Firefox 49 Beta 3 Testday. We will be focusing our testing on Windows 10 compatibility, Text to Speech in Reader Mode and Text to Speech on Desktop features. Check out the detailed instructions via this etherpad.

    No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

    Join us and help us make Firefox better! See you on Friday!

    WebmakerCompetency-Based Education & Digital Credential Design Convening

    On July 26 and 27, the Mozilla Foundation, with facilitation support from the IDEO Design for Learning team and strategic convening support from Penn Hill Group, held a design convening in Aurora, Colorado for states interested in exploring competency-based education and digital credentials. The design convening was an opportunity for states to:

    • Re-imagine systems that make any time, any place learning count towards college and career readiness;
    • Think through how to leverage new flexibilities in policy for innovative practices (i.e. Every Student Succeeds Act – ESSA);
    • Brainstorm strategic partners within states and communities; and
    • Exchange ideas for policies, practices, and systems including those which utilize digital badges/digital credentials.

    “Who’s Who” CBE Digital Credentials Design Convening, 7.26-27





    The following invited states attended: Alabama, Arkansas, Kansas, Maine, Maryland, Michigan, Oregon, Rhode Island, South Carolina, Vermont. Each state team included a diverse group of stakeholders from business, state and district education agencies, higher education, in and out of school educators, and others as a way to design innovative systems in their states.






    The two-day design convening kicked off with a presentation by An-Me Chung about Mozilla’s five key issues and current work in digital badges, which set the context for the convening. As a graduate of the 2015 Presidential Leadership Scholars program, a partnership of four presidential centers designed for leaders who share a commitment to helping solve society’s greatest challenges, this convening was also An-Me’s capstone project for the program.

    Following that, an expert panel of innovators set the landscape on competency-based education and shared lessons learned about competency-based education and the use of digital credentials as a way to connect and capture skills and competencies towards college and career readiness.


    “Digital badges seemed like the natural way [for the Aurora Colorado School District] to connect business, industry and students to implement the district’s strategic plan – that every student would have a plan for their future, the skills to implement that plan, and the credential to open those doors. [It is important to have] both internal and external stakeholders understand what a post-secondary workforce ready environment is, in order to not only deliver badges but build out the overarching system.”

    – Superintendent Rico Munn, Aurora Colorado School District, Aurora Public Schools Badge Initiative

    “Michigan explored digital badging as a way to honor student learning outside of the classroom and allow students to take that information with them to future employers. The technical aspect of the work is the easy part, it has been more difficult getting people to understand how to create the badges and lead people through it… [however] as more and more partners come on board in this work, the currency and value for students will increase as well.”

    – Michelle Ribant, Director for 21st Century Learning at the Office of Educational Technology and Data Coordination, Michigan Department of Education

    “We have all seen the headlines about increasing graduation rates and the reality of remediation rates and workplaces unhappy with skills graduates bring with them. Competency-based education offers a way to break standards into transferable skills so that teachers can see what gaps are, fill those in or out of the classroom, and move forward… mastery is the focus, rather than method of education delivery.”

    – Sarah Jenkins, Senior Manager of Research and Advocacy, KnowledgeWorks

    “There is a huge opportunity to engage in big systems thinking around credentials to say that competency matters in both the classroom and beyond. At a smaller level, micro-credentials from in and out of school are a way for teachers to feel like part of a bigger team while working with students and be able to see them in a more holistic way.”

    – Sheryl Grant, Director of Alternative Credentials and Badge Research, Humanities, Arts, Science and Technology Alliance and Collaboratory (HASTAC)

    With a final push of encouragement from Richard Culatta, Design Resident at IDEO, Chief Innovation Officer of the State of Rhode Island, and former U.S. Department of Education Director of Educational Technology, attendees delved into the day’s design challenges with a “bias to action” mindset.


    IDEO introduced state teams to design thinking by first having teams roll their sleeves up to tackle the marshmallow challenge.


    From there, state teams set individual and team aspirations for the convening, including the following “how might we” share-outs:

    “How might we create incentives for change?”

    “How might we involve students at every step of the process?”

    “How might we leverage existing tools, digital and non-digital in order to learn?”

    “How might we identify and capture competencies / currency?”

    “How might we share digital credentialing in new ways?”


    States heard about patterns across state initiatives, understood common challenges and how states overcame them. They then brainstormed new ideas and designed short-term experiments, with specific actionable ideas to help them achieve long-term goals.  Resource experts on hand included representatives from National Conference of State Legislatures, Mozilla, Penn Hill Group, KnowledgeWorks, HASTAC, and the Afterschool Alliance.


    On Day 2 of the design convening, states continued to make connections with strategic partners from their own states and beyond in a peer breakfast. States unpacked the process of designing experiments and made a plan for taking action by: articulating learning goals, designing an experiment, planning their next steps, and “building” a quick prototype.


    Next Steps

    As opportunities for education innovation in and out of school continue to grow, the states at this convening are looking toward the future. They are focused on identifying solutions around how to build systems where learning is more student-centered, how to recognize learning any time and any place, and how to ensure multiple stakeholders, including students, are at the forefront of the design process. This was a first step in that direction and we look forward to following and sharing states’ progress.

    For more information, check out these additional resources:


    The IDEO Design for Learning Team enjoying blizzards!

    SUMO BlogWhat’s Up with SUMO – 4th August

    Hello, SUMO Nation!

    August, the most venerable and majestic of months, is here. How are you doing? This is the release week, and we also have a few pieces of news to share, for your reading pleasure :-)

    Welcome, new contributors!

    • …it’s the summer holidays, so we’re actually VERY happy that a lot of people decided to spend a bit of time away from the keyboards… The warmer the welcome for…
    • farmanp!

    If you just joined us, don’t hesitate – come over and say “hi” in the forums!

    Contributors of the week

    Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

    Most recent SUMO Community meeting

    The next SUMO Community meeting

    • …is happening on the 10th of August!
    • If you want to add a discussion topic to the upcoming meeting agenda:
      • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
      • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
      • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



    Support Forum

    • On Friday at 12:00 PST we will have a live video session of forum question troubleshooting in the SUMO room and IRC channel – we will be there for 2 hours and you are welcome to join via IRC or Vidyo – as usual – see you there!
    • Please be on the lookout for e10s questions! There are users who received Electrolysis with the latest update, and their profile will have this preference set to “true”: browser.tabs.remote.force-enable
    • A request to all the new contributors joining this week: please come say “hello” in the Introduce yourself thread and on IRC at any time. Just ping anyone with an ops account in IRC
    • Expect to see some new moderators later this week in the forums – say “hello!” – more on this coming your way next week.
    • FINAL IMPORTANT REMINDER! Please report any fake firefox-update malware in this thread, the more evidence is found, the closer we get to solving this issue. What can you do?
      • Provide URLs and copies of the fake file to antivirus partners to add it to their blocks.
      • Report the web forgeries through the Google URL forgery reporting tool
      • Finding the ad server source is the key to success here (if there is one core source of this issue).

    Knowledge Base & L10n


    • Firefox for iOS
      • No big news from the fruity front. Stay tuned for updates later on!

    This being the release week, you may want to read about the new improvements in more detail. Good times ahead!

    Keep rocking the helpful web, SUMO Nation… and make the most of the final weeks of summer! Party on!

    Air MozillaMWoS 2015: Let's Encrypt Automation Tooling

    MWoS 2015: Let's Encrypt Automation Tooling Mozilla Winter of Security presentation by Klaus Krapfenbauer, a graduate student at the security institute SBA Research of the Vienna University of Technology, who worked...

    The Mozilla BlogMozilla Awards $585,000 to Nine Open Source Projects in Q2 2016

    “People use Tails to chat off-the-record, browse the web anonymously, and share sensitive documents. Many human rights defenders depend on Tails to do their daily work, if not simply to stay alive.” – Tails developer team

    “We think that the Web will only be truly open when we own the means of locating information in the billions of documents at our disposal. Creating PeARS is a way to put the ownership of the Web back into people’s hands.” – Aurelie Herbelot, PeARS

    “Item 4 of Mozilla’s Manifesto states, ‘Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional.’ This is the primary philosophy behind Caddy, the first and only web server to use HTTPS by default.” – Matt Holt, Caddy

    Last quarter’s Mozilla Open Source Support (MOSS)-awarded projects are diverse, but they have one thing in common: they believe in innovation for public benefit. Projects like Tails, PeARS and Caddy are paving the way for the next wave of openness, which is why Mozilla has allocated over $3.5 million to the MOSS initiative in support of these and other open source projects. We’re excited to share the program’s progress this quarter, which includes $585,000 in awards, nine new projects supported and two new tracks launched.

    We're Open

    One of the new tracks is “Mission Partners”, which supports any open source project that meaningfully advances the Mozilla mission. We had a large number of applications in the initial round, of which we have already funded eight (for a total of $385,000) and are still considering several more. Applications for “Mission Partners” remain open on an ongoing basis.

    The second is our “Secure Open Source” track, which works on improving the security of open source software by providing manual source code audits for important and widely-used pieces of free software. By the end of the second quarter we completed three audits – for the PCRE regular expression library, the libjpeg-turbo image decoding library, and the phpMyAdmin database administration software – with more in the pipeline. We hope that Secure Open Source will grow to be supported by multiple parties with an interest in improving the state of the Internet infrastructure – from companies to non-profits to governments. You can submit a suggestion for a project which might benefit from SOS support.

    Our initial track, “Foundational Technology”, which supports projects that Mozilla already uses, integrates or deploys in our infrastructure, was launched late last year and remained open during this quarter. We made one additional award – to PyPy, the Python JIT compiler, for $200,000. Applications for a “Foundational Technology” award remain open.

    Mozilla is proud to support the open source community of which we are a part and from which so many benefit. We look forward to enabling even more OS maintenance, improvement and innovation through MOSS, so please apply! The committee meets next in early September, so get your applications in by the end of August.

    Air MozillaWeb QA Team Meeting, 04 Aug 2016

    Web QA Team Meeting They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

    Air MozillaReps weekly, 04 Aug 2016

    Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

    Mozilla Add-ons BlogAdd-ons Update – Week of 2016/08/03

    I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

    The Review Queues

    In the past 3 weeks, 1228 listed add-on submissions were reviewed:

    • 1106 (90%) were reviewed in fewer than 5 days.
    • 80 (7%) were reviewed between 5 and 10 days.
    • 42 (3%) were reviewed after more than 10 days.

    There are 98 listed add-ons awaiting review.

    You can read about the improvements we’ve made in the review queues here.

    If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.


    The compatibility blog post for Firefox 49 is up, and the bulk validation has been run. The blog post for Firefox 50 should be up in a couple of weeks.

    Going back to the recently released Firefox 48, a couple of changes are worth a reminder: (1) release and beta builds no longer have a preference to deactivate signing enforcement, and (2) multiprocess Firefox is now enabled for users without add-ons, with add-ons being phased in gradually. Make sure you’ve tested your add-on, and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.

    As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

    Extension Signing

    The wiki page on Extension Signing has information about obtaining the unbranded builds to test on release and beta. We will try to make them easier to get to, but for the most part the Firefox 48 release marks the end of deploying this change, after years of work.


    We would like to thank these people for their recent contributions to the add-ons world: Rob Wu, NilkasG, and h4ever.

    You can read more about their contributions in our recognition page.

    WebmakerJuly Community Call: Serving Youth Around the World

    July’s Mozilla Learning Community Call featured guest speakers Gilad Cohen, founder of the non-profit JAYU that shares human rights stories through the arts, and Tina Verbo, volunteer Mozilla Representative for Mozilla Philippines. Gilad and Tina discussed their inspiring local projects and how they strive to make an impact on those they serve.

    Here are a few key takeaways and reflection questions from the call:

    • Gilad stressed that his experience and passion on a personal level is what sparked the creation of JAYU. How is your work being fueled by your own personal experiences, passions and motivations?
    • Working hand-in-hand with the community you are serving will allow for an accurate, respectful, and real representation of their situation and needs. What are important experiences and lessons you’ve learned that help others improve web literacy skills in a safe, welcoming environment?
    • Mozilla Philippines is learning that it takes a lot of hard work and willing contributors to translate curriculum, resources, and tools into local languages to help a community learn web literacy skills. What can you do to help provide access to open source tools and resources to your local community?

    Watch the community call in full below:

    Mozilla Add-ons BlogDiscovery Pane Editorial Policy

    With the launch of Firefox 48, we debuted a new page in the Get Add-ons section of about:addons. For simplicity’s sake, we’re calling this the Discovery Pane.

    We recently covered the reasons why we redesigned this page. Now I’d like to share more insight into how this page’s content will refresh moving forward.

    The Discovery Pane is designed to appeal to users who may have very little—or zero—understanding of what add-ons are, or why they should customize their browser. That’s why this page only features add-ons that meet the highest standards of user experience, like excellent UI, intuitive work flows, and polished functionality.

    We also want to put forth only content that addresses what we know to be the most common needs of a broad set of Firefox users. The page as it exists today includes a few visual themes and four add-ons: an ad blocker, a video downloader, an emoji enhancement tool, and a screenshot extension—all very widely appealing use cases.

    The list of content is intentionally short. As with introducing any unfamiliar concept, we wanted to avoid overwhelming people with loads of options, but rather focus their attention on a few enticing paths to choose from.

    There are many high-caliber add-ons that would be a great fit in the Discovery Pane. So we’ll update this page each month as we identify more content and appropriate use cases. This does not mean we’ll update all of the content wholesale each month—we may leave certain add-ons in place if they offer a distinctly unique user benefit. If, for example, we have four screenshot add-ons that are all equally awesome, we’ll endeavor to rotate them evenly.

    An updated wiki page goes into greater detail about the selection criteria of Discovery Pane add-ons. But here are three critical criteria you should know about for all Discovery Pane add-ons; they must be…

    1. e10s compatible. As Firefox moves deeper into the new world of multi-process, we should only highlight content that is compatible with it (without relying on shims). Here’s how you can check your add-on for compatibility.
    2. Restartless. We need a uniform installation process for all add-ons presented on the page.
    3. Already part of our broader Featured Extensions collection, which is vetted by our community-driven Feature Board, as well as Mozilla staff.

    If you’d like to nominate your add-on (or someone else’s) for Discovery Pane consideration, please use the same channel we always have for featured content—email us at and we’ll add your nomination to the editorial review queue. There’s no need to specifically mention “Discovery Pane” while nominating, since all nominations will be viewed through that prism, but feel free if you like.

    Any questions? Concerns? Better ideas? Feedback? You know where to leave comments…

    Air MozillaThe Joy of Coding - Episode 66

    The Joy of Coding - Episode 66 mconley livehacks on real Firefox bugs while thinking aloud.

    hacks.mozilla.orgAnimating like you just don’t care with Element.animate

    In Firefox 48 we’re shipping the Element.animate() API — a new way to programmatically animate DOM elements using JavaScript. Let’s pause for a second — “big deal”, you might say, or “what’s all the fuss about?” After all, there are already plenty of animation libraries to choose from. In this post I want to explain what makes Element.animate() special.

    What a performance

    Element.animate() is the first part of the Web Animations API that we’re shipping and, while there are plenty of nice features in the API as a whole, such as better synchronization of animations, combining and morphing animations, extending CSS animations, etc., the biggest benefit of Element.animate() is performance. In some cases, Element.animate() lets you create jank-free animations that are simply impossible to achieve with JavaScript alone.

    Don’t believe me? Have a look at the following demo, which compares best-in-class JavaScript animation on the left, with Element.animate() on the right, whilst periodically running some time-consuming JavaScript to simulate the performance when the browser is busy.

    Performance of regular JavaScript animation vs Element.animate()

    To see for yourself, try loading the demo in the latest release of Firefox or Chrome. Then, you can check out the full collection of demos we’ve been building!

    When it comes to animation performance, there is a lot of conflicting information being passed around. For example, you might have heard amazing (and untrue) claims like, “CSS animations run on the GPU”, and nodded along thinking, “Hmm, not sure what that means but it sounds fast.” So, to understand what makes Element.animate() fast and how to make the most of it, let’s look into what makes animations slow to begin with.

    Animations are like onions (Or cakes. Or parfait.)

    In order for an animation to appear smooth, we want all the updates needed for each frame of an animation to happen within about 16 milliseconds. That’s because browsers try to update the screen at the same rate as the refresh rate of the display they’re drawing to, which is usually 60Hz.
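As a quick sanity check on that number (a sketch, not part of the original article), the per-frame budget falls straight out of the refresh rate:

```javascript
// At a 60Hz display refresh rate, each frame gets 1000 ms / 60 ≈ 16.7 ms.
const refreshRateHz = 60;
const frameBudgetMs = 1000 / refreshRateHz;

console.log(frameBudgetMs.toFixed(1)); // → "16.7"
```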

    On each frame, there are typically two things a browser does that take time: calculating the layout of elements on the page, and drawing those elements. By now, hopefully you’ve heard the advice, “Don’t animate properties that update layout.” I am hopeful here — current usage metrics suggest that web developers are wisely choosing to animate properties like transform and opacity that don’t affect layout whenever they can. (color is another example of a property that doesn’t require recalculating layout, but we’ll see in a moment why opacity is better still.)
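To make that advice concrete, here is an illustrative (non-normative) sketch of how animated properties differ in cost. The property sets below are assumptions drawn from the paragraph above, not a browser API, and real engines have longer lists:

```javascript
// Illustrative only: which pipeline stages an animated property touches.
// These sets are assumptions for the sketch, not an exhaustive browser list.
const COMPOSITOR_FRIENDLY = new Set(['transform', 'opacity']);
const TRIGGERS_LAYOUT = new Set(['width', 'height', 'top', 'left', 'margin']);

function animationCost(property) {
  if (COMPOSITOR_FRIENDLY.has(property)) return 'composite only';
  if (TRIGGERS_LAYOUT.has(property)) return 'layout + paint + composite';
  return 'paint + composite'; // e.g. color: repaints, but no reflow
}

console.log(animationCost('transform')); // → "composite only"
console.log(animationCost('left'));      // → "layout + paint + composite"
console.log(animationCost('color'));     // → "paint + composite"
```

This is why transform and opacity win: they skip both the layout and the paint stages on each frame.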

    If we can avoid performing layout calculations on each animation frame, that just leaves drawing the elements. It turns out that programming is not the only job where laziness is a virtue — indeed animators worked out a long time ago that they could avoid drawing a bunch of very similar frames by creating partially transparent cels, moving the cels around on top of the background, and snapshotting the result along the way.

    Example of cels used to create animation frames

    Example of creating animation frames using cels.
    (Of course, not everyone uses fancy cels; some people just cut out Christmas cards.)

    A few years ago browsers caught on to this “pull cel” trick. Nowadays, if a browser sees that an element is moving around without affecting layout, it will draw two separate layers: the background and the moving element. On each animation frame, it then just needs to re-position these layers and snapshot the result without having to redraw anything. That snapshotting (more technically referred to as compositing) turns out to be something that GPUs are very good at. What’s more, when they composite, GPUs can apply 3D transforms and opacity fades all without requiring the browser to redraw anything. As a result, if you’re animating the transform or opacity of an element, the browser can leave most of the work to the GPU and stands a much better chance of making its 16ms deadline.

    Hint: If you’re familiar with tools like Firefox’s Paint Flashing Tool or Chrome’s Paint Rectangles you’ll notice when layers are being used because you’ll see that even though the element is animating nothing is being painted! To see the actual layers, you can set layers.draw-borders to true in Firefox’s about:config page, or choose “Show layer borders” in Chrome’s rendering tab.

    You get a layer, and you get a layer, everyone gets a layer!

    The message is clear — layers are great and you are expecting that surely the browser is going to take full advantage of this amazing invention and arrange your page’s contents like a mille crêpe cake. Unfortunately, layers aren’t free. For a start, they take up a lot more memory since the browser has to remember (and draw) all the parts of the page that would otherwise be overlapped by other elements. Furthermore, if there are too many layers, the browser will spend more time drawing, arranging, and snapshotting them all, and eventually your animation will actually get slower! As a result, a browser only creates layers when it’s pretty sure they’re needed — e.g. when an element’s transform or opacity property is being animated.

    Sometimes, however, browsers don’t know a layer is needed until it’s too late. For example, if you animate an element’s transform property, up until the moment when you apply the animation, the browser has no premonition that it should create a layer. When you suddenly apply the animation, the browser has a mild panic as it now needs to turn one layer into two, redrawing them both. This takes time, which ultimately interrupts the start of the animation. The polite thing to do (and the best way to ensure your animations start smoothly and on time) is to give the browser some advance notice by setting the will-change property on the element you plan to animate.

    For example, suppose you have a button that toggles a drop-down menu when clicked, as shown below.

    Example of using will-change to prepare a drop-down menu for animation

    Live example

    We could hint to the browser that it should prepare a layer for the menu as follows:

    nav {
      transition: transform 0.1s;
      transform-origin: 0% 0%;
      will-change: transform;
    }

    nav[aria-hidden=true] {
      transform: scaleY(0);
    }
    But you shouldn’t get too carried away. Like the boy who cried wolf, if you decide to will-change all the things, after a while the browser will start to ignore you. You’re better off only applying will-change to bigger elements that take longer to redraw, and only as needed. The Web Console is your friend here, telling you when you’ve blown your will-change budget, as shown below.

    Screenshot of the DevTools console showing a will-change over-budget warning.

    Animating like you just don’t care

    Now that you know all about layers, we can finally get to the part where Element.animate() shines. Putting the pieces together:

    • By animating the right properties, we can avoid redoing layout on each frame.
    • If we animate the opacity or transform properties, through the magic of layers we can often avoid redrawing them too.
    • We can use will-change to let the browser know to get the layers ready in advance.

    But there’s a problem. It doesn’t matter how fast we prepare each animation frame if the part of the browser that’s in control is busy tending to other jobs like responding to events or running complicated scripts. We could finish up our animation frame in 5 milliseconds but it won’t matter if the browser then spends 50 milliseconds doing garbage collection. Instead of seeing silky smooth performance our animations will stutter along, destroying the illusion of motion and causing users’ blood pressure to rise.

    However, if we have an animation that we know doesn’t change layout and perhaps doesn’t even need redrawing, it should be possible to let someone else take care of adjusting those layers on each frame. As it turns out, browsers already have a process designed precisely for that job — a separate thread or process known as the compositor that specializes in arranging and combining layers. All we need is a way to tell the compositor the whole story of the animation and let it get to work, leaving the main thread — that is, the part of the browser that’s doing everything else to run your app — to forget about animations and get on with life.

    This can be achieved by using none other than the long-awaited Element.animate() API! Something like the following code is all you need to create a smooth animation that can run on the compositor:

    elem.animate({ transform: [ 'rotate(0deg)', 'rotate(360deg)' ] },
                 { duration: 1000, iterations: Infinity });

    Screenshot of the animation produced: a rotating foxkeh
    Live example

    By being upfront about what you’re trying to do, the main thread will thank you by dealing with all your other scripts and event handlers in short order.
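One nicety worth noting: Element.animate() returns an Animation object, so you can keep a reference and control the animation later. A minimal sketch (startSpinner is a hypothetical helper name, not part of the API):

```javascript
// Element.animate() returns an Animation object that you can hold on to
// and control later with play(), pause(), cancel(), and friends.
function startSpinner(elem) {
  return elem.animate(
    { transform: ['rotate(0deg)', 'rotate(360deg)'] },
    { duration: 1000, iterations: Infinity }
  );
}

// Later, e.g. in a click handler:
//   const spin = startSpinner(document.querySelector('.loader'));
//   spin.pause();  // freeze in place
//   spin.play();   // resume
```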

    Of course, you can get the same effect by using CSS Animations and CSS Transitions — in fact, in browsers that support Web Animations, the same engine is also used to drive CSS Animations and Transitions — but for some applications, script is a better fit.

    Am I doing it right?

    You’ve probably noticed that there are a few conditions you need to satisfy to achieve jank-free animations: you need to animate transform or opacity (at least for now), you need a layer, and you need to declare your animation up front. So how do you know if you’re doing it right?

    The animation inspector in Firefox’s DevTools will give you a handy little lightning bolt indicator for animations running on the compositor. Furthermore, as of Firefox 49, the animation inspector can often tell you why your animation didn’t make the cut.

    Screenshot showing DevTools Animation inspector reporting why the transform property could not be animated on the compositor.

    See the relevant MDN article for more details about how this tool works.

    (Note that the result is not always correct — there’s a known bug where animations with a delay sometimes tell you that they’re not running on the compositor when, in fact, they are. If you suspect DevTools is lying to you, you can always include some long-running JavaScript in the page like in the first example in this post. If the animation continues on its merry way you know you’re doing it right — and, as a bonus, this technique will work in any browser.)
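If you want to try that test yourself, a minimal sketch of the “long-running JavaScript” trick might look like this (the helper name and timings are illustrative):

```javascript
// Busy-wait to deliberately hog the main thread for `ms` milliseconds.
// Compositor-driven animations should keep running smoothly during the
// stall; anything driven by main-thread JavaScript will visibly stutter.
function blockMainThread(ms) {
  const start = Date.now();
  while (Date.now() - start < ms) {
    // spin
  }
}

// In a page: stall the main thread for 100 ms once a second.
// setInterval(() => blockMainThread(100), 1000);
```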

    Even if your animation doesn’t qualify for running on the compositor, there are still performance advantages to using Element.animate(). For instance, you can avoid reparsing CSS properties on each frame, and allow the browser to apply other little tricks like ignoring animations that are currently offscreen, thereby prolonging battery life. Furthermore, you’ll be on board for whatever other performance tricks browsers concoct in the future (and there are many more of those coming)!


    With the release of Firefox 48, Element.animate() is implemented in release versions of both Firefox and Chrome. Furthermore, there’s a polyfill (you’ll want the web-animations.min.js version) that will fall back to using requestAnimationFrame for browsers that don’t yet support Element.animate(). In fact, if you’re using a framework like Polymer, you might already be using it!
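A hedged sketch of how you might load the polyfill conditionally; the script path is illustrative, and you would point it at wherever you host web-animations.min.js:

```javascript
// Returns true when the environment lacks a native Element.animate().
function needsWebAnimationsPolyfill(win) {
  return !(win.Element &&
           win.Element.prototype &&
           typeof win.Element.prototype.animate === 'function');
}

// In a page:
// if (needsWebAnimationsPolyfill(window)) {
//   const script = document.createElement('script');
//   script.src = '/js/web-animations.min.js'; // illustrative path
//   document.head.appendChild(script);
// }
```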

    There’s a lot more to look forward to from the Web Animations API, but we hope you enjoy this first installment (demos and all)!

    Air MozillaContributing to Mozilla as an Attorney

    Contributing to Mozilla as an Attorney Cameron Swords has not yet given us a description of this presentation, nor any keyword tags to make searching for this event easier.

    Air MozillaMaking MIR Fly

    Making MIR Fly Mid-level Intermediate Representation (MIR) was introduced into the Rust compiler in early August 2016. One of the many benefits of MIR is it makes writing...

    Air MozillaProcedural Macros in Rust

    Procedural Macros in Rust Cameron Swords has not yet given us a description of this presentation, nor any keyword tags to make searching for this event easier.

    Air MozillaExploring Internet Issues

    Exploring Internet Issues Dominick Namis has not yet given us a description of this presentation, nor any keyword tags to make searching for this event easier.

    Air MozillaDown the Rabbit Hole

    Down the Rabbit Hole Nate Hakkakzadeh has not yet given us a description of this presentation, nor any keyword tags to make searching for this event easier.

    Air MozillaBreaking Builds

    Breaking Builds Hassan Ali talks about working on TaskCluster and successfully landing Push Inspector, a dashboard showing the state of tasks in a build.