Support.Mozilla.Org: Mozillian profile: Jayesh

Hello, SUMO Nation!

Do you still remember Dinesh? Turns out he’s not the only Mozillian out there who’s happy to share his story with us. Today, I have the pleasure of introducing Jayesh, one of the many SUMOzillians among you, with a really inspiring story of his engagement in the community to share. Read on!


I’m Jayesh from India. I’ve been contributing to Mozilla as a Firefox Student Ambassador since 2014. I’m a self-made entrepreneur, tech lover, and passionate traveller. I am also an undergraduate with a Computer Science background.

During my university days I used to waste a lot of time playing games, as I did not have a platform to showcase my technical skills. I thought working was only useful when you had a “real” job. I had only heard about open source, but in my third year I learned about open source contribution – through my friend Dinesh, who told me about the FSA program – and this inspired me a lot. I thought it was the perfect platform for me to kickstart my career as a Mozillian and build a strong, bright future.

Being a techie, I could identify with Mozilla and its efforts to keep the web open. I registered for the FSA program with the guidance of my friend, and found a lot of students and open source enthusiasts from India contributing to Mozilla in many ways. I was very happy to join the Mozilla India Community.

Around 90% of Computer Science students at the university learn the technology but don’t actually try to implement working prototypes using their knowledge, as they don’t know about the possibility of contributing to open source – they just believe that showcasing their skills counts only during professional internships and work training. Thus, I thought of sharing my knowledge about open source contribution through the Mozilla community.

I gained experience conducting events for Mozilla in the Tirupati Community, where my friend was seeking help in conducting events, as he was the only Firefox Student Ambassador in that region. Later, to learn more, we travelled to many places and attended various events in Bengaluru and Hyderabad, where we met a very well developed Mozilla community in southern India. We met many Mozilla Representatives and sought help from them. Vineel and Galaxy helped us a lot, guiding us through our first steps.

Later, I found that I was the only Mozillian in my region – Kumbakonam, where I do my undergrad studies – within a 200-mile radius. This motivated me to personally build a new university club – SRCMozillians. I inaugurated the club at my university with the help of the management.

More than 450 students in the university registered for the FSA program in the span of two days, and we have organized more than ten events, including FFOS App days, Moz-Quiz, Web-Development-Learning, Connected Devices-Learning, Moz-Stall, a sponsored fun event, community meet-ups – and more! All this in half a year. For my efforts, I was recognized as FSA of the month, August 2015 & FSA Senior.

The biggest problem we faced while building our club was the study periods, when we’d have lots of assignments, cycle tests, lab internals, and more – with everyone really busy and working hard, it took time to bridge the gap and realise that grades alone are not the key factor in building a bright future.

My contributions to the functional areas in Mozilla varied from time to time. I started with Webmaker by creating educational makes about X-Ray Goggles, App-Maker and Thimble. I’m proud of being recognized as a Webmaker Mentor for that. Later, I focused on Army of Awesome (AoA) by tweeting and helping Firefox users. I even developed two Firefox OS applications (Asteroids – a game and a community application for SRCMozillians), which were available in the Marketplace. After that, I turned my attention to Quality Assurance, as Software Testing was one of the subjects in my curriculum. I started testing tasks in One And Done – this helped me understand the key concepts of software testing easily – especially checking the test conditions and triaging bugs. My name was even mentioned on the Mozilla blog about the Firefox 42.0 Beta 3 Test day for successfully testing and passing all the test cases.

I moved on to start localization for Telugu, my native language. I started translating KB articles – with time, my efforts were recognized, and I became a Reviewer for Telugu. This area of contribution proved to be very interesting, and I even started translating projects in Pontoon.

As you can see from my Mozillian story above, it’s easy to get started with something you like. I guarantee that every student with a passion to contribute and build a bright career can discover that the Mozilla community is the right platform to start with. The experience you gain here will help you a lot in building your future. I personally think that the best aspect of it is the global connection with many great people who are always happy to support and guide you.

– Jayesh, a proud Mozillian

Thank you, Jayesh! A great example of turning one’s passion into a great initiative that helps many people around you understand and use technology better. We’re looking forward to more open source awesomeness from you!

SUMO Blog readers – are you interested in posting on our blog about your open source projects and adventures? Let us know!

Dustin J. Mitchell: Loading TaskCluster Docker Images

When TaskCluster builds a push to a Gecko repository, it does so in a docker image defined in that very push. This is pretty cool for developers concerned with the build or test environment: instead of working with releng to deploy a change, now you can experiment with that change in try, get review, and land it like any other change. However, if you want to actually download that docker image, docker pull doesn’t work anymore.

The image reference in the task description looks like this now:

"image": {
    "path": "public/image.tar",
    "taskId": "UDZUwkJWQZidyoEgVfFUKQ",
    "type": "task-image"

This is referring to an artifact of the task that built the docker image. If you want to pull that exact image, there’s now an easier way:

./mach taskcluster-load-image --task-id UDZUwkJWQZidyoEgVfFUKQ

will download that docker image:

dustin@dustin-moz-devel ~/p/m-c (central) $ ./mach taskcluster-load-image --task-id UDZUwkJWQZidyoEgVfFUKQ
######################################################################## 100.0%
Determining image name
Image name: mozilla-central:f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3
Loading image into docker
Deleting temporary file
Loaded image is named mozilla-central:f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3
dustin@dustin-moz-devel ~/p/m-c (central) $ docker images
REPOSITORY          TAG                                                                IMAGE ID            CREATED             VIRTUAL SIZE
mozilla-central     f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3   51e524398d5c        4 weeks ago         1.617 GB

But if you just want to pull the image corresponding to the codebase you have checked out, things are even easier: give the image name (the directory under testing/docker), and the tool will look up the latest build of that image in the TaskCluster index:

dustin@dustin-moz-devel ~/p/m-c (central) $ ./mach taskcluster-load-image desktop-build
Task ID: TjWNTysHRCSfluQjhp2g9Q
######################################################################## 100.0%
Determining image name
Image name: mozilla-central:f5e1b476d6a861e35fa6a1536dde2a64daa2cc77a4b71ad685a92096a406b073
Loading image into docker
Deleting temporary file
Loaded image is named mozilla-central:f5e1b476d6a861e35fa6a1536dde2a64daa2cc77a4b71ad685a92096a406b073
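Once loaded, the image behaves like any other local docker image. For instance, to poke around inside it interactively (using the tag printed above):

```shell
# Start an interactive shell in the freshly loaded build image,
# removing the container again on exit:
docker run -ti --rm \
    mozilla-central:f5e1b476d6a861e35fa6a1536dde2a64daa2cc77a4b71ad685a92096a406b073 \
    bash
```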

Tim Taubert: A Fast, Constant-time AEAD for TLS

The only TLS v1.2+ cipher suites with a dedicated AEAD scheme are the ones using AES-GCM, a block cipher mode that turns AES into an authenticated cipher. From a cryptographic point of view these are preferable to non-AEAD-based cipher suites (e.g. the ones with AES-CBC) because getting authenticated encryption right is hard without using dedicated ciphers.

For CPUs without the AES-NI instruction set, however, constant-time AES-GCM is slow and also hard to write and maintain. The majority of mobile phones, and cheaper devices like tablets and notebooks on the market, thus cannot support efficient and safe AES-GCM cipher suite implementations.

Even if we ignored all those aforementioned pitfalls we still wouldn’t want to rely on AES-GCM cipher suites as the only good ones available. We need more diversity. Having widespread support for cipher suites using a second AEAD is necessary to defend against weaknesses in AES or AES-GCM that may be discovered in the future.

ChaCha20 and Poly1305, a stream cipher and a message authentication code, were designed with fast and constant-time implementations in mind. A combination of those two algorithms yields a safe and efficient AEAD construction, called ChaCha20/Poly1305, which allows TLS with a negligible performance impact even on low-end devices.
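To make the construction a little more concrete, here is a tiny Python sketch of the Poly1305 half of the pair, checked against the RFC 7539 test vector. It is for illustration only: plain Python big-integer arithmetic is not constant-time, which is exactly the property a real implementation must provide.

```python
# Minimal Poly1305 one-time authenticator (per RFC 7539), illustration only --
# NOT constant-time, so never use it where timing side channels matter.

P = (1 << 130) - 5  # the prime 2^130 - 5 the accumulator is reduced modulo

def poly1305_tag(key: bytes, msg: bytes) -> bytes:
    """Compute a 16-byte Poly1305 tag from a 32-byte one-time key."""
    assert len(key) == 32
    # r is "clamped" per the spec; s is only added at the very end.
    r = int.from_bytes(key[:16], "little") & 0x0FFFFFFC0FFFFFFC0FFFFFFC0FFFFFFF
    s = int.from_bytes(key[16:], "little")
    acc = 0
    for i in range(0, len(msg), 16):
        block = msg[i:i + 16]
        # Append a 1 bit just above the (possibly short) block, then accumulate.
        n = int.from_bytes(block, "little") + (1 << (8 * len(block)))
        acc = (acc + n) * r % P
    return ((acc + s) % (1 << 128)).to_bytes(16, "little")

# Test vector from RFC 7539, section 2.5.2:
key = bytes.fromhex(
    "85d6be7857556d337f4452fe42d506a80103808afb0db2fd4abff6af4149f51b")
msg = b"Cryptographic Forum Research Group"
print(poly1305_tag(key, msg).hex())  # a8061dc1305136c6c22b8baf0c0127a9
```

In the AEAD construction, ChaCha20 derives the one-time Poly1305 key, encrypts the plaintext, and the tag is computed over the additional data and ciphertext.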

Firefox 47 will ship with two new ECDHE/ChaCha20 cipher suites as specified in the latest draft. We are looking forward to seeing their adoption increase and will, as a next step, work on prioritizing them over AES-GCM suites on devices that do not support AES-NI.

QMO: Firefox 47 Beta 3 Testday, May 6th

Hey everyone,

I am happy to announce that this Friday, May 6th, we are organizing a new event – Firefox 47 Beta 3 Testday. The main focus will be on the Synced Tabs Sidebar and YouTube Embedded Rewrite features. The detailed instructions are available via this etherpad.

No previous testing experience is needed, so feel free to join us in the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! 😉

See you all on Friday!

Mozilla Addons Blog: WebExtensions in Firefox 48

We last updated you on our progress with WebExtensions when Firefox 47 landed in Developer Edition (Aurora), and today we have an update for Firefox 48, which landed in Developer Edition this week.

With the release of Firefox 48, we feel WebExtensions are in a stable state. We recommend developers start to use the WebExtensions API for their add-on development. Over the last release more than 82 bugs were closed on WebExtensions alone.

If you have authored an add-on in the past and are curious how it’s affected by the upcoming changes, please use the lookup tool. There is also a wiki page filled with resources to support you through the changes.

APIs Implemented

Many APIs gained improved support in this release, including: alarms, bookmarks, downloads, notifications, webNavigation, webRequest, windows and tabs.

The options v2 API is now supported so that developers can implement an options UI for their users. We do not plan to support the options v1 API, which is deprecated in Chrome. You can see an example of how to use this API in the WebExtensions examples on Github.
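A minimal manifest entry for an options page might look like the sketch below. The file name options.html is a placeholder for whatever page your add-on ships; see the linked examples for a complete add-on.

```json
"options_ui": {
  "page": "options.html",
  "browser_style": true
}
```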


In Firefox 48 we pushed hard to make the WebRequest API a solid foundation for privacy and security add-ons such as Ghostery, RequestPolicy and NoScript. With the current implementation of the onErrorOccurred function, it is now possible for Ghostery to be written as a WebExtension.

The addition of reliable origin information was a major requirement for existing Firefox security add-ons performing cross-origin checks such as NoScript or uBlock Origin. This feature is unique to Firefox, and is one of our first expansions beyond parity with the Chrome APIs for WebExtensions.

Although requestBody support is not in Firefox 48 at the time of publication, we hope it will be uplifted. This change to Gecko is quite significant because it will allow NoScript’s XSS filter to perform much better as a WebExtension, with huge speed gains (20 times or more) in some cases over the existing XUL and XPCOM extension for many operations (e.g. form submissions that include file uploads).

We’ve also had the chance to dramatically increase our unit test coverage again across the WebExtensions API, and now our modules have over 92% test coverage.

Content Security Policy Support

By default WebExtensions now use a Content Security Policy, limiting the location of resources that can be loaded. The default policy for Firefox is the same as Chrome’s:

"script-src 'self'; object-src 'self';"

This has many implications, such as the following: eval will no longer work, inline JavaScript will not be executed, and only local scripts and resources are loaded. To relax this and define your own policy, use the content_security_policy entry in the WebExtension’s manifest.

For example, a manifest that sets its own policy would include a configuration that looks like this:

"content_security_policy": "script-src 'self'; object-src 'self'"

Please note: this will be a backwards incompatible change for any Firefox WebExtensions that did not adhere to this CSP. Existing WebExtensions that do not adhere to the CSP will need to be updated.

Chrome compatibility

To improve the compatibility with Chrome, a change has landed in Firefox that allows an add-on to be run in Firefox without the add-on id specified. That means that Chrome add-ons can now be run in Firefox with no manifest changes using about:debugging and loading it as a temporary add-on.

Support for WebExtensions with no add-on id specified in the manifest is being added to addons.mozilla.org (AMO) and our other tools, and should be in place on AMO by the time Firefox 48 lands in release.

Android Support

With the release of Firefox 48 we are announcing Android support for WebExtensions. WebExtensions add-ons can now be installed and run on Android, just like any other add-on. However, because Firefox for Android makes use of a native user interface, anything that involves user interface interaction is currently unsupported (similar to existing extensions on Android).

You can see the full list of APIs supported on Android in the WebExtensions documentation on MDN; these include alarms, cookies, i18n and runtime.

Developer Support

In Firefox 45 the ability to load add-ons temporarily was added to about:debugging. In Firefox 48 several exciting enhancements are added to about:debugging.

If your add-on fails to load for some reason in about:debugging (most commonly due to JSON syntax errors), then you’ll get a helpful message appearing at the top of about:debugging. In the past, the error would be hidden away in the browser console.


The error still remains in the browser console, but it is now also visible right on the same page where loading was triggered.



You can now debug background scripts and content scripts in the debugging tools. In this example, to debug background scripts I loaded the add-on bookmark-it from the MDN examples. Next click “Enable add-on debugging”, then click “debug”:


You will need to accept the incoming remote debugger session request. Then you’ll have a Web Console for the background page. This allows you to interact with the background page. In this case I’m calling the toggleBookmark API.


This will call the toggleBookmark function and bookmark the page (note the bookmark icon is now blue). If you want to debug the toggleBookmark function, just add the debugger statement at the appropriate line. When you trigger toggleBookmark, you’ll be dropped into the debugger:

You can now debug content scripts. In this example I’ve loaded the beastify add-on from the MDN examples using about:debugging. This add-on runs a content script to alter the current page by adding a red border.

All you have to do to debug it is to insert the debugger statement into your content script, open up the Developer Tools debugger and trigger the debug statement:


You are then dropped into the debugger ready to start debugging the content script.


As you may know, restarting Firefox and adding in a new add-on can be slow, so about:debugging now allows you to reload an add-on. This will remove the add-on and then re-enable it, so that you don’t have to keep restarting Firefox. This is especially useful for changes to the manifest, which will not be automatically refreshed. It also resets UI buttons.

In the following example the add-on just calls setBadgeText to add “Test” onto the browser action button (in the top right) when you press the button added by the add-on.


Hitting reload for that add-on clears the state for that button and reloads the add-on from the manifest, meaning that after a reload, the “Test” text has been removed.


This makes developing and debugging WebExtensions really easy. Coming soon, web-ext, the command line tool for developing add-ons, will gain the ability to trigger this each time a file in the add-on changes.

There are also lots of other ways to get involved with WebExtensions, so please check them out!

Update: clarified that no add-on id refers to the manifest as a WebExtension.

Daniel Stenberg: curl 7.49.0 goodies coming

Here’s a closer look at three new features that we’re shipping in curl and libcurl 7.49.0, to be released on May 18th 2016.

connect to this instead

If you’re one of the users who thought --resolve and doing Host: header tricks with --header weren’t good enough, you’ll appreciate that we’re adding yet another option for you to fiddle with the connection procedure. Another “Swiss army knife style” option for you who know what you’re doing.

With --connect-to you basically provide an internal alias for a certain name + port to instead internally use another name + port to connect to.

Instead of connecting to HOST1:PORT1, connect to HOST2:PORT2

It is very similar to --resolve which is a way to say: when connecting to HOST1:PORT1 use this ADDR2:PORT2. --resolve effectively prepopulates the internal DNS cache and makes curl completely avoid the DNS lookup and instead feeds it with the IP address you’d like it to use.

--connect-to doesn’t avoid the DNS lookup, but it will make sure that a different host name and destination port pair is used than what was found in the URL. A typical use case for this would be to make sure that your curl request asks a specific server out of several in a pool of many, where each has a unique name but you normally reach them with a single URL whose host name is otherwise load balanced.

--connect-to can be specified multiple times to add mappings for multiple names, so that even HTTP redirects to other host names can be handled. You don’t even necessarily have to remap the first host name that is used.
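As a hypothetical illustration (all host names and the address below are invented), asking for one specific backend out of a load-balanced pool could look like this:

```shell
# Fetch https://www.example.com/ but open the connection to one
# particular backend in the pool instead:
curl --connect-to www.example.com:443:backend-3.example.com:443 \
     https://www.example.com/

# Compare with --resolve, which pins the IP address for the name
# and skips the DNS lookup entirely:
curl --resolve www.example.com:443:203.0.113.7 https://www.example.com/
```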

The libcurl option name for this feature is CURLOPT_CONNECT_TO.

Michael Kaufmann brought this feature.

http2 prior knowledge

In our ongoing quest to provide more and better HTTP/2 support in a world that is slowly but steadily doing more and more transfers over the new version of the protocol, curl now offers --http2-prior-knowledge.

As the name might hint, this is a way to tell curl that you have “prior knowledge” that the URL you specify goes to a host that you know supports HTTP/2. The term prior knowledge is in fact used in the HTTP/2 spec (RFC 7540) for this scenario.

Normally, when given an HTTP:// or HTTPS:// URL, curl makes no assumption that the server supports HTTP/2 and will instead try to upgrade from HTTP/1.1 during the request. The command line tool even tries to upgrade all HTTPS:// URLs by default, and libcurl can be told to do so.

libcurl-wise, you ask for prior-knowledge use by setting CURLOPT_HTTP_VERSION to CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE.

Asking for HTTP/2 prior knowledge when the server does not in fact support HTTP/2 will give you an error back.
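On the command line it is a single flag; example.com here stands in for a server you already know speaks HTTP/2:

```shell
# Speak HTTP/2 from the first byte over clear text, skipping the
# HTTP/1.1 Upgrade dance:
curl --http2-prior-knowledge http://example.com/
```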

Diego Bes brought this feature.

TCP Fast Open

TCP Fast Open is documented in RFC 7413 and is basically a way to pass data to the remote machine earlier in the TCP handshake – already in the SYN and SYN-ACK packets. This is of course a means to get data over faster and reduce latency.

The --tcp-fastopen option is supported on Linux and OS X only for now.

This is an idea and technique that has been around for a while and it is slowly getting implemented and supported by servers. There have been some reports of problems in the wild when “middle boxes” that fiddle with TCP traffic see these packets, that sometimes result in breakage. So this option is opt-in to avoid the risk that it causes problems to users.

A typical real-world case where you would use this option is when sending an HTTP POST to a site you don’t already have a connection established to. Just note that TFO relies on the client having made contact with the server before, and on having a special TFO “cookie” stored and not expired.
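Such a POST might look like the following sketch (URL invented; note that, as mentioned below, TFO currently applies to clear-text protocols in curl):

```shell
# Send the POST data already in the TCP SYN where possible:
curl --tcp-fastopen -d 'name=value' http://example.com/submit
```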

TCP Fast Open is so far only used for clear-text TCP protocols in curl. These days more and more protocols switch over to their TLS counterparts (and there’s room for future improvements to add the initial TLS handshake parts with TFO). A related option to speed up TLS handshakes is --false-start (supported with the NSS or the secure transport backends).

With libcurl, you enable TCP Fast Open with CURLOPT_TCP_FASTOPEN.

Alessandro Ghedini brought this feature.

Support.Mozilla.Org: What’s Up with SUMO – 28th April

Hello, SUMO Nation!

Did you know that in Japanese mythology, foxes with nine tails are over 100 years old and have the power of omniscience? I think we could get the same result if we put a handful of SUMO contributors in one room – maybe except for the tails ;-)

Here are the news from the world of SUMO!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 4th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



Support Forum

Knowledge Base & L10n

  • Hackathons everywhere! Find your people and get organized!
  • We have three upcoming iOS articles that will need localization. Their drafts are still in progress (pending review from the product team). Coming your way real soon – watch your dashboards!
  • New l10n milestones coming to your dashboards soon, as well.


What’s your experience of release week? Share with us in the comments or our forums! We are looking forward to seeing you all around SUMO – KEEP ROCKING THE HELPFUL WEB!

Air Mozilla: Web QA Weekly Meeting, 28 Apr 2016

This is our weekly gathering of Mozilla's Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air Mozilla: Reps weekly, 28 Apr 2016

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and to invite Reps to share their work with everyone.

Chris H-C: In Lighter News…

…Windows XP Firefox users may soon be able to properly render poop.


Here at Mozilla, we take these things seriously.


Ian Bicking: A Product Journal: Data Up and Data Down

I’m blogging about the development of a new product in Mozilla, look here for my other posts in this series

We’re in the process of reviewing the KPI (Key Performance Indicators) for Firefox Hello (relatedly I joined the Firefox Hello team as engineering manager in October). Mozilla is trying (like everyone else) to make data-driven decisions. Basing decisions on data has some potential to remove or at least reveal bias. It provides a feedback mechanism that can provide continuity even as there are personnel changes. It provides some accountability over time. Data might also provide insight about product opportunities which we might otherwise miss.

Enter the KPI: for Hello (like most products) the key performance indicators are number of users, growth in users over time, user retention, and user sentiment (e.g., we use the Net Promoter Score). But like most projects those are not actually our success criteria: product engagement is necessary but not sufficient for organizational goals. Real goals might be revenue, social or political impact, or improvement in brand sentiment.

The value of KPI is often summarized as “letting us know how we’re doing”. I think the value KPI offers is more select:

  1. When you think a product is doing well, but it’s not, KPI is revealing.
  2. When you know a product isn’t doing well, KPI lets you triage: is it hopeless? Do we need to make significant changes? Do we need to maintain our approach but try harder?
  3. When a product is doing well the KPI gives you a sense of the potential. You can also triage success: Should we invest heavily? Stay the path? Is there no potential to scale the success far enough?

I’m skeptical that KPI can provide the inverse of 1: when you think a product is doing poorly, can KPI reveal that it is doing well? Because there’s another set of criteria that defines “success”, KPI is necessary but not sufficient. It requires a carefully objective executive to revise their negative opinion about the potential of a project based on KPI, and they may have reasonably lost faith that a project’s KPI-defined success can translate into success given organizational goals.

The other theoretical value of KPI is that you could correlate KPI with changes to the product, testing whether each change improves your product’s core value. I’m sure people manage to do this, with both very fine grained measurements and fine grained deployments of changes. But it seems more likely to me that for most projects given a change in KPI you’ll simply have to say “yup” and come up with unverified theories about that change.

The metrics that actually support the development of the product are not “key”, they are “incidental”. These are metrics that find bugs in the product design, hint at unexplored opportunities, confirm the small wins. These are metrics that are actionable by the people making the product: how do people interact with the tool? What do they use it for? Where do they get lost? What paths lead to greater engagement?

What is KPI for?

I’m trying to think more consciously about the difference between managing up and managing down. A softer way of phrasing this is managing in and managing out – but in this case I think the power dynamics are worth highlighting.

KPI is data that goes up. It lets someone outside the project – and above the project – make choices: about investment, redirection, cancellation. KPI data doesn’t go down, it does little to help the people doing the work. Feeling joy or despair about your project based on KPI is not actionable for those people on the inside of a project.

Incentive or support

I would also distinguish two kinds of management here: one perspective on management is that the organization should set up the right incentives and consequences so that rewards are aligned with organizational goals. The right incentives might make people adapt their behavior to get alignment; how they adapt is undefined. The right incentives might also exclude those who aren’t in alignment, culling misalignment from the organization. Another perspective is that the organization should work to support people, that misalignment of purpose between a person and the organization is more likely a bug than a misalignment of intention. Are people black boxes that we can nudge via punishment and reward? Are there less mechanical ways to influence change?

Student performance measurements are another kind of KPI. They let someone on the outside (of the classroom) know if things are going well or poorly for the students. They say little about why, and they don’t support improvement. School reform based on measurement presumes that teachers and schools are able to achieve the desired outcomes, but simply not willing. A risk of top-down reform: the people on the top use a perspective from the top. As an authority figure, how do I make decisions? The resulting reform is disempowering, supporting decisions from above, as opposed to using data to support the empowerment of those making the many day-to-day decisions that might effect a positive outcome.

Of course, having data available to inform decisions at all levels – from the executive to the implementor – would be great. But there’s a better criteria for data: it should support decision making processes. What are your most important decisions?

As an example from Mozilla, we have data about how much Firefox is used and its marketshare. How much should we pay attention to this data? We certainly don’t have the granularity to connect changes in this KPI to individual changes we make in the project. The only real way to do that is through controlled experiments (which we are trying). We aren’t really willing to triage the project; no one is asking “should we just give up on Firefox?” The only real choice we can make is: are we investing enough in Firefox, or should we invest more? That’s a question worth asking, but we need to keep our attention on the question and not the data. For instance, if we decide to increase investment in Firefox, the immediate questions are: what kind of investment? Over what timescale? Data can be helpful to answer those questions, but not just any data.

Exploratory data

Weeks after I wrote (but didn’t publish) this post I encountered Why Greatness Cannot Be Planned: The Myth of the Objective, a presentation by Kenneth Stanley:

Setting an objective can block its own achievement. It can be an obstacle to creativity and innovation in general. Without protection of individual autonomy, collaboration can become dangerously objective.

The example he uses is manually searching a space of nonlinear image generation to find interesting images. The positive example is one where people explore, branching from novel examples until something recognizable emerges:

One negative example is one where an algorithm explores with a goal in mind:

Another negative example is selection by voting, instead of personal exploration; a product of convergent consensus instead of divergent treasure hunting:

If you decide what you are looking for, you are unlikely to find it. This generated image search space is deliberately nonlinear, so it’s difficult to understand how actions affect outcomes. Though artificial, I think the example is still valid: in a competitive environment, the thing you are searching for is hard to find, because if it was not hard then someone would have found it. And it’s probably hard because actions affect outcomes in unexpected ways.

You could describe this observation as another way of describing the pitfalls of hill climbing: getting stuck at local maximums. Maybe an easy fix is to add a little randomness, to bounce around, to see what lies past the hill you’ve found. But the hills themselves can be distractions: each hill supposes a measurement. The divergent search doesn’t just reveal novel solutions, but it can reveal a novel rubric for success.

This is also a similar observation to that in Innovator’s Dilemma: specifically that in these cases good management consistently and deliberately keeps a company away from novelty and onto the established track, and it does so by paying attention to the feedback that defines the company’s (current) success. The disruptive innovation, a term somewhat synonymous with the book, is an innovation that requires a change in metrics, and that a large portion of the innovation is finding the metric (and so finding the market), not implementing the maximizing solution.

But I digress from the topic of data. If we’re going to be data-driven toward entirely new directions, we may need data that doesn’t answer a question or support a decision, but just tells us about things we don’t know. Data that supports exploration, not a hypothesis we confirm or reject, because we are still trying to discover our hypothesis. We use the data to look for the hidden variable, the unsolved need, the desire that has not been articulated.

I think we look for this kind of data more often than we would admit. Why else would we want complex visualizations? The visualizations are our attempt at finding a pattern we don’t expect to find.

In Conclusion

I’m lousy at conclusions. All those words up there are like data, and I’m curious what they mean, but I haven’t figured it out yet.

Geoff LankowDoes Firefox update despite being set to "never check for updates"? This might be why.

If, like me, you have set Firefox to "never check for updates" for some reason, and yet it does sometimes anyway, this could be your problem: the chrome debugger.

The chrome debugger uses a separate profile, with the preferences copied from your normal profile. But, if your prefs (such as app.update.enabled) have changed, they remain in the debugger profile as they were when you first opened the debugger.

App update can be started by any profile using the app, so the debugger profile sees the pref as it once was, and goes looking for updates.

Solution? Copy the app update prefs from the main profile to the debugger profile (mine was at ~/.cache/mozilla/firefox/31392shv.default/chrome_debugger_profile), or just destroy the debugger profile and have a new one created next time you use it.
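If you'd rather script the first option, something along these lines works (a rough sketch; the paths are modeled on the example above and the prefs.js line format is an assumption, so back up the profile first):

```python
import re
from pathlib import Path

def sync_update_prefs(main_prefs: str, debugger_prefs: str) -> str:
    """Return the debugger profile's prefs.js text with its app.update.*
    lines replaced by the ones from the main profile's prefs.js."""
    is_update = lambda line: re.match(r'\s*user_pref\("app\.update\.', line)
    kept = [l for l in debugger_prefs.splitlines() if not is_update(l)]
    copied = [l for l in main_prefs.splitlines() if is_update(l)]
    return "\n".join(kept + copied) + "\n"

# Hypothetical paths, modeled on the one mentioned above:
profile = Path.home() / ".cache/mozilla/firefox/31392shv.default"
debugger = profile / "chrome_debugger_profile"
# debugger.joinpath("prefs.js").write_text(
#     sync_update_prefs(profile.joinpath("prefs.js").read_text(),
#                       debugger.joinpath("prefs.js").read_text()))
```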

Just thought you might like to know.

Air MozillaPrivacy Lab - April 2016 - Encryption vs. the FBI

Privacy Lab - April 2016 - Encryption vs. the FBI Riana Pfefferkorn, Cryptography Fellow at the Stanford Center for Internet and Society, will talk about the FBI's dispute with Apple over encrypted iPhones.

Mike HommeyAnnouncing git-cinnabar 0.3.2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

This is mostly a bug and regression-fixing release.

What’s new since 0.3.1?

  • Fixed a performance regression when cloning big repositories on OSX.
  • git configuration items with line breaks are now supported.
  • Fixed a number of issues with corner cases in mercurial data (such as, but not limited to, nodes with no first parent, malformed .hgtags, etc.)
  • Fixed a stack overflow, a buffer overflow and a use-after-free in cinnabar-helper.
  • Better handling of git worktrees, and of calls from subdirectories.
  • Updated git to 2.7.4 for cinnabar-helper.
  • Properly remove all refs meant to be removed when using a git version lower than 2.1.

Mozilla Addons BlogJoin the Featured Add-ons Community Board

Are you a big fan of add-ons? Think you can help identify the best content to spotlight on AMO? Then let’s talk!

All the add-ons featured on AMO (addons.mozilla.org) are selected by a board of community members. Each board consists of 5-8 members who nominate and select featured add-ons once a month for six months. Featured add-ons help users discover what’s new and useful, and downloads increase dramatically in the months they’re featured, so your participation really makes an impact.

And now the time has come to assemble a new board for the months of July through December.

Anyone from the add-ons community is welcome to apply: power users, theme designers, developers, and evangelists. Priority will be given to applicants who have not served on the board before, followed by those from previous boards, and finally those from the outgoing board. This page provides more information on the duties of a board member. To be considered, please email us at with your name, and tell us how you’re involved with AMO. The deadline is Friday, May 10, 2016 at 23:59 PDT. The new board will be announced about a week later.

We look forward to hearing from you!

Michael KaplyBroken Add-ons in Firefox 46

A lot of add-ons are being broken by a subtle change in Firefox 46, in particular the removal of legacy array/generator comprehension.

Most of these add-ons (including mine) did not use array comprehension intentionally; they copied some code from this page on MDN for doing an MD5 hash of a string. It looked like this:

var s = [toHexString(hash.charCodeAt(i)) for (i in hash)].join("");

You should search through your source code for toHexString and make sure you aren’t using this. MDN was updated in January to fix this. Here’s what the new code looks like:

var s = Array.from(hash, (c, i) => toHexString(hash.charCodeAt(i))).join("");

The new code will only work in Firefox 32 and beyond. If for some reason you need an older version, you can go through the history of the page to find the array based version.

Using the old code causes a syntax error, so the breakage is much larger than you might expect. You’ll want to fix it sooner rather than later, because Firefox 46 started rolling out yesterday.

As a side note, Giorgio Maone caught this in January, but unfortunately all that was updated was the MDN page.

Air MozillaThe Joy of Coding - Episode 55

The Joy of Coding - Episode 55 mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaApril 2016 Speaker Series: When Change is the Only Constant, Org Structure Doesn't Matter - Kirsten Wolberg

April 2016 Speaker Series: When Change is the Only Constant, Org Structure Doesn't Matter - Kirsten Wolberg Regardless of whether an organization is decentralized or command & control, large-scale changes are never simple nor straightforward. There are no silver bullets. And yet, when...

Rail AliievFirefox 46.0 and SHA512SUMS

In my previous post I introduced the new release process we have been adopting in the 46.0 release cycle.

Release build promotion has been in production since Firefox 46.0 Beta 1. We have discovered some minor issues; some of them are already fixed, and some are still waiting to be fixed.

One of the visible bugs is Bug 1260892. We generate a big SHA512SUMS file, which should contain all important checksums. With the numerous changes to the process, the file no longer represents all required files: some files are missing, and some have different names.

We are working on fixing the bug, but in the meantime you can use the following workaround to verify the files.

For example, if you want to verify, you need to use the following 2 files:

Example commands:

# download all required files
$ wget -q
$ wget -q
$ wget -q
$ wget -q
# Import Mozilla Releng key into a temporary GPG directory
$ mkdir .tmp-gpg-home && chmod 700 .tmp-gpg-home
$ gpg --homedir .tmp-gpg-home --import KEY
# verify the signature of the checksums file
$ gpg --homedir .tmp-gpg-home --verify firefox-46.0.checksums.asc && echo "OK" || echo "Not OK"
# calculate the SHA512 checksum of the file
$ sha512sum "Firefox Setup 46.0.exe"
c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479  Firefox Setup 46.0.exe
# lookup for the checksum in the checksums file
$ grep c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479 firefox-46.0.checksums
c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479 sha512 46275456 install/sea/firefox-46.0.ach.win64.installer.exe

This is just a temporary workaround; the bug will be fixed ASAP.

Air MozillaSuMo Community Call 27th April 2016

SuMo Community Call 27th April 2016 This is the sumo weekly call We meet as a community every Wednesday 17:00 - 17:30 UTC The etherpad is here:

Air MozillaBay Area Rust Meetup April 2016

Bay Area Rust Meetup April 2016 Rust meetup on the subject of operating systems.

Air MozillaConnected Devices Weekly Program Review, 26 Apr 2016

Connected Devices Weekly Program Review Weekly project updates from the Mozilla Connected Devices team.

Richard NewmanDifferent kinds of storage

I’ve been spending most of my time so far on Project Tofino thinking about how a user agent stores data.

A user agent is software that mediates your interaction with the world. A web browser is one particular kind of user agent: one that fetches parts of the web and shows them to you.

(As a sidenote: browsers are incredibly complicated, not just for the obvious reasons of document rendering and navigation, but also because parts of the web need to run code on your machine and parts of it are actively trying to attack and track you. One of a browser’s responsibilities is to keep you safe from the web.)

Chewing on Redux, separation of concerns, and Electron’s process model led to us drawing a distinction between a kind of ‘profile service’ and the front-end browser itself, with ‘profile’ defined as the data stored and used by a traditional browser window. You can see the guts of this distinction in some of our development docs.

The profile service stores full persistent history and data like it. The front-end, by contrast, has a pure Redux data model that’s much closer to what it needs to show UI — e.g., rather than all of the user’s starred pages, just a list of the user’s five most recent.

The front-end is responsible for fetching pages and showing the UI around them. The back-end service is responsible for storing data and answering questions about it from the front-end.

To build that persistent storage we opted for a mostly event-based model: simple, declarative statements about the user’s activity, stored in SQLite. SQLite gives us durability and known performance characteristics in an embedded database.

On top of this we can layer various views (materialized or not). The profile service takes commands as input and pushes out diffs, and the storage itself handles writes by logging events and answering queries through views. This is the CQRS concept applied to an embedded store: we use different representations for readers and writers, so we can think more clearly about the transformations between them.
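As a toy illustration of that shape (the schema, table and view names are my own invention, not Tofino's actual storage), here is an append-only event log with a read-side view, using Python's stdlib sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
  -- Write side: an append-only log of simple, declarative user events.
  CREATE TABLE events (
    ts   INTEGER NOT NULL,  -- unix timestamp
    kind TEXT    NOT NULL,  -- e.g. 'visit', 'star'
    url  TEXT    NOT NULL
  );
  -- Read side: a view shaped for the UI, e.g. the five most recent stars.
  CREATE VIEW recent_stars AS
    SELECT url, MAX(ts) AS starred_at
    FROM events WHERE kind = 'star'
    GROUP BY url ORDER BY starred_at DESC LIMIT 5;
""")

def record(ts, kind, url):
    """Handle a write by logging the event; readers only see the views."""
    conn.execute("INSERT INTO events VALUES (?, ?, ?)", (ts, kind, url))

for i, name in enumerate(["a", "b", "c", "d", "e", "f", "g"]):
    record(i, "star", "https://example.com/" + name)

rows = conn.execute(
    "SELECT url FROM recent_stars ORDER BY starred_at DESC").fetchall()
```

The writer never updates rows in place, and the reader never sees raw events; each side gets the representation it needs, which is the CQRS idea in miniature.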

Where next?

One of the reasons we have a separate service is to acknowledge that it might stick around when there are no browser windows open, and that it might be doing work other than serving the immediate needs of a browser window. Perhaps the service is pre-fetching pages, or synchronizing your data in the background, or trying to figure out what you want to read next. Perhaps you can interact with the service from something other than a browser window!

Some of those things need different kinds of storage. Ad hoc integrations might be best served by a document store; recommendations might warrant some kind of graph database.

When we look through that lens we no longer have just a profile service wrapping profile storage. We have a more general user agent service, and one of the data sources it manages is your profile data.

Mozilla Addons BlogMigrating Popup ALT Attribute from XUL/XPCOM to WebExtensions

Today’s post comes from Piro, the developer of Popup ALT Attribute, in addition to 40 other add-ons. He shares his thoughts about migrating XUL/XPCOM add-ons to WebExtensions, and shows us how he did it with Popup ALT Attribute. You can see the full text of this post on his personal blog.


Hello, add-on developers. My name is YUKI Hiroshi aka Piro, a developer of Firefox add-ons. For many years I developed Firefox and Thunderbird add-ons personally and for business, based on XUL and XPCOM.

I recently started to research which APIs are required to migrate my add-ons to WebExtensions, because Mozilla announced that XUL/XPCOM add-ons will be deprecated at the end of 2017. I realized that only some add-ons can be migrated with currently available APIs, and Popup ALT Attribute is one such add-on.

Here is the story of how I migrated it.

What’s the add-on?

Popup ALT Attribute is an ancient add-on started in 2002, to show what is written in the alt attribute of img HTML elements on web pages. By default, Firefox shows only the title attribute as a tooltip.

Initially, the add-on was implemented to replace an internal function FillInHTMLTooltip() of Firefox itself.

In February 2016, I migrated it to be e10s-compatible. It is worth noting that depending on your add-on, if you can migrate it directly to WebExtensions, it will be e10s-compatible by default.

Re-formatting in the WebExtensions style

I read the tutorial on how to build a new simple WebExtensions-based add-on from scratch before migration, and I realized that bootstrapped extensions are similar to WebExtensions add-ons:

  • They are dynamically installed and uninstalled.
  • They are mainly based on JavaScript code and some static manifest files.

My add-on was easily re-formatted as a WebExtensions add-on, because I had already migrated it to a bootstrapped extension.

This is the initial version of the manifest.json I wrote. There was no localization or options UI yet:

{
  "manifest_version": 2,
  "name": "Popup ALT Attribute",
  "version": "4.0a1",
  "description": "Popups alternate texts of images or others like NetscapeCommunicator(Navigator) 4.x, and show long descriptions in the multi-row tooltip.",
  "icons": { "32": "icons/icon.png" },
  "applications": {
    "gecko": { "id": "{61FD08D8-A2CB-46c0-B36D-3F531AC53C12}",
               "strict_min_version": "48.0a1" }
  },
  "content_scripts": [
    { "all_frames": true,
      "matches": ["<all_urls>"],
      "js": ["content_scripts/content.js"],
      "run_at": "document_start" }
  ]
}

I had already separated the main script to a frame script and a loader for it. On the other hand, manifest.json can have some manifest keys to describe how scripts are loaded. It means that I don’t need to put my custom loaders in the package anymore. Actually, a script for any web page can be loaded with the content_scripts rule in the above sample. See the documentation for content_scripts for more details.

So finally only 3 files were left.


Before:

+ install.rdf
+ icon.png
+ [components]
+ [modules]
+ [content]
    + content-utils.js

And after:

+ manifest.json (migrated from install.rdf)
+ [icons]
|   + icon.png (moved)
+ [content_scripts]
    + content.js (moved and migrated from content-utils.js)

And I still had to isolate my frame script from XPCOM.

  • The script touched nsIPrefBranch and some XPCOM components via XPConnect, so they were temporarily commented out.
  • User preferences were not available and only default configurations were there as fixed values.
  • Some constant properties accessed, like Ci.nsIDOMNode.ELEMENT_NODE, had to be replaced with Node.ELEMENT_NODE.
  • The listener for mousemove events from web pages was attached to the global namespace of the frame script, but it had to be re-attached to each web page's document, because the script is now executed in each web page directly.


For the old install.rdf I had a localized description. In WebExtensions add-ons I had to do it in a different way. See how to localize messages for details. In short, I did the following:

Added files to define localized descriptions:

+ manifest.json
+ [icons]
+ [content_scripts]
+ [_locales]
    + [en_US]
    |   + messages.json (added)
    + [ja]
        + messages.json (added)

Note, en_US is different from en-US in install.rdf.

English locale, _locales/en_US/messages.json was:

{
  "name": { "message": "Popup ALT Attribute" },
  "description": { "message": "Popups alternate texts of images or others like NetscapeCommunicator(Navigator) 4.x, and show long descriptions in the multi-row tooltip." }
}

Japanese locale, _locales/ja/messages.json was also included. And, I had to update my manifest.json to embed localized messages:

{
  "manifest_version": 2,
  "name": "__MSG_name__",
  "version": "4.0a1",
  "description": "__MSG_description__",
  "default_locale": "en_US",
  ...
}

__MSG_****__ placeholders in string values are automatically replaced with localized messages. You need to specify the default locale manually via the default_locale key.
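The substitution mechanism is easy to picture; roughly (my own sketch of the idea, not Firefox's actual implementation):

```python
import re

def localize(text, messages):
    """Replace __MSG_key__ placeholders using a parsed messages.json
    dictionary (shaped like _locales/en_US/messages.json above)."""
    return re.sub(r"__MSG_(\w+)__",
                  lambda m: messages[m.group(1)]["message"], text)

messages = {"name": {"message": "Popup ALT Attribute"}}
localize("__MSG_name__", messages)  # "Popup ALT Attribute"
```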

Sadly, Firefox 45 does not support the localization feature, so you need to use Nightly 48.0a1 or newer to try localization.

User preferences

Currently, WebExtensions does not provide any feature completely compatible with nsIPrefBranch. Instead, there are simple storage APIs, which can be used as an alternative to nsIPrefBranch to set/get user preferences. This add-on had no configuration UI but had some secret preferences to control its advanced features, so I migrated them as a trial for future migrations of my other add-ons.

Then I encountered a large limitation: the storage API is not available in content scripts. I had to create a background script just to access the storage, and communicate with it via the inter-sandboxes messaging system. [Updated 4/27/16: bug 1197346 has been fixed on Nightly 49.0a1, so now you don’t need any hack to access the storage system from content scripts anymore. Now, my library (Configs.js) just provides easy access for configuration values instead of the native storage API.]

Finally, I created a tiny library to do that. I won’t describe how I did it here, but if you want to know the details, please see the source. There are just 177 lines.

I had to update my manifest.json to use the library from both the background page and the content script, like:

{
  "background": {
    "scripts": [
      "common/Configs.js", /* the library itself */
      "common/common.js"   /* codes to use the library */
    ]
  },
  "content_scripts": [
    { "all_frames": true,
      "matches": ["<all_urls>"],
      "js": [
        "common/Configs.js", /* the library itself */
        "common/common.js",  /* codes to use the library */
        "content_scripts/content.js"
      ],
      "run_at": "document_start" }
  ]
}

Scripts listed in the same section share a namespace for the section. I didn’t have to write any code like require() to load a script from others. Instead, I had to be careful about the listing order of scripts, and wrote a script requiring a library after the library itself, in each list.

One last problem was: how to do something like the about:config or the MCD — general methods to control secret preferences across add-ons.

For my business clients, I usually provide add-ons and use MCD to lock their configurations. (There are some common requirements for business use of Firefox, so combinations of add-ons and MCD are more reasonable than creating private builds of Firefox with different configurations for each client.)

I think I still have to research around this point.

Options UI

WebExtensions provides a feature to create options pages for add-ons. It is also not supported on Firefox 45, so you need to use Nightly 48.0a1 for now. As I previously said, this add-on didn’t have its configuration UI, but I implemented it as a trial.

In XUL/XPCOM add-ons, rich UI elements like <checkbox>, <textbox>, <menulist>, and more are available, but these are going away at the end of next year. So I had to implement a custom configuration UI based on pure HTML and JavaScript. (If you need more rich UI elements, some known libraries for web applications will help you.)

On this step I created two libraries:


I’ve successfully migrated my Popup ALT Attribute add-on from XUL/XPCOM to WebExtensions. Now it is just a branch but I’ll release it after Firefox 48 is available.

Here are reasons why I could do it:

  • It was a bootstrapped add-on, so I had already isolated the add-on from all destructive changes.
  • The core implementation of the add-on was similar to a simple user script. Essential actions of the add-on were enclosed inside the content area, and no privilege was required to do that.

However, this is a rare case for me. My other 40+ add-ons require some privilege, and/or they work outside the content area. Most of my add-ons are such non-typical cases.

I have to triage, plan, and request new APIs, not only for myself but also for other XUL/XPCOM add-on developers.

Thank you for reading.

The Mozilla BlogUpdate to Firefox Released Today

The latest version of Firefox was released today. It features an improved look and feel for Linux users, a minor security improvement and additional updates for all Firefox users.

The update to Firefox for Android features minor changes, including an improvement to user notifications and clearer homescreen shortcut icons.

More information:

Air MozillaMartes mozilleros, 26 Apr 2016

Martes mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Marcia KnousNightly is where I will live

After some time working on Firefox OS and Connected Devices, I am moving back to Desktop land. Going forward I will be working with the Release Management Team as the Nightly Program Manager. That means I would love to work with all of you to identify any potential issues in Nightly and help bring them to resolution. To that end, I have done a few things. First, we now have a Telegram Group for Nightly Testers. Feel free to join that group if you want to keep up with issues we are

David LawrenceHappy BMO Push Day!

the following changes have been pushed to

  • [1195736] intermittent internal error: “file error – nav_link: not found” (also manifests as fields_lhs: not found)

discuss these changes on

Daniel GlazmanFirst things first

Currently implementing many new features in Postbox, I carefully read (several times) Mark Surman's recent article on Thunderbird's future. I also read Simon Phipps's report twice. Then there is the contract offer for a Thunderbird Architect posted by Mozilla, which must be read too:

... Thunderbird is facing a number of technical challenges, including but not limited to:

  • ...
  • The possible future deprecation of XUL, its current user interface technology and XPCOM, its current component technology, by Mozilla
  • ...

In practice, the last line above means for Thunderbird:

  1. rewrite the whole UI and the whole JS layer with it
  2. most probably rewrite the whole SMTP/MIME/POP/IMAP/LDAP/... layer
  3. most probably have a new Add-on layer or, far worse, no more Add-ons

Well, sorry to say, but that's a bit of a « technical challenge »... So yes, that's indeed a « fork in the road » but let's be serious a second, it's unfortunately this kind of fork; rewriting the app is not a question of if but only a question of when. Unless Thunderbird dies entirely, of course.

Evaluating potential hosts for Thunderbird, and a fortiori choosing one, seems to me rather difficult without first discussing the XUL/XPCOM-less future of the app, i.e. without having in hand the second milestone delivered by the Thunderbird Architect. First things first. I would also be interested in knowing how many people MoCo will dedicate to the deXULXPCOMification of Firefox; that would allow some extrapolations and some pretty solid (and probably rather insurmountable...) requirements for TB's host.

Last but not least and from a more personal point of view, I feel devastated confronting Mark's article and the Mozilla Manifesto.

Daniel StenbergAbsorbing 1,000 emails per day

Some people say email is dead. Some people say there are “email killers” and bring up a bunch of chat and instant messaging services. I think those people communicate far too little to understand how email can scale.

I receive up to around 1,000 emails per day. I average on a little less but I do have spikes way above.

Why do I get a thousand emails?

Primarily because I participate on a lot of mailing lists. I run a handful of open source projects myself, each with at least one list. I follow a bunch more projects; more mailing lists. We have a whole set of mailing lists at work (Mozilla) and I participate and follow several groups in the IETF. Lists and lists. I discuss things with friends on a few private mailing lists. I get notifications from services about things that happen (commits, bugs submitted, builds that break, things that need to get looked at). Mails, mails and mails.

Don’t get me wrong. I prefer email to web forums and stuff because email allows me to participate in literally hundreds of communities from a single spot in an asynchronous manner. That’s a good thing. I would not be able to do the same thing if I had to use one of those “email killers” or web forums.

Unwanted email

I unsubscribe from lists that I grow tired of. I stamp down on spam really hard and I run aggressive filters and blacklists that actually make me receive rather few spam emails these days, percentage-wise. There are nowadays about 3,000 emails per month addressed to me that my mail server accepts that are then classified as spam by spamassassin. I used to receive a lot more before we started using better blacklists. (During some periods in the past I received well over a thousand spam emails per day.) Only 2-3 emails per day out of those spam emails fail to get marked as spam correctly and subsequently show up in my inbox.

Flood management

My solution to handling this steady high paced stream of incoming data is prioritization and putting things in different bins. Different inboxes.

  1. Filter incoming email. Save the email into its corresponding mailbox. At this very moment, I have about 30 named inboxes that I read. I read them in order, top to bottom as they’re sorted in roughly importance order (to me).
  2. Mails that don’t match an existing mailing list or topic that get stored into the 28 “topic boxes” run into another check: is the sender a known “friend” ? That’s a loose term I use, but basically means that the mail is from an email address that I have had conversations with before or that I know or trust etc. Mails from “friends” get the honor of getting put in mailbox 0. The primary one. If the mail comes from someone not listed as friend, it’ll end up in my “suspect” mailbox. That’s mailbox 1.
  3. Some of the emails get the honor of getting forwarded to a cloud email service for which I have an app in my phone so that I can get a sense of important mail that arrive. But I basically never respond to email using my phone or using a web interface.
  4. I also use the “spam level” of spam messages to save them in different spam boxes. The mailbox receiving the highest spam level emails is just erased at random intervals without ever being read (unless I’m tracking down a problem or something), and the “normal” spam mailbox I only check every once in a while just to make sure my filters are not hiding real mails in there.
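Steps 1 and 2 amount to a tiny classifier. A sketch (the box names and rules are my own illustration, not Daniel's actual filters):

```python
def classify(sender, list_id, topic_boxes, friends):
    """Pick a mailbox: a matching topic box first; otherwise box 0 for
    known senders ("friends") and the "suspect" box 1 for everyone else."""
    if list_id in topic_boxes:
        return topic_boxes[list_id]
    return "0-inbox" if sender in friends else "1-suspect"

# Hypothetical configuration:
topic_boxes = {"curl-users": "10-curl", "ietf-http-wg": "11-http"}
friends = {"alice@example.com"}

classify("bob@example.com", "curl-users", topic_boxes, friends)   # "10-curl"
classify("alice@example.com", None, topic_boxes, friends)         # "0-inbox"
classify("stranger@example.com", None, topic_boxes, friends)      # "1-suspect"
```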


I monitor my incoming mails pretty frequently all through the day – every day. My wife calls me obsessed and maybe I am. But I find it much easier to handle the emails a little at a time rather than to wait and have it pile up to huge lumps to deal with.

I receive mail at my own server and I read/write my email using Alpine, a text based mail client that really excels at allowing me to plow through vast amounts of email in a short time – something I can’t say that any UI or web based mail client I’ve tried has managed to do at a similar degree.

A snapshot from my mailbox from a while ago looked like this, with names and some topics blurred out. This is ‘INBOX’, which is the main and highest prioritized one for me.

alpine screenshot

I have set my mail client to automatically go to the next inbox when I’m done reading the current one. That makes me read them in priority order. I start with the INBOX one where supposedly the most important email arrives, then I check the “suspect” one and then I go down the topic inboxes one by one (my mail client moves on to the next one automatically), until either I get overwhelmed and just return to the main box for now, or I finish them all up.

I tend to try to deal with mails immediately, or I mark them as ‘important’ and store them in the main mailbox so that I can find them again easily and quickly.

I try to only keep mails around in my mailbox that concern ongoing topics, discussions or current matters of concern. Everything else should get stored away. It is hard work to maintain the number of emails there at a low number. As you all know.

Writing email

I averaged less than 200 emails written per month during 2015. That’s 6-7 per day.

That makes over 150 received emails for every email sent.

Allen Wirfs-BrockSlide Bite: Survival of the Fittest


The first ten or fifteen years of a computing era is a period of chaotic experimentation. Early product concepts rapidly evolve via both incremental and disruptive innovations. Radical ideas are tried. Some succeed and some fail. Survival of the fittest prevails. By mid-era, new stable norms should be established. But we can’t predict the exact details.

Andrew TruongExperience, Learn, Revitalize and Share: The Adventures of High School

High school was an adventure. This time around, I was in courses that I picked myself, not ones determined by someone else based on how I ranked the offered courses. At the end of junior high, every teacher left us with the phrase that we would no longer be with the people we usually hung out with. How true can that be? I am not able to say.

I started my first year of high school rough. However, I was able to adapt quite easily through attending leadership seminars every week. I started to get a little more involved with events around the school and eventually around the community. The teachers were far different from what our junior high teachers described them as. They weren't uncaring, they didn't leave you on your own, and they were helpful with finding your way around. They were the exact opposite of what our junior high teachers told us. Perhaps they told us that "lie" to prepare us, or maybe they went through something completely different during their time.

First year in, I had an assortment of classes and I felt good and at ease with them. I was fortunate enough to have every other day free in the second semester, when I was able to go to leadership and further enhance my life skills. On the regular days, I had a class where I was able to do homework and receive additional help when I needed it, due to the fact that I didn't do too well in junior high. Nonetheless, I excelled in the main course while wasting most of my time in the additional-help class.

Grade 11 rolled by, and I took a block (there are 4 blocks in a day) of my day during the first semester to go to leadership. There I was able to further enhance my abilities, be assigned responsibilities, and earn the trust of the department head. Furthermore, I ran for students' union president; though I was not successful, it may have benefited me instead. There's not much more to say, as things went a certain direction and it worked out quite well.

Into my last year of high school, there was a new development in our family and household. This year was extremely important, as I had to pass all courses in order to graduate and move on to post-secondary. I was satisfied with my first semester, where my courses went pretty well. I still took a block of my day out of the first semester to go to leadership. But this time, I took on the position of chairperson of Spirit Wear for the school year. Designing, advertising, and promoting what we had to sell was a wonderful journey. I also met some great people during my spare time in leadership, and I learned a lot more about myself and what I was doing wrong socially. That realization dawned upon me and led me to become who I am today.

The second semester came around the corner, and it was a roller coaster for me. For some odd reason, the course I had excelled in continuously the 2 years before was now giving me trouble. It was partially a leap from what I knew and had learned to something completely different. Part of the blame for this goes to the instructor; as I knew from how others had struggled with this particular teacher in the past, I would too, even though I told myself I wouldn't. I got through it with my ups and downs, despite being worried about whether or not I would be able to graduate and move on to post-secondary. In the end, I graduated and received my high school diploma.

Mark Surman: Firefox and Thunderbird: A Fork in the Road

Firefox and Thunderbird have reached a fork in the road: it’s now the right time for them to part ways on both a technical and organizational level.

In line with the process we started in 2012, today we’re taking another step towards the independence of Thunderbird. We’re posting a report authored by open source leader Simon Phipps that explores options for a future organizational home for Thunderbird. We’ve also started the process of helping the Thunderbird Council chart a course forward for Thunderbird’s future technical direction, by posting a job specification for a technical architect.

In this post, I want to take the time to go over the origins of Thunderbird and Firefox, the process for Thunderbird’s independence and update you on where we are taking this next. For those close to Mozilla, both the setting and the current process may already be clear. For those who haven’t been following the process, I wanted to write a longer post with all the context. If you are interested in that context, read on.


Much of Mozilla, including the leadership team, believes that focusing on the web through Firefox offers a vastly better chance of moving the Internet industry to a more open place than investing further in Thunderbird—or continuing to attend to both products.

Many of us remain committed Thunderbird users and want to see Thunderbird remain a healthy community and product. But both Firefox and Thunderbird face different challenges, have different goals and different measures of success. Our actions regarding Thunderbird should be viewed in this light.

Success for Firefox means continued relevance in the mass consumer market as a way for people to access, shape and feel safe across many devices. With hundreds of millions of users on both desktop and mobile, we have the raw material for this success. However, if we want Firefox to continue to have an impact on how developers and consumers interact with the Internet, we need to move much more quickly to innovate on mobile and in the cloud. Mozilla is putting the majority of its human and financial resources into Firefox product innovation.

In contrast, success for Thunderbird means remaining a reliable and stable open source desktop email client. While many people still value the security and independence that come with desktop email (I am one of them), the overall number of such people in the world is shrinking. In 2012, around when desktop email first became the exception rather than the rule, Mozilla started to reduce its investment and transitioned Thunderbird into a fully volunteer-run open source project.

Given these different paths, it should be no surprise that tensions have arisen as we’ve tried to maintain Firefox and Thunderbird on top of a common underlying code base and common release engineering system. In December, we started a process to deal with those release engineering issues, and also to find a long-term organizational home for Thunderbird.

The Past

On a technical level, Firefox and Thunderbird have common roots, emerging from the browser and email components of the Mozilla Application Suite nearly 15 years ago. When they were turned into separate products, they also maintained a common set of underlying software components, as well as a shared build and release infrastructure. Both products continue to be intertwined in this manner today.

Firefox and Thunderbird also share common organizational roots. Both were incorporated by the Mozilla Foundation in 2003, and from the beginning, the Foundation aimed to make these products successful in the mainstream consumer Internet market. We believed—and still believe—mass-market open source products are our biggest lever in our efforts to ensure the Internet remains a public resource, open and accessible to all.

Based on this belief, we set up Mozilla Corporation (MoCo) and Mozilla Messaging (MoMo) as commercial subsidiaries of the Mozilla Foundation. These organizations were each charged with innovating and growing a market: one in web access, the other in messaging. We succeeded in making the browser a mass market success, but we were not able to grow the same kind of market for email or messaging.

In 2012, we shut down Mozilla Messaging. That’s when Thunderbird became a purely volunteer-run project.

The Present

Since 2012, we have been doggedly focused on how to take Mozilla’s mission into the future.

In the Mozilla Corporation, we have tried to innovate and sustain Firefox’s relevance in the browser market while breaking into new product categories—first with smartphones, and now in a variety of connected devices.

In the Mozilla Foundation, we have invested in a broader global movement of people who stand for the Internet as a public resource. In 2016, we are focused on becoming a loud and clear champion on open internet issues. This includes significant investments in fuelling the open internet movement and growing a next generation of leaders who will stand up for the web.

These are hard and important things to do—and we have not yet succeeded at them to the level that we need to.

During these shifts, we invested less and less of Mozilla’s resources in Thunderbird, with the volunteer community developing and sustaining the product. MoCo continues to provide the underlying code and build and release infrastructure, but there are no dedicated staff focused on Thunderbird.

Many people who work on Firefox care about Thunderbird and do everything they can to accommodate Thunderbird as they evolve the code base, which slows down Firefox development when it needs to be speeding up. People in the Thunderbird community also remain committed to building on the Firefox codebase. This puts pressure on a small, dedicated group of volunteer coders who struggle to keep up. And people in the Mozilla Foundation feel similar pressure to help the Thunderbird community with donations and community management, which distracts them from the education and advocacy work that’s needed to grow the open internet movement on a global level.

Everyone has the right motivations, and yet everyone is stretched thin and frustrated. And Mozilla’s strategic priorities are elsewhere.

The Future

In late 2015, Mozilla leadership and the Thunderbird Council jointly agreed to:

a) take a new approach to release engineering, as a first step towards putting Thunderbird on the path towards technical independence from Firefox; and

b) identify the organizational home that will best allow Thunderbird to thrive as a volunteer-run project.

Mozilla has already posted a proposal for separating Thunderbird from Firefox release engineering infrastructure. In order to move the technical part of this plan further ahead and address some of the other challenges Thunderbird faces, we agreed to contract for a short period of time with a technical architect who can support the Thunderbird community as they decide what path Thunderbird should take. We have a request for proposals for this position here.

On the organizational front, we hired open source leader Simon Phipps to look at different long-term options for a home for Thunderbird, including: The Document Foundation, Gnome, Mozilla Foundation, and The Software Freedom Conservancy. Simon’s initial report will be posted today in the Thunderbird Planning online forum and is currently being reviewed by both Mozilla and the Thunderbird Council.

With the right technical and organizational paths forward, both Firefox and Thunderbird will have a better chance at success. We believe Firefox will evolve into something consumers need and love for a long time—a way to take the browser into experiences across all devices. But we need to move fast to be effective.

We also believe there’s still a place for stable desktop email, especially if it includes encryption. The Thunderbird community will attract new volunteers and funders, and we’re digging in to help make that happen. We will provide more updates as things progress further.

The post Firefox and Thunderbird: A Fork in the Road appeared first on Mark Surman.

Mike Taylor: String.prototype.contains, use your judgement

I was lurking on the darkweb (stackoverflow) looking for old bugs when I ran into this gem: "How can I check if one string contains another substring?".

Pretty normal question for people new to programming (like myself), and the #3 answer contains the following suggestion:

String.prototype.contains = function(it) { return this.indexOf(it) != -1; };

Totally does what the person was asking for. Good stuff.

(And as a result, the person who gave the answer is swimming in stackoverflow points—which is how you buy illegal things on the darkweb.)

The spooky part is back in 2011, in his response to this answer, zachleat linked to a classic Zakas post "Don’t modify objects you don’t own".

From the article,

Maintainable code is code that you don’t need to modify when the browser changes. You don’t know how browser developers will evolve existing browsers and the rate at which those evolutions will take place.

You might remember that ES6 tried to add String.prototype.contains, but it broke a number of sites (especially those using MooTools because the two implementations had different semantics) and had to be renamed to String.prototype.includes.

To the OP's credit, they came back with an edit:

Note: see the comments below for a valid argument for not using this. My advice: use your own judgement.

The morals to this story are obvious: the darkweb is as scary as they say. And Zach Leatherman might be a witch.

This Week In Rust: These Weeks in Rust 127

Hello and welcome to another multi-week issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

Crate of the Week

This week's Crate of the Week is owning_ref, which contains a reference type that can carry its owner with it. Thanks to Diwic for the suggestion!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

186 pull requests were merged in the last two weeks.

Notable changes

New Contributors

  • Alec S
  • Andrey Tonkih
  • c4rlo
  • David Hewitt
  • David Tolnay
  • Deepak Kannan
  • Gigih Aji Ibrahim
  • jocki84
  • Jonathan Turner
  • Kaiyin Zhong
  • Lukas Kalbertodt
  • Lukas Pustina
  • Maxim Samburskiy
  • Raph Levien
  • rkjnsn
  • Sander Maijers
  • Szabolcs Berecz

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Cow is still criminally underused in a lot of code bases

I suggest we make a new slogan to remedy this: "To err is human, to moo bovine." (I may or may not have shamelessly stolen this from this bug report)

so_you_like_donuts on reddit.

Thanks to killercup for the suggestion.

Submit your quotes for next week!

Daniel Stenberg: fcurl is fread and friends for URLs

This whole family of functions, fopen, fread, fwrite, fgets, fclose and more are defined in the C standard since C89. You can’t really call yourself a C programmer without knowing them and probably even using them in at least a few places.

The charm with these is that they’re standard, they’re easy to use and they’re available everywhere where there’s a C compiler.

A basic example that just reads a file from disk and writes it to stdout could look like this:

FILE *file;

file = fopen("hello.txt", "r");
if(file) {
  char buffer [256];
  while(1) {
    size_t rc = fread(buffer, 1,
                      sizeof(buffer), file);
    if(rc > 0)
      fwrite(buffer, 1, rc, stdout);
    else
      break;
  }
  fclose(file);
}

Imagine you’d like to switch this example, or one of your actual real-world programs that uses the fopen() family of functions to read or write files, to instead read and write files from and to the Internet using your favorite Internet protocols. How would you do that without having to change your code a lot or do a major refactoring job?

Enter fcurl

I’ve started to work on a library that provides a look-alike API with matching functions and behaviors, but that allows you to pass a URL to its fopen() equivalent instead of a file name. I call it fcurl. (Much inspired by the libcurl example fopen.c, which I wrote the first version of already back in 2002!)

It is of course open source and is powered by libcurl.

The project is in its early infancy. I think it would be interesting to try it out and I’ve mentioned the idea to a few people that have shown interest. I really can’t make this happen all on my own anyway so while I’ve created a first embryo, it will take some time before it gets truly useful. Help from others would be greatly appreciated of course.

Using this API, a version of the above example that instead reads data from an HTTPS site rather than a local file could look like:

FCURL *file;

file = fcurl_open("https://example.com/", "r"); /* placeholder URL */
if(file) {
  char buffer [256];
  while(1) {
    size_t rc = fcurl_read(buffer, 1,
                           sizeof(buffer), file);
    if(rc > 0)
      fwrite(buffer, 1, rc, stdout);
    else
      break;
  }
  fcurl_close(file);
}

And it could even read a local file using the file:// scheme.

Drop-in replacement

The idea here is to make the alternative functions have new names but as far as possible accept the same input arguments, return the same return codes and so on.

If we do it right, you could possibly even convert an existing program with just a set of #defines at the top without even having to change the code!

Something like this:

#define FILE FCURL
#define fopen(x,y) fcurl_open(x, y)
#define fclose(x) fcurl_close(x)

I think it is worth considering a way to provide an official macro set like that for those who’d like to switch easily (?) and quickly.

Fun things to consider

1. for non-scheme input, use normal fopen?

An interesting take is probably to make fcurl_open() treat input specified without a “scheme://” to be a local file, and then passed to fopen() instead under the hood. That would then enable even more code to switch to fcurl since all the existing use cases with local file names would just continue to work.


2. LD_PRELOAD

An interesting area of deeper research around this could be to provide LD_PRELOAD-able replacements for the functions, so that not even any source code would need to be changed and already-built existing binaries could be given this functionality.

3. fopencookie

There’s also the GNU libc’s fopencookie concept to figure out if that is something for fcurl to support/use. BSD and OS X have something similar called funopen.

4. merge in official libcurl

If this turns out useful, appreciated and good, we could consider moving the API in under the curl project’s umbrella and possibly eventually even making it part of the actual libcurl. But hey, we’re far away from that and I’m not saying that is even the best idea…

Your input is valuable

Please file issues or pull-requests. Let’s see where we can take this!

Michael Kohler: Reps Council Working Days Berlin 2016

From April 15th through April 17th the Mozilla Reps Council met in Berlin together with the Participation Team to discuss the working groups and overall strategy topics. Unfortunately I couldn’t attend on Friday (working day 1) since I had to take my exams, so I could only attend Saturday and Sunday. Nevertheless I think I could help out a lot and definitely learned a lot doing this :) This blog post reflects my personal opinions; the others will write blog posts as well to give you a more concise view of this weekend.


Alignment Working Group

The first session on Saturday was about the Alignment WG. Before the weekend we (more or less) finished the proposal. This allowed us to discuss the last few open questions, which are now all integrated in the proposal. It will only need review by Konstantina to make sure I haven’t forgotten to add anything from the session, and then we can start implementing it. We are sure that this will formalize the interaction between Mozilla goals and Reps goals. Stay tuned for more information; we’re currently working on a communication strategy for all the RepsNext changes to make it easier and more fun for you to get informed about them.

Meta Working Group

For the Meta Working Group we had more open questions and therefore decided to do brainstorming in three teams. The questions were:

  • Who can join Council?
  • Which recognition mechanisms should be implemented now?
  • How does accountability look in Reps?

We’re currently documenting the findings in the Meta Working Group proposal, but we will probably need some more time to figure everything out perfectly. Keep an eye on the Discourse topic in case we need more feedback from you all!

Identity Working Group

A new working group? As you can see, I didn’t believe it at first, and Rara was visibly shocked!

Fun aside, yes, we’ll start a new working group around the topics of outward communication and the Reps program’s image. During our discussions on Saturday, we came up with a few questions that we will need to answer. This Friday we had our first call; follow along in the Discourse topic, as it’s not too late to help out here! Please get involved as soon as possible to shape the future of Reps!

Communication Session

On Sunday we ran a joint session with the rest of the Participation team around the topic “How we work together”. We came up with the questions above and had them answered and brainstormed in groups. I started to document the findings yesterday, but they are not yet in a state where they will be useful to anybody. Stay tuned for more communication around this (communication about communication, isn’t it fun? :)). The last question, “How might we improve the communication between the Participation Team and the Council?”, is already documented in the Alignment Working Group proposal. Further, the Identity Working Group will tackle and elaborate on the question around visibility.

Reps Roadmap for 2016

Wait, there is a roadmap?


At the end of our sessions we put up a timeline for Reps for all our different initiatives on a wall. Within the next days we’ll work on this to have it digitally, month by month. For now, we have started to create GitHub issues in the Reps repo. Stay tuned for more information about this; the current state might confuse you since we haven’t updated all issues yet! It basically includes everything from RepsNext proposal implementations to London Work Week preparations to Council elections.


This weekend showed that we currently have an amazing, hard-working Council. It also showed that we’re on track with all the RepsNext work and that we can do a lot once we all work together and have Working Groups to involve all Reps as well.

Looking forward to the next months! If you haven’t yet, have a look at the Reps Discourse category, to keep yourself updated on Reps related topics and the working groups!

The other Council members will write their blog posts in the next few days as well; keep an eye out for links in our Reps issues. Once again, there are a lot of changes to be implemented and discussed, and we are working on a strategy for that. We believe that just pointing to all the proposals is not easy enough, so we will come up with fun ways to chime in on these and fully understand them. Nevertheless, if you have questions about anything I wrote here, feel free to reach out to me!

Credit: all pictures were taken by our amazing photographer Christos!

Michael Kohler: Mozilla Switzerland IoT Hackathon in Lausanne

On April 2nd 2016 we held a small IoT Hackathon in Lausanne to brainstorm about the Web and IoT. This was aligned with the new direction that Mozilla is taking on.

We started to organize the Hackathon on GitHub, so everyone could participate. Geoffroy was really helpful in organizing the space for it at Liip. Thanks a lot to them; without them, organizing our events would be way harder!

The Hackathon
We expected more people to come but, as mentioned above, this was our first self-organized event in the French-speaking part of Switzerland. Nevertheless, we were four people with an interest in hacking something together.

Geoffroy and Paul started to have a look at Vaani.iot, one of the projects that Mozilla is currently pushing on. They started to build it on their laptops, unfortunately the Vaani documentation is not good enough yet to see the full picture and what you could do with it. We’re planning to send some feedback regarding that to the Vaani team.

In the meantime, Martin and I set up my Raspberry Pi and together wrote a small script that reads the temperature from one of the sensors. Once we’d done that, I created a small API that returns the temperature in JSON format.

At this point, we decided we wanted to connect those two pieces and create a Web app that reads out the temperature and announces it through voice. Since we couldn’t get Vaani working, we decided to use the WebSpeech API for this. The voice output part is available in Firefox and Chrome right now, so we could achieve this goal without using any non-standard APIs. After that, Geoffroy played around with the voice input feature of this API. This is currently only working in Chrome, but there is a bug to implement it in Firefox as well. In the spirit of the open web, we decided to ignore the fact that we need to use Chrome for now, and create a feature that is built on Web standards that are on track to standardization.
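As a rough sketch of that voice-output step (the function names and the JSON field are assumptions for illustration, not taken from our actual code):

```javascript
// Hypothetical sketch: announce a temperature reading, parsed from the
// JSON API, via the standard SpeechSynthesis part of the Web Speech API.
function formatAnnouncement(reading) {
  // Turn a parsed JSON reading into the sentence we want spoken.
  return `The temperature is ${reading.celsius.toFixed(1)} degrees Celsius`;
}

function announceTemperature(reading) {
  // SpeechSynthesisUtterance is the standard voice-output interface,
  // available in both Firefox and Chrome.
  const utterance = new SpeechSynthesisUtterance(formatAnnouncement(reading));
  utterance.lang = "en-US";
  window.speechSynthesis.speak(utterance);
}
```

Fetching the temperature endpoint and passing the parsed JSON to announceTemperature would wire the Raspberry Pi readings to the voice output.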

In the end, we did achieve something together and definitely took away some good learnings.

Lessons learned

  • Organizing a hackathon for the first time in a new city is not easy
  • We probably need to establish an “evening-only” meetup series first, so we can attract participants that identify with us
  • We could use this opportunity to document the Liip space in Lausanne for future events on our Events page on the wiki
  • Not all projects are well documented, we need to work on this!

After the Hackathon

Since I needed to do a project for my studies that involves hardware as well, I took the opportunity to keep the sensors for my project.

You can find the source code in the MozillaCH GitHub organization. It currently reads the two temperature sensors at regular intervals and checks whether the movement sensor has registered any movement. If the temperature difference is too high, it sends an alarm to the NodeJS backend; the same goes for when it detects movement. I see this as a first step towards my own take on a smart home, though it would need a lot of work and more sensors to be truly useful.
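As a sketch of the alarm logic described above (the threshold value, names, and return shape are assumptions, not taken from the actual repository):

```javascript
// Hypothetical sketch of the alarm check on the sensor side.
const ALARM_THRESHOLD = 5; // assumed: max allowed difference in degrees Celsius

function checkReadings(sensorA, sensorB, movementDetected) {
  const alarms = [];
  if (Math.abs(sensorA - sensorB) > ALARM_THRESHOLD) {
    alarms.push("temperature-difference");
  }
  if (movementDetected) {
    alarms.push("movement");
  }
  return alarms; // each entry would be sent to the NodeJS backend
}
```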




Daniel Pocock: LinuxWochen, MiniDebConf Vienna and Linux Presentation Day

Over the coming week, there are a vast number of free software events taking place around the world.

I'll be at the LinuxWochen Vienna and MiniDebConf Vienna, the events run over four days from Thursday, 28 April to Sunday, 1 May.

At MiniDebConf Vienna, I'll be giving a talk on Saturday (schedule not finalized yet) about our progress with free Real-Time Communications (RTC) and welcoming 13 new GSoC students (and their mentors) working on this topic under the Debian umbrella.

On Sunday, Iain Learmonth and I will be collaborating on a workshop/demonstration on Software Defined Radio from the perspective of ham radio and the Debian Ham Radio Pure Blend. If you want to be an active participant, an easy way to get involved is to bring an RTL-SDR dongle. It is highly recommended that instead of buying any cheap generic dongle, you buy one with a high quality temperature compensated crystal oscillator (TXCO), such as those promoted by

Saturday, 30 April is also Linux Presentation Day in many places. There is an event in Switzerland organized by the local FSFE group in Basel.

DebConf16 is only a couple of months away now. Registration is still open and the team is keenly looking for additional sponsors. Sponsors are a vital part of such a large event; if your employer or any other organization you know benefits from Debian, please encourage them to contribute.

Hal Wine: Enterprise Software Writers R US

Someone just accused me of writing Enterprise Software!!!!!

Well, the “someone” is Mahmoud Hashemi from PayPal, and I heard him on the Talk Python To Me podcast (episode 54). That whole episode is quite interesting - go listen to it.

Mahmoud makes a good case, presenting nine “hallmarks” of enterprise software (the more that apply, the more “enterprisy” your software is). Most of the work RelEng does easily hits 7 of the points. You can watch Mahmoud define Enterprise Software for free by following the link from his blog entry (link is 2.1 in table of contents). (It’s part of his “Enterprise Software with Python” course offered on O’Reilly’s Safari.) One advantage of watching his presentation is that PayPal’s “Mother of all Diagrams” make ours look simple! (Although “blue spaghetti” is probably tastier.)

Do I care about “how enterprisy” my work is? Not at all. But I do like the way Mahmoud explains the landscape and challenges of enterprise software. He makes it clear, in the podcast, how acknowledging the existence of those challenges can inform various technical decisions. Such as choice of language. Or need to consider maintenance. Or – well, just go listen for yourself.

Myk Melez: Project Positron

Along with several colleagues, I recently started working on Project Positron, an effort to build an Electron-compatible runtime on top of the Mozilla technology stack (Gecko and SpiderMonkey). Mozilla has long supported building applications on its stack, but the process is complex and cumbersome. Electron development, by comparison, is a dream. We aim to bring the same ease-of-use to Mozilla.

Positron development is proceeding along two tracks. In the SpiderNode repository, we’re working our way up from Node, shimming the V8 API so we can run Node on top of SpiderMonkey. Ehsan Akhgari details that effort in his Project SpiderNode post.

In the Positron repository, we’re working our way down from Electron, importing Electron (and Node) core modules, stubbing or implementing their native bindings, and making the minimal necessary changes (like basing the <webview> element on <iframe mozbrowser>) so we can run Electron apps on top of Gecko. Eventually we aim to join these tracks, even though we aren’t yet sure exactly where the last spike will be located.

It’s early days. As Ehsan noted, SpiderNode doesn’t yet link the Node executable successfully, since we haven’t shimmed all the V8 APIs it accesses. Meanwhile, Positron supports only a tiny subset of the Electron and Node APIs.

Nevertheless, we reached a milestone today: the tip of the Positron trunk now runs the Electron Quick Start app described in the Electron tutorial. That means it can open a BrowserWindow, hook up a DevTools window to it (with Firefox DevTools integration contributed by jryans), and handle basic application lifecycle events. We’ve imported that app into the Positron repository as our “hello world” example.

Clone and build Positron to see it in action!


About:Community: Firefox 46 new contributors

With the release of Firefox 46, we are pleased to welcome the 37 developers who contributed their first code change to Firefox in this release, 31 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Andrew Truong: Experience, Learn, Revitalize and Share: Junior High in a Nutshell

The transition into junior high was a somewhat complex one for me. I had trouble adapting to a new environment where there were more students, classrooms, teachers, etc. I struggled during the first 2 of the 3 years. The style of teaching I was used to in elementary school was different from junior high, and the treatment from teachers was absolutely different, and not in a positive way. I tried hard to adapt, bring my marks up to a level where I wanted them to be, and have a balance between school and social life.

Part of what made it difficult is that I was keen on following the same way of doing things I was used to in elementary school. I was also reluctant to change, or even to recognize the need for change. The most crucial part was that I kept telling myself to just be myself, but I did that in the wrong way, which caused more harm than good. More so, I started doing things that were unacceptable, though not embarrassing. It resulted in me being down in the office speaking with the AP or Principal on a few occasions. One of the incidents that was frowned upon then wouldn't be frowned upon now, in our digital age. There are times where I wish I could just go back and change things, but at the same time: life moves on. What has happened, happened.

The other part which made it difficult was that, in my opinion, I didn't have wonderful teachers. Not all of them were bad, but a handful of them just didn't click for me. I could say, to a certain extent, that they picked on me at times simply because I did not like class discussions (I still don't). More so, in grade 8, teachers would know every student's name in the class except mine; I'm not sure what the issue was, or what the deal with that was. It wasn't just one, but 2 teachers that did it all the time. They were corrected from time to time, but it never clicked for them that this was the reason other classmates laughed out loud every time it happened.

What changed me, however, took place over the summer break before I went into grade 9. At that point, I discovered the opportunity to volunteer and contribute to Mozilla. I started off with live chat on SUMO, which paved the way for me to improve my writing skills and grammar by contributing to the knowledge base.

As I started my last year of junior high, the other thing I was lucky to be part of was leadership. I was fortunate enough to be enrolled in that class so that I was able to find myself and be myself. In leadership, students are encouraged to help each other out, work in groups and teams, boost their enthusiasm and self-esteem, and help organize school events. I loved it! I was able to see my potential, and the potential others had. These 2 factors allowed me to become more successful in my studies and day-to-day life, along with my contributions to Mozilla. Things started to dawn on me, and I was able to figure out what I had done wrong and how I could take a different, better approach if similar situations arose again.

Unfortunately, even though there are positives, there will be negatives. Not everything worked out to be a miracle. There were 2 situations where I had issues with my teachers.

The first was a teacher who wanted things done her way only. If you solved a homework or test question a different way but arrived at the same answer, and could do so for other questions as well, you were still wrong. You had to do it one specific way for it to be right. As always, there are two ways of thinking about this, but as we've progressed, we find that there are multiple approaches to reaching a result, and it doesn't have to be done in a set-in-stone way.

The second was a case where I knew I didn't have the talent or ability to complete something and required help. I tried and tried through the whole semester to achieve what was being taught in the class. On the very last day, I didn't expect to take it any further, but one thing led to another, and I wasn't happy with the teacher, nor was he happy with me. In the end, I spoke with my favourite AP, who was also a teacher of mine; she agreed with what I said and we ended it there.

There are always two sides to a story, but I can only reveal so much without it hurting me in the long run. I'm being deliberately vague, as I don't want to hurt my reputation, nor do I want an investigation to be launched. The sole purpose of this blog post is to share what I experienced in junior high, and how I was able to progress and find myself through the power of leadership.

Christian HeilmannTurning a community into evangelism helpers – DevRelCon Notes

These are the notes of my talk at DevRelCon in San Francisco. “Turning a community into evangelism helpers” covered how you can scale your evangelism/advocacy efforts. The trick is to give up some of the control and share your materials with the community. Instead of being the one who brings the knowledge, you're the one who shares it and coaches people on how to use it.


Why include the community?

First of all, we have to ask ourselves why we should include the community in our evangelism efforts. Being the sole source of information about your products can be beneficial. It is much easier to control the message and create materials that way. But, it limits you to where you can physically be at one time. Furthermore, your online materials only reach people who already follow you.

Sharing your materials and evangelism efforts with a community reaps a lot of benefits:

  • You cut down on travel – whilst it is glamorous to rack up the air miles and live the high life of lounges and hotels it also burns you out. Instead of you traveling everywhere, you can nurture local talent to present for you. A lot of conferences will want the US or UK presenter to come to attract more attendees. You can use this power to introduce local colleagues and open doors for them.
  • You reach audiences that are beyond your reach – often it is much more beneficial to speak in the language and the cultural background of a certain place. You can do your homework and get translations. But, there is nothing better than a local person delivering in the right format.
  • You avoid being a parachute presenter – instead of dropping out of the sky, giving your talk and then vanishing without being able to keep up with the workload of answering requests, you introduce a local counterpart. That way people get answers to their requests after you left in a language and format they understand. It is frustrating when you have no time to answer people or you just don’t understand what they want.

Share, inspire, explain

Start by making yourself available beyond the “unreachable evangelist”. You're not a rockstar; don't act like one. Share your materials and the community will take them on. That way you can share your workload. Break down the barrier between you and your community by sharing everything you do. Break down your community's fears by listening and amplifying things that impress you.

Make yourself available and show you listen

  • Have a repository of slide decks in an editable format – besides telling your community where you will be and sharing the videos of your talks also share your slides. That way the community can re-use and translate them – either in part or as a whole.
  • Share out interesting talks and point out why they are great – that way you show that there is more out there than your company materials. And you advertise other presenters and influencers for your community to follow. Give a lot of details here to show why a talk is great. In Mozilla I did this as a minute-by-minute transcript.
  • Create explanations for your company products, including demo code and share it out with the community – the shorter and cleaner you can keep these, the better. Nobody wants to talk over a 20 minute screencast.
  • Share and comment on great examples from community members – this is the big one. It encourages people to do more. It shows that you don’t only throw content over the wall, but that you expect people to make it their own.

Record and teach recording

Keeping a record of everything you do is important. It helps you to get used to your own voice and writing style and see how you can improve over time. It also means that when people ask you later about something you have a record of it. Ask for audio and video recordings of your community presenting to prepare for your one on one meetings with them. It also allows you to share these with your company to show how your community improves. You can show them to conference organisers to promote your community members as prospective speakers.

Recordings are great

  • They show how you deliver some of the content you talked about
  • They give you an idea of how much coaching a community member needs to become a presenter
  • They allow people to get used to seeing themselves as they appear to others
  • You create reusable content (screencasts, tutorials), that people can localise and talk over in presentations

Often you will find that a part of your presentation can inspire people. It makes them aware of how to deliver a complex concept in an understandable manner. And it isn’t hard to do – get Camtasia or Screenflow or even use Quicktime. YouTube is great for hosting.

Avoid the magical powerpoint

One thing both your company and your community will expect you to create is a “reusable PowerPoint presentation” – one that people can deliver over and over again. This is a mistake we've been making for years. Of course, there are benefits to having one of those:

  • You have a clear message – a PowerPoint reviewed by HR, PR and branding makes sure there are no communication issues.
  • You have a consistent look and feel – and no surprises of copyrighted material showing up in your talks
  • People don’t have to think about coming up with a talk – the talking points are there, the soundbites hidden, the tweetable bits available.

All these are good things, but they also make your presentations boring as toast. They don’t challenge the presenter to own the talk and perform. They become readers of slides and notes. If you want to inspire, you need to avoid that at all cost.

You can have the cake of good messaging and eat it, too. Instead of having a full powerpoint to present, offer your community a collection of talking points. Add demos and screencasts to remix into their own presentations.

There is merit in offering presentation templates, though. It can be daunting to look at a blank screen and have to choose fonts, sizes and colours. Offering a simple but beautiful template to use avoids that nuisance.

What I did in the past was offer an HTML slide deck on GitHub that had introductory slides for different topics, followed by annotated content slides showing how to present parts of each topic. Putting it up on GitHub helped the community add to it, translate it, and fork their own presentations. In other words, I helped them on the way, but expected them to find their own story arc and make it relevant for their audience and their style of presenting.

Delegate and introduce

Delegation is the big win whenever you want to scale your work. You can’t reap the rewards of the community helping you without trusting them. So, stop doing everything yourself and instead delegate tasks. What is annoying and boring to you might be a great new adventure for someone else. And you can see them taking your materials into places you hadn’t thought of.

Delegate tasks early and often

Here are some things you can easily delegate:

  • Translation / localisation – you don’t speak all the languages. You may not be aware that your illustration or your use of colour is offensive in some countries.
  • Captioning and transcription of demo videos – this takes time and effort. It is annoying for you to describe your own work, but it is a great way for future presenters to memorise it.
  • Demo code cleanup / demo creation – you learn by doing, it is that simple.
  • Testing and recording across different platforms/conditions – your community has different setups from what you have. This is a good opportunity to test and fix your demos with their hardware.
  • Maintenance of resources – in the long run, you don’t want to be responsible for maintaining everything. The earlier you get people involved, the smoother the transition will be.

Introduce local community members

Sharing your content is one thing. The next level is to also share your fame. You can use your schedule and bookings to help your community:

  • Mention them in your talks and as a resource to contact – you avoid disappointing people by never coming back to them. And it shows your company cares about the place you speak at.
  • Co-present with them at events – nothing better to give some kudos than to share the stage
  • Introduce local companies/influencers to your local counterpart – the next step in the introduction cycle. This way you have something tangible to show to your company. It may be the first step for that community member to get hired.
  • Once trained up, tell other company departments about them. – this is the final step to turn volunteers into colleagues.

Set guidelines and give access

You give up a lot of control and you show a lot of trust when you start scaling by converting your community. In order not to cheapen that, make sure you also define guidelines. Being part of this should not be a medal for showing up – it should become something to aim for.

  • Define a conference playbook – if someone speaks on behalf of your company using your materials, they should also have deliverables. Failing to deliver these means they get less or no support in the future.
  • Offer 1:1 training in various levels as a reward – instead of burning yourself out by training everyone, have self-training materials that people can use to get themselves to the next level
  • Have a defined code of conduct – your reputation is also at stake when one of your community members steps out of line
  • Define benefits for participation – giving x number of talks gets you y, writing x amount of demos y amount of people use give you the same, and so on.

Official channels > Personal Blogs

Often people you train want to promote their own personal channels in their work. That is great for them. But it is dangerous to mix their content with content created on work time by someone else. This needs good explanation. Make sure to point out to your community members that their own brand will grow with the amount of work they delivered and the kudos they got for it. Also explain that by separating their work from your company’s, they have a chance to separate themselves from bad things that happen on a company level.

Giving your community members access to the official company channels and making sure their content goes there has a lot of benefits:

  • You separate personal views from company content
  • You control the platform (security, future plans…)
  • You enjoy the reach and give kudos to the community member.

You don’t want to be in the position to explain a hacked blog or outrageous political beliefs of a community member mixed with your official content. Believe me, it isn’t fun.

Communicate sideways and up

This is the end game. To make this sustainable, you need full support from your company.

For sustainability, get company support

The danger of programs like this is that they cost a lot of time and effort and don’t yield immediate results. This is why you have to be diligent in keeping your company up-to-date on what’s happening.

  • Communicate out successes company-wide – find the right people to tell about successful outreach into markets you couldn’t reach but the people you trained could. Tell all about it – from engineering to marketing to PR. Any of them can be your ally in the future.
  • Get different company departments to maintain and give input to the community materials – once you got community members to talk about products, try to get a contact in these departments to maintain the materials the community uses. That way they will be always up to date. And you don’t run into issues with outdated materials annoying the company department.
  • Flag up great community members for hiring as full-time devrel people

The perfect outcome of this is to convert community members into employees. This matters to the company, as getting people through the door is expensive, and already-trained employees hit the ground running. It also shows that investing your time in volunteer evangelism pays off in the long run. It can also be a great career move for you: people hired through this outreach are likely to become your reports.

Mark CôtéHow MozReview helps

A great post on code review is making its rounds. It’s started some discussion amongst Mozillians, and it got me thinking about how MozReview helps with the author’s points. It’s particularly interesting because apparently Twitter uses Review Board for code reviews, which is a core element of the whole MozReview system.

The author notes that it’s very important for reviewers to know what reviews are waiting on them, but also that Review Board itself doesn’t do a good job of this. MozReview fixes this problem by piggybacking on Bugzilla’s review flags, which have a number of features built around them: indicators, dashboards, notification emails, and reminder emails. People can even subscribe to the reminders for other reviewers; this is a way managers can ensure that their teams are responding promptly to review requests. We’ve also toyed around with the idea of using push notifications to notify people currently using Bugzilla that they have a new request (also relevant to the section on being “interrupt-driven”).

On the submitter side, MozReview’s core support for microcommits—a feature we built on top of Review Board, within our extensions—helps “keep reviews as small as possible”. While it’s impossible to enforce small commits within a tool, we’ve tried to make it as painless as possible to split up work into a series of small changes.

The MozReview team has made progress on automated static analysis (linters and the like), which helps submitters verify that their commits follow stylistic rules and other such conventions. It will also shorten review time, as the reviewer will not have to spend time pointing out these issues; when the review bots have given their r+s, the reviewer will be able to focus solely on the logic. As we continue to grow the MozReview team, we’ll be devoting some time to finishing up this feature.

Armen ZambranoThe Joy of Automation

This post announces The Joy of Automation YouTube channel, where you can watch presentations about automation work by Mozilla's Platform Operations. I hope more folks than just me will share their videos there.

This follows the idea that mconley started with The Joy of Coding and his livehacks.
At the moment there are only “Unscripted” videos of me hacking away. I hope one day to do live hacks, but for now they're offline videos.

Mistakes I made, which any Platform Ops member wanting to contribute may want to avoid:

  • Lower the volume of the background music
  • Find a source of music without ads and with music that would not block certain countries from seeing it (e.g. Germany)
  • Do not record in .flv format, since most video editing software does not handle it
  • Add an intro screen so you don't see me hiding OBS
  • Have multiple bugs to work on in case you get stuck in the first one

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Benjamin BouvierMaking asm.js/WebAssembly compilation more parallel in Firefox

In December 2015, I worked on reducing the startup time of asm.js programs in Firefox by making compilation more parallel. As our JavaScript engine, SpiderMonkey, uses the same compilation pipeline for both asm.js and WebAssembly, this also benefitted WebAssembly compilation. Now is a good time to talk about what that means, how it was achieved, and what ideas could make it even faster.

What does it mean to make a program "more parallel"?

Parallelization consists of splitting a sequential program into smaller independent tasks, then having them run on different CPU cores. If your program uses N cores, it can be up to N times faster.

Well, in theory. Let's say you're in a car, driving on a 100 km long road. You've already driven the first 50 km in one hour. Let's say your car can have unlimited speed from now on. What is the maximal average speed you can reach by the end of the road?

People intuitively answer that since the car can go as fast as they want, something near light speed sounds plausible. But this is not true! Even if you could teleport from your current position to the end of the road, you'd have traveled 100 km in one hour, so your maximal theoretical average speed is 100 km per hour. This result is a consequence of Amdahl's law. Back to our initial problem: you can expect an N-times speedup from running your program on N cores if, and only if, your program can be entirely run in parallel. This is usually not the case, which is why most wording refers to speedups of up to N times, when it comes to parallelization.
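Put as a formula, Amdahl's law says that if a fraction p of a program can be parallelized, the speedup on N cores is at most 1 / ((1 − p) + p/N). A tiny Python sketch (the helper name is mine, purely for illustration) makes the car example concrete:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Upper bound on speedup per Amdahl's law."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)

# The car metaphor: half the trip (50 of 100 km) is "parallelizable"
# (driven at unlimited speed). Even with effectively infinite cores,
# the overall speedup is capped at 2x: 100 km in 1 h instead of 2 h.
print(amdahl_speedup(0.5, 10**9))   # ~2.0, never more
print(amdahl_speedup(0.9, 8))       # a 90% parallel program on 8 cores
```

Note how quickly the sequential fraction dominates: even a program that is 90% parallel tops out well below 8x on 8 cores.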

Now, say your program is already running some portions in parallel. To make it faster, one can identify some parts of the program that are sequential, and make them independent so that you can run them in parallel. With respect to our car metaphor, this means augmenting the portion of the road on which you can run at unlimited speed.

This is exactly what we have done with parallel compilation of asm.js programs under Firefox.

A quick look at the asm.js compilation pipeline

I recommend reading this blog post. It clearly explains the differences between JIT (Just In Time) and AOT (Ahead Of Time) compilation, and elaborates on the different parts of the engine involved in the compilation pipeline.

As a TL;DR, keep in mind that asm.js is a strictly validated, highly optimizable, typed subset of JavaScript. Once validated, it guarantees high performance and stability (no garbage collector involved!). That is ensured by mapping every single JavaScript instruction of this subset to a few CPU instructions, if not only a single instruction. This means an asm.js program needs to get compiled to machine code, that is, translated from JavaScript to the language your CPU directly manipulates (like what GCC would do for a C++ program). If you haven't heard, the results are impressive and you can run video games directly in your browser, without needing to install anything. No plugins. Nothing more than your usual, everyday browser.

Because asm.js programs can be gigantic in size (in number of functions as well as number of lines of code), the first compilation of the entire program is going to take some time. Afterwards, Firefox uses a caching mechanism that avoids recompilation and loads the code almost instantaneously, so subsequent loads matter less*. The end user mostly waits for the first compilation, so that one needs to be fast.

Before the work explained below, the pipeline for compiling a single function (out of an asm.js module) would look like this:

  • parse the function, and as we parse, emit intermediate representation (IR) nodes for the compiler infrastructure. SpiderMonkey has several IRs, including the MIR (middle-level IR, mostly loaded with semantic) and the LIR (low-level IR closer to the CPU memory representation: registers, stack, etc.). The one generated here is the MIR. All of this happens on the main thread.
  • once the entire IR graph is generated for the function, optimize the MIR graph (i.e. apply a few optimization passes). Then, generate the LIR graph before carrying out register allocation (probably the most costly task of the pipeline). This can be done on supplementary helper threads, as the MIR optimization and LIR generation for a given function doesn't depend on other ones.
  • since functions can call between themselves within an asm.js module, they need references to each other. In assembly, a reference is merely an offset to somewhere else in memory. In this initial implementation, code generation is carried out on the main thread, at the cost of speed but for the sake of simplicity.

So far, only the MIR optimization passes, register allocation and LIR generation were done in parallel. Wouldn't it be nice to be able to do more?

* There are conditions for benefitting from the caching mechanism. In particular, the script should be loaded asynchronously and it should be sufficiently large.

Doing more in parallel

Our goal is to do more work in parallel: can we take MIR generation off the main thread? And can we take out code generation as well?

The answer happens to be yes to both questions.

For the former, instead of emitting a MIR graph as we parse the function's body, we emit a small, compact, pre-order representation of the function's body. In short, a new IR. As work was starting on WebAssembly (wasm) at this time, and since asm.js semantics and wasm semantics mostly match, the IR could just be the wasm encoding, consisting of the wasm opcodes plus a few specific asm.js ones*. Then, wasm is translated to MIR in another thread.

So instead of parsing and generating MIR in a single pass, we now parse and generate wasm IR in one pass, and generate the MIR out of the wasm IR in another pass. The wasm IR is very compact and much cheaper to generate than a full MIR graph, because generating a MIR graph requires some algorithmic work, including the creation of Phi nodes (which join values after any form of branching). As a result, compilation time was not expected to suffer. This was a large refactoring: taking every single asm.js instruction, encoding it in a compact way, and later decoding it into the equivalent MIR nodes.

For the second part, could we generate code on other threads? One structure in the code base, the MacroAssembler, is used to generate all the code and it contains all necessary metadata about offsets. By adding more metadata there to abstract internal calls **, we can describe the new scheme in terms of a classic functional map/reduce:

  • the wasm IR is sent to a thread, which will return a MacroAssembler. That is a map operation, transforming an array of wasm IR into an array of MacroAssemblers.
  • When a thread is done compiling, we merge its MacroAssembler into one big MacroAssembler. Most of the merge consists of taking all the offset metadata in the thread's MacroAssembler, fixing up all the offsets, and concatenating the two generated code buffers. This is equivalent to a reduce operation, merging each MacroAssembler into the module's one.

At the end of the compilation of the entire module, there is still some light work to be done: offsets of internal calls need to be translated to their actual locations. All this work has been done in this bugzilla bug.
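A toy Python sketch of that map/reduce shape could look like the following. All names here (MacroAssembler, compile_function, merge) mirror the description above but are illustrative stand-ins, not SpiderMonkey's actual classes:

```python
from concurrent.futures import ThreadPoolExecutor

class MacroAssembler:
    def __init__(self, code=b"", call_sites=None):
        self.code = code                    # generated machine code buffer
        self.call_sites = call_sites or []  # offsets of internal calls

def compile_function(wasm_ir):
    # Map step: each function's wasm IR is compiled on a helper thread into
    # its own MacroAssembler ("codegen" faked here as one byte per function).
    return MacroAssembler(code=bytes([len(wasm_ir)]), call_sites=[0])

def merge(module_masm, masm):
    # Reduce step: shift the thread's offset metadata past the code that has
    # already been merged, then concatenate the code buffers.
    base = len(module_masm.code)
    module_masm.call_sites += [base + off for off in masm.call_sites]
    module_masm.code += masm.code
    return module_masm

functions = [[1, 2], [3], [4, 5, 6]]  # pretend wasm IR, one list per function
with ThreadPoolExecutor() as pool:
    masms = list(pool.map(compile_function, functions))  # map, in parallel

module = MacroAssembler()
for m in masms:
    module = merge(module, m)  # reduce on the main thread
```

The point of the structure is that the expensive map step runs on helper threads, while the cheap reduce step (offset fix-ups and buffer concatenation) is all that remains sequential.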

* In fact, at the time when this was being done, we used a different superset of wasm. Since then, work has been done so that our asm.js frontend is really just another wasm emitter.

** referencing functions by their appearance-order index in the module, rather than an offset to the actual start of the function. This order is stable from one function to the next.


Benchmarking was done on a Linux x64 machine with 8 cores clocked at 4.2 GHz.

First, compilation times of a few asm.js massive games:

The X axis is the compilation time in seconds, so lower is better. Each data point is the best of three runs. For the new scheme, the corresponding relative speedup (as a percentage) has been added:

Compilation times of various benchmarks

For all games, compilation is much faster with the new parallelization scheme.

Now, let's go a bit deeper. The Linux CLI tool perf has a stat command that gives you the average number of utilized CPUs during the program's execution. This is a great measure of threading efficiency: the more the CPUs are utilized, the less they sit idle waiting for other results, and the more useful work they do. For a task of constant execution time, the more CPUs are utilized, the more quickly the program is likely to finish.

The X axis is the number of utilized CPUs, according to the perf stat command, so higher is better. Again, each data point is the best of three runs.

CPU utilized on DeadTrigger2

With the older scheme, the number of utilized CPUs rises quickly from 1 to 4 cores, then more slowly from 5 cores onward. Intuitively, this means that with 8 cores, we almost reached the theoretical limit of the portion of the program that can be made parallel (not considering the overhead introduced by parallelization, or altering the scheme).

But with the newer scheme, we get much more CPU usage even after 6 cores! It then slows down a bit, although still less sharply than the older scheme's slow rise. So it is likely that with even more threads, we could get even better speedups than those mentioned above. In fact, we have moved the theoretical limit a bit further: we have expanded the portion of the program that can be made parallel. Or, to keep using the car/road metaphor, we've shortened the constant-speed portion of the road to the benefit of the unlimited-speed portion, resulting in a shorter trip overall.

Future steps

Despite these improvements, compilation time can still be a pain, especially on mobile. This is mostly because we're running a whole multi-million-line codebase through the backend of a compiler to generate optimized code. Following this work, the next bottleneck in the compilation process is parsing, which matters for asm.js in particular, as its source is plain text. Decoding WebAssembly is an order of magnitude faster, though, and it can be made faster still. Moreover, we have even more load-time optimizations coming down the pipeline!

In the meanwhile, we keep on improving the WebAssembly backend. Keep track of our progress on bug 1188259!

Cameron Kaiser38.8.0 available

38.8.0 is available (downloads, hashes, release notes). There are no major changes, only a bustage fix for one of the security updates that does not compile under gcc 4.6. Although I built the browser and did all the usual custodial tasks remotely from a hotel room in Sydney, assuming no major showstoppers I will actually take a couple minutes on my honeymoon to flip the version indicator Monday Pacific time (and, in a good sign for the marriage, she accepts this as a necessary task).

Don't bother me on my honeymoon.

David BurnsSelenium WebDriver and Firefox 46

As of Firefox 46, the extension-based FirefoxDriver will no longer work. This is because of the new add-on policy that Mozilla is enforcing to help protect end users from installers inserting add-ons that are not what the user wants. This version is due for release next week.

This does not mean that your tests need to stop working entirely as there are options to keep them working.


Firstly, you can use Marionette, the Mozilla version of FirefoxDriver, to drive Firefox. This has been in Firefox since about version 24, and we have slowly been bringing it up to the Selenium level while working around other Mozilla priorities. Currently Marionette passes ~85% of the Selenium test suite.

I have written up some documentation on how to use Marionette on MDN
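As a rough sketch of what switching over looks like from the Python bindings, you opt into Marionette via a desired capability. The `marionette` capability flag is the documented switch; the exact `webdriver.Firefox()` signature depends on your Selenium version, so treat this as illustrative and check the MDN docs for your setup:

```python
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Opt into Marionette instead of the extension-based FirefoxDriver.
caps = DesiredCapabilities.FIREFOX.copy()
caps["marionette"] = True

driver = webdriver.Firefox(capabilities=caps)
driver.get("https://www.mozilla.org")
print(driver.title)
driver.quit()
```

Running this requires a local Firefox install and the Marionette executable on your path.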

I am not expecting everything to work, but below is a quick list of things I know don't work:

  • No support for self-signed certificates
  • No support for actions
  • No support for the logging endpoint
  • I am sure there are other things we don't remember

It would be great if we could raise bugs.

Firefox 45 ESR

If you don't want to worry about Marionette, the other option is to downgrade to Firefox 45, preferably the ESR, as it won't update to 46 and will only update in about 6-9 months' time to Firefox 52, by which point you will need to use Marionette.

Marionette will be turned on by default from Selenium 3, which is currently being worked on by the Selenium community. Ideally when Firefox 52 comes around you will just update to Selenium 3 and, fingers crossed, all works as planned.

Support.Mozilla.OrgGet inspired! Reaching 100% SUMO Localization with the Czech team

Hey there, SUMO Nation! We're back to sharing more awesomeness from you, by you, for all the users. This time I have the pleasure of passing the screen over to Michal, our Czech locale leader. Michal and his trusted team of Czech localizers reached all the possible KB milestones ever and are maintaining the Czech KB with grace and ease. Learn how they did it and get inspired!

The years 2015 and 2016 were a great success for our Czech localization team. We have grown in number, improved our suggestion & reviewing workflow, moved all projects to a single place (Pontoon) and finished all milestones for SUMO l10n – both for UI and articles. But there is much more that we gained when making all dashboards green than just “getting the work done”.

But who is the Czech team?

That’s a very good question. The Czech team has not been involved much in the global SUMO life. So, if you do not know us, let me introduce everyone.

  • First there is me ;) – Michal. I primarily focus on product localization, but “as a hobby” I am trying to help the SUMO heroes too.
  • Our biggest hero and record breaker Jiří! If you open any Czech article, he worked on it directly, or reviewed and polished it to perfection. His counter recently exceeded the number of 730 articles updated.
  • Miroslav is our long-time contributor and his updates and translations are considered approved in advance.
  • Tomáš does irreplaceable work keeping the Kitsune UI localization in great shape, and he started that back in the old days of Verbatim.
  • I almost forgot our former leader Pavel. Many thanks to him for the outstanding work on both Kitsune UI l10n and the very first help articles as well.
  • I also want to highlight the contributions of other brave volunteers. Their updates and translations, even of just a few articles, helped us conquer our dashboard.

Nice to meet all of you, guys.

Thank you, SUMO! So, the story… At the end of last year, Michał looked closely at the locale statuses and assigned the milestones we should smash this year. Ours was to localize all articles globally. That was something I didn't believe we could do easily. Then in February or March a new set of milestones appeared, and the updated one for Czech reduced the “requirement” to 700 localized articles. Well, from that point on, we cheated a little. ;)

As you may notice, there are only 697 articles now. During localization we noticed some articles were pretty outdated, containing links to pages that no longer exist, etc. So we got in touch with the team, reported them and… they were magically archived. But do not think we are just bloody cheaters achieving milestones by asking for content deletion. No, we made almost 400 updates to articles this year (50% of the total we did in the whole of 2015)!

I have personally found the cooperation between us (localizers) and the SUMO team (Michał, Joni, Madalina, and others) very beneficial. On their side, they put a huge effort into supporting us, introducing new tools and explaining article content, as well as Firefox release notes and news. During localization we read each article in full at least once or twice, so giving them feedback or suggesting updates is the least we can offer from our side.

Amazing! But you mentioned you learned something new too?

True. We learned that communication is very important. Within the team, we learned to share new ideas on terminology and also opened a discussion on the topic of screenshots, which are our next target. Did you notice there is no dedicated way to mark that your localization revision is missing localized screenshots? Oh, it’s quite simple in fact: we add a “[scr]” tag to the revision comment whenever we haven’t had enough time to take localized screenshots and have only translated the article content. It’s very easy to filter those out in the “Recent revisions” list once you have the time and the mood for some “screenshotting”.
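The tagging convention can be sketched in a few lines of Python — a purely illustrative example with made-up article names and a made-up data structure, not SUMO’s actual data model:

```python
# Hypothetical revision records; on SUMO this information lives in the
# "Recent revisions" list, not in a Python structure like this.
revisions = [
    {"article": "Clear cookies", "comment": "Translated text only [scr]"},
    {"article": "Update Firefox", "comment": "Localized, screenshots included"},
]

# Revisions still waiting for localized screenshots carry the "[scr]" tag
needs_screenshots = [r["article"] for r in revisions if "[scr]" in r["comment"]]
print(needs_screenshots)  # → ['Clear cookies']
```

The same idea works with any convention, as long as the tag is unusual enough not to appear in ordinary revision comments.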

Equally important is the communication outside the localization team. A lot of strings in the Kitsune UI would have been translated blindly without any consultation, especially in the support forum areas — we hadn’t been using those pages ourselves until the beginning of April (yes, we did forum support, too!).

In the light of our success, we do not want to rest on our laurels. It’s time to look forward – our screenshots are not perfect, and we are still dividing our efforts between articles and the Kitsune UI. I hope that Kitsune can support us in both areas a little more, e.g. there are no tools for finding the actual location of the strings, but that’s something we can help fix. We are actually quite new to Kitsune. But as it’s great to help people by bringing knowledge into your language, it’s also quite important to start a discussion, even if we might think the questions we have are trivial. Just do not be afraid to say what you think is important for the project.

Karl Dubost[worklog] Mabuigumi, the soul shifting

We had two powerful earthquakes in the south of Japan, both registered at 7, the maximum on the Japanese earthquake scale. Only three other earthquakes in Japan had previously reached that level.

Tune of the week: Boom! Shake the room.

Webcompat Life

Progress this week:

Today: 2016-04-22T10:25:49.831867
376 open issues
needsinfo       4
needsdiagnosis  132
needscontact    27
contactready    95
sitewait        116

You are welcome to participate

London agenda.

Webcompat issues

(a selection of some of the bugs worked on this week).

  • -webkit-overflow-scrolling: touch; and a missing width: 100vw; creates a scrolling issue for a menu. If you speak Chinese you can help us find the contact and reach out to them.
  • One of the issues with webcompat work is that people change lives, jobs, and assignments, as is happening on this Amazon bug — but after finding another person in charge, the dialog is still going on. And it's a healthy one, where constraints and needs are discussed on both sides. That's the best you can hope for when discussing features and bugs.
  • User-Agent override is never an easy choice. When deciding to do a user agent override — that is, faking the user agent so you receive the proper user experience from the Web site — what else will you break in the process?
  • A selection which creates a jump in reading the next article on the NYTimes Web site.
  • There is something to write about user agent sniffing, websites, and the long term. Orbis is obviously a site which is no longer maintained BUT still works, at least in some browsers. The user agent sniffing strategies are still in place and fail the browser even though it could work.
  • There should be a special category for Daniel Holbert's Web compatibility bug reports… Holy cow. This is just butter on top of grilled bread. A barber shop

Webcompat development

Need to increase my dev rate.

Gecko Bugs

  • Brian Grinstead [fixed] an issue which was created by the interaction of User Agent Switcher add-ons and the developer tools.

Meaning of WebCompat, err, Web Compatibility

In two tweets, Jen Simmons got pretty interesting answers:

  • “web compatibility” is HTML content which is accessible via interconnected URLs independent of further technologies.
  • another buzzword.
  • Web compat means pages that work for everyone regardless of browser, screen size, network speed, language, physical ability.
  • Something that can safely be loaded in a browser?
  • more seriously, is a cool initiative. Wasn’t aware until you asked :)
  • Spiderman vs. Bizarro Spiderman
  • Web Compatibility, but then in a slightly puzzled manner I wonder why they decided to shorten compatibility to compat.
  • it means someone didn't finish their sentence

I like this series of answers. "Web Compat" as a buzzword is interesting. We usually use webcompat among ourselves in the team without realizing that it might not be understood outside of our circle. Taking notes.

The other answers spread across a spectrum, leaning more toward "does it work" than strict interoperability — that is, focusing more on the universality of Web technologies. In the London agenda, we have an item open for discussion on the meaning of Web Compatibility and what the classes of issues are. It should be fun.

Reading List

Follow Your Nose


  • Document how to write tests using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.


Air MozillaConnected Devices Speakers and Open Forum

Connected Devices Speakers and Open Forum This is the Connected Devices Meetup where we will have 3 speakers presenting their slides or demos and answering questions.

David LawrenceHappy BMO Push Day!

the following changes have been pushed to

  • [1264207] add support for the hellosplat tracker to ‘see also’
  • [1195736] intermittent internal error: “file error – nav_link: not found” (also manifests as fields_lhs: not found)
  • [1265432] backport upstream bug 1263923 to bmo/4.2 – X-Bugzilla-Who header is not set for flag mails
  • [1266117] I have found a bug in the section 2.6.1 in the user guide(2.6) of BMO documentation. The bug identified is a grammatical error committed in one of the sentences.
  • [1239838] Don’t see a way to redirect a needinfo request (in Experimental UI)
  • [1266167] clickjacking is possible on “view all” and “details” attachment pages

discuss these changes on

Support.Mozilla.OrgWhat’s Up with SUMO – 21st April

Hello, SUMO Nation!

Let’s get the big things out of the way – we met last week in Berlin to talk about where we are and what’s ahead of us. While you will see changes and updates appearing here and there around SUMO, the most crucial result is the start of a discussion about the future of our platform – a discussion about the technical future of SUMO. We need your feedback about it as soon as possible. Read more details here – and tell us what you think.

This is just the beginning of a discussion about one aspect of SUMO, so please – don’t panic and remember: we are not going away, no content will be lost (but we may be archiving some obsolete stuff), no user will be left behind, and (on a less serious note) no chickens will have to cross the road – we swear!

Now, let’s get to the updates…

A glimpse of Berlin, courtesy of Roland

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 27th of April – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



Support Forum

Knowledge Base & L10n

  • Hackathons everywhere! Well, at least in Stockholm, Sweden (this Friday) and Prague, Czech Republic (next Friday). Contact information in the meeting notes!
  • A guest post all about a certain group of our legendary localizers is coming your way – it will be a great read, I guarantee!
  • An update post about SUMO l10n is coming over the weekend, because there ain’t no rest for the wicked.


…and that’s it for today! We hope you enjoyed the update and will stick around for more news this (and next) week. We are looking forward to seeing you all around SUMO – KEEP ROCKING THE HELPFUL WEB!

Allen Wirfs-BrockSlide Bite: Early Era Products


The chaotic early days of a new computing era are an extended period of product innovation and experimentation. But both the form and function of new products are still strongly influenced by the norms and transitional technologies of the waning era. New technologies are applied to new problems but often those new technologies are not yet mature enough to support early expectations. The optimal form-factors, conceptual metaphors, and usage idioms of the new era have yet to be fully explored and solidified. Looking back from the latter stages of a computing era, early era products appear crude and naive.

This is a great time to be a product innovator or an enthusiastic early adopter. But don’t get too comfortable with the present. These are still the early days of the Ambient Computing Era and the big changes are likely still to come.

Mozilla WebDev CommunityBeer and Tell – April 2016

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

emceeaich: Memory Landscapes

First up was emceeaich, who shared Memory Landscapes, a visual memoir of the life and career of artist and photographer Laurie Toby Edison. The project is presented as a non-linear collection of photographs, in contrast to the traditionally linear format of memoirs. The feature that emceeaich demoed was “Going to Brooklyn”, which gives any link a 1/5 chance of showing shadow pictures briefly before moving on to the linked photo.

lorchard: DIY Keyboard Prototype

Next was lorchard, who talked about the process of making a DIY keyboard using web-based tools. He used to generate a layout serialized in JSON, and then used Plate & Case Builder to generate a CAD file for use with a laser cutter.

A flickr album is available with photos of the process.

lorchard: Jupyter Notebooks in Space

lorchard also shared eve-market-fun, a Node.js-based service that pulls data from the EVE Online API and pre-digests useful information about it. He then uses a Jupyter notebook to pull data from the API and analyze it to guide his market activities in the game. Neat!

Pomax: React Circle-Tree Visualizer

Pomax was up next with a new React component: react-circletree! It depicts a tree structure using segmented concentric circles. The component is very configurable and can be styled with CSS, as it is rendered via SVG. While built as a side-project, the component can be seen in use on the Web Literacy Framework website.

Pomax: HTML5 Mahjong

Also presented by Pomax was an HTML5 multiplayer Mahjong game. It allows four players to play the classic Chinese game online by using and a Node.js server to connect the players. The frontend is built using React and Webpack.

groovecoder and John Dungan: Codesy

Last up was groovecoder and John Dungan, who shared codesy, an open-source startup addressing the problem of compensation for fixing bugs in open-source software. They provide a browser extension that allows users to bid on bugs as well as name their price for fixing a bug. Users may then provide proof that they fixed a bug, and once it is approved by the bidders, they receive a payout.

If you’re interested in attending the next Beer and Tell, sign up for the mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Air MozillaMeetup Open Transport #8

Meetup Open Transport #8 The Open Transport meetups are regular exchanges around open and collaborative initiatives in the mobility sector (projects based on open...

Gervase MarkhamMozilla’s Root Store Housekeeping Program Bears Fruit

Just over a year ago, in bug 1145270, we removed the root certificate of e-Guven (Elektronik Bilgi Guvenligi A.S.), a Turkish CA, because their audits were out of date. This is part of a larger program we have to make sure all the roots in our program have current audits and are in other ways properly included.

Now, we find that e-Guven has contrived to issue an X509 v1 certificate to one of their customers.

The latest version of the certificate standard X509 is v3, which has been in use since at least the last millennium. So this is ancient magic and requires spelunking in old, crufty RFCs that don’t use current terminology, but as far as I can understand it, whether a certificate is a CA certificate or an end-entity certificate in X509v1 is down to client convention – there’s no way of saying so in the certificate. In other words, they’ve accidentally issued a CA certificate to one of their customers, much like TurkTrust did. This certificate could itself issue certificates, and they would be trusted in some subset of clients.

But not Firefox, fortunately, thanks to the hard work of Kathleen Wilson, the CA Certificates module owner. Neither current Firefox nor the current or previous ESR trusts this root any more. If they had, we would have had to go into full misissuance mode. (This is less stressful than it used to be due to the existence of OneCRL, our system for pushing revocations out, but it’s still good to avoid.)

Now, we aren’t going to prevent all misissuance problems by removing old CAs, but there’s still a nice warm feeling when you avoid a problem due to forward-looking preventative action. So well done Kathleen.

Air MozillaWeb QA Weekly Meeting, 21 Apr 2016

Web QA Weekly Meeting This is our weekly gathering of Mozilla'a Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Liz HenryThat Zarro Boogs feeling

This is my third Firefox release as release manager, and the fifth that I’ve followed closely from the beginning to the end of the release cycle. (31 and 36 as QA lead; 39, 43, and 46 as release manager.) This time I felt more than usually okay with things, even while there was a lot of change in our infrastructure and while we started triaging and following even more bugs than usual. No matter how on top of things I get, there is still chaos and things still come up at the last minute. Stuff breaks, and we never stop finding new issues!

I’m not going into all the details because that would take forever and would mostly be me complaining or blaming myself for things. Save it for the post-mortem meeting. This post is to record my feeling of accomplishment from today.

During the approximately 6 week beta cycle of Firefox development we release around 2 beta versions per week. I read through many bugs nominated as possibly important regressions, and many that need review and assessment to decide if the benefit of backporting warrants the risk of breaking something else.

During this 7 week beta cycle I have made some sort of decision about at least 480 bugs. That usually means that I’ve read many more bugs, since figuring out what’s going on in one may mean reading through its dependencies, duplicates, and see-alsos, or whatever someone randomly mentions in comment 45 of 96.

And today I got to a point I’ve never been at near the end of a beta cycle: Zarro Boogs found!

list of zero bugs

This is what Bugzilla says when you do a query and it returns 0. I think everyone likes saying (and seeing) “Zarro Boogs”. Its silliness expresses the happy feeling you get when you have burned down a giant list of bugs.

This particular query is for bugs that anyone at all has nominated for the release management team to pay attention to.

Here is the list of requests for uplift (or backporting, same thing) to the mozilla-beta repo:

more zero pending requests

Yes!! Also zarro boogs.

Since we build our release candidate a week (or a few days) from the mozilla-release repo, I check up on requests to uplift there too:

list of zero pending requests


For the bugs that are unresolved and that I’m still tracking into the 46 release next week, it’s down to 4: Two fairly high volume crashes that may not be actionable yet, one minor issue in a system addon that will be resolved in a planned out-of-band upgrade, and one web compatibility issue that should be resolved soon by an external site. Really not bad!

Our overall regression tracking is summarized on a release health dashboard shown on displays in many Mozilla offices. Blockers: 0. Known new regressions that we are still working on and haven’t explicitly decided to wontfix: 1. (But this will be fixed by the system addon update once 46 ships.) Carryover regressions: 41; about 15 of them are actually fixed but not marked up correctly yet. The rest are known regressions we shipped with already that still aren’t fixed. Some of those are missed uplift opportunities. We will do better in the next release!

For context, I approved 196 bugs for uplift during beta, and 329 bugs for aurora. And we fix several thousand issues in every release during the approximately 12-week development cycle. Which of those should we pay the most attention to, and which can be backported? Release managers act as a sort of Maxwell’s demon, letting in only particular patches …

Will this grim activity level for the past 7 weeks and my current smug feeling of being on top of regression burndown translate to noticeably better “quality”… for Firefox users? That is hard to tell, but I feel hopeful that it will over time. I like the feeling of being caught up, even temporarily.

liz in sunglasses with a drink in hand

Here I am with drink in hand on a sunny afternoon, toasting all the hard working developers, QA testers, beta users, release engineers, PMs, managers and product folks who did most of the actual work to fix this stuff and get it firmly into place in this excellent, free, open source browser. Cheers!

Related posts:

Chris IliasReply to Walt Mossberg – Native apps vs web apps

I recently listened to an episode of the Ctrl-Walt-Delete podcast, in which Walt Mossberg and Nilay Patel talked about web browsers vs native mobile apps. There was something Walt said that I have to comment on, because I disagree with it, and a tweet just isn’t enough. 🙂

When explaining why most people use native mobile apps, he argued that the main reason is that an app (when done right) offers a more focused experience. He cited Google Maps as an example.

I don’t think it’s that complex. I think it has more to do with how fast you can get there. If I want to use Google Maps, it’s quicker and more convenient to tap on the Google Maps icon than it is to tap on the browser, then pull up a list of bookmarks, and tap on the Google Maps bookmark. That has nothing to do with the experience of using the app.

I’m not saying that’s the only reason people use native mobile apps. I think most other differences have a minor effect on the user’s decision, and how fast and convenient it is to get to the app is probably the biggest factor.

Ehsan AkhgariProject SpiderNode

Some time around 4 weeks ago, a few of us got together to investigate what it would take to implement the Electron API on top of Gecko.  Electron consists of two parts: a Node environment with a few additional Node modules, and a lightweight embedding API for opening windows that point to a local or remote web page in order to display UI.  Project Positron tries to create an Electron compatible runtime built on Mozilla technology stack, that is, Gecko and SpiderMonkey.

While a few of my colleagues are busy working on Positron itself, I have been working on SpiderNode, which is intended to be used in Positron to implement the Node part of the Electron API.  SpiderNode has been changing rapidly since 3 weeks ago when I made the initial commit.

SpiderNode is loosely based on node-chakracore, which is a port of Node running on top of ChakraCore, the JavaScript engine used in Edge.  We have adopted the node-chakracore build system modifications to support building Node against a different backend.  We’re following the overall structure of the chakrashim module, which implements enough of the V8 API used by Node on top of ChakraCore.  Similarly, SpiderNode has a spidershim module which implements the V8 API on top of SpiderMonkey.

SpiderNode is still in its early days, and is not yet complete.  As such, we still can’t link the Node binary successfully since we’re missing quite a few V8 APIs, but we’re making rapid progress towards finishing the V8 APIs used in Node.  If you’re curious to look at the parts of the V8 API that have been implemented so far, check out the existing tests for spidershim.

I have tried to fix the issues that new contributors to SpiderNode may face.  As things stand right now, you should be able to clone the repository and build it on Linux and OS X (note that as I said earlier we still can’t link the node binary, so the build won’t finish successfully, see for more details).  We have continuous integration set up so that we don’t regress the current state of the builds and tests.  I have also written some documentation that should help you get started!

Please see the current list of issues if you’re interested in contributing to SpiderNode.  Note that SpiderNode is under active development, so if you’re considering contributing, it may be a good idea to get in touch with me to avoid working on something that is already being worked on!

Air MozillaThe Joy of Coding - Episode 54

The Joy of Coding - Episode 54 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Addons BlogAdd-ons Update – Week of 2016/04/20

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 1226 listed add-ons were reviewed:

  • 1160 (95%) were reviewed in fewer than 5 days.
  • 45 (4%) were reviewed between 5 and 10 days.
  • 21 (1%) were reviewed after more than 10 days.

There are 73 listed add-ons awaiting review.

You can read about the recent improvements in the review queues here.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Compatibility Communications

Most of you should have received an email from us about the future compatibility of your add-ons. You can use the compatibility tool to enter your add-on ID and get some info on what we think is the best path forward for your add-on.

To ensure long-term compatibility, we suggest you start looking into WebExtensions, or use the Add-ons SDK and try to stick to the high-level APIs. There are many XUL add-ons that require APIs that aren’t available in either of these options, which is why we’re also asking you to fill out this survey, so we know which APIs we should look into adding to WebExtensions.

We’re holding regular office hours for Multiprocess Firefox compatibility, to help you work on your add-ons, so please drop in on Tuesdays and chat with us!

Firefox 47 Compatibility

The compatibility blog post for 47 is up. The bulk validation will be run soon. Make sure that the compatibility metadata for your add-on is up to date, so you don’t miss these checks.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Extension Signing

The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The current plan is to remove the signing override preference in Firefox 47 (updated from 46).

Air MozillaSuMo Community Call 20th April 2016

SuMo Community Call 20th April 2016 This is the SUMO weekly call. We meet as a community every Wednesday, 17:00 - 17:30 UTC. The etherpad is here:

Wladimir PalantSecurity considerations for password generators

When I started writing my very own password generation extension I didn’t know much about the security aspects. In theory, any hash function should do in order to derive the password because hash functions cannot be reversed, right? Then I started reading and discovered that one is supposed to use PBKDF2. And not just that, you had to use a large number of iterations. But why?

Primary threat scenario: Giving away your master password

That’s the major threat with password generators: some website manages to deduce your master password from the password you used there. And once they have the master password they know all your other passwords as well. But how can this happen if hash functions cannot be reversed? Problem is, one can still guess your master password. They will try “password” as master password first — nope, this produces a different password for their site. Then they will try “password1” and get a match. Ok, now they know that your master password is most likely “password1” (it could still be something else but that’s quite unlikely).

Of course, a number of conditions have to be met for this scenario. First, a website where you have an account has to be malicious — or simply leak its user database, which isn’t too unlikely. Second, they need to know the algorithm you used to generate your password. However, in my case everybody now knows that I’m using Easy Passwords, so no need to guess. And even for you, it’s generally better not to assume that they won’t figure it out. And third, your master password has to be guessable within “finite” time. Problem is, if people start guessing passwords with GPUs, most passwords fall way too quickly.
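The guessing attack can be sketched as follows — using a hypothetical weak generator built on a single fast SHA-1 hash (a made-up scheme and wordlist for illustration, not any specific extension’s code):

```python
import hashlib

def weak_generate(master, site):
    # Hypothetical weak generator: one fast, unsalted, single-iteration hash.
    # Each guess costs the attacker only one cheap SHA-1 computation.
    return hashlib.sha1(f"{master}:{site}".encode()).hexdigest()[:12]

# The attacker's view: one generated password leaked from a site's database
leaked = weak_generate("password1", "evil.example")

# Dictionary attack: hash each candidate master password until one matches
guesses = ["password", "letmein", "password1", "hunter2"]
cracked = next((g for g in guesses if weak_generate(g, "evil.example") == leaked), None)
print(cracked)  # → password1
```

With a GPU computing billions of SHA-1 hashes per second, the same loop runs over entire password dictionaries almost instantly, which is exactly why a deliberately slow function like PBKDF2 matters.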

So, how does one address this issue? First, the master password clearly needs to be a strong one. But choosing the right hashing algorithm is also important. PBKDF2 makes guessing hard because it is computationally expensive — depending on the number of iterations generating a single password might take a second. A legitimate user won’t notice this delay, somebody who wants to test millions of guesses however will run out of time pretty quickly.

There are more algorithms, e.g. bcrypt and scrypt are even better. However, none of them has found its way into Firefox so far. Since Easy Passwords is using the native (fast) PBKDF2 implementation in Firefox, it can use a very high number of iterations without creating noticeable delays for the users. That makes guessing master passwords impractical on current hardware as long as the master password isn’t completely trivial.

To be precise, Easy Passwords is using PBKDF2-HMAC-SHA1 with 262,144 iterations. I can already hear some people exclaiming: “SHA1??? Old and busted!” Luckily, the attacks against SHA1 and even MD5 are all about producing hash collisions, which are completely irrelevant for password generation. Still, I would have preferred SHA256, yet Firefox doesn’t support PBKDF2 with SHA256 yet. So it’s either SHA1 or a JavaScript-based implementation, which would require a significantly reduced iteration count and result in a less secure solution.

Finally, it’s a good measure to use a random salt when hashing passwords — different salts would result in different generated passwords. A truly random salt would usually be unknown to potential attackers and make guessing master passwords impossible. However, that salt would also make recreating passwords on a different device complicated, one would need to back up the salt from the original device and transfer it to the new one. So for Easy Passwords I chose a compromise: the salt isn’t really random, instead the user-defined password name is used as salt. While an attacker will normally be able to guess the password’s name, it still makes his job significantly more complicated.
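The scheme described in this section can be sketched as follows — a minimal illustration of PBKDF2-HMAC-SHA1 with the password name acting as the salt; Easy Passwords’ actual encoding of the derived bytes and its output format may well differ:

```python
import base64
import hashlib

def derive_password(master, name, length=12, iterations=262144):
    # PBKDF2-HMAC-SHA1 with a high iteration count, as described above;
    # the user-defined password name doubles as the salt.
    raw = hashlib.pbkdf2_hmac("sha1", master.encode(), name.encode(), iterations)
    # Encode the derived bytes into printable characters (illustrative choice)
    return base64.b64encode(raw).decode()[:length]

# Deterministic: the same master password and name always give the same result,
# so passwords can be recreated on any device without storing anything.
pw1 = derive_password("correct horse battery staple", "example.com")
pw2 = derive_password("correct horse battery staple", "other.org")
print(pw1 != pw2)  # different names act as different salts, so the passwords differ
```

Each call deliberately takes a noticeable fraction of a second on commodity hardware; that cost is negligible for the legitimate user but multiplies across every guess an attacker has to make.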

What about other password generators?

In order to check my assumptions I looked into what the other password generators were doing. I found more than twenty password generator extensions for Firefox, and most of them apparently didn’t think much about hashing functions. You have to keep in mind that none of them gained significant traction, most likely due to usability issues. The results outlined in the table below should be correct but I didn’t spend much time figuring out how these extensions work. For a few of them I noticed issues beyond their choice of a hashing algorithm, for others I might have missed these issues.

Extension User count Hashing algorithm Security
PasswordMaker 3056 SHA256/SHA1/MD4/MD5/RIPEMD160, optionally with HMAC Very weak
Password Hasher 2491 SHA1 Very weak
PwdHash 2325 HMAC+MD5 Very weak [1]
Hash Password Generator 291 Custom (same as Magic Password Generator) Very weak
Password Maker X 276 SHA256/SHA1/MD4/MD5/RIPEMD160, optionally with HMAC Very weak
masterpassword for Firefox 155 scrypt, cost parameter 32768, user-defined salt Medium [2]
uPassword 115 SHA1 Very weak
vPass Password Generator 88 TEA, 10 iterations Weak
Passwordgen For Firefox 1 77 SHA256 Very weak
Recall my password 64 SHA512 Very weak [3]
Phashword 57 SHA1 Very weak
Passera 52 SHA512 Very weak
My Password 51 MD5 Very weak
HashPass Firefox 48 MD5/SHA1/SHA256/SHA512 Very weak
UniPass 33 SHA256, 4,096 iterations Weak
RndPhrase 29 CubeHash Very weak
Domain Password Generator 29 SHA1 Very weak
PasswordProtect 28 SHA1, 10,000 iterations Weak
PswGen Toolbar v2.0 24 SHA512 Very weak
UniquePasswordBuilder Addon 13 scrypt, cost factor 1024 by default Strong [4]
Extrasafe 12 SHA3 Very weak
hash0 9 PBKDF2+HMAC+SHA256, 100,000 iterations, random salt Very strong [5]
MS Password Generator 9 SHA1 Very weak
Vault 9 PBKDF2+HMAC+SHA1, 8 iterations, fixed salt Weak
BPasswd2 8 bcrypt, 64 iterations by default, user-defined salt Weak [6]
Persistent "Magic" Password Generator 8 MurmurHash Very weak
BPasswd 7 bcrypt, 64 iterations Weak
CCTOO 4 scrypt, cost factor 16384, user-defined salt Very strong [7]
SecPassGen 2 PBKDF2+HMAC+SHA1, 10,000 iterations by default Weak [8]
Magic Password Generator ? Custom Very weak

[1] The very weak hash function isn’t even the worst issue with PwdHash. It also requires you to enter the master password into a field on the web page. The half-hearted attempts to prevent the website from stealing that password are easily circumvented.

[2] Security rating for masterpassword downgraded because (assuming that I understand the approach correctly) scrypt isn’t being applied correctly. The initial scrypt hash calculation only depends on the username and master password. The resulting key is then combined with the site name via SHA-256 hashing. This means that a website only needs to break the SHA-256 hashing and deduce the intermediate key — as long as the username doesn’t change, this key can be used to generate passwords for other websites. This makes breaking scrypt unnecessary; the security rating is still “medium”, however, because the intermediate key shouldn’t be as guessable as the master password itself.

[3] Recall my password is quite remarkable, as it manages to send the user to the author’s website in order to generate a password, for no good reason (unless the author is actually interested in stealing some of the passwords, of course). Not only is this completely unnecessary, the website also has an obvious XSS vulnerability.

[4] Security rating for UniquePasswordBuilder downgraded because of the low default cost factor, which it mistakenly labels as “rounds.” Users can select cost factor 16384 manually, which is highly recommended.

[5] hash0 actually went as far as paying for a security audit. Most of the conclusions just reinforced what I had already come up with myself; others were new (e.g. the pointer to window.crypto.getRandomValues(), which I didn’t know about before).

[6] BPasswd2 allows changing the number of iterations; anything up to 2^100 goes (the Sun will die sooner than this calculation completes). However, the default is merely 2^6 iterations, which is weak protection, and the extension neither indicates that changing the default is required nor gives useful hints towards choosing a better value.

[7] The security rating for CCTOO only applies when it is used with a password, not drawn gestures. From the look of it, the latter won’t have enough entropy and can be guessed despite the good hashing function.

[8] Security rating for SecPassGen downgraded because the master password is stored in Firefox preferences as clear text.

Additional threats: Shoulder surfing & Co.

Websites aren’t the only threat, however; one classic is somebody looking over your shoulder and noting your password. Easy Passwords addresses this by never showing your passwords: it either fills them in automatically or copies them to the clipboard so that you can paste them into the password field yourself. In both scenarios the password never becomes visible.

And what if you leave your computer unattended? Easy Passwords remembers your master password once it has been entered, which is an important usability feature. The security concerns are addressed by “forgetting” the master password again after a given time, 10 minutes by default. And, of course, the master password is never saved to disk.

Usability vs. security: Validating master password

There is one more usability feature in Easy Passwords with the potential to compromise security. When you mistype your master password, Easy Passwords will notify you about it. That’s important because otherwise wrong passwords will be generated and you won’t know why. But how does one validate the master password without storing it?

My initial idea was storing a SHA hash of the master password. Then I realized that it opens up the primary threat scenario again: somebody who gets their hands on this SHA hash (e.g. by walking past your computer while it is unattended) can use it to guess your master password. Store only a few characters of the SHA hash? Better, but it would still allow an attacker who has both this SHA hash and a generated password to throw away a large number of guesses without having to spend time calculating the expensive PBKDF2 hash. Wait, why treat this hash differently from other passwords at all?

And that’s the solution I went with. When the master password is set initially it is used to generate a new password with a random salt, using the usual PBKDF2 algorithm. Then this salt and the first two characters of the password are stored. The two characters are sufficient to recognize typos in most cases. They are not sufficient to guess the master password however. And they won’t even provide a shortcut when guessing based on a known generated password — checking the master password hash is just as expensive as checking the generated password itself.
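The scheme described above can be sketched in a few lines of Python. Note that the hash function, salt length and iteration count here are illustrative assumptions, not Easy Passwords’ actual parameters:

```python
import hashlib
import os

def make_master_check(master_password: str):
    """Derive a typo-detection stub for the master password.
    Salt length and iteration count are illustrative only."""
    salt = os.urandom(16)
    derived = hashlib.pbkdf2_hmac(
        "sha1", master_password.encode(), salt, 100_000)
    # Store only the salt plus the first two characters of the
    # hex-encoded result: enough to catch most typos, far too
    # little to reconstruct the master password.
    return salt, derived.hex()[:2]

def check_master(master_password: str, salt: bytes, stub: str) -> bool:
    """Re-derive with the stored salt and compare the stub.
    Each check costs a full PBKDF2 run, so the stored stub offers
    no shortcut to an attacker guessing master passwords."""
    derived = hashlib.pbkdf2_hmac(
        "sha1", master_password.encode(), salt, 100_000)
    return derived.hex()[:2] == stub
```

The key design point is that the stub is treated exactly like any other generated password: verifying a guess against it is just as expensive as verifying the guess against a real site password.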

Encrypting legacy passwords

One requirement for Easy Passwords was dealing with “legacy passwords,” meaning existing passwords that cannot be changed for some reason. Instead of being generated, these passwords have to be stored securely. Luckily, there is a very straightforward solution: the PBKDF2 algorithm can be used to generate an encryption key. The password is then encrypted with AES-256.

My understanding is that AES-encrypted data currently cannot be decrypted without knowing the encryption key. And the encryption key is derived using the same algorithm as Easy Passwords uses for generating passwords, so the security of stored passwords is identical to that of generated ones. The only drawback of such legacy passwords currently seems to be a more complicated backup approach; moving the password from one device to another is also no longer trivial.
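The key-derivation half of this can be sketched with Python’s stdlib; the hash choice, iteration count and the way the site name is mixed in are hypothetical here, and the actual AES-256 encryption step would sit on top of the derived key using a crypto library:

```python
import hashlib

def derive_encryption_key(master_password: str, site: str,
                          salt: bytes) -> bytes:
    """Derive a 32-byte key (the AES-256 key size) from the master
    password, reusing the same PBKDF2 primitive as password
    generation. Parameters are illustrative, not the extension's."""
    secret = (master_password + "\0" + site).encode()
    return hashlib.pbkdf2_hmac("sha1", secret, salt, 100_000, dklen=32)
```

Because the key is derived on demand rather than stored, stealing an encrypted legacy password is no easier than stealing a generated one.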

Phishing & Co.

Password generators will generally protect you nicely against phishing: a phishing website can look exactly like the original, but a password generator will still produce a different password for it. But what about malicious scripts injected into a legitimate site? These will still be able to steal your password. On the bright side, they will only compromise your password for a single website.

Question is, how do malicious scripts get to run there in the first place? One option is XSS vulnerabilities; not much can be done about those. But there are also plenty of websites showing password fields on pages that are transmitted unencrypted (plain HTTP, not HTTPS). These can then be manipulated by an attacker who is on the same network as you. The idea is that Easy Passwords could warn in such cases in the future. It should be possible to disable this warning for websites that absolutely don’t support HTTPS, but for others it will hopefully be helpful. Oh, and did I recommend using the Enforce Encryption extension already?

Finally, there is the worst-case scenario: your computer could be infected with a keylogger. This is really bad because it could intercept your master password. Then again, it could also intercept all the individual passwords as you log into the respective websites, it will merely take a bit longer. I think that there is only one effective solution here: just don’t get infected.

Other threats?

There are probably more threats to consider that I didn’t think of. It might also be that I made a mistake in my conclusions somewhere. So feel free to post your own thoughts in the comments.

Ludovic HirlimannFinancing Openstreetmap in Africa

The local OSM community in Benin is trying to buy a high-res image of the capital to map it better. They need around 2500€ and have reached 50%. With only days left, 5, 10 or 20 euros would help.

Details.

Chris H-CFirefox’s Windows XP Users’ Upgrade Path

We’re still trying to figure out what to do with Firefox users on Windows XP.

One option I’ve heard is: Can we just send a Mozillian to each of these users’ houses with a fresh laptop and training in how to migrate apps and data?

(No, we can’t. For one, we can’t uniquely identify who and where these users are (this is by design). For two, even if we could, the Firefox Windows XP userbase is too geographically diverse (as I explained in earlier posts) for “meatspace” activities like these to be effective or efficient. For three, this could be kinda expensive… though, so is supporting extra Operating Systems in our products.)

We don’t have the advertising spend to reach all of these users in the real world, but we do have access to their computers in their houses… so maybe we can inform them that way?

Well, we know we can inform people through their browsers. We have plenty of data from our fundraising drives to that effect… but what do we say?

Can we tell them that their computer is unsafe? Would they believe us if we did?

Can we tell them that their Firefox will stop updating? Will they understand what we mean if we did?

Do these users have the basic level of technical literacy necessary to understand what we have to tell them? And if we somehow manage to get the message across about what is wrong and why, what actions can we recommend they take to fix this?

This last part is the first thing I’m thinking about, as it’s the most engineer-like question: what is the optimal upgrade strategy for these users? Much more concrete to me than trying to figure out wording, appearance, and legality across dozens of languages and cultures.

Well, we could instruct them to upgrade to Linux. Except that it wouldn’t be an upgrade, it’d be a clean wipe and reinstall from scratch: all the applications would be gone and all of their settings would reset to default. All the data on their machines would be gone unless they could save it somewhere else, and if you imagine a user who is running Windows XP, you can easily imagine that they might not have access to a “somewhere else”. Also, given the average level of technical expertise, I don’t think we can make a Linux migration simple enough for most of these users to understand. These users have already bought into Windows, so switching them away adds complexity no matter how simple we could make things once the switch was over.

We could instruct them to upgrade to Windows 7. There is a clear upgrade path from XP to 7 and the system requirements of the two OSes are actually very similar. (Which is, in a sincere hat-tip to Microsoft, an amazing feat of engineering and commitment to users with lower-powered computers) Once there, if the user is eligible for the Windows 10 upgrade, they can take that upgrade if they desire (the system requirements for Windows 10 are only _slightly_ higher than Windows 7 (10 needs some CPU extensions that 7 doesn’t), which is another amazing feat). And from there, the users are in Microsoft’s upgrade path, and out of the clutches of the easiest of exploits, forever. There are a lot of benefits to using Windows 7 as an upgrade path.

There are a few problems with this:

  1. Finding copies of Windows 7: Microsoft stopped selling copies of Windows 7 years ago, and these days the most reliable way to find a copy is to buy a computer with it already installed. Mozilla likely isn’t above buying computers for everyone who wants them (if it has or can find the money to do so), but software is much easier to deliver than hardware, and is something we already know how to do.
  2. Paying for copies of Windows 7: Are we really going to encourage our users to spend money they may not have on upgrading a machine that still mostly-works? Or is Mozilla going to spend hard-earned dollarbucks purchasing licenses of out-of-date software for everyone who didn’t or couldn’t upgrade?
  3. Windows 7 has passed its mainstream support lifetime (extended support’s still good until 2020). Aren’t we just replacing one problem with another?
  4. Windows 7 System Requirements: Windows XP only needed a 233MHz processor, 64MB of RAM, and 1.5GB of HDD. Windows 7 needs 1GHz, 1GB, and 16GB.

All of these points are problematic, but that last point is at least one I can get some hard numbers for.

We don’t bother asking users how big their disk drives are, so I can’t detect how many users cannot meet Windows 7’s HDD requirement. However, we do measure users’ CPU speeds and RAM sizes (as these are important for sectioning performance-related metrics: if we want to see whether a particular perf improvement is even better on lower-spec hardware, we need to be able to divvy users up by their computers’ specifications).

So, at first this seems like a breeze: the question is simply stated and is about two variables that we measure. “How many Windows XP Firefox users are Stuck because they have CPUs slower than 1GHz or RAM smaller than 1GB?”

But if you thought that for more than a moment, you should probably go back and read my posts about how Data Science is hard. It turns out that getting the CPU speed on Windows involves asking the registry for data, which can fail. So we have a certain amount of uncertainty.


So, after crunching the data and making some simplifying assumptions (like how I don’t expect the amount of RAM or the speed of a user’s CPU to ever decrease over time) we have the following:

Between 40% and 53% of Firefox users running Windows XP are Stuck (which is to say, they can’t be upgraded past Windows XP because they fail at least one of the requirements).
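A range like this falls out naturally when users with a failed CPU measurement are treated as unknowns. Here is a hypothetical sketch of the bounds computation (the records and thresholds are made up for illustration; the real analysis runs over telemetry data):

```python
# Each record is (cpu_mhz, ram_mb); None means the registry
# query for CPU speed failed, so the speed is unknown.
users = [
    (2400, 2048),
    (800, 512),
    (None, 2048),
    (1600, 1024),
    (None, 256),
]

def stuck_bounds(records, min_mhz=1000, min_mb=1024):
    """Lower bound: fraction of users who definitely fail a
    Windows 7 requirement. Upper bound: additionally count users
    whose CPU speed is unknown but whose RAM passes, since they
    might still fail the CPU requirement."""
    definitely = sum(
        1 for cpu, ram in records
        if ram < min_mb or (cpu is not None and cpu < min_mhz))
    maybe = sum(
        1 for cpu, ram in records
        if cpu is None and ram >= min_mb)
    n = len(records)
    return definitely / n, (definitely + maybe) / n
```

On this toy sample the Stuck fraction lies between 40% and 60%; the uncertainty interval is exactly the cost of those failed registry reads.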

That’s some millions of users who are Stuck no matter what we do about education, advocacy, and software.

Maybe we should revisit the “Mozillians with free laptops” idea, after all?



Allen Wirfs-BrockSlide Bite: Grassroots Innovation


How do we know when we are entering a new computing era? One signal is a reemergence of grassroots innovation. Early in a computing era most technical development resources are still focused on sustaining the mature applications and use cases from the waning era or on exploiting attractive transitional technologies.

The first explorers of the technologies of a new era are rebels and visionaries operating at the fringes. These explorers naturally form grassroots organizations for sharing and socializing their ideas and accomplishments. Such grassroots organizations serve as incubators for the technologies and leaders of the next era.

The Homebrew Computer Club was a grassroots group out of which emerged many leaders of the Personal Computing Era. Now, as the Ambient Computing Era progresses, we see grassroots organizations such as the Nodebots movement and numerous collaborative GitHub projects serving a similar role.

Air MozillaConnected Devices Weekly Program Review, 19 Apr 2016

Connected Devices Weekly Program Review Weekly project updates from the Mozilla Connected Devices team.