Jeff Muizelaar: Counting function calls per second

Say you want to know how often you're allocating tiles in Firefox or the rate of some other thing. There's an easy way to do this using dtrace. The following dtrace script counts calls to any functions matching the pattern '*SharedMemoryBasic*Create*' in XUL in the target process.

#pragma D option quiet

BEGIN
{
    rate = 0;
}

/* Count each call to a matching function in the XUL module. */
pid$target:XUL:*SharedMemoryBasic*Create*:entry
{
    rate++;
}

/* Once per second, print the call rate and reset the counter. */
tick-1sec
{
    printf("%d/sec\n", rate);
    rate = 0;
}


You can run this script with the following command:
$ dtrace -s $SCRIPT_NAME -p $PID
I'd be interested in knowing if anyone else has a similar technique for OSs that don't have dtrace.

Support.Mozilla.Org: What’s Up with SUMO – 28th July

Hello, SUMO Nation!

July’s almost over… but our updates are not, obviously :-) How have you been? Are you melting in the shade or freezing in the sun? Maybe both? ;-) Here are the hotte… no, wait, the coolest news on the web, for your eyes only!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 3rd of August!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



Support Forum

Knowledge Base & L10n


  • for Desktop
    • Version 48 coming August 2nd.

Thus, with Firefox for iOS, Desktop, and Android all released into the wild wide web – that’s a lot of heat! Now I know why it’s so easy to sweat nowadays. Well, maybe there will be some relief in the weeks between releases ;-) Winter is coming! Slowly, but surely…

Keep rocking the helpful web, SUMO Nation!

Air Mozilla: Web QA Team Meeting, 28 Jul 2016

Web QA Team Meeting They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Air Mozilla: Reps weekly, 28 Jul 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Air Mozilla: Privacy Lab - July 2016 - Student Privacy

Privacy Lab - July 2016 - Student Privacy Join us for presentations and a lively discussion around Student Privacy. Guest speakers include: *Alex Smolen, head of security/privacy for Clever *Andrew Rock, online privacy...

Christian Heilmann: Why ChakraCore matters

People who have been in the web development world for a long time might remember A List Apart’s posts from 16 years ago talking about “Why $browser matters”. In these posts, Zeldman explained how the compliance with web standards and support for new features made some browsers stand out, even if they don’t have massive market share yet.

fruit stand
Variety is the spice of life

These browsers became the test bed for future-proof solutions: solutions that were later ready for a whole group of new browsers, and functionality that made the web much more exciting than the web of old, which we built for one browser and relied on tried-and-true yet hacky solutions like table layouts. These articles didn’t praise a new browser with flashy new functionality. The browsers featured in this series were the ones compliant with upcoming and agreed standards. That’s what made them important.

The web thrives on diversity. Not only in people, but also in engines. We would not be where we are today if we had stuck with one browser engine. We would not enjoy the openness and free availability of our technologies if Mozilla hadn’t shown that you can be open and a success. The web thrives on user choice and on tool choice for developers.

Competition makes us better and our solutions more creative. Standardisation makes it possible for users of our solutions to maintain them, and to upgrade them without having to re-write them from scratch. Monoculture brings quick success, but in the long run it always ends in dead code on the web with nowhere to execute, as the controlled solution-to-end-all-solutions changes from underneath developers without backwards compatibility.

Today my colleague Arunesh Chandra announced at the NodeSummit in San Francisco that ChakraCore, the open source JavaScript engine from Microsoft that in part powers the Microsoft Edge browser, is now available for Linux and OS X.

Screen captures showing ChakraCore running inside terminal windows on Ubuntu 16.04 and OS X
ChakraCore on Linux and OS X

This is a huge step for Microsoft, which – like any other company – is a strong believer in its own products and does well by keeping its developers happy with what they are used to. It is nothing Microsoft needed to do to stay relevant. But it is the right thing to do to ensure that the world of Node also has more choice and is not dependent on one predominant VM. Many players in the market see the benefits of Node and want to support it, but are not sold on a dependency on one JavaScript VM. A few are ready to roll out their own VMs, which cater to special needs, for example in the IoT space.

This angers a few people in the Node world. They worry that with several VMs, the “browser hell” of supporting all kinds of environments will come to Node. Yes, it will mean having to support more engines. But it is also an opportunity to understand that by using standardised code, ratified by the TC39, your solutions will be much sturdier. Relying on specialist functionality of one engine always means that you are dependent on it not changing. And we are already seeing far too many Node-based solutions that can’t upgrade to the latest version because breaking changes would mean a complete re-write.

ChakraCore matters the same way browsers that dared to support web standards mattered. It is a choice showing that to be future proof, Node developers need to be ready to allow their solutions to run on various VMs. I’m looking forward to seeing how this plays out. It took the web a few years to understand the value of standards and choice. Much rhetoric was thrown around on either side. I hope that with the great opportunity that Node is to innovate and use ECMAScript for everything we will get there faster and with less dogmatic messaging.

Photo by Ian D. Keating

Daniel Stenberg: A third day of deep HTTP inspection

This fine morning started off with some news: Patrick is now our brand new official co-chair of the IETF HTTPbis working group!

Subodh then sat down and took us off on a presentation that really triggered a long and lively discussion. “Retry safety extensions” was his name of it but it involved everything from what browsers and HTTP clients do for retrying with no response and went on to also include replaying problems for 0-RTT protocols such as TLS 1.3.

Julian did a short presentation on HTTP headers and his draft for JSON in new headers, and we quickly fell down a deep hole of discussions around various formats, with ups and downs on them all. The general feeling seems to be that JSON will not be a good idea for headers in spite of a couple of good characteristics, partly because of its handling of duplicate field entries and how it handles, or doesn’t handle, numerical precision (i.e. you can send “100” as a monstrously large floating point number).
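Both objections are easy to demonstrate. A quick sketch using Python’s stdlib json module, whose behaviour here is typical of most JSON parsers:

```python
import json
import math

# Objection 1: duplicate field entries. Most parsers silently keep
# only the last value, dropping the earlier one without any error.
assert json.loads('{"Accept": "a", "Accept": "b"}') == {"Accept": "b"}

# Objection 2: numerical precision. "100" may legally arrive as a
# float of any magnitude, and values beyond IEEE-754 double
# precision degrade silently.
assert json.loads("1.0e2") == 100
assert json.loads("1e400") == math.inf  # overflows to infinity, no error
assert json.loads("9007199254740993") + 0.0 == 9007199254740992.0  # 2**53 + 1 rounds down
```

Any header scheme built on JSON would inherit exactly these silent failure modes, which is what made the room wary.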

Mike did a presentation he called “H2 Regrets” in which he covered his work on a draft for support of client certs (which is basically forbidden due to h2’s ban on TLS renegotiation), brought up the idea of extended settings, and discussed the lack of special handling of dates in HTTP headers (why we send 29 bytes instead of 4). It shows there are improvements to be had in the future too!

Martin talked to us about Blind caching and how the concept of this works. Put very simply: it is a way to make it possible to offer cached content for clients using HTTPS, by storing the data in a 3rd host and pointing out that data to the client. There was a lengthy discussion around this and I think one of the outstanding questions is if this feature is really giving as much value to motivate the rather high cost in complexity…

The list of remaining Lightning Talks had grown to 10 talks and we fired them all off at a five minutes per topic pace. I brought up my intention and hope that we’ll do a QUIC library soon to experiment with. I personally particularly enjoyed EKR’s TLS 1.3 status summary. I heard appreciation from others and I agree with this that the idea to feature lightning talks was really good.

With this, the HTTP Workshop 2016 was officially ended. There will be a survey sent out about this edition and what people want to do for the next/future ones, and there will be some sort of  report posted about this event from the organizers, summarizing things.

Attendee numbers

The companies with the most attendees present were: Mozilla 5, Google 4, and Facebook, Akamai and Apple with 3 each.

The attendees were from the following regions of the world: North America 19, Europe 15, Asia/pacific 6.

38 participants were male and 2 female.

23 of us were also at the 2015 workshop, 17 were newcomers.

15 people did lightning talks.

I believe 40 is about as many as you can put in a single room and still have discussions. Going larger will make it harder to make yourself heard as easily and would probably force us to have to switch to smaller groups more and thus not get this sort of great dynamic flow. I’m not saying that we can’t do this smaller or larger, just that it would have to make the event different.

Some final words

I had an awesome few days and I loved all of it. It was a pleasure organizing this, and I’m happy that Stockholm showed its best face weather-wise during these days. I was also happy to hear that so many people enjoyed their time here in Sweden. The hotel and its facilities, including the food and coffee, worked out smoothly, I think, with no complaints at all.

Hope to see you again at the next HTTP Workshop!

Air Mozilla: The Joy of Coding - Episode 65

The Joy of Coding - Episode 65 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Reps Community: Rep of the Month – July 2016

Please join us in congratulating Christophe Villeneuve as Rep of the Month for July 2016!

Christophe Villeneuve has been a Rep for more than 9 months now and has reported more than 100 activities in the program. From talks on security to writing articles and organizing events, Christophe is active in many different areas. Did we already mention that he also bakes Firefox cookies?


His energy and drive to promote the Open Web and web security are astonishing. Even if external factors sometimes intervene and some activities hit blockers, he neither gets disappointed nor quits; he looks for the next possibility out there. He truly is an open source believer, contributing to other open source communities (like Drupal, PHP, MariaDB) as well, and he tries to combine those activities for bigger audiences.

Please don’t forget to congratulate him on Discourse!

Mozilla Addons Blog: Linting and Automatically Reloading WebExtensions

We recently announced web-ext 1.0, a command line tool that makes developing WebExtensions more of a breeze. Since then we’ve fixed numerous bugs and added two new features: automatic extension reloading and a way to check for “lint” in your source code.

You can read more about getting started with web-ext or jump in and install it with npm like this:

npm install --global web-ext

Automatic Reloading

Once you’ve built an extension, you can try it out with the run command:

web-ext run

This launches Firefox with your extension pre-installed. Previously, you would have had to manually re-install your extension any time you changed the source. Now, web-ext will automatically reload the extension in Firefox when it detects a source file change, making it quick and easy to try out a new icon or fiddle with the CSS in your popup until it looks right.

Automatic reloading is only supported in Firefox 49 or higher but you can still run your extension in Firefox 48 without it.
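For reference, web-ext run just needs a directory containing a WebExtension. A minimal, illustrative manifest.json with a popup you could fiddle with might look like this (the name and popup file are placeholders, not from the original post):

```json
{
  "manifest_version": 2,
  "name": "Hello WebExtension",
  "version": "0.1",
  "browser_action": {
    "default_popup": "popup.html"
  }
}
```

Save a change to popup.html or its CSS while web-ext run is active, and Firefox 49+ picks it up without a manual re-install.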

Checking For Code Lint

If you make a mistake in your manifest or any other source file, you may not hear about it until a user encounters the error or you try submitting the extension to addons.mozilla.org. The new lint command will tell you about these mistakes so you can fix them before they bite you. Run it like this:

web-ext lint

For example, let’s say you are porting an extension from Chrome that used the history API, which hasn’t fully landed in Firefox at the time of this writing. Your manifest might declare the history permission like this:

{
  "manifest_version": 2,
  "name": "My Extension",
  "version": "1.0",
  "permissions": ["history"]
}
When running web-ext lint from the directory containing this manifest, you’ll see an error explaining that history is an unknown permission.

Try it out and let us know what you think. As always, you can submit an issue if you have an idea for a new feature or if you run into a bug.

Dave Hunt: A Summer to Mentor

This summer I am mentoring Justin Potts – a university intern working on improving Mozilla’s add-ons related test automation, and Ana Ribeiro – an Outreachy participant working on enhancing the pytest-html plugin.

It’s not my first time working with Justin, who has been a regular team contributor for a few years now, and last summer helped me to get the pytest-selenium plugin released. It certainly helped to have previous experience working with Justin when deciding to take on the official role as mentor for his first internship. Unfortunately, his project is rather difficult to define, as he’s been working on a number of things, though mostly they are related to Firefox add-ons and running automated tests. There’s no shortage of challenging tasks for Justin to work on, and he’s taking them on with the enthusiasm that I expected he would. You can read more about Justin’s internship on his blog.

Ana’s project grew out of a security exploit discovered in Jenkins, which led to the introduction of the Content-Security-Policy header for static files being served. This meant that the fancy HTML reports generated by pytest-html were broken due to their use of JavaScript, inline CSS, and inline images. Along with a few other enhancements, providing a CSP-friendly report from the plugin became a perfect candidate project for Outreachy. As part of her application, Ana contributed a patch for pytest-variables, and I was impressed with her level of communication over the patch. To get Ana familiar with the plugin, her initial contributions were not related to the CSP issue, but she’s now making good progress on this. You can read more about Ana’s Outreachy project on her dedicated blog.
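For context, the Jenkins hardening in question serves user-supplied static files behind a restrictive Content-Security-Policy header; its documented default is along these lines (quoted from memory, so treat it as illustrative rather than exact):

```
Content-Security-Policy: sandbox; default-src 'none'; img-src 'self'; style-src 'self';
```

Under a policy like this, the inline scripts, inline style attributes, and data: URI images that pytest-html reports relied on are all blocked, which is why the reports appeared broken.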

So far I have mostly enjoyed the experience of being a mentor – it especially feels great to see the results that Justin and Ana are producing. Probably the most challenging aspect for me is being remote – Justin is based in Mountain View, California, and Ana is based in Brazil. It’s hard to feel a connection when you’re dependent on instant messages and video conferencing, though I suspect it’s probably harder for them than it is for me. Fortunately, I did get to work with them a little in London during the all hands, and then some more with Ana in Freiburg during the pytest sprint.

There are still a few weeks left for their projects, and I’m hoping they’ll both be able to conclude them to their satisfaction!

Myk Melez: A basic browser app in Positron

Over in Positron, we’ve implemented enough of Electron’s <webview> element to run the basic browser app in this collection of Electron sample apps. To try it out:

git clone
git clone
cd positron
./mach build
./mach run ../electron-sample-apps/webview/browser/

Cameron Kaiser: And now for something completely different: Join me at Vintage Computer Festival XI

A programming note: My wife and I will be at the revised, resurrected Vintage Computer Festival XI August 6 and 7 in beautiful Mountain View, CA at the Computer History Museum (just down the street from the ominous godless Googleplex). I'll be demonstrating my very first home computer, the Tomy Tutor (a weird partial clone of the Texas Instruments 99/4A), and its Japanese relatives. Come by, enjoy the other less interesting exhibits, and bask in the nostalgic glow when 64K RAM was enough and cassette tape was king.

I'm typing this in a G5-optimized build of 45 and it seems to perform pretty well. JavaScript benches over 20% faster than 38 due to improvements in the JIT (and possibly some marginal improvement from gcc 4.8), and this is before I start doing further work on PowerPC-specific improvements which will be rolled out during 45's lifetime. Plus, the AltiVec code survived without bustage in our custom VP8, UTF-8 and JPEG backends, and I backported some graphics performance patches from Firefox 48 that improve throughput further. There's still a few glitches to be investigated; I spent most of tonight figuring out why I got a big black window when going to fullscreen mode (it turned out to be several code regressions introduced by Mozilla removing old APIs), and Amazon Music still has some weirdness moving from track to track. It's very likely there will be other such issues lurking next week when you get to play with it, but that's what a beta cycle is for.

38.10 will be built over the weekend after I'm done doing the backports from 45.3. Stay tuned for that.

Eric Shepherd: MDN pro tip: Watch for changes

The Web moves pretty fast. Things are constantly changing, and the documentation content on the Mozilla Developer Network (MDN) is constantly changing, too. The pace of change ebbs and flows, and often it can be helpful to know when changes occur. I hear this most from a few categories of people:

  • Firefox developers who work on the code which implements a particular technology. These folks need to know when we’ve made changes to the documentation so they can review our work and be sure we didn’t make any mistakes or leave anything out. They often also like to update the material and keep up on what’s been revised recently.
  • MDN writers and other contributors who want to ensure that content remains correct as changes are made. With so many people making changes to some of our content, keeping up and being sure mistakes aren’t made and that style guides are followed is important.
  • Contributors to specifications and members of technology working groups. These are people who have a keen interest in knowing how their specifications are being interpreted and implemented, and in the response to what they’ve designed. The text of our documentation and any code samples, and changes made to them, may be highly informative for them to that end.
  • Spies. Ha! Just kidding. We’re all about being open in the Mozilla community, so spies would be pretty bored watching our content.

There are a few ways to watch content for changes, from the manual to the automated. Let’s take a look at the most basic and immediately useful tool: MDN page and subpage subscriptions.

Subscribing to a page

After logging into your MDN account (creating one if you don’t already have one), make your way to the page you want to subscribe to. Let’s say you want to be sure nobody messes around with the documentation about <marquee> because, honestly, why would anyone need to change that anyway?

Find the Watch button near the top of the MDN page; it’s a drawing of an eye. In the menu that opens when you hover over that icon, you’ll find the option “Subscribe to this page.” Simply click that. From then on, each time someone makes a change to the page, you’ll get an email. We’ll talk about that email in a moment.

First, we need to consider another form of content subscriptions: subtree or sub-article subscriptions.

Subscribing to a subtree of pages


Daniel Stenberg: Workshop day two

At 5pm we rounded off another fully featured day at the HTTP workshop. Here’s some of what we touched on today:

Moritz started the morning with an interesting presentation about experiments with running the exact same site and contents on h1 vs h2 over different kinds of networks, with different packet loss scenarios and with different ICWND set and more. Very interesting stuff. If he makes his presentation available at some point I’ll add a link to it.

I then got the honor of presenting the state of the TCP Tuning draft (which I’ve admittedly been neglecting a bit lately); the slides are here. I kept it brief but still got some feedback, and in general this is a draft people seem to agree is a good idea – keep sending me your feedback and help me improve it. I just need to pull myself together now and move it forward. I tried to be quick so I could hand over to…

Jana, who was back again to tell us about QUIC and the state of things in that area. His presentation apparently was a subset of slides he had presented the week before at the Berlin IETF. One interesting take-away for me was that they’ve noticed that the share of connections on which they detect UDP rate limiting has decreased by two-thirds during the last year!

Here’s my favorite image from his slide set. Apparently TCP/2 is not a name for QUIC that everybody appreciates! ;-)


While I think the topic of QUIC piqued the interest of most people in the room, and there were a lot of questions, thoughts and ideas around the topic, we still managed to get to the lunch break pretty much on time and could run off and have another lovely buffet lunch. There’s certainly no risk of us losing weight during this event…

After lunch we got ourselves a series of Lightning talks presented for us: seven short talks on various subjects that people had signed up to do.

One of the lightning talks that stuck with me was what I would call the idea about an extended Happy Eyeballs approach that I’d like to call Even Happier Eyeballs: make the client TCP connect to all IPs in a DNS response and race them against each other and use the one that responds with a SYN-ACK first. There was interest expressed in the room to get this concept tested out for real in at least one browser.
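The core of that “Even Happier Eyeballs” idea can be sketched in a few lines. This is my own illustrative Python sketch, not code from the talk: connect to every address from the DNS answer in parallel and keep whichever handshake completes first.

```python
import concurrent.futures
import socket

def race_connect(addresses, port, timeout=5.0):
    """Start a TCP connect to every candidate address in parallel and
    return the socket whose handshake (SYN-ACK) completes first;
    surplus connections are closed. Returns None if all fail."""
    def attempt(addr):
        # create_connection blocks until the handshake completes or fails.
        return socket.create_connection((addr, port), timeout=timeout)

    winner = None
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(addresses)) as pool:
        futures = [pool.submit(attempt, addr) for addr in addresses]
        for fut in concurrent.futures.as_completed(futures):
            try:
                sock = fut.result()
            except OSError:
                continue  # this candidate was unreachable or timed out
            if winner is None:
                winner = sock   # first successful handshake wins
            else:
                sock.close()    # a later winner is a surplus connection
    return winner
```

A real client would abort the losing attempts instead of letting them run to completion, but the sketch shows the basic race.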

We then fell over into the area of HTTP/3 ideas and what the people in the room think we should be working on for that. It turned out that the list of stuff we created last year at the workshop was still actually a pretty good list and while we could massage that a bit, it is still mostly the same as before.

Anne presented fetch and how browsers use HTTP. Perhaps a bit surprisingly, that soon brought us over into the subject of trailers, how to support them, and voilà – in the end we possibly even agreed that we should consider handling them somehow in browsers and even for JavaScript APIs… (nah, curl/libcurl doesn’t have any particular support for trailers, but will of course get it if we actually see things out there start to use them for real)

I think we deserved a few beers after this day! The final workshop day is tomorrow.

Emma Irwin: Passports for Community Leadership

This is #1 of 5 posts I identified as perhaps being worth finishing and sharing. Writing never feels finished, and it’s a vulnerable thing to share ideas – but perhaps better than never sharing them at all?

I wrote most of this post in April of this year (making this outdated with the current work of the Participation Team), thinking about ways the learning format of the Leadership Summit in Singapore could evolve into a valuable tool for community leadership development and credentialing.  Community Leadership Passport(s) perhaps…


At the Participation Leadership Summit in Singapore, we designed the schedule in time blocks sorted by the Leadership Framework.  This meant that everyone attended at least one session identified under each of the building blocks.  The schedule was structured something like this…


As you can see, the structure  ensured that everyone experienced learning outcomes of the entire framework, while still providing choice in what felt most relevant, exciting or interesting in their personal development.  You can find some of this content here.

I started wondering..

How might we evolve the schedule design and content into a format for leadership development that also provides real world credentials?

I don’t think the answer is to take this schedule and make it a static ‘course’ or offering, and I don’t think it is about an ‘event in a box’. But I do think there’s something in using the framework to enforce quality leadership development, while giving power to what people want to learn and how they prefer to learn.

Merging this idea + my previous work with participation ‘steps & ladders’ into something like a passport, or series of passports for leadership.



Really, this is about creating a mechanism for helping people build leadership credentials in a way that intersects what they want to learn and do with what the project needs. It could be used for anything from developing strong mentors, to project leads in areas like IoT and Rust, to governance and diversity & inclusion. I imagine passports with 3 attributes:

Experience – Taking action, completing tasks, generating experiences associated with learning and project outcomes. Should be clear, and feel doable without too much detail.

Mozilla Content – Completing a course either developed by Mozilla or approved as Mozilla content. These could be online or in-person events.

Learner Choice – Encouraging exploration and learning that feels valuable, interesting and fun – but with some guidelines for topics and outcomes, and likely recommendations to make things easier. For example, some people might want to complete a Coursera course on IoT and embedded systems, while others might prefer a ‘learning by doing’ approach via YouTube channels.

Something like a Leadership Passport would obviously require more thought on implementation, tracking, and issuing certification. It could also be used to test and evolve the Leadership Framework. I prefer it over a participation ladder because it feels less prescriptive about ‘how’ we step up as leaders and more supportive of the ways we want to learn and lead – and it would ultimately help us recognize and invest in emerging leaders sooner.

Image Credit:  Kate Harding – Quilt of Nations.


The Mozilla Blog: Mozilla Delivers Improved User Experience in Firefox for iOS

When we rolled out Firefox for iOS late last year, we got a tremendous response and millions of downloads. Lots of Firefox users were ecstatic they could use the browser they love on the iPhone or iPad they had chosen. Today, we’re thrilled to release some big improvements to Firefox for iOS. These improvements will give users more speed, flexibility and choice, three things we care deeply about.

A Faster Firefox Awaits You


It’s summer intern season and our Firefox for iOS Engineer intern Tyler Lacroix pulled out all the stops this month when he unveiled the results of his pet project – making Firefox faster. In Tyler’s testing, he saw up to 40% reduction in CPU usage and up to 30% reduction in memory usage when using this latest version of Firefox. What this means is that users can get to their Web pages faster while seeing battery life savings. Of course, all devices and humans are different so results may vary. Either way, we are psyched to roll out these improvements to you today.

It’s Now Easy to Add Any Website Specific Search Engine. And Change Your Mind.


We’ve already included a set of the most popular search engines in Firefox for iOS, but users may want to search other sites right from the address bar. Looking for that perfect set of moustache handlebars for a vintage road bike? Users can add sites like Craigslist and eBay. Want to become a trivia champ? Get one-tap access to Wikipedia. Simply go to a website with a search box and tap on the magnifying glass to add that search to your list of search engines.


New Menu for Easier Navigation


Navigation in iOS browsers is a huge pain point for users who have come to expect the same seamless experience that’s available on their desktop or laptop. Firefox for iOS features a brand new menu on the toolbar that allows for easier navigation and quick access to frequently used features – from adding a bookmark to finding text in page.


Recover Closed Tabs and Quickly Flip Through Open Tabs


Browser tabs on mobile devices have traditionally been difficult to use: hard to see, hard to manage, hard to navigate, and gone forever in a tap. In this upgraded Firefox for iOS, users can easily recover all closed tabs and navigate through open tabs.


Home Again, Home Again, With One Quick Tap



Almost everyone has one page they go to first and return to often. Further expanding the ability to customize their Firefox for iOS experience, users can now set their favorite site as their homepage. The designated website will open immediately with a tap of the “home” button. This makes it easier than ever before to visit preferred sites in a matter of seconds.

We created these new features in Firefox for iOS because of what we heard from our users, and we look forward to more feedback on the updates. To check out our handiwork, download Firefox for iOS from the App Store and let us know what you think.

From iOS to Android to Windows to Linux, we are supporting a healthy and open web by building a better Firefox. And we couldn’t do it without our hundreds of millions of active users across all platforms and our vibrant community. Thanks, everyone!



Air Mozilla: Connected Devices Weekly Program Update, 26 Jul 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Andreas Tolfsen: Update from WebDriver WG meeting in July 2016

The W3C Browser Testing and Tools Working Group met again 13-14 July 2016 in Redmond, WA to discuss the progress of the WebDriver specification. I will try to summarise the discussions, but if you’re interested in all the details, the meetings have been meticulously scribed.

I wrote about the progress from our TPAC 2015 meeting previously, and we appear to have made good progress since then. The specification text is nearing completion, although it is missing a few important chapters: Some particularly obvious omissions are the complete lack of input handling, and a big, difficult void where advanced user actions are meant to be.


James has been hard at work drafting a proposal for action semantics, which we went over in great detail. I think it’s fair to say there had been conceptual agreement in the working group on what the actions were meant to accomplish, but that the details of how they were going to work were extremely scarce.

WebDriver tries to innovate on the actions as they appear in Selenium. Actions in Selenium were originally meant to provide a way to pipeline a sequence of interactions—such as pressing down a mouse button, moving the mouse, and releasing it—through a complex data structure to a single command endpoint. The idea was that this would help address some of the race conditions that are intrinsically part of the one-directional design of the protocol, and reduce latency which may be critical when interacting with a document.

Unfortunately the pipelining design to reduce the number of HTTP requests was never quite implemented in Selenium, and the API design suffered from over-specialisation of different types of input devices and actions. The specification attempts to rectify this by generalising the range of input device classes and by associating the actions that can be performed with a certain class. This means we are moving away from a flat sequence of types, such as [{type: "mouseDown"}, {type: "mouseMove"}, {type: "mouseUp"}], to a model where each input device has its own “track”. This limits the actions you can perform with each device, which makes some conceptual sense because it would be impossible to, for example, type keys with a mouse or press a mouse button with a stylus/pen input device.

The side-effect of this design is that it allows for parallelisation of actions from one or more types of input devices. This is an important development, as it makes it possible to combine primitives for input methods such as touch: In reality, a device cannot determine whether two fingers are “associated” with the same hand. So instead of defining high-level actions such as pinch and flick, it gives you the right level of granularity to combine actions from two or more touch “finger” devices to synthesise more complex movements. We believe this is a good approach with the right level of granularity that doesn’t try to over-specify or shoehorn in primitives that might not make sense in a cross-browser automation setting.
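To make the track model concrete, here is a hedged sketch (in Python, building the payload as a plain dict) of what a two-finger pinch might look like. Action names such as pointerDown and pointerMove follow the direction the proposal was heading; the exact wire format was still in flux at the time of this meeting.

```python
# Sketch of the per-device "track" model for a two-finger pinch.
# Each input source gets its own list of actions; the remote end steps
# through all tracks in lock-step, one "tick" at a time.

def touch_track(device_id, moves):
    """Build one touch input source: press, move through points, release."""
    return {
        "type": "pointer",
        "id": device_id,
        "parameters": {"pointerType": "touch"},
        "actions": (
            [{"type": "pointerDown", "button": 0}]
            + [{"type": "pointerMove", "duration": 100, "x": x, "y": y}
               for x, y in moves]
            + [{"type": "pointerUp", "button": 0}]
        ),
    }

# Two fingers moving toward each other: because the tracks are dispatched
# in parallel, the combination synthesises a pinch gesture.
pinch = {"actions": [
    touch_track("finger1", [(100, 200), (150, 200)]),
    touch_track("finger2", [(300, 200), (250, 200)]),
]}
```

Actions at the same index across tracks happen in the same tick, which is exactly the granularity needed to compose gestures out of independent “finger” devices.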

I’m looking forward to seeing James’ work land in the specification text. Some explanatory notes and examples will probably be required to fully explain this concept to both implementors and users.

Input locality

A known limitation of Selenium that we are not proud of is that it does not have a good story for input with alternative keyboard layouts. We have explicitly phrased the specification in such a way that it doesn’t make it impossible to retrofit in support for multiple layouts in the future. But right now we want to finish the baseline of the specification before we try moving into this.

The current design ideas floating around are to have some way of setting a keyboard layout, either through a command or a capability. The same character can then map to different key events depending on the layout: typing “?” requires Shift and the / key on an American layout, while typing “/” requires Shift and the 7 key on a Norwegian layout. The biggest reason this is hard is that we need to find the right key code conversion tables describing what happens when a given character is typed on each layout.
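As an illustration only (the layout names and table contents below are invented for this sketch, not spec data), per-layout conversion tables might look something like this:

```python
# Illustrative per-layout conversion tables: the same character needs
# different physical keys (and modifiers) depending on the active
# layout. Table contents are invented examples, not real spec data.

LAYOUTS = {
    "en-US": {"/": ((), "Slash"), "?": (("Shift",), "Slash")},
    "nb-NO": {"/": (("Shift",), "Digit7"), "?": (("Shift",), "Minus")},
}

def key_events_for(char, layout):
    """Return the (modifiers, physical key) needed to type `char`."""
    table = LAYOUTS[layout]
    if char not in table:
        raise KeyError(f"{char!r} cannot be typed on layout {layout}")
    return table[char]
```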

Untrusted SSL certificates

We had a big discussion on invalid, self-signed, and untrusted SSL certificates. The general agreement in the WG is that it would be good to have functionality to allow a WebDriver session to bypass the security checks associated with them, as WebDriver may be run in an environment where it is difficult or even impossible to instrument the browser/environment in such a way that they are accepted implicitly (e.g. by modifying the root store).

Different browser vendors raised questions over whether this would pass security review, since implementing such a feature increases the attack surface in one of the most critical components of a web browser. A counterargument is that once your browser has WebDriver enabled, you probably have bigger things to worry about than the fact that untrusted certificates are implicitly accepted.

We also found that this is highly inconsistently implemented in Selenium. For the two drivers that support it, FirefoxDriver (written and maintained by Selenium) has an acceptSslCerts capability that takes a boolean to switch off security checks, and chromedriver (by Google) by contrast accepts all certificates by default. The remaining drivers have no support for it.

This leaves the working group free to decide on a new and consistent approach. One point of concern is that a boolean to disable all security checks seems like an overly coarse design. A suggested alternative is to provide a list of domains to disable the checks for, where wildcards can be expanded to cover every domain or every subdomain, so that e.g. ["*"] would be equivalent to setting acceptSslCerts to true in today’s Firefox implementation, while ["*.example.com"] would only allow untrusted certificates on that particular domain.
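A minimal sketch of how such a list of host patterns might be interpreted, assuming shell-style wildcard matching (the helper name is hypothetical, not anything from the spec):

```python
from fnmatch import fnmatchcase

# Hypothetical interpretation of the proposed capability: a list of
# host patterns for which certificate checks are disabled. "*" matches
# every host; "*.example.com" matches only subdomains of example.com.

def insecure_ok(host, patterns):
    """True if certificate errors should be ignored for `host`."""
    return any(fnmatchcase(host, pattern) for pattern in patterns)
```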

Navigation and URLs

Because WebDriver taps into the browser’s navigation algorithm at a much later point than when a user interacts with the address bar, we decided that malformed URLs should consistently return an error. We have also changed the prose to no longer mislead users to think that navigating in effect means the same as using the address bar; the address bar is not a concept of the web platform.

There was a proposal from Mozilla to allow navigation to relative URLs, so that one could navigate to e.g. "/foo" to go to that path on the current domain, similar to how window.location = "/foo" works. This was unfortunately voted down. I feel it would be useful, even just for consistency, for the WebDriver navigation command to mirror the platform API, modulo security checks.

Desired vs. required capabilities

A big discussion during the meeting was around the continuing confusion around capabilities: Many feel they are an intermediary node concept that is best left undefined in the core specification text itself, because the specification explicitly does not define any qualities or expectations about local ends (client bindings) or intermediary nodes (Selenium server or proxy that gives you a session).

There was however consensus around the fact that having a way to pick a browser configuration from some matrix was a good idea. The uncertainty, I think, comes largely from driver implementors who feel that once capabilities reach the driver, there is very little that can be done about the sort of conflict resolution that desired and required capabilities warrant.

For example, what does it mean to desire a profile and how do you know if the provided profile is valid? We were unable to reach any agreement on this and decided to punt the topic for our next meeting in Lisbon.

Test coverage

In order to push the specification to “Rec” (short for Recommendation) one must have at least two interoperable implementations from two separate vendors. To determine that they are interoperable, one needs a test suite. I’ve written previously about the test harness I wrote for the Web Platform Tests that integrates WebDriver spec tests with wptrunner.

We have a few exhaustive tests for a couple of chapters, but I hope to continue this work this quarter.

Next meeting

The working group is meeting again for TPAC, which this year is in Lisbon (how civilised!), in late September. I’m enormously looking forward to visiting as I’ve never been.

We hope to resolve the outstanding capabilities discussion and to make final decisions on a few more minor outstanding issues then.

Tim TaubertThe Evolution of Signatures in TLS

This post takes a look at the evolution of signature algorithms and schemes in the TLS protocol since version 1.0. I started out taking notes for myself but then decided to polish and publish them, hoping that others will benefit as well.

(Let’s ignore client authentication for simplicity.)

Signature algorithms in TLS 1.0 and TLS 1.1

In TLS 1.0 as well as TLS 1.1 there are only two supported signature schemes: RSA with MD5/SHA-1 and DSA with SHA-1. The RSA here stands for the PKCS#1 v1.5 signature scheme, naturally.

select (SignatureAlgorithm) {
    case rsa:
        digitally-signed struct {
            opaque md5_hash[16];
            opaque sha_hash[20];
        };
    case dsa:
        digitally-signed struct {
            opaque sha_hash[20];
        };
} Signature;

An RSA signature signs the concatenation of the MD5 and SHA-1 digests; the DSA signature covers only the SHA-1 digest. The hashes are computed as follows:

h = Hash(ClientHello.random + ServerHello.random + ServerParams)

The ServerParams are the actual data to be signed, the *Hello.random values are prepended to prevent replay attacks. This is the reason TLS 1.3 puts a downgrade sentinel at the end of ServerHello.random for clients to check.

The ServerKeyExchange message containing the signature is sent only when static RSA/DH key exchange is not used, that means we have a DHE_* cipher suite, an RSA_EXPORT_* suite downgraded due to export restrictions, or a DH_anon_* suite where neither party authenticates.
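The signed input can be sketched with Python’s hashlib, using placeholder byte strings instead of real handshake data:

```python
import hashlib

# Placeholder inputs; in a real handshake these come off the wire.
client_random = bytes(32)               # ClientHello.random
server_random = bytes(range(32))        # ServerHello.random
server_params = b"<serialized ServerParams>"

signed_input = client_random + server_random + server_params

# RSA signs MD5(input) || SHA-1(input); DSA signs only SHA-1(input).
rsa_digest = (hashlib.md5(signed_input).digest()
              + hashlib.sha1(signed_input).digest())
dsa_digest = hashlib.sha1(signed_input).digest()
```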

Signature algorithms in TLS 1.2

TLS 1.2 brought bigger changes to signature algorithms by introducing the signature_algorithms extension. This is a ClientHello extension allowing clients to signal supported and preferred signature algorithms and hash functions.

enum {
    none(0), md5(1), sha1(2), sha224(3), sha256(4), sha384(5), sha512(6)
} HashAlgorithm;

enum {
    anonymous(0), rsa(1), dsa(2), ecdsa(3)
} SignatureAlgorithm;

struct {
    HashAlgorithm hash;
    SignatureAlgorithm signature;
} SignatureAndHashAlgorithm;

If a client does not include the signature_algorithms extension then it is assumed to support RSA, DSA, or ECDSA (depending on the negotiated cipher suite) with SHA-1 as the hash function.

Besides adding all SHA-2 family hash functions, TLS 1.2 also introduced ECDSA as a new signature algorithm. Note that the extension does not allow restricting the curve used for a given scheme; P-521 with SHA-1 is therefore perfectly legal.

A new requirement for RSA signatures is that the hash has to be wrapped in a DER-encoded DigestInfo sequence before passing it to the RSA sign function.

DigestInfo ::= SEQUENCE {
    digestAlgorithm DigestAlgorithm,
    digest OCTET STRING
}

This unfortunately led to attacks like Bleichenbacher’06 and BERserk because it turns out handling ASN.1 correctly is hard. As in TLS 1.1, a ServerKeyExchange message is sent only when static RSA/DH key exchange is not used. The hash computation did not change either:

h = Hash(ClientHello.random + ServerHello.random + ServerParams)
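As a sketch of the DigestInfo wrapping described above: for SHA-256 the DER encoding amounts to a fixed 19-byte prefix followed by the digest (prefix taken from PKCS#1). This is illustrative and not a complete PKCS#1 v1.5 signing implementation:

```python
import hashlib

# DER encoding of the DigestInfo header for SHA-256, per PKCS#1:
# SEQUENCE { AlgorithmIdentifier { OID sha256, NULL }, OCTET STRING (32) }
SHA256_DIGESTINFO_PREFIX = bytes.fromhex(
    "3031300d060960864801650304020105000420"
)

def digest_info_sha256(message: bytes) -> bytes:
    """DER DigestInfo for SHA-256: fixed prefix || 32-byte digest."""
    return SHA256_DIGESTINFO_PREFIX + hashlib.sha256(message).digest()
```

The fact that verifiers must parse this ASN.1 structure correctly is exactly where implementations like the ones hit by Bleichenbacher’06 and BERserk went wrong.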

Signature schemes in TLS 1.3

The signature_algorithms extension introduced by TLS 1.2 was revamped in TLS 1.3 and MUST now be sent if the client offers at least one non-PSK cipher suite. The format is backwards compatible and keeps old code points.

enum {
    /* RSASSA-PKCS1-v1_5 algorithms */
    rsa_pkcs1_sha1 (0x0201),
    rsa_pkcs1_sha256 (0x0401),
    rsa_pkcs1_sha384 (0x0501),
    rsa_pkcs1_sha512 (0x0601),

    /* ECDSA algorithms */
    ecdsa_secp256r1_sha256 (0x0403),
    ecdsa_secp384r1_sha384 (0x0503),
    ecdsa_secp521r1_sha512 (0x0603),

    /* RSASSA-PSS algorithms */
    rsa_pss_sha256 (0x0700),
    rsa_pss_sha384 (0x0701),
    rsa_pss_sha512 (0x0702),

    /* EdDSA algorithms */
    ed25519 (0x0703),
    ed448 (0x0704),

    /* Reserved Code Points */
    private_use (0xFE00..0xFFFF)
} SignatureScheme;

Instead of SignatureAndHashAlgorithm, a code point is now called a SignatureScheme and tied to a hash function (if applicable) by the specification. TLS 1.2 algorithm/hash combinations not listed here are deprecated and MUST NOT be offered or negotiated.

New code points for RSA-PSS schemes, as well as Ed25519 and Ed448-Goldilocks were added. ECDSA schemes are now tied to the curve given by the code point name, to be enforced by implementations. SHA-1 signature schemes SHOULD NOT be offered, if needed for backwards compatibility then only as the lowest priority after all other schemes.

The current draft-13 still lists RSASSA-PSS as the only valid signature algorithm allowed to sign handshake messages with an RSA key. The rsa_pkcs1_* values solely refer to signatures which appear in certificates and are not defined for use in signed handshake messages. There is hope.

To prevent various downgrade attacks like FREAK and Logjam the computation of the hashes to be signed has changed significantly and covers the complete handshake, up until CertificateVerify:

h = Hash(Handshake Context + Certificate) + Hash(Resumption Context)

This includes amongst other data the client and server random, key shares, the cipher suite, the certificate, and resumption information to prevent replay and downgrade attacks. With static key exchange algorithms gone the CertificateVerify message is now the one carrying the signature.
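Treating “+” in the formula above as concatenation, the computation can be sketched like this, with placeholder inputs and SHA-256 standing in for the negotiated hash:

```python
import hashlib

# Placeholder transcripts; in a real handshake these are the serialized
# handshake messages exchanged so far.
handshake_context = b"<ClientHello ... CertificateRequest>"
certificate = b"<server Certificate message>"
resumption_context = b"<resumption context>"

# draft-13: h = Hash(Handshake Context + Certificate) + Hash(Resumption Context)
h = (hashlib.sha256(handshake_context + certificate).digest()
     + hashlib.sha256(resumption_context).digest())
```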

Giorgos LogiotatidisUser friendly website analytics with Sandstorm Oasis and Piwik

Piwik is a great FLOSS website analytics platform. I've been self hosting it for different small websites I've managed through the years. Although it's fairly easy to set up and maintain at this level of use, I want to avoid having another service on my maintenance list.

While looking for user- and web-respecting alternatives (read: something other than Google Analytics), I realized that Sandstorm Oasis supports Piwik.

I logged in and set up my Piwik instance, or Grain as Sandstorm calls instances, in less than 30 seconds. The tricky part is to copy the tracking code provided by Sandstorm instead of the code in the Piwik documentation, since the former is customized to work with Sandstorm's special API interface. Paste it into your HTML and you're done!

So if you're looking for decent solutions that respect your users and the web, give Sandstorm a try. They are on a "mission to make open source and indie web applications viable as an ecosystem" and they are doing so by developing a platform which makes it super easy to run many open source web apps, like Piwik, Rocket.Chat, Ghost, GitLab, WordPress and others. Their hosted Oasis platform also comes with a free plan.

This Week In RustThis Week in Rust 140

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

In what seems to be becoming a tradition, user gsingh93 suggested his trace crate, a syntax extension that inserts print! statements into functions to help trace execution. Thanks, gsingh93!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

76 pull requests were merged in the last two weeks.

New Contributors

  • Evgeny Safronov
  • Matt Horn

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

you have a problem. you decide to use Rust. now you have a Rc<RefCell<Box<Problem>>>

kmc on #rust.

Thanks to Alex Burka for the tip. Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Nicholas NethercoteFirefox 64-bit for Windows can take advantage of more memory

By default, on Windows, Firefox is a 32-bit application. This means that it is limited to using at most 4 GiB of memory, even on machines that have more than 4 GiB of physical memory (RAM). In fact, depending on the OS configuration, the limit may be as low as 2 GiB.

Now, 2–4 GiB might sound like a lot of memory, but it’s not that unusual for power users to use that much. This includes:

  • users with many (dozens or even hundreds) of tabs open;
  • users with many (dozens) of extensions;
  • users of memory-hungry web sites and web apps; and
  • users who do all of the above!

Furthermore, in practice it’s not possible to totally fill up this available space because fragmentation inevitably occurs. For example, Firefox might need to make a 10 MiB allocation and there might be more than 10 MiB of unused memory, but if that available memory is divided into many pieces all of which are smaller than 10 MiB, then the allocation will fail.
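A toy simulation makes the point: plenty of free memory in total, yet no single region big enough for the request.

```python
# Toy illustration of address space fragmentation: lots of free memory
# overall, but no single contiguous region can satisfy one allocation.

free_regions_mib = [8, 6, 9, 7, 5]   # five free holes, 35 MiB in total

def can_allocate(request_mib, regions):
    """A simple allocator needs one contiguous region >= the request."""
    return any(region >= request_mib for region in regions)

total_free = sum(free_regions_mib)       # 35 MiB free overall...
ok = can_allocate(10, free_regions_mib)  # ...yet a 10 MiB request fails
```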

When an allocation does fail, Firefox can sometimes handle it gracefully. But often this isn’t possible, in which case Firefox will abort. Although this is a controlled abort, the effect for the user is basically identical to an uncontrolled crash, and they’ll have to restart Firefox. A significant fraction of Firefox crashes/aborts are due to this problem, known as address space exhaustion.

Fortunately, there is a solution to this problem available to anyone using a 64-bit version of Windows: use a 64-bit version of Firefox. Now, 64-bit applications typically use more memory than 32-bit applications. This is because pointers, a common data type, are twice as big; a rough estimate for 64-bit Firefox is that it might use 25% more memory. However, 64-bit applications also have a much larger address space, which means they can access vast amounts of physical memory, and address space exhaustion is all but impossible. (In this way, switching from a 32-bit version of an application to a 64-bit version is the closest you can get to downloading more RAM!)

Therefore, if you have a machine with 4 GiB or less of RAM, switching to 64-bit Firefox probably won’t help. But if you have 8 GiB or more, switching to 64-bit Firefox probably will help the memory usage situation.

Official 64-bit versions of Firefox have been available since December 2015. If the above discussion has interested you, please try them out. But note the following caveats.

  • Flash and Silverlight are the only supported 64-bit plugins.
  • There are some Flash content regressions due to our NPAPI sandbox (for content that uses advanced features like GPU acceleration or microphone APIs).

On the flip side, as well as avoiding address space exhaustion problems, a security feature known as ASLR works much better in 64-bit applications than in 32-bit applications, so 64-bit Firefox will be slightly more secure.

Work is ongoing to fix or minimize the caveats mentioned above, and it is expected that 64-bit Firefox will be rolled out to increasing numbers of users in the not-too-distant future.

UPDATE: Chris Peterson gave me the following measurements about daily active users on Windows.

  • 66.0% are running 32-bit Firefox on 64-bit Windows. These users could switch to a 64-bit Firefox.
  • 32.3% are running 32-bit Firefox on 32-bit Windows. These users cannot switch to a 64-bit Firefox.
  • 1.7% are running 64-bit Firefox already.

UPDATE 2: Also from Chris Peterson, here are links to 64-bit builds for all the channels:

Mozilla Localization (L10N)L20n in Firefox: A Summary for Developers

L20n is a new localization framework for Firefox and Gecko. Here’s what you need to know if you’re a Firefox front-end developer.

Gecko’s current localization framework hasn’t changed in the last two decades. It is based on file formats which weren’t designed for localization. It offers crude APIs. It tasks developers with things they shouldn’t have to do. It doesn’t allow localizers to use the full expressive power of their languages.

L20n is a modern localization and internationalization infrastructure created by the Localization Engineering team in order to overcome these limitations. It was successfully used in Firefox OS. We’ve put parts of it on the ECMA standardization path. Now we intend to integrate it into Gecko and migrate Firefox to it.

Overview of How L20n Works

For Firefox, L20n is most powerful when it’s used declaratively in the DOM. The localization happens at runtime and gracefully falls back to the next language in case of errors. L20n doesn’t force developers to programmatically create string bundles, request raw strings from them and manually interpolate variables. Instead, L20n uses a Mutation Observer which is notified about changes to data-l10n-* attributes in the DOM tree. The complexity of the language negotiation, resource loading, error fallback and string interpolation is hidden in the mutation handler. It is still possible to use the JavaScript API to request a translation manually in rare situations when the DOM is not available (e.g. OS notifications).

What problems does L20n solve?

The current localization infrastructure is tightly-coupled: it touches many different areas of the codebase.  It also requires many decisions from the developer. Every time someone wants to add a new string they need to go through the following mental checklist:

  1. Is the translation embedded in HTML or XUL? If so, use the DTD format. Be careful to only use valid entity references or you’ll end up with a Yellow Screen of Death. Sure enough, the list of valid entities is different for HTML and for XUL. (For instance &hellip; is valid in HTML but not in XUL.)
  2. Is the translation requested dynamically from JavaScript? If so, use the .properties format.
  3. Does the translation use interpolated variables? If so, refer to the documentation on good practices and use #1, %S, %1$S, {name} or &name; depending on the use-case. (That’s five different ways of interpolating data!) For translations requested from JavaScript, replace the interpolation placeables manually with String.prototype.replace.
  4. Does the translation depend on a number in any of the supported languages? If so, use the PluralForm.jsm module to choose the correct variant of the translation. Specify all variants on a single line of the .properties file, separated by semicolons.
  5. Does the translation comprise HTML elements? If so, split the copy into smaller parts surrounding the HTML elements and put each part in its own translation. Remember to keep them in sync in case of changes to the copy. Alternatively write your own solution for replacing interpolation specifiers with HTML markup.
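As an illustration of step 4’s plural dance, here is roughly what the old mechanism amounts to, transliterated into Python (the real code is JavaScript using PluralForm.jsm) with a simplified English-only plural rule:

```python
# Illustration (in Python) of the old .properties plural dance:
# all variants live on one semicolon-separated line, and the caller
# picks a variant and substitutes "#1" by hand.

line = "You have #1 new notification;You have #1 new notifications"

def get_plural_form(n, variants):
    """Simplified stand-in for PluralForm.get: English-only rule."""
    forms = variants.split(";")
    return forms[0 if n == 1 else 1]

def format_notification(n):
    # Manual interpolation: a typo in "#1" would fail silently.
    return get_plural_form(n, line).replace("#1", str(n))
```

format_notification(3) yields “You have 3 new notifications”; all of the selection and interpolation logic lives in calling code rather than in the localization framework.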

What a ride! All of this just to add a simple You have no new notifications message to the UI. How do we fix this tight coupling?

L20n is designed around the principle of separation of concerns. It introduces a single syntax for all use-cases and offers a robust fallback mechanism in case of missing or broken translations.

Let’s take a closer look at some of the features of L20n which mitigate the headaches outlined above.

Single syntax

In addition to DTD and .properties files Gecko currently also uses .ini and .inc files for a total of four different localization formats.

L20n introduces a single file format based on ICU’s MessageFormat. It’s designed to look familiar to people who have previous experience with .properties and .ini. If you’ve worked with .properties or .ini before you already know how to create simple L20n translations.

Fig. 1. A primer on the FTL syntax

A single localization format greatly reduces the complexity of the ecosystem. It’s designed to keep simple translations simple and readable. At the same time it allows for more control from localizers when it comes to defining and selecting variants of translations for different plural categories, genders, grammatical cases etc. These features can be introduced only in translations which need them and never leak into other languages. You can learn more about L20n’s syntax in my previous blog post; an interactive editor is also available online.

Separation of Concerns: Plurals and Interpolation

In L20n all the logic related to selecting the right variant of the translation happens inside of the localization framework. Similarly L20n takes care of the interpolation of external variables into the translations. As a developer, all you need to do is declare which translation identifier you are interested in and pass the raw data that is relevant.

Fig. 2. Plurals and interpolation in L20n

In the example above you’ll note that in the BEFORE version the developer had to manually call the PluralForm API. Furthermore the calling code is also responsible for replacing #1 with the relevant datum. There is no error checking: if the translation contains an error (perhaps a typo in #1) the replace() will silently fail and the final message displayed to the user will be broken.

Separation of Concerns: Intl Formatters

L20n builds on top of existing standards like ECMA 402’s Intl API (itself based in large part on Unicode’s ICU). The Localization team has also been active in advancing proposals and specifications for new formatters.

L20n provides an easy way to use Intl formatters from within translations. Often the Intl API completely removes the need to go through the localization layer. In the example below the logic for displaying relative time (“2 days ago”) has been replaced by a single call to a new Intl formatter, Intl.RelativeTimeFormat.

Fig. 3. Intl API in use

Separation of Concerns: HTML in Translations

L20n allows for some semantic markup in translations. Localizers can use safe text-level HTML elements to create translations which obey the rules of typography and punctuation. Developers can also embed interactive elements inside of translations and attach event handlers to them in HTML or XUL. L20n will overlay translations on top of the source DOM tree preserving the identity of elements and the event listeners.

Fig. 4. Semantic markup in L20n

In the example above the BEFORE version must resort to splitting the translation into multiple parts, each for a possible piece of translation surrounding the two <label> elements.  The L20n version only defines a single translation unit and the localizer is free to position the text around the <label> elements as they see fit.  In the future it will be possible to reorder the <label> elements themselves.

Resilient to Errors

L20n provides a graceful and robust fallback mechanism in case of missing or broken translations. If you’re a Firefox front-end developer you might be familiar with this image:

Fig. 5. Yellow Screen of Death

This error appears whenever a DTD file is broken. Breaking a DTD file can be as subtle as a translation using the &hellip; entity, which is valid in HTML but not in XUL.

In L20n, broken translations never break the UI. L20n tries its best to display a meaningful message to the user in case of errors. It may try to fall back to the next language preferred by the user if it’s available. As the last resort L20n will show the identifier of the message.

New Features

L20n allows us to re-think major design decisions related to localization in Firefox. The first area of innovation that we’re currently exploring is the experience of changing the browser’s UI language. A runtime localization framework allows the change to happen seamlessly on the fly without restarts. It will also become possible to go back and forth between languages for just a part of the UI, a feature often requested by non-English users of Developer Tools.

Another innovation that we’re excited about is the ability to push updates to the existing translations independent of the software updates which currently happen approximately every 6 weeks. We call this feature Live Updates to Localizations.

We want to decouple the release schedule of Firefox from the release schedule of localizations. The whole release process can then become more flexible and new translations can be delivered to users outside of regular software updates.


L20n’s goal is to improve Mozilla’s ability to create quality multilingual user interfaces, simplify the localization process for developers, improve error recovery and allow us to innovate.

The migration will result in a cleaner and easier-to-maintain code base. It will improve the quality and the security of Firefox. It will provide a resilient runtime fallback, loosening the ties between code and localizations. And it will open up many new opportunities to innovate.

Daniel StenbergA workshop Monday

I decided I’d show up a little early at the Sheraton, as I’ve been handling the interactions with the hotel locally here in Stockholm, where the workshop will run for the coming three days. Things were on track, if we ignore how they got the name of the workshop wrong on the info screens in the lobby, instead saying “Haxx Ab”…

Mark welcomed us with a quick overview of what we’re here for and quick run-through of the rough planning for the days. Our schedule is deliberately loose and open to allow for changes and adaptations as we go along.

Patrick talked about the 1 1/2 years of HTTP/2 in Firefox so far, and we discussed a lot around the numbers and telemetry: what do they mean and why do they look like this, etc. HTTP/2 now accounts for 44% of all HTTPS requests, and connections using HTTP/2 carry more than 8 requests at the median (compared to slightly more than 1 in the HTTP/1 case). What’s almost not used at all? HTTP/2 server push, Alt-Svc and HTTP 308 responses. Patrick’s presentation triggered a lot of good discussions. His slides are here.

RTT distribution for Firefox running on desktop and mobile, from Patrick’s slide set:


The lunch was lovely.

Vlad then continued to talk about experiences from implementing and providing server push at Cloudflare. It and the associated discussions helped emphasize that we need better help for users on how to use server push and there might be reasons for browsers to change how they are stored in the current “secondary cache”. Also, discussions around how to access pushed resources and get information about pushes from javascript were briefly touched on.

After a break with some sweets and coffee, Kazuho continued to describe cache digests and how this concept can help making servers do better or more accurate server pushes. Back to more discussions around push and what it actually solved, how much complexity it is worth and so on. I thought I could sense hesitation in the room on whether this is really something to proceed with.

We intend to have a set of lightning talks after lunch each day, and we already have twelve such talks suggested in the workshop wiki, but the discussions were so lively and extensive that we missed them today and even had to postpone the last talk of today until tomorrow. I can already sense how these three days will not be enough for us to cover everything we have listed and planned…

We ended the evening with a great dinner sponsored by Mozilla. I’d say it was a great first day. I’m looking forward to day 2!

The Mozilla BlogSusan Chen, Promoted to Vice President of Business Development

I’m excited to announce that Susan Chen has been appointed Vice President of Business Development at Mozilla, a new role we are creating to recognize her achievements.

Susan joined Mozilla in 2011 as Head of Strategic Development. During her five years at Mozilla, Susan has worked with the Mozilla team to conceive and execute multiple complex negotiations, concluding revenue and partnership deals worth hundreds of millions of dollars for Mozilla products and services.

As Vice President of Business Development, Susan is now responsible for planning and executing major business deals and partnerships for Mozilla across its product lines including search, commerce, content, communications, mobile and connected devices. She is also in charge of managing the business development team working across the globe.

We are pleased to recognize Susan’s achievements and expanded scope with the title of Vice President. Please join me in welcoming Susan to the leadership team at Mozilla!


Susan’s bio & Mozillians profile

LinkedIn profile

High-resolution photo

Mitchell BakerUpdate on the United Nations High Level Panel on Women’s Economic Empowerment

It is critical to ensure that women are active participants in digital life. Without this we won’t reach full economic empowerment. This is the perspective and focus I bring to the UN High Level Panel for Women’s Economic Empowerment (HLP), which met last week in Costa Rica, hosted by President Luis Guillermo Solis.

(Here is the previous blog post on this topic.)

Many thanks to President Solis, who led with both commitment and authenticity. Here he shows his prowess with selfie-taking:


Members of the High Level Panel – From Left to Right: Tina Fordham, Citi Research; Laura Tyson, UC Berkeley; Alejandra Mora, Government of Costa Rica; Ahmadou Ba, AllAfrica Global Media; Renana Jhabvala, WIEGO; Elizabeth Vazquez, WeConnect; Jeni Klugman, Harvard Business School; Mitchell Baker, Mozilla; Gwen Hines, DFID-UK; Phumzile Mlambo, UN Women; José Manuel Salazar Xirinachs, International Labour Organization; Simona Scarpaleggia, Ikea; Winnie Byanyima, Oxfam; Fiza Farhan, Buksh Foundation; Karen Grown, World Bank; Margo Thomas, HLP Secretariat.

Photo Credit: Luis Guillermo Solis, President, Costa Rica

In the meeting we learned about actions the Panel members have initiated, and provided feedback and guidelines on the first draft of the HLP report. The goal for the report is to be as concrete as possible in describing actions in women’s economic empowerment which have shown positive results so that interested parties could adopt these successful practices. An initial version of the report will be released in September, with the final report in 2017.  In the meantime, Panel members are also initiating, piloting and sometimes scaling activities that improve women’s economic empowerment.

As Phumzile Mlambo-Ngcuka, the Executive Director of UN Women often says, the best report will be one that points to projects that are known to work. One such example is a set of new initiatives, interventions and commitments to be undertaken in the Punjab, announced by the Panel Member and Deputy from Pakistan, Fiza Farhan and Mahwish Javaid.

Mozilla, too, is engaged in a set of new initiatives. We’ve been tuning our Mozilla Clubs program (ongoing events that teach Web Literacy) to be more interesting and accessible to women and girls. We’ve entered into a partnership with UN Women to deepen this work, and the pilots are underway. If you’d like to participate, consider applying your organizational, educational, or web skills to start a Mozilla Club for women and girls in your area. Here are examples of existing clubs for women in Nairobi and Cape Town.

Mozilla is also involved in the theme of digital inclusion as a cross-cutting, overarching theme of the HLP report. This is where Anar Simpson, my official Deputy for the Panel, focuses her work. We are liaising with companies in Silicon Valley who are working in the fields of connectivity and distribution of access to explore if, when, and how their projects can empower women economically. We’re looking to gather everything they have learned about what has been effective. In addition to this information/content gathering task, Mozilla is working with the Panel on the advocacy and publicity efforts of the report.

I joined the Panel because I see it as a valuable mechanism for driving both visibility and action on this topic. Women’s economic empowerment combines social justice, economic growth benefits and the chance for more stability in a fragile world. I look forward to meeting with the UN Panel again in September and reporting back on practical and research-driven initiatives.

QMOFirefox 49.0 Aurora Testday Results

Hello mozillians!

Last week on Friday (July 22nd), we held another successful event – Firefox 49.0 Aurora Testday.

Thank you all for helping us make Mozilla a better place – Moin Shaikh, Georgiu Ciprian, Marko Andrejić, Dineesh Mv, Iryna Thompson.

From Bangladesh: Rezaul Huque Nayeem, Nazir Ahmed Sabbir, Hossain Al Ikram, Azmina Akter Papeya, Md. Rahimul Islam, Forhad Hossain, Akash, Roman Syed, Niaz Bhuiyan Asif, Saddam Hossain, Sajedul Islam, Md.Majedul islam, Fahim, Abdullah Al Jaber Hridoy, Raihan Ali, Md.Ehsanul Hassan, Sauradeep Dutta, Mohammad Maruf Islam, Kazi Nuzhat Tasnem, Maruf Rahman, Fatin Shahazad, Tanvir Rahman, Rakib Rahman, Tazin Ahmed, Shanjida Tahura Himi, Anika Nawar and Md. Nazmus Shakib (Robin).

From India: Nilima, Paarttipaabhalaji, Ashly Rose Mathew M, Selva Makilan R, Prasanth P, Md Shahbaz Alam and Bhuvana Meenakshi.K

A big thank you goes out to all our active moderators too!


I strongly advise every one of you to reach out to us, the moderators, via #qa during the events whenever you encounter any kind of failure. Keep up the great work!

Keep an eye on QMO for upcoming events! 😉

Mike HommeyAnnouncing git-cinnabar 0.4.0 beta 2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0b1?

  • Some more bug fixes.
  • Updated git to 2.9.2 for cinnabar-helper.
  • Now supports `git push --dry-run`.
  • Added a new `git cinnabar fetch` command to fetch a specific revision that is not necessarily a head.
  • Some improvements to the experimental native wire protocol support.
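For anyone who hasn’t used the tool before, the basic workflow looks roughly like this (the repository URL and changeset id below are placeholders, and the clone/push/fetch commands require git-cinnabar to be installed):

```shell
# Mercurial repositories are addressed with the hg:: URL prefix; adding
# such a remote works with plain git (no network access happens here).
git init demo && cd demo
git remote add upstream hg::https://hg.example.org/some-repo
git remote get-url upstream   # -> hg::https://hg.example.org/some-repo

# With git-cinnabar installed, the usual commands then work through it:
#   git clone hg::https://hg.example.org/some-repo
#   git push --dry-run upstream HEAD
# and, new in 0.4.0b2, fetching a specific changeset that is not a head:
#   git cinnabar fetch hg::https://hg.example.org/some-repo <changeset-sha1>
```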

The Servo BlogThis Week In Servo 72

In the last week, we landed 79 PRs in the Servo organization’s repositories.

The team working on WebBluetooth in Servo has launched a new site! It has two demo videos and very detailed instructions and examples on how to use standards-based Web Platform APIs to connect to Bluetooth devices.

We’d like to especially thank UK992 this week for their AMAZING work helping us out with Windows support! We are really eager to get the Windows development experience from Servo up to par with that of other platforms, and UK992’s work has been essential.

Connor Brewster (cbrewster) has also been on an incredible tear, working with Alan Jeffrey, on figuring out how session history is supposed to work, clarifying the standard and landing some great fixes into Servo.

Planning and Status

Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page.

This week’s status updates are here.

Notable Additions

  • UK992 added support for tinyfiledialogs on Windows, so that we can prompt there, too!
  • UK992 uncovered the MINGW magic to get AppVeyor building again after the GCC 6 bustage
  • jdm made it possible to generate the DOM bindings in parallel, speeding up some incremental builds by nearly a minute!
  • aneesh restored better error logging to our BuildBot configuration and provisioning steps
  • canaltinova fixed the reference test for text alignment in input elements
  • larsberg fixed up some issues preventing the Windows builder from publishing nightlies
  • upsuper added support for generating bindings for MSVC
  • heycam added FFI glue for 1-arg CSS supports() in Stylo
  • manish added Stylo bindings for calc()
  • johannhof ensured we only expose Worker interfaces to workers
  • cbrewster implemented joint session history
  • shinglyu optimized dirty flags for viewport percentage units based on viewport changes
  • stshine blockified some children of flex containers, continuing the work to flesh out flexbox support
  • creativcoder integrated a service worker manager thread
  • izgzhen fixed Blob type-strings
  • ajeffrey integrated logging with crash reporting
  • malisas allowed using ByteString types in WebIDL unions
  • emilio ensured that transitions and animations can be tested programmatically

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


See the aforementioned demos from the team at the University of Szeged.

The Rust Programming Language BlogThe 2016 Rust Conference Lineup

The Rust Community is holding three major conferences in the near future, and we wanted to give a shout-out to each, now that all of the lineups are fully announced.

Sept 9-10: RustConf

RustConf is a two-day event held in Portland, OR, USA on September 9-10. The first day offers tutorials on Rust given directly by members of the Rust core team, ranging from absolute basics to advanced ownership techniques. The second day is the main event, with talks at every level of expertise, covering both core Rust concepts and design patterns, production use of Rust, reflections on the RFC process, and systems programming in general. We offer scholarships for those who would otherwise find it difficult to attend. Join us in lovely Portland and hear about the latest developments in the Rust world!

Follow us on Twitter @rustconf.

Sept 17-18: Rust Fest

Join us at RustFest, Europe’s first conference dedicated to the Rust programming language. Over the weekend of the 17-18th September we’ll gather in Berlin to talk Rust, its ecosystem and community. All day Saturday will have talks, with topics ranging from hardware and testing, through concurrency and disassemblers, all the way to important topics like community, learning and empathy. Sunday has a focus on learning and connecting, either at one of the many workshops we are hosting or in the central meet-n-greet-n-hack area provided.

Thanks to the many awesome sponsors, we are able to offer affordable tickets, going on sale this week, with an optional combo including both ViewSource and RustFest. Keep an eye on, and get all the updates from, the blog, and don’t forget to follow us on Twitter @rustfest.

Oct 27-28: Rust Belt Rust

Rust Belt Rust is a two-day conference in Pittsburgh, PA, USA on October 27 and 28, 2016, and people with any level of Rust experience are encouraged to attend. The first day of the conference has a wide variety of interactive workshops to choose from, covering topics like an introduction to Rust, testing, code design, and implementing operating systems in Rust. The second day is a single track of talks covering topics like documentation, using Rust with other languages, and efficient data structures. Both days are included in the $150 ticket! Come learn Rust in the Rust Belt, and see how we’ve been transforming the region from an economy built on manufacturing to an economy built on technology.

Follow us on Twitter @rustbeltrust.

Karl Dubost[worklog] Edition 028. appearance on Enoshima

Each time I "set up my office" (moved to a new place for the next 3 months, construction work on the main home), I'm mesmerized by how easy it is to set up a work environment. Laptop, wifi and electricity are the main things needed to start. A table and a chair are useful but not essential. And eventually an additional screen to have more working surface to be comfortable. Basically in 5 minutes we are ready to work. And that's one of the perks of our line of work. How long does it take before you can start working?

Working with a view on Enoshima for the next 3 months. Tune of the week: Omoide no Enoshima.

Webcompat Life

Progress this week:

Today: 2016-07-25T06:21:33.702789
296 open issues
needsinfo       5
needsdiagnosis  71
needscontact    14
contactready    34
sitewait        164

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • From time to time, people report usability issues which are more or less cross-browser. They basically hinder every browser. It's out of scope for the Web Compatibility project, but it hints at something interesting about browsers and users' perception. Often, I wonder if browsers should do more than just support legacy Web sites (aka they display), and also adjust the content to a more palatable experience. Somewhat like what Reader Mode does on user request: a beautify button for legacy content.
  • Google Image Search and black arrow. A kind of cubist arrow for Firefox. Modern design?
  • I opened an issue on Tracking Protection and Webcompat. Adam pointed me this morning to a project on moving tracking protection to a Web extension.
  • Because we have more issues on Firefox Desktop and Firefox Android, we focus our energy there, so we need someone in the community to focus on Firefox OS issues.
  • When I test Web sites on Firefox Android, I usually do it through remote debugging in WebIDE, and instead of typing a long URI on the device itself, I go to the console and paste the address I want: window.location = ''.
  • Starting to test a bit more in depth what appearance means in different browsers. Specifically to determine what is needed for Web compatibility and/or Web standards.
  • a WONTFIX which is good news. Bug 1231829 - Implement -webkit-border-image quirks for compatibility. It means it has been fixed by the site owners.
  • On this Find my phone issue on Google search, the wrong order of CSS properties creates a layout issue where the XUL -moz-box was finally interpreted, but it triggered a good question from Xidorn. Should we expose the XUL display values to Web content? Add to that that some properties in the CSS never existed.
  • Hangouts doesn't work the same way for Chrome and Firefox. There's something happening either on the Chrome side or on the servers which creates the right path of actions. I haven't determined it yet.

Reading List

Follow Your Nose


  • Document how to write tests using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.


Patrick ClokeWindows Mobile (or Windows Phone) and FastMail

I’ve been a big fan of Windows Phone (now Windows Mobile) for a while and have had a few phones across versions 7, 8, and now 10. A while ago I switched to FastMail as my e-mail provider [1], but had been stuck using Google as my calendar provider still (and my contacts were on my Windows Live account). I had a desire to move all these onto a single account, but Windows 10 Mobile only officially supports e-mail from arbitrary providers. Calendar and contacts are limited to a few special providers.

Below I’ve outlined how I’ve gotten all three services (email, contacts, and calendar) from my FastMail account onto my Windows Mobile device.


Email is the easy one; FastMail even has a guide to setting up email on Windows Phone. That guide does not handle sending emails with a custom domain name; if you don’t have that situation, probably just use the FastMail guide.

  1. Add a new account, choose “other account”.
  2. Type in your email address (e.g. and password.
  3. It will complain about being unable to find proper account settings. Click “try again”.
  4. It will complain again, but not give you an option for “advanced”, click it.
  5. Choose “Internet email account”.
  6. Enter any “Account name” and “Your name” that you want.
  7. Choose “IMAP4” as the “Account type”.
  8. Change the incoming mail server to
  9. Change the username to your FastMail username (e.g.
  10. Change the outgoing mailserver to

Now when you send email it should show up properly as, but be sent via FastMail’s servers!


FastMail added support for CardDAV last year and Windows Phone added support back in 2013, so why is this hard? Well…turns out that there isn’t a way to make a CardDAV account on Windows Mobile, it’s just used for certain account types. Luckily, there is a forum post about hooking up CardDAV via a hack. Steps are reproduced below:

  1. Add a new account, choose “iCloud”.
  2. Type in your FastMail username, but add +Default before the @ (e.g., note that this isn’t anything special, just the scheme FastMail uses for CardDAV usernames.
  3. Put in your password. [2]
  4. Click “sign in”, it will fail.
  5. Go back into the account settings (click “Manage”) and modify the advanced settings (“Change mailbox sync settings”). Choose manually for when to download new email. Disable syncing of email and calendar.
  6. Go to “Advanced account settings”. Change the “Incoming email server”, “Outgoing (SMTP) email server” and “Calendar server (CalDAV)” to localhost. [3]
  7. Change “Contacts server (CardDAV)” to, changing to your FastMail username.
  8. Click “Done”!

Your contacts should eventually appear in your address book! I couldn’t figure out a way to force my phone to sync contacts, but they appeared fairly quickly.


FastMail added support for CalDAV back in the beginning of 2014 [4]. These steps are almost identical to the Contacts section above, but using information from the guide for setting up

  1. Add a new account, choose “iCloud”.
  2. Type in your FastMail username (e.g.
  3. Put in your password.
  4. Click “sign in”, it will fail.
  5. Go back into the account settings (click “Manage”) and modify the advanced settings (“Change mailbox sync settings”). Choose manually for when to download new email. Disable syncing of email and contacts.
  6. Go to “Advanced account settings”. Change the “Incoming email server”, “Outgoing (SMTP) email server” and “Contacts server (CardDAV)” to localhost.
  7. Change “Calendar server (CalDAV)” to, changing to your FastMail username.
  8. Click “Done”!

My default calendar appeared very quickly, but additional calendars took a bit to sync onto my phone.

Good luck and let me know if there are any errors, easier ways, or other tricks to getting the most of FastMail on a Windows Mobile device!

[1]There are a variety of reasons why I switched. I had recently bought a domain name to get better control over my online presence (email, website, etc.). I was also tired of my email being used to serve me advertisements, and of various other issues with free webmail. I highly recommend FastMail; they have awesome security and privacy policies. They also have amazing support, give back (a lot) to open source, and do a whole slew of other things.
[2]I put a dummy one in and then changed it after I updated the servers in step 6. This was to not send my password to iCloud servers. The password is hopefully encrypted and hashed, but I don’t know for sure.
[3]We’re just ensuring that our credentials for these other services will not hit Apple servers for any reason.
[4]That article talks about, but this is now available on the production FastMail servers too!

Daniel StenbergHTTP Workshop 2016, day -1

The HTTP Workshop 2016 will take place in Stockholm starting tomorrow, Monday, as I’ve mentioned before. Today we’ll start off slowly by having a few pre-workshop drinks and saying hello to old and new friends.

I did a casual count, and out of the 40 attendees coming, I believe slightly less than half are newcomers that didn’t attend the workshop last year. We’ll see browser people come, more independent HTTP implementers, CDN representatives, server and intermediary developers as well as some friends from large HTTP operators/sites. I personally view my attendance to be primarily with my curl hat on rather than my Firefox one. Firmly standing in the client side trenches anyway.

Visitors to Stockholm these days are also lucky enough to arrive when the weather is about as good as it gets here: the warmest period of the summer so far, with lots of sun and really long, bright summer days.

News this year includes the @http_workshop twitter account. If you have questions or concerns for HTTP workshoppers, do send them that way and they might get addressed or at least noticed.

I’ll try to take notes and post summaries of each workshop day here. Of course I will fully respect our conference rules about what to reveal or not.


Cameron KaiserTenFourFox 45 is more of a thing

Since the initial liftoff of TenFourFox 45 earlier this week, much progress has been made and this blog post, ceremonially, is being typed in it. I ticked off most of the basic tests including printing, YouTube, social media will eat itself, webcam support, HTML5 audio/video, canvas animations, font support, forums, maps, Gmail, blogging and the major UI components, and fixed a number of critical bugs and assertions, and now the browser is basically usable and able to function usefully. Still left to do is collecting the TenFourFox-specific strings into their own DTD for the localizers to translate (which will include the future features I intend to add during the feature parity phase) and porting our MP3 audio support forward; then, once that's working, compiling some opt builds and testing the G5 JavaScript JIT pathways and the AltiVec acceleration code. After that it'll finally be time for the first beta once I'm confident enough to start dogfooding it myself. We're a little behind on the beta cycle, but I'm hoping to have 45 beta 1 ready shortly after the release of 38.10 on August 2nd (the final 38 release, barring a serious showstopper with 45), a second beta around the three week mark, and 45 final ready for general use by the next scheduled release on September 13th.

A couple folks have asked if there will still be a G3 version and I am pleased to announce the answer will very likely be yes; the JavaScript JIT in 45 does not mandate SIMD features in the host CPU, so I don't see any technical reason why not (for that matter, the debug build I'm typing this on isn't AltiVec accelerated either). Still, if you're bravely rocking a Yosemite in 2016 you might want to think about a G4 for that ZIF socket.

I've been slack on some other general interest posts such as the Power Mac security rollup and the state of the user base, but I intend to write them when 45 gets a little more stabilized since there have been some recurring requests from a few of you. Watch for those soon also.

Support.Mozilla.OrgSUMO Show & Tell: How I Got Involved With Mozilla

Hey SUMO Nation!

During the Work Week in London we had the utmost pleasure of hanging out with some of you (we’re still a bit sad about not everyone making it… and that we couldn’t organize a meetup for everyone contributing to everything around Mozilla).

Among the numerous sessions, working groups, presentations, and demos we also had a SUMO Show & Tell – a story-telling session where everyone could showcase one cool thing they think everyone should know about.

I have asked those who presented to help me share their awesome stories with everyone else – and here you go, with the second one presented by Andrew, a jack-of-all-trades and Bugzilla tamer.

Take a look below and relive the origin story of a great Mozillian – someone just like you!

It all started… with an issue that I had with Firefox on my desktop computer running Windows XP, back in 2011. Firefox wouldn’t stop crashing! I then discovered the support site for Firefox. There I found help with my issue through support articles, and at the same time, I was also intrigued by the ability to help other users through the very same site as well.

As I looked into the available opportunities to contribute to the support team, I landed upon live chat. Live chat was a 1-on-1 chat to help out users with the issues they had. Unfortunately, after I joined the team, the live chat was placed on a hiatus. It was recommended that I move on to the forums and knowledge base, because rather than just helping one user and only them benefiting, on the forums I could help many more people through a single suggestion. For some, this sat well; for others it didn’t, because we weren’t taking care of the user personally (like on the chat).

It definitely took some time for me to adjust to this new setting, as things were (and are) handled differently on the forum and on the knowledge base. Users on the forum sometimes do respond immediately, but most of the time they respond later, and some actually don’t respond at all. This is one of the differences between helping out through live chat and through the forums.

The knowledge base on the other hand, can be really complex. There is markup being used to present text in a different way to different users. We must be as clear and precise as possible when writing the article, since although we may know really well what we are talking about, the article reader (usually a user in need of helpful information) may not. It is definitely challenging for some Mozillians to get involved with writing, but once you do, you get the hang of it and truly enjoy it.

From there on, I kept contributing to the forum and knowledge base, but I also went to find out how I could contribute to other areas of Mozilla. I landed upon triaging bugs within Mozilla sites thanks to the help of Liz Henry and Tyler Downer. Furthermore, as Firefox OS rolled out, I started to provide support to the users, write more articles and file bugs in regards to the OS.

As things moved forward so did life – at the moment I am contributing through the Social Support team. Contributing through Social helps our users on social media realise that we are listening to them and that their comments and woes are not falling on deaf ears. We respond to all types of concerns, be they praises or complaints. Helping users on Twitter while being restricted to 140 characters is difficult, whereas on Facebook we can provide a more detailed explanation and response. With Social Support, a single response from us sometimes reaches only a single person – other times it can reach thousands through re-sharing.

Social media makes it easy to identify issues, crises, and hot topics – it is where people nowadays go to seek assistance, rant, and share their experiences. Also, as posts and tweets can spread easily on social media, it is a double-edged sword: if something positive is spreading, we hope it spreads more. However, if something negative is spreading, we must contain it, identify, and address the root cause of the issue. The bottom line is: we must help our users while keeping everything in balance and being constantly vigilant.

In 2013, I was very thankful that I was able to attend the Summit that was held in 3 places across the world. I was invited to Toronto, where I held a session called “What does ‘Mozillian’ mean?” In that session, we defined what the term “Mozillian” meant, who was included, not included, and what roles and capabilities were necessary to classify an individual to be a Mozillian. At the end of the session, we touched base via email to finalize our thoughts and gather the necessary information to pass along to others. Although we made some progress, defining who a Mozillian is, who can (or can’t) be one, and setting a specific criteria is somewhat impossible. We must be accepting of those who come and go, those with different backgrounds, personal preferences regarding getting things done, and (sometimes highly) different opinions. All that said, we are a huge family – a huge Mozilla family.

Thank you Andrew for sharing your story with us. I personally appreciate your relaxed and flexible perspective on (sometimes inevitable) changes and challenges we all face when trying to make Mozilla work for the users of the web.

Here’s to many more great chances for you to rock the (helpful, but not only) web with Mozilla and others!

David BurnsWebDriver F2F - July 2016

Last week saw the latest WebDriver F2F to work on the specification. We held the meeting at the Microsoft campus in Redmond, Washington.

The agenda for the meeting was placed, as usual, on the W3 Wiki. We had quite a lot to discuss and, as always, was a very productive meeting.

The meeting notes are available for Wednesday and Thursday. The most notable items are:

  • Finalising Actions in the specification
  • newSession
  • Certificate handling on navigation
  • Specification tests

We also welcomed Apple to their first WG meeting. You may have missed it, but there is going to be a Safari Driver built into macOS.

Honza BambasIllusion of atomic reference counting

Most people believe that having an atomic reference counter makes it safe to use RefPtr on multiple threads without any further synchronization.  The opposite may be true, though!

Imagine a simple piece of code using our commonly used helper classes: RefPtr<> and an object Type with a ThreadSafeAutoRefCnt reference counter and standard AddRef and Release implementations.

Sounds safe, but there is a glitch most people may not realize.  See an example where one piece of code is doing this, no additional locks involved:

RefPtr<Type> local = mMember; // mMember is RefPtr<Type>, holding an object

And other piece of code then, on a different thread presumably:

mMember = new Type(); // mMember's value is rewritten with a new object

Usually, people believe this is perfectly safe.  But it’s far from it.

Just break this down to the actual atomic operations and put the two threads side by side:

Thread 1

local.value = mMember.value;
/* context switch */
/* ...resumed later: */
local.value->AddRef(); // may run on an already deleted object

Thread 2

Type* temporary = new Type();
temporary->AddRef();
Type* old = mMember.value;
mMember.value = temporary;
old->Release(); // if this was the last reference, the object is gone
/* context switch */

Similar for clearing a member (or a global, when we are here) while some other thread may try to grab a reference to it:

RefPtr<Type> service = sService;
if (!service) {
  return; // service being null is our 'after shutdown' flag
}

And another thread doing, usually during shutdown:

sService = nullptr; // while sService was holding an object

And here what actually happens:

Thread 1

local.value = sService.value;
/* context switch */
/* ...resumed later: */
local.value->AddRef(); // may run on an already deleted object

Thread 2

Type* old = sService.value;
sService.value = nullptr;
old->Release(); // if this was the last reference, the object is gone
/* context switch */

And where is the problem?  Clearly, if the Release() call on the second thread is the last one on the object, the AddRef() on the first thread will do its job on a dying or already dead object. The only correct way is to have both in and out assignments protected by a mutex or to ensure that nobody can try to grab a reference from a globally accessed RefPtr while it is being finally released or re-assigned. The latter may not always be easy or even possible.

Anyway, if somebody has a suggestion how to solve this universally without using an additional lock, I would be really interested!
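For illustration, here is a minimal sketch of the mutex approach, using std::shared_ptr as a stand-in for RefPtr<Type> (the class and member names are made up for this example). The point is that the copy into the local reference, with the refcount bump it implies, and any re-assignment of the member happen under the same lock:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <utility>

// Hypothetical holder of a shared, reassignable reference.
class SharedMember {
 public:
  std::shared_ptr<int> Get() {
    std::lock_guard<std::mutex> lock(mLock);
    return mMember;  // the copy (= refcount increment) happens under the lock
  }

  void Set(std::shared_ptr<int> aNew) {
    std::shared_ptr<int> old;
    {
      std::lock_guard<std::mutex> lock(mLock);
      old = std::move(mMember);  // keep the previous reference alive...
      mMember = std::move(aNew);
    }
    // ...and let it drop outside the lock, so a destructor that happens to
    // call back into Get()/Set() cannot deadlock on mLock.
  }

 private:
  std::mutex mLock;
  std::shared_ptr<int> mMember;
};
```

Note that since C++11 the standard library also offers std::atomic_load/std::atomic_store overloads for shared_ptr (and C++20 adds std::atomic<std::shared_ptr<T>>), which close the same race, although implementations typically use an internal lock, so they may not count as the lock-free answer asked for above.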

The post Illusion of atomic reference counting appeared first on mayhemer's blog.

Gervase MarkhamSamsung’s L-ish Model Numbers

A slow hand clap for Samsung, who have managed to create versions of the S4 Mini phone with model numbers (among others):

  • GT-i9195
  • GT-i9195L (big-ell)
  • GT-i9195i (small-eye)
  • GT-i9195l (small-ell)

And of course, the small-ell variant, as well as being case-confusable with the big-ell variant and visually confusable with the small-eye variant if it’s written with a capital I as, say, here, is in fact an entirely different phone with a different CPU and doesn’t support the same aftermarket firmware images that all of the other variants do.

See this post for the terrible details.

Cameron KaiserTenFourFox 45 is a thing

The browser starts. Lots of problems but it boots. More later.

Armen ZambranoMozci and pulse actions contributions opportunities

We've recently finished a season of feature development adding TaskCluster support to add new jobs to Treeherder on pulse_actions.

I'm now looking at what optimizations or features are left to complete. If you would like to contribute feel free to let me know.

Here's some highlighted work (based on pulse_actions issues and bugs):
This will help us save money in Heroku since using Buildapi + buildjson files is memory hungry and requires us to use bigger Heroku nodes.
This is important to help us change the behaviour of the Heroku app without having to commit any code. I've used this in the past to modify the logging level when debugging an issue.

This is also useful if we want to have different pipelines in Heroku. 
Having Heroku pipelines helps us to test different versions of the software.
This is useful if we want to have a version running from 'master' against the staging version of Treeherder.
It would also help contributors to have a version of their pull requests running live.
We don't have any tests running. We need to determine how to run a minimum set of tests to have some confidence in the product.

This needs integration tests of Pulse messages.
The comment in the bug is rather accurate and it shows that there are many small things that need fixing.
Manual backfilling uses Buildapi to schedule jobs. If we switched to scheduling via TaskCluster/Buildbot-bridge we would get better results since we can guarantee proper scheduling of a build + associated dependent jobs. Buildapi does not give us this guarantee. This is mainly useful when backfilling PGO test and talos jobs.

If instead you're interested in contributing to mozci, you can have a look at the issues.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Support.Mozilla.OrgWhat’s Up with SUMO – 21st July

Hello, SUMO Nation!

Chances are you have noticed that we had some weird temporal issues, possibly caused by a glitch in the spacetime continuum. I don’t think we can pin the blame on the latest incarnation of Dr Who, but you never know… Let’s see what the past of the future brings then, shall we?

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 27th of July!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



Support Forum

Knowledge Base & L10n

  • If you’re an active localizer in one of the top 20+ locales, expect a list of high priority articles coming your way within the next 24 hours. Please make sure that they are localized as soon as possible – our users rely on your awesomeness!
  • Final reminder: remember the discussion about the frequency & necessity of KB updates and l10n notifications? We’re trying to address this for KB editors and localizers alike. Give us your feedback!
  • Reminder: L10n hackathons everywhere! Find your people and get organized! If you have questions about joining, contact your global locale team.


  • for Android
    • Version 48 is still on track – release in early August.
  • for Desktop
    • Version 48 is still on track – release in early August.

Now that we’re safely out of the dangerous vortex of a spacetime continuum loop, I can only wish you a great weekend. Take it easy and keep rocking the helpful web!

Mozilla Addons BlogNew WebExtensions Guides and How-tos on MDN

The official launch of WebExtensions is happening in Firefox 48, but much of what you need is already supported in Firefox and AMO. The best place to get started with WebExtensions is MDN, where you can find a trove of helpful information. I’d like to highlight a couple of recent additions that you might find useful:

Thank you to Will Bamberg for doing the bulk of this work. Remember that MDN is a community wiki, so anyone can help!

Air MozillaWeb QA Team Meeting, 21 Jul 2016

Web QA Team Meeting They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Air MozillaReps weekly, 21 Jul 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Reps CommunityRep of the Month – June 2016

Please join us in congratulating Alex Lakatos as Rep of the Month for June 2016!

Alex is a Mozilla Rep based in London, Great Britain, originally from Romania. He is also a Mozilla TechSpeaker, giving talks all around Europe.

In the last 2 months Alex held several technical talks all over Europe (CodeCamp Cluj, OSCAL in Albania, DevTalks in Bucharest and DevSum in Sweden, just to name a few) to promote Mozilla’s mission and the Open Web. With his enthusiasm for tech he is a crucial force in promoting our mission and educating developers all around Europe about new Web technologies. He covered both the transition we are making from Firefox OS to a more innovative area with Connected Devices, and changes in Firefox, including why you should consider the improvements made on the DevTools side.

Please don’t forget to congratulate him on Discourse!

Adam StevensonCompatibility Screenshots

I’ve been trying to learn more about how screenshots can help us identify compatibility issues in Firefox. It started with the question:

How does Firefox compare to Chrome in the top 100 websites?

Pretty well, it turns out, on the front pages at least; you can view them yourself [Some images are offensive and NSFW]. You can also check out the same list of sites but comparing Firefox to Firefox with tracking protection. I made some scripts to capture the screens in OSX. They make use of the screencapture utility and another cool little utility called GetWindowID. GetWindowID determines which window ID is associated with a program on the screen, Firefox or Chrome in this case.

Let’s look at how these utilities work together.

Running the GetWindowID command requires that we specify which program we are looking for and which tab is active as well. I’ve made sure that my version of Firefox starts up with the Mozilla Firefox Start Page. If we execute this command:

./GetWindowID "Firefox" "Mozilla Firefox Start Page";

It returns a numeric value like:

1072

This is great because the screencapture utility needs to know which window ID to look at.
So let’s take that same GetWindowID command from earlier and store the result into a variable called ‘gcwindow’.

gcwindow=$(./GetWindowID "Firefox" "Mozilla Firefox Start Page");

Now gcwindow has the value 1072 from before. Let’s feed that into the screencapture utility:

screencapture -t jpg -T 40 -l $gcwindow -x ~/Desktop/screens/firefoxtest/$site.jpg;

When this runs, the program will wait 40 seconds (from the "-T 40" parameter), then take a screenshot of Window ID 1072, which is our Firefox instance. The JPG file will be stored in a folder on my desktop under screens/firefoxtest. The rest of the script loops through each website name that we’ve entered, opening a new browser window, opening the website we want to capture, and killing the browser process after each screenshot, with some sleep commands in between that give the computer time to execute each step.
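That loop can also be sketched in Python (a hypothetical reimplementation, not the post’s actual shell script; `GetWindowID`, `screencapture`, `open`, and `pkill` are macOS utilities, so the command-building is separated out for illustration):

```python
import subprocess

def capture_command(site, window_id, out_dir="screens/firefoxtest", delay=40):
    # Build the screencapture invocation for one site:
    # wait `delay` seconds, then grab window `window_id` as a JPG.
    return ["screencapture", "-t", "jpg", "-T", str(delay),
            "-l", str(window_id), "-x", "%s/%s.jpg" % (out_dir, site)]

def capture_all(sites, get_window_id, run=subprocess.check_call):
    # For each site: open a browser window, look up its window ID,
    # screenshot it, then kill the browser before the next site.
    # `get_window_id` and `run` are injected so the flow can be
    # exercised without launching anything.
    for site in sites:
        run(["open", "-a", "Firefox", "http://" + site])
        window_id = get_window_id()
        run(capture_command(site, window_id))
        run(["pkill", "-x", "firefox"])
```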

There are some browser preferences and considerations that you will want to be aware of before running these scripts.

Why do all this in OSX? Because I like to work on a Mac, I guess. OK, I don’t have a good reason, but if you want to make it work in a Linux Docker container or something cool, that’d be super sweet. The other thing to keep in mind is that I’m looking at viewport screenshots right now; full page would be nice, but we’ll get there.

So the side by side comparison of popular sites is pretty useful but looking at things is a lot of work. It would be cool if we could automate some or all of that looking, right? Luckily there are image comparison tools that can help with this. I decided to try out Yahoo’s blink-diff tool which is built using node.js.

First off, only PNGs are supported with this tool, but that’s easy to change using the screencapture command-line tool.

So we use 'screencapture -t png' instead of 'screencapture -t jpg'.

Let’s go through setting this up for a single test. You’ll need to have node.js installed first.
We need to create a new folder; the name isn’t important.

mkdir onetime-diff

Then download this JavaScript file from GitHub and put it in that folder. Now let’s initialize our project:

npm init

And just accept all the defaults. Next let’s install the dependencies:

npm install blink-diff
npm install pngjs-image

Great, it’s ready to run now. The index.js file we downloaded looks for two files in the same folder called firefox.png and chrome.png and will generate a file called output.png. If you need a couple files to test with:


Note that if you provide your own PNG files, you may need to adjust the cropping parameters. I’ve configured the script to work best for Firefox and Chrome screenshots captured on a retina display; if you aren’t using a retina display, divide those numbers by 2. You can see here y:160 and y:144; this is cropping out the top portion of the screenshot where the browser’s “chrome” is.

cropImageA: { x:0, y:160, width:0, height:0 }, // Firefox
cropImageB: { x:0, y:144, width:0, height:0 }, // Chrome
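If you need the non-retina values, a tiny helper (hypothetical, not part of the post’s script) can do the halving:

```python
def crop_for_display(retina_crop, retina=True):
    # Retina screenshots are 2x scale, so on a standard-density
    # display the crop offsets need to be halved.
    scale = 1 if retina else 2
    return {k: v // scale for k, v in retina_crop.items()}

# The retina values from the post:
firefox_crop = {"x": 0, "y": 160, "width": 0, "height": 0}
chrome_crop = {"x": 0, "y": 144, "width": 0, "height": 0}
```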

Once you’re ready to run the test, execute:

node index.js

After a minute, it should generate an output.png file that looks like this and the script will return a result to the command prompt:

Found 1116908 differences.
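At its core, blink-diff is counting pixels that differ beyond some threshold. Here’s a toy version of that counting in Python, operating on nested lists of RGB tuples rather than real PNGs (purely illustrative, not blink-diff’s actual implementation):

```python
def count_differences(img_a, img_b, threshold=0):
    # Count pixels whose per-channel difference exceeds `threshold`.
    # Images are same-sized 2D lists of (r, g, b) tuples.
    diffs = 0
    for row_a, row_b in zip(img_a, img_b):
        for px_a, px_b in zip(row_a, row_b):
            if any(abs(a - b) > threshold for a, b in zip(px_a, px_b)):
                diffs += 1
    return diffs
```

blink-diff layers cropping, thresholds, and highlighted output images on top of this basic idea, which is what makes it practical for real pages.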

So this is a good start, we have an image comparison program and an automated screenshot utility. To make it more useful I created another script that combines these together. On a high level it works like this:

First site > Screenshot Firefox > Screenshot Chrome > Compare images in background process > Next Site...

It has the same dependencies as before, but now we run it like this:


After giving this a few runs and playing with the settings, I started to see some issues.

  • Advertisements placed in different positions, sizes, styles, or even numbers
  • Regional site redirects
  • Different home page, providing a ‘fresh look’ or they are A/B testing
  • Site surveys or other pop ups
  • Large image sliders
  • Random overlay pop up ads
  • Rotating background images
  • Very slow process when using one computer

We want each site to have a decent amount of time to load; I normally use between 30 and 40 seconds. But that adds up over 1000 or more sites. I decided to hack something basic together to allow multiple computers in my house to split the load. It helps, but it would be much better to have this running on Linux virtual machines or Docker containers.

So what’s next?

  • More sample runs to find a decent set of parameters for the baseline
  • Identifying which of the top 1000 sites will continue to fail
  • Can we set higher thresholds and still detect when something breaks?
  • Can the tool ignore areas that are constantly changing?
  • Get the results out in the open for others to look at
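On the question of ignoring constantly changing areas, one common trick is to blank out known-noisy regions (ad slots, rotating banners) in both images before diffing. A hypothetical sketch, operating on a 2D list of RGB tuples:

```python
def mask_region(img, x0, y0, x1, y1, fill=(0, 0, 0)):
    # Overwrite the rectangle [x0, x1) x [y0, y1) with a constant
    # color so a later pixel diff ignores whatever changes inside it.
    # Returns a new image; the input is left untouched.
    return [
        [fill if x0 <= x < x1 and y0 <= y < y1 else px
         for x, px in enumerate(row)]
        for y, row in enumerate(img)
    ]
```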

If any of this interests you and you want to get involved, I’d love to hear from you. Or if you have advice on how to make this better, please reach out as well.

Matjaž HorvatImproving in-page localization in Pontoon

We’re improving the way in-page localization works in Pontoon by dropping a feature instead of introducing a new one. Translating text on the web page itself using the contentEditable attribute has been turned off.

That means the actual translation (typing) always takes place in the translation editor, which gives you access to all the relevant information you need for the translation.

The sidebar is always visible, allowing you to select strings from the list and then translate them. Additionally, you can still use the Inspector-like tool to select any localizable string on the page, which will then open in the translation editor in the sidebar to be translated.

Translation within the web page has turned out to be suboptimal for various reasons:

  • The original string is not always presented unambiguously, e.g. if it contains markup,
  • Additional string details like comments and file paths are not displayed,
  • Suggestions from history, machinery and other locales are not available,
  • Only the first plural form can be translated,
  • It’s hard to control markup or new lines on various sites if they’re part of the string.

Mozilla Addons BlogCompleting Firefox Accounts on AMO

In February we rolled out Firefox Accounts on addons.mozilla.org (AMO). That first phase created a migration flow from old AMO accounts over to Firefox Accounts. Since then, 84% of developers who have logged in have transitioned over to a Firefox Account.

The next step is to remove the ability to log in using an old AMO account. Once this is complete, the only way to log in to AMO is by using Firefox Accounts.

If you have an old account on AMO and have not gone through the migration flow, you can still access your account if the email you use to log in through Firefox Accounts is the same as the one previously registered on AMO.

We expect that the removal of old logins will be completed in a couple of weeks, unless any unforeseen problems occur.

Frequently asked questions

What happens to the add-ons I develop when I convert to a new Firefox Account?

All the add-ons are accessible to the new Firefox Account.

Why do I want a Firefox Account?

Firefox Accounts is the identity system that is used to synchronize Firefox across multiple devices. Many Firefox products and services will soon begin migrating over, simplifying your sign-in process and making it easier for you to manage all your accounts.

Where do I change my password?

Once you have a Firefox Account, you can sign in and click on Password.

If you have forgotten your current password:

  1. Go to the AMO login page
  2. Click on I forgot my password
  3. Proceed to reset the password

QMOFirefox 49.0 Aurora Testday, July 22nd

Hello Mozillians,

Good news! We are having another testday for you 😀 This time we will take a swing at Firefox 49.0 Aurora, this Friday, 22nd of July.  The main focus during the testing will be around Context Menu, PDF Viewer and Browser Customization. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

I know this is short notice but we hope you will join us in the process of making Firefox a better browser. See you on Friday!

Dustin J. MitchellRecovering from TaskWarrior Corruption

I use TaskWarrior along with TaskWarrior for Android to organize my life. I use FreeCinc to synchronize all of my desktops, VPS, and phone, using a cron task. Most of the time, it works pretty well.

FreeCinc Fail

However, yesterday, all of FreeCinc’s keys expired. There’s a big red warning on the home page instructing users to download new keys. Since my syncs run from a cron task, I didn’t notice this until I discovered that tasks I remembered modifying in one place did not appear in another. By that time, I had modified tasks everywhere – a few things to buy on my phone, some work stuff on the laptop, some more work stuff on the VPS, and some personal stuff on the desktop.

So, downloading new keys is easy. However, TaskWarrior doesn’t magically take four different sets of tasks and combine them into a single coherent set of tasks, just by syncing to a server. No, in fact, since there are no changes to sync, it does nothing. Just leaves the different sets of tasks in place on different machines. So basically everything I modified in 24 hours, across four machines, was now unsynchronized. And I use this to run my life, so it was probably 100 or so changes.

What Was I Doing Again?

Here’s how I fixed this:

I copied the pending.data and completed.data files from all four hosts onto a single host. These files are in a pretty simple one-task-per-line format, with a uuid and modification timestamp embedded in each line. The rough approach was to take all of the tasks in all of these files and select the most recent instance for each uuid. There’s a little bit of extra complication to handle whether a task is completed or not. I used the following script to do this calculation:

import re

uuid_re = re.compile(r'uuid:"([^"]*)"')
modified_re = re.compile(r'modified:"([0-9]*)"')

def read(filename):
	with open(filename) as f:
		for l in f:
			uuid = uuid_re.search(l).group(1)
			try:
				modified = modified_re.search(l).group(1)
			except AttributeError:
				modified = 0
			yield uuid, int(modified), l

def add_to(uuid, modified, completed, line, coll):
	if uuid in coll:
		ex_modified, ex_completed, _ = coll[uuid]
		# keep the existing entry if it is newer..
		if ex_modified >= modified:
			return
		# ..or if it is completed and the new one is not
		if ex_completed and not completed:
			return
	coll[uuid] = (modified, completed, line)

by_uuid = {}
# one (completed, filename) pair per copied data file;
# the per-host filenames are illustrative
for c, fn in [
	(True, "completed.data.1"),
	(True, "completed.data.2"),
	(True, "completed.data.3"),
	(True, "completed.data.4"),
	(False, "pending.data.1"),
	(False, "pending.data.2"),
	(False, "pending.data.3"),
	(False, "pending.data.4"),
]:
	for uuid, modified, line in read(fn):
		add_to(uuid, modified, c, line, by_uuid)

with open("completed.data", "w") as f:
	for _, completed, line in by_uuid.itervalues():
		if completed:
			f.write(line)

with open("pending.data", "w") as f:
	for _, completed, line in by_uuid.itervalues():
		if not completed:
			f.write(line)

As it turns out, I might have simplified this a little by looking at the status field: completed and deleted tasks are in completed.data, and the rest are in pending.data.
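That simpler status-based split would look something like this (a hypothetical variant; it assumes TaskWarrior’s `status:"…"` attribute appears on each line, like the `uuid` and `modified` attributes do):

```python
import re

status_re = re.compile(r'status:"([^"]*)"')

def is_completed(line):
    # Tasks with status completed or deleted belong with the
    # completed tasks; everything else (pending, waiting, ...)
    # stays with the pending tasks.
    m = status_re.search(line)
    return m is not None and m.group(1) in ("completed", "deleted")
```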

Once I was happy with the results (approximately the right number of pending tasks, basically), I copied them into ~/.task on one machine, and ran some task queries to check everything looked good (looking for tasks I recalled adding on various machines). Satisfied with this, I downloaded yet another set of keys from FreeCinc and installed them on that same machine. I deleted ~/.task/ on that machine (just in case) and ran task sync init which appeared to upload all pending tasks. Great!

Next, I deleted ~/.task/*.data on all of the other machines, installed the new FreeCinc keys, and ran task sync. On these machines, it happily downloaded the pending tasks. And we’re back in business!

I chose not to just copy ~/.task/*.data between systems because I run slightly different versions of TaskWarrior on different systems, so the data format might be different. I might have used task export and task import with some success, but I didn’t think of it in time.

Julia ValleraMozilla Clubs end of year goals

Mozilla Clubs are excited to share our goals for the rest of 2016. We’ve come a long way since the program’s launch in 2015. What lies ahead for us is exciting and challenging. Below is what we will be working on and information about how you can join in the fun.

Curious to learn more about Mozilla Clubs? Check out our website, facebook page, event gallery, and discussion forum.

Mozilla Club leaders come together in June 2016 at Mozilla all-hands. Photo by Randy Macdonald


Our process

In June 2016, eight Mozilla Club leaders came together in London, UK for Mozilla’s bi-annual All Hands gathering. They participated in many conversations, one of which was a 90 minute deep dive session to identify objectives for clubs over the next six months. During the session we brainstormed topics, ideated in pairs and had a group share out. In addition to informing our goals for the rest of 2016, this session gave club leaders the opportunity to learn more about each other’s work and regional challenges.

In July, we shared the results of our deep dive session more broadly during our monthly call for club leaders and internal clubs info session. This allowed us to gather more feedback and, ultimately, votes on which goals we should focus on for Mozilla Clubs between now and January 2017.

Here is the list of goals that resulted, why they are important to our work, and how we plan to approach them.

Six Month Goals

Curate and/or create new resources for running clubs offline
  • Why: We want to build and curate more web literacy curriculum that can be used without internet access so that club participants can learn offline.
  • How: We will make our current offline activities and curriculum easier to locate, curate new resources and build new ones.
Connect the community through a global gathering
  • Why: Club participants learn from each other and feel connected to a global community when they have the opportunity to see each other face-to-face.
  • How: We will draw from event models across Mozilla like global sprints, state of the Hive and Mozilla Festival to connect club participants (virtually and/or in person) to work on challenges, share experiences and exchange knowledge.
Continue to localize content and resources
  • Why: As we translate more curriculum, activities and club guides into languages other than English, more people can access and learn from them.
  • How: We will work with Mozilla volunteers, staff and partners to build localization into the process of content creation and start with translating current activities and creating new location-specific resources.
Reward and recognize club leaders
  • Why: Club leaders need rewards and recognition for their work so that they feel empowered to grow and spread web literacy in their communities.
  • How: We will recognize club leaders for their work through a formal rewards process and develop an agreement policy to create more clarity around the responsibilities of being a club leader.
Strengthen clubs as an organizing model for Mozilla campaigns
  • Why: Mozilla Club participants should continue to have an active role in Mozilla campaigns like Maker Party, Copyright, Take back the Web, Encryption, etc.
  • How: We will leverage club calls, office hours, the discussion forum, etc. to get input from club participants as campaigns take shape and will share campaign related activities that can be incorporated into their offerings.
Connect club participants across Mozilla
  • Why: Mozilla program participants have a lot of expertise to share and they should be able to connect with each other easily and frequently.
  • How: Create opportunities for community members in Clubs, Hives, Open Science and Advocacy to share work with each other, get feedback, build networks and more.
Assess club activity
  • Why: It is important that we maintain an accurate and up-to-date list of active clubs so that we can provide support where it is needed most.
  • How: We will identify which clubs are active by holding individual meetings, checking in via email and reviewing the club event reporter.

Join in the fun!

Here are some ways you can contribute to our work over the next six months and beyond.

  1. Connect with a Mozilla Club in your area. Don’t see any clubs in your area? Apply to start your own!
  2. Help us translate one of our web literacy activities into your preferred language.
  3. Use our offline activities, tell us what you think and suggest new ones.
  4. Join our facebook group to get updates about upcoming events and campaigns.

Jen Kagandraggable min-vid, part 1

since merging john’s and my css PR, i’ve been digging into min-vid again. lots has changed! dave rewrote min-vid in react.js to make it easier for contributors to plug in.

why react.js? because we won’t have to write a thousand different platform checks anymore. for example, we’d have to trigger one set of behaviors for one video platform and another set of behaviors for another. this wasn’t scalable and it wasn’t very contributor-friendly. now, to add support for additional video-streaming platforms, contributors will just have to construct the URL to access the platform’s video files (hopefully via a well-documented API) and add the new URL-constructing code to min-vid’s /lib folder in a file called get-[platform]-url.js.

so that’s awesome!

right now, i’m working on how to make the video panel draggable within the browser window so you’re not just limited to watching yr vids in the lower left-hand corner:

Screen Shot 2016-07-20 at 12.23.26 PM

john came up with a hacky idea for draggability where, on mouseDown, we’ll:

  1. create an invisible container the size of the entire browser window
  2. as long as mouseDown is true, drag the panel wherever we want within the invisible container
  3. onMouseUp, snap the container to be the size of the panel again.

the idea is to make dragging less glitchy by changing our dragging process so we’re no longer sending data back and forth between react, the add-on, and the window.

how to get started? jared broke down the task into smaller pieces for me. here’s the first piece:

Screen Shot 2016-07-20 at 12.25.41 PM

the function for setting up the panel size is in the index.js file. we determine how and when to panel.show() and panel.hide() based on the block of code below. the code tells the panel to listen for

  1. a message being emitted and
  2. for the content of that message, in this case from the controls.js file:

// require the Panel element from the Mozilla SDK
var panel = require('sdk/panel').Panel({
// set the panel content using the /default.html file
  contentURL: './default.html',
// set the panel functionality using the /controls.js file
  contentScriptFile: './controls.js',
// set the panel dimensions and position
  width: 320,
  height: 180,
  position: {
    bottom: 10,
    left: 10
  }
});

then, do different stuff based on what the message said.

// turn the panel port on to listen for a 'message' being emitted
panel.port.on('message', opts => {
// assign title to be whatever 'opts' were emitted
  var title = opts.action;

  if (title === 'send-to-tab') {
    const pageUrl = getPageUrl(opts.domain, opts.id, opts.time);
    if (pageUrl) require('sdk/tabs').open(pageUrl);
    else console.error('could not parse page url for ', opts); // eslint-disable-line no-console
    panel.hide();
  } else if (title === 'close') {
    panel.hide();
  } else if (title === 'minimize') {
    panel.hide();
    panel.show({
      height: 40,
      position: {
        bottom: 0,
        left: 10
      }
    });
  } else if (title === 'maximize') {
    panel.hide();
    panel.show({
      height: 180,
      position: {
        bottom: 10,
        left: 10
      }
    });
  }
});

i added another little chunk in there which says: if the title is drag, hide the panel and then show it again with these new dimensions. the whole new block of code looks like this:

panel.port.on('message', opts => {
  var title = opts.action;

  if (title === 'send-to-tab') {
    const pageUrl = getPageUrl(opts.domain, opts.id, opts.time);
    if (pageUrl) require('sdk/tabs').open(pageUrl);
    else console.error('could not parse page url for ', opts); // eslint-disable-line no-console
    panel.hide();
  } else if (title === 'close') {
    panel.hide();
  } else if (title === 'minimize') {
    panel.hide();
    panel.show({
      height: 40,
      position: {
        bottom: 0,
        left: 10
      }
    });
  } else if (title === 'maximize') {
    panel.hide();
    panel.show({
      height: 180,
      position: {
        bottom: 10,
        left: 10
      }
    });
  } else if (title === 'drag') {
    panel.hide();
    panel.show({
      height: 360,
      width: 640,
      position: {
        bottom: 0,
        left: 0
      }
    });
  }
});

so we have some new instructions for the panel. but how do we trigger them? we trigger the instructions by creating the drag function within the PlayerView component and then rendering it. this code says: on whatever new custom event, send a message. the content of the message is an object with the format {detail: obj}—in this case, {action: 'drag'}. then, render the trigger as an <a> tag inside a <div>.

function sendToAddon(obj) {
  window.dispatchEvent(new CustomEvent('message', {detail: obj}));
}

const PlayerView = React.createClass({
  getInitialState: function() {
    return {showVolume: false, hovered: false};
  },
  drag: function() {
    sendToAddon({action: 'drag'});
  },
  render: function() {
    return (
      <div className={'right'}>
        <a onClick={this.drag} className={'drag'} />
      </div>
    );
  }
});

and we style the class in our css file:

.drag {
    background: red;
}

so we get something like this, before clicking the red square:

Screen Shot 2016-07-20 at 1.19.37 PM

and after clicking the red square:

Screen Shot 2016-07-20 at 1.19.45 PM

next, i have to see if i can make the panel fill the page, then only drag the video element inside the panel, then snap the panel to its position on the window and put it back to its original size, 320 x 180.

Mozilla WebDev CommunityBeer and Tell – July 2016

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Moby von Briesen: Jam Circle

This week’s only presenter was mobyvb, who shared Jam Circle, a webapp that lets users play music together. Users who connect join a shared room and see each other as circles connected to a central node. Using the keyboard (or, in browsers that support it, any MIDI-capable device), users can play notes that all other users in the channel hear and see as colored lines on each circle’s connection to the center.

The webapp also includes the beginnings of an editor that will allow users to write chord progressions and play them alongside live playback.

An instance of the site is up and running. Check it out!

If you’re interested in attending the next Beer and Tell, sign up for the mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Daniel PocockHow many mobile phone accounts will be hijacked this summer?

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo.

If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.

Can you escape your mobile phone while on vacation?

Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out, and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal, as it will not give them access to any other account or service. Can you and your family members say the same?

What can be done?

  • Opt-out of mobile phone authentication schemes.
  • Never give your mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this
    can also provide a higher level of security as they get to know you.
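For the security-token option above, the standard TOTP algorithm (RFC 6238) is a useful mental model: the shared secret lives only on the token and the server, so there is nothing for a SIM-swapper to intercept on the phone network. A minimal sketch in Python, for illustration only, not production use:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Generate an RFC 6238 time-based one-time password.

    The shared secret never crosses the phone network; the code is
    derived locally from the secret and the current time window.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59, 8 digits.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, digits=8, now=59))  # → 94287082
```

The same arithmetic runs inside hardware tokens and authenticator apps; the phone company is never in the loop.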

See my previous blogs on SMS messaging, security and two-factor authentication, including my earlier post SMS Logins: an illusion of security.

Air MozillaThe Joy of Coding - Episode 64

mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaThe Invention Cycle: Going From Inspiration to Implementation with Tina Seelig

Bringing fresh ideas to life and ultimately to market is not a well charted course. In July, our guest Tina Seelig will share a new...

Daniel Stenbergcurl wants to QUIC

The interesting Google transfer protocol that is known as QUIC is being passed through the IETF grinding machines to hopefully end up with a proper “spec” that has been reviewed and agreed to by many peers and that will end up being a protocol that is thoroughly documented with a lot of protocol people’s consensus. Follow the IETF QUIC mailing list for all the action.

I’d like us to join the fun

Similarly to how we implemented HTTP/2 support early on for curl, I would like us to get “on the bandwagon” early for QUIC: to aid the protocol development, to serve as a testing tool for both the protocol and the server implementations, and of course to end up with a solid implementation for users who’d like a proper QUIC-capable client for data transfers.


The current version of the QUIC protocol (made entirely by Google, and not the output of the work they’re now doing on it within the IETF) is already being widely used, as Chrome speaks it with Google’s services in preference to HTTP/2 and other protocol options. Only a few implementations of QUIC exist outside of the official ones Google offers as open source; Caddy, for example, offers a separate server implementation.

the Google code base

For curl’s sake, we can’t use the Google code as the basis for a QUIC implementation: it is C++, and code used within the Chrome browser is too entangled with the browser and its particular environment to convert well into a general-purpose library. (There is a libquic project attempting exactly that extraction.)

for curl and others

The ideal way to implement QUIC for curl would be to create an “nghttp2” alternative that does QUIC. An ngquic, if you will! A library that handles the low-level protocol fiddling, the binary framing and so on. Done that way, a QUIC library could be used by more projects that would like QUIC support, and everyone who’d like to see this protocol supported in those tools and libraries could join in and make it happen. Such a library would need to be written in plain C and be suitably licensed for it to be really interesting for curl use.

a needed QUIC library

I’m hoping my post here will inspire someone to get such a project going. I will not hesitate to join in and help it get somewhere! I haven’t started such a project myself because I think I already have enough projects on my plate so I fear I wouldn’t be a good leader or maintainer of a project like this. But of course, if nobody else will do it I will do it myself eventually. If I can think of a good name for it.

some wishes for such a library

  • Written in C, to offer the same level of portability as curl itself and to allow it to be used from other languages via bindings, etc.
  • FOSS-licensed suitably
  • It should preferably not “own” the socket but also work in-memory and to allow applications to do many parallel connections etc.
  • Non-blocking. It shouldn’t wait for things on its own but let the application do that.
  • Should probably offer both client and server functionality for maximum use.
  • What else?
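To make the “doesn’t own the socket, works in-memory, non-blocking” wishes concrete, here is a toy sketch of the API shape such a library could expose. It is Python rather than the C the post asks for, all names are made up, and a trivial length-prefixed framing stands in for real QUIC; only the calling pattern matters:

```python
import struct
from collections import deque

class ToyQuicLike:
    """Sketch of an in-memory protocol endpoint (hypothetical API).

    The library never touches a socket: the application feeds in
    bytes it has read and drains bytes it should write, so it can
    drive many parallel connections from its own event loop.
    """

    def __init__(self):
        self._out = bytearray()   # bytes the app should send on its socket
        self._in = bytearray()    # raw received bytes, not yet parsed
        self.events = deque()     # parsed (stream_id, payload) frames

    def send_stream_data(self, stream_id, payload):
        # Frame layout here: stream id (u32) + length (u32) + payload.
        self._out += struct.pack(">II", stream_id, len(payload)) + payload

    def data_to_send(self):
        # Non-blocking: return whatever is queued, possibly nothing.
        buf, self._out = bytes(self._out), bytearray()
        return buf

    def receive_datagram(self, data):
        # Also non-blocking: buffer input and parse complete frames only.
        self._in += data
        while len(self._in) >= 8:
            stream_id, length = struct.unpack(">II", self._in[:8])
            if len(self._in) < 8 + length:
                break  # incomplete frame; wait for more bytes, never block
            payload = bytes(self._in[8:8 + length])
            del self._in[:8 + length]
            self.events.append((stream_id, payload))

# Two endpoints wired together entirely in memory -- no sockets involved.
client, server = ToyQuicLike(), ToyQuicLike()
client.send_stream_data(3, b"GET /")
server.receive_datagram(client.data_to_send())
print(server.events.popleft())  # → (3, b'GET /')
```

This is the same shape nghttp2 uses for HTTP/2 (callbacks plus memory buffers), which is what makes it embeddable by curl and others.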

Air MozillaConnected Devices Weekly Program Update, 19 Jul 2016

Weekly project updates from the Mozilla Connected Devices team.

Mozilla Localization (L10N)Localization Hackathon in Berlin

After many delays, we collectively picked the balmy first weekend of June, and Berlin as our host city, for a localization hackathon. We had four people representing each of the Dutch/Frisian and Ukrainian communities, three from the German community, and one from South African English. Most of them had not been to an l10n hackathon before, and many had never met in person even though they had been collaborating for years.

Group shot

As with the other hackathons this year we allowed each team to plan how they spent their time together, and set team goals on what they wanted to accomplish over the weekend. The localization drivers would lead some group discussions. As a group, we split the weekend covering the following topics:

A series of spectrograms, where attendees answer yes/no, agree/disagree questions by physically standing on a line from one side of the room to the other. We learned a lot about our group on recognition, about the web in their language, and about participation patterns. As we’re thinking about how to improve localization of Firefox, gaining insights into localizers’ hearts and lives is always helpful.

Axel shared some organizational updates from the Orlando All-Hands: we recapped the status of Firefox OS and the new focus on Connected Devices. We also covered the release schedule of Firefox for iOS and Android.

We spent a bit more time talking about the upcoming changes to localization of Firefox, with L20n and repository changes coming up. In the meantime, we have a dedicated blog post on l20n for localizers, so read up on l20n there. Alongside that, we’ll stop using individual repositories and workflows for localizing Firefox Nightly, Developer Edition, Beta, and release; instead, the strings needed for all of them will be in a single place. That’s obviously quite a few changes coming up, and we got quite a few questions in the conversations. At least Axel enjoys answering them.


Our renewed focus on translation quality resulted in the development of the style guide template, a guideline for localization communities to emulate. We went through all the categories and sub-categories and explained what was expected: communities should elaborate on them and provide locale-specific examples. We stressed the importance of having a style guide, as it helps with consistency between multiple contributors to a single product, and across all products and projects. This exercise encouraged the communities who thought they had a guide to review and update it, and those who didn’t have one to create one. The Ukrainian community created a draft version soon after they returned home. Having an established style guide also helps with training and onboarding new contributors.
We also went over the categories and definitions specified in MQM, and immediately used that knowledge to review, through a live demo in a Pontoon-like tool, some inconsistencies in the strings extracted from projects in Ukrainian. To me, that was one of the highlights of the weekend: 1) how to give constructive feedback using one of the defined categories; 2) recurring types of mistakes, whether by a particular contributor or across a locale; 3) terminology consistency within a project, product or group of products, especially with multiple contributors; 4) the importance of peer review.

For the rest of the weekend, each of the community had their own breakout sessions, reviewed their own to-do list, fixed bugs, completed some projects, and spent one on one time with the l10n drivers.

Brandenburg Gate and the team

We were incredibly blessed with great weather: the unusually heavy rain that flooded many parts of Germany stopped during our visit. A meetup like this would not be complete without experiencing some local culture, and Axel, a Berlin native, was tasked with showing us around. We walked, walked and walked, with occasional public transportation in between. We covered several landmarks, such as the Berlin Wall, the Brandenburg Gate, several memorials and the landmark Gedächtniskirche, as well as parks and streets crowded with locals. Of course we sampled cuisines that reflect Berlin’s diverse culture: we had great kebabs (and the best kebabs), Chinese fusion, the seasonal asparagus and, of course, German beer. For some of us this was not the first Berlin visit, but as a group activity, with Axel as our guide, the visit was so much more memorable. Before we said goodbye, the thought of next year’s hackathon came to mind. Our Ukrainian community has volunteered to host it in Lviv, a beautiful city in the western part of the country. We shall see.

Air MozillaMartes mozilleros, 19 Jul 2016

Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

David LawrenceHappy BMO Push Day!

the following changes have been pushed to

  • [1283323] Rename “Triage Report” link on Reports page.
  • [1286650] Allow explicit specification of an API key in scripts/
  • [1287039] Please add Katharina Borchert and CIO to recruiting lists
  • [1286960] certain github commit messages are not being auto-linkified properly
  • [1254882] develop a nightly script to revoke access to legal bugs from ex-employees

discuss these changes on

Armen ZambranoUsability improvements for Firefox automation initiative - Status update #1

The developer survey conducted by Engineering Productivity last fall indicated that debugging test failures that are reported by automation is a significant frustration for many developers. In fact, it was the biggest deficit identified by the survey. As a result,
the Engineering Productivity Team (aka A-Team) is working on improving the user experience for debugging test failures in our continuous integration and speeding up the turnaround for Try server jobs.

This quarter’s main focus is on:
  • Debugging tests on interactive workers (only Linux on TaskCluster)
  • Improve end to end times on Try (Thunder Try project)

For all bugs and priorities you can check out the project management page for it:

In this email you will find the progress we’ve made recently. In future updates you will see a delta from this email.

PS: These status updates will be fortnightly.

Debugging tests on interactive workers
Accomplished recently:
  • Landed support for running reftest and xpcshell via
  • Many UX improvements to the interactive loaner workflow

  • Make sure Xvfb is running so you can actually run the tests!
  • Mochitest support + all other harnesses

Thunder Try - Improve end to end times on try

Project #1 - Artifact builds on automation

Accomplished recently:
  • Landed prerequisites for Windows and OS X artifact builds on try.
  • Identified which tests should be skipped with artifact builds

  • Provide a try syntax flag to trigger only artifact builds instead of full builds; starting with opt Linux 64.

Project #2 - S3 Cloud Compiler Cache

Accomplished recently:
  • Sccache’s Rust re-write has reached feature parity with Python’s sccache
  • Now testing sccache2 on Try

  • We want to roll out a two-tier sccache for Try, which will enable it to benefit from cache objects from integration branches

Project #3 - Metrics

Accomplished recently:

  • Putting Mozharness steps’ data inside Treeherder’s database for aggregate analysis

  • TaskCluster Linux builds are currently built using a mix of m3/r3/c3 2xlarge AWS instances, depending on pricing and availability. We’re going to assess the effect of more powerful AWS instance types on build speeds, as one potential way of reducing end-to-end Try times.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

This Week In RustThis Week in Rust 139

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

This week has a belated Crate of the Week with Vincent Esche's self-submitted cargo-modules, which gives us the cargo modules subcommand that shows the module structure of our crates in a tree view, optionally warning of orphans. Thanks, Vincent!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

105 pull requests were merged in the last two weeks.

New Contributors

  • abhi
  • Aravind Gollakota
  • Ben Boeckel
  • Ben Stern
  • David
  • Dridi Boukelmoune
  • Isaac Andrade
  • Zhen Zhang

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

  • 7/20. Rust Community Team Meeting at #rust-community on
  • 7/21. Rust Hack & Learn Karlsruhe.
  • 7/27. Rust Community Team Meeting at #rust-community on

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

fzammetti: Am I the only one that finds highly ironic the naming of something that's supposed to be new and cutting-edge after a substance universally synonymous with old, dilapidated and broken down?

paperelectron: Rust is as close to the bare metal as you can get.

On /r/programming.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Karl Dubost[worklog] Edition 027. Tracking protection and a week of boxes.

Tracking protection is an interesting beast. A feature to help users but users think the site is broken. I guess it's something similar to habits. If you put a mask on your face and you have forgotten about it, you may be surprised that people do not want to talk to you.

Webcompat Life

Progress this week:

Today: 2016-07-19T11:32:54.030052
316 open issues
needsinfo       5
needsdiagnosis  76
needscontact    20
contactready    41
sitewait        168

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • issue with MLB site displaying the plays.
  • Interesting CSS issue about display:table and max-height having a different behavior in Chrome and Firefox, maybe something related to a known issue. To be confirmed.
  • Enabling Tracking Protection in Firefox creates a lot of issues which are not completely understood by users. We are starting to receive a set of Web Compatibility reports where the site breaks or crashes when tracking protection is enabled. Usually, the JavaScript code of the site didn't take into account that some people might want to block some of the page assets, and this creates unintended consequences. There is probably something to improve around UX here, so that users really understand their choices.

Reading List

  • CSS Containment.

    the contain property, which indicates that the element’s subtree is independent of the rest of the page.

    If I understand correctly, this seems like something which would answer many of the complaints we hear from Web developers about CSS isolation. Specifically the layout value: contain: layout.

    This value turns on layout containment for the element. This ensures that the containing element is totally opaque for layout purposes; nothing outside can affect its internal layout, and vice versa.

    Implemented in Blink. I didn't find an issue on the WebKit project (Safari), nor a bug in Mozilla's Bugzilla. Can I use? Probably not yet.

Follow Your Nose


  • Document how to write tests using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.


Roberto A. VitilloData Analysis Review Checklist

Writing good code is hard; writing a good analysis is harder. Peer review is an essential tool to fight repetitive errors and omissions and, more generally, to spread knowledge. I have found a checklist invaluable for remembering the most important things to watch out for during a review. It’s far too easy to focus on a few details and ignore others, which might (or might not) be caught in a later round.

I don’t religiously apply every bullet point of the following checklist to every analysis, nor is this list complete; more items would have to be added depending on the language, framework, libraries, models, etc. used.

  • Is the question the analysis should answer clearly stated?
  • Is the best/fastest dataset that can answer the question being used?
  • Do the variables used measure the right thing (e.g. submission date vs activity date)?
  • Is a representative sample being used?
  • Are all data inputs checked (for the correct type, length, format, and range) and encoded?
  • Do outliers need to be filtered or treated differently?
  • Is seasonality being accounted for?
  • Is sufficient data being used to answer the question?
  • Are comparisons performed with hypotheses tests?
  • Are estimates bounded with confidence intervals?
  • Should the results be normalized?
  • If any statistical method is being used, are the assumptions of the model met?
  • Is correlation confused with causation?
  • Does each plot communicate an important piece of information or address a question of interest?
  • Are legends and axes labelled, and do they start from 0?
  • Is the analysis easily reproducible?
  • Does the code work, i.e. does it perform its intended function?
  • Is there a more efficient way to solve the problem, assuming performance matters?
  • Does the code read like prose?
  • Does the code conform to the agreed coding conventions?
  • Is there any redundant or duplicate code?
  • Is the code as modular as possible?
  • Can any global variables be replaced?
  • Is there any commented out code and can it be removed?
  • Is logging missing?
  • Can any of the code be replaced with library functions?
  • Can any debugging code be removed?
  • Where third-party utilities are used, are returning errors being caught?
  • Is any public API commented?
  • Is any unusual behavior or edge-case handling described?
  • Is there any incomplete code? If so, should it be removed or flagged with a suitable marker like ‘TODO’?
  • Is the code easily testable?
  • Do tests exist and do they actually test that the code is performing the intended functionality?
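As one example from the checklist, “are estimates bounded with confidence intervals?” is often easiest to satisfy with a percentile bootstrap, which needs no distributional assumptions. A small Python sketch (the sample data here is made up):

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for a statistic.

    Resample the data with replacement, recompute the statistic each
    time, and take the alpha/2 and 1-alpha/2 quantiles as the bounds.
    """
    rng = random.Random(seed)  # fixed seed keeps the analysis reproducible
    n = len(sample)
    estimates = sorted(
        stat([rng.choice(sample) for _ in range(n)]) for _ in range(n_boot)
    )
    lo = estimates[int((alpha / 2) * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

data = [12.1, 9.8, 11.4, 10.9, 13.0, 10.2, 11.7, 9.5, 12.4, 10.8]
low, high = bootstrap_ci(data)
print("mean=%.2f, 95%% CI=(%.2f, %.2f)" % (statistics.mean(data), low, high))
```

Reporting the interval alongside the point estimate also addresses the “comparisons performed with hypothesis tests” item: non-overlapping intervals are a quick first check before a formal test.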

Christian HeilmannA great time and place to ask about diversity and inclusion

whiteboard code

There isn’t a single day going by right now where you can’t read a post or see a talk about diversity and inclusiveness in our market. And that’s a great thing. Most of them complain about the lack of both. And that’s a very bad thing.

It has been proven over and over that diverse teams create better products. Our users are all different and have different needs. If your product team structure reflects that, you’re already one up on the competition. You’re also much less likely to build a product for yourself – and we are not our end users.

Let’s assume we are pro-diversity and pro-inclusiveness. And it should be simple for us – we come from a position of strength:

  • We’re expert workers and we get paid well.
  • We are educated and we have companies courting us and looking after our needs once we have been hired.
  • We’re not worried about being able to pay our bills or random people taking our jobs away.

I should say yet, because automation is on the rise and even our jobs can be optimised away sooner or later. Some of us are even working on that.

For now, though, we are in a unique position of power. There are not enough expert workers to fill the jobs. We have job offers thrown at us, and hiring bonuses, perks and extra offers are reaching ridiculous levels. When you tell someone outside our world about them, you get shocked looks. We’re like the investment bankers and traders of the eighties, and we should help ensure that our image doesn’t turn into the one they have now.

If we really want to change our little world and become a shining beacon of inclusion, we need to not only talk about it – we should demand it. A large part of the lack of diversity in our market is that it is not part of our hiring practices. The demands we make of new hires make it very hard for someone without a privileged background or a degree from a university of standing to get into our market. And that makes no sense. The people who can change that are us – the people in the market who tick all the marks.

To help the cause and make the things we demand in blog posts and keynotes happen, we should bring our demands to the table when and where they matter: in job interviews and application processes.

Instead of asking about hardware, share options and perks like free food and dry cleaning, we should ask for the things that really matter:

  • What is the maternity leave process in the company? Can paternity leave be matched? We need to make it impossible for an employer to pick a man over a woman because of this biological reason.
  • Why is a degree part of the job? I have none and had lots of jobs that required one. This seems like an old requirement that just got copied and pasted because of outdated reasons.
  • What is the long term plan the company has for me? We kept getting asked where we see ourselves in five years. This question has become cliché by now. Showing that the company knows what to do with you in the long term shows commitment, and it means you are not a young and gifted person to be burned out and expected to leave in a year.
  • Is there a chance of a 4-day week or flexible work hours? For a young person it is no problem doing an 18-hour shift in an office where everything is provided for you. As soon as you have children, all kinds of other things add to your calendar that can’t be moved.
  • What does this company do to ensure diversity? This might be a bit direct, but it is easy to weed out those that pay lip service.
  • What is the process to move in between departments in this company? As you get older and you stay around for longer, you might want to change career. A change in your life might make that necessary. Is the company supporting this?
  • Is there a way to contribute to hiring and resourcing even when you are not in HR? This could give you the chance to ask the right questions to weed out applicants that are technically impressive but immature or terrible human beings.
  • What is done about accessibility in the internal company systems? I worked for a few companies where internal systems were inaccessible to visually impaired people. Instead of giving them extra materials we should strive for making internal systems available out-of-the-box.
  • What is the policy on moving to other countries or working remotely? Many talented people can not move or don’t want to start a new life somewhere else. And they shouldn’t have to. This is the internet we work on.
  • What do you do to prevent ageism in the company? A lot of companies have an environment that is catering to young developers. Is the beer-pong table really a good message to give?

I’ve added these questions to a repo on GitHub, please feel free to add more questions if you find them.

FWIW, I started where I am working right now because I got good answers to questions like these. My interviews were talking to mixed groups of people telling me their findings as teams and not one very aggressive person asking me to out-code them. It was such a great experience that I started here, and it wasn’t a simple impression. The year I’ve worked here now proved that even in interviewing, diversity very much matters.

Photo Credit: shawncplus

Mozilla WebDev CommunityExtravaganza – July 2016

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Basket switch to Salesforce

First up was pmac, who shared the news that Basket, the email newsletter subscription service, has switched to using Salesforce as the backend for storing newsletter subscriptions. In addition, the service now has a nifty public DataDog metrics dashboard showing off statistics about how the service is performing.

Engagement Engineering Status Board

Next was giorgos, who shared a status page listing the current status of all the services that Engagement Engineering maintains. The status board pulls monitoring information from Dead Man’s Snitch as well as New Relic‘s application and Synthetics monitoring. A worker running on AWS Lambda pulls the information and writes it to a YAML file in the repo‘s gh-pages branch, and the status page itself reads the YAML file via JavaScript to build the display.
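The worker-writes-a-file, page-reads-it pattern is easy to sketch. The following Python toy (service names, statuses, and field layout are all invented; a real worker would query the monitoring APIs instead of returning canned values) shows the shape of the YAML a static status page could fetch:

```python
import datetime

def collect_statuses():
    # Hypothetical results; a real worker would call Dead Man's Snitch
    # and New Relic here and map their responses to simple labels.
    return {"basket": "ok", "snippets": "ok", "bedrock": "degraded"}

def render_status_yaml(statuses):
    """Render a simple YAML document for a static status page to parse.

    Emitting the YAML by hand keeps the Lambda worker dependency-free;
    field names here are made up for illustration.
    """
    lines = [
        "updated: %s" % datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "services:",
    ]
    for name in sorted(statuses):
        lines.append("  - name: %s" % name)
        lines.append("    status: %s" % statuses[name])
    return "\n".join(lines) + "\n"

print(render_status_yaml(collect_statuses()))
```

Committing the rendered file to a gh-pages branch then gives the page a plain static URL to poll, with no server-side code in the read path.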


ErikRose stopped by to share more cool things that shipped in DXR this month:

  • Indexing for XBL and JavaScript.
  • Indexing 32+ new projects
  • Added a 3rd build server
  • Several performance optimizations that cut down build times by roughly 25%.
  • C++ macro definitions, method overrides, pure virtuals, substructs, and more are all now indexed. In addition, you can now easily jump between header files and their implementations.
  • UI improvements, including contrast improvements, a new filename filter, and jumping directly to files that are the only result of a query.

Special thanks to intern new_one and contributors twointofive and abbeyj. Also special thanks to MXR for being shut down due to security bugs and allowing DXR to flourish in its wake.

Fathom 1.0 and 1.1

Erik also brought up Fathom, an experimental framework for extracting meaning from webpages. Fathom allows you to write declarative rules that score and classify DOM nodes, and then extract those nodes from a DOM that it analyzes.

This month we shipped the 1.0 version of Fathom, as well as a 1.1 release with a bug fix for Firefox support as well as an optimization fix. It’s available as an NPM module for use as a library.


The Roundtable is the home for discussions that don’t fit anywhere else.

Engagement Engineering Hiring – Senior Webdev and Site Reliability Engineer

Last up was pmac again, who wanted to mention that the Mozilla Engagement Engineering team is hiring a Senior Web Developer and a Site Reliability Engineer. If you’re interested in working at Mozilla, click those links to apply on our careers site!

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Chris H-CUnits and Data Follow-Up: Pokémon GO in the United Kingdom

Hit augmented-reality mobile gaming sensation Pokémon GO is now available in the UK, so it’s time to test my hypothesis about searches for 5km converted to miles in that second bastion of “Let’s use miles as distance units in defiance of basically every other country”:


Results are consistent with hypothesis.

(( Now if only I could get around how the Google Play Store is identifying my Z10 as incompatible with the game… ))


Mozilla Addons BlogA Better Add-on Discovery Experience

People who personalize Firefox like their Firefox better. However, many people don’t know that they can, and for those who do, it isn’t particularly easy. So a few months ago, we began rethinking our entire add-on discovery experience—from helping people understand the benefits of personalization, to making it easier to install an add-on, to putting the right add-on in front of people at the right time.

The first step we’ve taken towards a better discovery experience is in the redesign of our Add-on Discovery Pane. This is typically the first page users see when they launch the Add-on Manager at about:addons.

Add-on Discovery Pane before Firefox 48


We updated this page to target people who are just getting started with add-ons, by simplifying add-on installation to just one click and using clean images and text to quickly orient a new user.

Disco Pane One Click Install

It features a tightly curated list of add-ons that provide customizations that are easy for new users to understand.

Add-on Discovery Pane starting with Firefox 48


We started with a small list and collaborated with their developers to ensure the best possible experience for users. For future releases, we will refresh the featured content on a more frequent basis and open up the nomination process for inclusion.

Our community of developers create awesome add-ons, and we want to help users discover them and love their Firefox even more. In the coming months, we are going to continue improving the experience by making recommendations that are as uniquely helpful to users as possible.

In the meantime, this first step toward improving the Firefox personalization experience will land in Firefox 48 on August 1, and is available in Firefox Beta now. So download Firefox Beta, go to about:addons and give it a try! (You can also reach this page by going to the Tools menu and choosing “Add-ons”). We would love to hear your feedback in the forums.