Justin Dolske: Photon Engineering Newsletter #9

Well, here we go again with a big update for Newsletter #9! [Since the last update, I took a vacation and have been getting caught up ever since, so this update is terribly overdue. Sorry about that!]


Animations for toolbar icons are finally landing! Yaaaaay! :kermithands: The stop/reload and pin-to-overflow animations are in Nightly, and the Pocket, bookmarks, and downloads icons are next. I’d like to talk a little about the way these animations are implemented. The effect is simple, but there’s a lot of work behind the scenes.

As is often the case in engineering, we had a number of requirements to satisfy. The most important was performance – we’re making Firefox 57 blazing fast, and it wouldn’t do to have animations hurting that work. We worked closely with our Graphics and Platform teams to develop a technique (more on that in a second) that was efficient, and specifically could run entirely within the compositor process. This helps ensure the animation can run smoothly at 60fps without impacting (or being impacted by) all the other stuff running in the browser. The other big requirement was around UX. We wanted to minimize technical constraints for our visual designers, so that their ideas were not bound by platform limitations. [For example, we had an old patch to convert the page-loading throbber from an APNG to a static image animated with a CSS transform (rotation). That’s nice, but then you’re limited to effects possible in pure CSS.] We also needed to use SVG in the animation, since the rest of Firefox’s icons are SVG now. This gives the best result when scaling to different display DPI settings, and allows us to dynamically adjust the icon colors based on system style.

The technique we’re using for Photon is based around an SVG “filmstrip” – each frame is pre-drawn separately within an SVG image. We then use CSS to crop it to a specific frame, and a CSS Animation to move the crop box for each frame. Since it’s all declarative, Gecko and the compositor process can handle the animation very efficiently.
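As a rough sketch of the technique (not the actual Firefox markup; the class names, frame count, and sizes here are invented for illustration), the crop-and-step animation looks something like this:

```css
/* A hypothetical 16-frame filmstrip: the SVG contains 16 frames of
   16px each, laid out side by side (256px total). */
.anim-box {
  width: 16px;
  height: 16px;
  overflow: hidden;                    /* crop to a single frame */
}
.anim-box > .filmstrip {
  width: 256px;
  height: 16px;
  background: url("filmstrip.svg");    /* invented file name */
  /* steps(15) snaps through the 16 frame positions; animating
     transform lets the compositor run it off the main thread */
  animation: film 250ms steps(15) forwards;
}
@keyframes film {
  to { transform: translateX(-240px); } /* last frame = 15 × 16px */
}
```

Because everything here is declarative CSS animating only `transform`, no JavaScript runs per frame.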


A demo is worth 10,000 words, so check out Jared’s SVG Filmstrip Demo to see how this works in detail. (Since it’s all based on web technologies, you can view-source and see the same technique running in your browser that we use in chrome.)

Of course, nothing is ever simple! There’s a lot of work that goes into getting these assets ready. First, our visual designers prototype an animation in Adobe After Effects. It then goes through a process of design iteration and user testing. (Is it distracting? Does it communicate what we want? Too fast? Too slow?) The animation is then exported to an SVG file, but the markup from Adobe is usually not very good. So we use SVGO to optimize it, often further hand-tune the SVGO output, and finally tweak it to make use of the dynamic context-fill color specified from the browser’s CSS.

Phew! But thankfully, as a user, you can just enjoy the finished result. We’d like to refine the creation process, but overall it’s a pretty powerful and flexible technique that we’ll be using more and more.

Recent Changes


  • Lots of minor bugfixes to various aspects of the earlier changes we made. Thanks to contributors Jeongkyu Kim (bug 1376484) and Adolfo Jayme (bug 1376097) for their work!
  • Flexible spacers are now available in customize mode. This is the first step towards having the location field be centered in the navbar, as part of the Photon design.
  • Work on the Page Action panel is progressing. We gave a prototype sneak-peek at the All-Hands (see Photon Newsletter #8), but it’s not quite ready to land yet.
  • Made the Library button work correctly in the overflow panel.
  • Removed the Page Setup item from the hamburger menu, as it was a bit confusing and the functionality can be accessed elsewhere.


  • The Pin to Overflow animation landed, try it now! Right-click on a toolbar button and use “Pin to Overflow Menu” to see the awesome animation!
  • The Stop/Reload animation also landed! We’ve gotten great feedback about speeding it up and making it less distracting, and will be working on that soon. You may have noticed that, while animating, the icon looks thinner than the normal (static) icon… We’ll be updating all the toolbar icons to this new Photon style in a few weeks, which will make the transition between states feel smoother.
  • The Save to Pocket animation is going through review; once that lands, we’ll start implementing the new bookmarking animation.
  • The new arrow-panel animations landed… but bounced due to test failures. We’re working around the failures by disabling the animation in tests, which means we bumped up the priority of having the panel animations be controlled by the toolkit.cosmeticAnimations.enabled preference. Once this is all hooked up and tests are happy, the animation will reland.


  • The prefs reorg v2.0 has now landed! (Check out the design specs for the details of exactly what changed.) This had a rough time getting landed, as it encountered a string of tricky intermittent test failures and memory leaks. Kudos to the team for working through it all and getting it landed!
  • Improved display on small screens by fixing some wrapping issues that were causing scrolling.

Visual redesign:

  • On Windows 10, we will now use the system accent color in the title bar, when you’ve enabled this in your Windows settings.
    • You’ll first need to open the following OS dialog, by right-clicking on your Desktop, selecting Personalize, and then selecting the Colors section. Scroll down to “Show accent color on the following surfaces” and enable the “Title bars” checkbox.
    • You can now select different colors here, and have them show up in Firefox. (Or use the setting at the top of that dialog to have Windows automatically pick a color that complements your desktop background)
  • Styling followups to the identity block (the leftmost button in the URL bar) when hovering/clicking it.
  • The UI Density control is now ordered by logical size, and we fixed an issue that made the text labels unreadable on Linux.
  • Search textbox widget styling has been updated to be common across the browser.
  • Slightly increased the size of the tab overflow indicator.
  • A number of followup bugfixes for the title bar on Windows 10.


  • Turned off the compact newtab tiles. Originally we thought enabling the new style would be a good transition to the Activity Stream style, but on further reflection it felt too much like making two different changes. So Firefox release users will continue to see the existing (old) newtab page in Firefox 56, until Activity Stream fully replaces it in 57. (Activity Stream is now enabled on Nightly, so all the changes here make it a little confusing to explain, sorry!)
  • The onboarding overlay will be hidden when the browser is too narrow, as the current design/implementation requires a minimum width or else it overflows the window.
  • The tour notifications shown at the bottom of new tabs and about:home are now suppressed for the first few minutes of using the browser, and will rotate to a different suggestion after a certain amount of time or number of impressions. Also, the slide-up animation was removed to make it less distracting.
  • The tour step to set your default browser will now show an alternative message if you’ve already made Firefox your default browser.
  • Our UITour support can now open the awesomebar with a search term pre-filled, which will be used in the Firefox 57 Address Bar tour to demonstrate searching from the awesomebar.



That’s all (!) for this update.

Michael Kohler: WebVR/A-Frame Hackathon in Lausanne

Last Saturday the Mozilla Switzerland community, together with Liip, organized a WebVR/A-Frame Hackathon in Lausanne, Switzerland. As always (also in Zurich), we turned to Liip for support with a venue, and as always they were happy to host us in Lausanne. At this point, a big “thank you” to them! Without their support we couldn’t organize events as easily as we can now.

A-Frame is a framework for WebVR, making it easy to create scenes and components that can be viewed in the browser, with Cardboard, the HTC Vive and other VR devices. A-Frame even includes an Inspector that makes it easy to develop VR applications in the browser without having to test it constantly on a real VR device.
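For a flavor of how little markup a scene needs, here is a minimal A-Frame example (the CDN URL and version number are assumptions and may need updating):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A-Frame library; the version/path here is an assumption -->
    <script src="https://aframe.io/releases/0.6.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- a-scene sets up the camera, renderer, and VR entry button -->
    <a-scene>
      <a-box position="0 1 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Opening a page like this in a WebVR-capable browser is enough to look around a 3D scene, which is why it works so well as hackathon starter material.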

About the event

We gathered at 10 o’clock for some initial talks and a light breakfast. It also took some time to set up the Vive and make sure we could demo A-Painter and other VR applications. At around 10:30 we started with a quick intro by Geoffroy and me. We explained the agenda and shared some general information about Mozilla. After that, Ben gave an intro to WebVR and how they are using A-Frame at Archilogic. It’s amazing to see how it is being used in the wild for really amazing and helpful projects! Thank you, Ben, for taking the time to come to Lausanne and present to us!

Then we did some demos with our gadgets and started hacking. With Ben’s help it was easy to create our own first projects. What struck me most is that my creativity and design skills ran out quite quickly, but I could nevertheless easily create a good proof of concept for a future iteration (through skilled people and pull requests). :)

Here are some projects that were created/started during this hackathon (click on the images to see it live in your browser):

Another thank you – our preparation hero

A big “thank you” goes to Geoffroy, who has helped us out with the whole coordination with the venue and did an amazing job promoting the event to his network. We are so happy to have people like Geoffroy who are helping us keep the Open Web public and accessible to all!

Honestly, the only things I did were the administrative parts: creating event pages on the Reps portal and Meetup, coordinating a speaker, and giving a short intro talk about the agenda (with his help). Everything about the venue, food, and gadgets was organized by him. He even went as far as traveling to Zurich to get an HTC Vive so we could showcase that as well!


We had a lot of fun, and we hope our attendees did too! The verbal feedback was good; however, we couldn’t get many responses through the feedback form, even though I reminded all participants by email after the event to please fill it out. Additionally, we need to do better in terms of diversity!

All photos are used with permission by Liip or their respective owners if linked to a tweet.

Mozilla Localization (L10N): Making a change with Pootle

tl;dr: As of 1 September 2017, Mozilla’s Pootle instance (mozilla.locamotion.org) will be turned off. Between now and then, l10n-drivers will be assisting l10n communities using Pootle in moving all projects over to Pontoon. Pootle’s positive impact on Mozilla’s continued l10n evolution is undeniable and we thank them for all of their contributions throughout the years.

Mozilla’s localization story has evolved over time. While our mission to improve linguistic accessibility on the Web and in the browser space hasn’t changed, the process and tools that help us to accomplish this have changed over the years. Some of us can remember when a Mozilla localizer needed to be skilled in version control systems, Unix commands, text editors, and Bugzilla in order to make an impactful contribution to l10n. Over time (and in many ways thanks to Pootle), it became clear that the technical barrier to entry was actually preventing us from achieving our mission. Beginning with Pootle (Verbatim) and Narro, we set out to lower that barrier through web-based, open source translation management systems. These removed many of the technical requirements on localizers, which in turn led to us being able to ship Firefox in languages that other browsers either couldn’t or simply wouldn’t ship; making Firefox the most localized browser on the market!

Thanks to Pootle, we’ve learned that optimizing l10n impact through these tools is critical to our ability to change and adapt to new, faster development processes taking the Internet and software industries by storm. We created Pontoon to take things further and focus on in-context localization. The demand for that tool became so great that we ended up adding more and more projects to it.

Today I’m announcing the next step in our evolution: as of 1 September 2017, all Mozilla l10n communities using Pootle will be migrated to Pontoon and the Mozilla Pootle instance (mozilla.locamotion.org) will be turned off.


Over the years, we’ve developed a fond relationship with Translate House (the organization behind Pootle), as have many members of the Mozilla l10n community. Nearly five years ago, we entered into a contract agreement with the Translate House team to keep a Mozilla instance of Pootle running, to develop custom features for that instance, and to mentor l10n communities. As l10n has shifted through the Mozilla organization year after year, the l10n team recently found themselves as part of another internal reorganization, right at the moment in which contract renewal was up for discussion. With that reorganization, came new priorities for l10n and a change in budget for the coming year. In the face of those changes, we were unable to renew our contract with Translate House.

What now?

Before 1 September, the l10n-drivers will be proactively contacting l10n communities using Pootle in order to perform project migrations into Pontoon. Moving project-to-project, we’ll start with the locales that we’re currently shipping for a project, then move to those which are in pre-release, and finally those that have seen activity in the last three months. In the process, we’ll look out for any technical unknown unknowns that Pontoon engineers can address to make the transition a positive and seamless one.

There are a few things you can do to make the transition run smoothly:

  1. Log into Pontoon with your Firefox Account. If you don’t already have a Firefox account, please create one.
  2. Process all pending suggestions in your Pootle projects (i.e., bring your community’s suggestion queue down to 0).
  3. Flag issues with Pontoon to the l10n-drivers so that we can triage them and address them in a timely manner. To do this, please file a bug here, or reach out to the l10n-drivers if you’re not yet comfortable with Bugzilla.

We understand that this is a major change to those contributing to Mozilla through Pootle right now. We know that changing tools will make you less productive for a while. We’ll be holding a public community video call to address concerns, frustrations, and questions face-to-face on Thursday, 27 July at 19:00 UTC. You’re all invited to attend. If you can’t attend due to time zones, we’ll record it and publish it on air.mozilla.org. You can submit questions for the call beforehand on this etherpad doc and we’ll talk about them on the call. We’ve also created this FAQ to help answer any outstanding questions. We’ll be adding the questions and answers from the call to this document as well.

Finally, I would like to personally extend my thanks to Translate House. Their impact on open source localization is unmatched and I’ve truly enjoyed the relationships we’ve built with that team. We wish them all the best in their future direction and hope to have opportunities to collaborate and stand together in support of open localization in the future.

Mozilla Open Innovation Team: Mozilla OI Update from the Open and User Innovation Conference 2017

Written by Rosana Ardila and Rina Jensen

For the second successive year we attended the Open and User Innovation Conference (the OUI Conference) — the leading conference in the field — held this year in the beautiful city of Innsbruck, Austria.

A few of our highlights:

Success through building movements

When HTT joined Elon Musk’s hyperloop challenge, they knew that getting to a solution alone with their limited resources was unrealistic. Instead, they took lessons from activist grassroots groups and growing crowdsourcing projects and focused on building a movement. They did this by following five design principles that reject traditional organisational design, all centered on different ways of organising in the open, principles they have labeled Wicked Goal Design. Implementing these design principles allowed HTT to gather top engineers across different fields and disciplines to help work on the problem in their spare time, by setting up clear structures for engagement such as donating personal time and money for the opportunity to work on the project. The results have been remarkable: the movement has grown to a community of 40,000 screened and selected individuals, of which 800 are bought-in collaborators. They have also built up a solid network of key partners who contribute much of the work. In 2018 they are expected to test their first passenger system.

Source: http://hyperloop.global/team/

A healthy community is a strong network

The R community is an open source community building a system for statistical computing. It has had tremendous success because of its tight networks and the ever-increasing partner pool that help develop the technology in new and interesting directions. For us, this highlighted the need to think about a healthy open source community not just as the sum of the individuals within it, but also in terms of the networks and ecosystems that surround and support it.

Source: R Community Presentation

HeroX and the search for diversity

HeroX is a crowdsourcing platform specialising in many types of competitions, prizes and challenges. As such, they have a great need for a diverse community, but the HeroX platform shows much lower contribution rates from women than men. The team therefore started a project experimenting with the effects on diversity of having in-person expert mentors help new members. We will follow the results of the remote mentorship model with interest.

Co-creation, rather than just sharing information, maintains productive communities

A current theme is the role of co-creation in open communities. Thomas Maillart from the University of Geneva found that co-creation, in the form of hackathons, leads to creative outbursts in communities, but these can only be sustained through continuous feedback and follow-up. A netnographic study of the Open Energy Monitor community looked into what triggers successful prototyping and came to similar findings: asking for and receiving feedback are the most important actions to trigger successful prototyping. Finally, Maria Halbinger from CUNY found in her study of makerspaces that while identifying with a community decreases entrepreneurship in individuals, co-creation can actually mitigate this effect. This research highlights that communities thrive when they are creating together, rather than just sharing information, spaces, or social bonds. A lesson for Mozilla communities is that a thriving community goes beyond identifying with the mission; it comes down to solving problems and creating together.

Overall we can see that open innovation, including open source and crowdsourcing, is exploding as an area of research. If you are interested in more details about the specific papers, you can check out the Book of Abstracts.

Mozilla OI Update from the Open and User Innovation Conference 2017 was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Air Mozilla: Mozilla Weekly Project Meeting, 24 Jul 2017

Mozilla Weekly Project Meeting The Monday Project Meeting

Will Kahn-Greene: Soloists: code review on a solo project


I work on some projects with other people, but I also spend a lot of time working on projects by myself. When I'm working by myself, I have difficulties with the following:

  1. code review
  2. bouncing ideas off of people
  3. peer programming
  4. long slogs
  5. getting help when I'm stuck
  6. publicizing my work
  7. dealing with loneliness
  8. going on vacation

I started a #soloists group at Mozilla figuring there are a bunch of other Mozillians who are working on solo projects and maybe if we all work alone together, then that might alleviate some of the problems of working solo. We hang out in the #soloists IRC channel on irc.mozilla.org. If you're solo, join us!

I keep thinking about writing a set of blog posts for things we've talked about in the channel and how I do things. Maybe they'll help you.

This one covers code review.

Read more… (10 mins to read)

Hacks.Mozilla.Org: Optimizing Performance of A-Frame Scenes for Mobile Devices

A-Frame makes building 3D and VR web applications easy, so developers of all skill levels can create rich and interactive virtual worlds – and help make the web the best and largest deployment surface for VR content. For an Oregon State University capstone project focused on WebVR, our team investigated performance and optimizations for A-Frame on Android smartphones. We developed a means of benchmarking the level of 3D complexity a mobile phone is capable of, and determining which performance metrics are required for such a benchmark.


From the left, Team OVRAR (Optimizing Virtual Reality and Augmented Reality):

Branden Berlin: JavaScript Compatibility and Model Lighting
Charles Siebert: Team Captain, Project Designer, and Modeling
Yipeng (Roger) Song: Animations and Texturing

Results and Recommendations

Texture size: The framework resizes textures to the nearest power of two, which heavily increases the loading and rendering workload in the scenes. We found that high-resolution textures that didn’t match the criteria were resized up to 8192×8192, with one texture taking up to 20 MB! Using texture dimensions that are already a power of two helps ensure optimal memory use. Check the Web Console for warnings when textures are resized.
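To see why this matters, here is a quick sketch (plain JavaScript, not an A-Frame API) of rounding a texture dimension up to the next power of two; the resize can more than double the pixel count:

```javascript
// Round a texture dimension up to the next power of two.
// (Sketch only; the framework may round to the nearest power instead.)
function nextPowerOfTwo(n) {
  return Math.pow(2, Math.ceil(Math.log2(n)));
}

// A 5000x5000 source texture gets bumped to 8192x8192:
// roughly 2.7x the pixels to decode, upload, and sample.
console.log(nextPowerOfTwo(5000)); // → 8192
console.log(nextPowerOfTwo(256));  // → 256 (already a power of two)
```

Authoring textures at power-of-two sizes up front avoids the resize entirely.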

Asset Limit: We found that having more than 70 MB of assets loaded for one web page was unrealistic in a phone environment. It caused significant delays in loading the scene fully, and in some cases crashed the browser on our phones. Use the Allocations recorder in the Performance Tool in Firefox to check your scene’s memory usage, and the A-Frame Inspector to tune aspects of rendering for individual objects.

Tree map

Resolution cost: Higher-resolution trees caused delays in loading the models and significant slowdowns in rendering the scenes. Our high-resolution tree features 37,000 vertices, which increases the graphics rendering workload, including lighting from multiple light sources. This heavily limited the number of models we could load into our scene. We also found an upper limit for our devices while handling these trees: when our room reached about 1,000,000 vertices, our phone browsers would crash after spending a few minutes attempting to load and render. You can add the “stats” property to your <a-scene> tag to see the number of vertices in the scene.

Object count: Load times increased linearly with the number of models to be drawn to the scene. This adds up quickly: if each object takes, for example, three milliseconds to load, the total delay grows with every model. Further inspecting the memory snapshot shows that our object models are read in and stored in object arrays for quicker access and rendering. Load times for larger object models also grew linearly with the number of vertices and faces used to create the model, and their resulting normal vectors. Check the A-Frame stats monitor to keep an eye on your object count.

Measurement overhead: During the testing, we used WebIDE to monitor on-device performance. We found that the overhead of USB debugging on our Android devices caused performance to drop by nearly half. Our testing showed that CPU performance was not the leading bottleneck in rendering the scenes. CPU usage hovered at 10-25% during heavy performance drops. This shows that the rendering is mostly done on the GPU, which follows how OpenGL ES 2.0 operates in this framework.

Testing Approach

Our approach was to:

  • render multiple scenes while measuring specific metrics
  • determine the best practices for those metrics on mobile
  • report any relevant bugs that appear.

The purpose of creating a benchmark application for a mobile device is to give a baseline for what is possible to develop, so developers can use this information to plan their own projects.

We tested on the LG Nexus 5X and used the WebIDE feature in Firefox Nightly to pull performance statistics from the phone while it was rendering our scenes, tracking frames per second (FPS) and memory usage. Additionally, we tracked processor usage on the device through Android’s native developer settings.

To begin, we broke down the fundamental parts of what goes into rendering computer graphics, and created separate scenes to test each of these parts on the device. We tested object modeling, texturing, animation, and lighting, and then created standards of performance that the phone needed to meet for each. We aimed to first find a baseline performance of 30 FPS for each part and then find the upper bound – the point at which the feature breaks or causes visual drops in performance. We separated these features by creating a VR environment with four “rooms” that tested each in A-Frame.

Room 1: Loading object models using obj-loader


In the first room, we implemented a high-resolution tree, loading a large number of low vertex-count objects and comparing to a small number of high vertex-count objects. Having a comparable number of vertices rendered in either scene helped us determine the performance impact of loading multiple objects at once.

Room 2: Animations and textures

In this room, we implemented textures and animations to determine the impact on initial load times and the cost of calculating animation methods. We used A-Frame’s built-in functions to attach assets to objects to texture them, and we used A-Frame’s animation methods to animate the objects in this room. This allowed us to easily test this scenario of animating the textured objects and measure the differences between the two iterations. In the first iteration, we implemented low-resolution textures on objects to compare them with high-resolution textures in the second iteration. These resolution sizes varied from 256×256 to 8192×8192. We also wanted to compare the performance between the two rooms, and see if texturing the objects would cause any unforeseen issues with animations other than the initial load time when downloading the assets from the web page.

Room 3: User interaction and lighting

This room’s initial concept focused on the basis of gaming: user interaction. We utilized JavaScript within A-Frame to allow the user to interact with objects scattered about a field. Due to the limited mobility of mobile-VR interaction, we kept it to visual interaction. Once the user looked at an object, it would either shrink or enlarge. We wanted to see if any geometric change due to interaction would impact hardware demand. We manipulated the growth size of object interactions and found a few unreliable stutters. Generally, though, the hardware performance was stable.

For the second iteration, we ramped up the effects of user interactions. We saw that nothing changed when it came to physical effects on objects in the world, so we decided to include something that is more taxing on the hardware: lighting.

As the user interacted with an object, the object would then turn into a light source, producing an ambient light at maximum intensity. We scattered these objects around the room and had the user turn them on, one by one. We started with 10 ‘suns’ and noticed an initial lag when loading the room, as well as a 2-3 second FPS drop to 13, when turning on the first sphere. After that, the rest of the spheres turned on smoothly. We noticed a steady and consistent drop of about 10 FPS for every 10 max-intensity light sources. However, as the intensity was decreased, more and more lighting sources were allowed before a noticeable change in performance occurred.

Room 3 screenshots

Room 4: All previous features implemented together.

Developers are unlikely to use just one of these specific features when creating their applications. We created this room to determine if the performance would drop at an exponential rate if all features were added together, as this would be a realistic scenario.

Further Information

You can find all the source code and documentation for our OVRAR project on Github.

If you have any questions, ask in the comments below. Thanks!

QMO: Firefox Developer Edition 55 Beta 11 Testday Results


As you may already know, last Friday – July 21st – we held a new Testday event, for Firefox Developer Edition 55 Beta 11.

Thank you all for helping us make Mozilla a better place – Ilse Macias, Athira Appu, Iryna Thompson.

From India team:  Fahima Zulfath A, Nagarajan .R, AbiramiSD, Baranitharaan, Bharathvaj, Surentharan.R.A, R.Krithika Sowbarnika, M.ponmurugesh.

From Bangladesh team: Maruf Rahman, Sajib Hawee, Towkir Ahmed, Iftekher Alam, Tanvir Rahman, Md. Raihan Ali, Sazzad Ehan, Tanvir Mazharul, Md Maruf Hasan Hridoy, Saheda Reza Antora, Anika Alam Raha, Taseenul Hoque Bappi.


– several test cases executed for the Screenshots, Simplify Page and Shutdown Video Decoder features;

– 7 new logged bugs: 1383397, 1383403, 1383410, 1383102, 1383021, #3196, #3177

– 3 bugs verified: 1061823, 1357915, 1381692

Thanks for another successful testday! 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Cameron Kaiser: Do you work for Facebook? Do you like old Power Macs? Help a brother out.

Are you a front-end engineer for Facebook? Are you sympathetic to us struggling masses trying to keep our Power Macs out of the landfill? Do you, too, secretly agree that the iMac G4 looks far better than any of the crap Tim Cook and friends churn out now?

If so, give your TenFourFox users a little hand. Yes, I will preface this by saying I do not use, nor (candidly) particularly like, Facebook, but lots of TenFourFox users do and I've reached the limit of my debugging ability against the minified front end of Facebook.

We have an issue with our custom keyboard handling code with users pressing Delete repeatedly: it not only doesn't delete anything but the first character, but actually inserts repeated 0x08 bytes (i.e., the ASCII value for Delete). Everything else seems to work. Obviously this is our bug, because regular Firefox (including Firefox ESR 45, on which we are ultimately based) doesn't do this, but I can't determine what DOM fields our keydown events aren't correctly providing to Facebook's keydown handler because the code is minified to hell. And debugging minified event handlers even with my Quad G5 cranked to ludicrous is not a good time. No other site seems to do this, so I don't have something smaller to test against either.

So, if you're a Mac retronerd and also a Facebook front-end engineer, does this ring a bell to you? If it doesn't, is there at least a non-minified version I can test against? I sent an E-mail to browsers at fb but got no reply, so I'm asking publicly and begging for your mercy so we can keep our otherwise perfectly good Macs serviceable and running. And for some people, "serviceable and running" means Facebook. So I won't judge. Honest.

Jokes aside, I'd really appreciate any of your insights and time. You can drop a line to classilla at floodgap dawt com if you have a suggestion (or better yet the answer).

We're delayed on FPR2 because I'm trying to track down a regression one of the security backports caused during shutdown when network connections are pending. However, this blog post is being typed in it, so it's otherwise working and I'm still shooting for the official beta next week.

Chris Pearce: How to install Ubuntu 17.04 on Dell XPS 15 9550

I had some trouble installing Ubuntu 17.04 to dual-boot with Windows 10 on my Dell XPS 15 9550, so documenting here in case it helps others...

Once I got Ubuntu installed, it runs well. I'm using the NVIDIA proprietary driver, and I've had no major hardware issues so far.

Most of the installation hurdles for me were caused by Ubuntu not being able to see the disk drive while it was operating in RAID mode, and by UEFI/Secure Boot, which seemed to block the install somehow.

The trick to getting past these hurdles was to set Windows to boot into Safe Mode, switch the disk drive to AHCI and disable Secure Boot in the BIOS, boot back into Windows in Safe Mode, and then switch Windows back to normal (non-Safe Mode) boot.

I found rcasero's notes on installing Ubuntu on Dell XPS 15 9560 useful.

Detailed steps to install...
  1. (If your Windows partition is encrypted, print out a copy of your BitLocker key. You'll need to enter this on boot after changing anything in your BIOS.)
  2. Boot into Windows 10.
  3. I needed to resize my main Windows partition from inside Windows; the Ubuntu installer seemed unable to cope with resizing my encrypted Windows partition. You can resize the partition using Windows' "Create or edit Disk Partitions" tool.
  4. Configure Windows to boot into safe mode: Press Win+R and run msconfig.exe > Boot > Safe Mode. Reboot.
  5. Press the F12 key while the BIOS splash screen comes up. Just repeatedly pressing it while the machine is booting seems to be the most reliable tactic.
  6. In the BIOS menu, BIOS Setup > System Configuration > SATA Operation, change "RAID On" to "AHCI".
  7. In the BIOS menu, disable Secure Boot.
  8. Reboot into Windows. You'll need to enter your BitLocker key to unlock the drive since the BIOS changed. Windows will boot into Safe Mode. If you don't have your Windows install set to boot into Safe Mode, you'll get a BSOD.
  9. Once you've booted into Windows Safe Mode, you can configure Windows to boot normally (non-Safe Mode) via msconfig.exe > Boot > Safe Mode again.
  10. Reboot with your Ubuntu USB Live Disk inserted, and press F12 while booting to select to boot from the Live USB disk.
  11. The rest of the install Just Worked.
  12. Once you've installed Ubuntu, for better reliability and performance, enable the proprietary GPU drivers, in System Settings > Software and Updates > Additional Drivers. I enabled the NVIDIA and Intel drivers.
  13. I've found the touchpad often registers clicks while I'm typing. Turning off System Settings > Mouse and Touchpad > "Tap to click" fixed this and gives pretty good touchpad behaviour.
  14. Firefox by default has its hardware-accelerated layers disabled, but force-enabling them seems to work fine. Open "about:config" in Firefox, toggle "layers.acceleration.force-enabled" to true, and restart Firefox.

Emma Irwin: We See You! Reaching Diverse Audiences in FOSS

This is the third in a series of posts reporting findings from research into the state of D&I in Mozilla’s communities. The current state of our communities is mixed when it comes to inclusivity: we can do better, and as with the other posts in this series, this one is an effort to be transparent about what we’ve learned in working toward that goal.

This post shares findings focused on inclusive outreach, communication and engagement.

Photo Credit: “Eye”

When joining Mozilla or any other open source community, we look to see our identities reflected in the values, language(s), methods of connection and behaviors of the community and project leadership. We uncovered a number of challenges, and opportunities in making project communication more accessible and inclusive.

Say my name, say my name

Research showed that in communication, project initiatives, and even advocacy messaging, we appear to tiptoe around naming diversity dynamics such as gender identity, sexual orientation, age, physical ability and neurodiversity. Some interviewees shared an impression that this might be intended to avoid overstepping cultural lines or upsetting religious beliefs. It was also suggested that we lean on ‘umbrella’ identities like ‘Women’ as a catch-all for non-male, non-binary people.

“This goes into the gender thing — a lot of the time I see non-binary people get lumped in with “women” in diversity things — which is very dysphoria-inducing, as someone who was assigned female but is definitely *not*.” — community interview

Only by using inclusive language, and by identifying people the way they self-identify in project communication and invitations, are we truly welcoming diverse collaboration, connection and engagement.

The Community Participation Guidelines are a great reference for how to work with inclusive language and the full range of diversities we seek to include.

Breaking the Language Barrier

Survey research showed that only 21% of our respondents spoke English as a first language. Although this snapshot was taken with limited data-sets, our interviews also generated a narrative of struggles for non-native English speakers in an English-first organization*.

Most striking was how the intersection of language and other dimensions of diversity raises barriers for those already marginalized — for example, parents and non-binary contributors who also struggle with English face an almost impossible challenge.

“My friend was a contributor until she had her baby, and since most time she had would be taken trying to translate information from English, she stopped” — community interview

* Primary project communication channels, media, print/copy resources are in English. Mozilla has a very active L10N community.

“Non-English speakers struggle to find opportunity before it expires — People want to feel part, but they feel they are late to everything that Mozilla throws. They need enough contact with those responsibilities or someone who speaks their language.” — community interview

Overall, our interviews left us with a strong sense of how difficult, frustrating and even heartbreaking the experience of speaking and listening within an English-first project can be — and that the result is reduced opportunity for innovation, and for the mission overall.

As a result it’s clear that creating strategies for speaking and listening to people in their own language is critical in obtaining global perspectives that would otherwise remain hidden.

We found early evidence of this value in this research by conducting a series of first-language interviews, which were especially valuable for people already marginalized within their culture. This could align with our other recommendations for identity groups.

Exclusion by Technical Jargon

‘Technical jargon’ and overly complicated or technical language were cited as a primary challenge to getting involved, and not always for the reasons you might think: the data suggests that a lack of ‘technical confidence’ may be influencing that choice.

In one community survey, men reported much greater confidence in their technical ability than women, which might help explain low technical participation from women, and is consistent with other research showing that women tend to apply only for jobs they feel 100% qualified for.

“I feel uncomfortable when they talk about a new technology [at community events].” “I am excited about the new technology and want to jump in, but level of talk and people can be exclusive. I end up leaving.” — community interview

“Technology focus only feels exclusive — we need to say why that technology helps people, not just that it is cool” — community interview

By curbing unnecessary technical language in project descriptions, invitations, issues and outreach, we make it possible for anyone to step into a project. This, combined with opportunities for mentorship, may have a huge impact on the diversity of technical projects. The Rust project intends to do just that, starting with this fantastic initiative.

Making Communication Accessible

Photo Credit: Tim Mossholder via Visualhunt

As part of the interview process we offered text-based focus groups and interviews in Telegram, and approximately 25% of people selected this option over video.

While we initially offered text-based interviews to overcome issues of bandwidth and connectivity, it was noticeable that many chose Telegram for other reasons. This method of communication, we found, allowed people to translate questions at their own pace, and gave those leaning towards introversion space to think and time to respond.

“More than half of the world is still without Internet, and even people who do have access may be limited by factors like high cost, unreliable connections or censorship. Digital Inclusion” — The Internet Health Report

A repeated theme was that the project standard of using Vidyo created struggle, or acted as an outright blocker to engagement, for reasons of bandwidth and technology compatibility. Additionally, media such as plenaries and important keynotes from All Hands lack captioning, which would benefit non-English speakers and those with hearing impairments.

Overall, connecting people to Mozilla’s mission, and to the opportunity to participate, is largely dependent on our ability and determination to make communication accessible and inclusive. Recommendations will be formed based on these findings, and we welcome your feedback and ideas for standards & best practices that can help us get there.

Our next post in this series ‘Frameworks for Incentive & Consequence in FOSS’, will be published on July 28th. Until then, check out the Internet Health Report on digital inclusion.

Cross-posted to Medium.


Eitan Isaacson: Firefox’s Accessibility Preferences

If you use Firefox Nightly, you may notice that there is no longer an Accessibility section in the preferences screen; this change will arrive in Firefox 56 as part of a preferences reorg. This is good news!

Screenshot of the new "Browsing" section, which includes scrolling options as well as search while you type and cursor keys navigation.

Cursor browsing and search-while-you-type are still available under the Browsing section, as these options offer convenience for everybody, regardless of disability. Users should now be able to find an option under the appropriate feature section, or search for it in the upper corner. This is a positive trend that I hope will continue as we imagine our users more broadly, with a diverse set of use-cases that include, but are not exclusive to, disability.

Thanks to everyone who made this happen!

Air Mozilla: Webdev Beer and Tell: July 2017

Once a month, web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Nicholas Nethercote: DMD is usable again on all Tier 1 platforms

DMD is a heap profiler built into Firefox, best known for being the tool used to diagnose the sources of “heap-unclassified” memory in about:memory.

It’s been unusable on Win32 for a long time due to incredibly slow start-up times. And recently it became very slow on Mac due to a performance regression in libunwind.

Fortunately I have been able to fix this in both cases (Win32, Mac) by using FramePointerStackWalk() instead of MozStackWalk() to do the stack tracing within DMD. (The Gecko Profiler likewise uses FramePointerStackWalk() on those two platforms, and it was my recent work on the profiler that taught me that there was an alternative stack walker available.)

So DMD should be usable and effective on all Tier 1 platforms. I have tested it on Win32, Win64, Linux64 and Mac. I haven’t tested it on Linux32 or Android. Please let me know if you encounter any problems.

Ehsan Akhgari: Quantum Flow Engineering Newsletter #16

It has been almost a month and a half since the last time that I talked about our progress in fighting sync IPC issues.  So I figured it’s time to prepare another Sync IPC Analysis report.  Again, unfortunately only the latest data is available in the spreadsheet.  But here are screenshots of the C++ and JS IPC message pie charts:

As you can see, as we have made even more progress in fixing sync IPC issues, the document.cookie issue now accounts for an even larger relative share of the pie, at 60%.  That is followed by some JS IPC, PAPZCTreeManager::Msg_ReceiveMouseInputEvent (a fast sync IPC message used by the async pan zoom component which would be hard to replace), followed by more JS IPC, followed by PContent::Msg_GetBlocklistState which was recently fixed, followed by PBrowser::Msg_NotifyIMEFocus, followed by more JS IPC and CPOW overhead before we get to the longer tail.  If you look at the JS sync IPC chart, you will see that almost all the overhead there is due to add-ons.  Hopefully none of this will be an issue after Firefox 57 with the new out-of-process WebExtensions for Windows users.  The only message in this chart stemming from our code that shows up in the data is contextmenu.

The rate of progress here has been really great to see, and this is thanks to the hard work of many people across many different teams.  Some of these issues have required heroic efforts to fix, and it’s really great to see this much progress made in so little time.

The development of Firefox 56 is rapidly coming to a close.  Firefox 57 branches off on Aug 2, and we have about 9 weeks from now until Firefox 57 rides the trains to beta.  So far, according to our burn-down chart, we have closed around 224 [qf:p1] bugs and have about 110 more to fix.  Fortunately Quantum Flow is not one of those projects that needs all of those bugs to be fixed, because we may not end up having enough time to fix them all for the final release, especially since we usually keep adding new bugs to the list in our weekly triage sessions.  Soon we will probably need to reassess the priority of some of these bugs as the eventual deadline approaches.

It is now time for me to acknowledge the great work of everyone who helped by contributing performance improvements over the past two weeks.  As usual, I hope I’m not forgetting any names!

Air Mozilla: Working Across Personality Types: The Introvert-Extrovert Survival Guide, with Jennifer Selby-Long

On July 20, Jennifer Selby Long, an expert in the ethical use of the Myers-Briggs Type Indicator® (MBTI®), will lead us in an interactive session...

Air Mozilla: Reps Weekly Meeting Jul. 20, 2017

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Hacks.Mozilla.Org: The Next Generation of Web Gaming

Over the last few years, Mozilla has worked closely with other browsers and the industry to advance the state of games on the Web. Together, we have enabled developers to deploy native code on the web, first via asm.js, and then with its successor WebAssembly. Now available in Firefox and Chrome, and coming soon to Edge and WebKit, WebAssembly enables near-native performance of code in the browser, which is great for game development, and has also shown benefits for WebVR applications. WebAssembly code delivers more predictable performance because JIT warm-up and garbage collection pauses are avoided. Its wide support across all major browser engines opens up paths to near-native speed, making it possible to build high-performing, plugin-free games on the web.

“In 2017 Kongregate saw a shift away from Flash with nearly 60% of new titles using HTML5,” said Emily Greer, co-founder and CEO of Kongregate.  “Developers were able to take advantage of improvements in HTML5 technologies and tools while consumers were able to enjoy games without the need for 3rd-party plugins.  As HTML5 continues to evolve it will enable developers to create even more advanced games that will benefit the millions of gamers on Kongregate.com and the greater, still thriving, web gaming industry.”

Kongregate’s data shows that on average, about 55% of uploaded games are HTML5 games.

And we can also see that these are high-quality games, with over 60% of HTML5 titles receiving a “great” score (better than a 4.0 out of 5 rating).

In spite of this positive trend, opportunities for improvement exist. The web is an ever-evolving platform, and developers are always looking for better performance. One major request we have often heard is for multithreading support on the web. SharedArrayBuffer is a required building block for multithreading, which enables concurrently sharing memory between multiple web workers. The specification is finished, and Firefox intends to ship SharedArrayBuffer support in Firefox 55.
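
To make the building block concrete, here is a minimal sketch in plain JavaScript (not from the article; in a real page the buffer would be transferred to a Web Worker via postMessage, with the two views below standing in for the two threads):

```javascript
// Allocate 16 bytes of memory that can be shared between threads,
// and view it as 32-bit integers.
const shared = new SharedArrayBuffer(16);
const view = new Int32Array(shared);

// A second view over the same buffer, as a worker would receive it.
const workerView = new Int32Array(shared);

// Atomics guarantees each update is indivisible even under real concurrency.
Atomics.add(view, 0, 1);
Atomics.add(workerView, 0, 41);

console.log(Atomics.load(view, 0)); // both updates are visible: 42
```

The key point is that both views alias the same memory, so no copying happens when "sharing" data, and Atomics provides the race-free read-modify-write operations multithreaded code needs.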

Another common request is for SIMD support. SIMD is short for Single Instruction, Multiple Data. It’s a way for a CPU to parallelize math instructions, offering significant performance improvements for math-heavy workloads such as 3D rendering and physics.

The WebAssembly Community Group is now focused on enabling hardware parallelism with SIMD and multithreading as the next major evolutionary steps for WebAssembly. Building on the momentum of shipping the first version of WebAssembly and continued collaboration, both of these new features should be stable and ready to ship in Firefox in early 2018.

Much work has gone into optimizing runtime performance over the last few years, and with that we learned many lessons. We have collected many of these learnings in a practical blog post about porting games from native to web, and look forward to your input on other areas for improvement. As multithreading support lands in 2018, expect to see opportunities to further invest in improving memory usage.

We again wish to extend our gratitude to the game developers, publishers, engine providers, and other browsers’ engine teams who have collaborated with us over the years. We could not have done it without your help — thank you!

Hacks.Mozilla.Org: WebAssembly for Native Games on the Web

The biggest improvement this year to web performance has been the introduction of WebAssembly. Now available in Firefox and Chrome, and coming soon in Edge and WebKit, WebAssembly enables the execution of code at a low assembly-like level in the browser.

Mozilla has worked closely with the games industry for several years to reach this stage: including milestones like the release of games built with Emscripten in 2013, the preview of Unreal Engine 4 running in Firefox (2014), bringing the Unity game engine to WebGL also in 2014, exporting an indie Unity game to WebVR in 2016, and most recently, the March release of Firefox 52 with WebAssembly.

WebAssembly builds on Mozilla’s original asm.js specification, which was created to serve as a plugin-free compilation target approach for applications and games on the web. This work has accumulated a great deal of knowledge at Mozilla specific to the process of porting games and graphics technologies. If you are an engineer working on games and this sounds interesting, read on to learn more about developing games in WebAssembly.

Where Does WebAssembly Fit In?

By now web developers have probably heard about WebAssembly’s promise of performance, but for developers who have not actually used it, let’s set some context for how it works with existing technologies and what is feasible. Lin Clark has written an excellent introduction to WebAssembly. The main point is that unlike JavaScript, which is generally written by hand, WebAssembly is a compilation target, just like native assembly. Except perhaps for small snippets of code, WebAssembly is not designed to be written by humans. Typically, you’d develop the application in a source language (e.g. C/C++) and then use a compiler (e.g. Emscripten), which transforms the source code to WebAssembly in a compilation step.

This means that existing JavaScript code is not what this model targets. If your application is written in JavaScript, it already runs natively in a web browser, and it is not possible to transform it to WebAssembly verbatim. What is possible in these types of applications, however, is to replace certain computationally intensive parts of your JavaScript with WebAssembly modules. For example, a web application might replace its JavaScript-implemented file decompression routine or a string regex routine with a WebAssembly module that does the same job, but with better performance. As another example, web pages written in JavaScript can use the Bullet physics engine compiled to WebAssembly to provide physics simulation.

Another important property: Individual WebAssembly instructions do not interleave seamlessly in between existing lines of JavaScript code; WebAssembly applications come in modules. These modules deal with low-level memory, whereas JavaScript operates on high-level object representations. This difference in structure means that data needs to undergo a transformation step—sometimes called marshalling—to convert between the two language representations. For primitive types, such as integers and floats, this step is very fast, but for more complex data types such as dictionaries or images, this can be time consuming. Therefore, replacing parts of a JavaScript application works best when applied to subroutines with large enough granularity to warrant replacement by a full WebAssembly module, so that frequent transitions between the language barriers are avoided.

As an example, in a 3D game written in three.js, one would not want to implement a small Matrix*Matrix multiplication algorithm alone in WebAssembly. The cost of marshalling a matrix data type into a WebAssembly module and then back would negate the speed performance that is gained in doing the operation in WebAssembly. Instead, to reach performance gains, one should look at implementing larger collections of computation in WebAssembly, such as image or file decompression.
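
To make the module boundary concrete, here is a sketch in plain JavaScript, with a tiny hand-assembled module standing in for real Emscripten output. Primitive i32 arguments like these cross the boundary cheaply; richer data types would first have to be copied into the module's linear memory:

```javascript
// A minimal WebAssembly binary exporting add(a, b) -> a + b (i32 arithmetic).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

const module = new WebAssembly.Module(bytes);
const { add } = new WebAssembly.Instance(module).exports;

console.log(add(2, 40)); // 42: integers marshal cheaply across the boundary
```

Each call crosses the JavaScript/WebAssembly boundary, which is why fine-grained calls like a single matrix multiply are a poor fit, while coarse-grained work such as decompressing a whole file amortizes the crossing cost.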

On the other end of the spectrum are applications that are implemented as fully in WebAssembly as possible. This minimizes the need to marshal large amounts of data across the language barrier, and most of the application is able to run inside the WebAssembly module. Native 3D game engines such as Unity and Unreal Engine implement this approach, where one can deploy a whole game to run in WebAssembly in the browser. This yields the best possible performance gain. However, WebAssembly is not a full replacement for JavaScript. Even if as much of the application as possible is implemented in WebAssembly, there are still parts that are implemented in JavaScript. WebAssembly code does not interact directly with the browser APIs that are familiar to web developers; your program must call out from WebAssembly to JavaScript to interact with the browser. It is possible that this behavior will change in the future as WebAssembly evolves.

Producing WebAssembly

The largest audience currently served by WebAssembly are native C/C++ developers, who are often positioned to write performance sensitive code. An open source community project supported by Mozilla, Emscripten is a GCC/Clang-compatible compiler toolchain that allows building WebAssembly applications on the web. The main scope of Emscripten is support for the C/C++ language family, but because Emscripten is powered by LLVM, it has potential to allow other languages to compile as well. If your game is developed in C/C++ and it targets OpenGL ES 2 or 3, an Emscripten-based port to the web can be a viable approach.

Mozilla has benefited from games industry feedback – this has been a driving force shaping the development of asm.js and WebAssembly. As a result of this collaboration, Unity3D, Unreal Engine 4 and other game engines are already able to deploy content to WebAssembly. This support takes place largely under the hood in the engine, and the aim has been to make this as transparent as possible to the application.

Considerations For Porting Your Native Game

For the game developer audience, WebAssembly represents an addition to an already long list of supported target platforms (Windows, Mac, Android, Xbox, Playstation, …), rather than being a new original platform to which projects are developed from scratch. Because of this, we’ve placed a great deal of focus on development and feature parity with respect to other existing platforms in the development of Emscripten, asm.js, and WebAssembly. This parity continues to improve, although on some occasions the offered features differ noticeably, most often due to web security concerns.

The remainder of this article focuses on the most important items that developers should be aware of when getting started with WebAssembly. Some of these are successfully hidden under an abstraction if you’re using an existing game engine, but native developers using Emscripten should most certainly be aware of the following topics.

Execution Model Considerations

Most fundamental are the differences where code execution and memory model are concerned.

  • Asm.js and WebAssembly use the concept of a typed array (a contiguous linear memory buffer) that represents the low level memory address space for the application. Developers specify an initial size for this heap, and the size of the heap can grow as the application needs more memory.
  • Virtually all web APIs operate using events and an event queue mechanism to provide notifications, e.g. for keyboard and mouse input, file IO and network events. These events are all asynchronous and delivered to event handler functions. There are no polling type APIs for synchronously asking the “browser OS” for events, such as those that native platforms often provide.
  • Web browsers execute web pages on the main thread of the browser. This property carries over to WebAssembly modules, which are also executed on the main thread, unless one explicitly creates a Web Worker and runs the code there. On the main thread it is not allowed to block execution for long periods of time, since that would also block the processing of the browser itself. For C/C++ code, this means that the main thread cannot synchronously run its own loop, but must tick simulation and animation forward based on an event callback, so that execution periodically yields control back to the browser. User-launched pthreads will not have this restriction, and they are allowed to run their own blocking main loops.
  • At the time of writing, WebAssembly does not yet have multithreading support – this capability is currently in development.
  • The web security model can be a bit more strict compared to other platforms. In particular, browser APIs constrain applications from gaining direct access to low-level information about the system hardware, to mitigate being able to generate strong fingerprints to identify users. For example, it is not possible to query information such as the CPU model, the local IP address, amount of RAM or amount of available hard disk space. Additionally, many web features operate on web domain boundaries, and information traveling across domains is configured by cross-origin access control rules.
  • A special programming technique that web security also prevents is the dynamic generation and mutation of code on the fly. It is possible to generate WebAssembly modules in the browser, but after loading, WebAssembly modules are immutable and functions can no longer be added to it or changed.
  • When porting C/C++ code, standard-compliant code should compile easily, but native compilers relax certain features on x86, such as unaligned memory accesses, overflowing float->int casts, and invoking function pointers via signatures that mismatch the actual type of the function. The ubiquity of x86 has made these kinds of nonstandard code patterns somewhat common in native code, but when compiling to asm.js or WebAssembly, these constructs can cause issues at runtime. Refer to the Emscripten documentation for more information about what kind of code is portable.
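
The first point above, the typed-array heap, can be seen directly from JavaScript. This is a standalone sketch, not Emscripten output: the heap is a WebAssembly.Memory whose buffer is an ordinary ArrayBuffer, and grow() extends it in 64 KiB pages.

```javascript
// One WebAssembly page is 64 KiB; start with 1 page, allow growth to 4.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 4 });

const before = memory.buffer.byteLength; // 1 page  = 65536 bytes
memory.grow(2);                          // request two more pages
const after = memory.buffer.byteLength;  // 3 pages = 196608 bytes

// Views must be recreated after grow(): the old buffer is detached.
const heap = new Uint8Array(memory.buffer);
heap[0] = 255;

console.log(before, after); // 65536 196608
```

Compilers like Emscripten lay out the application's stack, globals, and malloc heap inside this one linear buffer, which is why developers specify an initial heap size and optionally allow it to grow.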

Another source of differences comes from the fact that code on a web page cannot directly access a native filesystem on the host computer, and so the filesystem solution that is provided looks a bit different than native. Emscripten defines a virtual filesystem space inside the web page, which backs onto the IndexedDB API for persistence across page visits. Browsers also store downloaded data in navigation caches, which sometimes is desirable but other times less so.

Developers should be mindful in particular about content delivery. In native application stores the model of upfront downloading and installing a large application is an expected standard, but on the web, this type of monolithic deployment model can be an off-putting user experience. Applications can download and cache a large asset package at first run, but that can cause a sizable first-time download impact. Therefore, launching with minimal amount of downloading, and streaming additional asset data as needed can be critical for building a web-friendly user experience.

Toolchain Considerations

The first technical challenge for developers comes from adapting the existing build systems to target the Emscripten compiler. To make this easier, the compiler (emcc & em++) is designed to operate closely as a drop-in replacement for GCC or Clang. This eases migration of existing build systems that are already aware of GCC-like toolchains. Emscripten supports the popular CMake build system configuration generator, and emulates support for GNU Autotools configure scripts.

A point that is sometimes confused is that Emscripten is not an x86/ARM -> WebAssembly code transformation toolchain, but a cross-compiler. That is, Emscripten does not take existing native x86/ARM compiled code and transform it to run on the web; instead, it compiles C/C++ source code to WebAssembly. This means that you must have all the source available (or use libraries bundled with Emscripten or ported to it). Any code that depends on platform-specific (often closed source) native components, such as Win32 and Cocoa APIs, cannot be compiled, but will need to be ported to utilize other solutions.

Performance Considerations

One of the most frequently asked questions about asm.js/WebAssembly is whether it is fast enough for a particular purpose. Curiously, the developers who have not yet tried out WebAssembly are the ones who most often doubt its performance; developers who have tried it rarely mention performance as a major issue. There are some performance caveats, however, which developers should be aware of.

  • As mentioned earlier, multithreading is not available just yet, so applications that heavily depend on threads will not have the same performance available.
  • Another feature that is not yet available in WebAssembly, but planned, is SIMD instruction set support.
  • Certain instructions can be relatively slower in WebAssembly compared to native. For example, calling virtual functions or function pointers has a higher performance footprint due to sandboxing compared to native code. Likewise, exception handling is observed to cause a bigger performance impact compared to native platforms. The performance landscape can look a bit different, so paying attention to this when profiling can be helpful.
  • Web security validation is known to impact WebGL noticeably. It is recommended that applications using WebGL are careful to optimize their WebGL API calls, especially by avoiding redundant API calls, which still pay the cost for driver security validation.
  • Last, application memory usage is a particularly critical aspect to measure, especially if targeting mobile support as well. Preloading big asset packages on first run and uncompressing large amounts of audio assets are two known sources of memory bloat that are easy to do by accident. Applications will likely need to optimize specifically for this when porting, and this is an active area of optimization in WebAssembly and Emscripten runtime as well.
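
The WebGL point above can be illustrated with a tiny state-caching wrapper. This is a hand-written sketch, not a real library (makeStateCache and the mock context are invented for illustration): redundant calls are filtered out in JavaScript before they ever reach the driver and its security validation.

```javascript
// Wrap a GL-like object so repeated useProgram calls with the same
// program are dropped before hitting the (expensive, validated) API.
function makeStateCache(gl) {
  let currentProgram = null;
  let realCalls = 0;
  return {
    useProgram(p) {
      if (p === currentProgram) return; // redundant: skip the driver call
      currentProgram = p;
      realCalls++;
      gl.useProgram(p);
    },
    get realCalls() { return realCalls; },
  };
}

// A mock context standing in for a real WebGL context.
const mockGL = { useProgram() {} };
const cached = makeStateCache(mockGL);
cached.useProgram("A");
cached.useProgram("A"); // filtered out
cached.useProgram("B");
console.log(cached.realCalls); // 2
```

Real engines apply the same idea across much of the GL state (bound textures, blend modes, and so on), since every call that does reach the browser pays the validation cost regardless of whether it changes anything.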


WebAssembly provides support for executing low-level code on the web at high performance, similar to how web plugins used to, except that web security is enforced. For developers using some of the super-popular game engines, leveraging WebAssembly will be as easy as choosing a new export target in the project build menu, and this support is available today. For native C/C++ developers, the open source Emscripten toolchain offers a drop-in compatible way to target WebAssembly. There exists a lively community of developers around Emscripten who contribute to its development, and a mailing list for discussion that can help you get started. Games that run on the web are accessible to everyone independent of which computation platform they are on, without compromising portability, performance, or security, or requiring up-front installation steps.

WebAssembly is only one part of a larger collection of APIs that power web-based games, so navigate on to the MDN games section to see the big picture. Hop right on in, and happy Emscriptening!

The Mozilla Blog: Firefox Focus for Android Hits One Million Downloads! Today We’re Launching Three New User-Requested Features

Since the launch of Firefox Focus for Android less than a month ago, one million users have downloaded our fast, simple privacy browser app. Thank you for all your tremendous support for the Firefox Focus for Android app. This milestone marks a huge demand from users who want to be in the driver’s seat when it comes to their personal information and web browsing habits.

When we initially launched Firefox Focus for iOS last year, we did so based on our belief that everyone has a right to protect their privacy. We created the Firefox Focus for Android app to support all our mobile users and give them control over their online browsing habits across platforms.

Within a week of the Firefox Focus for Android launch, we received more than 8,000 comments, and the app is rated 4.5 stars. We’re floored by the response!

Feedback from Firefox Focus Users

“Awesome, the iconic privacy focused Firefox browser now is even more privacy and security focused.” 

“Excellent! It is indeed extremely lightweight and fast.” 

“This is the best browser to set as your “default”, hands down. Super fast and lightweight.”

 “Great for exactly what it’s built for, fast, secure, private and lightweight browsing. “

New Features

We’re always looking for ways to improve, and your comments help shape our products. We huddled together to decide what features we could quickly add, and we’re happy to announce the following new features less than a month after the initial launch:

  • Full Screen Videos: Your comments let us know that this was a top priority. We understand that if you’re going to watch videos on your phone, it’s only worth it if you can expand them to the full size of your screen. We added support for most video sites, with YouTube being the notable exception. YouTube support depends on a bug fix from Google, and we will roll it out as soon as this is fixed.
  • Supports Downloads: We use our mobile phones for entertainment – whether it’s listening to music, playing games, reading an ebook, or doing work. And sometimes that requires downloading a file. We updated the Firefox Focus app to support files of all kinds.
  • Updated Notification Actions: No longer solely for reminders to erase your history, notifications now feature a shortcut to open Firefox Focus. Finally, a quick and easy way to access private browsing.

We’re on a mission to make sure our products meet your needs. Responding to your feedback with quick, noticeable improvements is our way of saying thanks and letting you know, “Hey, we’re listening.”

You can download the latest version of Firefox Focus on Google Play and in the App Store. Stay tuned for additional feature updates over the coming months!


The post Firefox Focus for Android Hits One Million Downloads! Today We’re Launching Three New User-Requested Features appeared first on The Mozilla Blog.

The Mozilla Blog: Firefox for iOS Offers New and Improved Browsing Experience with Tabs, Night Mode and QR Code Reader

Here at Firefox, we’re always looking for ways for users to get the most out of their web experience. Today, we’re rolling out some improvements that will set the stage for what’s to come in the Fall with Project Quantum. Together these new features help to enhance your mobile browsing experience and make a difference in how you use Firefox for iOS.

What’s new in Firefox for iOS:

New Tab Experience

We polished our new tab experience and will be gradually rolling it out so you’ll see recently visited sites as well as highlights from previous web visits.

Night Mode

For the times when you’re in a dark room and the last thing you want to do is turn on your cellphone to check the time – we added Night Mode which dims the brightness of the screen and eases the strain on your eyes. Now, it’ll be easier to read and you won’t get caught checking your email.


QR Code Reader

Trying to limit the number of apps on your phone? We’ve eliminated the need to download a separate app for QR codes with a built-in QR code reader that lets you scan them quickly.

Feature Recommendations

Everyone loves shortcuts, and our Feature Recommendations will offer hints and timesavers to improve your overall Firefox experience. To start, this will be available in the US and Germany.

To experience the newest features and use the latest version of Firefox for iOS, download the update and let us know what you think.

We hope you enjoy it!


The post Firefox for iOS Offers New and Improved Browsing Experience with Tabs, Night Mode and QR Code Reader appeared first on The Mozilla Blog.

Soledad Penades: “*Utils” classes can be a code smell: an example

You might have heard that “*Utils” classes are a code smell.

Lots of people have written about that before, but I tend to find the reasoning a bit vague, and some of us work better with examples.

So here’s one I found recently while working on this bug: you can’t know what part of the Utils class is used when you require it, unless you do further investigation.

Case in point: if you place a method in VariousUtils.js and then import it later…

var { SomeFunction } = require('VariousUtils');

it’ll be very difficult to actually pinpoint when VariousUtils.SomeFunction was used in the code base. Because you could also do this:

var VariousUtils = require('VariousUtils');
var SomeFunction = VariousUtils.SomeFunction;

or this:

var SomeFunction = require('VariousUtils').SomeFunction;

or even something like…

var SomeFunction;
lazyRequire('VariousUtils').then((res) => {
  SomeFunction = res.SomeFunction;
});

Good luck trying to write a regular expression to search for all possible variations of non-evident ways to include SomeFunction in your codebase.

You want to be able to search for things easily because you might want to refactor later. Obvious requires make this (and other code manipulation tasks) easier.

My suggestion is: if you are importing just that one function, place it on its own file.

It makes things very evident:

var SomeFunction = require('SomeFunction');

And searching in files becomes very easy as well:

grep -lr "require('SomeFunction');" *

But I have many functions and it doesn’t make sense to have one function per file! I don’t want to load all of them individually when I need them!!!!111

Then find a common pattern and create a module which doesn’t have Utils in its name. Put the individual functions on a directory, and make a module that imports and exposes them.

For example, with an `equations` module and this directory structure:


You would still have to require('equations').linear or some other way of just requiring `linear` if that’s what you want (so the search is “complicated” again). But at least the module is cohesive, and it’s obvious what’s on it: equations. It would not be obvious if it had been called “MathUtils” — what kind of utilities is that? formulas? functions to normalise stuff? matrix kernels? constants? Who knows!

So: steer away from “assorted bag of tricks” modules because they’ll make you (or your colleagues) waste time (“what was in that module again?”), and you’ll eventually find yourself splitting them at some point, once they grow enough to not make any sense, with lots of mental context switching required to work on them: “ah, here’s this function for formatting text… now a function to generate UUIDs… and this one for making this low level system call… and… *brainsplosion*” 😬

An example that takes this decomposition into files to the “extreme” is lodash, which can then generate a number of different builds thanks to its extreme modularity.

Update: Another take: write code that is easy to delete. I love it!


The Rust Programming Language Blog: Announcing Rust 1.19

The Rust team is happy to announce the latest version of Rust, 1.19.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed, getting Rust 1.19 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.19.0 on GitHub.

What’s in 1.19.0 stable

Rust 1.19.0 has some long-awaited features, but first, a note for our Windows users. On Windows, Rust relies on link.exe for linking, which you can get via the “Microsoft Visual C++ Build Tools.” With the recent release of Visual Studio 2017, the directory structure for these tools has changed. As such, to use Rust, you had to stick with the 2015 tools or use a workaround (such as running vcvars.bat). In 1.19.0, rustc now knows how to find the 2017 tools, and so they work without a workaround.

On to new features! Rust 1.19.0 is the first release that supports unions:

union MyUnion {
    f1: u32,
    f2: f32,
}

Unions are kind of like enums, but they are “untagged”. Enums have a “tag” that stores which variant is the correct one at runtime; unions elide this tag.

Since we can interpret the data held in the union using the wrong variant, and Rust can’t check this for us, reading or writing a union’s field is unsafe:

let u = MyUnion { f1: 1 };

unsafe { u.f1 = 5 };

let value = unsafe { u.f1 };

Pattern matching works too:

fn f(u: MyUnion) {
    unsafe {
        match u {
            MyUnion { f1: 10 } => { println!("ten"); }
            MyUnion { f2 } => { println!("{}", f2); }
        }
    }
}

When are unions useful? One major use-case is interoperability with C. C APIs can (and depending on the area, often do) expose unions, and so this makes writing API wrappers for those libraries significantly easier. Additionally, from its RFC:

A native union mechanism would also simplify Rust implementations of space-efficient or cache-efficient structures relying on value representation, such as machine-word-sized unions using the least-significant bits of aligned pointers to distinguish cases.

This feature has been long awaited, and there are still more improvements to come. For now, unions can only include Copy types and may not implement Drop. We expect to lift these restrictions in the future.

As a side note, have you ever wondered how new features get added to Rust? This feature was suggested by Josh Triplett, and he gave a talk at RustConf 2016 about the process of getting unions into Rust. You should check it out!

In other news, loops can now break with a value:

// old code
let x;

loop {
    x = 7;
    break;
}

// new code
let x = loop { break 7; };

Rust has traditionally positioned itself as an “expression oriented language”, that is, most things are expressions that evaluate to a value, rather than statements. loop stuck out as strange in this way, as it was previously a statement.

What about other forms of loops? It’s not yet clear. See its RFC for some discussion around the open questions here.

A smaller feature, closures that do not capture an environment can now be coerced to a function pointer:

let f: fn(i32) -> i32 = |x| x + 1;

We now produce xz compressed tarballs and prefer them by default, making the data transfer smaller and faster. gzip’d tarballs are still produced in case you can’t use xz for some reason.

The compiler can now bootstrap on Android. We’ve long supported Android in various ways, and this continues to improve our support.

Finally, a compatibility note. Way back when we were running up to Rust 1.0, we did a huge push to verify everything that was being marked as stable and as unstable. We overlooked one thing, however: -Z flags. The -Z flag to the compiler enables unstable flags. Unlike the rest of our stability story, you could still use -Z on stable Rust. Back in April of 2016, in Rust 1.8, we made the use of -Z on stable or beta produce a warning. Over a year later, we’re fixing this hole in our stability story by disallowing -Z on stable and beta.

See the detailed release notes for more.

Library stabilizations

The largest new library feature is the eprint! and eprintln! macros. These work exactly the same as print! and println!, but write to standard error instead of standard output.

Other new features:

And some freshly-stabilized APIs:

See the detailed release notes for more.

Cargo features

Cargo mostly received small but valuable improvements in this release. The largest is possibly that Cargo no longer checks out a local working directory for the crates.io index. This should provide a smaller file size for the registry and improve cloning times, especially on Windows machines.

Other improvements:

See the detailed release notes for more.

Contributors to 1.19.0

Many people came together to create Rust 1.19. We couldn’t have done it without all of you. Thanks!

Air Mozilla: The Joy of Coding - Episode 106

mconley livehacks on real Firefox bugs while thinking aloud.

Hacks.Mozilla.Org: Creating a WebAssembly module instance with JavaScript

This is the 1st article in a 3-part series:

  1. Creating a WebAssembly module instance with JavaScript
  2. Memory in WebAssembly (and why it’s safer than you think)
  3. WebAssembly table imports… what are they?

WebAssembly is a new way of running code on the web. With it, you can write modules in languages like C or C++ and run them in the browser.

Currently modules can’t run on their own, though. This is expected to change as ES module support comes to browsers. Once that’s in place, WebAssembly modules will likely be loaded in the same way as other ES modules, e.g. using <script type="module">.

But for now, you need to use JavaScript to boot the WebAssembly module. This creates an instance of the module. Then your JavaScript code can call functions on that WebAssembly module instance.

For example, let’s look at how React would instantiate a WebAssembly module. (You can learn more in this video about how React could use WebAssembly.)

When the user loads the page, it would start in the same way.

The browser would download the JS file. In addition, a .wasm file would be fetched. That contains the WebAssembly code, which is binary.

Browser downloading a .js file and a .wasm file

We’ll need to load the code in these files in order to run it. First comes the .js file, which loads the JavaScript part of React. That JavaScript will then create an instance of a WebAssembly module… the reconciler.

To do that, it will call WebAssembly.instantiate.

React.js robot calling WebAssembly.instantiate

Let’s take a closer look at this.

The first thing we pass into WebAssembly.instantiate is going to be the binary code that we got in that .wasm file. That’s the module code.

So we extract the binary into a buffer, and then pass it in.

Binary code being passed in as the source parameter to WebAssembly.instantiate

The engine will start compiling the module code down to something that is specific to the machine that it’s running on.

But we don’t want to do this on the main thread. I’ve talked before about how the main thread is like a full stack developer because it handles JavaScript, the DOM, and layout. We don’t want to block the main thread while we compile the module. So what WebAssembly.instantiate returns is a promise.

Promise being returned as module compiles

This lets the main thread get back to its other work. The main thread knows that once the compiler is finished compiling this module, it will be notified by the promise. That promise will give it the instance.

But the compiled module is not the only thing needed to create the instance. I think of the module as kind of like an instruction book.

The instance is like a person who’s trying to make something with the instruction book. In order to make that thing, they also need raw materials. They need things that they can work with.

Instruction book next to WebAssembly robot

This is where the second parameter to WebAssembly.instantiate comes in. That is the imports object.

Arrow pointing to importObject param of WebAssembly.instantiate

I think of the imports object as a box of those raw materials, like you would get from IKEA. The instance uses these raw materials—these imports—to build a thing, as directed by the instructions. Just as an instruction manual expects a certain set of raw materials, each module expects a specific set of imports.

Imports box next to WebAssembly robot

So when you are instantiating a module, you pass it an imports object that has those imports attached to it. Each import can be one of these four kinds:

  • values
  • function closures
  • memory
  • tables


Values

It can have values, which are basically global variables. The only types that WebAssembly supports right now are integers and floats, so values have to be one of those two types. That will change as more types are added in the WebAssembly spec.

Function closures

It can also have function closures. This means you can pass in JavaScript functions, which WebAssembly can then call.

This is particularly useful because in the current version of WebAssembly, you can’t call DOM methods directly. Direct DOM access is on the WebAssembly roadmap, but not part of the spec yet.

What you can do in the meantime is pass in a JavaScript function that can interact with the DOM in the way you need. Then WebAssembly can just call that JS function.


Memory

Another kind of import is the memory object. This object makes it possible for WebAssembly code to emulate manual memory management. The concept of the memory object confuses people, so I’ve gone into a little bit more depth in another article, the next post in this series.


Tables

The final type of import is related to security as well. It’s called a table. It makes it possible for you to use something called function pointers. Again, this is kind of complicated, so I explain it in the third part of this series.

Those are the different kinds of imports that you can equip your instance with.

Different kinds of imports going into the imports box

When the instance is ready, the promise returned from WebAssembly.instantiate resolves. It contains two things: the instance and, separately, the compiled module.

The nice thing about having the compiled module is that you can spin up other instances of the same module quickly. All you do is pass the module in as the source parameter. The module itself doesn’t have any state (that’s all attached to the instance). That means that instances can share the compiled module code.

Your instance is now fully equipped and ready to go. It has its instruction manual, which is the compiled code, and all of its imports. You can now call its methods.

WebAssembly robot is booted

In the next two articles, we’ll dig deeper into the memory import and the table import.

Hacks.Mozilla.Org: Memory in WebAssembly (and why it’s safer than you think)

This is the 2nd article in a 3-part series:

  1. Creating a WebAssembly module instance with JavaScript
  2. Memory in WebAssembly (and why it’s safer than you think)
  3. WebAssembly table imports… what are they?

Memory in WebAssembly works a little differently than it does in JavaScript. With WebAssembly, you have direct access to the raw bytes… and that worries some people. But it’s actually safer than you might think.

What is the memory object?

When a WebAssembly module is instantiated, it needs a memory object. You can either create a new WebAssembly.Memory and pass that object in, or, if you don’t, a memory object will be created and attached to the instance automatically.

All the JS engine will do internally is create an ArrayBuffer (which I explain in another article). The ArrayBuffer is a JavaScript object that JS has a reference to. JS allocates the memory for you. You tell it how much memory you are going to need, and it will create an ArrayBuffer of that size.

React.js requesting a new memory object and JS engine creating one

The indexes to the array can be treated as though they were memory addresses. And if you need more memory later, you can do something called growing to make the array larger.

Handling WebAssembly’s memory as an ArrayBuffer — as an object in JavaScript — does two things:

  1. makes it easy to pass values between JS and WebAssembly
  2. helps make the memory management safe

Passing values between JS and WebAssembly

Because this is just a JavaScript object, that means that JavaScript can also dig around in the bytes of this memory. So in this way, WebAssembly and JavaScript can share memory and pass values back and forth.

Instead of using a memory address, they use an array index to access each box.

For example, the WebAssembly code could put a string in memory. It would encode it into bytes…

WebAssembly robot putting string "Hello" through decoder ring

…and then put those bytes in the array.

WebAssembly robot putting bytes into memory

Then it would return the first index, which is an integer, to JavaScript. So JavaScript can pull the bytes out and use them.

WebAssembly robot returning index of first byte in string

Now, most JavaScript doesn’t know how to work directly with bytes. So you’ll need something on the JavaScript side, like you do on the WebAssembly side, that can convert from bytes into more useful values like strings.

In some browsers, you can use the TextDecoder and TextEncoder APIs. Or you can add helper functions into your .js file. For example, a tool like Emscripten can add encoding and decoding helpers.

JS engine pulling out bytes, and React.js decoding them

So that’s the first benefit of WebAssembly memory just being a JS object. WebAssembly and JavaScript can pass values back and forth directly through memory.

Making memory access safer

There’s another benefit that comes from this WebAssembly memory just being a JavaScript object: safety. It makes things safer by helping to prevent browser-level memory leaks and providing memory isolation.

Memory leaks

As I mentioned in the article on memory management, when you manage your own memory you may forget to clear it out. This can cause the system to run out of memory.

If a WebAssembly module instance had direct access to memory, and if it forgot to clear out that memory before it went out of scope, then the browser could leak memory.

But because the memory object is just a JavaScript object, it itself is tracked by the garbage collector (even though its contents are not).

That means that when the WebAssembly instance that the memory object is attached to goes out of scope, this whole memory array can just be garbage collected.

Garbage collector cleaning up memory object

Memory isolation

When people hear that WebAssembly gives you direct access to memory, it can make them a little nervous. They think that a malicious WebAssembly module could go in and dig around in memory it shouldn’t be able to. But that isn’t the case.

The bounds of the ArrayBuffer provide a boundary. It’s a limit to what memory the WebAssembly module can touch directly.

Red arrows pointing to the boundaries of the memory object

It can directly touch the bytes that are inside of this array but it can’t see anything that’s outside the bounds of this array.

For example, any other JS objects that are in memory, like the window global, aren’t accessible to WebAssembly. That’s really important for security.

Whenever there’s a load or a store in WebAssembly, the engine does an array bounds check to make sure that the address is inside the WebAssembly instance’s memory.

If the code tries to access an out-of-bounds address, the engine will throw an exception. This protects the rest of the memory.

WebAssembly trying to store out of bounds and being rejected

So that’s the memory import. In the next article, we’ll look at another kind of import that makes things safer… the table import.

Hacks.Mozilla.Org: WebAssembly table imports… what are they?

This is the 3rd article in a 3-part series:

  1. Creating a WebAssembly module instance with JavaScript
  2. Memory in WebAssembly (and why it’s safer than you think)
  3. WebAssembly table imports… what are they?

In the first article, I introduced the four different kinds of imports that a WebAssembly module instance can have:

  • values
  • function closures
  • memory
  • tables

That last one is probably a little unfamiliar. What is a table import and what is it used for?

Sometimes in a program you want to be able to have a variable that points to a function, like a callback. Then you can do things like pass it into another function.

Defining a callback and passing it into a function

In C, these are called function pointers. The function lives in memory. The variable, the function pointer, just points to that memory address.

Function pointer at memory address 4 points to the callback at memory address 1

And if you need to, later you could point the variable to a different function. This should be a familiar concept.

Function pointer at memory address 4 changes to point to callback2 at memory address 4

In web pages, all functions are just JavaScript objects. And because they’re JavaScript objects, they live in memory addresses that are outside of WebAssembly’s memory.

JS function living in JS managed memory

If we want to have a variable that points to one of these functions, we need to take its address and put it into our memory.

Function pointer in WebAssembly memory pointing to function

But part of keeping web pages secure is keeping those memory addresses hidden. You don’t want code on the page to be able to see or manipulate that memory address. If there’s malicious code on the page, it can use that knowledge of where things are laid out in memory to create an exploit.

For example, it could change the memory address that you have in there, to point to a different memory location.

Then when you try to call the function, you would instead load whatever is at the memory address the attacker gave you.

Malicious actor changing the address in WebAssembly memory to point to malicious code

That could be malicious code that was inserted into memory somehow, maybe embedded inside of a string.

Tables make it possible to have function pointers, but in a way that isn’t vulnerable to these kinds of attacks.

A table is an array that lives outside of WebAssembly’s memory. The values are references to functions.

Another region of memory is added, distinct from WebAssembly memory, which contains the function pointer

Internally, these references contain memory addresses, but because it’s not inside WebAssembly’s memory, WebAssembly can’t see those addresses.

It does have access to the array indexes, though.

All memory outside of the WebAssembly memory object is obfuscated

If the WebAssembly module wants to call one of these functions, it passes the index to an operation called call_indirect. That will call the function.

call_indirect points to the first element of the obfuscated array, which in turn points to the function

Right now the use case for tables is pretty limited. They were added to the spec specifically to support these function pointers, because C and C++ rely pretty heavily on these function pointers.

Because of this, the only kinds of references that you can currently put in a table are references to functions. But as the capabilities of WebAssembly expand—for example, when direct access to the DOM is added—you’ll likely see other kinds of references being stored in tables and other operations on tables in addition to call_indirect.

Andreas Gal: Firefox marketshare revisited

Why building a better browser doesn’t translate to a better marketshare

I posted a couple of weeks ago about Chrome effectively having won the browser wars. The market share observations in that post were based on data provided by StatCounter. Several commenters criticized the StatCounter data as inaccurate, so I decided to take a look at the raw installation data Mozilla publishes to see whether it aligns with the StatCounter data.

Active Firefox Installs

Mozilla’s public data shows that the number of active Firefox Desktop installs running the most recent version of Firefox has been declining for several years. Based on this data, 22% fewer Firefox Desktop installations are active today than a year ago. This is a loss of 16 million Firefox installs in a year. The year over year decline used to be below 10% but accelerated to 14% in 2016. It returned to a more modest 10% year over year loss late 2016, which could be the result of a successful marketing campaign (Mozilla’s biggest marketing campaigns are often in the fall). That effect was temporary as the accelerating decline this year shows (Philipp suggests that the two recent drops could be the result of support for older machines and Windows versions being removed and those users continuing to use previous versions of Firefox, see comments).

Year over Year Firefox Active Daily Installs (Desktop). The Y axis is not zero-based. Click on the graph to enlarge.

Obtaining the data

Mozilla publishes aggregated Firefox usage data in the form of Active Daily Installs (ADIs) here (Update: it looks like the site requires a login now; it had been publicly available for years, until a few days ago). The site is a bit clumsy and you can look at individual days only, so I wrote some code to fetch the data for the last 3 years so it’s easier to analyze (link). The raw ADI data is pretty noisy, as you can see here:

Desktop Firefox Daily Active Installs. The Y axis is not zero-based. Click on the graph to enlarge.

During any given week the ADI number can vary substantially. For the last week the peak was around 80 million users and the low was around 53 million users. To understand why the data is so variable it’s necessary to understand how Active Daily Installs are calculated.

Firefox tries to contact Mozilla once a day to check for security updates. This is called the “updater ping”. The ADI number is the aggregate number of these pings that were seen on a given day and can be understood as the number of running Firefox installs on that day.

The main reason that ADI data is so noisy is that work machines are switched off on the weekend. Those Firefox installs don’t check in over the weekend, so the ADI number drops significantly. This also explains why ADIs don’t map 1:1 to Active Daily Users (ADUs). A user may be briefly active on a given day but then switch off the machine before Firefox has had a chance to phone home; the ADI count can miss this user. Conversely, Firefox may be counted as active on a day when the user actually wasn’t. Mozilla has a disclaimer on the site that publishes ADI data to point out that it is imprecise, and from data I have seen, actual Active Daily Users are about 10% higher than ADIs, but this is just a ballpark estimate.

The graphs above also only look at the most recent version of Firefox. A subset of users are often stranded on older versions of Firefox. This subset tends to be relatively small, since Mozilla is doing a good job these days of converting as many users as possible to the most recent/most secure/most performant version of Firefox.

The first graph in this post was obtained by sliding a 90 day window over the data and comparing, for each 90 day window, the total number of active daily installs to the same 90 day window a year prior. This helps eliminate some of the variability in the data and shows a clearer trend. In addition to weekly swings there is also strong seasonality. College students being on break and people spending time with family over Christmas are some of the effects you can see in the raw data that the sliding window mechanism can filter to a degree.
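The windowed comparison is simple to sketch; assuming `adi` is an array of daily counts ordered by date, something like this (illustrative, not the actual analysis code from the linked repository):

```javascript
// Sum a fixed-length window of daily counts starting at `start`.
function windowSum(series, start, days) {
  let sum = 0;
  for (let i = start; i < start + days; i++) sum += series[i];
  return sum;
}

// Compare each 90-day window to the same window one year (365 days) earlier,
// returning the ratio current/prior for every valid window position.
function yearOverYear(series, windowDays = 90, yearDays = 365) {
  const ratios = [];
  for (let start = yearDays; start + windowDays <= series.length; start++) {
    const current = windowSum(series, start, windowDays);
    const prior = windowSum(series, start - yearDays, windowDays);
    ratios.push(current / prior);
  }
  return ratios;
}
```

Averaging over 90 days smooths out both the weekend dips and most of the seasonal spikes, which is why the trend line looks so much cleaner than the raw data.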

If you want to explore the data yourself and see what effect shorter window parameters have you can use this link. If you see any mistakes or have ideas how to improve the visualization please send a pull request.

What about Mobile?

Mozilla doesn’t publish ADI data for Firefox for iOS and Firefox Focus, but since none of these appear in any browser statistics it means their market share is probably very small. ADI data is available for Firefox for Android and that graph looks quite a bit different from Desktop:

Firefox for Android Active Daily Installs. The Y axis is not zero-based. Click on the graph to enlarge.

Firefox for Android has been growing pretty consistently over the last few years. There is a big drop in early 2015 which is likely when Mozilla stopped support for very old versions of Android. The otherwise pretty consistent albeit slow growth seems to have stopped this year but it’s too early still to tell whether this trend will hold.

As you can see, ADI data for mobile is not as noisy as desktop. This makes sense because people are much less likely to switch off their phones than their PCs.


A lot of commenters asked why Firefox marketshare is falling off a cliff. I think that question can be best answered with a few screenshots Mozilla engineer Chris Lord posted:

Google is aggressively using its monopoly position in Internet services such as Google Mail, Google Calendar and YouTube to advertise Chrome. Browsers are a mature product and it’s hard to compete in a mature market if your main competitor has access to billions of dollars worth of free marketing.

Google’s incentives here are pretty clear. The Desktop market is not growing much any more, so Google can’t acquire new users easily which threatens Google’s revenue growth. Instead, Google is going after Firefox and other browsers to grow. Chrome allows Google to lock in a user and make sure that that user heads to Google services first. No wonder Google is so aggressively converting everyone to Chrome, especially if the marketing for that is essentially free to them.

This explains why the market share decline of Firefox has accelerated so dramatically over the last 12 months despite Firefox getting much better during the same time window. The Firefox engineering team at Mozilla has made amazing improvements to Firefox and is rebuilding many parts of Firefox with state of the art technology based on Mozilla’s futuristic rendering engine Servo. Firefox is today as good as Chrome in most ways, and better in some (memory use, for example). However, this simply doesn’t matter in this market.

Firefox’s decline is not an engineering problem. It’s a market disruption (the Desktop-to-Mobile shift) and a monopoly problem. There are no engineering solutions to these market problems. The only way to escape is to pivot to a different market (Firefox OS tried to pivot Mozilla into mobile), or create a new market. The latter is what Brendan’s Brave is all about: be the browser for a better, less ad-infested Web instead of a traditional Desktop browser.

What makes today very different from the founding days of Mozilla is that Google isn’t neglecting Chrome and the Web the way Microsoft did during the Internet Explorer 6 days. Google continues to advance Chrome and the Web at breakneck pace. Despite this silver lining it is still very concerning that we are headed towards a Web monoculture dominated by Chrome.

What about Mozilla?

Mozilla helped the Web win but Firefox is now losing an unwinnable marketing fight against Google. This does not mean Firefox is not a great browser. Firefox is losing despite being a great browser, and getting better all the time. Firefox is simply the victim of Google’s need to increase profit in a relatively stagnant market. And it’s also important to note that while Firefox Desktop is probably headed for extinction over the next couple years, today it’s still a product used by some 90 million people, and still generating significant revenue for Mozilla for some time.

While I no longer work for Mozilla and no longer have insight into their future plans, I firmly believe that the decline of Firefox won’t necessarily mean the decline of Mozilla. There is a lot of important work beyond Firefox that Mozilla can do and is doing for the Web. Mozilla’s Rust programming language has crossed into the mainstream and is growing steadily and Rust might become Mozilla’s second most lasting contribution to the world.

Mozilla’s engineering team is also building a futuristic rendering engine Servo which is a fascinating piece of technology. If you are interested in the internals of a modern rendering engine, you should definitely take a look. Finding a relevant product to use Servo in will be a challenge, but that doesn’t diminish Servo’s role in pushing the envelope of how fast the Web can be.

And, last but not least, Mozilla is also still actively engaged in Web standards (WebAssembly and WebVR for example), even though it has to rely more on good will than market might these days. The battle for the open web is far from over.

Filed under: Mozilla

Firefox NightlyThese Weeks in Firefox: Issue 20


Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

Project Updates


Activity Stream

Electrolysis (e10s)

Firefox Core Engineering

  • We have a sample of top crashers (by signature) from FF53 release crash pings (not reports), for 5/19-5/25, broken down by process type. Some interesting things there, sent to the stability@ list for further investigation.
  • Updates to 64-bit begin in FF56 (stub installer introduced this in FF55).
  • About to land: LZMA compression and SHA384 support for update downloads for FF56, reducing the size of the download and improving its security.

Form Autofill


  • We’re working hard to ship Android Activity Stream in Firefox for Android 57!
  • We’ve got a working draft of an open-source Android library that will allow you to log into your Firefox Sync account and download your bookmarks, history, and passwords. Check it out here.
  • Firefox Focus for Android v1.0 shipped one week before the all hands and v1.1 will be coming shortly, featuring full screen videos! Here’s the full v1.1 bug list
  • We are working on a UI refresh in Firefox for Android 57 to align with Firefox Desktop! Follow along in this bug.
  • We are also planning to phase out the support of Android 4.0 (API 15). Hoping to do this in Fennec 56. Here’s the tracking bug.


  • Built the prototype for adding the ability for the user to pin frequently-used items from the Page Action menu into the URL bar. This work adds a context menu to items in the action menu to control this. The prototype also added Page Action menu entries for Pocket and Screenshots (and as a next step, their existing buttons in the navbar will be removed). Eventually there will be a WebExtensions API so that add-ons can extend this menu (but that work may not make 57).

    The context menu for the page action menu to pin actions to the URL bar

    Coming soon!

  • The bookmark star has moved into the URL bar. This (as with Pocket and Screenshots, mentioned above) is part of our work to consolidate actions you perform with the page into the Page Action menu.
  • The sidebar button is now in the toolbar by default. This gives easy one-click access to toggle the sidebar.
  • Customize Mode got a few updates. Its general style has been refreshed for Photon, and we’ve removed the “grid” style around the edges and shrinking-animation when opened. Also, the info panel that’s shown the first time a user enters customization mode (which helps explain that you can drag’n’drop items to move them around) has been replaced with a Photon critter – the Dragondrop. I hope you can appreciate this delightfully terrible pun. 😉

    The visual pun shown in the overflow menu when it is empty.

    Dragondrop! Get it?! Ba-dum-tish!

  • The Library panel will now show Bookmarks and Downloads. (Bookmarks are already in Nightly, Downloads was built during the week but needs more work before landing).
  • We also fixed a number of random polish bugs here and there. “Polish” bugs are changes that are not implementing new features, but are just fixing smaller issues with new or existing features. We’ll be seeing an increasing amount of these as we get closer to shipping, and focus on improving polish and quality overall.

Search and Navigation

Sync / Firefox Accounts

Test Pilot (written only)

  • Page Shot, Activity Stream, Tab center, Pulse all graduated from Test Pilot
  • New experiments coming next week
  • We started a new blog to publish experiment results.  Watch https://medium.com/firefox-test-pilot for new posts soon

Web Payments

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Cameron KaiserTalos take II

First, ob-TenFourFox stuff. As the wonderful Dutch progressive rock band Focus plays "Sylvia" in the CD player, I'm typing this in a partially patched up build of FPR2, which has a number of further optimizations including an AltiVec-accelerated memchr() implementation (this improves JavaScript regex matching by about 15 percent, but also beefs up some other parts of the browser which call the same library function) and some additional performance backports ripped off from Mozilla's Quantum project. This version also has a re-tuned G5 build with some tweaked compiler settings to better match the 970 cache line size, picking up some small but measurable improvements on Acid3 and other tests. Even the G3 gets some love: while it obviously can't use the AltiVec memchr(), it now uses a better unrolled character matcher instead and picks up a few percentage points that way. I hope to finish the security patch work by this weekend, though I am still annoyed to note I cannot figure out what's up with issue 72.

Many of you will remember the Raptor Talos, an attempt to bring a big beefy POWER8 to the desktop that sadly did not meet its crowdfunding goal. Well, I'm gratified to note that Raptor is trying again with a smaller scale system but a bigger CPU: the POWER9-based Talos II. You want a Power-based, free and open non-x86 alternative that can kick Intel's @$$? Then you can get one of these and not have to give up performance or processing (eheheh) power. The systems will use the "scale-out" dual socket POWER9 with DDR4 RAM and while the number of maximum supported cores on Talos II has not yet been announced, I'll just say that POWER9 systems can go up to 24 cores and we'll leave it at that. With five PCIe slots, you can stick a couple cool video cards in there too and rock out. It runs ppc64le Linux, just like the original Talos.

I'm not (just) interested in a thoroughly modern RISC workstation, though: I said before I wanted Talos to be the best way to move forward from the Power Mac, and I mean it. I'm working on tuning up Firefox for POWER8 with optimizations that should carry to POWER9, and once that builds, beefing the browser up further with a new 64-bit Power ISA JavaScript JIT with what we've learned from TenFourFox's 32-bit implementation. I'd also like to optimize QEMU for the purpose of being able to still run instances of OS 9 and PowerPC OS X in emulation at as high performance on the Talos II as possible so you can bring along your legacy applications and software. When pre-orders open up in August -- yes, next month! -- I'm going to be there with my hard-earned pennies and you'll hear about my experiences with it here first.

But don't worry: the G5 is still going to be under my desk for awhile even after the Talos II arrives, and there's still going to be improvements to TenFourFox for the foreseeable future because I'll still be using it personally for the foreseeable future. PowerPC forever.

Mozilla Security BlogA Security Audit of Firefox Accounts

To provide transparency into our ongoing efforts to protect your privacy and security on the Internet, we are releasing a security audit of Firefox Accounts (FxA) that Cure53 conducted last fall. At Mozilla, we sponsor security audits of core open source software underpinning the Web and Internet, recently relaunched our web bug bounty program, find and fix vulnerabilities ourselves, and open source our code for anyone to review. Despite being available to more reviewers, open source software is not necessarily reviewed more thoroughly or frequently than closed source software, and the extra attention from third party reviewers can find outstanding issues and vulnerabilities. To augment our other initiatives and improve the overall security of our web services, we engage third party organizations to audit the security and review the code of specific services.

As Firefox’s central authentication service FxA is a natural first target. Its security is critical to millions of users who rely on it to authenticate with our most sensitive services, such as addons.mozilla.org and Sync. Cure53 ran a comprehensive security audit that encompassed the web services powering FxA and the cryptographic protocol used to protect user accounts and data. They identified 15 issues, none of which were exploited or put user data at risk.

We thank Cure53 for reviewing FxA and increasing our trust in the backbone of Firefox’s identity system. The audit is a step toward providing higher quality and more secure services to our users, which we will continue to improve through our various security initiatives. In the rest of this blog post, we discuss the technical details of the four highest severity issues. The report is available here and you can sign up or log into Firefox Accounts on your desktop or mobile device at: https://accounts.firefox.com/signup


FXA-01-001 HTML injection via unsanitized FxA relier Name

The one issue Cure53 ranked as critical, FXA-01-001 HTML injection via unsanitized FxA relier Name, resulted from displaying the name of a relier without HTML escaping on the relier registration page. This issue was not exploitable from outside Mozilla, because the endpoint for registering new reliers is not open to the public. A strict Content Security Policy (CSP) blocked most Cross-Site-Scripting (XSS) on the page, but an attacker could still exfiltrate sensitive authentication data via scriptless attacks and deface or repurpose the page for phishing. To fix the vulnerability soon after Cure53 reported it to us, we updated the template language to escape all variables and use an explicit naming convention for unescaped variables. Third party relier names are now sanitized and escaped.
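The escape-by-default approach can be sketched like this (a hypothetical `escapeHtml` helper, not FxA's actual template code):

```javascript
// Escape the five characters that are significant in HTML so that a
// relier-supplied name renders as inert text rather than markup.
// Order matters: '&' must be escaped first, or the later entities
// would themselves get double-escaped.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

With escaping as the default, unescaped output becomes the exception that requires an explicit, greppable naming convention, which is exactly the property the fix above relies on.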

FXA-01-004 XSS via unsanitized Output on JSON Endpoints

The first of three issues ranked high, FXA-01-004 XSS via unsanitized Output on JSON Endpoints, affected legacy browsers handling JSON endpoints with user controlled fields in the beginning of the response. For responses like the following:

{
        "id": "81730c8682f1efa5",
        "name": "<img src=x onerror=alert(1)>",
        "trusted": false,
        "image_uri": "",
        "redirect_uri": "javascript:alert(1)"
}

an attacker could set the name or redirect_uri such that legacy browsers sniff the initial bytes of a response, incorrectly guess the MIME type as HTML instead of JSON, and execute user defined scripts.  We added the HTTP header X-Content-Type-Options: nosniff (XCTO) to disable MIME type sniffing, and wrote middleware and patches for the web frameworks to unicode escape <, >, and & characters in JSON responses.
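The escaping half of the fix can be sketched as a small transform over the serialized body (illustrative; the real fix lives in middleware and patches for the FxA web frameworks):

```javascript
// Unicode-escape the characters that could make a sniffing browser
// mistake an early-user-controlled JSON body for HTML. The escaped
// string is still valid JSON and parses back to the same value.
function escapeJson(jsonString) {
  return jsonString
    .replace(/&/g, '\\u0026')
    .replace(/</g, '\\u003c')
    .replace(/>/g, '\\u003e');
}

const body = JSON.stringify({ name: '<img src=x onerror=alert(1)>' });
const safe = escapeJson(body);
// Alongside this, the response would carry:
// res.setHeader('X-Content-Type-Options', 'nosniff');
```

Because `\u003c` inside a JSON string literal decodes back to `<`, clients that parse the response properly see no difference; only byte-level sniffers are defeated.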

FXA-01-014 Weak client-side Key Stretching

The second issue with a high severity ranking, FXA-01-014 Weak client-side Key Stretching, is “a tradeoff between security and efficiency”. The onepw protocol threat model includes an adversary capable of breaking or bypassing TLS. Consequently, we run 1,000 iterations of PBKDF2 on user devices to avoid sending passwords directly to the server, which runs a further 2^16 (65,536) scrypt iterations on the PBKDF2-stretched password before storing it. Cure53 recommended storing PBKDF2 passwords with a higher work factor of roughly 256,000 iterations, but concluded “an exact recommendation on the number of iterations cannot be supplied in this instance”. To keep performance acceptable on less powerful devices, we have not increased the work factor yet.

FXA-01-010 Possible RCE if Application is run in a malicious Path

The final high severity issue, FXA-01-010 Possible RCE if Application is run in a malicious Path, affected people running FxA web servers from insecure paths in development mode. The servers exposed an endpoint that executes shell commands to determine the release version and git commit they’re running in development mode. For example, the command below returns the current git commit:

var gitDir = path.resolve(__dirname, '..', '..', '.git')
var cmd = util.format('git --git-dir=%s rev-parse HEAD', gitDir)
exec(cmd, …)

Cure53 noted that malicious commands like rm -rf * embedded in the __dirname path would be executed, and recommended filtering and quoting parameters. We modified the script to use the cwd option and avoid filtering the parameter entirely:

var cmd = 'git rev-parse HEAD'
exec(cmd, { env: { GIT_CONFIG: gitDir } } ...)

Mozilla does not run servers from insecure paths, but some users host their own FxA services and it is always good to consider malicious input from all sources.


We reviewed the higher ranked issues from the report, circumstances limiting their impact, and how we fixed and addressed them. We invite you to contribute to developing Firefox Accounts and report security issues through our bug bounty program as we continue to improve the security of Firefox Accounts and other core services.

The post A Security Audit of Firefox Accounts appeared first on Mozilla Security Blog.

Air MozillaIntern Presentations: Round 1: Tuesday, July 18th

Intern Presentations, Round 1: four presenters. Time: 1:00PM - 2:00PM (PDT); each presenter starts every 15 minutes. 2 in MTV, 2 in TOR.

Georg FritzscheFirefox data platform & tools update, Q2 2017

Beta “main” ping submission delay analysis by :chutten.

The data platform and tools teams are working on our core Telemetry system, the data pipeline, providing core datasets and maintaining some central data viewing tools.

To make new work more visible, we provide quarterly updates.

What’s new in the last few months?

A lot of work in the last months was on reducing latency, supporting experimentation and providing a more reliable experience of the data platform.

On the data collection side, we have significantly improved reporting latency from Firefox 55, with preliminary results from Beta showing we receive 95% of the “main” ping within 8 hours (compared to previously over 90 hours). Curious for more detail? #1 and #2 should have you covered.

We also added a “new-profile” ping, which gives a clear and timely signal for new clients.

There is a new API to record active experiments in Firefox. This allows annotating experiments or interesting populations in a standard way.

The record_in_processes field is now required for all histograms. This removes ambiguity about which process they are recorded in.

The data documentation moved to a new home: docs.telemetry.mozilla.org. Are there gaps in the documentation you want to see filled? Let us know by filing a bug.

For datasets, we added telemetry_new_profile_parquet, which makes the data from the “new-profile” ping available.

Additionally, the main_summary dataset now includes all scalars and uses a whitelist for histograms, making it easy to add them. Important fields like active_ticks and Quantum release criteria were also added and backfilled.

For custom analysis on ATMO, cluster lifetimes can now be extended self-serve in the UI. Scheduled job stability also saw major improvements.

There were first steps towards supporting Zeppelin notebooks better; they can now be rendered as Markdown in Python.

The data tools work is focused on making our data available in a more accessible way. Here, our main tool Redash saw multiple improvements.

Large queries should no longer show the slow script dialog and scheduled queries can now have an expiration date. Finally, a new Athena data source was introduced, which contains a subset of our Telemetry-based derived datasets. This brings huge performance and stability improvements over Presto.

What is up next?

For the next few months, interesting projects in the pipeline include:

  • The experiments viewer & pipeline, which will make it much easier to run pref-flipping experiments in Firefox.
  • Recording new probes from add-ons into the main ping (events, scalars, histograms).
  • We are working on defining and monitoring basic guarantees for the Telemetry client data (like reporting latency ranges).
  • A re-design of about:telemetry is currently on-going, with more improvements on the way.
  • A first version of Mission Control will be available, a tool for more real-time release monitoring.
  • Analyzing the results of the Telemetry survey (thanks everyone!) to inform our planning.
  • Extending the main_summary dataset to include all histograms.
  • Adding a pre-release longitudinal dataset, which will include all measures on those channels.
  • Looking into additional options to decrease the Firefox data reporting latency.

How to contact us.

Please reach out to us with any questions or concerns.

Firefox data platform & tools update, Q2 2017 was originally published in Georg Fritzsche on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Mozilla BlogMozilla Announces “Net Positive: Internet Health Shorts” – A Film Screening About Society’s Relationship With The Internet

Mozilla, the non-profit behind the Firefox browser, is excited to support Rooftop Films in bringing a memorable evening of film and discussion to The Courtyard of Industry City, in beautiful Brooklyn, New York on Saturday, July 29 starting at 8 PM ET. As a part of Rooftop Films Annual Summer Series, Hitrecord will premiere a film produced by Joseph Gordon-Levitt about staying safe online.

Mozilla believes the Internet is the most fantastically fun, awe-inspiring place we’ve ever built together. It’s where we explore new territory, build innovative products and services, swap stories, get inspired, and find our way in the world. It was built with the intention that everyone is welcome.

Right now, however, we’re at a tipping point. Big corporations want to privatize our largest public resource. Fake news and filter bubbles are making it harder for us to find our way. Online bullies are silencing inspired voices. And our desire to explore is hampered by threats to our safety and privacy.

“The Internet is a vast, vibrant ecosystem,” said Jascha Kaykas-Wolff, Mozilla’s Chief Marketing Officer. “But like any ecosystem, it’s also fragile. If we want the Internet to thrive as a diverse, open and safe place where all voices are welcome, it’s going to take committed citizens standing tall to protect it. Mozilla is proud to support the artists and filmmakers who are raising awareness for Internet health through creativity and storytelling.”

Dan Nuxoll, Program Director at Rooftop Films said, “In such a pivotal year for the Internet, we are excited to be working with Mozilla in support of films that highlight with such great detail our relationship with the web. As a non-profit, we are thrilled to be collaborating with another non-profit in support of consumer education and awareness about issues that matter most.”

Joseph Gordon-Levitt, actor and filmmaker said, “Mozilla is really a great organization, it’s all about keeping the Internet free, open and neutral — ideas very near and dear to my heart. I was flattered when Mozilla knocked on hitRECord’s door and asked us to collaborate.”

Join us as we explore, through short films, what’s helping and what’s hurting the Web. We are calling the event, “Net Positive: Internet Health Shorts.” People can register now to secure a spot.

Featured Films:
Harvest – Kevin Byrnes
Hyper Reality – Keiichi Matsuda
I Know You From Somewhere – Andrew Fitzgerald
It Should Be Easy – Ben Meinhardt
Lovestreams – Sean Buckelew
Project X – Henrik Moltke and Laura Poitras
Too Much Information – Joseph Gordon Levitt & hitRECord
Price of Certainty – Daniele Anastasion
Pizza Surveillance – Micah Laaker

Saturday, July 29
Venue: The Courtyard of Industry City
Address: 274 36th Street (Sunset Park, Brooklyn)
8:00 PM: Doors Open
8:30 PM: Live Music
9:00 PM: Films Begin
10:30 PM: Post-Screening Discussion with Filmmakers
11:00 PM: After-party sponsored by Corona Extra, Tanqueray, Freixenet, DeLeón Tequila, and Fever-Tree Tonic

In the past year, Mozilla has supported the movement to raise awareness for Internet Health by launching the IRL podcast, hosting events around the country, and collaborating with change-makers such as Joseph Gordon-Levitt to educate the public about a healthy and safe Internet environment.

About Mozilla

Mozilla has been a pioneer and advocate for the open web for more than 15 years. We promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to experience the Web on computers, tablets, and mobile devices. For more information, visit www.mozilla.org.

About Rooftop Films

Rooftop Films is a non-profit organization whose mission is to engage and inspire the diverse communities of New York City by showcasing the work of emerging filmmakers and musicians. In addition to their annual Summer Series – which takes place in unique outdoor venues every weekend throughout the summer – Rooftop provides grants to filmmakers, rents equipment at low-cost to artists and non-profits, and supports film screenings citywide with the Rooftop Films Community Fund. At Rooftop Films, we bring underground movies outdoors. For more information and updates please visit their website at www.rooftopfilms.com.

The post Mozilla Announces “Net Positive: Internet Health Shorts” – A Film Screening About Society’s Relationship With The Internet appeared first on The Mozilla Blog.

Mozilla Addons BlogAdd-on Compatibility for Firefox 56

Firefox 56 will be released on September 26th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 56 for Developers, so you should also give it a look. Also, if you haven’t yet, please read our roadmap to Firefox 57.

Compatibility changes

Let me know in the comments if there’s anything missing or incorrect on these lists. We’d like to know if your add-on breaks on Firefox 56.

The automatic compatibility validation and upgrade for add-ons on AMO will run in a few weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 55.

Last stop!

LEGO end of train line

Firefox 56 will be the last version of Firefox to support legacy add-ons. It’s the last release cycle you’ll have to port your add-ons to WebExtensions. Many planned APIs won’t make the cut for 57, so make sure that you plan your development timeline accordingly.

This is also the last compatibility overview I’ll write. I started writing these 7 years ago, the first one covering Firefox 4. Looking ahead, backwards-incompatible changes in WebExtensions APIs should be rare. When and if they occur, we’ll post one-offs about them, so please keep following this blog for updates.

The post Add-on Compatibility for Firefox 56 appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.OrgPicasso Tower 360º tour with A-Frame

A 360º tour refers to an experience that simulates an in-person visit through the surrounding space. This “walkthrough” visit is composed of scenes in which you can look around at any point, similar to how you can look around in Google Street View. In a 360º tour, different scenes are accessible via discrete hotspots that users can enable or jump into, transporting themselves to a new place in the tour.

The magenta octahedron represents the user’s point of view. The image covers the inner surface of the sphere.

With A-Frame, creating such an experience is a surprisingly simple task.

360º panoramas

In photography, panoramas are essentially wide-angle images. Wide-angle means wide field of view, so the region of the physical space captured by the camera is wider than in regular pictures. A 360º panorama captures the space all the way around the camera.

In the same way that wide-angle photography requires special lenses, 360º panoramas require special cameras. You can read Kevin Ngo’s guide to 360º photography for advice and recommendations when creating panoramas.

Trying to represent a sphere in a rectangular format results in what we call a projection. Projection introduces distortion: straight lines become curves. You will probably be able to recognize panoramic pictures thanks to the effects of distortion that occur when panoramic views are represented in a two-dimensional space:

To undo the distortion, you have to project the rectangle back into a sphere. With A-Frame, that means using the panorama as the texture of a sphere facing the camera. The simplest approach is to use the a-sky primitive. The projection of the image must be equirectangular in order to work in this setup.
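In markup, that setup amounts to a single `a-sky` element inside the scene (the image path here is illustrative):

```html
<a-scene>
  <!-- The equirectangular panorama is projected onto the inside of a
       sphere that surrounds the camera; rotation picks the initial view. -->
  <a-sky src="images/panorama.jpg" rotation="0 -90 0"></a-sky>
</a-scene>
```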

See the Pen 360º panorama viewer by Salvador de la Puente González (@lodr) on CodePen.

By adding some bits of JavaScript, you can modify the src attribute of the sky primitive to change the panorama texture and enable the user to teleport to a different place in your scene.
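That teleport step can be sketched like this (the id-to-image map and element selectors are illustrative, not part of the template):

```javascript
// Map panorama ids to their equirectangular textures.
const panoramas = {
  garden: 'images/garden.jpg',
  corner: 'images/corner.jpg',
};

// Resolve the texture for a hotspot's target panorama, or null if unknown.
function nextSrc(target) {
  return panoramas[target] || null;
}

// In the browser, a hotspot handler would then swap the sky texture:
// document.querySelector('a-sky').setAttribute('src', nextSrc('corner'));
```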

Getting equirectangular images actually depends on the capturing device. For instance, the Samsung Gear 360 camera requires the use of official Samsung stitching software to combine the native dual-fisheye output into the equirectangular version; while the Ricoh Theta S outputs both equirectangular and dual-fisheye images without further interaction.

A dual-fisheye image arranges two fisheye images side by side

A dual-fisheye image is the common output of 360º cameras. A stitching software can convert this into an equirectangular image.

A 360º tour template

To create such an experience, you can use the 360 tour template that comes with aframe-cli. The aframe-360-tour-template encapsulates the concepts mentioned above in reusable components and meaningful primitives, enabling a developer to write semantic 360º tours in just a few steps.

aframe-cli has not been released yet (this is bleeding edge A-Frame tooling) but you can install a pre-release version with npm by running the following command:

npm install -g aframevr-userland/aframe-cli

Now you can access aframe-cli using the aframe command. Go to your workspace directory and start a new project by specifying the name of the project folder and the template:

$ aframe new tour --template 360-tour
$ cd tour

Start the experience with the following command:

$ aframe serve

Then visit the local address it serves to experience the tour.

Adding panoramas

Visit my Picasso Tower 360 gallery on Flickr and download the complete gallery. (Images are public domain so don’t worry about licensing issues.)

Decompress the file and paste the images inside the app/assets/images/ folder. I will use just three images in this example. After you finish this article, you can experiment with the complete tour. Be sure to notice that the panorama order matches naming: 360_0071_stitched_injected_35936080846_o goes before 360_0072_stitched_injected_35936077976_o, which goes before 360_0073_stitched_injected_35137574104_o and so on…

Edit index.html to locate the panoramas section inside the a-tour primitive. Change current panoramas by modifying their src attribute or add new ones by writing new a-panorama primitives. Replace the current panoramas with the following ones:

<a-panorama id="garden" src="images/360_0071_stitched_injected_35936080846_o.jpg"></a-panorama>
<a-panorama id="corner" src="images/360_0074_stitched_injected_35936077166_o.jpg"></a-panorama>
<a-panorama id="facade" src="images/360_0077_stitched_injected_35137573294_o.jpg"></a-panorama>

Save and reload your browser tab to see the new results.

You may need to correct the rotation of the panorama so the user faces the direction you want. Change the rotation component of the panorama to do so. (Remember to save and reload to see your changes.):

<a-panorama id="garden" src="images/360_0071_stitched_injected_35936080846_o.jpg" rotation="0 90 0"></a-panorama>

Now you need to connect the new sequence to the other panoramas with positioned hotspots. Replace current hotspots with the following one and look at the result by reloading the tab:

<a-hotspot id="garden-to-corner" for="garden" to="corner" mixin="hotspot-target" position="-3.86 -0.01 -3.18" rotation="-0.11 50.47 -0.00">
  <a-text value="CORNER" align="center" mixin="hotspot-text"></a-text>
</a-hotspot>

Remember that in order to activate a hotspot, while in desktop mode, you have to place the black circle over the magenta octahedron and click on the screen.

Placing hotspots

Positioning hotspots can be a frustrating endeavour. Fortunately, the template comes with a useful component to help with this task. Simply add the hotspot-helper component to your tour, referencing the hotspot you want to place as the value for the target property: <a-tour hotspot-helper="target: #corner-to-garden">. The component will move the hotspot as you look around and will display a widget in the top-left corner showing the world position and rotation of the hotspot, allowing you to copy these values to the clipboard.

Custom hotspots

You can customise the hotspot using mixins. Edit index.html and locate hotspot-text and hotspot-target mixin primitives inside the assets section.

For instance, to avoid the need to copy the world rotation values, we are going to use ngokevin's look-at component, which is already included in the template.

Modify the entity with the hotspot-text id to look like this:

<a-mixin id="hotspot-text" look-at="[camera]" text="font: exo2bold; width: 5" geometry="primitive: plane; width: 1.6; height: 0.4" material="color: black;" position="0 -0.6 0"></a-mixin>

Cursor feedback

If you enter VR mode, you will realise that teleporting to a new place requires you to fix your gaze on the hotspot you want to get to for an interval of time. We can change the duration of this interval by modifying the cursor component. Try increasing the timeout to two seconds:

<a-entity cursor="fuse: true; fuse-timeout: 2000" position="0 0 -1"
          geometry="primitive: ring; radiusInner: 0.02; radiusOuter: 0.03"
          material="color: black; shader: flat">
</a-entity>

Once you add fuse: true to your cursor component, you won’t need to click on the screen, even out of VR mode. A click event will trigger after fuse-timeout milliseconds.
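Because the fused gaze ends in a regular click event, an ordinary DOM listener is all you need to react to it. A minimal sketch (the trackActivations helper is my own, not part of the template):

```javascript
// Count fused-cursor activations on a hotspot. After `fuse-timeout`
// milliseconds of steady gaze the cursor fires a regular `click`
// event on the hovered entity, so a plain listener works.
function trackActivations(hotspot) {
  const state = { count: 0 };
  hotspot.addEventListener('click', () => {
    state.count += 1;
  });
  return state;
}

// In a page you would wire it up like:
//   trackActivations(document.querySelector('#garden-to-corner'));
```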

Following the suggestion in the article about the cursor component, you can create the perception that something is about to happen by attaching an a-animation primitive inside the cursor entity:

<a-entity cursor="fuse: true; fuse-timeout: 2000" position="0 0 -1"
          geometry="primitive: ring; radiusInner: 0.02; radiusOuter: 0.03"
          material="color: black; shader: flat">
      <a-animation begin="fusing" end="mouseleave" easing="ease-out" attribute="scale"
                   fill="backwards" from="1 1 1" to="0.2 0.2 0.2"></a-animation>
</a-entity>

Fix the gaze on a hotspot for 2 seconds to activate the hotspot and teleport.

Click on the picture above to see fuse and the animation feedback in action.

Ambient audio

Sound is a powerful tool for increasing the illusion of presence. You can find several places on the Internet offering royalty-free sounds, like soundbible.com. Once you decide on the perfect ambient noise for the experience you're creating, grab the file URL or, if hotlinking isn't possible, download the file and serve it locally. Create a new folder sounds under app/assets and put the audio file inside.

Add an audio tag that points to the sound file URL inside the <a-assets> element in order for the file to load:

   <audio id="ambient-sound" src="sounds/environment.mp3"></audio>

And use the sound component referencing the audio element id to start playing the audio:

<a-tour sound="src: #ambient-sound; loop: true; autoplay: true; volume: 0.4"></a-tour>

Adjust the volume by modifying the volume property, which ranges from 0 to 1.
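You can also change the volume at runtime. A hypothetical helper (not part of the template) could clamp and apply a new value using A-Frame's three-argument setAttribute(component, property, value) form:

```javascript
// Clamp a requested volume into the valid [0, 1] range and apply it
// to the tour entity's sound component. `tour` is the a-tour element
// (anything with an A-Frame-style setAttribute method).
function setAmbientVolume(tour, volume) {
  const clamped = Math.min(1, Math.max(0, volume));
  tour.setAttribute('sound', 'volume', clamped);
  return clamped;
}

// In a page you would call something like:
//   setAmbientVolume(document.querySelector('a-tour'), 0.4);
```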


360º tours offer first-time WebVR creators a perfect starting project that does not require exotic or expensive gear to begin VR development. Panoramic 360º scenes naturally fall back to regular 2D visualization on a desktop or mobile screen, and with a cardboard headset or VR head-mounted display, users will enjoy an improved sense of immersion.

With aframe-cli and the 360º tour template you can now quickly set up the basics to customise and publish your 360º VR tour. Create a new project to show us your favourite places (real or imaginary!) by adding panoramic views, or start hacking on the template to extend its basic functionality. Either way, don’t forget to share your project with the A-Frame community in Slack and Twitter.

The Mozilla Blog60,000,000 Clicks for Copyright Reform

More than 100,000 people—and counting—are demanding Internet-friendly copyright laws in the EU


60,000,000 digital flyers.

117,000 activists.

12,000 tweets to Members of the European Parliament (MEPs).

Europe has been Paperstormed.

Earlier this year, Mozilla and our friends at Moniker launched Paperstorm.it, a digital advocacy tool that urges EU policymakers to update copyright laws for the Internet age.

Paperstorm.it users drop digital flyers onto maps of European landmarks, like the Eiffel Tower and the Reichstag Building in Berlin. When users drop a certain number of flyers, they trigger impassioned tweets to European lawmakers:

“We built Paperstorm as a fun (and mildly addictive) way for Internet users to learn about and engage with a serious issue: the EU’s outdated copyright laws,” says Mozilla’s Brett Gaylor, one of Paperstorm’s creators.

“The Parliament has a unique opportunity to reform copyright,” says Raegan MacDonald, Mozilla’s Senior EU Policy Manager. “We hope this campaign served as a reminder that EU citizens want a modern framework that will promote — not hinder — innovation and creativity online. The success of this reform hinges on whether the interests of these citizens — whether creators, innovators, teachers, librarians, or anyone who uses the internet — are truly taken into account in the negotiations.”

Currently, lawmakers are crafting amendments to the proposal for a new copyright law, a process that will end this year. Now is the time to make an impact. And we are.

Over the last two months, more than 100,000 Internet users visited Paperstorm.it. They sent 12,000 tweets to key MEPs, like France’s Jean-Marie Cavada, Germany’s Angelika Niebler, and Lithuania’s Antanas Guoga. In total, Paperstormers contacted 13 MEPs in 10 countries: Austria, France, Germany, Italy, Lithuania, Malta, Poland, Romania, Sweden and the UK.

Then, we created custom MEP figurines inside Paperstorm snowglobes. A Mozilla community member from Italy hand-delivered these snowglobes right to MEPs' offices in Brussels, alongside a letter urging a balanced copyright reform for the digital age. Here’s the proof:

Angelika Niebler, Member, ITRE (left) and Jean-Marie Cavada, Vice-Chair, JURI

JURI Committee Vice-Chair, MEP Laura Ferrara, Italy (center) with Mozilla’s Raegan MacDonald and Edoardo Viola

Thanks for clicking. We’re looking forward to what’s ahead: 100,000,000 clicks—and common-sense copyright laws for the Internet age.

The post 60,000,000 Clicks for Copyright Reform appeared first on The Mozilla Blog.

Mozilla Open Policy & Advocacy BlogMozilla statement on Supreme Court hearings on Aadhaar

The Supreme Court of India is setting up a nine judge bench to consider whether the right to privacy is a fundamental right under the Indian Constitution. This move is a result of multiple legal challenges to Aadhaar, the Indian national biometric identity database, which the Government of India is currently operating without any meaningful privacy protections.

We’re pleased to see the Indian Supreme Court take this important step forward in considering the privacy implications of Aadhaar. At a time when the Government of India is increasingly making Aadhaar mandatory for everything from getting food rations, to accessing healthcare, to logging into a wifi hotspot, a strong framework protecting privacy is critical. Indians have been waiting for years for a Constitutional Bench of the Supreme Court to take up these Aadhaar cases, and we hope the Right to Privacy will not be in question for much longer.

The post Mozilla statement on Supreme Court hearings on Aadhaar appeared first on Open Policy & Advocacy.

This Week In RustThis Week in Rust 191

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is extfsm, a crate to help build finite state machines. Thanks to Tony P. for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

103 pull requests were merged in the last week

New Contributors

  • Luca Barbato
  • Lynn
  • Sam Cappleman-Lynes
  • Valentin Brandl
  • William Brown
  • Yorwba

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

An interesting issue:

Good first issues:

We're happy to mentor these, please reach out to us in #rust-style if you'd like to get involved

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Good farmers use their bare hands, average farmers use a combine harvester.

/u/sin2pifx in response to "Good programmers write C, average programmers write Rust".

Thanks to Rushmore for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Code SimplicityThe Fundamental Philosophy of Debugging

Sometimes people have a very hard time debugging. Mostly, these are people who believe that in order to debug a system, you have to think about it instead of looking at it.

Let me give you an example of what I mean. Let’s say you have a web server that is silently failing to serve pages to users 5% of the time. What is your reaction to this question: “Why?”

Do you immediately try to come up with some answer? Do you start guessing? If so, you are doing the wrong thing.

The right answer to that question is: “I don’t know.”

So this gives us the first step to successful debugging:

When you start debugging, realize that you do not already know the answer.

It can be tempting to think that you already know the answer. Sometimes you can guess and you’re right. It doesn’t happen very often, but it happens often enough to trick people into thinking that guessing the answer is a good method of debugging. However, most of the time, you will spend hours, days, or weeks guessing the answer and trying different fixes with no result other than complicating the code. In fact, some codebases are full of “solutions” to “bugs” that are actually just guesses—and these “solutions” are a significant source of complexity in the codebase.

Actually, as a side note, I’ll tell you an interesting principle. Usually, if you’ve done a good job of fixing a bug, you’ve actually caused some part of the system to go away, become simpler, have better design, etc. as part of your fix. I’ll probably go into that more at some point, but for now, there it is. Very often, the best fix for a bug is a fix that actually deletes code or simplifies the system.

But getting back to the process of debugging itself, what should you do? Guessing is a waste of time, imagining reasons for the problem is a waste of time—basically most of the activity that happens in your mind when first presented with the problem is a waste of time. The only things you have to do with your mind are:

  1. Remember what a working system behaves like.
  2. Figure out what you need to look at in order to get more data.

Because you see, this brings us to the most important principle of debugging:

Debugging is accomplished by gathering data until you understand the cause of the problem.

The way that you gather data is, almost always, by looking at something. In the case of the web server that’s not serving pages, perhaps you would look at its logs. Or you could try to reproduce the problem so that you can look at what happens with the server when the problem is happening. This is why people often want a “reproduction case” (a series of steps that allow you to reproduce the exact problem)—so that they can look at what is happening when the bug occurs.

Sometimes the first piece of data you need to gather is what the bug actually is. Often users file bug reports that have insufficient data. For example, let’s say a user files the bug, “When I load the page, the web server doesn’t return anything.” That’s not sufficient information. What page did they try to load? What do they mean by “doesn’t return anything?” Is it just a white page? You might assume that’s what the user meant, but very often your assumptions will be incorrect. The less experienced your user is as a programmer or computer technician, the less well they will be able to express specifically what happened without you questioning them. In these cases, unless it’s an emergency, the first thing that I do is just send the user back specific requests to clarify their bug report, and leave it at that until they respond. I don’t look into it at all until they clarify things. If I did go off and try to solve the problem before I understood it fully, I could be wasting my time looking into random corners of the system that have nothing to do with any problem at all. It’s better to go spend my time on something productive while I wait for the user to respond, and then when I do have a complete bug report, to go research the cause of the now-understood bug.

As a note on this, though, don’t be rude or unfriendly to users just because they have filed an incomplete bug report. The fact that you know more about the system and they know less about the system doesn’t make you a superior being who should look down upon all users with disdain from your high castle on the shimmering peak of Smarter-Than-You Mountain. Instead, ask your questions in a kind and straightforward manner and just get the information. Bug filers are rarely intentionally being stupid—rather, they simply don’t know and it’s part of your job to help them provide the right information. If people frequently don’t provide the right information, you can even include a little questionnaire or form on the bug-filing page that makes them fill in the right information. The point is to be helpful to them so that they can be helpful to you, and so that you can easily resolve the issues that come in.

Once you’ve clarified the bug, you have to go and look at various parts of the system. Which parts of the system to look at is based on your knowledge of the system. Usually it’s logs, monitoring, error messages, core dumps, or some other output of the system. If you don’t have these things, you might have to launch or release a new version of the system that provides the information before you can fully debug the system. Although that might seem like a lot of work just to fix a bug, in reality it often ends up being faster to release a new version that provides sufficient information than to spend your time hunting around the system and guessing what’s going on without information. This is also another good argument for having fast, frequent releases—that way you can get out a new version that provides new debugging information quickly. Sometimes you can get a new build of your system out to just the user who is experiencing the problem, too, as a shortcut to get the information that you need.

Now, remember above that I mentioned that you have to remember what a working system looks like? This is because there is another principle of debugging:

Debugging is accomplished by comparing the data that you have to what you know the data from a working system should look like.

When you see a message in a log, is that a normal message or is it actually an error? Maybe the log says, “Warning: all the user data is missing.” That looks like an error, but really your web server prints that every single time it starts. You have to know that a working web server does that. You’re looking for behavior or output that a working system does not display. Also, you have to understand what these messages mean. Maybe the web server optionally has some user database that you aren’t using, which is why you get that warning—because you intend for all the “user data” to be missing.

Eventually you will find something that a working system does not do. You shouldn’t immediately assume you’ve found the cause of the problem when you see this, though. For example, maybe it logs a message saying, “Error: insects are eating all the cookies.” One way that you could “fix” that behavior would be to delete the log message. Now the behavior is like normal, right? No, wrong—the actual bug is still happening. That’s a pretty stupid example, but people do less-stupid versions of this that don’t fix the bug. They don’t get down to the basic cause of the problem and instead they paper over the bug with some workaround that lives in the codebase forever and causes complexity for everybody who works on that area of the code from then on. It’s not even sufficient to say “You will know that you have found the real cause because fixing that fixes the bug.” That’s pretty close to the truth, but a closer statement is, “You will know that you have found a real cause when you are confident that fixing it will make the problem never come back.” This isn’t an absolute statement—there is a sort of scale of how “fixed” a bug is. A bug can be more fixed or less fixed, usually based on how “deep” you want to go with your solution, and how much time you want to spend on it. Usually you’ll know when you’ve found a decent cause of the problem and can now declare the bug fixed—it’s pretty obvious. But I wanted to warn you against papering over a bug by eliminating the symptoms but not handling the cause.

And of course, once you have the cause, you fix it. That’s actually the simplest step, if you’ve done everything else right.

So basically this gives us four primary steps to debugging:

  1. Familiarity with what a working system does.
  2. Understanding that you don’t already know the cause of the problem.
  3. Looking at data until you know what causes the problem.
  4. Fixing the cause and not the symptoms.

This sounds pretty simple, but I see people violate this formula all the time. In my experience, most programmers, when faced with a bug, want to sit around and think about it or talk about what might be causing it—both forms of guessing. It’s okay to talk to other people who might have information about the system or advice on where to look for data that would help you debug. But sitting around and collectively guessing what could cause the bug isn’t really any better than sitting around and doing it yourself, except perhaps that you get to chat with your co-workers, which could be good if you like them. Mostly though what you’re doing in that case is wasting a bunch of people’s time instead of just wasting your own time.

So don’t waste people’s time, and don’t create more complexity than you need to in your codebase. This debugging method works. It works every time, on every codebase, with every system. Sometimes the “data gathering” step is pretty hard, particularly with bugs that you can’t reproduce. But at the worst, you can gather data by looking at the code and trying to see if you can see a bug in it, or draw a diagram of how the system behaves and see if you can perceive a problem there. I would only recommend that as a last resort, but if you have to, it’s still better than guessing what’s wrong or assuming you already know.

Sometimes, it’s almost magical how a bug resolves just by looking at the right data until you know. Try it for yourself and see. It can actually be fun, even.


Mozilla Marketing Engineering & Ops BlogMozMEAO SRE Status Report - July 18, 2017

Here’s what happened on the MozMEAO SRE team from July 11th to July 18th.

Current work


Decommissioning old infrastructure

We’re planning on decommissioning our Deis 1 infrastructure starting with Ireland, as our apps are all running on Kubernetes in multiple regions. Once the Ireland cluster has been shut down, we’ll continue on to our Portland cluster.

Additionally, we’ll be scaling down our Virginia cluster, as our apps are being moved to regions with lower latencies for the majority of our users.


The Rust Programming Language BlogThe 2017 Rust Conference Lineup

The Rust Community is holding three major conferences in the near future!

Aug 18-19: RustConf

RustConf is a two-day event held in Portland, OR, USA on August 18-19. The first day offers tutorials on Rust given directly by members of the Rust core team, ranging from absolute basics to advanced ownership techniques. In addition to the training sessions, on Friday there will be a RustBridge workshop session for people from underrepresented groups in tech, as well as a session on Tock, the secure embedded operating system.

The second day is the main event, with talks at every level of expertise, covering basic and advanced techniques, experience reports, guidance on teaching, and interesting libraries.

Tickets are still on sale! We offer a scholarship for those who would otherwise find it difficult to attend. Join us in lovely Portland and hear about the latest developments in the Rust world!

Follow us on Twitter @rustconf.

Sept 30 - Oct 1: RustFest

Hot off another successful event in Kyiv earlier this year, we invite you to join us at RustFest, the European Rust community conference series. Over the weekend of the 30th of September we’ll gather in Zürich, Switzerland to talk Rust, its ecosystem and community. All day Saturday will feature talks with topics ranging from hardware and testing, through concurrency and disassemblers, all the way to important topics like community, learning and empathy. Sunday has a focus on learning and connecting, either at one of the many workshops we are hosting or in the central meet-n-greet-n-hack area provided.

Thanks to the many awesome sponsors, we are able to offer affordable tickets, which go on sale in a couple of weeks! Keep an eye on rustfest.eu, get all the updates on the blog and don’t forget to follow us on Twitter @rustfest. Want to get a glimpse into what it’s like? Check out the videos from Kyiv or Berlin!

Oct 26-27: Rust Belt Rust

For Rust Belt Rust’s second year, we’ll be in Columbus, OH, USA at the Columbus Athenaeum, and tickets are on sale now! We will have a day of workshops on Thursday and a day of single track talks on Friday. Speakers include Nell Shamrell, who works on Habitat at Chef, Emma Gospodinova, who is doing a GSoC project working on the Rust plugin for the KDevelop IDE, and Core Team members Aaron Turon, Niko Matsakis, and Carol Nichols. We’d love for YOU to be a speaker as well - our CFP is open now until Aug 7. We hope to see you at the Rustiest conference in the eastern US! Follow us on Twitter @rustbeltrust for the latest news.

Mozilla Addons BlogAdd-ons at Mozilla All Hands San Francisco

Firefox add-on staff and contributors gathered at Mozilla’s recent All Hands meeting in San Francisco to spend time as a group focusing on our biggest priority this year: the Firefox 57 release in November.

During the course of the week, Mozillians could be found huddled together in various conference spaces discussing blocker issues, making plans, and hacking on code. Here’s a recap of the week and a glance at what we have in store for the second half of 2017.

Add-on Engineering

Add-on engineers Luca Greco and Kumar McMillan take a break to model new add-on jackets.

For most of the engineering team, the week was a chance to catch up on the backlog of bugs. (The full list of bugs closed during the week can be found here.)

We also had good conversations about altering HTTP responses in the webRequest API, performance problems with the blocklist on Firefox startup, and sketching out a roadmap for web-ext, the command line tool for extension development. We had a chance to make progress on the browser.proxy API as well.

Improving addons.mozilla.org (AMO)

Having recently completed the redesign of AMO for Android, we’ve now turned our attention to refreshing the desktop version. Goals for the next few months include modernizing the homepage and making it easier to find great add-ons. Here’s a preview of the new look:


Another area of focus was migrating to Django 1.11. Most of the work on the Django upgrade involved replacing and removing incompatible libraries and customizations, and a lot of progress was made during the week.

Add-on Reviews

Former intern Elvina Valieva helped make improvements to the web-ext command line tool, in addition to doing some impressive marine-themed photoshopping.

Review queue wait times have dramatically improved in the past few weeks, and we’re on track to deliver even more improvements in the next few months. During our week together, we also discussed ideas for improving the volunteer reviewer program and evolving it to stay relevant to the new WebExtensions model. We’ll be reaching out to the review team for feedback in the coming weeks.

Get Involved

Interested in contributing to the add-ons community? Check out our wiki to see a list of current opportunities.


The post Add-ons at Mozilla All Hands San Francisco appeared first on Mozilla Add-ons Blog.

Gervase MarkhamTurns Out, Custom T-Shirts Are Cheap

The final party at the recent Mozilla All Hands, organized by the ever-awesome Brianna Mark, had a “Your Favourite Scientist” theme. I’ve always been incredibly impressed by Charles Babbage, the English father of the digital programmable computer. And he was a Christian, as well. However, I didn’t really want to drag formal evening wear all the way to San Francisco.

Instead, I made some PDFs in 30 minutes and had a Babbage-themed t-shirt made up by VistaPrint, for the surprising and very reasonable sum of around £11, with delivery inside a week. I had no idea one-off custom t-shirts were so cheap. I must think of other uses for this information. Anyway, here’s the front:

and the back:

The diagram is, of course, part of his original plans for his Difference Engline. Terrible joke, but there you go. The font is Tangerine. Sadly, the theme was not as popular as the Steampunk one we did a couple of All Hands ago, and there weren’t that many people in costume. And the Academy of Sciences was cold enough that I had my hoodie on most of the time…

Aaron KlotzWin32 Gotchas

For the second time since I have been at Mozilla I have encountered a situation where hooks are called for notifications of a newly created window, but that window has not yet been initialized properly, causing the hooks to behave badly.

The first time was inside our window neutering code in IPC, while the second time was in our accessibility code.

Every time I have seen this, there is code that follows this pattern:

HWND hwnd = CreateWindowEx(/* ... */);
if (hwnd) {
  // Do some follow-up initialization to hwnd (Using SetProp as an example):
  SetProp(hwnd, "Foo", bar);
}

This seems innocuous enough, right?

The problem is that CreateWindowEx calls hooks. If those hooks then try to do something like GetProp(hwnd, "Foo"), that call is going to fail because the “Foo” prop has not yet been set.

The key takeaway from this is that, if you are creating a new window, you must do any follow-up initialization from within your window proc’s WM_CREATE handler. This will guarantee that your window’s initialization will have completed before any hooks are called.

You might be thinking, “But I don’t set any hooks!” While this may be true, you must not forget about hooks set by third-party code.

“But those hooks won’t know anything about my program’s internals, right?”

Perhaps, perhaps not. But when those hooks fire, they give third-party software the opportunity to run. In some cases, those hooks might even cause the thread to reenter your own code. Your window had better be completely initialized when this happens!

In the case of my latest discovery of this issue in bug 1380471, I made it possible to use a C++11 lambda to simplify this pattern.

CreateWindowEx accepts a lpParam parameter which is then passed to the WM_CREATE handler as the lpCreateParams member of a CREATESTRUCT.

By setting lpParam to a pointer to a std::function<void(HWND)>, we may then supply any callable that we wish for follow-up window initialization.

Using the previous code sample as a baseline, this allows me to revise the code to safely set a property like this:

std::function<void(HWND)> onCreate([](HWND aHwnd) -> void {
  SetProp(aHwnd, "Foo", bar);
});

HWND hwnd = CreateWindowEx(/* ... */, &onCreate);
// At this point it is already too late to further initialize hwnd!

Note that since lpParam is always passed during WM_CREATE, which always fires before CreateWindowEx returns, it is safe for onCreate to live on the stack.

I liked this solution for the a11y case because it preserved the locality of the initialization code within the function that called CreateWindowEx; the window proc for this window is implemented in another source file and the follow-up initialization depends on the context surrounding the CreateWindowEx call.

Speaking of window procs, here is how that window’s WM_CREATE handler invokes the callable:

switch (uMsg) {
  case WM_CREATE: {
    auto createStruct = reinterpret_cast<CREATESTRUCT*>(lParam);
    auto createProc = reinterpret_cast<std::function<void(HWND)>*>(
      createStruct->lpCreateParams);
    if (createProc && *createProc) {
      (*createProc)(hwnd);
    }
    return 0;
  }
  // ...
}

TL;DR: If you see a pattern where further initialization work is being done on an HWND after a CreateWindowEx call, move that initialization code to your window’s WM_CREATE handler instead.

Mozilla Open Policy & Advocacy BlogMozilla files comments to save the internet… again

Today, we filed Mozilla’s comments to the FCC. Just want to take a look at them? They’re right here – or read on for more.

Net neutrality is critical to the internet’s creators, innovators, and everyday users. We’ve talked a lot about the importance of net neutrality over the years, both in the US and globally — and there have been many positive developments. But today there’s a looming threat: FCC Chairman Pai’s plan to roll back enforceable net neutrality protections in his so-called “Restoring Internet Freedom” proceeding.

Net neutrality — enforceable and with clear rules for providers — is critical to the future of the internet. Our economy and society depend on the internet being open. For net neutrality to work, it must be enforceable. In the past, when internet service providers (ISPs) were not subject to enforceable rules, they violated net neutrality. ISPs prevented users from chatting on FaceTime and streaming videos, among other questionable business practices. The 2015 rules fixed this: the Title II classification of broadband protected access to the open internet and made all voices free to be heard. The 2015 rules preserved, and made enforceable, the fundamental principles and assumptions in which the internet has always been rooted. Abandoning these core assumptions about how the internet works and is regulated could wreak havoc. It would hurt users and stymie innovation. It could very well see the US fall behind the other 47 countries around the world that have enforceable net neutrality rules.

We’ve asked you to comment, and we’ve been thrilled with your response. Thank you! Keep it coming! Now it’s our turn. Today, we are filing Mozilla’s comments on the proceeding, arguing against this rollback of net neutrality protections. Net neutrality is a critical part of why the internet is great, and we need to protect it:

  • Net neutrality is fundamental to free speech. Without it, big companies could censor anyone’s voice and make it harder to speak up online.
  • Net neutrality is fundamental to competition. Without it, ISPs can prioritize their businesses over newcomer companies trying to reach users with the next big thing.
  • Net neutrality is fundamental to innovation. Without it, funding for startups could dry up, as established companies that can afford to “pay to play” become the only safe tech investments.
  • And, ultimately, net neutrality is fundamental to user choice. Without it, ISPs can choose what you access — or how fast it may load — online.

The best way to protect net neutrality is with what we have today: clear, lightweight rules that are enforceable by the FCC. There is no basis to change net neutrality rules, as there is no clear evidence of a negative impact on anything, including ISPs’ long-term infrastructure investments. We’re concerned that user rights and online innovation have become a political football, when really most people and companies agree that net neutrality is important.

There’s more to come in this process — many will write “reply comments” over the next month. After that, the Commission should consider these comments (and we hope they reconsider the plan entirely) and potentially vote on the proposal later this year. We fully expect the courts to weigh in here if the new rule is enacted, and we’ll engage there too. Stay tuned!

The post Mozilla files comments to save the internet… again appeared first on Open Policy & Advocacy.

Will Kahn-GreeneAntenna: post-mortem and project wrap-up


Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user whether they would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro--specifically the Socorro collector.

The Socorro collector is one of several components that comprise Socorro. Each of the components has different uptime requirements and different security risk profiles. However, all the code is maintained in a single repository and we deploy everything every time we do a deploy. This is increasingly inflexible and makes it difficult for us to make architectural changes to Socorro without affecting everything and incurring uptime risk for components that have high uptime requirements.

Because of that, in early 2016, we embarked on a rearchitecture to split out some components of Socorro into separate services. The first component to get split out was the Socorro collector since it has the highest uptime requirements of all the Socorro components, but rarely changes, so it'd be a lot easier to meet those requirements if it was separate from the rest of Socorro.

Thus I was tasked with splitting out the Socorro collector and this blog post covers that project. It's a bit stream-of-consciousness, because I think there's some merit to explaining the thought process behind how I did the work over the course of the project for other people working on projects.

Read more… (15 mins to read)

Wladimir PalantEasy Passwords released as a Web Extension

I’ve finally released Easy Passwords as a Web Extension (not yet through AMO review at the time of writing), so that it can continue working after Firefox 57. To be precise, this is an intermediate step, a hybrid extension meant to migrate data out of the Add-on SDK into the Web Extension part. But all the functionality is in the Web Extension part already, and the data migration part is tiny. Why did it take me so long? After all, Easy Passwords was created when Mozilla’s Web Extensions plan was already announced. So I was designing the extension with Web Extensions in mind, which is why it could be converted without any functionality changes now. Also, Easy Passwords has been available for Chrome for a while already.

The trouble was mostly the immaturity of the Web Extensions platform, which is why I chose to base the extension on the Add-on SDK initially (hey, Mozilla used to promise continued support for the Add-on SDK, so it looked like the best choice back then). Even now I had to fight a bunch of bugs before things were working as expected. Writing to clipboard is weird enough in Chrome, but in Firefox there is also a bug preventing you from doing so in the background page. Checking whether one of the extension’s own pages is open? Expect this to fail, fixed only very recently. Presenting the user with a file download dialog? Not without hacks. And then there are some strange keyboard focus issues that I didn’t even file a bug for yet.

There are still plenty more bugs and inconsistencies. For example, I managed to convert my Enforce Encryption extension so that it would work in Chrome, but it won’t work in Firefox due to a difference in the network stack. But Mozilla’s roadmap is set in stone, Firefox 57 it is. The good news: it could have been worse, Microsoft Edge shipped with an even more immature extensions platform. I complained about difficulties presenting a file download dialog to the user? In Edge, there are three bugs playing together nicely to make this task completely impossible: 1, 2, 3.

Firefox NightlyPreview Storage API in Firefox Nightly

We are happy to announce that the Storage API feature is ready for testing in Firefox Nightly Desktop!

Storage API

The Storage API allows Web sites to find out how much space they can use (quota) and how much they are already using (usage), and also lets them ask Firefox to store this data persistently, per origin. This feature is available only in secure contexts (HTTPS). You can also use the Storage API from Web Workers.
There are plenty of APIs that can be used for storage, e.g., localStorage and IndexedDB. The data stored for a Web site managed by the Storage API — which is defined by the Storage Standard specification — includes:
  • IndexedDB databases
  • Cache API data
  • Service Worker registrations
  • Web Storage API data
  • History state information saved using pushState()
  • Notification data
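As a sketch of the quota/usage side of the API: the `{usage, quota}` result shape comes from the Storage Standard, but the formatting helper below is purely illustrative and hypothetical.

```javascript
// Hypothetical helper: format the result of navigator.storage.estimate().
// Only the {usage, quota} field names come from the Storage Standard;
// this function itself is made up for illustration.
function formatEstimate({ usage, quota }) {
  const percent = ((usage / quota) * 100).toFixed(1);
  return `Using ${usage} of ${quota} bytes (${percent}%)`;
}

// In a page served over HTTPS, you would call it roughly like this:
// navigator.storage.estimate().then(e => console.log(formatEstimate(e)));
```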

Storage limits

The maximum browser storage space is dynamic: it is based on your hard drive size when Firefox launches. The global limit is calculated as 50% of free disk space. There is also a second limit, called the group limit, defined as 20% of the global limit; to prevent individual sites from using exorbitant amounts of storage even when free space is plentiful, the group limit is capped at 2GB. Each origin is part of a group (a group of origins).
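The limit arithmetic above can be sketched as follows. This is an illustrative calculation following the percentages in the text, not Firefox's actual implementation:

```javascript
// Sketch of the storage limit arithmetic described above (illustrative only):
// global limit = 50% of free disk space; group limit = 20% of the global
// limit, capped at 2GB.
function storageLimits(freeDiskBytes) {
  const GB = 1024 ** 3;
  const globalLimit = freeDiskBytes * 0.5;                // 50% of free disk
  const groupLimit = Math.min(globalLimit * 0.2, 2 * GB); // 20%, 2GB cap
  return { globalLimit, groupLimit };
}
```

For a drive with 100GB free, this yields a 50GB global limit, and the group limit hits the 2GB cap (since 20% of 50GB would be 10GB).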

Site Storage

Basically, each origin has an associated site storage unit. Site storage consists of zero or more site storage units. A site storage unit contains a single box, and a box has a mode that is either “best-effort” or “persistent”.
Traditionally, when users ran out of storage space on their device, data stored with these APIs was lost without the user being able to intervene. Modern browsers manage storage space automatically: they store data until the quota is reached and do housekeeping work on their own. This is called “best-effort” mode.
But this doesn’t fit web games or music streaming sites, which might store data offline for commute-friendly use cases: since that storage is evicted when available space runs low, users face the awful experience of re-downloading their data. Such sites may also need more space than “best-effort” mode allows, since it is bound by the group limit, with an upper bound of just 2GB.
With the Storage API, a site that has the “persistent-storage” permission can call navigator.storage.persist() and let the user decide whether to keep its data; that is “persistent” mode.

if (navigator.storage && navigator.storage.persist) {
  navigator.storage.persist().then(persisted => {
    if (persisted) {
      console.log("Persistent mode. Quota is bound by global limit (50% of disk space).");
    } else {
      console.log("Best-effort mode. Storage may be evicted under storage pressure.");
    }
  });
}

Site Storage Units

  • Each example is independent here.
  • If a user allows a site to store data persistently, the site can store more data on disk, and the site storage quota for that origin is limited by the global limit rather than the group limit.
  • The Site Storage Unit of Origin A consists of three different types of storage: IndexedDB databases, Local Storage, and Cache API data; the Site Storage Unit of Origin B consists of Cache API data only. The quotas of Origins A and B are limited by the global limit.
  • The Site Storage Unit of Origin C is full: it has reached its quota (the global limit) and can’t store any more data without removing existing site storage data. The UA will start to evict “best-effort” site storage units under a least recently used (LRU) policy; if all best-effort site storage units have been freed and there is still not enough space, the user agent will send a storage pressure notification asking the user to clear storage explicitly. See the storage pressure notification screenshot below. Firefox may notify users to clear storage when data usage exceeds 90% of the global limit.
  • The Site Storage Unit of Origin D is also full, but its box mode is “best-effort”, so its quota is the per-origin storage limit (Firefox 56 is still bound by the group limit), which is smaller than persistent storage. The user agent will try to retain the data contained in the box for as long as it can, but will not warn users if storage space runs low and it becomes necessary to clear the box to relieve the storage pressure.

Prompting the user to clear persistent storage



Persistent Storage Permission

Preferences – Site Data


If the user “persists” a site, the site data of that origin won’t be evicted until the user manually deletes it in Preferences. With the new ‘Site Data Manager’, the user can now manage site data easily and delete persistent site data manually in one place. Although cookies are not part of Site Storage, Site Storage should be cleared along with cookies to prevent user tracking or data inconsistency.

Storage API is now available for testing in Firefox Nightly 56.

What’s currently supported

  • navigator.storage.estimate()
  • navigator.storage.persist()
  • navigator.storage.persisted()
  • new Site Data Manager in Preferences
  • IndexedDB, asm.js caching, Cache API data are managed by Storage API

Storage API V1.5

  • Local Storage/History state information/Notification data are managed by Storage API

Storage API V2

  • Multiple Boxes

Try it Out

To use the new Site Data Manager, open “Privacy Preferences” (you can type about:preferences#privacy in the address bar). Click the “Settings…” button beside “Site Data”.

Take a look at the Storage API wiki page for much more information and to get involved.

Robert O'CallahanConfession Of A C/C++ Programmer

I've been programming in C and C++ for over 25 years. I have a PhD in Computer Science from a top-ranked program, and I was a Distinguished Engineer at Mozilla where for over ten years my main job was developing and reviewing C++ code. I cannot consistently write safe C/C++ code. I'm not ashamed of that; I don't know anyone else who can. I've heard maybe Daniel J. Bernstein can, but I'm convinced that, even at the elite level, such people are few and far between.

I see a lot of people assert that safety issues (leading to exploitable bugs) with C and C++ only afflict "incompetent" or "mediocre" programmers, and one need only hire "skilled" programmers (such as, presumably, the asserters) and the problems go away. I suspect such assertions are examples of the Dunning-Kruger effect, since I have never heard them made by someone I know to be a highly skilled programmer.

I imagine that many developers successfully create C/C++ programs that work for a given task, and no-one ever fuzzes or otherwise tries to find exploitable bugs in those programs, so those developers naturally assume their programs are robust and free of exploitable bugs, creating false optimism about their own abilities. Maybe it would be useful to have an online coding exercise where you are given some apparently-simple task, you write a C/C++ program to solve it, and then your solution is rigorously fuzzed for exploitable bugs. If any such bugs are found then you are demoted to the rank of "incompetent C/C++ programmer".

Robert O'CallahanAn Inflection Point In The Evolution Of Programming Languages

Two recent Rust-related papers are very exciting.

Rustbelt formalizes (a simplified version of) Rust's semantics and type system and provides a soundness proof that accounts for unsafe code. This is a very important step towards confidence in the soundness of safe Rust, and towards understanding what it means for unsafe code to be valid — and building tools to check that.

This systems paper is about exploiting Rust's remarkably strong control of aliasing to solve a few different OS-related problems.

It's not often you see a language genuinely attractive to the systems research community (and programmers in general, as the Rust community shows) being placed on a firm theoretical foundation. (It's pretty common to see programming languages being advocated to the systems community by their creators, but this is not that.) Whatever Rust's future may be, it is setting a benchmark against which future systems programming languages should be measured. Key Rust features — memory safety, data-race freedom, unique ownership, and minimal space/time overhead, justified by theory — should from now on be considered table stakes.

Mozilla Addons BlogAdd-ons Update – 2017/07

Here’s the monthly update of the state of the add-ons world.

The Road to Firefox 57 explains what developers should look forward to in regards to add-on compatibility for the rest of the year. So please give it a read if you haven’t already.

The Review Queues

In the past month, our team reviewed 1,597 listed add-on submissions:

  • 1294 in fewer than 5 days (81%).
  • 110 between 5 and 10 days (7%).
  • 193 after more than 10 days (12%).

301 listed add-ons are awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for 55 and the bulk validation has been run. Additionally, the compatibility post for 56 is coming up.

Make sure you’ve tested your add-ons and either use WebExtensions or set the multiprocess compatible flag in your manifest. As always, we recommend that you test your add-ons on Beta.

If you’re an add-ons user, you can install the Add-on Compatibility Reporter. It helps you identify and report any add-ons that aren’t working anymore.


We would like to thank the following people for their recent contributions to the add-ons world:

  • Aayush Sanghavi
  • Santiago Paez
  • Markus Strange
  • umaarabdullah
  • Ahmed Hasan
  • Fiona E Jannat
  • saintsebastian
  • Atique Ahmed
  • Apoorva Pandey
  • Cesar Carruitero
  • J.P. Rivera
  • Trishul Goel
  • Santosh
  • Christophe Villeneuve

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/07 appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.OrgIntroducing sphinx-js, a better way to document large JavaScript projects

Until now, there has been no good tool for documenting large JavaScript projects. JSDoc, long the sole contender, has some nice properties:

  • A well-defined set of tags for describing common structures
  • Tooling like the Closure Compiler which hooks into those tags

But the output is always a mere alphabetical list of everything in your project. JSDoc scrambles up and flattens out your functions, leaving new users to infer their relationships and mentally sort them into comprehensible groups. While you can get away with this for tiny libraries, it fails badly for large ones like Fathom, which has complex new concepts to explain. What I wanted for Fathom’s manual was the ability to organize it logically, intersperse explanatory prose with extracted docs, and add entire sections which are nothing but conceptual overview and yet link into the rest of the work.1

The Python world has long favored Sphinx, a mature documentation tool with support for many languages and output formats, along with top-notch indexing, glossary generation, search, and cross-referencing. People have written entire books in it. Via plugins, it supports everything from Graphviz diagrams to YouTube videos. However, its JavaScript support has always lacked the ability to extract docs from code.

Now sphinx-js adds that ability, giving JavaScript developers the best of both worlds.

sphinx-js consumes standard JSDoc comments and tags—you don’t have to do anything weird to your source code. (In fact, it delegates the parsing and extraction to JSDoc itself, letting it weather future changes smoothly.) You just have Sphinx initialize a docs folder in the root of your project, activate sphinx-js as a plugin, and then write docs to your heart’s content using simple reStructuredText. When it comes time to call in some extracted documentation, you use one of sphinx-js’s special directives, modeled after the Python-centric autodoc’s mature example. The simplest looks like this:

.. autofunction:: linkDensity

That would go and find this function…

/**
 * Return the ratio of the inline text length of the links in an element to
 * the inline text length of the entire element.
 * @param {Node} node - The node whose density to measure
 * @throws {EldritchHorrorError|BoredomError} If the expected laws of the
 *     universe change, raise EldritchHorrorError. If we're getting bored of
 *     said laws, raise BoredomError.
 * @returns {Number} A ratio of link length to overall text length: 0..1
 */
function linkDensity(node) {
  // ...
}

…and spit out a nicely formatted block like this:

(the previous comment block, formatted nicely)

Sphinx begins to show its flexibility when you want to do something like adding a series of long examples. Rather than cluttering the source code around linkDensity, the additional material can live in the reStructuredText files that comprise your manual:

.. autofunction:: linkDensity
   Anything you type here will be appended to the function's description right
   after its return value. It's a great place for lengthy examples!

There is also a sphinx-js directive for classes, either the ECMAScript 2015 sugared variety or the classic functions-as-constructors kind decorated with @class. It can optionally iterate over class members, documenting as it goes. You can control ordering, turn private members on or off, or even include or exclude specific ones by name—all the well-thought-out corner cases Sphinx supports for Python code. Here’s a real-world example that shows a few truly public methods while hiding some framework-only “friend” ones:

.. autoclass:: Ruleset(rule[, rule, ...])
   :members: against, rules

(Ruleset class with extracted documentation, including member functions)

Going beyond the well-established Python conventions, sphinx-js supports references to same-named JS entities that would otherwise collide: for example, one foo that is a static method on an object and another foo that is an instance method on the same. It does this using a variant of JSDoc’s namepaths. For example…

  • someObject#foo is the instance method.
  • someObject.foo is the static method.
  • And someObject~foo is an inner member, the third possible kind of overlapping thing.
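A hypothetical class showing how such collisions arise in practice. The class name and method bodies are made up for illustration; only the namepath syntax comes from JSDoc:

```javascript
// Illustrative only: an instance method and a static method share the name
// "foo". JSDoc namepaths keep them apart: SomeObject#foo refers to the
// instance method, SomeObject.foo to the static one.
class SomeObject {
  /** Instance method: referenced as SomeObject#foo */
  foo() { return 'instance'; }

  /** Static method: referenced as SomeObject.foo */
  static foo() { return 'static'; }
}
```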

Because JSDoc is still doing the analysis behind the scenes, we get to take advantage of its understanding of these JS intricacies.

Of course, JS is a language of heavy nesting, so things can get deep and dark in a hurry. Who wants to type this full path in order to document innerMember?


Yuck! Fortunately, sphinx-js indexes all such object paths using a suffix tree, so you can use any suffix that unambiguously refers to an object. You could likely say just innerMember. Or, if there were 2 objects called “innerMember” in your codebase, you could disambiguate by saying staticMethod~innerMember and so on, moving to the left until you have a unique hit. This delivers brevity and, as a bonus, saves you having to touch your docs as things move around your codebase.

With the maturity and power of Sphinx, backed by the ubiquitous syntactical conventions and proven analytic machinery of JSDoc, sphinx-js is an excellent way to document any large JS project. To get started, see the readme. Or, for a large-scale example, see the Fathom documentation. A particularly juicy page is the Rule and Ruleset Reference, which intersperses tutorial paragraphs with extracted class and function docs; its source code is available behind a link in its upper right, as for all such pages.

I look forward to your success stories and bug reports—and to the coming growth of rich, comprehensibly organized JS documentation!

1JSDoc has tutorials, but they are little more than single HTML pages. They have no particular ability to cross-link with the rest of the documentation nor to call in extracted comments.

Alessio PlacitelliGetting Firefox data faster: introducing the ‘new-profile’ ping

Let me state this clearly, again: data latency sucks. This is especially true when working on Firefox: a nicely crafted piece of software that ships worldwide to many people. When something affects the experience of our users we need to know and react fast. The story so far… We started improving the latency of the … 

Soledad PenadesIf using ES6 `extends`, call `super()` before accessing `this`

I am working on rewriting some code that used an ES5 “Class” helper, to use actual ES6 classes.

I soon stumbled upon a weird error in which apparently valid code would be throwing a |this| used uninitialized in A class constructor error:

class A extends B {
  constructor() {
    this.someVariable = 'some value'; // fails
  }
}

I was absolutely baffled as to why this was happening… until I found the answer in a stackoverflow post: I had to call super() before accessing this.

With that, the following works perfectly:

class A extends B {
  constructor() {
    super(); // ☜☜☜ ❗️❗️❗️
    this.someVariable = 'some value'; // works!
  }
}

Edit: filed a bug in Firefox to at least get a better error message!


Dave TownsendThoughts on the module system

For a long long time Mozilla has governed its code (and a few other things) as a series of modules. Each module covers an area of code in the source and has an owner and a list of peers, folks that are knowledgeable about that module. The full list of modules is public. In the early days the module system was everything. Module owners had almost complete freedom to evolve their module as they saw fit including choosing what features to implement and what bugs to fix. The folks who served as owners and peers came from diverse areas too. They weren’t just Mozilla employees, many were outside contributors.

Over time things have changed somewhat. Most of the decisions about what features to implement and many of the decisions about the priority of bugs to be fixed are now decided by the product management and user experience teams within Mozilla. Almost all of the module owners and most of the peers are now Mozilla employees. It might be time to think about whether the module system still works for Mozilla and if we should make any changes.

In my view the current module system provides two things that it’s worth talking about. A list of folks that are suitable reviewers for code and a path of escalation for when disagreements arise over how something should be implemented. Currently both are done on a per-module basis. The module owner is the escalation path, the module peers are reviewers for the module’s code.

The escalation path is probably something that should remain as-is. We’re always going to need experts to defer decisions to, those experts often need to be domain specific as they will need to understand the architecture of the code in question. Maintaining a set of modules with owners makes sense. But what about the peers for reviewing code?

A few years ago both the Toolkit and Firefox modules were split into sub-modules with peers for each sub-module. We were saying that we trusted folks to review some code but didn’t trust them to review other code. This frequently became a problem when code changes touched more than one sub-module, a developer would have to get review from multiple peers. That made reviews go slower than we liked. So we dropped the sub-modules. Instead every peer was trusted to review any code in the Firefox or Toolkit module. The one stipulation we gave the peers was that they should turn away reviews if they didn’t feel like they knew the code well enough to review it themselves. This change was a success. Of course for complicated work certain reviewers will always be more knowledgeable about a given area of code but for simpler fixes the number of available reviewers is much larger than it was otherwise.

I believe that we should make the same change for the entire code-base. Instead of having per-module peers simply designate every existing peer as a “Mozilla code reviewer” able to review code anywhere in the tree so long as they feel that they understand the change well enough. They should escalate architectural concerns to the module’s owner.

This does bring up the question of how do patch authors find reviewers for their code if there is just one massive list of reviewers. That is where Bugzilla’s suggested reviewers feature comes in. We have per-component lists in Bugzilla of reviewers who are the appropriate choices for that component. Patches that cover more than one component can choose whoever they like to do the review.


The Mozilla BlogDefending Net Neutrality: Millions Rally to Save the Internet, Again

We’re fighting for net neutrality, again, because it is crucial to the future of the internet. Net neutrality serves to enable free speech, competition, innovation and user choice online.

On July 12, it was great to see such a diversity of voices speak up and join together to support a neutral internet. We need to protect the internet as a shared global public resource for us all. This Day of Action makes it clear, yet again, that net neutrality is a mainstream issue, which the majority of Americans (76% from our recent survey) care about and support.

We were happy to see a lot of engagement with our Day of Action activities:

  • Mozilla collected more than 30,000 public comments on July 12 alone — bringing our total number of public comments to more than 75,000. We’ll be sharing these with the FCC
  • Our nine hour Soothing Sounds of Activism: Net Neutrality video, along with interviews from Senators Al Franken and Ron Wyden, received tens of thousands of views
  • The net neutrality public comments displayed on the U.S. Firefox snippet made 6.8 million impressions
  • 30,000 listeners tuned in for the net neutrality episode of our IRL podcast

The Day of Action was timed a few days before the first deadline for comments to the FCC on the proposed rollback of existing net neutrality protections. This is just the first step though. Mozilla takes action to protect net neutrality every day, because it’s obviously not a one day battle.

Net neutrality is not the sole responsibility of any one company, individual or political party. We need to join together because the fight for net neutrality impacts the future of the internet and everyone who uses it.

What’s Next?

Right now, we’re finishing our FCC comments to submit on July 17. Next, we’ll continue to advocate for enforceable net neutrality through all FCC deadlines and we’ll defend the open internet, just like we did with our comments and efforts to protect net neutrality in 2010 and 2014.

The post Defending Net Neutrality: Millions Rally to Save the Internet, Again appeared first on The Mozilla Blog.

QMOFirefox Developer Edition 55 Beta 11 Testday, July 21st

Hello Mozillians,

We are happy to let you know that Friday, July 21st, we are organizing Firefox Developer Edition 55 Beta 11 Testday. We’ll be focusing our testing on the following features: Screenshots, Shutdown Video Decoder and Customization.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Air MozillaReps Weekly Meeting Jul. 13, 2017

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

William LachanceTaking over an npm package: sanity prevails

Sometimes problems are easier to solve than expected.

For the last few months I’ve been working on the front end of a new project called Mission Control, which aims to chart lots of interesting live information in something approximating realtime. Since this is a greenfield project, I thought it would make sense to use the currently in-vogue framework at Mozilla (react) along with our standard visualization library, metricsgraphics.

Metricsgraphics is great, but its jquery-esque API is somewhat at odds with the react way. The obvious solution to this problem is to wrap its functionality in a react component, and a quick Google search determined that several people have tried to do exactly that, the most popular being one called (obviously) react-metrics-graphics. Unfortunately, it hadn’t been updated in quite some time, and some pull requests (including ones implementing features I needed for my project) weren’t being responded to.

I expected this to be pretty difficult to resolve: I had no interaction with the author (Carter Feldman) before but based on my past experiences in free software, I was expecting stonewalling, leaving me no choice but to fork the package and give it a new name, a rather unsatisfying end result.

But, hey, let’s keep an open mind on this. What does google say about unmaintained npm packages? Oh what’s this? They actually have a policy?

tl;dr: You email the maintainer (politely) and CC support@npmjs.org about your interest in helping maintain the software. If you’re unable to come up with a resolution on your own, they will intervene.

So I tried that. It turns out that Carter was really happy to hear that Mozilla was interested in taking over maintenance of this project, and not only gave me permission to start publishing newer versions to npm, but even transferred his repository over to Mozilla (so we could preserve issue and PR history). The project’s new location is here:


In hindsight, this is obviously the most reasonable outcome and I’m not sure why I was expecting anything else. Is the node community just friendlier than other areas I’ve worked in? Have community standards improved generally? In any case, thank you Carter for a great piece of software, hopefully it will thrive in its new home. :P

Firefox UXFirefox Workflow User Research in Germany

Munich Public Transit (Photo: Gemma Petrie)

Last year, the Firefox User Research team conducted a series of formative research projects studying multi-device task continuity. While these previous studies broadly investigated types of task flows and strategies for continuity across devices, they did not focus on the functionality, usability, or user goals behind these specific workflows.

For most users, interaction with browsers can be viewed as a series of specific, repeatable workflows. Within the idea of a “workflow” is the theory of “flow.” Flow has been defined as:

a state of mind experienced by people who are deeply involved in an activity. For example, sometimes while surfing the Net, people become so focused on their pursuit that they lose track of time and temporarily forget about their surroundings and usual concerns…Flow has been described as an intrinsically enjoyable experience.¹

As new features and service integrations are introduced to existing products, there is a risk that unarticulated assumptions about usage context and user mental models could create obstacles for our users. Our goal for this research was to identify these obstacles and gain a detailed understanding of the behaviors, motivations, and strategies behind current browser-based user workflows and related device or app-based workflows. These insights will help us develop products, services, and features for our users.

Primary Research Questions

  • How can we understand users’ current behaviors to develop new workflows within the browser?
  • How do workflows & “flow” states differ between and among different devices?
  • In which current browser workflows do users encounter obstacles? What are these obstacles?
  • Are there types of workflows for specific types of users and their goals? What are they?
  • How are users’ unmet workflow needs being met outside of the browser? And how might we meet those needs in the browser?


In order to understand users’ workflows, we employed a three-part, mixed-method approach.


Survey

The first phase of our study was a twenty-question survey deployed to 1,000 respondents in Germany provided by SSI’s standard international general population panel. We asked participants to select the Internet activities they had engaged in during the previous week. Participants were also asked questions about their browser usage on multiple devices as well as perceptions of privacy. We modeled this survey off of Pew Research Center’s “The Internet and Daily Life” study.

Experience Sampling

In the second phase, a separate group of 26 German participants was recruited from four major German cities: Cologne, Hamburg, Munich, and Leipzig. These participants represented a diverse range of demographic groups, and half of the participants used Firefox as their primary browser on at least one of their devices. Participants were asked to download a mobile app called Paco. Paco cued participants up to seven times daily, asking them about their current Internet activity, the context for it, and their mental state while completing it.

In-Person Interviews

In the final phase of the study, we selected 11 of the participants from the Experience Sampling segment from Hamburg, Munich, and Leipzig. Over the course of 3 weeks, we visited these participants in their homes and conducted 90-minute interview and observation sessions. Based on the survey results and experience sampling observations, we explored a small set of participants’ workflows in detail.

Product Managers participating in affinity diagramming in the Mozilla Toronto office. (Photo: Gemma Petrie)

Field Team Participation

The Firefox User Research team believes it is important to involve a wide variety of staff members in the experience of in-context research and analysis activities. Members of the Firefox product management and UX design teams accompanied the research team for these in-home interviews in Germany. After the interviews, the whole team met in Toronto for a week to absorb and analyze the data collected from the three segments. The results presented here are based on the analysis provided by the team.


Based on our research, we define a workflow as a habitual, frequently employed set of discrete steps that users build into a larger activity. Users employ the tools they have at hand (e.g., tabs, bookmarks, screenshots) to achieve a goal. Workflows can also span across multiple devices, range from simple to technically sophisticated, exist across noncontinuous durations of time, and contain multiple decisions within them.

Example Workflow from Hamburg Participant #2

We observed that workflows appear to participants as naturally constructed actions. Their workflows were so unconscious or self-evident that participants often found it challenging to articulate and reconstruct them. Examples of workflows include: Comparison shopping, checking email, checking news updates, and sharing an image with someone else.

Workflows Model

Based on our study, we have developed a general two-part model to illustrate a workflow.

Part 1: Workflows are constructed from discrete steps. These steps are atomic and include actions like typing in a URL, pressing a button, taking a screenshot, sending a text message, saving a bookmark, etc. We mean “atomic” in the sense that the steps are simple, irreducible actions in the browser or other software tools. When employed alone, these actions can achieve a simple result (e.g. creating a bookmark). Users build up the atomic actions into larger actions that constitute a workflow.

Part 2: Outside factors can influence the choices users make for both a whole workflow and steps within a workflow. These factors include software components, physical components, and psycho/social/cultural factors.
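As a loose illustration (our own sketch, not an artifact of the research), the two-part model can be written down as a simple data shape, with atomic steps composed into a workflow and outside factors attached as influences:

```javascript
// A hypothetical "comparison shopping" workflow expressed in the
// two-part model. All names here are illustrative, not from the study.
const comparisonShopping = {
  steps: [ // Part 1: atomic, irreducible actions
    { action: "openTab", target: "shop-a.example" },
    { action: "openTab", target: "shop-b.example" },
    { action: "bookmark", target: "shop-a.example/item" },
  ],
  influences: { // Part 2: factors shaping the user's choices
    software: ["tabs", "bookmarks"],
    physical: ["mobile form factor", "network availability"],
    psychoSocialCultural: ["memory limits", "privacy concerns"],
  },
};

console.log(comparisonShopping.steps.length); // 3
```

The point of the shape is that the same atomic steps (openTab, bookmark) can be recombined into many different workflows.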

Trying to find the Mozilla Berlin office. (Photo: Gemma Petrie)

Factors Influencing Workflows

While workflows are composed from atomic building blocks of tools, there is a great deal more that influences their construction and adoption among users.

Software Components

Software components are features of the operating system, the browser, and the specs of web technology that allow users to complete small atomic tasks. Some software components also constrain users into limited tasks or are obstacles to some workflows.

The basic building blocks of the browser are the features, tools, and preferences that allow users to complete tasks with the browser. Some examples include: Tabs, bookmarks, screenshots, authentication, and notifications.

Physical Components

Physical components are the devices and technology infrastructure that inform how users interact with software and the Internet. These components employ software but it is users’ physical interaction with them that makes these factors distinct. Some examples include: Access to the internet, network availability, and device form factors.

Psycho/Social/Cultural Factors

Psycho/Social/Cultural influences are contextual, social, and cognitive factors that affect users’ approaches to and decisions about their workflows.

Participants use memory to fill in gaps in their workflows where technology does not support persistence. For example, when comparison shopping, a user has multiple tabs open to compare prices; the user is using memory to keep in mind prices from the other tabs for the same item.

Participants exercised control over the role of technology in their lives either actively or passively. For example, some participants believed that they received too many notifications from apps and services, and often did not understand how to change these settings. This experience eroded their sense of control over their technology and forced these participants to develop alternate strategies for regaining control over these interruptions. For others, notifications were seen as a benefit. For example, one of our Leipzig participants used home automation tools and their associated notifications on his mobile devices to give him more control over his home environment.

Other examples of psycho/social/cultural factors we observed included: Work/personal divides, identity management, fashion trends in technology adoption, and privacy concerns.

Using the Workflows Model

When analyzing current user workflows, the parts of the model should be cues to examine how the workflow is constructed and what factors influence its construction. When building new features, it can be helpful to ask the following questions to determine viability:

  • Are the steps we are creating truly atomic and usable in multiple workflows?
  • Are we supplying software components that give flexibility to a workflow?
  • What effect will physical factors have on the atomic components in the workflow?
  • How do psycho-social-cultural factors influence users’ choices about the components they are using in the workflow?

Hamburg Train Station (Photo: Gemma Petrie)

Design Principles & Recommendations

  • New features should be atomic elements, not complete user workflows.
  • Don’t be prescriptive, but facilitate efficiency.
  • Give users the tools to build their own workflows.
  • While software and physical components are important, psycho/social/cultural factors are equally important and influential in users’ workflow decisions.
  • Make it easy for users to actively control notifications and other flow disruptors.
  • Leverage online content to support and improve offline experiences.
  • Help users bridge the gap between primary-device workflows and secondary devices.
  • Make it easy for users to manage a variety of identities across various devices and services.
  • Help users manage memory gaps related to revisiting and curating saved content.

Future Research Phases

The Firefox User Research team conducted additional phases of this research in Canada, the United States, Japan, and Vietnam. Check back for updates on our work.


¹ Pace, S. (2004). A grounded theory of the flow experiences of Web users. International journal of human-computer studies, 60(3), 327–363.

Firefox Workflow User Research in Germany was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Air MozillaThe Joy of Coding - Episode 105

The Joy of Coding - Episode 105 mconley livehacks on real Firefox bugs while thinking aloud.

Chris H-CLatency Improvements, or, Yet Another Satisfying Graph

This is the third in my ongoing series of posts containing satisfying graphs.

Today’s feature: a plot of the mean and 95th percentile submission delays of “main” pings received by Firefox Telemetry from users running Firefox Beta.

Screenshot-2017-7-12 Beta _Main_ Ping Submission Delay in hours (mean, 95th %ile)

We went from receiving 95% of pings after about, say, 130 hours (or 5.5 days) down to getting them within about 55 hours (2 days and change). And the numbers will continue to fall as more beta users get the modern beta builds with lower latency ping sending thanks to pingsender.
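The 95th-percentile statistic in the plot can be illustrated with a short sketch (our own nearest-rank implementation for illustration, not Telemetry's actual pipeline code):

```javascript
// Nearest-rank percentile: the smallest value such that p% of the
// observations are at or below it. Delay values are hours between a
// ping being created on the client and received on the server.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const delays = [1, 2, 3, 50, 55, 130]; // made-up submission delays, in hours
console.log(percentile(delays, 95)); // 130
```

A falling 95th percentile means the slowest 5% of pings are arriving sooner, which is exactly what pingsender improved.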

What does this mean? This means that you should no longer have to wait a week to get a decently-rigorous count of data that comes in via “main” pings (which is most of our data). Instead, you only have to wait a couple of days.

Some teams were using the rule-of-thumb of ten (10) days before counting anything that came in from “main” pings. We should be able to reduce that significantly.

How significantly? Time, and data, will tell. This quarter I’m looking into what guarantees we might be able to extend about our data quality, which includes timeliness… so stay tuned.

For a more rigorous take on this, partake in any of dexter’s recent reports on RTMO. He’s been tracking the latency improvements and possible increases in duplicate ping rates as these changes have ridden the trains towards release. He’s blogged about it if you want all the rigour but none of the Python.


FINE PRINT: Yes, due to how these graphs work they will always look better towards the end because the really delayed stuff hasn’t reached us yet. However, even by the standards of the pre-pingsender mean and 95th percentiles we are far enough after the massive improvement for it to be exceedingly unlikely to change much as more data is received. By the post-pingsender standards, it is almost impossible. So there.

FINER PRINT: These figures include adjustments for client clocks having skewed relative to server clocks. Time is a really hard problem even on a single computer, and trying to reconcile it between many computers separated by oceans both literal and metaphorical is the subject of several dissertations and, likely, therapy sessions. As I mentioned above, for rigour and detail about this and other aspects, see RTMO.

The Mozilla BlogDefending Net Neutrality: A Day of Action

Mozilla is participating in the Day of Action with a new podcast, video interviews with U.S. Senators, a special Firefox bulletin, and more


As always, Mozilla is standing up for net neutrality.

And today, we’re not alone. Hundreds of organizations — from the ACLU and GitHub to Amazon and Fight for the Future — are participating in a Day of Action, voicing loud support for net neutrality and a healthy internet.

“Mozilla is supporting the majority of Americans who believe the web belongs to individual users, without interference from ISP gatekeepers,” says Ashley Boyd, Mozilla’s VP of Advocacy. “On this Day of Action, we’re amplifying what millions of Americans have been saying for years: Net neutrality is crucial to a free, open internet.”

“We are fighting to protect net neutrality, again, because it’s crucial to the future of the internet,” says Denelle Dixon, Mozilla Chief Legal and Business Officer. “Net neutrality prohibits ISPs from engaging in prioritization, blocking or throttling of content and services online. As a result, net neutrality serves to enable free speech, competition, innovation and user choice online.”

The Day of Action is a response to FCC Commissioner Ajit Pai’s proposal to repeal net neutrality protections enacted in 2015. The FCC voted to move forward with Pai’s proposal in May; we’re currently in the public comment phase. You can read more about the process here.

Here’s how Mozilla is participating in the Day of Action — and how you can get involved, too:

Nine hours of public comments. Over the past few months, Mozilla has collected more than 60,000 comments from Americans in defense of net neutrality.

“The internet should be open for all and not given over to big business,” wrote one commenter. “Net neutrality protects small businesses and innovators who are just getting started,” penned another.

We’ll share all 60,000 comments with the FCC. But first, we’re reading a portion of them aloud in a nine-hour, net neutrality-themed spoken-word marathon.

And we’re showcasing the comments on Firefox, to inspire more Americans to stand up for net neutrality. When Firefox users open a new window today, a different message in support of net neutrality will appear in the “snippet,” the bulletin above and beneath the search bar.

It’s not too late to submit your own comment. Visit mzl.la/savetheinternet to add your voice.

A word from Senators Franken and Wyden. Senator Al Franken (D-Minnesota) and Senator Ron Wyden (D-Oregon) are two of the Senate’s leading voices for net neutrality. Mozilla spoke with both about net neutrality’s connection to free speech, competition, and innovation. Here’s what they had to say:

Stay tuned for more interviews with Congress members about the importance of net neutrality.

Comments for the FCC. Mozilla’s Public Policy team is finishing up comments to the FCC on the importance of enforceable net neutrality to ensure that voices are free to be heard. They will speak to how net neutrality fundamentally enables free speech, online competition and innovation, and user choice. Like our comments from 2010 and 2014, we will defend all users’ ability to create and consume online, and will defend the vitality of the internet. User rights should not be used in a political play.

Net neutrality podcast. We just released the second episode of Mozilla’s original podcast, IRL, which focuses on who wins — and who loses — if net neutrality is repealed. Listen to host Veronica Belmont explore the issue in depth with a roster of guests holding different viewpoints, from Patrick Pittaluga of Grubbly Farms (a maggot farming business in Georgia), to Jessica González of Free Press, to Dr. Roslyn Layton of the American Enterprise Institute.

Subscribe wherever you get your podcasts, or listen on our website.

Today, we’re amplifying the voices of millions of Americans. And we need your help: Visit mzl.la/savetheinternet to join the movement. The future of net neutrality — and the very health of the internet — depends on it.

Note: This blog was updated on July 12 at 2:30 p.m. ET to reflect the most recent number of public comments collected.

The post Defending Net Neutrality: A Day of Action appeared first on The Mozilla Blog.

Air MozillaWebdev Extravaganza: July 2017

Webdev Extravaganza: July 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on. This...

Air MozillaRain of Rust -4th online meeting

Rain of Rust -4th online meeting An online event - part of the RainofRust campaign

Michael KellyUsing NPM Libraries in Firefox via Webpack

I work on a system add-on for Firefox called the Shield Recipe Client. We develop it in a monorepo on Github along with the service it relies on and a few other libraries. One of these libraries is mozJexl, an expression language that we use to specify how to filter experiments and surveys we send to users.

The system add-on relies on mozJexl, and for a while we were pulling in the dependency by copying it from node_modules and using a custom CommonJS loader to make require() calls work properly. This wasn't ideal for a few reasons:

  • We had to determine manually which file contained the exports we needed, instead of being able to use the documented exports that you'd get from a require() call.

  • Because library files could require() any other file within node_modules, we had to copy the entire directory into our add-on.

  • We didn't hit this with mozJexl, but I'm pretty sure that if a library we wanted to include had dependencies of its own, our custom loader wouldn't have resolved the paths properly.

While working on another patch, I hit a point where I wanted to pull in ajv to do some schema validation and decided to see if I could come up with something better.


I already knew that a few components within Firefox are using Webpack, such as debugger.html and Activity Stream. As far as I can tell, they bundle all of their code together, which is standard for Webpack.

I wanted to avoid this, because we sometimes get fixes from Firefox developers that we upstream back to Github. We also get help in the form of debugging from developers investigating issues that lead back to our add-on. Both of these would be made more difficult by landing webpacked code that is different from the source code we normally work on.

Instead, my goal was to webpack only the libraries that we want to use in a way that provided a similar experience to require(). Here's the Webpack configuration that I came up with:

/* eslint-env node */
var path = require("path");
var ConcatSource = require("webpack-sources").ConcatSource;
var LicenseWebpackPlugin = require("license-webpack-plugin");

module.exports = {
  context: __dirname,
  entry: {
    mozjexl: "./node_modules/mozjexl/",
  },
  output: {
    path: path.resolve(__dirname, "vendor/"),
    filename: "[name].js",
    library: "[name]",
    libraryTarget: "this",
  },
  plugins: [
    /**
     * Plugin that appends "this.EXPORTED_SYMBOLS = ["libname"]" to assets
     * output by webpack. This allows built assets to be imported using
     * Cu.import.
     */
    function ExportedSymbols() {
      this.plugin("emit", function(compilation, callback) {
        for (const libraryName in compilation.entrypoints) {
          const assetName = `${libraryName}.js`; // Matches output.filename
          compilation.assets[assetName] = new ConcatSource(
            "/* eslint-disable */", // Disable linting
            compilation.assets[assetName], // The original webpacked asset
            `this.EXPORTED_SYMBOLS = ["${libraryName}"];` // Matches output.library
          );
        }
        callback();
      });
    },
    new LicenseWebpackPlugin({
      pattern: /^(MIT|ISC|MPL.*|Apache.*|BSD.*)$/,
      filename: `LICENSE_THIRDPARTY`,
    }),
  ],
};

(See also the pull request itself.)

Each entry point in the config is a library that we want to use, with the key being the name we’re using to export it, and the value being the path to its directory in node_modules¹. The output of this config is one file per entry point inside a vendor subdirectory. You can then import these files as if they were normal .jsm files:

const jexl = new mozjexl.Jexl();


The key turned out to be Webpack's options for bundling libraries:

By setting output.library to a name like mozJexl, and output.libraryTarget to this, you can produce a bundle that assigns the exports from your entry point to this.mozJexl. In the configuration above, I use the webpack variable [name] to set it to the name for each export, since we're exporting multiple libraries with one config.
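To illustrate the mechanism (a hand-written sketch of the wrapper shape, not real webpack output): with libraryTarget set to this, evaluating the bundle in some scope attaches the exports to that scope under the output.library name:

```javascript
// Stand-in for a webpack bundle built with:
//   output.library: "mozjexl", output.libraryTarget: "this"
// The actual module machinery is omitted; only the assignment that
// libraryTarget: "this" generates is shown.
function fakeBundle() {
  this.mozjexl = { Jexl: function Jexl() {} }; // the entry point's exports
}

const scope = {};        // stands in for the global `this` of a .jsm
fakeBundle.call(scope);  // evaluating the bundle in that scope
console.log(typeof scope.mozjexl.Jexl); // "function"
```

In a .jsm, `this` is the module's global, so the assignment makes mozjexl visible to anything that imports the file.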


Assuming that the bundle will work in a chrome environment, this is very close to being a JavaScript code module. The only thing missing is this.EXPORTED_SYMBOLS to define what names we're exporting. Luckily, we already know the name of the symbols being exported, and we know the filename that will be used for each entry point.

I used this info to write a small Webpack plugin that prepends an eslint-ignore comment to the start of each generated file (since we don't want to lint bundled code) and this.EXPORTED_SYMBOLS to the end of each generated file:

function ExportedSymbols() {
  this.plugin("emit", function(compilation, callback) {
    for (const libraryName in compilation.entrypoints) {
      const assetName = `${libraryName}.js`; // Matches output.filename
      compilation.assets[assetName] = new ConcatSource(
        "/* eslint-disable */", // Disable linting
        compilation.assets[assetName], // The original webpacked asset
        `this.EXPORTED_SYMBOLS = ["${libraryName}"];` // Matches output.library
      );
    }
    callback();
  });
}
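The effect of the plugin can be sketched with plain string concatenation (an illustration of the transformation, standing in for webpack's ConcatSource; wrapAsset is our own hypothetical helper):

```javascript
// Sketch: what an emitted asset looks like after the plugin runs.
function wrapAsset(libraryName, source) {
  return [
    "/* eslint-disable */",                        // prepended: disable linting
    source,                                        // the original bundle text
    `this.EXPORTED_SYMBOLS = ["${libraryName}"];`, // appended: chrome export list
  ].join("\n");
}

const wrapped = wrapAsset("mozjexl", "this.mozjexl = {};");
console.log(wrapped.endsWith('this.EXPORTED_SYMBOLS = ["mozjexl"];')); // true
```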


During code review, mythmon brought up an excellent question: how do we retain licensing info for these files when we sync to mozilla-central? It turns out there’s a rather popular Webpack plugin called license-webpack-plugin that collects license files found during a build and outputs them into a single file:

new LicenseWebpackPlugin({
  pattern: /^(MIT|ISC|MPL.*|Apache.*|BSD.*)$/,
  filename: `LICENSE_THIRDPARTY`,
})

(Why MIT/ISC/MPL/etc.? I just used what I thought were common licenses for libraries we were likely to use.)

Future Improvements

This is already a useful improvement over our old method of pulling in dependencies, but there are some potential improvements I'd eventually like to get to:

  • The file size of third-party libraries is not insignificant, especially with their own dependencies. I'd like to consider minifying the bundles, potentially with source maps to aid debugging. I'm not even sure that's a thing for chrome code, though.

  • Some libraries may rely on browser globals, like fetch. I'd like to figure out how to auto-prepend Components.utils.importGlobalProperties to library files that need certain globals that aren't normally available.

  • If several system add-ons use this pattern, we might end up with multiple copies of the same library in mozilla-central. Deduplicating this code where possible would be nice.

  • If there's enough interest in it, I'd be interested in pulling this pattern out into some sort of plugin/preset so that other system add-ons can also use npm libraries with ease.

  1. Did you know that Webpack will automatically use the main module defined in package.json as the entry point if the path points to a directory with that file?

Emma IrwinReflection — Inclusive Organizing for Open Source Communities

This is the second in a series of posts reporting findings from three months of research into the state of D&I in Mozilla’s communities. The current state of our communities is mixed when it comes to inclusivity: we can do better, and this blog post is an effort to be transparent about what we’ve learned in working toward that goal.

If we are to truly unleash the full value of a participatory, open Mozilla, we need to personalize and humanize Mozilla’s value by designing diversity and inclusion into our strategy. To unlock the full potential of an open innovation model we need radical change in how we develop community and who we include.

Photo credit: national museum of american history via VisualHunt.com / CC BY-NC

We learned that while bottom-up, or grassroots organizing enables ideas and vision to surface within Mozilla, those who succeed in that environment tend to be of the same backgrounds, usually the more dominant or privileged background, and often in tenured* community roles.

*tenured — Roles assigned by Mozilla, or legacy to the community that exist without timelines, or cycles for renewal/election.

Our research dove into many areas of inclusive organizing, and three clusters of recommendations surfaced:

1. Provide Organizational Support to Identity Groups


For a diversity of reasons, including safety, friendship, mentorship, advocacy and empowerment, identity groups serve both as a safe space and as a springboard into, and out of, greater community participation, for the betterment of both. Those in Mozilla’s Tech Speakers program spoke very positively of the opportunity, responsibility and camaraderie provided by this program, which brings together contributors focused on building speaking skills for conferences.

“Building a sense of belonging in small groups, no matter the focus (Tech Speakers, Women of Color, Spanish Speaking), is an inclusion methodology.” — Non-Binary contributor, Europe

Womoz (Women of Mozilla) showed up in our research as an example of an identity group trying to solve and overcome systemic problems faced by women in open source. However, without centralized goals or structured organizational support for the program, communities were left on their own to define the group, its goals, who is included, and why. As a result we found vastly different perspectives on, and implementations of, ‘Womoz’, from successful inclusion methodologies to exclusive or even relegatory groups:

  • As a ‘label’. ‘Catch-all’ for non- male non-binary contributors.
  • A way for women to gather that aligns with cultural traditions for convening of young women — and approved by families within specific cultural norms.
  • A safe, and empowered way to meet and talk with people of the same gender-identity about topics and problems specific to women.
  • A way to escape toxic communities where discussion and opportunity is dominated by men, or where women were purposely excluded.
  • An online community with several different channels for sharing advocacy (mailing list, Telegram group, website, wiki page).
  • As a stereotypical way to reference women-only contributing in specific areas (such as crafts, non-technical) or relegating women to those areas.

For identity groups to fully flourish, and to avoid negative manifestations like tokenism and relegation, the recommendation is that organizational support (dedicated strategic planning, staffing, time, and resources) should be prioritized where leadership and momentum exist, and where diversity can thrive. Mozilla is already investing in a pilot for identity groups with staff and core contributors, which is exciting progress on what we’ve learned. The full potential is that identity groups act as ‘diversity launch pads’ into and out of the larger community, for the benefit of both.

2. Build Project-Wide Strategies for Toxic Behavior

Photo by Paul Riismandel BY-NC-SA 2.0

Research exposed toxic behavior as a systemic issue being tackled in many different ways, and by both employees and community leaders truly dedicated to creating healthy communities.

Our insights amplified external findings on toxic behavior, which show that toxic people tend to be highly productive, masking the true impact of behavior that deters and excludes large numbers of people. Furthermore, the perceived value of, or reliance on, the work of toxic individuals and regional communities was suspected to complicate, deter, or dilute intervention effectiveness.

Within this finding we have three recommendations:

  1. Develop regionally-specific strategies for community organizing. Global regions with laws and cultural norms that exclude, or limit opportunity for, women and other marginalized groups demonstrate the challenge of a global approach to open source community health. Generally, we found that those groups with cultural and economic privilege within a given local context were also the majority of the people who succeeded in participation. Where toxic communities and individuals exist, the disproportionate representation of diverse groups was magnified to the point where standard approaches to community health were not successful. Specialized strategies might be very different per region, but with investment can amplify the potential of marginalized people within.
  2. Create an organization-wide strategy for tracking decisions and outcomes of Community Participation Guidelines violations. While some projects and project maintainers have been successful in banning and managing toxic behavior, there is no central mechanism for tracking those decisions for the benefit of others who may be faced with similar decisions, or the same people. We learned of at least one case where a banned toxic contributor surfaced in another area of the project without the knowledge of that team. Efforts in this vein, including recent revisions to the guidelines and current efforts to train community leaders and build a central mechanism, need to be continued and expanded.
  3. Build team strategies for managing conflict and toxic behavior. A range of approaches to community health surfaced both success and struggle in managing conflict and toxic behavior within projects and community events. What appears to be important in successful moderation and resolution is that project teams designate specific roles, with training, accountability and transparency.

3. Develop Inclusive Community Leadership Models

Public Domain Photo

In qualitative interviews and data analysis, ‘gatekeeping* showed up as one of the greatest blockers for diversity in communities.

Characteristics of gatekeeping in Mozilla’s communities included withholding opportunity and recognition (like vouching), purposeful exclusion, delay of, or limiting language translation of opportunity, and well as lying about the availability or existence of leadership roles, and event invitations/applications. We also heard a lot about nepotism, as well as both passive and deliberate banishment of individuals — especially women, who showed a sharp decrease in participation over time compared with men.

*Gatekeeping is noted as only one reason participation for women decreased, and we will expand on that in other posts.

Much of this data reinforced what we know about the myth of meritocracy, and that despite its promise, meritocracy actually enables toxic, exclusionary norms that prevent development of resilient, diverse communities.

“New people are not really welcomed, they are not given a lot of information on purpose.” (male contributor, South America)

As a result of these findings, we recommend all community leadership roles be designed with accountability for community health, inclusion and surfacing the achievement of others as core function. Furthermore, renewable cycles for community roles should be required, with designed challenges (like elections) for terms that extend beyond one year, with outreach to diverse groups for participation.

The long-term goal is that leadership begins a cultural shift in community that radiates empowerment for diverse groups who might not otherwise engage.

Cross-posted to Medium

Our next post in this series ‘We See you — Reaching Diverse Audiences in FOSS’, will be published on July 17th. Until then, check out ‘Increasing Rusts Reach an initiative that emerged from a design sprint focused on this research.


QMOFirefox 55 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – July 7th – we held a new Testday event for Firefox 55.0b7.

Thank you all for helping us make Mozilla a better place – Gabriela, Athira Appu, Surentharan.R.A.

From the India team: Surentharan.R.A, Fahima Zulfath A, Kaviya D, Baranitharaan, Nagarajan.R, Terry John Paul.P, Vinothini.K, Ponmurugesh.M, Haritha K Sankari.

From the Bangladesh team: Nazir Ahmed Sabbir, Maruf Rahman, Md. Tarikul Islam Oashi, Sajal Ahmed, Tanvir Rahman, Sajedul Islam, Jakaria Muhammed Jakir, Md. Harun-Or-Rashid Sajjad, Iftekher Alam, Md. Almas Hossain, Md. Rahatul Islam, Jobayer Ahmed Mickey, Humayra Khanum, Anika Alam, Md. Majedul Islam, Md. Mafidul Islam Porag, Farhadur Raja Fahim, Azad Mohammad, Mahadi Hasan Munna, Sayed Ibn Masud.

– several test cases executed for the Shutdown Video Decoder and Customization features;

– 4 bugs verified: 1359681, 1367338, 1367627, 1370645.

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Mozilla Open Innovation TeamReflection — Inclusive Organizing for Open Source Communities

This is the second in a series of posts reporting findings from three months of research into the state of D&I in Mozilla’s communities. When it comes to inclusivity, the current state of our communities is mixed: we can do better, and this blog post is an effort to be transparent about what we’ve learned in working toward that goal.

If we are to truly unleash the full value of a participatory, open Mozilla, we need to personalize and humanize Mozilla’s value by designing diversity and inclusion into our strategy. To unlock the full potential of an open innovation model we need radical change in how we develop community and who we include.
Photo credit: national museum of american history via VisualHunt.com / CC BY-NC

We learned that while bottom-up, grassroots organizing enables ideas and vision to surface within Mozilla, those who succeed in that environment tend to come from the same backgrounds, usually the more dominant or privileged ones, and often hold tenured* community roles.

*tenured — roles assigned by Mozilla, or legacy roles in the community, that exist without timelines or cycles for renewal/election.

Our research dove into many areas of inclusive organizing, and three clusters of recommendations surfaced:

1. Provide Organizational Support to Identity Groups

For a diversity of reasons, including safety, friendship, mentorship, advocacy, and empowerment, identity groups serve both as a safe space and as a springboard into, and out of, greater community participation, to the betterment of both. Those in Mozilla’s Tech Speakers program spoke very positively of the opportunity, responsibility, and camaraderie provided by the program, which brings together contributors focused on building speaking skills for conferences.

“Building a sense of belonging in small groups, no matter the focus (Tech Speakers, Women of Color, Spanish Speaking), is an inclusion methodology.” — Non-Binary contributor, Europe

Womoz (Women of Mozilla) showed up in our research as an example of an identity group trying to solve and overcome systemic problems faced by women in open source. However, without centralized goals or structured organizational support for the program, communities were left on their own to define the group, its goals, who is included, and why. As a result we found vastly different perspectives on, and implementations of, ‘Womoz’, ranging from successful inclusion methodologies to exclusive or even relegatory groups:

  • As a ‘label’: a catch-all for non-male and non-binary contributors.
  • A way for women to gather that aligns with cultural traditions for convening of young women — and approved by families within specific cultural norms.
  • A safe, and empowered way to meet and talk with people of the same gender-identity about topics and problems specific to women.
  • A way to escape toxic communities where discussion and opportunity is dominated by men, or where women were purposely excluded.
  • An online community with several different channels for sharing advocacy (mailing list, Telegram group, website, wiki page).
  • As a stereotype that references women as contributing only in specific areas (such as crafts, or non-technical work), or that relegates women to those areas.

For identity groups to fully flourish, and to avoid negative manifestations like tokenism and relegation, our recommendation is that organizational support (dedicated strategic planning, staffing, time, and resources) be prioritized where leadership and momentum exist, and where diversity can thrive. Mozilla is already investing in a pilot for identity groups with staff and core contributors, which is exciting progress on what we’ve learned. The full potential is for identity groups to act as ‘diversity launch pads’ into and out of the larger community, to the benefit of both.

2. Build Project-Wide Strategies for Toxic Behavior

Photo by Paul Riismandel, CC BY-NC-SA 2.0

Research exposed toxic behavior as a systemic issue, tackled in many different ways by both employees and community leaders truly dedicated to creating healthy communities.

Our insights amplified external findings on toxic behavior, which show that toxic people tend to be highly productive, masking the true impact of behavior that deters and excludes large numbers of people. Furthermore, the perceived value of, and reliance on, the work of toxic individuals and regional communities was suspected to complicate and dilute the effectiveness of interventions.

Within this finding we have three recommendations:

  1. Develop regionally specific strategies for community organizing. Global regions whose laws and cultural norms exclude, or limit opportunities for, women and other marginalized groups demonstrate the challenge of a single global approach to open source community health. Generally, we found that the groups with cultural and economic privilege in a given local context also made up the majority of people who succeeded in participating. Where toxic communities and individuals exist, the underrepresentation of diverse groups was magnified to the point where standard approaches to community health were not successful. Specialized strategies might look very different per region, but with investment they can amplify the potential of the marginalized people within.
  2. Create an organization-wide strategy for tracking decisions and outcomes of Community Participation Guidelines violations. While some projects and project maintainers have been successful in banning and managing toxic behavior, there is no central mechanism for tracking those decisions for the benefit of others who may face similar decisions, or the same people. We learned of at least one case where a banned toxic contributor surfaced in another area of the project without the knowledge of that team. Efforts in this vein, including recent revisions to the guidelines and current efforts to train community leaders and build a central mechanism, need to be continued and expanded.
  3. Build team strategies for managing conflict and toxic behavior. A range of approaches to community health surfaced both success and struggle in managing conflict and toxic behavior within projects and at community events. What appears to matter most in successful moderation and resolution is that project teams have designated specific roles, with training, accountability, and transparency.

3. Develop Inclusive Community Leadership Models

In qualitative interviews and data analysis, ‘gatekeeping’* showed up as one of the greatest blockers to diversity in communities.

Characteristics of gatekeeping in Mozilla’s communities included withholding opportunity and recognition (like vouching), purposeful exclusion, delaying or limiting the translation of opportunities into other languages, as well as lying about the availability or existence of leadership roles and event invitations/applications. We also heard a lot about nepotism, and about both passive and deliberate banishment of individuals — especially women, who showed a sharp decrease in participation over time compared with men.

*Gatekeeping is noted as only one reason that participation by women decreased; we will expand on that in other posts.

Much of this data reinforced what we know about the myth of meritocracy, and that despite its promise, meritocracy actually enables toxic, exclusionary norms that prevent development of resilient, diverse communities.

“New people are not really welcomed, they are not given a lot of information on purpose.” (male contributor, South America)

As a result of these findings, we recommend that all community leadership roles be designed with accountability for community health, inclusion, and surfacing the achievements of others as core functions. Furthermore, renewable cycles for community roles should be required, with designed challenges (like elections) for terms that extend beyond one year, and with outreach to diverse groups for participation.

The long-term goal is that leadership begins a cultural shift in the community, one that radiates empowerment to diverse groups who might not otherwise engage.

Our next post in this series, ‘We See You — Reaching Diverse Audiences in FOSS’, will be published on July 21st. Until then, check out ‘Increasing Rust’s Reach’, an initiative that emerged from a design sprint focused on this research.

Reflection — Inclusive Organizing for Open Source Communities was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.