Justin Wood: Measuring Localization Time (in CI)

As we all know, measuring things is a good way to get concrete information. Now that Firefox CI is fully on Taskcluster, this was a good opportunity to measure and see what we can learn about timing in localization tasks.

The code that generated the following charts and data is viewable in my GitHub repo, though as of this writing the code is rough and was modified manually to help generate the related graphs in various configurations. I’m working on making it more general-purpose, with better actual testing around the logic.

One of the first things I looked at was per-task duration for the actual nightly-l10n task on beta/release, by Gecko version, per platform. [Note: Android is missing from these graphs because we no longer do Android single-locale on Beta/Release, so we don’t have metrics to compare] (graphs and takeaways after the jump)

With this graph, it is clear both that we made some general improvements between Gecko 59 and Gecko 60, and that Windows takes significantly longer than OSX/Linux (both of the latter run on Linux hosts).

Nothing in this graph was all that surprising to me, and it meshed with my preconceived understanding of the current state.

Next, I wondered whether the data would look different if I broke the tasks down to “per locale” timing, because the number of locales run in a single task, while roughly uniform for a given release, could vary from release to release, especially as we add new locales or change chunking values, etc.

This was interesting in that it shows the per-locale time amounts to just a bit under 10 minutes per locale for Linux, and at least double that for Windows. It also appears there is a slight regression for Windows on beta, but with the variability in earlier Windows tasks it’s hard to conclude much (though we did reduce some of that variability in the 61/62 releases).
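As a rough sketch of the per-locale normalization involved (the task records and durations below are invented for illustration; the real data comes from Taskcluster task timestamps):

```python
from collections import defaultdict

# Hypothetical task records: (platform, task duration in seconds,
# number of locales handled by the task).
tasks = [
    ("linux", 2850, 5),
    ("linux", 2900, 5),
    ("win32", 6200, 5),
    ("win32", 6050, 5),
]

# Normalize each task's duration by its locale count, then average
# per platform to get a per-locale time in minutes.
per_locale = defaultdict(list)
for platform, duration, n_locales in tasks:
    per_locale[platform].append(duration / n_locales)

avg_minutes = {p: sum(t) / len(t) / 60 for p, t in per_locale.items()}
```

With real data, pandas’ `groupby` collapses this to a one-liner, which is one reason that package earned a spot in the toolbelt.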

Now, moving over to the Nightly side of things to look closer, I realized that Plotly’s public charting wouldn’t let me do the nightly graphs for all platforms at once (too many data points!), so I resorted to a bit of back and forth locally. But eventually I made a full-nightly graph of all Windows tasks, split by week, per task. [This one I was eventually able to publish via the Plotly public API.]

With this one you’ll notice the obvious task regression near the end of 2017 and the smaller one near the end of the Gecko 62 cycle…

I spent a bit of time looking into what happened at the end of 2017, but it turns out this wasn’t really a regression so much as a stabilization. Prior to that, we had done 2 locales per task to work around some signing timing issues we hit early in the tc-migration work, and then decided it was ok to bump to ~5 locales per task. This, of course, bumps our per-task time up significantly; however, as you can see in my next graph, it doesn’t actually change the real metric much… (per-locale graph)

Unlike the first regression (end of 2017), the “just before Gecko 62” regression is still present in this graph. In offline mode (sorry no plotly link for these) I dug in deeper to see what could be causing this… (the following Graph has all platforms as my own mini sanity check)

Once I did some data-mining on my raw data, I found the changesets and taskIDs involved and traced it down to an actual regression in hg clone/update times, and filed Bug 1474159. That resulted in the OCC (Open Cloud Config) Pull Request that is currently awaiting landing and deployment.

If I had to take one thing away from this mini exercise, it is that there is certainly value in producing these types of graphs (and more), and that we should invest in more metrics like this going forward and keep them up where possible.

On the personal front, I’ve learned a few new tools for my own toolbelt here: Plotly, the Python package pandas, etc. I’ll see how we can put these tools to better use in the coming months.

The Firefox Frontier: Popular Firefox extensions now available in 7 new locales

Firefox is available in over 90 languages, giving millions of people around the world access to the web in words they understand. Our community of translators and localizers do this … Read more

The post Popular Firefox extensions now available in 7 new locales appeared first on The Firefox Frontier.

Mozilla Add-ons Blog: No Longer Lost in Translation

9 popular extensions in 7 new locales


You might have noticed that while Firefox supports 90 languages, many extensions and their listings on addons.mozilla.org (AMO) are only available in English.

At present, we don’t have a way to connect extension developers with the translation community at scale, and Pontoon, Mozilla’s tool for localizing products and websites, currently only supports translating the AMO site itself.

What we do have, however, is a desire to make translation resources available, a longstanding and active community of localizers, and friends on Mozilla’s Open Innovation team who specialize in putting the two together. Part of Open Innovation’s work is to explore new ways to connect communities of enthusiastic non-coding contributors to meaningful projects within Mozilla. Together with Rubén Martín, we ran a campaign to localize an initial group of top Firefox extensions into the 7 most popular languages of Firefox users.

More than 100 multilingual Mozillians answered the call for participation and submitted more than 140,000 translated words for these extensions using CrowdIn, a localization platform most recently used for Mozilla’s Common Voice project. These translations were reviewed by a core team of experienced localizers, who then provided approved translations to developers involved in the campaign to include in their next version update.

Now, you can enjoy this collection of extensions in Chinese (simplified), Dutch, French, German, Italian, Portuguese (Brazilian), and Spanish:

1-Click Youtube Download* · Adblock for Firefox · Download Flash and Video
Greasemonkey · New Tab Override · NoScript Security Suite
Pinterest Save Button · signTextJS plus · To Google Translate

While this campaign is limited to a small group of extensions, we hope to roll out a localization process for all interested extension developers in the near future. Stay tuned for more information!

If you would like to participate in future campaigns at Mozilla, subscribe to the Mission-Mozillians-Campaigns tag on Discourse to learn how to get involved. If you are specifically interested in localizing other content for Mozilla, check out the L10n wiki to learn how to get started.

Many thanks to the extension developers and localizers who participated in this campaign.

* Coming soon! If you would like a localized version of this extension in one of the languages listed above, install the extension now. During its next update, you will be automatically switched to the version for your locale.

The post No Longer Lost in Translation appeared first on Mozilla Add-ons Blog.

Mozilla Reps Community: H2 2018 Reps Plan for the Council

This blogpost was authored by Daniele Scasciafratte

What we have done

In the first half of the year, our focus was to align the program with Mission Driven Mozillians and prepare the mandatory Reps onboarding course to ensure it aligns with the D&I work. We have also started improving our understanding of the issues facing different Reps roles, like Mentors and Alumni.

We also worked on administrative issues, like the transition of inactive Reps to Alumni, the cleanup of old Reps applications still open after years, the correction of wrong mentors assigned to profiles, and the improvement of our reporting system. Last but not least, we began creating expertise teams that took over day-to-day tasks from the Council (newsletter and onboarding).

We’ve chosen to split our ideas and tasks into 3 different areas for the next half of the year (with the percentage of effort for each):

  • Prepare the ground for Mission Driven Mozillians: 45%
  • Visible wins: 45%
  • Miscellaneous: 10%

Prepare the ground for Mission Driven Mozillians

The Reps program is working to prepare the ground for Mission Driven Mozillians, and there are various tasks and issues to tackle along the way.

The most important point for the Reps Council is the role of Reps inside the communities. We know that in Mozilla there are a lot of international communities, local communities, and project-specific communities, and we need to understand and be ready to support all of them.

We gathered different ideas that we will investigate soon:

  • Role of Reps
    • Update all Reps on the latest developments of the Mission Driven Mozillians program
      • Understanding and agreement on Leadership Agreement
      • Start conversations on where Reps belong/Reps Role
      • We are already working on:
        • Restart Newsletter with a new team
        • Discussion about migrating the Reps mailing list to Discourse
    • Create courses for various roles
      • Review Team and Mentors already have a course
      • We are working on a new onboarding course
    • Skills criteria for various roles
      • Every role has a skill set, and we are working to formalize these as requirements
  • Mozilla Groups (MDM)
    • How they fit inside the Reps program
      • This is an experiment, and we need to understand how the Reps program can fit into this initiative
  • Onboarding
    • New SOPs for various roles/teams
      • We updated SOPs in the past, and we need to identify which ones need updating again
    • Courses inside the program
      • A few roles have courses, but for Reps we need to decide whether to create general courses or find existing ones
    • CPG Alignment (update the Reps with updated CPG information, add it as mandatory part to the onboarding course)

Visible wins

The program also needs to be more visible inside Mozilla and the other communities, so we’ve chosen to focus on 3 different areas:

  • Communication
    • Discover new areas where Reps are missing
      • There are a lot of communities where we don’t have a large representation or there is no interest
    • Reps news to all Mozilla
      • Share what Reps are doing, for example during the Weekly Monday Project Call
  • Statistics
    • Showcase events inside Mozilla
      • Improve the sharing of statistics about what volunteers are doing
  • Campaigns
    • Plan for more than 3 weeks out
      • Let the community be aware of a new campaign in advance
    • Pocket involvement
      • Part of Mozilla but there is no involvement of volunteers right now
    • Create an event asset repository
      • Events often require the same resources, such as graphics or links; we need to gather them in one place


The last area we chose is the miscellaneous ideas: things that do not block the program’s goals but can improve it. At the same time, these have low priority:

  • Improving mentee/mentors relationship
  • Improving understanding of Alumni role
  • Improve reps-tweet social usage
  • Style guidelines for the community
  • Reporting activities
    • Encourage Reps to report activities
      • Without reports, it’s difficult to make data-driven decisions
    • Understand issues Reps face with reporting

This list of ideas will be evaluated in the next quarters by the Council and as usual we are open to feedback!

Will Kahn-Greene: Thoughts on Guido retiring as BDFL of Python

I read the news of Guido van Rossum announcing his retirement as BDFL of Python and it made me a bit sad.

I've been programming in Python for almost 20 years on a myriad of open source projects, tools for personal use, and work. I helped out with several PyCon US conferences and attended several others. I met a lot of amazing people who have influenced me as a person and as a programmer.

I started PyVideo in March 2012. At a PyCon US after that (maybe 2015?), I found myself in an elevator with Guido and somehow we got to talking about PyVideo and he asked point-blank, "Why work on that?" I tried to explain what I was trying to do with it: create an index of conference videos across video sites, improve the meta-data, transcriptions, subtitles, feeds, etc. I remember he patiently listened to me and then said something along the lines of how it was a good thing to work on. I really appreciated that moment of validation. I think about it periodically. It was one of the reasons Sheila and I worked hard to transition PyVideo to a new group after we were burned out.

It wouldn't be an overstatement to say that through programming in Python, I've done some good things and become a better person.

Thank you, Guido, for everything!

QMO: Firefox 62 Beta 8 Testday Results

Hello Mozillians!

As you may already know, last Friday, July 13th, we held a new Testday event for Firefox 62 Beta 8.

Thank you all for helping us make Mozilla a better place: Douglas, rs and yaros.

From India team: Abhishek Haridass, amirtha venkataramani, Aishwarya Narasimhan, kesavamoorthy, Monisha.R and Mohammed Bawas.

From Bangladesh team: Hossain Al Ikram, Maruf Rahman, Tanvir Rahman, Moinul Haq, Kazi Ashraf Hossain, Yasin Hossain Rakib, Tanvir Mazharul, Nazmul Islam, Syed Irfan Hossain, H.M Sadman Amin, Md.Majedul islam, Sajedul Islam, Dola Mondal, Md. Raihan Ali, MIM AHMED JOY, MD Mizanur Rahman Rony and Tanzina Tonny.


– several test cases executed for 3-Pane Inspector, React animation inspector and Web Compatibility.

– 3 bugs verified: 1462469, 1464536 and 1462469.

Thanks for another successful testday! 🙂

Mike Hommey: Announcing git-cinnabar 0.5.0 beta 4

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 3?

  • Fixed incompatibility with Mercurial 3.4.
  • Performance and memory consumption improvements.
  • Work around networking issues while downloading clone bundles from the Mozilla CDN by using range requests to continue past failures.
  • Miscellaneous metadata format changes.
  • The prebuilt helper for Linux now works across more distributions (as long as libcurl.so.4 is present, it should work)
  • Updated git to 2.18.0 for the helper.
  • Properly support the pack.packsizelimit setting.
  • Experimental support for initial clone from a git repository containing git-cinnabar metadata.
  • Changed the default make rule to only build the helper.
  • Now can successfully clone the pypy and GNU octave mercurial repositories.
  • More user-friendly errors.

The Firefox Frontier: Get your game on, in the browser

The web is a gamer’s dream. It works on any device, can connect players across the globe, and can run a ton of games—from classic arcade games to old-school computer … Read more

The post Get your game on, in the browser appeared first on The Firefox Frontier.

Robert Kaiser: VR Map - A-Frame Demo using OpenStreetMap Data

As I mentioned previously, the Mixed Reality "virus" has caught me recently and I spend a good portion of my Mozilla contribution time presenting and writing demos for WebVR/XR nowadays.

The prime driver for writing my first such demo was that I wanted to do something meaningful with A-Frame. Previously, I had only played around with the Hello WebVR example and some small alterations around the basic elements seen in that one, which is also pretty much what I taught to others in the WebVR workshops I held in Vienna last year. Now, it was time to go beyond that, and as I had recently bought an HTC Vive, I wanted something where the controllers could be used - but still something that would fall back nicely and be usable in 2D mode on a desktop browser or even mobile screens.

While I was thinking about what I could work on in that area, another long-standing thought crossed my mind: How feasible is it to render OpenStreetMap (OSM) data in 3D using WebVR and A-Frame? I decided to try and find out.


First, I built on my knowledge from Lantea Maps and the fact that I had a tile cache server set up for that, and created a layer of a certain set of tiles on the ground to form the base. That brought me to a number of issues to think about and make decisions on: Should I respect the curvature of the earth, possibly putting the tiles and the viewer at a certain place on a virtual globe? Should I respect the terrain, especially the elevation of different points on the map? Also, as the VR scene relates to real-world sizes of objects, how large is a map tile actually in reality? After a lot of thinking, I decided that as this would be a simple demo, I would assume the earth is flat - both in terms of curvature ("the globe") and terrain - and the viewer would start off at coordinates 0/0/0, with x and z being the horizontal coordinates and y the vertical one, as usual in A-Frame scenes. For the tile size, I found that with OpenStreetMap using the Mercator projection, the tiles always stay squares, with different real-world sizes based on the latitude (and zoom level, but I always use the same high zoom there). In this respect, I still had to account for the real world being a globe.
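The latitude dependence can be made concrete. Under the Web Mercator projection used for OSM tiles, a tile's real-world size shrinks with the cosine of the latitude; a small sketch (the zoom level and example latitudes are illustrative, not necessarily what the demo uses):

```python
import math

EARTH_CIRCUMFERENCE_M = 40075016.686  # equatorial circumference, metres

def tile_ground_size_m(lat_deg, zoom):
    # A Web Mercator tile spans 360/2^zoom degrees of longitude;
    # on the ground, that width shrinks with the cosine of the latitude.
    return EARTH_CIRCUMFERENCE_M * math.cos(math.radians(lat_deg)) / 2 ** zoom
```

At the equator a zoom-17 tile covers roughly 306 m on a side; at Vienna's latitude (about 48.2 N), the same tile covers only around 204 m.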

Once I had those tiles rendering on the ground, I could think about navigation, so I added teleport controls, and later also movement controls to fly through the scene. With the W/A/S/D keys on the desktop (and later the fly controls), it was possible to "fly" underneath the ground, which was awkward, so later on I wrote a very simple "position-limit" A-Frame component, which prohibits that and is also a very nice example of how to build a component, because it's short and easy to understand.

All this isn't using OSM data per se, but just the pre-rendered tiles, so it was time to go one step further and dig into the Overpass API, which allows to query and retrieve raw geo data from OSM. With Overpass Turbo I could try out and adjust the queries I wanted to use ad then move those into my code. I decided the first exercise would be to get something that is a point on the map, a single "node" in OSM speak, and when looking at rendered maps, I found that trees seemed to fit that requirement very well. An Overpass query for "node[natural=tree]" later and some massaging the result into a format that JavaScript can nicely work with, I was able to place three-dimensional A-Frame entities in the places where the tiles had the symbols for trees! I started with simple brown cylinders for the trunks, then placed a sphere on top of them as the crown, later got fancy by evaluating various "tags" in the data to render accurate height, crown diameter, trunk circumference and even a different base model for needle-leaved trees, using a cone for the crown.

But to make the demo really look like a map, it of course needed buildings to be rendered as well. Those are more complex, as even the simpler buildings are "ways" with a variable number of "nodes", and the more complex ones have holes in their base shape and therefore require a compound (or "relation" in OSM speak) of multiple "ways", for the outer shape and the inner holes. And then, the 2D shape given by those properties needs to be extruded to a certain height to form an actual 3D building. After finding the right Overpass query, I realized it would be best to create my own "building" geometry in A-Frame, which would get the inner and outer paths as well as the height as parameters. In the code for that, I used the THREE.js library underlying A-Frame to create a shape (potentially with holes), extrude it to the right height and rotate it to actually stand on the ground. Then I used code similar to what I had for trees to actually create A-Frame entities with that custom geometry. For the height, I would use the explicit tags in the OSM database, estimate from its levels/floors if given, or else fall back to a default. And I would even respect the color of the building if there was a tag specifying it.
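The height fallback described above can be sketched like this (the metres-per-level factor and the default height are illustrative assumptions, not necessarily the values VR Map uses):

```python
# Illustrative defaults; the actual values in the demo may differ.
LEVEL_HEIGHT_M = 3.0            # assumed metres per building level
DEFAULT_BUILDING_HEIGHT_M = 8.0

def building_height_m(tags):
    # Prefer an explicit OSM height tag, fall back to an estimate
    # from the number of levels, and finally to a fixed default.
    height = tags.get("height")
    if height is not None:
        try:
            return float(height.replace("m", "").strip())
        except ValueError:
            pass
    levels = tags.get("building:levels")
    if levels is not None:
        try:
            return float(levels) * LEVEL_HEIGHT_M
        except ValueError:
            pass
    return DEFAULT_BUILDING_HEIGHT_M
```

The same cascade of "explicit tag, derived estimate, default" applies to other properties such as the building color.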

With that in place, I had a pretty nice demo that uses data directly from OpenStreetMap to render Virtual Reality scenes that could be viewed in the desktop or mobile browser, or even in a full VR headset!

It's available under the name of "VR Map" at vrmap.kairo.at, and of course the source code can also be expected, copied and forked on GitHub.


Again, this is intended as a demo, not a full-featured product, and e.g. it only renders an area of a defined size at this time and does not include any code to load additional scenery as you move around. Also, it does not support "building parts", which are the way to specify in OSM that different pieces of a building have e.g. different heights or colors. It could also be extended to render actual models of buildings where they exist and are referenced in the database (so e.g. the Eiffel Tower would look less weird when going to the Paris preset). There are a lot of things that could still be done to improve this demo for sure, but as it stands, it's a pretty simple piece of code that shows the power of both A-Frame and the OpenStreetMap data, and that's what I set out to do, after all.

My plan is to take this to multiple meetups and conferences to promote both underlying projects and get people inspired to think about what they can do with those ideas. Please let me know if you know of a good event where I can present this work. The first of those presentations happened at the ViennaJS May Meetup, see the slides and video.
I'm also in an email conversation with another OSM contributor who is using this demo as a base for some of his work, e.g. on rendering building models in 3D and VR and allowing people to correct their position data.


I hope that this demo spawns more ideas of what people can do with this toolset, and I'll also be looking into more demos that will probably move into different directions. :)

Firefox Test Pilot: The Evolution of Side View

Side View is a new Firefox Test Pilot experiment which allows you to send any webpage to the Firefox sidebar, giving you an easy way to view two webpages side-by-side. It was released June 5 through the Test Pilot program, and we thought we would share with you some of the different approaches we tried while implementing this idea.

[Figure: The XPCOM Tab Split implementation]

Beginnings — Tab Split

The history of Side View implementations goes back to the end of 2017, when a prototype implementation was completed as an old style XPCOM add-on. It was originally called Tab Split and the spec called for being able to split any tab into two, with full control over the URL bar and history for each side of the split. Clicking on either the left side or the right side of the split would focus that side, causing the URL bar, back button, and other controls to apply to that half of the split. The spec also originally mentioned being able to split the window either horizontally or vertically, but this feature may have made it difficult to understand which page the URL bar was referring to, so we decided to focus on only allowing viewing pages side-by-side.

Firefox Quantum

With the release of Firefox Quantum, XPCOM add-ons are no longer supported. We therefore needed to rewrite our prototype as a WebExtension. The implementation was simply an Embedded WebExtension API Experiment containing the entire previous XPCOM implementation. Our WebExtension wrapper handled the click on the browser action by calling a function exposed by the API Experiment which then called into the old XPCOM implementation. In this way, we quickly had a working WebExtension implementation that had all the capabilities of the old version. However, we encountered a bug in Firefox which broke all of the webExtension APIs if an embedded web extension experiment was loaded. This bug was eventually fixed, but we decided to see how far we could get with a pure WebExtension implementation since this entire implementation seemed prone to failure.

Tab Split 2: WebExtension Sidebar

WebExtension APIs are new standardized APIs which can be used to write web browser add-ons in a cross-browser way. They are supported by Chrome, Opera, Firefox, and Edge, although there are some APIs that are only available on specific browsers. Add-ons written using WebExtension APIs do not have as much freedom to modify the browser as XPCOM add-ons used to have. Therefore, we were not able to implement the user interface we had previously used where any number of tabs could be split into side by side pairs, with the focused side having control over the navigation bar user interface. However, WebExtensions are allowed to display a pane in the sidebar. We decided that we could show a web page in the sidebar with some of our user interface around it to allow the user to change the page that was being shown in the sidebar, since the sidebar webpage would not be able to use the browser’s navigation bar user interface as a normal webpage would. This limited the usefulness of navigating in the web page that was being shown in the sidebar, so any links that were clicked in the sidebar web page would be opened in a new tab instead of navigating the sidebar.

[Figure: The first sidebar implementation]

IFRAME troubles

The WebExtension APIs allow a web extension to set the sidebar to a particular webpage. We needed some user interface in the sidebar, so the webpage we set it to was an HTML page inside our add-on which had a bar at the top showing the URL of the page currently being viewed with a back button, the actual webpage embedded using an iframe, and a bottom bar which contained some miscellaneous UI. We also had a Tab Split homepage which would appear when you first pressed the button. This home page showed you a list of all your current tabs and a list of tabs that you had recently sent to the sidebar, allowing you to choose one of them to be loaded in the sidebar.

[Figure: The sidebar user interface landing page]

However, the fact that our implementation required the sidebar webpage to be embedded in an iframe caused a large number of problems. Many webpages use frame busting techniques to detect if they are being embedded in a frame and attempt to prevent it. The original technique for frame busting involves checking whether window.top is equal to window.self. We were able to fix some pages which use this technique by setting the sandbox attribute on the iframe, but this caused a number of other problems.

Some more modern web servers use security headers (such as X-Frame-Options, or a Content-Security-Policy frame-ancestors directive) to tell the browser not to frame the page. Since we had code running in the WebExtension, we were able to successfully strip out just the part of the header that caused this. Nevertheless, this approach felt unstable, since we would have to consistently mess with site security to make it work.

A new approach

After struggling with the problems our anti-frame-busting approaches were encountering, I finally had a new idea which removed the need for us to put the sidebar webpage in an iframe. We would put the previous homepage in a pop-up panel instead of in the sidebar. This would allow us to show our user interface in the panel, while reserving the entire sidebar for the chosen webpage. While this required changing the UI slightly, it solved all the technical problems we were encountering and made the implementation much more solid.

[Figure: The user interface in a popup panel]

A new name for the launch

Since the implementation changed to be more of a sidebar browser than a tab splitting feature, marketing gave us a new name: Side View. Side View is now available for you to try on Firefox Test Pilot. If you try it, we would love your feedback! Press the “Give Feedback” button in the Side View panel to submit suggestions and comments, and use our GitHub repository to submit bug reports.

The Evolution of Side View was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla B-Team: happy bmo push day!

happy bmo push day! We’re one step away from full unicode support and now username completion is much faster.

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1473726] WebExtensions bugs missing crash reports when viewed
  • [1469911] Make user autocompletion faster
  • [1328659] Add support for utf8=utf8mb4 (switches to dynamic/compressed row format, and changes charset to utf8mb4)
  • [1472896] Update to Gear Store inventory dropdown

discuss these changes on mozilla.tools.bmo.


Mozilla VR Blog: This week in Mixed Reality: Issue 12


This week we landed a bunch of core features: in the browser space, we landed WebVR support and immersive controllers; in the social area, we added media tools to Hubs; and in the content ecosystem, we now have WebGL2 support in the WebGLRenderer in three.js.


This week has been super exciting for the Firefox Reality team. We spent the last few weeks working on immersive WebVR support and it's finally here!

  • Completed prototypes of immersive mode for WebVR support
  • Implemented private browsing on the new UI
  • Added mini tray
  • Immersive mode for presentation is now implemented across the supported standalone headsets
  • New controller models landed
  • Support for asynchronous loading of models and textures
  • Mapped out client side error pages for text only (DNS error, SSL certificate error, etc)

Here are two sneak peeks of Firefox Reality. First is support for entering and exiting WebVR:

Quick enter and exit WebVR test in Firefox Reality from Imanol Fernández Gorostizaga on Vimeo.

The second is controller support:

Firefox Reality WebVR controllers from Imanol Fernández Gorostizaga on Vimeo.

Servo's Android support is receiving some care and attention. Not only can developers now build a real Android app locally that embeds Servo alongside a URL bar and other fundamental UI controls, but we can now support regression tests that verify that the Android port won't crash on startup. These important steps will allow us to work on more complicated Android embedding strategies with greater confidence in the future.


We are making great progress towards adding new features on Hubs by Mozilla:

  • Making improvements on the following: scene management and nested scenes, references, GLTF export, breadcrumb UI/UX, and many bug fixes
  • Demoed media tools (paste a URL: get an image, video, or model in the space) at Friday’s meetup. We are finishing up polish and bug fixes on that, and adding support for more content types.
  • More progress on drawing tools: better geometry generation and a networking implementation are in place.

Interested in joining our public Friday stand ups? For more details, join our public WebVR Slack #social channel to participate in the discussion!

Content ecosystem

This week, we landed initial WebGL2 support to the WebGLRenderer in three.js!

Found a critical bug on the Unity WebVR exporter? File it in our public GitHub repo or let us know on the public WebVR Slack #unity channel and as always, join us in our discussion!

Stay tuned for new features and improvements across our three areas!

Mozilla Open Policy & Advocacy Blog: India advances globally leading net neutrality regulations

India is now one step away from having some of the strongest net neutrality regulations in the world. This week, the Indian Telecom Commission approved the Telecom Regulatory Authority of India’s (TRAI) recommendations to introduce net neutrality conditions into all Telecom Service Provider (TSP) licenses. This means that any net neutrality violation could cause a TSP to lose its license, a uniquely powerful deterrent. Mozilla commends this vital action by the Telecom Commission, and we urge the Government of India to move swiftly to implement these additions to the license terms.

Eight months ago, TRAI recommended a series of net neutrality conditions to be introduced in TSP licenses, which we applauded at the time. Some highlights of these regulations:

  • Prohibit Telecom Service Providers from engaging in “any form of discrimination or interference” in the treatment of online content.
  • Any deviance from net neutrality, including for traffic management practices, must be “proportionate, transient and transparent in nature.”
  • Specialized services cannot be “usable or offered as a replacement for Internet Access Services;” and “the provision of the Specialised Services is not detrimental to the availability and overall quality of Internet Access Service.”
  • The creation of a multistakeholder body to collaborate and assist TRAI in the monitoring and enforcement of net neutrality. While we must be vigilant that this body not become subject to industry capture, there are good international examples of various kinds of multi-stakeholder bodies working collaboratively with regulators, including the Brazilian Internet Steering Committee (CGI.br) and the Broadband Internet Technical Advisory Group (BITAG).

The Telecom Commission’s approval is a critical step to finish this process and ensure that not just differential pricing (prohibited through regulations in 2016) but other forms of differential treatment are also restricted by law. Mozilla has engaged at each step of the two and a half years of consultations and discussions on this topic (see our filings here), and we applaud the Commission for taking this action to protect the open internet.

The post India advances globally leading net neutrality regulations appeared first on Open Policy & Advocacy.

Mozilla Addons BlogUpcoming changes for themes

Theming capabilities on addons.mozilla.org (AMO) will undergo significant changes in the coming weeks. We will be switching to a new theme technology that will give designers more flexibility to create their themes. It includes support for multiple background images, and styling of toolbars and tabs. We will migrate all existing themes to this new format, and their users should not notice any changes.

As part of this upgrade, we need to remove the theme preview feature on AMO. This feature allowed you to hover over the theme image and see it applied on your browser toolbar. It doesn’t work very reliably because image sizes and network speed can make it slow and unpredictable.

Given that the new themes are potentially more complex, the preview experience would likely have worsened further. Thus, we decided to drop it in favor of a simpler install and uninstall experience (which is also coming soon). The preview feature will be disabled starting today.

It’s only a matter of weeks before we release the new theme format on AMO. Keep following this blog for that announcement.

The post Upcoming changes for themes appeared first on Mozilla Add-ons Blog.

Cameron KaiserOverbiteNX is now available from Mozilla Add-Ons for beta testing

OverbiteNX, a successor to OverbiteFF which allows Firefox to continue to access legacy resources in Gopher in the brave courageous new world of WebExtensions, is now in public beta. Unlike the alpha test, which required you to download the repo and install the extension using add-on debugging, OverbiteNX is now hosted on Mozilla Add-Ons.

Because WebExtensions still doesn't have a TCP sockets API, nor a spec, OverbiteNX uses its bespoke Onyx native component to do network operations. Onyx is written in open-source portable C with no dependencies and is available in pre-built binaries for macOS 10.12+ and Windows (or get the repo and build it yourself on almost any POSIX system).

To try OverbiteNX, install Onyx from the links above, and then install the extension from Mozilla Add-ons. If you use(d) OverbiteWX, which is the proxy-based strict WebExtensions add-on, please disable it as it may conflict. Copious debugging output is emitted to the browser console for this test version. If you file an issue (or better still a pull request) on Github, please include the output so that we can see the execution trace.

Axel HechtLocalization, Translation, and Machines

TL;DR: Is there research bringing together Software Analysis and Machine Translation to yield Machine Localization of Software?

I’m Telling You, There Is No Word For ‘Yes’ Or ‘No’ In Irish

from Brendan Caldwell

The art of localizing a piece of software with a Yes button is to know what that button will do. This is an example of software UI that makes assumptions on language that hold for English, but might not for other languages. A more frequent example, in both UI and the languages affected, is piecing together text and UI controls:

In the localization tool, you’ll find each of those entries as individual strings. The localizer will recognize that they’re part of one flow, and will move fragments from the shared string to the drop-down as they need. Merely translating the individual segments is not going to be a proper localization of that piece of UI.

If we were to build a rule-based machine localization system, we’d find rules like

  • gaelic-yes:
    If the title of your dialog contains a verb, localize Yes by translating the found verb.

  • pieced-ui:
    For each variant,

    • Piece together the fragments of English to a single sentence
    • Translate the sentences into the target language
    • Find shared content in matching positions to the original layout
    • Split each translated fragment, and adjust the casing and spacing
    • Map the subfragments to the localization of the English individual fragments

    Map the shared fragment to the localization of the English shared fragment
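
The pieced-ui rule can be sketched in code. This is a toy illustration, not a working localizer: a small dictionary stands in for a real machine-translation system, and the preference strings and their German translations are hypothetical examples in the style of a Firefox drop-down.

```python
# Toy sketch of the "pieced-ui" rule. TOY_MT stands in for a real
# machine-translation system; all strings here are hypothetical.

TOY_MT = {
    "Automatically check for updates": "Automatisch nach Updates suchen",
    "Never check for updates": "Nie nach Updates suchen",
}

def localize_pieced_ui(variants, shared_suffix):
    """Localize drop-down fragments plus a shared trailing fragment."""
    translated = []
    for variant in variants:
        # 1. Piece the English fragments together into a single sentence.
        sentence = f"{variant} {shared_suffix}"
        # 2. Translate the whole sentence.
        translated.append(TOY_MT[sentence])
    # 3. Find shared content in matching positions: here, the longest
    #    common suffix of the translated sentences, word by word.
    words = [t.split() for t in translated]
    shared = []
    while min(len(w) for w in words) > 1 and all(w[-1] == words[0][-1] for w in words):
        shared.insert(0, words[0][-1])
        for w in words:
            w.pop()
    # 4. Map the remaining subfragments back to the individual drop-down
    #    entries, and the common suffix back to the shared fragment.
    return [" ".join(w) for w in words], " ".join(shared)
```

Calling `localize_pieced_ui(["Automatically", "Never"], "check for updates")` yields `(["Automatisch", "Nie"], "nach Updates suchen")` — the drop-down entries and the shared fragment, correctly split for German. Real languages would of course also need the casing and spacing adjustments described above.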

Now that’s rule-based, and it’d be tedious to maintain these rules. Neural Machine Translation (NMT) has all the buzz now, and Machine Learning in general. There is plenty of research that improves how NMT systems learn about the context of the sentence they’re translating. But that’s all text.

It’d be awesome if we could bring Software Analysis into the mix, and train NMT to localize software instead of translating fragments.

For Firefox, could one train on English and localized DOM? Could a similar approach work for Android’s XML layouts? For projects with automated screenshots, could one train on those? Is there enough software out there to successfully train a neural network?

Do you know of existing research in this direction?

The Mozilla BlogNew Features in Firefox Focus for iOS, Android – now also on the BlackBerry Key2

Since the launch of Firefox Focus as a content blocker for iOS in December 2015, we’ve continuously improved the now standalone browser for Apple and Android while always being mindful of users’ requests and suggestions. We analyze app store reviews and evaluate regularly which new features make our privacy browser even more user-friendly, efficient and secure. Today’s update for iOS and Android adds functionality to further simplify accessing information on the web. And we are happy to make Focus for Android available to a new group: BlackBerry Key2 users.

On point: find keywords easily with “Find in page” feature for all websites

Desktop browsing without the “search page” feature? Unthinkable! On mobile, it is now also super simple to find the content you’re looking for on a website: Open the Focus menu, select “Find in page” and enter your search term. Firefox Focus will immediately highlight any mention of your keyword or phrase on the site, including the count of instances. You can then use the handy arrow buttons to jump between the instances.

“Find in page” applies to all kinds of websites, whether or not they’re optimized for mobile browsing. Why are we pointing this out? Many users are still not completely comfortable browsing the web on their mobile devices, because mobile, non-responsive versions of their favorite websites may lack the full range of features, be confusing, or simply be less appealing, simplified versions of the desktop page, reduced to fit the smaller screen. As of today, Firefox Focus enables users to display the desktop page in such cases. Simply choose “request desktop page” in the browser menu to browse the more familiar desktop version of your favorite website.

Find your search terms easily on mobile and desktop versions of your favorite websites


Make Focus your own — with Custom Tabs and biometric access features

Our intent with Firefox Focus is to give users the most comfortable browsing experience — by making them feel safe and protected, enabling them to enjoy intuitive navigation as well as an appealing design. A great example of how we ensure that is our support and continuous improvement of Custom Tabs. When opening links from some third-party apps, such as Twitter or Yelp, and when Focus is set as the default browser on the device, Firefox Focus will display the corresponding page in the familiar look and feel of the original app, including menu colors and options. Now you can share this experience even faster with your friends. Just long-press the URL to copy it to the clipboard for sharing or pasting elsewhere. Currently, this feature is available only to Android users.

iOS users will enjoy even more personalization with today’s version of Focus because they can now set Focus up to lock whenever it is backgrounded, and unlock only with a successful Face or Touch ID verification. This feature is common in banking apps, and provides another layer of security for browsing privacy by only allowing you, and no unauthorized user, to access your version of Focus.

Unlock Firefox Focus via Touch or Face ID to add another layer of security to your private browsing experience.

A shout out to BlackBerry users

Recently, the BlackBerry KEY2 – manufactured by TCL Communication – was introduced, representing the most advanced BlackBerry ever. Bringing Firefox’s features, functionality and choice to our users no matter how they browse is important to us. So we’re proud to announce that Firefox Focus is pre-installed as part of the Locker application found on the BlackBerry KEY2.

This data protection application, integrated into the KEY2 user experience, can only be opened by fingerprint or password, which makes it the ideal solution for securely storing sensitive user data such as photos, documents and even apps — as well as the perfect place for Firefox Focus.


The latest version of Firefox Focus for Android and iOS is now available for download on Google Play and in the App Store.

The post New Features in Firefox Focus for iOS, Android – now also on the BlackBerry Key2 appeared first on The Mozilla Blog.

Robert O'CallahanWhy Isn't Debugging Treated As A First-Class Activity?

Mark Côté has published a "vision for engineering workflow at Mozilla": part 2, part 3. It sounds really good. These are its points:

  • Checking out the full mozilla-central source is fast
  • Source code and history is easily navigable
  • Installing a development environment is fast and easy
  • Building is fast
  • Reviews are straightforward and streamlined
  • Code is landed automatically
  • Bug handling is easy, fast, and friendly
  • Metrics are comprehensive, discoverable, and understandable
  • Information on “code flow” is clear and discoverable

Consider also GitLab's advertised features:

  • Regardless of your process, GitLab provides powerful planning tools to keep everyone synchronized.
  • Create, view, and manage code and project data through powerful branching tools.
  • Keep strict quality standards for production code with automatic testing and reporting.
  • Deploy quickly at massive scale with integrated Docker Container Registry.
  • GitLab's integrated CI/CD allows you to ship code quickly, be it on one or one thousand servers.
  • Configure your applications and infrastructure.
  • Automatically monitor metrics so you know how any change in code impacts your production environment.
  • Security capabilities, integrated into your development lifecycle.

One thing developers spend a lot of time on is completely absent from both of these lists: debugging! Gitlab doesn't even list anything debugging-related in its missing features. Why isn't debugging treated as worthy of attention? I genuinely don't know — I'd like to hear your theories!

One of my theories is that debugging is ignored because people working on these systems aren't aware of anything they could do to improve it. "If there's no solution, there's no problem." With Pernosco we need to raise awareness that progress is possible and therefore debugging does demand investment. Not only is progress possible, but debugging solutions can deeply integrate into the increasingly cloud-based development workflows described above.

Another of my theories is that many developers have abandoned interactive debuggers because they're a very poor fit for many debugging problems (e.g. multiprocess, time-sensitive and remote workloads — especially cloud and mobile applications). Record-and-replay debugging solves most of those problems, but perhaps people who have stopped using a class of tools altogether stop looking for better tools in the class. Perhaps people equate "debugging" with "using an interactive debugger", so when trapped in "add logging, build, deploy, analyze logs" cycles they look for ways to improve those steps, but not for tools to short-circuit the process. Update: This HN comment is a great example of the attitude that if you're not using a debugger, you're not debugging.

Mozilla Open Policy & Advocacy BlogMozilla applauds passage of Brazilian data protection law

After nearly a decade of debate, the Brazilian Congress has just passed the Brazilian Data Protection Bill (PLC 53/2018). The following statement can be attributed to Mozilla COO Denelle Dixon:

“As a company that has fought for strong data protection around the world, Mozilla congratulates Brazilian lawmakers on their action to protect the rights of Brazilian users. At a time when privacy is at risk like never before, this law contains critical safeguards and will act as a powerful check on both companies and the government. We believe that individual security and privacy is fundamental and cannot be treated as optional, and this is a welcome and important step toward that goal.”

Mozilla’s previous statement supporting the Brazilian Data Protection Bill can be found here. The bill will now go to Brazilian President Michel Temer for his signature.

The post Mozilla applauds passage of Brazilian data protection law appeared first on Open Policy & Advocacy.

Robert KaiserMy Journey to Tech Speaking about WebVR/XR

Ever since a close encounter with burning out (thankfully, I didn't quite get there) forced me to leave my job with Mozilla more than two years ago, I have been looking for a place and role that feels good for me in the Mozilla community. I immediately signed up to join Tech Speakers as I always loved talking about Mozilla tech topics and after all breaking down complicated content and communicating it to different groups is probably my biggest strength - but finding the topics I want to present at conferences and other events has been a somewhat harder journey.

I knew I had to keep my distance from crash stats, despite knowing the area in and out and having developed some passion for it; staying as a volunteer in the same area as the job that almost burned me out was just not a good idea, from multiple points of view. I thought about building up some talks about working with data, but that still was a bit too close to that past and not what I presently do a lot (I work mostly in blockchain technology today), so that didn't go far (but maybe it will happen at some point).
On the other hand, I got more and more interested in some things the Open Innovation group at Mozilla was doing, and even more in what the Emerging Technologies teams bring into the Mozilla and web sphere. My talk (slides) at this year's local "Linuxwochen Wien" conference was a very quick run-through of what's going on there and it's a whole stack of awesomeness, from Mixed Reality via codecs, Rust, Voice and whatnot to IoT. I would love to dig a bit into the latter but I didn't yet find the time.

What I did find some time for is digging into WebVR (now WebXR, where "XR" means "Mixed Reality") and the A-Frame library that Mozilla has created to make it dead simple to create your own VR/XR experiences. Last year I did two workshops in Vienna on that area, another one this year and I'm planning more of them. It's great how people with just some HTML knowledge can build something easily there as well as people who are more into JS programming, who can dig even deeper. And the immersiveness of VR with a real headset blows people away again and again in any case, so a good thing to show off.

While last year I only had cardboards with some left-over Sony Z3C phones (thanks to Mozilla) to show some basic 3DoF (rotation only) VR with low resolution, this proved to be interesting already to people I presented to or made workshops with. Now, this year I decided to buy a HTC Vive, seeing its price go down somewhat before the next generation of headsets would be shipped. (As a side note, I chose the Vive over the Rift because of Linux drivers being available and because I don't want to give money to Facebook.) Along with a new laptop with a high-end GPU that can drive the VR headset, I got into fully immersive 6DoF VR and, I have to say, got somewhat addicted to the experience. ;-)


I ran a demo booth with A-Painter at "Linuxwochen Wien" in May, and people were both awed at the VR experience and that this was all running in plain Firefox! Spreading the word about new web technologies can be really fun and rewarding with experiences like that! Next to showing demos and using VR myself, I also got into building WebVR/XR demos myself (I'm more the person to do demos and prototypes and spread the word, rather than building long-lasting products) - but I'll leave that to another blog post that will be upcoming very soon! :)

So, for the moment, I have found a place I feel very comfortable with in the community, doing demos and presentations about WebVR or "Mixed Reality" (still need to dig into AR but I don't have fitting hardware for that yet) as well as giving people an overview of the Emerging Technologies "we" (MoCo and the Mozilla community) are bringing to the web, and trying to make people excited to use the technologies or hopefully even contribute to them. Being at the forefront of innovation for once feels really good, I hope it lasts long!

The Mozilla BlogMozilla Funds Top Research Projects

We are very happy to announce the results of the 2018H1 Mozilla Research Grants. This was an extremely competitive process, with over 115 applicants. We selected a total of eight proposals, ranging from tools to fight online harassment to systems for generating speech. All these projects support Mozilla’s mission to make the Internet safer, more empowering, and more accessible.

The Mozilla Research Grants program is part of Mozilla’s Emerging Technologies commitment to being a world-class example of inclusive innovation and impact culture, and reflects Mozilla’s commitment to open innovation, continuously exploring new possibilities with and for diverse communities. We will open the 2018H2 round in Fall of 2018: see our Research Grants webpage for more details and to sign up to be notified when applications open.


The funded proposals (principal investigator, institution, department, and project title):

  • Jeff Huang (Texas A&M University, Department of Computer Science and Engineering): Predictively Detecting and Debugging Multi-threaded Use-After-Free Vulnerabilities in Firefox
  • Eduardo Vicente Gonçalves (Open Knowledge Foundation, Brazil, Data Science for Civic Innovation Programme): A Brazilian bot to read government gazettes and bills: Using NLP to empower citizens and civic movements
  • Leah Findlater (University of Washington, Human Centered Design and Engineering): Task-Appropriate Synthesized Speech
  • Laura James (University of Cambridge, Trustworthy Technologies Initiative): Trust and Technology: building shared understanding around trust and distrust
  • Libby Hemphill (University of Michigan, School of Information and Institute for Social Research): Learning and Automating De-escalation Strategies in Online Discussions
  • Pamela Wisniewski (University of Central Florida, Department of Computer Science): A Community-based Approach to Co-Managing Privacy and Security for Mozilla’s Web of Things
  • Munmun De Choudhury (Georgia Institute of Technology, School of Interactive Computing): Combating Professional Harassment Online via Participatory Algorithmic and Data-Driven Research
  • David Joyner (Georgia Institute of Technology, College of Computing): Virtual Reality for Classrooms-at-a-Distance in Online Education

Many thanks to all our applicants in this very competitive and high-quality round.

The post Mozilla Funds Top Research Projects appeared first on The Mozilla Blog.

The Mozilla BlogAn Invisible Tax on the Web: Video Codecs

Here’s a surprising fact: It costs money to watch video online, even on free sites like YouTube. That’s because about 4 in 5 videos on the web today rely on a patented technology called the H.264 video codec.

A codec is a piece of software that shrinks large media files so they can travel quickly over the internet. In browsers, codecs decode video files so we can play them on our phones, tablets, computers, and TVs. As web users, we take this performance for granted. But the truth is, companies pay millions of dollars each year to bring us free video – and the bills are only going to get bigger.

Today most video files can play on most devices, thanks to the ubiquity of H.264. How might this situation change? Let’s start with some facts and factors that govern the big business of web video.

Streaming video costs (a lot of) money. A lot of companies pay a lot of money to use H.264. They include software and networking companies; content creators and distributors like Netflix, Amazon, and YouTube; and chip manufacturers like ARM. Where does the money go? To MPEG-LA, which represents tech innovators in the U.S., Japan, South Korea, Germany, France, and Holland.

Newer codecs are twice as efficient. In the business world, efficiency equals money. Better compression opens the door to two key business benefits: better video quality and lower bandwidth costs. Companies like Cisco, YouTube, and Netflix pay massive networking bills to send video files to your browser. Today, more than 70% of all internet traffic is video, and that percentage is predicted to top 80% in the next few years.

New codecs may cost ten times more. MPEG-LA’s next-generation HEVC/H.265 is more efficient than H.264. The downside is that it carries 23 patents and remarkably confusing terms, originally created for DVD players. Early estimates show licensing fees for H.265 could cost ten times more than today’s H.264. Who will absorb those costs? How much will companies like Netflix have to pass on in fee hikes to stay profitable?

With H.264, small players get a free ride. To help build momentum for the H.264 codec, Cisco announced in 2013 it would open-source H.264. Cisco offered H.264 binaries to developers free of charge, so small shops could add streaming functionality to their applications. Mozilla uses Cisco’s OpenH264 in Firefox. If not for Cisco’s generosity, Mozilla would be paying estimated licensing fees of $9.75 million a year. Now the question is: Will Cisco cover licensing fees for HEVC/H.265 as well? If not, what impact will royalties have on web development? How will startups, hobbyists, and open source projects get access to this crucial web technology?

A drive to create royalty-free codecs

Mozilla is driven by a mission to make the web platform more capable, safe, and performant for all users. With that in mind, the company has been supporting work at the Xiph.org Foundation to create royalty-free codecs that anyone can use to compress and decode media files in hardware, software, and web pages.

But when it comes to video codecs, Xiph.org Foundation isn’t the only game in town.

Over the last decade, several companies started building viable alternatives to patented video codecs. Mozilla worked on the Daala Project, Google released VP9, and Cisco created Thor for low-complexity videoconferencing. All these efforts had the same goal: to create a next-generation video compression technology that would make sharing high-quality video over the internet faster, more reliable, and less expensive.

In 2015, Mozilla, Google, Cisco, and others joined with Amazon and Netflix and hardware vendors AMD, ARM, Intel, and NVIDIA to form AOMedia. As AOMedia grew, efforts to create an open video format coalesced around a new codec: AV1. AV1 is based largely on Google’s VP9 code and incorporates tools and technologies from Daala, Thor, and VP10.

Why Mozilla loves AV1

Mozilla loves AV1 for two reasons: AV1 is royalty-free, so anyone can use it free of charge. Software companies can use it to build video streaming into their applications. Web developers can build their own video players for their sites. It can open up business opportunities, and remove barriers to entry for entrepreneurs, artists, and regular people. Most importantly, a royalty-free codec can help keep high-quality video affordable for everyone.

Source: Graphics & Media Lab Video Group, Moscow State University

The second reason we love AV1 is that it delivers better compression technology than even high-efficiency codecs – about 30% better, according to a Moscow State University study. For companies, that translates to smaller video files that are faster and cheaper to transmit and take up less storage space in their data centers. For the rest of us, we’ll have access to gorgeous, high-definition video through the sites and services we already know and love.
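
To make that 30% figure concrete, here is a back-of-the-envelope calculation. The traffic volume and bandwidth price are invented round numbers purely for illustration; only the compression gain comes from the study cited above.

```python
# Illustrative arithmetic only: the 1 PB/month volume and $50/TB price are
# hypothetical round numbers; the ~30% compression gain is the figure from
# the Moscow State University study.

monthly_video_tb = 1_000        # 1 PB of video egress per month (hypothetical)
cost_per_tb = 50.0              # $50 per TB of bandwidth (hypothetical)
av1_gain = 0.30                 # ~30% smaller files for the same quality

baseline_cost = monthly_video_tb * cost_per_tb
av1_cost = baseline_cost * (1 - av1_gain)

print(f"Baseline bill: ${baseline_cost:,.0f}/month")
print(f"AV1 bill:      ${av1_cost:,.0f}/month")
print(f"Savings:       ${baseline_cost - av1_cost:,.0f}/month")
```

At these made-up rates the savings run to $15,000 a month; at real streaming-service scale the same percentage compounds into far larger sums.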

Open source, all the way down

AV1 is well on its way to becoming a viable alternative to patented video codecs. As of June 2018, the AV1 1.0 specification is stable and available for public use on a royalty-free basis. Looking for a deep dive into the specific technologies that made the leap from Daala to AV1? Check out our Hacks post, AV1: next generation video – The Constrained Directional Enhancement Filter.

The post An Invisible Tax on the Web: Video Codecs appeared first on The Mozilla Blog.

Wladimir PalantFTAPI SecuTransfer - the secure alternative to emails? Not quite...

Emails aren’t private, so much should be known by now. When you communicate via email, the contents are not only visible to your and the other side’s email providers, but potentially also to numerous others like the NSA who intercepted your email on the network. Encrypting emails is possible via PGP or S/MIME, but neither is particularly easy to deploy and use. Worse yet, both standards were found to have security deficits recently. So it is not surprising that people and especially companies look for better alternatives.

It appears that the German company FTAPI gained a good standing in this market, at least in Germany, Austria and Switzerland. Their website continues to stress how simple and secure their solution is. And the list of references is impressive, featuring a number of known names that should have a very high standard when it comes to data security: Bavarian tax authorities, a bank, lawyers etc. A few years ago they even developed a “Secure E-Mail” service for Vodafone customers.

I now had a glimpse at their product. My conclusion: while it definitely offers advantages in some scenarios, it also fails to deliver the promised security.

Quick overview of the FTAPI approach

The primary goal of the FTAPI product is easily exchanging (potentially very large) files. They solve it by giving up on decentralization: data is always stored on a server and both sender and recipient have to be registered with that server. This offers clear security benefits: there is no data transfer between servers to protect, and offering the web interface via HTTPS makes sure that data upload and download are private.

But FTAPI goes beyond that: they claim to follow the Zero Knowledge approach, meaning that data transfers are end-to-end encrypted and not even the server can see the contents. For that, each user defines a “SecuPass” upon registration: a password unknown to the server, used to encrypt data transfers.

Why bother doing crypto in a web application?

The first issue is already shining through here: your data is being encrypted by a web application in order to protect it from a server that delivers that web application to you. But the server can easily give you a slightly modified web application, one that will steal your encryption key for example! With several megabytes of JavaScript code executing here, there is no way you will notice a difference. So the server administrator can read your emails, e.g. because of being ordered by the company management, the whole encryption voodoo didn’t change that fact. Malicious actors who somehow gained access to the server will have even less scruples of course. Worse yet, malicious actors don’t need full control of the server. A single Cross-site scripting vulnerability is sufficient to compromise the web application.

Of course, FTAPI also offers a desktop client as well as an Outlook Add-in. While I haven’t looked at either, it is likely that no such drawbacks exist there. The only trouble: FTAPI fails to communicate that encryption is only secure outside of the browser. The standalone clients are being promoted as improvements to convenience, not as security enhancements.

Another case of weak key derivation function

According to the FTAPI website, there is a whitepaper describing their SecuTransfer 4.0 approach. Too bad that this whitepaper isn’t public, and requesting it, at least in my case, didn’t yield any response whatsoever. Then again, figuring out the building blocks of SecuTransfer took merely a few minutes.

Your SecuPass is used as input to PBKDF2 algorithm in order to derive an encryption key. That encryption key can be used to decrypt your private RSA key as stored on the server. And the private RSA key in turn can be used to recover the encryption key for incoming files. So somebody able to decrypt your private RSA key will be able to read all your encrypted data stored on the server.

If somebody in control of the server wants to read your data, how do they decrypt your RSA key? Why, by guessing your SecuPass of course. While the advice is to choose a long password here, humans are very bad at choosing good passwords. In my previous article I already explained why LastPass doing 5,000 PBKDF2 iterations isn’t a real hurdle preventing attackers from guessing your password. Yet FTAPI is doing merely 1,000 iterations, which means that brute-force attacks will be even faster, by a factor of 5 at least (actually more, because FTAPI is using SHA-1 whereas LastPass is using SHA-256). This means that even the strongest passwords can be guessed within a few days.
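
The effect of the iteration count is easy to demonstrate with Python’s standard library. To be clear, the salt format and the 32-byte output length below are my assumptions for illustration, since the whitepaper isn’t public; only the PBKDF2-with-SHA-1 construction and the iteration count follow from the analysis above.

```python
import hashlib
import time

def derive_key(password: str, salt: bytes, iterations: int) -> bytes:
    """PBKDF2-HMAC-SHA1, as FTAPI appears to use. The salt format and
    the 32-byte output length are assumptions for illustration."""
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, iterations, dklen=32)

salt = b"per-user-salt"  # hypothetical

# An attacker's cost per password guess scales linearly with the iteration
# count: at 1,000 iterations each guess is ~100x cheaper than at 100,000.
for iters in (1_000, 100_000):
    start = time.perf_counter()
    derive_key("correct horse battery staple", salt, iters)
    print(f"{iters:>7} iterations: {(time.perf_counter() - start) * 1000:.1f} ms per guess")
```

On commodity hardware the 1,000-iteration derivation completes in well under a millisecond, and GPU-based cracking rigs run many such guesses in parallel.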

Mind you, PBKDF2 isn’t a bad algorithm, and with 100,000 iterations (at least; more is better) it can currently be considered reasonably secure. These days there are better alternatives, however — bcrypt and scrypt are the fairly established ones, whereas Argon2 is the new hotness.

And the key exchange?

One of the big challenges with end-to-end encryption is always the key exchange — how do I know that the public key belongs to the person I want to communicate with? S/MIME solves it via a costly public trust infrastructure whereas PGP relies on a network of key servers with its own set of issues. On the first glance, FTAPI dodges this issue with its centralized architecture: the server makes sure that you always get the right public key.

Oh, but we didn’t want to trust the server. What if the server replaces the real public key with the server administrator’s (or worse: a hacker’s), so that we make our files visible to them? There is also a less obvious issue: FTAPI still uses insecure email for bootstrapping. If you aren’t registered yet, email is how you get notified that you received a file. If somebody manages to intercept that email, they will be able to register at the FTAPI server and receive all the “secure” data transfers meant for you.

Final notes

While sharing private data via an HTTPS-protected web server clearly has benefits over sending it via email, the rest of FTAPI’s security measures are mostly an appearance of security right now. Partly, it is a failure on their end: 1,000 PBKDF2 iterations were already offering far too little protection in 2009, back when the FTAPI prototype was created. But there are also fundamental issues here: real end-to-end encryption is inherently complicated, particularly solving key exchange securely. And of course, end-to-end encryption is impossible to implement in a web application, so you have to choose between convenience (zero overhead: nothing to install, just open the site in your browser) and security.

Mozilla Open Policy & Advocacy BlogSearching for sustainable and progressive policy solutions for illegal content in Europe

As we’ve previously blogged, lawmakers in the European Union are reflecting intensively on the problem of illegal and harmful content on the internet, and on whether the mechanisms that exist to tackle those phenomena are working well. In that context, we’ve just filed comments with the European Commission, addressing some of the key issues around how to efficiently tackle illegal content online within a rights- and ecosystem-protective framework.

Our filing builds upon our response to the recent European Commission Inception Impact Assessment on illegal content online, and has four key messages:

  • There is no one-size-fits-all approach to illegal content regulation. While some solutions can be generalised, each category of online content has nuances that must be appreciated.
  • Automated control solutions such as content filtering are not a panacea. Such solutions are of little value when context is required to assess the illegality and harm of a given piece of content (e.g. copyright infringement, ‘hate speech’).
  • Trusted flaggers – non-governmental actors which have dedicated training in understanding and identifying potentially illegal or harmful content – offer some promise as a mechanism for enhancing the speed and quality of content removal. However, such entities must never replace courts and judges as authoritative assessors of the legality of content, and as such, their role should be limited to ‘fast-track’ notice procedures.
  • Fundamental rights safeguards must be included in illegal content removal frameworks by design, and should not simply be patched on at the end. Transparency and due process should be at the heart of such mechanisms.

Illegal content is symptomatic of an unhealthy internet ecosystem, and addressing it is something that we care deeply about. To combat an online environment in which harmful content and activity continue to persist, we recently adopted an addendum to our Manifesto, in which we affirmed our commitment to an internet that promotes civil discourse, human dignity, and individual expression. The issue is also at the heart of our recently published Internet Health Report, through its dedicated section on digital inclusion.

As a mission-driven not-for-profit and the steward of a community of internet builders, we can bring a unique perspective to this debate. Indeed, our filing seeks to firmly root the fight against illegal content online within a framework that is both rights-protective and attuned to the technical realities of the internet ecosystem.

This is a really challenging policy space, and we are committed to advancing progressive and sustainable policy solutions within it.


Read more:

The post Searching for sustainable and progressive policy solutions for illegal content in Europe appeared first on Open Policy & Advocacy.

Daniel Stenbergcurl 7.61.0

Yet again we say hello to a new curl release that has been uploaded to the servers and sent off into the world. Version 7.61.0 (full changelog). It has been exactly eight weeks since 7.60.0 shipped.


the 175th release
7 changes
56 days (total: 7,419)

88 bug fixes (total: 4,538)
158 commits (total: 23,288)
3 new curl_easy_setopt() options (total: 258)

4 new curl command line options (total: 218)
55 contributors, 25 new (total: 1,766)
42 authors, 18 new (total: 596)
1 security fix (total: 81)

Security fixes

SMTP send heap buffer overflow (CVE-2018-0500)

A stupid heap buffer overflow that can be triggered when the application asks curl to use a smaller download buffer than the default and then sends a larger file - over SMTP. Details.

New features

The trailing dot zero in the version number reveals that we added some news this time around - again.

More microsecond timers

Over several recent releases we've introduced ways to extract timer information from libcurl that use integers to return time information with microsecond resolution, as a complement to the ones we already offer using doubles. This gives better precision and avoids forcing applications to use floating point math.

Bold headers

The curl tool now outputs header names using a bold typeface!

Bearer tokens

The auth support now allows applications to set the specific bearer tokens to pass on.

TLS 1.3 cipher suites

As TLS 1.3 has a different set of cipher suites, using different names, than previous TLS versions, an application that doesn't know whether the server supports TLS 1.2 or TLS 1.3 can't set the ciphers via the single existing option, since that would use the 1.2 names and not work for 1.3. The new option for libcurl is called CURLOPT_TLS13_CIPHERS.

Disallow user name in URL

There's now a new option that can tell curl to not acknowledge or support user names in the URL. User names in URLs can bring some security issues, since they're often sent or stored in plain text; plus, if .netrc support is enabled, a script accepting externally set URLs could risk exposing the privately set password.

Awesome bug-fixes this time

Some of my favorites include...

Resolve local host names faster

When curl is built to use the threaded resolver, which is the default choice, it will now resolve locally available host names faster. Locally as in present in /etc/hosts or in the OS cache, etc.

Use latest PSL and refresh it periodically

curl can now be built to use an external PSL (Public Suffix List) file so that it can get updated independently of the curl executable and thus better keep in sync with the list and the reality of the Internet.

Rumors say there are Linux distros that might start providing and updating the PSL file in a separate package, much like they provide CA certificates already.

fnmatch: use the system one if available

The somewhat rare FTP wildcard matching feature always had its own internal fnmatch implementation, but now we've finally ditched that in favour of the system fnmatch() function for platforms that have one. It shrinks the footprint and removes an attack surface - we've had our fair share of tiresome fuzzing issues in the custom fnmatch code.

axTLS: not considered fit for use

In an effort to slowly increase our requirements on third party code that we might tell users to build curl with, we've made curl fail to build if asked to use the axTLS backend. This is because we have serious doubts about the quality and commitment of the code and that project. This is just step one. If no one yells and fights for axTLS' future in curl going forward, we will remove all traces of axTLS support from curl exactly six months after step one was merged. There are plenty of other and better TLS backends to use!

Detailed in our new DEPRECATE document.

TLS 1.3 used by default

When negotiating the TLS version in the TLS handshake, curl will now allow TLS 1.3 by default. Previously you needed to explicitly allow that. TLS 1.3 support is not yet present everywhere, so it will depend on the TLS library, and its version, that your curl is using.

Coming up?

We have several changes and new features lined up for next release. Stay tuned!

First, however, we will most probably schedule a patch release, as we have two rather nasty HTTP/2 bugs filed that we want fixed. Once we have them fixed in a way we like, I think we'd like to see those go out in a patch release before the next pending feature release.

RabimbaFirefoxOS, A keyboard and prediction: Story of my first contribution

I returned to my cubicle holding a hot cup of coffee, my head loaded with frustration and panic over a system codebase that I had managed to break, with not enough time to fix it before the next morning.

This was at IBM, New York, where I was interning and working on the TJ Watson project. I returned to my desk, turned on my dual monitors, started reading some blogs and engaging on Mozilla IRC (a newfound and pretty short-lived hobby). Just a few days before that, FirefoxOS had launched in India in the form of an Intex phone with a $35 price tag. It was making waves all around because of its hefty price and poor performance. The OS's struggles were showing on the super-low-cost hardware. I was personally furious about some of the shortcomings, primarily the keyboard, which at that time didn’t support prediction in any language other than English and also did not learn new words. Coincidentally, I came upon Dietrich Ayala in the FirefoxOS IRC channel, who at that time was a Platform Engineer at Mozilla. To my surprise he agreed with many of my complaints and asked me if I wanted to contribute my ideas. I very much wanted to, but then again, I had no idea how. The idea of contributing to the codebase of something like FirefoxOS terrified me. He suggested I first send a proposal and proceed from there. With my busy work schedule at IBM, this discussion slipped my mind and did not fully resurface until I returned home from my internship.

That proposal now lives here, and it was being tracked here as well as part of Mozilla Developer Network Hacks (under “Word Prediction for Bengali”).

Fast forward a couple of years, and now we don’t have FirefoxOS anymore. So I decided it was about time I wrote about what went into the implementation and got into the nitty-gritty of the code.

A little summary of what was done

But first, the related work. By that I mean the existing program for predictive text input. When using the on-screen keyboard in Firefox OS, the program shows you the three most likely words starting with the letters you just typed. This works even if you made a typo (see the right screenshot).

Firefox OS screenshot

Each word in the dictionary has a frequency associated with it, for example, the top words in the English dictionary are:

the 222
of 214
and 212
in 210
a 208
to 208

For example, if you type “TH”, the program will suggest you “the” and “this” (with frequencies of 222 and 200), but not something rare like “thundershower” (which has a frequency of 10).

Mozilla developers previously used a Bloom filter for this task, but then switched to DAWG.

Using DAWG, you can quickly find mistyped words; however, you cannot easily find the most frequent words for a given prefix. The existing implementation of DAWG in the Firefox OS code (made by Christoph Kerschbaumer) worked like this: each word had its frequency stored in the last node of the word; the program just traversed the whole subtree and looked for the three most frequent words. To make the tree traversal feasible, it was limited to two characters, so the program was able to suggest only 1-2 characters to finish the word typed by the user.

Dictionary size was another problem. Each DAWG node consisted of the letter, frequency, left, right, and middle pointers (four bytes each; 20 bytes in total). The whole English dictionary was around 200'000 nodes or 3.9 MB.
Whole tree traversal

In this example, the program looks up the prefix TH in the tree, enumerates all words in the dictionary starting with TH (by traversing the red subtree), and finds the three words with maximum frequencies. For the full English dictionary, the size of this subtree can be very large.

Sorting nodes by maximum frequency of the prefix

Here is what I proposed and implemented for Firefox OS.
If you can find a way to store the nodes sorted by the maximum frequency of the words, then you can visit just the nodes with the most frequent words instead of traversing the whole subtree.

Note that a ternary search tree (TST) can be viewed as consisting of several binary trees; each of them is used to find a letter in a specific position. There are many ways to store the same letters in a binary tree (see the green nodes on the drawing), so several TSTs are possible for the same words:

Words: tap ten the to

Three equivalent TSTs

Usually, you want to balance the binary tree to speed up the search. But for this task, I decided to put the letter that continues the most frequent word at the root of the binary tree instead of perfectly balancing it. In this way, you can quickly locate the most frequent word. The left and the right subtrees of the root will be balanced as usual.

However, the task is to find the three most frequent words, not just one word. So you can create a linked list of letters sorted by the maximum frequency of the words containing these letters. An additional pointer will be added to each node to maintain the linked list. The nodes will be sorted by two criteria: alphabetically in the binary tree (so that you can find the prefix) and by frequency in the linked list (so that you can find the most frequent words for this prefix).
For example, there are the following words and frequencies:
the 222
thou 100
ten 145
to 208
tens 110
voices 118
voice 139

For the first letter T, you have the following maximum frequencies (of the words starting with this prefix):
  • TH — 222 (the full word is “the”; “thou” has a lower frequency);
  • TO — 208 (the full word is “to”);
  • TE — 145 (the full word is “ten”; “tens” has a lower frequency).
The node with the highest maximum frequency (H in “th”) will be the root of the binary tree and the head of the linked list; O will be the next item, and E will be the last item in the linked list.
ternary search tree

So you have built the data structure described above. To find the N most frequent words starting with a prefix, you first find this prefix in the DAWG (as usual, using the left and right pointers of the binary trees). Then, you follow the middle pointers to locate the most frequent word (remember, it's always at the root of the binary tree). When following this path, you save the second most likely nodes, so that after finding the first word, you already know where to start looking for the second one.

Please take a look at the drawing above. For example, the user types the letter T. You go down to the yellow binary tree. In its root, you find the prefix of the most frequent word (H); you also remember the second most likely frequency (208 for the letter O) by looking in the linked list.

You follow the middle pointer to the green binary tree. E is the prefix of the most frequent word here; you also remember the second frequency (100 for the letter O). So, you have found the first word (THE). Where do you look for the second most frequent word? You compare the saved frequencies:
tho 100

to 208

and find that TO, not THO, is the prefix of the second most frequent word.
So you continue the search from the TO node and find the second word, “to”. TE is the next node in the frequency-sorted linked list, so you save it in place of TO:
tho 100

te 145
Now, TE has the greater frequency, so you choose this path and find the third word, “ten”.
You can store the candidate prefixes in a priority queue (sorted by frequency) for faster retrieval of the next best candidate, but I chose a sorted array for this task, because there are just three words to find. If you already have the required number of candidates (three) and you find a candidate that is worse than the already found candidates, you can skip it. Don't insert it at the end of the priority queue (or the sorted array), because it will not be used anyway. But if you find a candidate that is better than the already found ones, you should store it (replacing the worst candidate in this case). So you only ever need to store three candidates at any time.

The advantage of this method is that you can find the most frequent words without traversing the whole subtree (not thousands of nodes, but typically fewer than 100). You can also use fuzzy search to find a prefix with possible typos.
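The whole lookup can be sketched in Python (a simplified illustration, not the Firefox OS code: it uses a plain trie and a heap in place of the TST's frequency-sorted sibling lists, but it visits nodes in the same best-first order):

```python
import heapq

class Node:
    __slots__ = ("children", "freq", "max_freq")
    def __init__(self):
        self.children = {}   # letter -> Node
        self.freq = None     # word frequency, if a word ends here
        self.max_freq = 0    # max frequency anywhere in this subtree

def insert(root, word, freq):
    node = root
    node.max_freq = max(node.max_freq, freq)
    for letter in word:
        node = node.children.setdefault(letter, Node())
        node.max_freq = max(node.max_freq, freq)
    node.freq = freq

def top_n(root, prefix, n):
    node = root
    for letter in prefix:
        node = node.children.get(letter)
        if node is None:
            return []
    # Best-first search: always expand the subtree that can still
    # contain the most frequent remaining word, so only a handful
    # of nodes are visited instead of the whole subtree.
    tie = 0  # tie-breaker so Node objects are never compared
    heap = [(-node.max_freq, tie, prefix, node)]
    results = []
    while heap and len(results) < n:
        neg_freq, _, word, node = heapq.heappop(heap)
        if node is None:          # a finished word reached the top
            results.append((word, -neg_freq))
            continue
        if node.freq is not None:
            tie += 1
            heapq.heappush(heap, (-node.freq, tie, word, None))
        for letter, child in node.children.items():
            tie += 1
            heapq.heappush(heap, (-child.max_freq, tie, word + letter, child))
    return results

root = Node()
for word, freq in [("the", 222), ("thou", 100), ("ten", 145),
                   ("to", 208), ("tens", 110), ("voices", 118),
                   ("voice", 139)]:
    insert(root, word, freq)

print(top_n(root, "t", 3))  # [('the', 222), ('to', 208), ('ten', 145)]
```

For the prefix T, the search pops only a few nodes before emitting “the”, “to”, and “ten”, exactly the behaviour described above.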

Averaging the frequency

Another problem is the file size and the suffixes (such as “-ing”, “-ed”, and “-s” in the English language). When converting from TST to DAWG, you can join the suffixes together only if their frequencies are equal (the previous implementation by Christoph Kerschbaumer used this strategy), or you can join them if their order in the linked list is the same and store the average frequency in the compressed node.

In the latter case, you can reduce the number of nodes (in the English dictionary, the number of nodes went from 200'000 to 130'000). The frequencies are averaged only if doing so does not change the order of words (a less frequent suffix is never joined with a more frequent one).

For example, consider the same words:

the 222
thou 100
ten 145
to 208
tens 110
voices 118
voice 139
The prefix “-s” has an average frequency of (110+118)/2=114 and the null ending (the end of the word) has an average frequency of (222+100+208+145+110+139+118)/7=149, so the latter will be suggested more often.
ternary search tree
Joining partially equal linked lists

The nodes are joined together only if their subtrees are equal and their linked lists are equal. For example, if “ends” were more likely than “end”, it would not be joined with “tens”, which is less likely than “ten”. The averaging changes frequencies but preserves the relative order of words with the same prefix.

The program is careful enough to preserve the linked lists when joining the nodes. But if the linked lists are partially equal, it can join them. Consider the following example (see the drawing to the right of this text):
ended 144
ending 135
standards 136
ends 130
standing 134
stands 133
The “-s” node has the averaged frequency of round((133+130)/2)=132, and “-ing” nodes have round((134+135)/2)=134. The linked lists and the nodes are partially joined: the common part of “standARDS — standING — standS” and “endED — endING — endS” is “-ing”, “-s”, and this part is joined. Again, if “standing” were more likely than “standards” or less likely than “stands”, it would be impossible to join the nodes, because their order in the linked list would be different.
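The arithmetic in both examples is easy to sanity-check (a quick check, assuming simple rounding to the nearest integer):

```python
def avg_freq(*freqs):
    # Average frequency stored in a joined (compressed) node.
    return round(sum(freqs) / len(freqs))

assert avg_freq(110, 118) == 114                           # "-s" suffix
assert avg_freq(222, 100, 208, 145, 110, 139, 118) == 149  # null ending
assert avg_freq(133, 130) == 132                           # "-s", second example
assert avg_freq(134, 135) == 134                           # "-ing", second example
```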

Besides that, I allocated 20 bits (two bytes plus an additional four bits) for each pointer instead of 32 bits. Sixteen bits (65'536 nodes) were not enough for Mozilla's dictionaries, but twenty bits (1 million nodes) are enough and leave room for further expansion. The nodes are stored in a Uint16Array, including the letter and the frequency (16 bits each), the left, right, and middle pointers, and the pointer for the linked list (the lower 16 bits of each pointer). An additional Uint16 stores the higher four bits of each pointer (4 bits × 4 pointers). After these changes, the dictionary size went down from the initial 3.9 MB to 1.8 MB.
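The packing scheme can be illustrated as follows (an illustrative reconstruction in Python rather than the actual Gaia JavaScript; the field order is an assumption):

```python
def pack_node_pointers(left, right, middle, nxt):
    # Each pointer is 20 bits. The low 16 bits of each go into their
    # own 16-bit slot; the high 4 bits of all four pointers are packed
    # into one extra 16-bit word (4 bits x 4 pointers).
    ptrs = (left, right, middle, nxt)
    assert all(0 <= p < (1 << 20) for p in ptrs)
    lows = [p & 0xFFFF for p in ptrs]
    highs = 0
    for i, p in enumerate(ptrs):
        highs |= ((p >> 16) & 0xF) << (4 * i)
    return lows + [highs]   # five 16-bit words instead of four 32-bit ones

def unpack_node_pointers(words):
    lows, highs = words[:4], words[4]
    return [(((highs >> (4 * i)) & 0xF) << 16) | lows[i] for i in range(4)]

packed = pack_node_pointers(70_000, 3, 1_048_575, 65_536)
assert unpack_node_pointers(packed) == [70_000, 3, 1_048_575, 65_536]
```

Five 16-bit words per node's pointer block instead of four 32-bit ones is where much of the size reduction comes from.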


The optimized program is not only faster but also uses a smaller dictionary.

This wraps up how we can optimize it. Then comes the learning part. You can have a look at my talk, which also has a working demo of how it works, especially with Bengali.

The following people guided me along the way, and I am forever grateful to them for whatever I could do in the project:
  • Dietrich Ayala - For actually helping me start the project and getting me connected to everyone who helped me
  • Indranil Das Gupta - For valuable suggestions and also helping me get FIRE corpus
  • Sankarshan Mukhopadhyay - Valuable suggestions on my method and pointing out related work in Fedora
  • Prasenjit Majumder, Ayan Bandyopadhyay - For getting me access to the FIRE corpus
  • Tim Chien and Jan Jongboom - I learned from their previous works and for handling all my queries
  • Mahay Alam Khan - For getting me in touch with Anirudhha
  • Countless people in #gaia on Mozilla IRC who patiently listened to all my problems while I tried to build Gaia on a Windows machine (*sigh*) and, later, to all the installation problems
Related Talks:

I gave two talks on the related topic. 

The OSB talk was my first time speaking at a conference, so you can visibly see how sloppy I am. By JSFoo, I was a little more experienced (one talk's worth), so I became a little less sloppy (still plenty).
I presented another derivation of this work at Open Source Hong Kong, but I don't believe that one was recorded anywhere.

JSFoo 2015

OpenSource Bridge 2015

Mozilla Addons BlogNew Site for Thunderbird and SeaMonkey Add-ons

When Firefox Quantum (version 57) launched in November 2017, it exclusively supported add-ons built using WebExtensions APIs. addons.mozilla.org (AMO) has followed a parallel development path to Firefox and will soon only support WebExtensions-based add-ons.

As Thunderbird and SeaMonkey do not plan to fully switch over to the WebExtensions API in the near future, the Thunderbird Council has agreed to host and manage a new site for Thunderbird and SeaMonkey add-ons. This new site, addons.thunderbird.net, will go live in July 2018.

Starting on July 12th, all add-ons that support Thunderbird and SeaMonkey will be automatically ported to addons.thunderbird.net. The update URLs of these add-ons will be redirected from AMO to the new site and all users will continue to receive automatic updates. Developer accounts will also be ported and developers will be able to log in and manage their listings on the new site.

Thunderbird or SeaMonkey add-ons that also support Firefox or Firefox for Android will remain listed on AMO.

If you come across any issues or need support during the migration, please post to this thread in our community forum.

The post New Site for Thunderbird and SeaMonkey Add-ons appeared first on Mozilla Add-ons Blog.

Mark CôtéA Vision for Engineering Workflow at Mozilla (Part Three)

This is the last post in a three-part series on A Vision for Engineering Workflow at Mozilla. The first post in this series provided some background, while the second introduced the first four points of our nine-point vision.

The Engineering Workflow Vision (continued)

5. Reviews are straightforward and streamlined

The Engineering Workflow team has spent a lot of time over the last few years on review tools, starting with Splinter, moving into MozReview, and now onto Phabricator. In particular, MozReview was a grand experiment; its time may be over, but we learned a lot from the project and are incorporating these lessons not just into our new tools but also into how we work.

There are a lot of aspects to the code-review process. First and foremost is, of course, the tool that is used to actually leave reviews. One important meta-aspect of review-tool choice is that there should only be one. Mozilla has suffered from the problems caused by multiple review tools for quite a long time. Even before MozReview, users had the choice of raw diffs versus Splinter. Admittedly, the difference there is fairly minimal, but if you look at reviews conducted with Splinter, you will see the effect of having two systems: initial reviews are done in Splinter, but follow ups are almost always done as comments left directly in the bug. The Splinter UI rarely shows any sort of conversation. We didn’t even use this simple tool entirely effectively.

Preferences for features and look and feel in review tools vary widely. One of the few characteristics that is uncontroversial is that a review tool should be fast—but of course even this is a trade-off, as nothing is faster than commenting directly on a diff and pasting it as a comment into Bugzilla. However, at a minimum the chosen tool should not feel slow and cumbersome, regardless of features.

Other aspects that are more difficult but nice to have include

  • Differentiating between intentional changes made by the patch author versus those from the patch being rebased
  • Clear and effective interdiff support
  • Good VCS integration

For the record, while not perfect, we believe Phabricator, our chosen review tool for the foreseeable future, fares pretty well against all of these requirements, while also being relatively intuitive and visually pleasing.

There are other parts of code review that can be automated to ease the whole process. Given that they are fairly specific to the way Mozilla works, they will likely need to be custom solutions, but the work and maintenance involved should easily be paid off in terms of efficiency gains. These include

  • Automated reviews to catch all errors that don’t require human judgement, e.g., linting. Even better would be the tool fixing such errors automatically, which would eliminate an extra review cycle. This feedback should ideally be available both locally and after review submission.
  • Reviewers are intelligently suggested. At the minimum, our module system should be reflected in the tool, but we can do better by calculating metrics based on file history, reviewer load and velocity, and other such markers.
  • Similarly, code owners should be clearly identified and enforced; it should be made clear if the appropriate reviewers have not signed off on a change, and landing should be prevented.

This last point segues into the next item in the vision.

6. Code is landed automatically

Mozilla has had an autoland feature as part of MozReview for about 2.5 years now, and we recently launched Lando as our new automatic-landing tool integrated with Phabricator. Lando has incorporated some of the lessons we learned from MozReview (not the least of which is “don’t build your custom tools directly into your third-party tools”), but there is much we can do past our simple click-to-land system.

One feature that will unlock a lot of improvements is purely automatic landings, that is, landings that are initiated automatically after the necessary reviews are granted. This relies on the system understanding which reviews are necessary (see above), but beyond that it needs just a simple checkbox to signal the author’s intent to land (so we avoid accidentally landing patches that are works in progress). Further, as opposed to Try runs for testing, developers don’t tend to care too much about the time to land a completed patch as long as a whole series lands together, so this feature could be used to schedule landings over time to better distribute load on the CI systems.

Automatic landings also provide opportunities to reduce manual involvement in other processes, including backouts, uplifts, and merges. Using a single tool also provides a central place for record-keeping, to both generate metrics and follow how patches move through the trains. More on this in future sections.

7. Bug handling is easy, fast, and friendly

Particularly at Mozilla, bug tracking is a huge topic, greater than code review. For better or worse, Bugzilla has been a major part of the central nervous system of Mozilla engineering since its earliest days; indeed, Bugzilla turns 20 in just a couple months! Discussing Bugzilla’s past, present, and future roles at Mozilla would take many blog posts, if not a book, so I’ll be a bit broad in my comments here.

First, and probably most obviously, Mozilla’s bug tracker should prioritize usability and user experience (yes they’re different). Mozilla engages not just full-time engineer employees but also a very large community with diverse backgrounds and skill sets. Allowing an engineer to be productive while encouraging users without technical backgrounds to submit bug reports is quite a challenge, and one that most high-tech organizations never have to face.

Another topic that has come up in the past is search functionality. Developers frequently need to find bugs they’ve seen previously, or want to find possible duplicates of recently filed bugs. The ideal search feature would be fast, of course, but also accurate and relevant. I think about these two aspects slightly differently: accuracy pertains to finding a specific bug, whereas relevancy is important when searching for a class of bugs matching some given attributes.

Over the past couple years we have been trying to move certain use cases out of Bugzilla, so that we can focus specifically on engineering. This is part of a grander effort to consolidate workflows, which has a host of benefits ranging from simpler, more intuitive interfaces to reduced maintenance burden. However this means we need to understand specific use cases within engineering and implement features to support them, in addition to the more general concerns above. A recent example is the refinement of triage processes, which is helped along by specific improvements to Bugzilla.

8. Metrics are comprehensive, discoverable, and understandable

The value of data about one’s products and processes is not something that needs much justification today. Mozilla has already invested heavily in a data-driven approach to developing Firefox and other applications. The Engineering Workflow team is starting to do the same, thanks to infrastructure built for Firefox telemetry.

The list of data we could benefit from collecting is endless, but a few examples include

  • backout rates and causes
  • build times
  • test-run times
  • patch-review times
  • tool adoption

We’re already gathering and visualizing some of these stats:

Naturally such data is even more valuable if shared so other teams can analyze it for their benefit.

9. Information on “code flow” is clear and discoverable

This item builds on the former. It is the most nebulous, but to me it is one of the most interesting.

Code changes (patches, commits, changesets, whatever you want to call them) have a life cycle:

  1. A developer writes one or more patches to solve a problem. Sometimes the patches are in response to a bug report; sometimes a bug report is filed just for tracking.

  2. The patches are often sent to Try for testing, sometimes multiple times.

  3. The patches are reviewed by one or more developers, sometimes through multiple cycles.

  4. The patches are landed, usually on an integration branch, then merged to mozilla-central.

  5. Occasionally, the patches are backed out, in which case flow returns to step 1.

  6. The patches are periodically merged to the next channel branch, or occasionally uplifted directly to one or more branches.

  7. The patches are included in a specific channel build.

  8. Repeat steps 6 and 7 until the patch ends up in the mozilla-release branch and is included in a Release build.

There’s currently no way to easily follow a code change through these stages, and thus no metrics on how flow is affected by the various aspects of a change (size, area of code, author, reviewer(s), etc.). Further, tracking this information could provide clear indicators of flow problems, such as commits that are ready to land but have merge conflicts, or commits that have been waiting on review for an extended period. Collecting and visualizing this information could help improve various engineering processes, as well as just the simple thrill of literally watching your change progress to release.

This is a grand idea that needs a lot more thought, but many of the previous items feed naturally into it.


This vision is just a starting point. We’re building a short-to-medium-term road map, while we think about a larger 2-to-3-year plan. Figuring out how to prioritize by a combination of impact, feasibility, risk, and effort is no small feat, and something that we’ll likely have to course-correct over time. Overall, the creation of this vision has been inspiring for my team, as we can now envision a better world for Mozilla engineering and understand our part in it. I hope the window this provides into the work of the Engineering Workflow team is valuable to other teams both within and outside of Mozilla.

Firefox Test Pilot: Notes is available on Android

Today we are releasing Test Pilot Notes on Android. This new app allows you to access all your notes from the Firefox sidebar on your Android device.

Download Notes for Android on Google Play

The mobile companion application supports the same multi-note and end-to-end encryption features as the WebExtension. After you sign in to the app, it will sync all your existing notes from Firefox desktop, so you can access them on the go. You can also use the app standalone, but we suggest pairing it with the WebExtension for maximum efficiency.

Please provide any feedback and share your experience using the “Feedback” button in the app drawer. This is one of the first mobile Test Pilot experiments and we would like to hear from you and understand your expectations for future Test Pilot mobile applications.

Updated WebExtension

We have also released a new update to WebExtension version of Notes.

The editor has been updated to the latest CKEditor 5 release. This brings improvements to Chinese, Japanese and Korean inputs into Notes. Other important fixes include:

  • Improved line break support, pasted plain text now supports line breaks.
  • Better font size display features.
  • Updated toolbar icons.

Big thanks to the CKEditor development team and our SoftVision QA team for making these releases happen! We would also like to extend our gratitude to our localization contributors, release manager Sylvestre Ledru, and open source contributors Sébastien Barbier and Rémy Hubscher.

Notes is available on Android was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Firefox Test Pilot: Take your passwords everywhere with Firefox Lockbox

Firefox users, you can now easily access the passwords you save in the browser in a lightweight iOS app!

Download Firefox Lockbox from the App Store. Sign in with your Firefox Account, and your saved usernames and passwords will securely sync to your device using 256-bit encryption, giving you convenient access to your apps and websites, wherever you are. Find out more about the experiment on Firefox Test Pilot.

We have so many online accounts, and it’s hard to keep track of them all. The browser can save them, but they’re not always easy to find or access later, especially when trying to get into the same account on mobile. The Firefox Lockbox iOS app is our first experiment to help you find and use your passwords everywhere.

Take back control of your digital life with Firefox Lockbox.

Take your passwords everywhere with Firefox Lockbox was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Mozilla Blog: Introducing Firefox’s First Mobile Test Pilot Experiments: Lockbox and Notes

This summer, the Test Pilot team has been heads down working on experiments for our Firefox users. On the heels of our most recent and successful desktop Test Pilot experiments, Firefox Color and Side View, it was inevitable that the Test Pilot Program would expand to mobile.

Today, we’re excited to announce the first Test Pilot experiments for your mobile devices. With these two experiences, we are pushing beyond the boundaries of the desktop browser and into mobile apps. We’re taking the first steps toward bringing Mozilla’s mission of privacy, security and control to mobile apps beyond the browser.

What Are the New Mobile Test Pilot Experiments?

Firefox Lockbox for iOS – Take your passwords everywhere

Are you having a tough time keeping track of all the different passwords you’ve made for your online accounts? How many times have you had to reset a password you forgot? What do you do when you’ve saved a password on your desktop but have no way to access that online account on your mobile device?  Look no further, we’ve created a simple app to take your passwords anywhere you go.

With Firefox Lockbox, iOS users will be able to seamlessly access Firefox saved passwords. This means you can use any password you’ve saved in the browser to log into any online account, like your Twitter or Instagram app. No need to open a web page. It’s that seamless and simple. Plus, you can use Face ID or Touch ID to unlock the app, so you can safely access your accounts.

Notes by Firefox for Android – Simple, secure, note-taking anywhere

Jotting down quick notes is something many of us do everyday to keep track of our busy lives. Whether you’re on your desktop at home or at the office, or on the go with your mobile device, we want to make sure you’re able to access those notes wherever you are.

Notes by Firefox is a simple, secure place to take and store notes across your devices – desktop AND mobile. Now Firefox account users have the option to sync notes from any Firefox browser on any Android smartphone or tablet. Plus, your files are encrypted end-to-end, which means that only you can read them.

Sync notes from any Firefox browser on any Android smartphone or tablet

How do I get started?

The Test Pilot program is open to all Firefox users and helps us test and evaluate a variety of potential Firefox features. To activate the new Lockbox and Notes extensions, you must have a Firefox Account and Firefox Sync for full functionality.

If you’re familiar with Test Pilot then you know all our projects are experimental, so we’ve made it easy to give us feedback or disable features at any time from testpilot.firefox.com.

We’re committed to making your web browsing experience more efficient, and are excited for the even bigger mobile experiments still ahead.

Check out the new Firefox Lockbox and Notes by Firefox extensions and help us decide which new features to build into future versions of Firefox.

The post Introducing Firefox’s First Mobile Test Pilot Experiments: Lockbox and Notes appeared first on The Mozilla Blog.

Don Marti: Bug futures: business models

Recent question about futures markets on software bugs: what's the business model?

As far as I can tell, there are several available models, just as there are multiple kinds of companies that can participate in any securities or commodities market.

Cushing, Oklahoma

Oracle operator: Read bug tracker state, write futures contract state, profit. This business would take an agreed-upon share of any contract in exchange for acting as a referee. The market won't work without the oracle operator, which is needed in order to assign the correct resolution to each contract, but it's possible that a single market could trade contracts resolved by multiple oracles.

Actively managed fund: Invest in many bug futures in order to incentivize a high-level outcome, such as support for a particular use case, platform, or performance target.

Bot fund: An actively managed fund that trades automatically, using open source metrics and other metadata.

Analytics provider: Report to clients on the quality of software projects, and the market-predicted likelihood that the projects will meet the client's maintenance and improvement requirements in the future.

Stake provider: A developer participant in a bug futures market must invest to acquire a position on the fixed side of a contract. The stake provider enables low-budget developers to profit from larger contracts, by lending or by investing alongside them.

Arbitrageur: Helps to re-focus development efforts by buying the fixed side of one contract and the unfixed side of another. For example, an arbitrageur might buy the fixed side of several user-facing contracts and the unfixed side of the contract on a deeper issue whose resolution will result in a fix for them.

Arbitrageurs could also connect bug futures to other kinds of markets, such as subscriptions, token systems, or bug bounties.

Previous items in the bug futures series:

Bugmark paper

A trading market to incentivize secure software: Malvika Rao, Georg Link, Don Marti, Andy Leak & Rich Bodo (PDF) (presented at WEIS 2018)

Bonus link

Corporate Prediction Markets: Evidence from Google, Ford, and Firm X (PDF) by Bo Cowgill and Eric Zitzewitz.

Despite theoretically adverse conditions, we find these markets are relatively efficient, and improve upon the forecasts of experts at all three firms by as much as a 25% reduction in mean squared error.

(This paper covers a related market type, not bug futures. However some of the material about interactions of market data and corporate management could also turn out to be relevant to bug futures markets.)
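To make the quoted figure concrete, here is a toy calculation with made-up numbers (they are not data from the paper): a market whose forecasts carry three-quarters of the experts’ mean squared error achieves exactly the 25% reduction described.

```shell
# Hypothetical MSE values purely for illustration; not from the paper.
awk 'BEGIN {
    expert = 0.040   # hypothetical MSE of expert forecasts
    market = 0.030   # hypothetical MSE of market forecasts
    printf "%.0f%% reduction\n", (1 - market / expert) * 100
}'
```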

Creative Commons

Pipeline monument in Cushing, Oklahoma: photo by Roy Luck for Wikimedia Commons. This file is licensed under the Creative Commons Attribution 2.0 Generic license.

This Week In Rust: This Week in Rust 242

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is cargo-geiger, which detects usage of unsafe Rust in your project and its dependencies.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

158 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.


No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

actix-web has removed all unsound use of unsafe in its codebase. It’s down to fewer than 15 occurrences of unsafe, from 100+.

u/_ar7 celebrating this commendable achievement.

Thanks to Jules Kerssemakers for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Kartikaya Gupta: Howto: FEMP stack on Amazon EC2

I recently migrated a bunch of stuff (including this website) to Amazon EC2, running on a FEMP (FreeBSD, nginx, MySQL, PHP) stack. I had to fiddle with a few things to get it running smoothly, and wanted to document the steps in case anybody else is trying to do this (or I need to do it again later). This assumes you have an Amazon AWS account and some familiarity with how to use it.

Before you start

Ensure you know what region and instance type you want. I used the Canada (Central) region but it should work the same in any other region. And I used a t2.micro instance type because I have a bunch of stuff running on the instance, but presumably you could use a t2.nano type if you wanted to go even lighter. Also, I'm using Amazon Route53 to handle the DNS, but if you have DNS managed separately that's fine too.

Upload your SSH public key

In the EC2 dashboard, select "Key Pairs" under the "Network and Security" section in the left pane. Click on "Import Key pair" and provide the public half of your SSH keypair. This will get installed into the instance so that you can SSH in to the instance when it boots up.

Create the instance

Select "Instances" in the EC2 dashboard, and start the launch wizard by clicking "Launch Instance". You'll find the FreeBSD images under "Community AMIs" if you search for FreeBSD using the search box. Generally you want to grab the most recent FreeBSD release you can find (note: the search results are not sorted by recency). If you want to make sure you're getting an official image, head over to the freebsd-announce mailing list, and look for the most recent release announcement email. As of this writing it is 11.2-RELEASE. The email should contain AMI identifiers for all the different EC2 regions; for example the Canada AMI for 11.2-RELEASE is ami-a2f97bc6. Searching for that in the launch wizard finds it easily.

Next step is to select the instance type. Select one that's appropriate for your needs (I'm using t2.micro). The next step is to configure instance details. Unless you have specific changes you want to make here you can leave this with the default settings. Next you have to choose the root volume size. With my instances I like using a 10 GB root volume for the system and swap, and using a separate EBS volume for the "user data" (home folders and whatnot). For reference my 10G root volume is currently 54% full with the base system, extra packages, and a 2G swap file.

After that you can add tags if you want, change the security groups, and finally review everything. I just go with the defaults for the security groups, since it allows external SSH access and you can always change it later. Finally, time to launch! In the launch dialog you select the keypair from the previous step and you're off to the races.

Logging in

Once the instance is up, the EC2 console should display the public IP address. You'll need to log in with the user ec2-user at that IP address, using the keypair you selected previously. If you're paranoid about security (and you should be), you can verify the host key that SSH shows you by selecting the instance in the EC2 console, going to Actions -- Instance Settings -- Get Instance Screenshot. The screenshot should display the host keys as shown on the instance itself, and you can compare it to what SSH is showing to ensure you're not getting MITM'd.

Initial housekeeping

This part is sort of optional, but I like having a reasonable hostname and shell installed before I get to work. I'm going to use jasken.example.com as the hostname and I like using bash as my default shell. To do this, run the following commands:

su                                  # switch to root shell
sysrc hostname="jasken.example.com" # this modifies /etc/rc.conf
pkg update                          # update package manager
pkg install -y vim bash             # install useful packages
chsh -s /usr/local/bin/bash root    # change shell to bash for root and ec2-user
chsh -s /usr/local/bin/bash ec2-user

At this point I also like to reboot the machine (pretty much the only time I ever have to) because I've found that not everything picks up the hostname change if you change it via the hostname command while the instance is running. So run reboot and log back in once the instance is back up. The rest of the steps need root access so go ahead and su to root once you're back in.

IPv6 configuration

While you're rebooting, you can also set up IPv6 support. FreeBSD has everything built-in, you just need to fiddle with the VPC settings in AWS to get an IP address assigned. Note the VPC ID and Subnet ID in your instance's details, and then go to the VPC dashboard (it's a separate AWS service, not inside EC2). Find the VPC your instance is in, then go to Actions -- Edit CIDRs. Click on the "Add IPv6 CIDR" button and then "Close". Still in the VPC dashboard, select "Subnets" from the left panel and select the subnet of your instance. Here again, go to Actions -- Edit IPv6 CIDRs, and then click on "Add IPv6 CIDR". Put "00" in the box that appears to fill in the IPv6 subnet and hit ok.

Next, go to the "Route Tables" section of the VPC dashboard, and select the route table for the VPC. In the Routes tab, add a new route with destination ::/0 and the same internet gateway as the existing IPv4 (0.0.0.0/0) entry. This ensures that outbound IPv6 connections will use the external network gateway.

Finally, go back to the EC2 dashboard, select your instance, and go to Actions -- Networking -- Manage IP addresses. Under IPv6 addresses, click "Assign new IP" and "Yes, Update" to auto-assign a new IPv6 address. That's it! If you SSH in to the instance you should be able to ping6 google.com successfully for example. It might take a minute or so for the connection to start working properly.

Installing packages

For the "EMP" part of the FEMP stack, we need to install nginx, mysql, and php. Also because we're not barbarians we're going to make sure our webserver is TLS-enabled with a Let's Encrypt certificate that renews automatically, for which we want certbot. So:

pkg install -y nginx mysql56-server php56 php56-mysql php56-mysqli php56-gd php56-json php56-xml php56-dom php56-openssl py36-certbot

Note that the set of PHP modules you need may vary; I'm just listing the ones that I needed, but you can always install/uninstall more later if you need to.

PHP setup

To use PHP over FastCGI with nginx we're going to use the php-fpm service. Instead of having the service listen on a network socket, we'll have it listen on a Unix file socket, and make sure PHP and nginx are in agreement about the info passed back and forth. The sed commands below do just that.

cd /usr/local/etc/
sed -i "" -e "s#listen = #listen = /var/run/php-fpm.sock#" php-fpm.conf
sed -i "" -e "s#;listen.owner#listen.owner#" php-fpm.conf
sed -i "" -e "s#;listen.group#listen.group#" php-fpm.conf
sed -i "" -e "s#;listen.mode#listen.mode#" php-fpm.conf
sed -e "s#;cgi.fix_pathinfo=1#cgi.fix_pathinfo=0#" php.ini-production > php.ini
sysrc php_fpm_enable="YES"
service php-fpm start
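If you want to see what the listen-directive edit achieves before touching the real config, you can dry-run the substitution on a stand-in file (the file path and contents here are illustrative, not the full stock php-fpm.conf):

```shell
# Stand-in for php-fpm.conf's default TCP listen line (illustrative only).
printf 'listen = \n;listen.owner = www\n' > /tmp/php-fpm.demo
# Same style of substitution, without -i so the stand-in is printed rather than edited in place.
sed -e 's#listen = #listen = /var/run/php-fpm.sock#' /tmp/php-fpm.demo
```

The first line of the output should now point at the Unix socket that nginx will be configured to talk to.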

MySQL setup

This is really easy to set up. The hard part is optimizing the database for your workload, but that's outside the scope of my knowledge and of this tutorial.

sysrc mysql_enable="YES"
service mysql-server start
mysql_secure_installation   # this is interactive, you'll want to set a root password
service mysql-server restart

Swap space

MySQL can eat up a bunch of memory, and it's good to have some swap set up. Without this you might find, as I did, that weekly periodic tasks such as rebuilding the locate database can result in OOM situations and take down your database. On a t2.micro instance which has 1GB of memory, a 2GB swap file works well for me:

# Make a 2GB (2048 1-meg blocks) swap file at /usr/swap0
dd if=/dev/zero of=/usr/swap0 bs=1m count=2048
chmod 0600 /usr/swap0
# Install the swap filesystem
echo 'md99 none swap sw,file=/usr/swap0,late 0 0' >> /etc/fstab
# and enable it
swapon -aL

You can verify the swap is enabled by using the swapinfo command.
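The dd arguments are just arithmetic: with 1-megabyte blocks (bs=1m), the block count is the desired size in gigabytes times 1024. A quick sketch for computing other sizes:

```shell
# Desired swap size in GB; 2 GB suits a t2.micro with 1 GB of RAM.
swap_gb=2
# With bs=1m, count is GB * 1024 one-megabyte blocks.
count=$((swap_gb * 1024))
echo "dd if=/dev/zero of=/usr/swap0 bs=1m count=$count"
```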

nginx setup

Because the nginx config can get complicated, specially if you're hosting multiple websites, it pays to break it up into manageable pieces. I like having an includes/ folder which contains snippets of reusable configuration (error pages, PHP stuff, SSL stuff), and a sites-enabled/ folder that has a configuration per website you're hosting. Also, we want to generate some Diffie-Hellman parameters for TLS purposes. So:

cd /usr/local/etc/nginx/
openssl dhparam -out dhparam.pem 4096
mkdir includes
cd includes/
# This creates an error.inc file with error handling snippet
cat >error.inc <<'END'
error_page 500 502 503 504  /50x.html;
location = /50x.html {
    root   /usr/local/www/nginx-dist;
}
END
# PHP snippet
cat >php.inc <<'END'
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass   unix:/var/run/php-fpm.sock;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  $request_filename;
    include        fastcgi_params;
}
END
# SSL snippet
cat >ssl.inc <<'END'
ssl on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /usr/local/etc/nginx/dhparam.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
END

If you want to fiddle with the above, feel free. I'm not a security expert so I don't know what a lot of the stuff in the ssl.inc does, but based on the Qualys SSL test it seems to provide a good security/compatibility tradeoff. I mostly cobbled it together from various recommendations on the Internet.

Finally, we set up the server entry (assuming we're serving up the website "example.com") and start nginx:

cd /usr/local/etc/nginx/
cat >nginx.conf <<'END'
user  www;
worker_processes  1;

error_log  /var/log/nginx/error.log info;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    include  sites-enabled/example;
}
END
mkdir sites-enabled
cd sites-enabled/
cat >example <<'END'
server {
    listen       [::]:80;
    listen       80;
    server_name  example.com;
    root /usr/local/www/nginx;
    index  index.php index.html index.htm;

    include includes/php.inc;

    location / {
        try_files $uri $uri/ =404;
    }

    include includes/error.inc;
}
END
sysrc nginx_enable="YES"
service nginx start

Open up ports

The nginx setup above is sufficient to host an insecure server on port 80, which is what we need in order to get the certificate that we need to enable TLS. So at this point go to your DNS manager, wherever that is, and point the A and AAAA records for "example.com" (or whatever site you're hosting) to the public IP addresses for your instance. Also, go to the "Security Groups" pane in the EC2 dashboard and edit the "Inbound" tab for your instance's security group to allow HTTP traffic on TCP port 80 from sources and ::/0 (i.e. any IPv4 or IPv6 address), and the same for HTTPS traffic on TCP port 443.

After you've done that and the DNS changes have propagated, you should be able to go to http://example.com in your browser and get the nginx welcome page, served from your very own /usr/local/www/nginx folder.


Now it's time to get a TLS certificate for your example.com webserver. This is almost laughably easy once you have regular HTTP working:

certbot-3.6 certonly --webroot -n --agree-tos --email 'admin@example.com' -w /usr/local/www/nginx -d example.com
crontab <(echo '0 0 1,15 * * certbot-3.6 renew --post-hook "service nginx restart"')

Make sure to replace the email address and domain above as appropriate. This will use certbot's webroot plugin to get a Let's Encrypt TLS cert and install it into /usr/local/etc/letsencrypt/live/. It also installs a cron job to automatically attempt renewal of the cert twice a month. This does nothing if the cert isn't about to expire, but otherwise renews it using the same options as the initial request. The final step is updating the sites-enabled/example config to redirect all HTTP requests to HTTPS, and use the aforementioned TLS cert.

cd /usr/local/etc/nginx/sites-enabled/
cat >example <<'END'
server {
    listen       [::]:80;
    listen       80;
    server_name  example.com;
    return 301 https://$host$request_uri;
}

server {
    listen       [::]:443;
    listen       443;
    server_name  example.com;
    root   /usr/local/www/nginx;
    index  index.php index.html index.htm;

    include includes/ssl.inc;
    ssl_certificate /usr/local/etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/example.com/privkey.pem;

    include includes/php.inc;

    location / {
        try_files $uri $uri/ =404;
    }

    include includes/error.inc;
}
END
service nginx restart

And that's all, folks!

Parting words

The above commands set things up so that they persist across reboots. That is, if you stop and restart the EC2 instance, everything should come back up enabled. The only problem is that if you stop and restart the instance, the IP address changes so you'll have to update your DNS entry.

If there's commands above you're unfamiliar with, you should use the man pages to read up on them. In general copying and pasting commands from some random website into a command prompt is a bad idea unless you know what those commands are doing.

One thing I didn't cover in this blog post is how to deal with the daily emails that FreeBSD will send to the root user. I also run a full blown mail gateway with postfix and I plan to cover that in another post.

The Rust Programming Language Blog: Announcing Rust 1.27.1

The Rust team is happy to announce a new version of Rust, 1.27.1. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.27.1 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.27.1 on GitHub.

What’s in 1.27.1 stable

This patch release fixes a bug in the borrow checker’s verification of match expressions. This bug was introduced in 1.26.0 with the stabilization of match ergonomics. We are not certain that this specific problem indicated actual unsoundness in the borrow checker, but since it might have, we decided to issue a point release. The code sample below caused a panic inside the compiler prior to this patch.

fn main() {
    let a = vec!["".to_string()];
    a.iter().enumerate()
            .take_while(|(_, &t)| false)
            .collect::<Vec<_>>();
}

1.27.1 will reject the above code with this error message:

error[E0507]: cannot move out of borrowed content
 --> src/main.rs:4:30
  |
4 |             .take_while(|(_, &t)| false)
  |                              ^-
  |                              ||
  |                              |hint: to prevent move, use `ref t` or `ref mut t`
  |                              cannot move out of borrowed content

error: aborting due to previous error

Alongside the match ergonomics fix, a security vulnerability was also found in rustdoc, the standard documentation generator for Rust projects. That vulnerability is addressed by the second patch contained in this release, by removing the default search path for rustdoc plugins. This functionality will be entirely removed in Rust 1.28.0. This plugin infrastructure predates Rust 1.0 and has never been usable on stable, and has been unusable on nightly for many months. Expect to hear more about the removal in the next release: the current patch removes the default search path (instead, users must specify it explicitly), while the next release will remove the functionality entirely.

Cameron Kaiser: Pro tip: sleep's good for your Power Mac

Now that I'm alternating between two daily drivers (my Quad G5 and my Talos II), the Quad G5 sleeps fairly reliably with the Sonnet USB-FireWire combo card out once I move the KVM focus off it. If I don't do that, the KVM detects the G5 has slept and moves the focus automatically away from it, which the G5 detects as USB activity, and exits sleep. The solution is a little AppleScript that waits a few seconds for me to switch the KVM and then tells the Finder to snooze. The Talos II doesn't sleep yet but I'll be interested to see if later firmware updates support that.

But sleeping the G5 has unquestionably been a good thing. Not only does it prolong its life by reducing heat (another plus in summer) as well as saving a substantial amount of energy (around 20W sleeping versus 200-250W running), but sleeping also can speed up TenFourFox. If you have lots of tabs open and those tabs are refreshing their data or otherwise running active content, then this contributes to a greater need for garbage collection and this will slow down your user experience as this overhead accumulates. (This is why running TenFourFox from a "fresh" start is much faster than when it's been chugging away for awhile.) It's possible to "pause" TenFourFox to a certain extent but the browser really isn't tested this way and may not behave properly when this is done. Sleeping the Power Mac pauses everything, so the cruft in memory that garbage collection has to clean out doesn't pile up while you're not using the machine, and everything comes back up in sync.

A whole lot of stuff has landed for TenFourFox FPR9. More about that when the beta is out, which I'm hoping will be by the middle or end of July.

Frédéric Wang: Review of Igalia's Web Platform activities (H1 2018)

This is the semiyearly report to let people know a bit more about Igalia’s activities around the Web Platform, focusing on the activity of the first semester of year 2018.



Igalia has proposed and developed the specification for BigInt, enabling math on arbitrary-sized integers in JavaScript. Igalia has been developing implementations in SpiderMonkey and JSC, where core patches have landed. Chrome and Node.js shipped implementations of BigInt, and the proposal is at Stage 3 in TC39.

Igalia is also continuing to develop several features for JavaScript classes, including class fields. We developed a prototype implementation of class fields in JSC. We have maintained Stage 3 in TC39 for our specification of class features, including static variants.

We also participated in WebAssembly (now at First Public Working Draft) and in internationalization work on new features such as Intl.RelativeTimeFormat (currently at Stage 3).

Finally, we have written more tests for JS language features, performed maintenance and optimization, and participated in other spec discussions at TC39. Among performance optimizations, we contributed a significant Promise performance improvement to V8.


Igalia has continued the standardization effort at the W3C. We are pleased to announce that the following milestones have been reached:

A new charter for the ARIA WG as well as drafts for ARIA 1.2 and Core Accessibility API Mappings 1.2 are in preparation and are expected to be published this summer.

On the development side, we implemented new ARIA features and fixed several bugs in WebKit and Gecko. We have refined platform-specific tools that are needed to automate accessibility Web Platform Tests (examine the accessibility tree, obtain information about accessible objects, listen for accessibility events, etc) and hope we will be able to integrate them in Web Platform Tests. Finally we continued maintenance of the Orca screen reader, in particular fixing some accessibility-event-flood issues in Caja and Nautilus that had significant impact on Orca users.

Web Platform Predictability

Thanks to support from Bloomberg, we were able to improve interoperability for various Editing/Selection use cases. For example when using backspace to delete text content just after a table (W3C issue) or deleting a list item inside a content cell.

We were also pleased to continue our collaboration with the AMP project. They provide us with a list of bugs and enhancement requests (mostly for the WebKit iOS port) with concrete use cases and repro cases. We check the status and plans in WebKit, do debugging/analysis and of course actually submit patches to address the issues. That’s not always easy (e.g. when it touches proprietary code or requires finding specific reviewers) but at least we keep discussions moving forward. The topics are very diverse: the MessageChannel API, CSSOM View, CSS transitions, CSS animations, iOS frame scrolling, custom elements, navigating special links, and many others.

In general, our projects are always a good opportunity to write new Web Platform Tests, import them into WebKit/Chromium/Mozilla, or improve the testing infrastructure. We have been able to work on tests for several specifications we work on.


Thanks to support from Bloomberg we’ve been pursuing our activities around CSS:

We also got more involved in the CSS Working Group, in particular participating to the face-to-face meeting in Berlin and will attend TPAC’s meeting in October.


We have also continued improving the web platform implementation of some Linux ports of WebKit (namely GTK and WPE). A lot of this work was possible thanks to the financial support of Metrological.

Other activities

Preparation of Web Engines Hackfest 2018

Igalia has been organizing and hosting the Web Engines Hackfest since 2009, a three-day event where Web Platform developers can meet, discuss and work together. We are still working on the list of invitees, sponsors and talks but you can already save the date: It will happen from 1st to 3rd of October in A Coruña!

New Igalians

This semester, new developers have joined Igalia to pursue the Web platform effort:

  • Rob Buis, a Dutch developer currently living in Germany. He is a well-known member of the Chromium community and is currently helping on the web platform implementation in WebKit.

  • Qiuyi Zhang (Joyee), based in China, is a prominent member of the Node.js community who is now also assisting our compilers team on V8 developments.

  • Dominik Infuer, an Austrian specialist in compilers and programming language implementation who is currently helping on our JSC effort.

Coding Experience Programs

Two students have started a coding experience program some weeks ago:

  • Oriol Brufau, a recent graduate in math from Spain who has been an active collaborator of the CSS Working Group and a contributor to the Mozilla project. He is working on the CSS Logical Properties and Values specification, implementing it in Chromium.

  • Darshan Kadu, a computer science student from India, who contributed to GIMP and Blender. He is working on Web Platform Tests with focus on WebKit’s infrastructure and the GTK & WPE ports in particular.

Additionally, Caio Lima is continuing his coding experience in Igalia and is among other things working on implementing BigInt in JSC.


Thank you for reading this blog post and we look forward to more work on the web platform this semester!

The Mozilla BlogWelcoming Sunil Abraham – Mozilla Foundation’s New VP, Leadership Programs

I’m thrilled to welcome Sunil Abraham as Mozilla Foundation’s new VP, Leadership Programs. Sunil joins us from The Centre for Internet and Society, the most recent chapter in a 20-year career of developing free and open source software and an open internet agenda.

During our search we stated a goal of finding someone with deep experience working on some aspect of internet health, and a proven track record building high-impact organizations and teams. In Sunil we have managed to find just that. An engineer by training, much of Sunil’s research and policy work has deep technical grounding. For example one of his current projects is doing a human rights review of various Internet Engineering Task Force (IETF) standards with Article 19. He has also founded and run two organizations: Mahiti, a non-profit software development shop; and The Centre for Internet and Society, a policy and technology think tank in Bangalore. Sunil is truly ‘of the movement’ and is perfectly positioned to help us build a strong cadre of internet health leaders all around the world.

Our global community is the linchpin in our strategy to grow a movement to create a healthier digital world. In this role, Sunil will head up the programs that bring people from around the world into our work — the Internet Health Report, MozFest, our fellowships and awards — with the aim of supporting people who want to take a leadership role in this community. In addition to a great passion for Mozilla’s mission and issues, Sunil also brings a tremendous amount of experience working on this kind of leadership development. He’s worked closely with Ashoka and the Open Society Foundation in developing leaders for many years. And, notably, the Centre for Internet and Society has been a home for many of the key players in India’s open internet space.

Sunil is starting out immediately as an advisor to Mozilla’s executive team and directors, working a few hours per week. He will move to Berlin and start full time in his new role in January 2019.  We will be planning a community town hall to welcome Sunil to our community and give everyone a chance to connect with him. Look for more in September.

The post Welcoming Sunil Abraham – Mozilla Foundation’s New VP, Leadership Programs appeared first on The Mozilla Blog.

Chris IliasFirefox now supports the macOS share menu

Firefox 61 has a great new feature on macOS, and I don’t think it’s getting enough attention. Maybe it’s not a big deal for most other users, but it is for me!

Firefox now supports the macOS share menu. This means you can send the current page you are viewing to another application. For instance, you can add a link to your Things 3 or Omnifocus inbox, add a page to Apple Notes, send a link to Evernote, send a link to someone using messages, or share a link to a social network.

To share a page in Firefox, open the Page Actions menu (aka. the three dots), and go to the Share menu.

QMOFirefox 62 Beta 8 Testday, July 13th

Greetings Mozillians!

We are happy to let you know that Friday, July 13th, we are organizing Firefox 62 Beta 8 Testday. We’ll be focusing our testing on 3-Pane Inspector and React animation inspector features.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Wladimir PalantIs your LastPass data really safe in the encrypted online vault?

Disclaimer: I created PfP: Pain-free Passwords as a hobby; it could be considered a LastPass competitor in the widest sense. I am genuinely interested in the security of password managers, which is the reason both for my own password manager and for this blog post on LastPass shortcomings.

TL;DR: LastPass fanboys often claim that a breach of the LastPass server isn’t a big deal because all data is encrypted. As I show below, that’s not actually the case and somebody able to compromise the LastPass server will likely gain access to the decrypted data as well.

A while back I stated in an analysis of the LastPass security architecture:

So much for the general architecture, it has its weak spots but all in all it is pretty solid and your passwords are unlikely to be compromised at this level.

That was really stupid of me; I couldn’t have been more wrong. It turned out I had relied too much on the wishful thinking dominating LastPass documentation. In January this year I took a closer look at the LastPass client/server interaction and found a number of unpleasant surprises. Some of the issues went very deep and it took LastPass a while to get them fixed, which is why I am writing about this only now. A bunch of less critical issues remain unresolved as of this writing, so I cannot disclose their details yet.

Cracking the encryption

In 2015, LastPass suffered a security breach. The attackers were able to extract some data from the server yet LastPass was confident:

We are confident that our encryption measures are sufficient to protect the vast majority of users. LastPass strengthens the authentication hash with a random salt and 100,000 rounds of server-side PBKDF2-SHA256, in addition to the rounds performed client-side. This additional strengthening makes it difficult to attack the stolen hashes with any significant speed.

What this means: anybody who gets access to your LastPass data on the server will have to guess your master password. The master password isn’t merely necessary to authenticate against your LastPass account, it also allows encrypting your data locally before sending it to the server. The encryption key here is derived from the master password, and neither is known to the LastPass server. So attackers who managed to compromise this server will have to guess your master password. And LastPass uses PBKDF2 algorithm with a high number of iterations (LastPass prefers calling them “rounds”) to slow down verifying guesses. For each guess one has to derive the local encryption key with 5,000 PBKDF2 iterations, hash it, then apply another 100,000 PBKDF2 iterations which are normally added by the LastPass server. Only then can the result be compared to the authentication hash stored on the server.

So far all good: 100,000 PBKDF2 iterations should be ok, and it is in fact the number used by the competitor 1Password. But that protection only works if the attackers are stupid enough to verify their master password guesses via the authentication hash. As mentioned above, the local encryption key is derived from your master password with merely 5,000 PBKDF2 iterations. And it is used to encrypt various pieces of data: passwords, private RSA keys, OTPs etc. The LastPass server stores these encrypted pieces of data without any additional protection. So a clever attacker would guess your master password by deriving the local encryption key from a guess and trying to decrypt some data. Worked? Great, the guess is correct. Didn’t work? Try another guess. This approach speeds up guessing master passwords by a factor of 21.
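The factor of 21 falls straight out of the iteration counts: verifying a guess against the locally encrypted data costs only the 5,000 client-side iterations, while the authentication-hash route costs 5,000 + 100,000. Here is a minimal sketch using Python’s `hashlib`; the salt and exact derivation details are illustrative assumptions, not LastPass’s precise scheme:

```python
import hashlib

master_guess = b"correct horse battery staple"
salt = b"user@example.com"  # illustrative salt, not LastPass's exact scheme

# Route 1: verify via the authentication hash (client + server PBKDF2)
client_key = hashlib.pbkdf2_hmac("sha256", master_guess, salt, 5_000)
auth_hash = hashlib.pbkdf2_hmac("sha256", client_key, salt, 100_000)

# Route 2: derive only the local encryption key and try decrypting data
local_key = hashlib.pbkdf2_hmac("sha256", master_guess, salt, 5_000)

# Relative cost per guess, measured in PBKDF2 iterations:
print((5_000 + 100_000) / 5_000)  # 21.0
```

An attacker with the encrypted blobs simply takes route 2 for every guess, paying one twenty-first of the intended work.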

So, what kind of protection do 5,000 PBKDF2 iterations offer? Judging by these numbers, a single GeForce GTX 1080 Ti graphics card (cost factor: less than $1000) can be used to test 346,000 guesses per second. That’s enough to go through the database with over a billion passwords known from various website leaks in barely more than one hour. And even if you don’t use any of the common passwords, it is estimated that the average password strength is around 40 bits. So on average an attacker would need to try out half of 2^40 passwords before hitting the right one, which can be achieved in roughly 18 days. Depending on who you are, spending that much time (or adding more graphics cards) might be worth it. Of course, the more typical approach would be for the attackers to test guesses on all accounts in parallel, so that the accounts with weaker passwords would be compromised first.
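The 18-day figure is straightforward arithmetic from the numbers above:

```python
guesses_per_second = 346_000        # one GTX 1080 Ti at 5,000 iterations
average_guesses = 2 ** 40 / 2       # half of a ~40-bit keyspace, on average
seconds = average_guesses / guesses_per_second
print(round(seconds / 86_400, 1))   # ≈ 18.4 days
```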

Statement from LastPass:

We have increased the number of PBKDF2 iterations we use to generate the vault encryption key to 100,100. The default for new users was changed in February 2018 and we are in the process of automatically migrating all existing LastPass users to the new default. We continue to recommend that users do not reuse their master password anywhere and follow our guidance to use a strong master password that is going to be difficult to brute-force.

Extracting data from the LastPass server

Somebody extracting data from the LastPass server sounds too far fetched? This turned out easier than I expected. When I tried to understand the LastPass login sequence, I noticed the script https://lastpass.com/newvault/websiteBackgroundScript.php being loaded. That script contained some data on the logged in user’s account, in particular the user name and a piece of encrypted data (private RSA key). Any website could load that script; the only protection in place was based on the Referer header, which was trivial to circumvent. So when you visited any website, that website could get enough data on your LastPass account to start guessing your master password (only weak client-side protection applied here of course). And as if that wasn’t enough, the script also contained a valid CSRF token, which allowed this website to change your account settings for example. Ouch…

To me, the most surprising thing about this vulnerability is that no security researcher had found it before. Maybe nobody expected that a script request receiving a CSRF token doesn’t actually validate this token? Or have they been confused by the inept protection used here? Beats me. Either way, I’d consider the likelihood of some blackhat having discovered this vulnerability independently rather high. It’s up to LastPass to check whether it was being exploited already; this is an attack that would leave traces in their logs.

Statement from LastPass:

The script can now only be loaded when supplying a valid CSRF token, so 3rd-parties cannot gain access to the data. We also removed the RSA sharing keys from the scripts generated output.

The “encrypted vault” myth

LastPass consistently calls its data storage the “encrypted vault.” Most people assume, like I did originally myself, that the server stores your data as an AES-encrypted blob. A look at https://lastpass.com/getaccts.php output (you have to be logged in to see it) quickly proves this assumption to be incorrect however. While some data pieces like account names or passwords are indeed encrypted, others like the corresponding URL are merely hex encoded. This 2015 presentation already pointed out that the incomplete encryption is a weakness (page 66 and the following ones). While LastPass decided to encrypt more data since then, they still don’t encrypt everything.

The same presentation points out that using ECB as block cipher mode for encryption is a bad idea. One issue in particular is that while passwords are encrypted, with ECB it is still possible to tell which of them are identical. LastPass mostly migrated to CBC since that publication and a look at getaccts.php shouldn’t show more than a few pieces of ECB-encrypted data (you can tell them apart because ECB is encoded as a single base64 blob like dGVzdHRlc3R0ZXN0dGVzdA== whereas CBC is two base64 blobs starting with an exclamation mark like !dGVzdHRlc3R0ZXN0dGVzdA==|dGVzdHRlc3R0ZXN0dGVzdA==). It’s remarkable that ECB is still used for some (albeit less critical) data however. Also, encryption of older credentials isn’t being “upgraded” it seems, if they were encrypted with AES-ECB originally they stay this way.
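Based purely on the blob formats described above, you can tell the two encryption modes apart by inspection. This is a hypothetical helper of my own construction, encoding the heuristic from this paragraph (bare base64 means ECB; a leading `!` with an IV separated by `|` means CBC):

```python
def classify_lastpass_blob(blob: str) -> str:
    """Heuristic from the post: CBC blobs start with '!' and contain a
    '|' separating base64(iv) from base64(ciphertext); ECB blobs are a
    single bare base64 string with no IV."""
    if blob.startswith("!") and "|" in blob:
        return "AES-CBC"
    return "AES-ECB"

print(classify_lastpass_blob("dGVzdHRlc3R0ZXN0dGVzdA=="))        # AES-ECB
print(classify_lastpass_blob("!dGVzdHRlc3R0ZXN0dGVzdA==|dGVzdHRlc3R0ZXN0dGVzdA=="))  # AES-CBC
```

The absence of an IV is exactly why ECB leaks which encrypted passwords are identical: equal plaintexts always produce equal ciphertexts.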

I wonder whether the authors of this presentation got their security bug bounty retroactively now that LastPass has a bug bounty program. They uncovered some important flaws there, many of which still exist to some degree. This work deserves to be rewarded.

Statement from LastPass:

The fix for this issue is being deployed as part of the migration to the higher iteration count in the earlier mentioned report.

A few words on backdoors

People losing access to their accounts is apparently an issue with LastPass, which is why they have been adding backdoors. These backdoors go under the name “One-Time Passwords” (OTPs) and can be created on demand. Good news: LastPass doesn’t know your OTPs, they are encrypted on the server side. So far all fine, as long as you keep the OTPs you created in a safe place.

There is a catch however: one OTP is being created implicitly by the LastPass extension to aid account recovery. This OTP is stored on your computer and retrieved by the LastPass website when you ask for account recovery. This means however that whenever LastPass needs to access your data (e.g. because US authorities requested it), they can always instruct their website to silently ask LastPass extension for that OTP and you won’t even notice.

Another consequence here: anybody with access to both your device and your email can gain access to your LastPass account. This is a known issue:

It is important to note that if an attacker is able to obtain your locally stored OTP (and decrypt it while on your pc) and gain access to your email account, they can compromise your data if this option is turned on. We feel this threat is low enough that we recommend the average user not to disable this setting.

I disagree with the assessment that the threat here is low. Many people have had their co-workers play a prank on them because they left their computer unlocked. Next time one of these co-workers might not send a mail in your name but rather use account recovery to gain access to your LastPass account and change your master password.

Statement from LastPass:

This is an optional feature that enables account recovery in case of a forgotten master password. After reviewing the bug report, we’ve added further security checks to prevent silent scripted attacks.


As this high-level overview demonstrates: if the LastPass server is compromised, you cannot expect your data to stay safe. While in theory you shouldn’t have to worry about the integrity of the LastPass server, in practice I found a number of architectural flaws that allow a compromised server to gain access to your data. Some of these flaws have been fixed but more exist. One of the more obvious flaws is the Account Settings dialog that belongs to the lastpass.com website even if you are using the extension. That’s something to keep in mind whenever that dialog asks you for your master password: there is no way to know that your master password won’t be sent to the server without applying PBKDF2 protection to it first. In the end, the LastPass extension depends on the server in many non-obvious ways, too many for it to stay secure in case of a server compromise.

Statement from LastPass:

We greatly appreciate Wladimir’s responsible disclosure and for working with our team to ensure the right fixes are put in place, making LastPass stronger for our users. As stated in our blog post, we’re in the process of addressing each report, and are rolling out fixes to all LastPass users. We’re in the business of password management; security is always our top priority. We welcome and incentivize contributions from the security research community through our bug bounty program because we value their cyber security knowledge. With their help, we’ve put LastPass to the test and made it more resilient in the process.

Mike TaylorNotable moments in Firefox desktop pre-release UA string history

I'm sure everyone remembers this super great blog post from 2010 about changes in the Firefox 4 user agent string. In terms of "blog posts about UA string changes", it's, well, one of them.

The final post-v4 release desktop UA string is as follows:

Mozilla/5.0 (<platform>; rv:<geckoversion>) Gecko/20100101 Firefox/<firefoxversion>
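The frozen template above can be picked apart with a regular expression. This is a quick illustrative parser (my own hypothetical helper, not anything Firefox ships):

```python
import re

# Matches the post-v4 frozen desktop UA template:
# Mozilla/5.0 (<platform>; rv:<geckoversion>) Gecko/20100101 Firefox/<firefoxversion>
UA_RE = re.compile(
    r"Mozilla/5\.0 \((?P<platform>[^)]*); rv:(?P<geckoversion>[^)]+)\) "
    r"Gecko/20100101 Firefox/(?P<firefoxversion>\S+)"
)

m = UA_RE.match("Mozilla/5.0 (Windows NT 6.3; rv:25.0) Gecko/20100101 Firefox/25.0")
print(m.group("platform"), m.group("geckoversion"), m.group("firefoxversion"))
```

Note that the pre-release strings in the table below do not all fit this template, since the `Gecko/` token only froze to `20100101` later on.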

As sort of a companion piece (that nobody asked for), I wrote up the following table of minor changes to the pre-release desktop UA string between then and now.

(You know, in case you or I find ourselves needing to know them one day—apologies in advance for whatever crappy future scenario we ended up in where we need to re-visit this blog post.)

Version  Sample Windows 10 pre-release UA string
5   Mozilla/5.0 (Windows NT 6.2; rv:2.2a1pre) Gecko/20110412 Firefox/4.2a1pre
6   Mozilla/5.0 (Windows NT 6.2; rv:6.0a1) Gecko/20110524 Firefox/6.0a1
13  Mozilla/5.0 (Windows NT 6.2; rv:13.0[1]) Gecko/20120313 Firefox/13.0a1
15  Mozilla/5.0 (Windows NT 6.2; rv:16.0) Gecko/16.0[2] Firefox/16.0a1
16  Mozilla/5.0 (Windows NT 6.2; rv:16.0) Gecko/16.0 Firefox/16.0[3]
20  Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20130107[4] Firefox/20.0
25  Mozilla/5.0 (Windows NT 6.3; rv:25.0) Gecko/20100101[5] Firefox/25.0

Footnotes:
1. In 572659 we dropped pre-release indicators and patch level version numbers from geckoversion
2. In 588909 we replaced Gecko/<builddate> with Gecko/<geckoversion>
3. In 572659 we dropped pre-release indicators and patch level version numbers from firefoxversion
4. In 815743 we reverted the changes from 588909 (because it broke too many sites, mostly banks)
5. In 728773 we began using the frozen 20100101[1] builddate for all releases

Apocryphal Footnotes to the Footnotes:
1. It's widely believed Jan 1, 2010 was chosen as the frozen build date because it marked the day that North Korea pledged lasting peace and a nuclear free Korean peninsula. That only happens once in a lifetime. Probably.

Mozilla B-TeamBMO ❤️ Emoji: bugzilla.mozilla.org will be down for eight hours on July 14th, 2018

BMO ❤️ Emoji: bugzilla.mozilla.org will be down for eight hours on July 14th, 2018

Dear BMO users:

bugzilla.mozilla.org will be completely unavailable on Saturday, July 14th between 13:00:00 UTC and 21:00:00 UTC.

During this time the database will be migrated to support unicode characters outside of the Basic Multilingual Plane.

After the work is completed (which may take less than 8 hours) BMO will support all unicode characters.
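The Basic Multilingual Plane covers code points up to U+FFFF; most emoji live above it and need four bytes in UTF-8, which is presumably what the database migration is about (MySQL’s legacy `utf8` charset stores at most three bytes per character, while `utf8mb4` stores four). A small illustration:

```python
# U+2764 HEAVY BLACK HEART is inside the Basic Multilingual Plane;
# U+1F600 GRINNING FACE is outside it and needs 4 bytes in UTF-8.
heart, grin = "\u2764", "\U0001F600"
assert ord(heart) <= 0xFFFF and ord(grin) > 0xFFFF
print(len(heart.encode("utf-8")), len(grin.encode("utf-8")))  # 3 4
```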


Firefox NightlyThese Weeks in Firefox: Issue 40


  • Nightly builds of Focus / Klar (with GeckoView) are now available!
  • Introducing Firefox Monitor, a new security tool we’re testing to help keep track of data breaches on the Web!
  • The certificate error pages have been redesigned to present information to the user more clearly: they explain what the issue is and offer the user the option to return to safety
    • A screenshot of a new certificate error page in Firefox.

      Warning: New Certificate Page Designs Ahead!

  • Most Policy Engine policies that were ESR-only (except for Search) have been modified so that they can be used on the Rapid-Release channel
  • The first version of the Federated Learning add-on has been completed by Florian Hartmann, which will allow us to study and improve matching in the Address Bar without violating user privacy. Read this blog post for more details.
  • We’re working on redesigning the bookmarking panel to show richer favicons and preview images.
    • The bookmark panel in Firefox, showing a screenshot of the page being bookmarked.

      We’re updating the much-beloved bookmarking panel!

  • Our first Test Pilot experiments for Mobile are coming soon!
    • ✨ 🎉 Lockbox for iOS launching July 10 🎉 ✨
    • ✨ 🎉 Notes for Android launching July 10 🎉 ✨
  • Privacy UI redesign v1 landed, v2 in progress for 63!
    • Showing off the new design of the Identity / Privacy panel in Firefox

      Tracking Protection: ON

      Showing off the Firefox main menu, with a Tracking Protection toggle.

      Tracking Protection controls are right near your fingertips.

Friends of the Firefox team


  • Vicky Chin, managing Firefox Performance across both Desktop and Mobile (spiritual successor to the Quantum Flow team)

Resolved bugs (excluding employees)

Project Updates

Add-ons / Web Extensions

Activity Stream

Browser Architecture

  • PSA: Upcoming regressions – XBL Replacement Newsletter Special Edition (read more on firefox-dev)
  • XML pretty printing now uses Shadow DOM when available instead of XBL in the content process (bug). Planning to extend this with the “UA Shadow DOM” for other in-content widgets (videocontrols, plugin click to play, etc).



Policy Engine


  • Privacy UI improvements (metabug): starting to work on v2 for 63

Search and Navigation

Address Bar & Search

Application Services (Sync / Firefox Accounts / Push)

Test Pilot

  • Test Pilot & Screenshots H2 planning underway

Web Payments

  • Very productive meetings at SF All Hands with UX and Engineering to refine and track the remaining scope required before enabling on Nightly.
  • Team has completed 84% of the entire Milestone 1 – 3 Backlog.
  • There have been quite a few recent specification changes (e.g. payment method and payer events) that are also getting implemented in Gecko thanks to Marcos and mrbkap.
  • Sam is working on implementing a timeout error page.
  • Jared made the essential address fields required and added country-specific Postal Code validation.
  • Prathiksha is switching our dropdowns to use <select> behind the scenes.
  • MattN is working on visual polish to match the visual spec.

Hacks.Mozilla.OrgMDN Changelog for June 2018

Editor’s note: A changelog is “a log or record of all notable changes made to a project. [It] usually includes records of changes such as bug fixes, new features, etc.” Publishing a changelog is kind of a tradition in open source, and a long-time practice on the web. We thought readers of Hacks and folks who use and contribute to MDN Web Docs would be interested in learning more about the work of the MDN engineering team, and the impact they have in a given month. We’ll also introduce code contribution opportunities, interesting projects, and new ways to participate.

Done in June

Here’s what happened in June to the code, data, and tools that support MDN Web Docs:

Here’s the plan for July:

Shipped 100+ HTML Interactive Examples

In June, we shipped over 100 HTML interactive examples, adding quick references and playgrounds for our HTML element documentation.

On the left, the HTML is displayed, and a tab would show the CSS. On the right, the rendered HTML of a hamster image with a caption below.

HTML Interactive Example for <figcaption>

Schalk Neethling fixed the remaining blockers, such as applying an output class as a style target (PR 961), and adding some additional size options (PR 962). wbamberg wrote instructions for adding the examples to MDN, and SphinxKnight and Irene Smith pitched in to deploy them in less than 24 hours. MDN visitors have fun, informative examples on JS, CSS, and now HTML reference pages.

Irene Smith joined the MDN writer’s team in June as a Firefox Developer Content Manager. She started work right away, including helping with this project. Welcome to the team, Irene!

Shipped Django 1.11

We deployed Django 1.11 on June 6th. There are no visible changes, but also no errors for MDN visitors. Sometimes an engineering project is successful if you just get back to where you started.

Falcon Heavy rockets moments from landing

Like the Falcon rockets, we traveled far to return to the same place.

We did most of the preparation work in development environments. On June 1st, we shipped Django 1.11 to our staging environment (PR 4830). This exposed a few issues that were quick to fix, such as the logging configuration (PR 4831) and client-side translation catalogs (PR 4831). Tests in the production-like staging environment ensured the last bugs were fixed before MDN’s visitors saw them.

We’re looking ahead to the next framework update. Django 1.11 is the last to support Python 2.7, so we’ll need to switch to Python 3.6. We’ve added a Python 3 build to TravisCI, and will ratchet up compatibility over time (PR 4848). Anthony Maton started the Python 3 changes at the Mozilla All-Hands, updating tests to expect random dictionary order, a security feature enabled in Python 3.3 (PR 4851). We expect many more changes, and we plan to switch to Python 3 with Django 1.11 by the end of the year.
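The feature in question is hash randomization, on by default since Python 3.3 (controlled by `PYTHONHASHSEED`): the iteration order of sets, and of dicts before CPython 3.7, can differ between interpreter runs. The usual test fix is to compare contents rather than iteration order, along these lines:

```python
# Under hash randomization, iterating a set of strings can yield a
# different order on each run, so tests must not encode one ordering.
tags = {"html", "css", "js"}

# Deterministic: compare sorted contents, or use order-insensitive equality.
assert sorted(tags) == ["css", "html", "js"]
assert tags == {"js", "css", "html"}
```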

The next Django Long-Term Support (LTS) release will be 2.2, scheduled for April 2019. The work will be similar to the last upgrade, updating the code to be compatible with Django 1.11 through 2.2. Ryan Johnson started the process by converting to the new-style middleware, required in 2.0 (PR 4841). We plan a smooth switch to Django 2.2 by June 2019.

Shipped Tweaks and Fixes

There were 252 PRs merged in June:

32 of these were from first-time contributors:

Other significant PRs:

Planned for July

July is the start of the second half of the year, and the team is thinking about what can be done by the end of the year. We’re planning on finishing the compatibility data migration this year, and expanding interactive examples to another documentation section.

Decommission Zones

Zones are a wiki engine feature that moves a tree of pages to a different URL, and applies additional CSS for those pages. Zones also add complexity to every request, require additional testing, and are a frequent source of bugs and translation problems. Zones have more enemies than fans on the MDN staff.

We’ve been deprecating zones for a few years. We stopped creating new zones as a design tool. In last year’s site redesign, we de-emphasized the style differences between zones and “standard” wiki content (PR 4348). When migrating MDN to a new data center, we added a redirects framework that can elegantly handle the custom URLs. We’re ready for the final steps.

At the Mozilla All-Hands, Ryan Johnson and wbamberg prepared to remove zones. The work took the entire week. On the engine side, custom URLs need redirects to the standard wiki URLs, and some zone styles need to be preserved (PR 4853). Zone sidebars need to be reimplemented as KumaScript sidebars, along with translations (PR 711). Finally, content needs to be changed, to add the KumaScript sidebars and to use standard wiki CSS. While the changes are large, the effect is subtle.

On the Progressive Web Apps MDN page, the zone style has an icon next to the title, which it will lose without zone styles.

The subtle differences between zone styles and without zone styles

After the work week, we reviewed and refined the code, double-checked the changes, and clarified the plan. We’ll ship the code, update the content, and delete zones in July.

Focus on Performance

We’re wrapping up the performance audit of MDN in July. We’ve picked some key performance metrics we’d like to track, and the headline metric is how long it takes for the interactive example to be ready on CSS, JS, and HTML reference pages. Schalk Neethling is implementing the timing measurements (IE PR 967, Kuma PR 4854, and others), using the PerformanceTiming API so the measurements will be available in browser tools. We also track timing in Google Analytics, to get real-user metrics from MDN’s global audience, unless the user has requested that we don’t track them.

We’ve found several performance bottlenecks, and we’re prioritizing them to pick the quick wins and the high-impact changes. We’ll ship improvements in July and beyond.

Chris H-CWhen do All Firefox Users Update?

Last time we talked about updates I wrote about all of what goes into an individual Firefox user’s ability to update to a new release. We looked into how often Firefox checks for updates, and how we sometimes lie and say that there isn’t an update even after release day.

But how does that translate to a population?

Well, let’s look at some pictures. First, the number of “update” pings we received from users during the recent Firefox 61 release:

[Chart: update_ping_volume_release61]

This is a real-time look at how many updates were received or installed by Firefox users each minute. There is a plateau’s edge on June 26th shortly after the update went live (around 10am PDT), and then a drop almost exactly 24 hours later when we turned updates off. This plot isn’t the best for looking into the mechanics of how this works since it shows volume of all types of “update” pings from all versions, so it includes users finally installing Firefox Quantum 57 from last November as well as users being granted the fresh update for Firefox 61.

Now that it’s been a week we can look at our derived dataset of “update” pings and get a more nuanced view (actually, latency on this dataset is much lower than a week, but it’s been at least a week). First, here’s the same graph, but filtered to look at only clients who are letting us know they have the Firefox 61 update (“update” ping, reason: “ready”) or they have already updated and are running Firefox 61 for the first time after update (“update” ping, reason: “success”):

[Chart: update_ping_volume_61only_wide]

First thing to notice is how closely the two graphs line up. This shows how, during an update, the volume of “update” pings is dominated by those users who are updating to the most recent version.

And it’s also nice validation that we’re looking at the same data historically that we were in real time.

To step into the mechanics, let’s break the graph into its two constituent parts: the users reporting that they’ve received the update (reason: “ready”) and the users reporting that they’re now running the updated Firefox (reason: “success”).


The first graph shows the two lines stacked for maximum similarity to the graphs above. The second unstacks the two so we can examine them individually.

It is now much clearer to see how and when we turned updates on and off during this release. We turned them on June 26, off June 27, then on again June 28. The blue line also shows us some other features: the Canada Day weekend of lower activity June 30 and July 1, and even time-of-day effects where our sharpest peaks are (EDT) 6-8am, 12-2pm, and a noticeable hook at 1am.

(( That first peak is mostly made up of countries in the Central European Timezone UTC+1 (e.g. Germany, France, Poland). Central Europe’s peak is so sharp because Firefox users, like the populations of European countries, are concentrated mostly in that one timezone. The second peak is North and South America (e.g. United States, Brazil). It is broader because of how many timezones the Americas span (6) and how populations are dense in the Eastern Timezone and Pacific Timezone which are 3 hours apart (UTC-5 to UTC-8). The noticeable hook is China and Indonesia. China has one timezone for its entire population (UTC+8), and Indonesia has three. ))

This blue line shows us how far we got in the last post: delivering the updates to the user.

The red line shows why that’s only part of the story, and why we need to look at populations in addition to just an individual user.

Ultimately we want to know how quickly we can reach our users with updated code. We want, on release day, to get our fancy new bells and whistles out to a population of users small enough that if something goes wrong we’re impacting the fewest number of them, but big enough that we can consider their experience representative of what the whole Firefox user population would experience were we to release to all of them.

To do this we have two levers: we can change how frequently Firefox asks for updates, and we can change how many Firefox installs that ask for updates actually get it right away. That’s about it.

So what happens if we change Firefox to check for updates every 6 hours instead of every 12? Well, that would ensure more users will check for updates during periods they’re offered. It would also increase the likelihood of a given user being offered the update when their Firefox asks for one. It would raise the blue line a bit in those first hours.

What if we change the percentage of update requests that result in offers? We could tune up or down the number of users who are offered the updates. That would also raise the blue line in those first hours. We could offer the update to more users faster.
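As a toy illustration of these two levers (purely a sketch, not Mozilla’s actual rollout model), treat each update check as an independent chance of being offered the update:

```python
# Toy model (illustrative only): a user checks for updates once every
# `check_interval_h` hours, and each check is offered the update with
# probability `offer_rate`.

def offered_fraction(hours, check_interval_h, offer_rate):
    """Expected fraction of users who have been offered the update
    after `hours` of browser uptime."""
    checks = int(hours // check_interval_h)
    return 1 - (1 - offer_rate) ** checks

# Pulling either lever raises the early "blue line":
baseline      = offered_fraction(24, 12, 0.25)  # 12h checks, 25% offers
faster_checks = offered_fraction(24, 6, 0.25)   # 6h checks: more users offered
bigger_offer  = offered_fraction(24, 12, 0.50)  # higher offer rate: same effect
```

Note that neither lever touches the red line: this model says nothing about when users actually restart.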

But neither of these things would necessarily increase the speed at which we hit that Goldilocks number of users that is both big enough to be representative, and small enough to be prudent. Why? The red line is why. There is a delay between a Firefox having an update and a user restarting it to take advantage of it.

Users who have an update won’t install it immediately (see how the red line is lower than the blue except when we turn updates off completely), and even if we turn updates off it doesn’t stop users who have the update from installing it (see how the red line continues even when the blue line is floored).

Even with our current practices of serving updates, more users receive the update than install it for at least the first week after release.

If we want to accelerate users getting update code, we need to control the red line. Which we don’t. And likely can’t.

I mean, we can try. When an update has been pending for long enough, you get a little arrow on your Firefox menu.

If you leave it even longer, we provide a larger piece of UI: a doorhanger.


We can tune how long it takes to show these. We can show the doorhanger immediately when the update is ready, asking the user to stop what they’re doing and–


…maybe we should just wait until users update their own selves, and just offer some mild encouragement if they’ve put it off for, say, four days? We’ll give the user eight days before we show them anything as invasive as the doorhanger. And if they dismiss that doorhanger, we’ll just not show it again and trust they’ll (eventually) restart their browser.

…if only because Windows restarted their whole machines when they weren’t looking.

(( If you want your updates faster and more frequent, may I suggest installing Firefox Beta (updates every week) or Firefox Nightly (updates twice a day)? ))

This means that if the question is “When do all Firefox users update?” the answer is, essentially, “When they want to.” We can try and serve the updates faster, or try to encourage them to restart their browsers sooner… but when all is said and done our users will restart their browsers when they want to restart their browsers, and not a moment before.

Maybe in the future we’ll be able to smoothly update Firefox when we detect the user isn’t using it and when all the information they have entered can be restored. Maybe in the future we’ll be able to update seamlessly by porting the running state from instance to instance. But for now, given our current capabilities, we don’t want to dictate when a user has to restart their browser.

…but what if we could ship new features more quickly to an even more controlled segment of the release population… and do it without restarting the user’s browser? Wouldn’t that be even better than updates?

Well, we might have answers for that… but that’s a subject for another time.


Mozilla B-Team: happy bmo push day!

Last few pushes were a bit rough, so I forgot to post them. Whoops. Won’t happen again.

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1472326] group member syncing is currently broken on production bmo
  • [1472755] False positives on IP blocking logic

discuss these changes on mozilla.tools.bmo.


Mozilla B-Team: happy bmo push day (old post)

This was supposed to be posted on Friday, but I screwed up.

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1471417] Remove XUL from attachment Content Type options; add SVG to standard options; mark PDF viewable
  • [1344080] Module headers should be minified when the module is open
  • [1275545] Use Sereal for Cache::Memcached::Fast
  • [1469378] Update feed daemon to only manage subscribers on a revision if the bug is private,…


The Rust Programming Language Blog: Security Advisory for rustdoc

Quick overview

The Rust team was recently notified of a security vulnerability affecting rustdoc plugins. If you are not using rustdoc plugins, you are not affected. We’re not aware of any usage of this feature. The associated CVE is CVE-2018-1000622.

You can find the full announcement on our rustlang-security-announcements mailing list here.


On Tuesday July 3rd, Red Hat reported a security vulnerability in rustdoc to us. The problem was in rustdoc’s obscure plugin functionality, which loaded plugins from a path that is globally writable on most platforms, /tmp/rustdoc/plugins. This feature permitted a malicious actor to write a dynamic library into this path and have another user execute that code. The security issue only happens if you’re actively using the feature, and so this behavior will be removed from rustdoc in the near future, with patches landing for each channel over the next week. The plugin infrastructure predates 1.0 and is not usable on stable or nightly Rust today. Its removal should not impact any Rust users.

As Rust’s first official CVE, this is somewhat of a milestone for us. The fix will be out in 1.27.1 on Tuesday July 10th. Because there’s no embargo, we filed for a CVE right away; the assigned number appears above.

Despite the acknowledged low impact and severity of this bug, the Rust team decided to follow the full procedure we have for security bugs. We know of no one who uses this functionality, so we felt comfortable discussing it publicly ahead of the patch release. The impact is limited because the plugin functionality is long deprecated and unusable on all current versions of Rust, as the required library is not shipped to users. However, since the bug can potentially cause problems for users, we decided to include the fix in the 1.27.1 stable release.

It’s worth noting that while Rust does prevent a lot of issues in your code at compile time, those are issues that result from memory unsafety. This bug is a logic error: Rust code is not inherently secure, or bug-free. Sometimes, people get enthusiastic and make overly broad claims about Rust, and this incident is a good demonstration of how Rust’s guarantees can’t prevent all bugs.

Thank you to Red Hat for responsibly disclosing the problem and working with us to ensure that the fix we plan to ship is correct.

Mozilla Addons Blog: Extensions in Firefox 62

Last week Firefox 62 moved into the Beta channel. This version has fewer additions and changes to the WebExtensions API than the last several releases. Part of that is due to the maturing nature of the API as we get farther away from the WebExtension API cutover back in release 57, now over seven months ago. Part of it was a focus on cleaning up some internal features — code changes that increase the maintainability of Firefox but are not visible to external developers. And, let’s be honest, part of it is the arrival of summer in the Northern hemisphere, resulting in happy people taking time to enjoy life outside of browser development.

User Interface Improvements

Extensions with a toolbar button (browser action) can now be managed directly from the context menu of the button.  This is very similar to the behavior with page actions – simply right click on the toolbar button for an extension and select Manage Extension from the context menu.  This will take you to the extension’s page in about:addons.

Manage Extension Context Menu

You can now manage hidden tabs, introduced in Firefox 61, via a down-arrow added to the end of the tab strip. When clicked, this icon shows all of your tabs, hidden and visible. Firefox 62 introduces a new way to get to that same menu via the History item on the menu bar. If you have hidden tabs and select the History menu, it will display a submenu item called “Hidden Tabs.” Selecting that will take you to the normal hidden tabs menu panel.

Hidden Tabs Menu

API Improvements

A few enhancements to the WebExtensions API are now available in Firefox 62, including:

Theme Improvements

A couple of changes to the WebExtensions theme API landed in this release:

Tab Background Separator

Bug Fixes

A few noticeable bug fixes landed in Firefox release 62, including:

Thank You

A total of 48 features and improvements landed as part of Firefox 62. As always, a sincere thank you to every contributor for this release, especially our community volunteers including Tim Nguyen, Jörg Knobloch, Oriol Brufau, and Tomislav Jovanovic. It is only through the combined efforts of Mozilla and our amazing community that we can ensure continued access to the open web. If you are interested in contributing to the WebExtensions ecosystem, please take a look at our wiki.


The post Extensions in Firefox 62 appeared first on Mozilla Add-ons Blog.

Mark Côté: A Vision for Engineering Workflow at Mozilla (Part Two)

In my last post I touched on the history and mission of the Engineering Workflow team, and I went into some of the challenges the team faces, which informed the creation of the team’s vision. In this post I’ll go into the vision itself.

First, a bit of a preamble to set context and expectations.

About the Vision

Members of the Engineering Workflow team have had many conversations with Firefox engineers, managers, and leaders across many years. The results of these conversations have led to various product decisions, but generally without a well-defined overarching direction. Over the last year we took a step back to get a more comprehensive understanding of the needs and inefficiencies in Firefox engineering. This enables us to lay out a map of where Engineering Workflow could go over the course of years, rather than our previous short-term approaches.

As I mentioned earlier, I couldn’t find much in the way of examples of tooling strategies to work from. However, there are many projects out there that have developed tooling and automation ecosystems that can provide ideas for us to incorporate into our vision. A notable example is the Chromium project, the open-source core of the Chrome browser. Aspects of their engineering processes and systems have made their way into what follows.

It is very important to understand that this vision, if not vision statements in general, is aspirational. I deliberately crafted it such that it could take many engineer-years to achieve even a large part of it. It should be something we can reference to guide our work for the foreseeable future. To ensure it was as comprehensive as possible, it was constructed without regard to feasibility or, consequently, to the priority of its individual pieces. A road map for how to best approach the implementation of the vision for the most impact is a necessary next step.

The resulting vision is nine points laying out the ideal world from an Engineering Workflow standpoint. I’ll go through them one by one up to point four in this post, with the remaining five to follow.

The Engineering Workflow Vision

1. Checking out the full mozilla-central source is fast

The repository necessary for building and testing Firefox, mozilla-central, is massive. Cloning and updating the repo takes quite a while even for engineers located close to the central hg.mozilla.org servers; the experience for more distant contributors can be much worse. Furthermore, this affects our CI systems, which are constantly cloning the source to execute builds and tests. Thus there is a big benefit to making cloning and updating the Firefox source as fast as possible.

There are various ways to tackle this problem. We are currently working on geo-distributed mirrors of the source code that are at least read-only to minimize the distance the data has to travel to get onto your local machine. There is also work we can do to reduce the amount of data that needs to be fetched, by determining what data is actually required for a given task and using that to allow shallow and/or narrow clones.

There are other issues in the VCS space that hamper the productivity of both product and tooling engineers. One is our approach to branching. The various train, feature, and testing branches are in fact separate repositories altogether, stemming from the early days of the switch to Mercurial. This nonstandard approach is both confusing and inefficient. There are also multiple integration “branches”, in particular autoland and mozilla-inbound, which require regular merging, which in turn complicates history.

Supporting multiple VCSes also has a cost. Although Mercurial is the core VCS for Firefox development, the rise of Git led to the development of git-cinnabar as an alternate avenue to interacting with Firefox source. If not a completely de jure solution, it has enough users to warrant support from our tools, which means extra work. Furthermore, it is still sufficiently different from Git, in terms of installation at least, to trip some contributors up. Ideally, we would have a single VCS in use throughout Firefox engineering, or at least a well-defined pipeline for contributions that allows smooth use of vanilla Git even if the core is still kept in Mercurial.

2. Source code and history is easily navigable

To continue from the previous point, the vast size of the Firefox codebase means that it can be quite tricky for even experienced engineers, let alone new contributors, to find their way around. To reduce this burden, we can both improve the way the source is laid out and support tools to make sense of the whole.

One confusing aspect of mozilla-central is the lack of organization and discoverability of the many third-party libraries and applications that are mirrored in. It is difficult to even figure out what is externally sourced, let alone how and how often our versions are updated. We have started a plan to provide metadata and reorganize the tree to make this more discoverable, with the eventual goal to automate some of the manual processes for updating third-party code.

Mozilla also has not just one but two tools for digging deep into Firefox source code: dxr and searchfox. Neither of these tools is well maintained at the moment. We need to critically examine these, and perhaps other, tools and choose a single solution, again improving discoverability and maintainability.

3. Installing a development environment is fast and easy

Over the years Mozilla engineers have developed solutions to simplify the installation of all the applications and libraries necessary to build Firefox that aren’t bundled into its codebase. Although they work relatively well, there are many improvements that can be made.

The rise of Docker and other container solutions has resulted in an appreciation of the benefits of isolating applications from the underlying system. Especially given the low cost of disk space today, a Firefox build and test environment should be completely isolated from the rest of the host system, preventing unwanted interactions between other versions of dependent apps and libraries that may already be installed on the system, and other such cross-contamination.

We can also continue down the path that was started with mach and encapsulate other common tasks in simple commands. Contributors should not have to be familiar with the intricacies of all of our tools, in-house and third-party, to perform standard actions like building, running tests, submitting code reviews, and landing patches.

4. Building is fast

Building Firefox is a task that individual developers perform all the time, and our CI systems spend a large part of their time doing the same. It should be pretty obvious that reducing the time to build Firefox with a code change anywhere in the tree has a serious impact.

There are myriad ways our builds can be made faster. We have already done a lot of work to abstract build definitions in order to experiment with different build systems, and it looks like tup may allow us to have lightning-fast incremental builds. Also, the strategy we used to isolate platform components written in C++ and Rust from the front-end JavaScript pieces, which dramatically lowered build times for people working on the latter, could similarly be applied to isolate the building of system add-ons, such as devtools, from the rest of Firefox. We should do a comprehensive evaluation of places existing processes can be tightened up and continue to look for where we can make larger changes.

Stay tuned for the final part of this series of posts.

Cameron Kaiser: Another one bites the Rust

And another one gone, and another one gone (capitalization sic):

As Herwig Bauernfeind from Bitwise Works made clear in his presentation he gave at Warpstock 2018 Toronto, Firefox for OS/2 is on its way out for OS/2 after version 52 ESR. The primary reason is because Firefox is switching to RUST. Rust is a general purpose programming language sponsored by Mozilla Research. It is unlikely that RUST will ever be ported to OS/2.

Rust was the primary reason we dropped source parity for TenFourFox also (though there were plenty of other reasons such as changes to the graphics stack, the hard requirement for Skia, Electrolysis and changes to ICU; all of this could have been worked around, but with substantial difficulty, and likely with severe compromises). Now that Firefox 52ESR, the last ESR to not require Rust support, is on its last legs, this marks the final end of "Warpzilla" and Firefox on OS/2. SPARC (and apparently Solaris in general) doesn't have rustc or cargo support either, so this is likely the end of Firefox on any version of Solaris as well. Yes, I use Firefox on my Sun Ultra-3 laptop with Solaris 10. There are probably other minor platforms just hanging on that will wink out and disappear which I haven't yet discovered.

Every platform that dies is a loss to the technical diversity of the Mozilla community, no matter how you choose to put a happy face on it.

If you were trying to get a web browser up on a new platform these days, much as it makes me sick to say it, you'd probably be better off with WebKit rather than wrestle with Rust. Or NetSurf, despite its significant limitations, though I love the fact the project even exists. At least there's Rust for the various forms of PowerPC on Linux, including 64-bit and little-endian, so the Talos II can still run Firefox.

With FPR9 TenFourFox will switch to backporting security updates from Firefox 60ESR, though any last-minute chemspills to 52ESR will of course be reviewed promptly.

UPDATE 7/5: Someone in the discussion on Hacker News found that at least $12,650 was raised by the OS/2 community, and they’re going to port a Qt-based browser, which means ... WebKit. I told you so.

Mozilla Open Policy & Advocacy Blog: EU Parliament rejects rubber stamping disastrous copyright bill

The European Parliament has today heard the voice of European citizens and voted against proposals that would have dealt a hammer blow to the open internet in Europe.

By a clear majority, MEPs have rejected rubber stamping proposals that would have forced internet companies to filter the web, and would have introduced an unprecedented tax on linking online.

This is great news for Europe’s citizens, its SMEs and startups, especially those in the creative sectors as, while the proposed rules were supposed to protect and support them, they would have been the ones to suffer most under the new regime.

The last few weeks have seen a massive mobilisation of public opinion in Europe – as the impact of this regressive law upon everything from memes to news articles online became clear. The momentum is growing and Mozilla will go on fighting to make sure this proposal serves its purpose of modernising copyright in Europe.

The future of an open internet and creativity in Europe depends on it.

The post EU Parliament rejects rubber stamping disastrous copyright bill appeared first on Open Policy & Advocacy.

Marco Zehe: Easy ARIA tip #8: Use aria-roledescription to selectively enhance the user experience

In WAI-ARIA 1.1, the aria-roledescription attribute has been added to give web authors the ability to further describe the function of a widget. Here are a few tips for usage.

The definition of role

When screen readers describe any control or item on a website, they use multiple pieces of information to describe it to the user:

  1. The name or label. What does the element do? “Add a file”, for example.
  2. The role of the item. What kind of control is it? To stick with our example, it’s most probably a button.
  3. For some controls such as checkboxes or radio buttons, toggle buttons etc., the state or states. Am I checked? Am I the selected radio button in a group of radio buttons? Am I pressed?
  4. The description, if present. Further information that might show up as a tool tip when hovering over it with the mouse, or focusing with the keyboard. To stick with our example: “Adds a new file to the current folder”.

1 and 2 should always be present correctly, 3 for the control types that states apply to, and 4 is optional.

So, for a correctly coded widget, and for assistive technologies to use it, 1 and 2 must be correctly set, keyboard interaction implemented etc. For some, web developers must also make sure that the correct states (3) are always applied or changed.

Add aria-roledescription

What aria-roledescription does, in a nutshell, is it tells the screen reader to speak and braille something for item 2 above that is defined by the web developer. For roles, screen readers have a default, localized, set of terms they use to describe items. “Button”, “text field”, “checkbox” are some examples. aria-roledescription overrides the assistive technology’s default role description for an item exposed by the accessibility APIs. This attribute should be used wisely, since it can be the yay or nay for an assistive technology user to understand, or not understand, your user interface.

Here are a few guidelines you as a web developer should follow strictly when considering to use aria-roledescription.

  1. Make sure your widget is understandable without it. Pretend that aria-roledescription doesn’t exist, and make sure your widget or compound widget works with a correct standard role or set of roles first.
  2. If you feel that your UI is understood better with aria-roledescription added, add it, bearing in mind that the string you use is used literally, not being mapped to some ID or numeric value. The raw string you put in there will be passed on to the screen reader. This also means that, if you offer your UI in different languages, make it localizable and offer the localized string for each language. Make clear in the localization notes that this is a string being spoken to blind users to help them understand the widget better.
  3. Make sure you do not take away information. If you want to enhance the description of a button, make sure the fact that this is a button is still in there somewhere, or your users might not know that they need to press Space to activate.
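The override behaviour described above can be sketched in a few lines (a simplified model of a screen reader’s announcement; real screen readers format their output differently):

```python
def announce(name, role, roledescription=None):
    """Simplified model of how a screen reader might announce a widget.
    aria-roledescription replaces only the *spoken* role text; the
    underlying role (and its keyboard behaviour) is unchanged."""
    spoken_role = roledescription if roledescription else role
    return f"{name}, {spoken_role}"

announce("Open windows", "toolbar")              # -> "Open windows, toolbar"
announce("Open windows", "toolbar", "task bar")  # -> "Open windows, task bar"
```

The second call shows why the string must be localized by the author: it is passed through to the user verbatim.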

A few examples

These examples by ETS research show good use of the aria-roledescription attribute. The first two are regions that are explicitly declared as slides (like in a PowerPoint or Google Slides presentation), while the other two are good examples of enhancing the button’s description, but still keeping the button semantic clearly audible.

I recently also came across an example in the newly released Open-Xchange OX App Suite 7.10 where a newly introduced window manager and task bar can be enhanced to provide the user with information that this is a special kind of toolbar, namely a task bar. There is no task bar role in either HTML or WAI-ARIA. So what can be done, and what I also suggested to the developers, is to make sure everything works as if this was a normal toolbar, including the appropriate roles, and then enhance the container item that has the role “toolbar” with an aria-roledescription specifying that this is, indeed, a task bar. We’ll see if they add it and if so, how this plays out. 🙂


As with everything in WAI-ARIA, be mindful of using attributes and roles. Think about the impact your choice might have. But if used in the right context, aria-roledescription can be a great enhancement to the user experience. If used wrongly, though, it can also lead to greater inaccessibility of your web application.

Also note that not all combinations of assistive technologies and browsers support aria-roledescription yet. Firefox does, in combination with NVDA at least, and VoiceOver and Safari on macOS are said to have support as well. Other combinations may or may not have it yet, so it is very important that you follow the advice and make sure your widgets “speak” without it as well.

Mozilla Open Policy & Advocacy Blog: A step forward for government vulnerability disclosure in Europe

We’ve argued for many years that governments should implement transparent processes to review and disclose the vulnerabilities that they learn about. Such processes are essential for the cybersecurity of citizens, businesses, and indeed governments themselves. To advance policy discourse on this issue in Europe, we recently participated in the Centre for European Policy Studies (CEPS) Taskforce on Software Vulnerability Disclosure. The Taskforce’s final report was published this week and makes a strong case for the need for government vulnerability disclosure policies, and comes at a critical juncture as European policymakers debate the EU Cybersecurity Act.

As the developer of a browser used by hundreds of millions of people every day, it is essential for us that vulnerabilities in our software are quickly identified and patched. Simply put, the safety and security of our users depend on it. The disclosure of such vulnerabilities (and the processes that underpin it) is particularly important with respect to governments. Governments often have unique knowledge of vulnerabilities, and learn about them in many ways: through their own research and development, by purchasing them, through intelligence work, or by reports from third parties. Crucially, governments can face conflicting incentives as to whether to disclose the existence of such vulnerabilities to the vendor immediately, or to delay disclosure in order to support offensive intelligence-gathering and law enforcement activities (so-called government hacking).

The Centre for European Policy Studies (CEPS) report on Software Vulnerability Disclosure in Europe is the product of a broad stakeholder taskforce that included a diverse body of actors such as Airbus, the European Telecom Network Operators Association (ETNO), and the global digital rights advocacy group Access Now. Importantly, it reaffirms the need for European governments to put in place robust, accountable, and transparent government vulnerability disclosure review processes. While the taskforce’s work focused on the disclosure of vulnerabilities acquired by government, it is clear that more policy work is required with respect to the processes underpinning acquisition, exploitation and the operational mechanics of disclosure by governments in Europe.

Unfortunately, most EU governments have not yet implemented vulnerability disclosure review processes, a fact that constitutes a serious concern at a time when the cyber attack surface continues to widen. Luckily, European Union lawmakers have a unique opportunity to address this issue, and advance the norm that all Member States should have vulnerability disclosure processes. The European Parliament and the EU Council are presently debating the proposed EU Cybersecurity Act, and we reiterate our call for European policymakers to use this legislation to give ENISA (the EU Cybersecurity agency) the mandate to assist and advise Member States on the development of policy and practices for government vulnerability disclosure.

The post A step forward for government vulnerability disclosure in Europe appeared first on Open Policy & Advocacy.

Chris H-C: Faster Event Telemetry with “event” Pings

Event Telemetry is the means by which we can send ordered interaction data from Firefox users back to Mozilla where we can use it to make product decisions.

For example, we know from a histogram that the most popular way of opening the Developer Tools in Firefox Beta 62 is by the shortcut key (Ctrl+Shift+I). And it’s nice to see that the number of times the Javascript Debugger was opened was roughly 1/10th of the number of times the shortcut key was used.

…but are these connected? If so, how?

And the Javascript Profiler is opened only half as often as the Debugger. Why? Isn’t it easy to find that panel from the Debugger? Are users going there directly from the DOM view or is it easier to find from the Debugger?

To determine what parts of Firefox our users are having trouble finding or using, we often need to know the order things happen. That’s where Event Telemetry comes into play: we timestamp things that happen all over the browser so we can see what happens and in what order (and a little bit of how long it took to happen).

Event Telemetry isn’t new: it’s been around for about 2 years now. And for those two years it has been piggy-backing on the workhorse of the Firefox Telemetry system: the “main” ping.

The “main” ping carries a lot of information and is usually sent once each time you close your Firefox (or once per day, whichever comes first). As such, Event Telemetry was constrained in how it was able to report this ordered data. It takes two whole days to get 95% of it (because that’s how long it takes us to get “main” pings), and it isn’t allowed to send more than one thousand events per process (lest it balloon the size of the “main” ping, causing problems).

This makes the data slow, and possibly incomplete.

With the landing of bug 1460595 in Firefox Nightly 63 last week, Event Telemetry now has its own ping: the “event” ping.

The “event” ping maintains the same 1000-events-per-process-per-ping limit as the “main” ping, but can send pings as frequently as one ping every ten minutes. Typically, though, it waits the full hour before sending as there isn’t any rush. A maximum delay of an hour still makes for low-latency data, and a minimum delay of ten minutes is unlikely to be overrun by event recordings which means we should get all of the events.
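The submission rules just described can be sketched like this (an illustrative model only, not the actual Firefox scheduler; constant names are mine):

```python
# Illustrative model of the "event" ping's submission rules as described
# above -- not the actual Firefox implementation.

MIN_DELAY_MIN = 10    # never send more often than once every ten minutes
MAX_DELAY_MIN = 60    # never wait longer than an hour
EVENT_LIMIT = 1000    # per-process event cap, carried over from the "main" ping

def should_submit(minutes_since_last_ping, pending_events):
    if minutes_since_last_ping < MIN_DELAY_MIN:
        return False                  # too soon: keep buffering events
    if pending_events >= EVENT_LIMIT:
        return True                   # buffer full: flush immediately
    # Otherwise there's no rush; wait out the full hour.
    return minutes_since_last_ping >= MAX_DELAY_MIN
```

In the common case the buffer never fills, so the ping simply goes out hourly; the ten-minute floor only matters under unusually heavy event recording.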

This means it takes less time to receive data that is more likely to be complete. This in turn means we can use less of it to get our answers. And it means more efficiency in our decision-making process, which is important when you’re competing against giants.

If you use Event Telemetry to answer your questions with data, now you can look forward to being able to do so faster and with less worry about losing data along the way.

And if you don’t use Event Telemetry to answer your questions, maybe now would be a good time to start.

The “event” ping landed in Firefox Nightly 63 (build id 20180627100027) and I hope to have it uplifted to Firefox Beta 62 in the coming days.

Thanks to :sunahsuh for her excellent work reviewing the proposal and in getting the data into the derived datasets so they can be easily queried, and further thanks to the Data Team for their support.


Support.Mozilla.OrgState of Mozilla Support: 2018 Mid-year Update – Part 2

Hello, present and future Mozillians!

Last time we had a closer look at a metrics-based report analysis from the Analysen & Tal agency that took a deep dive into support.mozilla.org’s historical data.

Today, we are going to continue sharing external insights into our community and support model with a summary of the comparative study conducted for us by a team from the Copenhagen Institute of Interaction Design (CIID) in Denmark. While we were not exactly looking for the essence of hygge, we found quite a few interesting nuggets of wisdom and inspiration that we’d like to share with you below. You can find the presentation here.

(Please note: this content has already been shared with the Support contributors on our forums, but now we are making it more public through this post).

The study’s goals were to help the Support team and community understand how Mozilla’s approach to helping users compares to other similar organizations, and what possible future approaches could help meet the increasing demand for high-quality help.

The methodology for the study was a mix of a series of interviews, case studies, and design explorations. The people interviewed came from external organizations and groups, as well as from Mozilla’s support community. The three entities chosen for the comparative study were the user communities around WordPress.org, Arduino, and Kaggle.


In general, when compared to our Support community, WordPress.org’s is more centralized and controlled, even though, of the three case studies, it resembles our own structures the most.

The contribution forums and Slack channels are used to discuss and escalate important issues. Similarly to /r/firefox, there is also an active subreddit for WordPress that’s self-moderated and open to non-support content, if necessary. Slack seems to be a place for fruitful conversations, so we may be exploring this idea in the near future for more engaged contributors.

Another interesting aspect is that the support site, apart from community-owned-and-driven support, offers full-time employees of Automattic/WordPress.com a chance to do rotations in the open source community. We have tried this in the past, but never as a long-standing project; it could drive more engagement and knowledge sharing at all levels.

The primary incentives for contribution seem to be status and connecting with others, which we understand to be present in our own community as well. However, we still need to get better at identifying the incentives for joining and staying on as a long-term Support contributor if we are to continue delivering community-driven support at scale.


Arduino’s support community, focusing on a plethora of open source software and hardware uses “in the wild”, is definitely way more decentralized and ad hoc in its practices than Mozilla’s Support.

The nature of the environment in which the community operates makes a concentrated and concerted support effort much harder, but the main activity happens across the official community forum, a StackExchange Q&A forum, and Arduino’s main Project Hub.

At the time of the study, the community forum showed little conscious organizational effort from Arduino’s staff, serving a more social function with its open nature. On the other hand, the StackExchange-based support forum has a well-developed peer-driven reputation system, with community moderators being voted in and gaining access and privileges based on their long-standing contributions. The StackExchange model is by far the more successful and useful for support in Arduino’s case.

Finally, the Project Hub is a content creation and maintenance space that centers support-related content (for example documentation and instructables) around specific projects. Quality content is encouraged by official contests and rewards for contributors. Additionally, language and interactions presented on the site encourage a positive and inclusive community approach. As a result and thanks to the self-learning and guided aspect of using Arduino products, quality content is easier to find and produce.


Kaggle’s community model is an unusual hybrid of competitiveness and collaboration, fueled by commercially supported projects. With the community being the core and the product of Kaggle’s business model, the platform it lives on is highly sophisticated and the interactions within it appear to be meticulously engineered.

To this end, gamification of competing and collaborating is one of the main driving forces, encouraging high-quality contributions and teamwork. The design of the community environment shows a sharp focus on its key functions; the community is not directed to activities not considered core. That said, a large part of community engagement and motivation happens outside the official forums, within self-created and user-governed communities. Interestingly, many returning contributors consider their voluntary involvement a stepping stone towards their own professional careers.

Insights and Observations

The CIID researchers, based on their study of the external support models listed above and interviews with various members of different communities, gathered a set of recommendations and paths to explore for our community’s consideration.

Structured communities always leak

Simple explanation: Whatever we define as “Mozilla Support” will always see activity or interest outside of what we think is the “main” place where it happens.

Regardless of the community setup at the core of the experience, there will always be engagement and involvement taking place outside of centrally defined features or tasks. There is an opportunity in allowing bottom-up organization of people, while keeping the support tasks clear and accessible for everyone willing to participate. The Support site should still be the main place where supporting Mozilla’s users happens, but it should not rigidly be the only one out there.

The main challenge here is distributing our knowledge and expertise in an accessible way to other places on the web where users look for support.

Gamed incentives are valves, not pipes

Simple explanation: Gamification is a tool to improve or guide working contribution mechanisms, not to create new contribution mechanisms.

Any attempts at gamification should be modelled to embrace and enhance existing behaviours and interactions – not to create completely new ones. It is also better when it focuses on quality rather than quantity. Rewarding expertise over volume drives engagement from or creates opportunities for subject matter experts.

Here, the main challenge is to encourage positive behaviours and interactions at scale and in real time, as much as possible. This may mean looking into “high touch automation” (like bots or scripts) or rolling out a focused education/certification offering to increase quality contributions.

Contributors have ownership without agency

Simple explanation: Support contributors have great impact on how users perceive Firefox (as a browser and brand), but this is not clearly shown in the way Firefox is improved or marketed to users.

While our community is at the front line in receiving feedback from users (and acting upon it), we do not have a comparable impact on what is going on in the product world. This is mostly due to the lack of a strong enough feedback loop into the product organization, but also due to a lack of connection with, and understanding of, what we do on the part of Firefox’s creators and promoters.

It would be hard to prove the impact of continuous support efforts without transparent and meaningful metrics at hand, so finding a way to collate and present such data is key for this concept to work. With our community’s work validated and acknowledged, it should be much easier to incorporate our feedback into the development and marketing process of Firefox.

Contributors aren’t born – they’re made

Simple explanation: New contributors can be found in more places than just among Mozillians not already contributing to Support.

Many people decide to contribute to Mozilla’s mission based on their own strong beliefs in the future of the web – but many others get on board because they have received support from our community and would like to give back to Mozilla and its other users through activities that they can easily participate in. Supporting others very often proves to be just that (when compared to coding or web development, for example).

Encouraging casual users, or those looking for help, to try helping others (or to get involved with Mozilla’s mission in other ways) could be key to growing our community over the upcoming releases and finding new core contributors among the many people who have already chosen to use Firefox on a daily basis.

Support the supporters

Simple explanation: Community members should have access to knowledge and tools that allow them to work together and support each other regardless of administrator presence and support.

As the admin team for Support is quite small and each of its members specializes in a different aspect of the site, sometimes contributor questions or emergency escalations may go unnoticed for a while. This increases community fragility and pressure on single points of failure.

In order to address that, our community could consider developing a simple (but complete) escalation and reaction system that is transparent for everyone involved in supporting users. This could increase the resilience and cohesiveness of the Support community, regardless of personal experience or community roles held by various community members involved in escalating or responding to support requests.

Leverage the properties of each channel

Simple explanation: Each tool or resource should have a clear and defined role and purpose. The community needs useful tools, rather than access to all tools used by everyone else for other reasons.

With several places that our community uses for communication and support purposes, it is important to keep the roles and methods of using these separate tools clear and focused. We sometimes tend to “hop on the bandwagon” and try to be everywhere and use everything to be more in line with other teams at Mozilla.

This may not be the best use of everyone’s energy and time, so reviewing the tools we have and the ways they are used is an important step towards empowering contributors and streamlining processes that may not currently be working as well as they could.

Workshop Outcomes

As part of the working session, we all sat down together and invested a few hours in a collaborative synthesis workshop, based on the data and research presented by our external partners. The output of the workshop was a series of project ideas that could influence the future Support strategy. The goal of these ideas is to improve what’s out there already and make Support ready for Mozilla’s future needs.

After a round of small-group work, three projects emerged:

Support Propaganda

General goal: increase awareness and impact of Support across Mozilla.


  • Opening up participation in Support to all Mozillians (especially new hires for any position at Mozilla)
  • Creating a deeper connection between Support, Product, and Marketing through highlighting what Support does to help Product and Marketing deliver quality to users (data driven insights)

Switchboard Operator

General goal: High-touch, targeted support for Mozilla’s software users across the web


  • Gathering information and insights about all major locations where conversations are happening about Firefox (within the context of support)
  • Reaching the users with the right support information wherever they are

Alchemist’s Journey

General goal: Quality self-directed learning resources and trainings for future generations of casual or core contributors


  • First wave trial resources developed in collaboration with existing core contributors
  • Second wave researched resources developed based on experiences from the first wave and input from external online education experts

Next Steps

There are more updates to come that should show you how the above work is influencing what we think the future of Support at Mozilla should look like.

We will keep working together with Open Innovation (closely and directly) and CIID (for future research projects) and informing you of what is up with Support at Mozilla.

We will also keep you informed (and engaged) in the future of Support at Mozilla.

Thank you for being a vital part of Mozilla’s mission for an open and helpful web!

Benjamin BouvierMaking calls to WebAssembly fast and implementing anyref

Since this is the end of the first half-year, I think it is a good time to reflect and show some work I've been doing over the last few months, apart from the regular batch of random issues, security bugs, reviews and the fixing of 24 bugs found by our …

Hacks.Mozilla.OrgDark Theme Darkening: Better Theming for Firefox Quantum

The Team

Project Dark Theme Darkening was part of Michigan State University’s Computer Science capstone experience. Twenty-four groups of five students were each assigned an industry sponsor based on preference and skill set. We had the privilege of working with Mozilla on Firefox Quantum’s Theming API. Our project increases a user’s ability to customize the appearance of the Firefox browser.

(left to right)
Vivek Dhingra: MSU Student Contributor
Zhengyi Lian: MSU Student Contributor
Connor Masani: MSU Student Contributor
Dylan Stokes: MSU Student Contributor
Bogdan Pozderca: MSU Student Contributor

Jared Wein: Mozilla Staff
Mike Conley: Mozilla Staff
Tim Nguyen: Volunteer Contributor

The Project

Our goal was to expand upon the existing “lightweight” Theming API in Quantum to allow for more areas of customization. Themes had the ability to alter the appearance of the default toolbars, but could not style menus or customize auto-complete popups. Our team also worked on adding a more fluid transition when dynamic themes change, to allow for a smoother user experience.

Project Video

This video showcases a majority of the improvements we added to the Theming API and gives a good explanation of what our project was about. Enjoy — and then read on for the rest of the details:


Prior to this project, none of us had experience with Firefox development. After downloading the mozilla-central repository and exploring its 40+ million lines of source, we all found it a bit daunting. Our mentors (Jared, Mike, and Tim) and the Mozilla community on IRC all helped us through squashing our first bug.

Through the project, we learned to ask questions sooner rather than later. Being programmers, we were stubborn and wanted to figure out our issues ourselves, but we could have solved them a lot faster if we had simply asked in the Mozilla IRC. Everyone there is extremely helpful and friendly!

All the code we wrote was in JavaScript and CSS. It was neat to see that the UI of Firefox is built in much the same way as other web pages. We got a great introduction to Mercurial by the end of the project and used some sweet tools to help our development process, such as searchfox.org for indexed searching of mozilla-central and Janitor for web-based development.

Auto-complete Popups

We added the ability to customize the URL auto-complete popups. With this addition, we had to take into account the text color of the ac-url and ac-action tips associated with each result. For example, if the background of the auto-complete popup is dark, the text color of the tips is set to a light color so they can be seen.

We did this by calculating the luminance and comparing it to a threshold. The lwthemetextcolor attribute is set to either dark or bright based on this luminance threshold:

["--lwt-text-color", {
     lwtProperty: "textcolor",
     processColor(rgbaChannels, element) {
          if (!rgbaChannels) {
               return null;
          }
          const {r, g, b, a} = rgbaChannels;
          const luminance = 0.2125 * r + 0.7154 * g + 0.0721 * b;
          element.setAttribute("lwthemetextcolor", luminance <= 110 ? "dark" : "bright");
          element.setAttribute("lwtheme", "true");
          return `rgba(${r}, ${g}, ${b}, ${a})`;
     },
}],
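Outside the browser, the same threshold logic can be exercised as a standalone sketch (the function names here are ours for illustration, not Firefox’s):

```javascript
// Luminance from RGB channels, using the same coefficients as the
// theming code above (they sum to 1.0, so the result stays in the
// 0-255 range for 8-bit channels).
function luminance(r, g, b) {
  return 0.2125 * r + 0.7154 * g + 0.0721 * b;
}

// Classify a color as "dark" or "bright" against the threshold of 110
// used for the lwthemetextcolor attribute.
function classifyTextColor(r, g, b) {
  return luminance(r, g, b) <= 110 ? "dark" : "bright";
}

console.log(classifyTextColor(0, 0, 0));       // -> "dark"
console.log(classifyTextColor(255, 255, 255)); // -> "bright"
```

Note how heavily the green channel is weighted: a pure green (0, 255, 0) classifies as bright while a pure blue (0, 0, 255) classifies as dark, which matches how the human eye perceives brightness.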

The top image shows the auto-complete popup with the native default theme while the bottom image shows the auto-complete popup with the Dark theme enabled. Notice that the ac-action (“Switch To Tab”) text color and ac-url are changed so they can be more easily seen on the Dark background.

Theme Properties Added

We added many new theme properties that developers like you can use to customize more of the browser. These properties include:

  • icons – The color of toolbar icons.
  • icons_attention – The color of toolbar icons in attention state such as the starred bookmark icon or finished download icon.
  • frame_inactive – The accent color applied when the window is not in the foreground.
  • tab_loading – The color of the tab loading indicator and the tab loading burst.
  • tab_selected – The background color of the selected tab.
  • popup – The background color of popups (such as the url bar dropdown and the arrow panels).
  • popup_text – The text color of popups.
  • popup_border – The border color of popups.
  • popup_highlight – The background color of items highlighted using the keyboard inside popups (such as the selected URL bar dropdown item).
  • popup_highlight_text – The text color of items highlighted using the keyboard inside popups.
  • toolbar_field_focus – The focused background color for fields in the toolbar, such as the URL bar.
  • toolbar_field_text_focus – The color of text in focused fields in the toolbar, such as the URL bar.
  • toolbar_field_border_focus – The focused border color for fields in the toolbar.
  • button_background_active – The color of the background of the pressed toolbar buttons.
  • button_background_hover – The color of the background of the toolbar buttons on hover.

The toolbar_field and toolbar_field_border properties now apply to the “Find” toolbar.
Additionally, these new properties now apply to the native Dark theme.

colors: {
    accentcolor: 'black',
    textcolor: 'white',
    toolbar: 'rgb(32,11,50)',
    toolbar_text: 'white',
    popup: 'rgb(32,11,50)',
    popup_border: 'rgb(32,11,50)',
    popup_text: '#FFFFFF',
    popup_highlight: 'rgb(55,36,71)',
    icons: 'white',
    icons_attention: 'rgb(255,0,255)',
    frame_inactive: 'rgb(32,11,50)',
    tab_loading: '#0000FF',
    tab_selected: 'rgb(32,11,50)',
}

Above is an example of some of the added properties being set in a theme manifest file; below is what it looks like in the browser:
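For context, a colors object like the one above lives under the theme key of a WebExtension manifest.json. A minimal sketch (the name and version values here are placeholders, and only a few of the color properties are shown):

```json
{
  "manifest_version": 2,
  "name": "Purple Dark (example)",
  "version": "1.0",
  "theme": {
    "colors": {
      "accentcolor": "black",
      "textcolor": "white",
      "toolbar": "rgb(32,11,50)",
      "toolbar_text": "white",
      "popup": "rgb(32,11,50)",
      "popup_text": "#FFFFFF",
      "icons": "white",
      "tab_loading": "#0000FF"
    }
  }
}
```

Loading a file like this via about:debugging (“Load Temporary Add-on”) is a quick way to experiment with the new properties.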


Our team learned a lot about web browser development over the semester of our project, and we had the opportunity to write and ship real production-level code. All of the code we wrote shipped with the recent releases of Firefox Quantum 60 and 61 and will impact millions of users, which is an awesome feeling. We want to thank everyone at Mozilla and the Mozilla community for giving us this opportunity and mentoring us through the process. We are looking forward to seeing what developers and Firefox enthusiasts create using the improved Theming API!

Botond BalloReview of the Purism Librem 13

Towards the end of last year, I got a new laptop: the Purism Librem 13. It replaced the Lenovo ThinkPad X250 that I was using previously, which maxed out at 8 GB RAM and was beginning to be unusable for Firefox builds.

This is my first professional laptop that isn’t a ThinkPad; as I’ve now been using it for over half a year, I thought I’d write some brief notes on what my experience with it has been like.

Why Purism?

My main requirement from a work point of view was having at least 16 GB RAM while staying in the same weight category as the X250. There were options meeting those criteria in the ThinkPad line (like the X270 or newer generations of X1 Carbon), so why did I choose Purism?

Purism is a social benefit corporation that aims to make laptops that respect your privacy and freedom — at the hardware and firmware levels in addition to software — while remaining competitive with other productivity laptops in terms of price and specifications.

The freedom-respecting features of the Librem 13 that you don’t typically find in other laptops include:

  • Hardware kill switches for WiFi/Bluetooth and the microphone/camera
  • An open-source bootloader (coreboot)
  • A disabled Intel Management Engine, a component of Intel CPUs that runs proprietary software at (very) elevated privilege levels, which Intel makes very hard to disable or replace
  • An attempt to ship hardware components with open-source firmware, though this is very much a work in progress
  • Tamper evidence via Heads, though this is a newer feature and was not available at the time I purchased my Librem 13.

These are features I’ve long wanted in my computing devices, and it was exciting to see someone producing competitively priced laptops with all the relevant configuration, sourcing of parts, compatibility testing etc. done for you.



The Librem’s aluminum chassis looks nicer and feels sturdier than the X250’s plastic one.


At 13.3″, the Librem’s screen size is a small but noticeable and welcome improvement over the X250’s 12.5″.

The X250 traded off screen size for battery life. It’s the same weight as the 14″ ThinkPad X1 Carbon; the weight savings from a smaller screen size go into extra thickness, which allows for a second battery. I was pleased to see that the Librem, which is the same thickness as the X1 Carbon and only has one battery, has comparable battery life to the X250 (5-6 hours on an average workload).

The Librem’s screen is not a touchscreen. I noticed this because I used the X250’s touchscreen to test touch event support in Firefox, but I don’t think the average user has much use for a touchscreen in a conventional laptop (it’s more useful in a 2-in-1, which Purism also offers, and that model does have a touchscreen), so I don’t hold this against Purism.

The maximum swivel angle between the Librem’s keyboard and its screen is 130 degrees, compared to the X250’s almost 180 degrees. I did occasionally use the X250’s greater swivel angle (e.g. when lying on a couch), but I didn’t find its absence in the Librem to be a significant issue.


The one feature of ThinkPad laptops that I miss the most in the Librem is the TrackPoint, the red button in the middle of the keyboard that allows you to move the cursor without having to move your hand down to the touchpad. I didn’t realize how much I relied on this until I didn’t have it, though I’ve been getting by without it. (I view it as additional motivation for me to use the keyboard more and the cursor less.)

Also missing in the Librem are the buttons above the touchpad for left-, right-, and middle-clicking; you instead have to click by tapping the touchpad with one, two, or three fingers (respectively), which I find more awkward and prone to accidental taps.

Finally, while I haven’t noticed this very much myself (but I tend not to be very discerning in this area), several people who have briefly used my Librem commented that the sensitivity of its touchpad is significantly reduced compared to other touchpads they’re used to.


The Librem’s keys feel better to press than the X250’s. However, I’ve found you have to hit the keys fairly close to their centre for the press to register; the X250’s keys were more sensitive in this respect (hitting the side of the key would still trigger it), so this took some getting used to.

The keyboard can be backlit (at two different levels of intensity, though I don’t think I’ve ever used the second one). However, the shortcut to activate the backlight (Fn + F10) is significantly harder to find in the dark than the X250’s (Fn + Space).

I’ve also found the Librem’s keys get sweaty more easily, I’m guessing due to different materials.


The Librem’s keyboard layout differs from the X250’s in several small but important ways. Some of the changes are welcome; others, less so.

Here is a picture of the keyboard to illustrate:

Librem 13 keyboard

  • One thing that I think the Librem’s keyboard gets right that the X250 got wrong, is that the key in the bottom left corner is Ctrl, with Fn being next to it, rather than the other way around. I find this significantly aids muscle memory when moving between the Librem’s keyboard and external / desktop keyboards (which invariably have Ctrl in the bottom left corner). (I know that issues like this can technically be worked around by remapping keys, but it’s nice not to have to.)
  • On the other hand, the biggest deficiency in the Librem’s keyboard is the lack of PageUp, PageDown, Home, and End keys. The X250 had all of these: PageUp and PageDown above the right and left arrow keys, Home and End in the top row. With the Librem, you have to use the arrow keys with the Fn modifier to invoke these operations. My typing style is such that I use these operations fairly heavily, and as such I’ve missed the separate keys a lot.
  • A related minor annoyance is the fact that the rightmost key in the second row from the bottom is not Shift as it usually is, but a second Fn key; that’s also an impediment to muscle memory across different keyboards.
  • Lastly, the key in the top right corner is the power key, not Delete which is what I was used to from the X250.

None of these are necessarily dealbreakers, but they did take some getting used to.


Every time I’ve tried the Librem’s microphone so far, the recording quality has been terrible, with large amounts of static obscuring the signal. I haven’t yet had a chance to investigate whether this is a hardware or software issue.


The Librem 13 comes with Purism’s own Linux distribution, PureOS. PureOS is basically a light repack of Debian and GNOME 3, with some common software pre-installed and, in some cases, re-branded.

I got the impression that PureOS and its software don’t get much in the way of maintenance. For example, for the re-branded browser that came with PureOS, “PureBrowser”, the latest version available in the PureOS repository at the time I got my Librem was based on Firefox 45 ESR, which had been out of support for some 6 months by that time!

I’m also not a huge fan of GNOME 3. I tolerated this setup for all of about two weeks, and then decided to wipe the PureOS installation and replace it with a plain Debian stable installation, with KDE, my preferred desktop environment. This went without a hitch, indicating that — as far as I can tell — there isn’t anything in the PureOS patches that’s necessary for running on this hardware.

Generally, running Linux on the Librem 13 has been a smooth experience; I haven’t seen much in the way of glitches or compatibility issues. Occasionally, I get something like a crashed power management daemon (shortcuts to increase/decrease brightness stop working), but nothing too serious.


The Purism Librem 13 has largely lived up to my goal of having a lightweight productivity laptop with a decent amount of memory (though I’m sad to say that the Firefox build has continued to get larger and slower over time, and linking is sometimes a struggle even with 16 GB of RAM…) while also going the extra mile to protect my privacy and freedoms. The Librem 13 has a few deficiencies in comparison to the ThinkPad line, but they’re mostly in the category of papercuts. At the end of the day it boils down to whether living with a few small annoyances to benefit from the additional privacy features is the right tradeoff for you. For me, so far, it has been, although I certainly hope the Purism folks take feedback like this into account and improve future iterations of the Librem line.

Wladimir PalantGoogle to developers: We take down your extension, because we can

Today, I found this email from Google in my inbox:

We routinely review items in the Chrome Web Store for compliance with our Program policies to ensure a safe and trusted experience for our users. We recently found that your item, “Google search link fix,” with ID: cekfddagaicikmgoheekchngpadahmlf, did not comply with our Developer Program Policies. Your item did not comply with the following section of our policy:

We may remove your item if it has a blank description field, or missing icons or screenshots, and appears to be suspicious. Your item is still published, but is at risk of being removed from the Web Store.

Please make the above changes within 7 days in order to avoid removal.

Not sure why Google chose the wrong email address to contact me about this (the account is associated with another email address), but luckily this email found me. I opened the extension listing, and the description is there, as is the icon. What’s missing is a screenshot, simply because creating one for an extension without a user interface isn’t trivial. No problem; I spent a bit of time making something that would do to illustrate the principle.

And then I got another mail from Google, exactly 2 hours 30 minutes after the first one:

We have not received an update from you on your Google Chrome item, “Google search link fix,” with ID: cekfddagaicikmgoheekchngpadahmlf, item before the expiry of the warning period specified in our earlier email. Because your item continues to not comply with our policies stated in the previous email, it has now been removed from the Google Chrome Web Store.

I guess, Mountain View must be moving at extreme speeds, which is why time goes by way faster over there — relativity theory in action. Unfortunately, communication at near-light speeds is also problematic, which is likely why there is no way to ask questions about their reasoning. The only option is resubmitting, but:

Important Note: Repeated or egregious policy violations in the Chrome Web Store may result in your developer account being suspended or could lead to a ban from using the Chrome Web Store platform.

In other words: if I don’t understand what’s wrong with my extension, then I better stay away from the resubmission button. Or maybe my update with the new screenshot simply didn’t reach them yet and all I have to do is wait?

Anyway, dear users of my Google search link fix extension. If you happen to use Google Chrome, I sincerely recommend switching to Mozilla Firefox. No, not only because of this simple extension of course. But Addons.Mozilla.Org policies happen to be enforced in a transparent way, and appealing is always possible. Mozilla also has a good track record of keeping out malicious extensions, something that cannot be said about Chrome Web Store (a recent example).

Update (2018-07-04): The Hacker News thread lists a bunch of other cases where extensions were removed for unclear reasons without a possibility to appeal. It seems that having a contact within Google is the only way of resolving this.

Update 2 (2018-07-04): The extension is back, albeit without the screenshot I added (it’s visible in the Developer Dashboard but not on the public extension page). Given that I didn’t get any notification whatsoever, I don’t know who to thank for this and whether it’s a permanent state or whether the extension is still due for removal in a week.

Update 3 (2018-07-04): Now I got an email from somebody at Google, thanks to a Google employee seeing my blog post here. So supposedly this was an internal miscommunication, which resulted in my screenshot update being rejected. All should be good again now and all I have to do is resubmit that screenshot.

Mike TaylorGoogle Tier 1 Search in Firefox for Android Nightly

Late last week we quietly landed a Nightly-only addon that spoofs the Chrome Mobile user agent string for Google Search (well, Facebook too, but that's another blog post).
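
The core of such an addon is simply rewriting the User-Agent request header for a fixed set of hosts. Here is a minimal sketch of that rewriting logic as a pure function; the UA string and host list below are illustrative assumptions, not the addon's actual values:

```javascript
// Hypothetical Chrome Mobile UA string; the real addon's string may differ.
const SPOOFED_UA =
  "Mozilla/5.0 (Linux; Android 8.0) AppleWebKit/537.36 (KHTML, like Gecko) " +
  "Chrome/66.0.3359.158 Mobile Safari/537.36";

// Hosts targeted for spoofing (Google Search, plus Facebook per the post).
const SPOOFED_HOSTS = ["www.google.com", "www.facebook.com"];

// Rewrite the User-Agent entry in a webRequest-style header array.
// `headers` is an array of {name, value} objects, the shape passed to
// browser.webRequest.onBeforeSendHeaders listeners.
function spoofUserAgent(url, headers) {
  const host = new URL(url).hostname;
  if (!SPOOFED_HOSTS.includes(host)) {
    return headers; // leave all other sites untouched
  }
  return headers.map((h) =>
    h.name.toLowerCase() === "user-agent"
      ? { name: h.name, value: SPOOFED_UA }
      : h
  );
}
```

In a real WebExtension this function would be hooked up via `browser.webRequest.onBeforeSendHeaders` with the `"blocking"` and `"requestHeaders"` options; it is shown here as a standalone function so the rewriting logic is easy to follow and test.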


Bug 975444 is one of the most-duped web compat bugs; it documents the fact that the version of Google Search that Firefox for Android users receive is less rich than the one served to Chrome Mobile. And people notice (hence all the dupes).

In order to turn this situation around, we've been working on a number of platform interop bugs (in collaboration with some friendly members of the Blink team) and are hopeful of making progress toward receiving Tier 1 search by default.

Part of the plan is to sniff out bugs we don't know about (or new bugs, as the site changes very quickly) by exposing the Nightly population to the spoofed Tier 1 version for 4 weeks (until approximately July 27, 2018). If things get too bad, we can back out the addon earlier.

If you've found a bug, please report it at https://webcompat.com/issues/new.

And in the meantime, if the bugs are too annoying to deal with, you can disable it by going to about:config and setting extensions.gws-and-facebook-chrome-spoof.enabled to false (just search for gws).

Note: don't hit reset; instead, tap the true/false value and then hit toggle when that appears.

(yeah, yeah, I'll go charge my phone now.)

This Week In RustThis Week in Rust 241

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is datafrog, the lightweight embeddable datalog engine that powers Rust's non-lexical lifetimes (NLL). Thanks to Jules Kerssemakers for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

174 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Freedom to shoot yourself in the foot is not a rust marketing point 😉

eugene2k on rust-users

Thanks to DPC for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Mozilla Open Policy & Advocacy BlogÀ frente da votação no Senado Federal, Mozilla endossa a aprovação de Lei Brasileira de Proteção de Dados (PLC 53/ 2018)

O Senado Brasileiro poderá votar esta semana o Projeto de Lei de Proteção de Dados Pessoais (PLC 53/2018), aprovado pela Câmara dos Deputados em 29 de maio, após quase uma década de debate em torno de várias proposições sobre o tema. Embora alguns aspectos do Projeto ainda sejam passíveis de aprimoramentos, a Mozilla acredita que o texto representa uma estrutura básica de proteção de dados para o Brasil e instamos os reguladores brasileiros à sua urgente aprovação.

Especificamente, o PLC 53/2018:

  1. É o resultado de um processo de consultas inclusivo e aberto à sociedade brasileira, seguindo o exemplo do Marco Civil da Internet (Lei n. 12.965/2014). O processo de discussão do PLC 53/2018 envolveu várias partes interessadas do governo, setor privado, sociedade civil e academia. O projeto também recebeu apoio público de várias organizações do setor privado e da sociedade civil.
  2. Não faz distinções entre o setor privado e o governo e aplica suas disposições isonomicamente. A criação de exceções amplas para o Poder Público, conforme disposto em outros Projetos de Lei alternativos, acabaria por diluir a eficácia da lei com relação à salvaguarda dos direitos do usuário. O Governo Federal é, indiscutivelmente, o maior coletor de dados pessoais no Brasil e a coleta de dados é requisito obrigatório para o acesso aos serviços. A proximidade das eleições de 2018 e a ausência de uma lei de proteção de dados despertam preocupações relativas à eventual utilização de dados pessoais para influenciar o processo eleitoral. Esse ponto faz-se especialmente importante à luz dos recentes debates e revelações em torno da Cambridge Analytica.
  3. Introduz uma entidade reguladora nacional auto-suficiente, independente e robusta. A eficácia de um marco legal de proteção de dados pessoais reside na existência de mecanismos de garantia das obrigações e direitos, indispensavelmente. Isso inclui um alto grau de independência do governo, uma vez que o regulador deve ter jurisdição sobre as atividades de proteção de dados do Governo também. Parabenizamos também a introdução de um órgão participativo e multissetorial responsável por emitir diretrizes, garantir a transparência e avaliar a implementação da lei.
  4. Institui um conjunto de direitos para os indivíduos robusto, ressaltando a importância da obtenção de consentimento do usuário e exigindo que os responsáveis por atividades de tratamento de dados respeitem os princípios de minimização de dados, limitação das atividades de uso e coleta de dados, bem como segurança de bases de dados. Ao qualificar o consentimento como livre, informado e inequívoco o PLC 53/2018 não só estipula um alto padrão de consentimento como coloca os usuários no controle de seus dados e experiências on-line. Por fim, o projeto também reforça os mecanismos de responsabilização, ao passo que (a) coloca sobre o agente o ônus para demonstrar a adoção e eficácia das medidas de proteção de dados, e (b) permite aos usuários a possibilidade de acessar e retificar dados sobre si mesmos, bem como a possibilidade de oposição ao tratamento de dados.
  5. Define categorias de dados pessoais sensíveis; a respeito deste ponto, é bom ver dados biométricos incluídos nesta lista. Acreditamos que um regime mais rigoroso para dados sensíveis é útil para sinalizar aos responsáveis pelo tratamento de dados que um nível mais alto de proteção e segurança será necessário ante a sensibilidade das informações.

A falta de uma lei abrangente de proteção de dados expõe os cidadãos brasileiros a riscos decorrentes do uso indevido de seus dados pessoais tanto pelo Governo quanto pelos serviços privados. Este é um momento oportuno e histórico, onde o Brasil tem a oportunidade de finalmente aprovar uma lei geral de proteção de dados que irá salvaguardar os direitos dos brasileiros por gerações a vir.

Este post foi publicado originalmente em inglês.

The post À frente da votação no Senado Federal, Mozilla endossa a aprovação de Lei Brasileira de Proteção de Dados (PLC 53/ 2018) appeared first on Open Policy & Advocacy.

Karl DubostFive years

On July 2, 2013, I was hired by Mozilla on the Web Compatibility team. It has been 5 years. I didn't count the emails, the commits, the bugs opened and resolved. We do not necessarily thrive by what we have accomplished, specifically when it is expressed in raw numbers. But there were a couple of transformations and skills that I have acquired during these last five years which are little gems for taking the next step.

Working is also a lot of failures, drawbacks, painful learning experiences. A working space is humanity. The material we work with (at least in computing) is mostly ideas conveyed by humans. Not the right word at the right moment, a wrong mood, a desire for a different outcome: we do fail. Then we try to rebuild, to protect ourselves. This delicate balance is nonetheless a risk worth taking in the long term.

I'm looking forward to the next step; I really mean the next footstep. The one on the path, the one of the hikers, just the next one, which brings you closer to the next flower, the next blade of grass, which transforms the landscape in an almost imperceptible way. Breathing, discovering, learning, in tête-à-tête or alone.

Thanks to Mozilla and its community for allowing me to share some of my values with some of yours. I'm very much looking forward to the next day, to continuing this journey with you.


Mozilla Open Policy & Advocacy BlogAhead of Senate vote, Mozilla endorses Brazilian Data Protection Bill (PLC 53/2018)

As soon as this week, the Brazilian Senate may vote on Brazilian Data Protection Bill (PLC 53/2018), which was approved by the Chamber of Deputies on May 29th following nearly a decade of debate on various draft bills. While aspects of the bill will no doubt need to be refined and evolve with time, overall, Mozilla believes this bill represents a strong baseline data protection framework for Brazil, and we urge Brazilian policymakers to pass it quickly.

Specifically, this bill:

  1. Is the outcome of an inclusive and open consultation process, following the example of the landmark Brazilian Civil Rights Framework for the Internet (‘Marco Civil’). The consultation has involved multiple stakeholders from government, private sector, civil society, and academia. The bill has also received public support from various organizations in the private sector and civil society.
  2. Applies with equal strength to private sector and the government. Creating broad exceptions for government use of data, as proposed in alternative bills, would dilute the effectiveness of the data protection law to safeguard user rights. The government is arguably the largest data collector in Brazil, and government data collection is often mandatory for access to services. As the Brazilian general election approaches, some are concerned that in the absence of a data protection law, personal data could be used to influence the election. This is especially salient given the recent debates and revelations around Cambridge Analytica.
  3. Introduces a well-resourced, independent, and empowered national regulator. A strong enforcement mechanism is critical for any data protection framework to be effective. This includes a high degree of independence from the government, since the regulator should have jurisdiction over claims against the government as well. We also welcome the introduction of a participatory multi-stakeholder body to issue guidelines, ensure transparency, and evaluate the implementation of the law.
  4. Puts in place a robust framework of user rights with meaningful user consent at its core, requiring data controllers and processors to abide by the principles of data minimisation, purpose limitation, collection limitation, and data security. In particular, it includes a high standard of free, informed, and unequivocal consent, putting users in control of their data and online experiences. It also emphasizes mechanisms for accountability, putting the onus on the agent to demonstrate both the adoption and effectiveness of data protection measures, and allows for the user to access and rectify data about themselves as well as withdraw consent for any reason.
  5. Defines categories of sensitive personal data; in particular, it’s good to see biometric data included in this list. A stricter regime for certain categories of sensitive data is useful in order to signal to data controllers that a higher level of protection and security will be required given the sensitivity of the information.

The lack of a comprehensive data protection law exposes Brazilian citizens to risks of misuse of their personal data by both government and private services. This is a timely and historic moment where Brazil has the opportunity to finally pass a baseline data protection law that will safeguard the rights of Brazilians for generations to come.

Click here for a Portuguese translation of this post.

The post Ahead of Senate vote, Mozilla endorses Brazilian Data Protection Bill (PLC 53/2018) appeared first on Open Policy & Advocacy.

Benjamin BouvierMaking calls to WebAssembly fast and implementing anyref

Since this is the end of the first half-year, I think it is a good time to reflect and show some work I've been doing over the last few months, apart from the regular batch of random issues, security bugs, reviews and the fixing of 24 bugs found by our …

Mozilla Addons BlogJuly’s Featured Extensions


Pick of the Month: Midnight Lizard

by Pavel Agarkov
More than just dark mode, Midnight Lizard lets you customize the readability of the web in granular detail—adjust everything from color schemes to lighting contrast.

“This has got to be the best dark mode add-on out there, how is this not more popular? 10/10”

Featured: Black Menu for Google

by Carlos Jeurissen
Enjoy easy access to Google services like Search, Translate, Google+, and more without leaving the webpage you’re on.

“Awesome! Makes doing quick tasks with any Google app faster and simpler!”

Featured: Authenticator

by mymindstorm
Add an extra layer of security by generating two-step verification codes in Firefox.

“Thank you so much for making this. I would not be able to use many websites without it now days, literally, since I don’t use a smartphone. Thank you thank you thank you. Works wonderfully.”

Featured: Turbo Download Manager

by InBasic
A download manager with multi-threading support.

“One of the best.”

Featured: IP Address and Domain Information

by webdev7
Know the web you travel! See detailed information about every IP address, domain, and provider you encounter in the digital wild.

“The site provides valuable information and is a tool well worth having.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post July’s Featured Extensions appeared first on Mozilla Add-ons Blog.

Mozilla Addons BlogLarger image support on addons.mozilla.org

Last week, we pushed an update that enables add-on developers to use larger image sizes on their add-on listings.

We hadn’t updated our size limits for many years, so the images on listing pages are fairly small. The image viewer on the new website design scales the screenshots to fit the viewport, which makes these limitations even more obvious.

For example, look at this old listing of mine.

Old listing image on new site

The image view on the new site. Everything in this screenshot is old.

The image below better reflects how the magnified screenshot looks on my browser tab.

All of the pixels


After this fix, developers can upload images as large as they like. The maximum image display size on the site is 1280×800 pixels, which is the size we recommend uploading. For other image sizes, we recommend keeping a 1.6:1 aspect ratio. If you want to update your listings to take advantage of larger image sizes, you might want to consider using these tips to give your listing a makeover to attract more users.
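
As a sketch of that guidance, here is how a developer could check a screenshot before uploading. It assumes (plausibly, though the post doesn't spell it out) that the site scales oversized images down to fit 1280×800 while preserving aspect ratio:

```javascript
// Assumed AMO display constraints from the post: 1280×800 maximum,
// with a 1.6:1 aspect ratio recommended for other sizes.
const MAX_WIDTH = 1280;
const MAX_HEIGHT = 800;
const TARGET_RATIO = 1.6; // 1280 / 800

// Does this image keep the recommended 1.6:1 ratio (within a tolerance)?
function matchesRecommendedRatio(width, height, tolerance = 0.01) {
  return Math.abs(width / height - TARGET_RATIO) <= tolerance;
}

// Size at which the site would display the image, assuming it scales
// down (never up) to fit within 1280×800 while preserving the ratio.
function displaySize(width, height) {
  const scale = Math.min(1, MAX_WIDTH / width, MAX_HEIGHT / height);
  return { width: Math.round(width * scale), height: Math.round(height * scale) };
}
```

For example, a 2560×1600 screenshot keeps the 1.6:1 ratio and would display at the full 1280×800, while a 1024×768 (4:3) image would be flagged as off-ratio.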

We look forward to beautiful, crisper images on add-on listing pages.

The post Larger image support on addons.mozilla.org appeared first on Mozilla Add-ons Blog.

Mozilla Security BlogRoot Store Policy Updated

After several months of discussion on the mozilla.dev.security.policy mailing list, our Root Store Policy governing Certification Authorities (CAs) that are trusted in Mozilla products has been updated. Version 2.6 has an effective date of July 1st, 2018.

More than one dozen issues were addressed in this update, including the following changes:

  • Section 2.2 “Validation Practices” now requires CAs with the email trust bit to clearly disclose their email address validation methods in their CP/CPS.
  • The use of IP Address validation methods defined by the CA has been banned in certain circumstances.
  • Methods used for IP Address validation must now be clearly specified in the CA’s CP/CPS.
  • Section 3.1 “Audits” increases the WebTrust EV minimum version to 1.6.0 and removes ETSI TS 102 042 and 101 456 from the list of acceptable audit schemes in favor of EN 319 411.
  • Section 3.1.4 “Public Audit Information” formalizes the requirement for an English language version of the audit statement supplied by the Auditor.
  • Section 5.2 “Forbidden and Required Practices” moves the existing ban on CA key pair generation for SSL certificates into our policy.
  • After January 1, 2019, CAs will be required to create separate intermediate certificates for issuing SSL and S/MIME certificates. Newly issued intermediate certificates will need to be restricted with an EKU extension that does not contain anyPolicy and does not contain both serverAuth and emailProtection. Intermediate certificates issued prior to 2019 that do not comply with this requirement may continue to be used to issue new end-entity certificates.
  • Section 5.3.2 “Publicly Disclosed and Audited” clarifies that Mozilla expects newly issued intermediate certificates to be included on the CA’s next periodic audit report. As long as the CA has current audits, no special audit is required when issuing a new intermediate. This matches the requirements in the CA/Browser Forum’s Baseline Requirements (BR) section 8.1.
  • Section 7.1 “Inclusions” adds a requirement that roots being added to Mozilla’s program must have complied with Mozilla’s Root Store Policy from the time that they were created. This effectively means that roots in existence prior to 2014 that did not receive BR audits after 2013 are not eligible for inclusion in Mozilla’s program. Roots with documented BR violations may also be excluded from Mozilla’s root store under this policy.
  • Section 8 “CA Operational Changes” now requires notification when an intermediate CA certificate is transferred to a third party.
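
The separate-intermediates rule above can be expressed as a simple predicate. The following is an illustrative sketch, not an official validation routine, and it uses readable labels ("anyPolicy", "serverAuth", "emailProtection") in place of parsed OIDs:

```javascript
// Sketch of the post-January-1-2019 EKU rule for newly issued
// intermediates: an EKU extension must be present, must not contain
// anyPolicy, and must not contain both serverAuth and emailProtection
// (SSL and S/MIME issuance must live in separate intermediates).
function intermediateEkuComplies(ekus) {
  if (!Array.isArray(ekus) || ekus.length === 0) {
    return false; // EKU extension is required
  }
  if (ekus.includes("anyPolicy")) {
    return false; // unrestricted EKU is not allowed
  }
  const ssl = ekus.includes("serverAuth");
  const smime = ekus.includes("emailProtection");
  return !(ssl && smime); // may have one purpose, not both
}
```

Under this reading, an intermediate with only serverAuth (or only emailProtection) complies, while one carrying both purposes, or anyPolicy, does not.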

A comparison of all the policy changes is available here.

The post Root Store Policy Updated appeared first on Mozilla Security Blog.

Marco ZeheRediscovering blindness products

In recent months, I have discovered a tendency within myself that longs for more focused, hassle-free environments or niches, where distractions are reduced to a minimum and I can immerse myself in one thing, and one thing only. And that has led to rediscovering the merits of some blindness-specific products.

A bit of history

Products that specifically fulfill the needs of blind users have been around for a long time, and in earlier years were often the only means of accessing certain information or content. These range from tactile or talking scales, talking thermometers, and the like, to sophisticated reading machines such as the Optacon, and braille displays. Later, as technology advanced, some of these means of access were replaced by scanners or, as of today, cameras, both stationary and mobile, combined with computers and optical character recognition software.

And as accessibility has become more mainstream, so have many assistive technology concepts such as DAISY book reading or GPS navigation. The greatest example is probably the popularity of audio books. Originally mostly used by the blind, shipped in big boxes full of cassette tapes (or even tape spools), narrated by often semi-professional volunteers, and only available through specialized libraries for the blind, they have evolved into a mainstream form of consumption through services such as Audible. They are very popular among sighted consumers, and narrators are often professional speakers or actors.

Enter the mainstream of smartphones and tablets

In the early 2000s, cell phones were still mainly made accessible by specialized software (for example Talks or MobileSpeak for the Nokia S60 platform), or by special devices such as the PAC Mate, which brought a mainstream operating system onto special hardware with a screen reader added. The advance of iPhone and Android devices in the late 2000s then brought a revolution for blind and visually impaired people. For the first time, accessibility was being built into the platform, and no special software or hardware was needed to operate these devices.

Over the next couple of years, many of the things specialized hardware did before were transferred, or newly developed, for these platforms. To name a few:

  • DAISY e-book players with interfaces to certain library services for the blind
  • special GPS solutions such as BlindSquare or Sendero
  • the KNFB Reader to scan documents with the built-in camera
  • VoiceDream Reader to read all kinds of material, including DAISY, MP3 books, epub, Word, PDF and other documents etc.

All these apps fulfill certain needs for blind users that seem to no longer require the use of special hardware.

Moreover, the cost of these apps is far lower than many of the price tags for the special hardware such as a DAISY player/recorder, stand-alone GPS solution, or reading machines.

One thing that didn’t get replaced, and which is still as important today as it ever was, is a braille display. So smartphones and tablets usually also support braille displays, mainly via Bluetooth. If one is able and desires to read braille, there is still no way around specialized hardware for this purpose.

Problems crept in

All good, then? Well, I thought so for a long time, too. I even sold some blindness-related products, such as my first-generation Victor Reader Stream by Humanware, because I thought my iPhone and iPad could now fulfill all my needs. And for the most part, they do, but at a cost.

And that cost is not, in most cases, technical in nature, but rather has to do with the sheer fact that the device I am running the app on is a mainstream device. Many of these problems are, in one form or another, also applicable to people who aren’t blind, but might impact them less than they do me.

Distractions while reading

Here I am, immersed in a book in my eBook app of choice, and the screen reader is merrily reading it to me at a comfortable pace. Suddenly, a notification comes in, for example a Twitter notification, crashing in like a loud bell, either chiming over my narrative or even stopping it in its tracks. When it does which, I have not figured out yet; it seems to switch randomly. Granted, I can turn Do Not Disturb mode on in such a way that it even suppresses notifications when the screen is active. But I have to consciously do that. At other times, I might want Do Not Disturb on while the phone is locked, but still get notifications when the screen is on, so I have to go back into a Settings screen three or four levels deep to toggle the option.

This is not blindness specific. A person reading in the same app will get distracted by a banner notification popping up visually just as I am acoustically.

Another example is listening to music or podcasts. With the mobile screen reader on, any notification will by default cause the normal audio source to duck (its volume turned down) so that the screen reader’s voice can dominantly talk over it. You can then either quickly silence speech, or pause the audio source. But what if you’re on your Bluetooth earphones doing something on the opposite side of your room from where your smartphone is? Suddenly, you have two voices talking to you at once. I, for one, cannot cope with two vocal sources talking to me at the same time. I either miss one of them, or don’t catch what is being said at all any more.

Reading e-books in braille

I don’t know about other blind people, but having tested several systems over the years, I found that none of them get continuous reading right with a braille display. No matter which OS I was using, desktop or mobile, they all had flaws that made a continuous reading experience, fully immersing myself in the contents of a great fantasy or science fiction novel, a pain in the posterior. One or two of them got German grade 2 forward translation all wrong for literature, forcing capitalized writing upon me, which is uncommon. Others would jump around erratically when pages were flipped. Others wouldn’t offer grade 2 in a language I speak at all, and while I can read computer messages or code in computer braille just fine, I downright refuse to do so with literature. It’s just not acceptable.

I also found that most of them did a better job in English than in other languages, like my mother tongue German. And from asking around, I hear similar things about French and Spanish. But even in English, several people I talked to reported that braille displays jumping around unpredictably, causing massive reading interruptions, is a common theme.

So after extensive testing and research, I can confidently say that all mainstream solutions thus far have failed to address one major part of my needs: Continuous literature reading in braille. The fact that there are several solutions out there that do braille grade 2 translations well in various languages means that it can be done, it’s just that it isn’t being done in all of the screen readers I tested.

GPS in pedestrian mode

Yes, there are lots of GPS solutions that try to address the needs of blind pedestrians, some more costly than others, some on a subscription model, others not. But all of them fall short: they usually don’t integrate well with the maps solution that is on the device by default, use their own map material, and some also use additional services for points of interest, yet they all just don’t quite deliver the precision I’d expect. I did a little shootout with my best friend, on main and side streets as well as in an off-road situation in a park or forest. He was using a Trekker Breeze from Humanware; I was using my iPhone with several GPS apps on it. We both set landmarks and compared how our devices behaved at street junctions and in other situations. We went the same routes, so we had roughly the same GPS coverage. On streets, routes were OK with both devices, although even there his intersection announcements were more precise than mine, and my GPS apps got addresses wrong more often than his Breeze did. But it was in the off-road situation that he really beat the crap out of my apps. The precision with which the Breeze alerted him to landmarks at turning points on park pathways and in other situations was just mind-boggling.

In addition to these observations, I also noticed that my apps, regardless of which one I used, were all very verbal and distracting, trying to give me lots of information I didn’t need at that moment. And the UIs were all much more complex than on the Trekker; even though I was familiar with the apps, I took longer to get certain tasks done than he did. And did I mention that I cannot sort out two voices talking at the same time? With my screen reader on, and much of the apps being self-voicing anyway, I often found myself in situations where two different synthesizers were babbling information I could have found useful, but I couldn’t keep it separated because of the two-voices problem.

General inconsistencies and inaccessibility

One other problem that keeps me always on edge when using mainstream devices is screen reader inconsistencies and inaccessible apps or websites. Any update to an app can break accessibility, any update to the OS can break certain screen reader behavior, and web content I might need or have to consume at a particular moment can prove to be inaccessible, requiring me either to fiddle around with screen reader tricks to kick it into obedience, or leaving me unable to get something done at all. Yes, despite all web accessibility efforts, this is still more often the case in 2018 than any of us could want.

Reorienting my views and behavior

These problems didn’t all dawn on me at once, except maybe for the direct comparison of GPS solutions described above. They crept in, and over time became more and more annoying, even a stress factor at times. Reporting bugs in the braille translation to various companies and projects has not yielded many improvements, whether the reports came from me or from other braille users. And other problems just cannot really be fixed without some serious rethinking along the lines of “Hey, the user is reading a book, so maybe we should be smart enough to not disturb them as much…”

So I thought about what could be done to rectify this situation. I want to be less distracted while doing certain things, and to get more precise directions when walking in my favorite park. I recently lost a lot of weight and became more mobile physically, so this is more important to me today than it was a few years ago. But it must be an enjoyable experience, not one that stresses me out just because of the cognitive load of compensating for my apps’ lack of precision. And as I get older, I can tolerate these stress levels less than when I was younger. I was burnt out once, and I surely don’t need to go there again!

It was then that a few things happened at once. I was talking to my best friend, and at some point he mentioned that his beloved Breeze was slowly but surely coming apart after 7 years of heavy use, and that he needed something new. Through research, I found the Victor Reader Trek by Humanware, which combines two devices into one: a Victor Reader Stream and a Trekker Breeze. Since we’re both in Germany, we’ll still have to wait a bit until this device is released here, but it’s definitely something both of us are curious about. The Trek, as it’s called, does multiple things well: audio books from various sources, podcasts, and other media consumption in a controlled, secluded environment without distractions, plus navigation in its orientation mode. If you’re interested in hearing a bit about it, the Tech Doctor Podcast released an episode on the Trek a few weeks before this writing.

The second thing that happened was that I came across the ElBraille, a Windows 10 PC in docking-station format that can connect to two models of the Freedom Scientific Focus Blue braille displays. I own the 14-cell version, so I asked a local dealer to give me one to try. It turned out not to be a fit for me, for various braille-related reasons and more, but it prompted me to look for other small braille devices that could be used to read a braille book converted from a digital source. And because I was a huge fan of the Handy Tech Bookworm in the 1990s and 2000s, an 8-cell braille device that fit in the palm of one hand and could be used to read for hours and hours, I eventually landed on the Help Tech Actilino, a 16-cell display with many features similar to the Bookworm. I’ll be getting a device in two or three weeks to test for a while, and I am very curious whether I can use it to immerse myself fully in a piece of braille literature without the weight of a full braille book crushing my ribs. If it is anything close to the Bookworm experience, I am almost sure I’m gonna love it.


As I have been using mainstream devices almost exclusively for several years now, I have found that their sheer richness can often cause a cognitive load that gets in the way of activities that are meant to help you unwind or just get you where you need to be, or otherwise causes more hassle than should be necessary. It dawned on me that some of these blindness-specific devices could, after all, have a place in my life again, to help me relax more or take some other stress out of daily activities. These devices are a very controlled environment, and I could imagine that not having to deal with the inconsistencies of a screen reader, the inaccessibility of a website, or the translation goofiness of some braille output could be refreshing for a change.

I fully realize that this is not an option for everyone, and I want to clearly state that these are views coming out of my personal experiences over the past few years. These assistive technology devices still have quite a high price tag, and especially given the high unemployment rate among blind people in all parts of the world, and considering that government funding is not available everywhere, it is more important than ever that mainstream devices become more accessible with every day. But people also differ, and time has shown that touch screen devices aren’t for every person, either. Some people just cannot effectively use them. So the statement that for some, such special devices are still the only means of accessing certain content, remains as true today as it was ten, twenty, or thirty years ago.

Interestingly, I have also noticed that this tendency to consciously take time off from the distractions of the smartphone has become more prominent around me. My partner, for example, uses a Kindle Oasis exclusively for reading books. It does not have notification bells and whistles, but she can zoom the text to a scale that’s comfortable for her eyes. As she once said: books on paper can’t be zoomed. So this tendency to use dedicated devices for immersive tasks is a theme I am seeing more and more among people across a multitude of abilities.

Time will tell if this will actually work out for me the way I am hoping it will, but the desire for some simplification and reduction of overhead is definitely strong within me. 🙂


The products mentioned here are purely the result of my own research. This is not a sponsored blog post; I get no money from any of the mentioned companies for mentioning their products. I did a lot of product research and looked at far more options than I mentioned here, but these were the ones I settled on wanting to try out.

Chris H-C: Some More Very Satisfying Graphs

I guess I just really like graphs that step downwards:

[Chart: Telemetry Budget Forecasting]

Earlier this week :mreid noticed that our Nightly population suddenly started sending us, on average, 150 fewer kilobytes (uncompressed) of data per ping. And they started doing this in the middle of the previous week.

Step 1 was to panic that we were missing information. However, no one had complained yet and we can usually count on things that break to break loudly, so we cautiously-optimistically put our panic away.

Step 2 was to see if the number of pings changed. It could be we were being flooded with twice as many pings at half the size, for the same volume. This was not the case:

[Chart: Telemetry Budget Forecasting, ping counts]

Step 3 was to do some code archaeology to try and determine the “culprit” change that was checked into Firefox and resulted in us sending so much less data. We quickly hit upon the removal of BrowserUITelemetry and that was that.

…except… when I went to thank :Standard8 for removing BrowserUITelemetry and saving us and our users so much bandwidth, he was confused. To the best of his knowledge, BrowserUITelemetry was already not being sent. And then I remembered that, indeed, back in March :janerik had been responsible for stopping many things like BrowserUITelemetry from being sent (since they were unmaintained and unused).

So I fired up an analysis notebook and started poking to see if I could find out what parts of the payload had suddenly decreased in size. Eventually, I generated a plot that showed quite clearly that it was the keyedHistograms section that had decreased so radically.
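That kind of per-section breakdown is straightforward to compute once pings are decoded. Here is a rough sketch of the idea; the ping structure below is illustrative, not the real "main" ping schema:

```python
import json

# Illustrative sketch: given a decoded ping, measure how many bytes each
# top-level payload section contributes when serialized back to JSON.
def section_sizes(ping):
    payload = ping.get("payload", {})
    return {name: len(json.dumps(section))
            for name, section in payload.items()}

# A toy ping with a deliberately verbose keyedHistograms section.
ping = {"payload": {
    "keyedHistograms": {"SOME_PROBE": {"values": list(range(200))}},
    "info": {"revision": "abc123"},
}}

# Print sections largest-first to spot the heavy one.
for name, size in sorted(section_sizes(ping).items(), key=lambda kv: -kv[1]):
    print(name, size)
```

Aggregated over a week of pings, a drop in one section's share stands out clearly in a plot.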

[Chart: main_ping_size analysis in Databricks]

Around the same time :janerik found the culprit in the list of changes that went into the build: we are no longer sending a couple of incredibly-verbose keyed histograms because their information is now much more readily available in profiles.

The power of cleaning up old code: removing 150kb from the average “main” ping sent multiple times per day by each and every Firefox Nightly user.

Very satisfying.


Cameron Kaiser: Ad-blocker-blockers hit a new low. What's the solution?

It may be the wrong day to slam the local newspapers, but this was what greeted me trying to click through to a linked newspaper article this morning on Firefox Android. The link I was sent was from the Riverside Press-Enterprise, but this appears to be throughout the entire network of the P-E's owners, the Southern California News Group (which includes the Orange County Register, San Bernardino Sun and Los Angeles Daily News):

That's obnoxious. SCNG is particularly notorious for not being very selective about ads and they tend to be colossally heavy and sometimes invasive; there's no way on this periodically green earth that I'm turning the adblocker off. I click "no thanks." The popover disappears, but what it was covering was this:

That's not me greeking the article so you can't see what article I was reading. The ad-blocker-blocker did it so that a clever user or add-on can't just set the ad-blocker-blocker's popover to display:none or something. The article is now incomprehensible text.

My first reaction is that any possibility I had of actually paying $1 for the 4 week subscription to any SCNG paper just went up in the flames of my great furious wrath (after all, this is a blog s**tpost). The funny part is that TenFourFox's basic adblock actually isn't defeated by this, probably because we're selective about what actually gets blocked and so the ad-blocker-blocker thinks ads are getting through. But our old systems are precisely those that need adblockers because of all the JavaScript (particularly) that modern ad systems lard their impressions up with. Anyway, to read the article I actually ended up looking at it on the G5. There was no way I was going to pay them for engaging in this kind of behaviour.

The second thought I had was, how do you handle this? I'm certainly sympathetic to the view that we need stronger local papers for better local governance, but print ads are a much different beast than the dreck that online ads are. (Yes, this blog has ads. I don't care if you block them or not.) Sure, I could have subscriptions to all the regional papers, or at least the ones that haven't p*ssed me off yet, but then I have to juggle all the memberships and multiple charges and that won't help me read papers not normally in my catchment area. I just want to click and read the news, just like I can anonymously pick up a paper and read it at the bar.

One way to solve this might be to have revenue sharing arrangements between ISPs and papers. It could be a mom-and-pop ISP and the local paper, if any of those still exist, or it could be a large ISP and a major national media group. Users on that ISP get free access (as a benefit of membership, even), and the paper gets a piece. Everyone else can subscribe if they want. This kind of thing already exists on Apple TV devices, after all: if I buy the Spectrum cable plan, I get those channels free on Apple TV over my Spectrum Internet access, or I pay if I don't. Why couldn't newspapers work this way?

Does net neutrality prohibit this?

Mozilla VR Blog: This week in Mixed Reality: Issue 11


This week, we're making great strides in adding new features and a wide range of improvements, and our new contributors are helping us fix bugs.


We are churning out new features and continuing to make UI changes to deliver the best possible experience on Firefox Reality by implementing the following:

  • Focus mode with the new design
  • Full screen mode and widget resizing
  • Reusable quad node which adds support for different scale modes
  • World Fade Out/In API and blitter
  • Back handler API
  • WidgetResizer utility node
  • Settings panel
  • A single window UI design with a browser window and bar below

Here is a sneak peek of Firefox Reality with focus mode, full screen mode and widget resizing with the new UX/UI:

Firefox Reality Focus mode, full screen mode and widget resizing from Imanol Fernández Gorostizaga on Vimeo.


We are working towards content creator and content import updates on Hubs by Mozilla and added some new features:

  • Continued work on image and model spawning: animated GIFs, object deletion, proxy integration
  • Editor filesystem management feature complete, GLTF scene saving/loading, property editing
  • Migration to Maya GLTF exporter for architecture kit
  • Proof of concept of 3d spline generation and rendering for drawing tool
  • Media proxy (farspark) operationalized and deployed

Join our public WebVR Slack #social channel to participate in the discussion!

Content ecosystem

This week, we launched v1.4.0 of the Unity WebVR project, which includes a new example scene and Unity code for swapping scenes for navigation.

Shout out to Kyle Reczek for contributing a patch that fixes the state of the VR camera and manager, which was incorrect when exiting VR while switching scenes.

Found a critical bug? File it in our public GitHub repo or let us know on the public WebVR Slack #unity channel and as always, join us in our discussion!

Stay tuned for new features and improvements across our three areas!

Dave Hunt: Python unit tests now running with Python 3 at Mozilla

I’m excited to announce that you can now run the Python unit tests for packages in the Firefox source code against Python 3! This will allow us to gradually build support for Python 3, whilst ensuring that we don’t later regress. Any tests not currently passing in Python 3 are skipped with the condition skip-if = python == 3 in the manifest files, so if you’d like to see how they fail (and maybe provide a patch to fix some!) then you will need to remove that condition locally. Once you’ve done this, use the mach python-test command with the new optional argument --python. This will accept a version number of Python or a path to the binary. You will need to make sure you have the appropriate version of Python installed.
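For reference, a skipped entry in one of these manifest files looks roughly like this (the test filename here is illustrative):

```ini
[test_example.py]
skip-if = python == 3
```

Deleting the skip-if line locally lets the test run (and most likely fail) under Python 3.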

Once you’re ready to enable tests to run in TaskCluster, you can simply update the python-version value in taskcluster/ci/source-test/python.yml to include the major version numbers of Python to execute the tests against. At the current time our build machines have Python 2.7 and Python 3.5 available.
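As a hedged sketch (the exact surrounding structure of the job definition may differ), the change in taskcluster/ci/source-test/python.yml amounts to something like:

```yaml
# Under the job definition for your test suite:
python-version: [2, 3]
```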

To summarise:

  1. Remove skip-if = python == 3 from manifest files. These are typically named manifest.ini or python.ini, and are usually found in the tests directory for the package.
  2. Run mach python-test --python=3 with your target path or subsuite.
  3. Fix the package(s) to support Python 3 and ensure the tests are passing.
  4. Add Python 3 to the python-version for the appropriate job in taskcluster/ci/source-test/python.yml.

At the time of writing, pythonclock.org tells me that we have just over 18 months before Python 2.7 will be retired. What this actually means is still somewhat unknown, but it would be a good idea to check if your code is compatible with Python 3, and if it's not, to do something about it. The Firefox build system at Mozilla uses Python, and it's still some way from supporting Python 3. We have a lot of code, it's going to be a long journey, and we could do with a bit of help!

Whilst we do plan to support Python 3 in the Firefox build system (see bug 1388447), my initial concern and focus has been the Python packages we distribute on the Python Package Index (PyPI). These are available to use outside of Mozilla’s build system, and therefore a lack of Python 3 support will prevent any users from adopting Python 3 in their projects. One such example is Treeherder, which uses mozlog for parsing log files. Treeherder is a django project, which recently dropped support for Python 2 (unless you’re using their long term support release, which will support Python 2 until 2020).

Updating these packages to support Python 3 isn’t necessarily that hard to do, especially with tools such as six, which provides utilities for handling the differences between Python 2 and Python 3. The problem has been that we had no way to run the tests against Python 3 in TaskCluster. This is no longer the case, and Python unit tests can now be run against Python 3!
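As a small illustration of the kind of change involved (a generic example, not taken from any particular Mozilla package), the standard library's __future__ module alone already smooths over some of the best-known differences:

```python
# A minimal sketch of one common Python 2-to-3 fix: __future__ imports
# make Python 2 behave like Python 3 for print and division, so the same
# source runs identically under both interpreters.
from __future__ import print_function, division

def average(values):
    # Under Python 2 without "division", 7 / 2 truncates to 3;
    # with the __future__ import it is 3.5 on both versions.
    return sum(values) / len(values)

print(average([3, 4]))
```

For differences that __future__ cannot paper over, such as string and bytes handling, six provides the remaining compatibility shims.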

So far I have enabled Python 3 jobs for our mozbase unit tests (this includes the aforementioned mozlog), and our mozterm unit tests. There are still many tests in mozbase that are not passing in Python 3, so as mentioned above, these have been conditionally skipped in the manifest files. This will allow us to enable these tests as support is added, and this condition could even be used in the future if we have a package that doesn’t have full compatibility with Python 2.

Now that running the tests against multiple versions of Python is relatively easy, it’s a great time for me to encourage our community to help us with supporting Python 3. If you’d like to help, we have a tracking bug for all of our mozbase packages. Find a package you’d like to work on, read the comments to understand what you need and how to get set up, and let me know if you get stuck!