Mozilla Open Policy & Advocacy Blog: Mozilla’s Cyber(in)security Summit

We’re excited to announce Mozilla’s Cyber(in)security Summit on October 24th in Washington, D.C. and streaming on Air Mozilla. Join us for a discussion on how we can all help secure the internet ecosystem.

Mozilla is excited to announce Cyber(in)security, a half-day policy summit that will explore the key issues surrounding the U.S. Government’s role in cybersecurity, the full cycle process of how the U.S. Government acquires, discloses and exploits vulnerabilities and what steps it can take to make Americans more secure. This is an important part of securing the global internet.

“With nonstop news of data breaches and ransomware attacks, it is critical to discuss the U.S. Government’s role in cybersecurity,” said Denelle Dixon, Mozilla’s Chief Business and Legal Officer. “User security is a priority and we believe it is necessary to have a conversation about the reforms needed to strengthen and improve the Vulnerabilities Equities Process to ensure that it is properly transparent and doesn’t compromise our national security or our fellow citizens’ privacy. Protecting cybersecurity is a shared responsibility and governments, tech companies and users all need to work together to make the internet as secure as possible.”

Cyber(in)security, to be held on Tuesday, October 24th at the Loft at 600 F in Washington, D.C., will take place from 1:00 pm to 7:00 pm ET. There will be four one-hour sessions followed by a networking happy hour.

You can RSVP here to attend.

Mozilla GFX: WebRender newsletter #4

We skipped the newsletter for a few weeks (sorry about that!), but we are back. I don’t have a lot to report today, partly because I don’t yet have a good workflow for tracking the interesting changes (especially in Gecko), so I am most likely missing a lot of them, and partly because a lot of us are working on big pieces of the project that are taking time to come together; I am waiting for those to be completed before they make it into the newsletter.

Notable WebRender changes

  • Glenn started reorganizing the shader sources to make them compile faster (important for startup time).
  • Morris implemented the backface-visibility property.
  • Glenn added some optimizations to the clipping code.
  • Glenn improved the scheduling/batching of alpha passes to reduce the number of render target switches.
  • Sotaro improved error handling.
  • Glenn improved the transfer of the primitive data to the GPU by using pixel buffer objects instead of texture uploads.
  • Glenn added a web-based debugger UI to WebRender. It can inspect display lists, batches and can control various other debugging options.

Notable Gecko changes

  • Kats enabled layers-free mode for async scrolling reftests.
  • Kats and Morris enabled rendering tables in WebRender.
  • Gankro fixed a bug with invisible text not casting shadows.
  • Gankro improved the performance of generating text display items.

Gary Kwong: Porting a legacy add-on to WebExtensions

tl;dr: Search Keys has been ported successfully and it is known as Add Search Number. Please try it! It works with Google, Yahoo (HK/TW/US), Bing, DuckDuckGo and even Wikipedia’s search page.


Add Search Number

I have been using the excellent Search Keys add-on (original page) for a long time. It allows one to “go to search results by pressing the number of the search”. However, it hasn’t been updated for the better part of a decade, and most features (e.g. support for Yahoo! and Bing) had broken; only the numbers for Google Search still worked.

Recently, there has been a push to move to the WebExtensions API, especially since Firefox 57 will stop supporting legacy XUL add-ons. Hence, I set out on a quest to see what it would take to port Search Keys away from XUL, and I kept the author updated throughout.

Discoveries:

  • Using GitHub with Travis and ESLint integration was crucial for saving time by catching silly syntax errors early. I’m sure the same could be done with your favourite repository hosting alternative (Bitbucket, GitLab, etc.).
  • Getting web-ext via npm also proved essential, along with the WebExtension examples.
  • You need to check if the old APIs have equivalents.
    • Search Keys was using nsIIOService, which has no equivalent, but it was only used to ensure that a URL is indeed a URL, so I just did it another way (new URL("<url>")); see the sketch after this list. Thanks :MattN for the tip.
    • Another usage was for openUILinkIn, and there are similar-enough WebExtensions equivalents for this (tabs, windows).
    • (File a bug if an equivalent isn’t available, but first check for dupes)
  • Migrating an old project by another person is hard. I had several commits where I removed features, whittled the code down to the bare minimum (my objective was just to add numbers to Google Search results), got it to work, tested it, and then re-added support for Yahoo!, Bing, and even DuckDuckGo and Wikipedia.
  • Comments proved extremely helpful, in the absence of documentation.
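For illustration, here is a minimal sketch (my own, not taken from the Add Search Number source) of the new URL() check mentioned above, plus the tabs API call that roughly replaces openUILinkIn; the helper name isValidUrl is hypothetical:

// Hypothetical helper: returns true if the string parses as a URL,
// replacing the old nsIIOService-based check.
function isValidUrl(candidate) {
  try {
    new URL(candidate); // throws a TypeError on malformed input
    return true;
  } catch (e) {
    return false;
  }
}

isValidUrl("https://duckduckgo.com/?q=webextensions"); // true
isValidUrl("not a url");                               // false

// Opening a result in a new tab from extension code, roughly replacing openUILinkIn:
// browser.tabs.create({ url: "https://duckduckgo.com/?q=webextensions" });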

It took me about two days to port the add-on. Add-on development has come a long way and is now much easier thanks to these tools (GitHub, devtools, etc.), as well as AMO being much improved since I started writing my first add-on (ViewAbout) almost a decade ago. Sadly, ViewAbout is unlikely to ever be ported to WebExtensions. (The reasons are at that link.)

This was tested on Firefox 55, so I experienced exactly the difficulties that an add-on developer would face right now.

The only major caveat?

There were times when I set a breakpoint on a content script using Firefox’s Developer Tools (instantiated via web-ext). After I refreshed the page, the extension would occasionally “disappear” from the devtools. I would then have to close Firefox, restart it via web-ext, re-set the breakpoint, then cross my fingers and hope that the devtools would stop at the required breakpoint.

In my experience, the port was straightforward, as the original add-on was fairly simple. I understand that for more complex add-ons, the porting process is much more complicated and takes much longer.

What has your experience been like?

(Please note that this post does not discuss the pros and cons of Firefox 57 and later supporting only WebExtensions and cutting off legacy support; any comments on this will be removed.)


This Week In Rust: This Week in Rust 200

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is rug, a crate providing arbitrary-precision integers, rationals and floating-point numbers, using GMP, MPFR and MPC. Thanks to Trevor Spiteri for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

160 pull requests were merged in the last week

New Contributors

  • 42triangles
  • David Adler
  • Gauri Kholkar
  • Ixrec
  • J. Cliff Dyer
  • Michal Budzynski
  • rwakulszowa
  • smt923
  • Trevor Merrifield

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

<heycam> one of the best parts about stylo has been how much easier it has been to implement these style system optimizations that we need, because Rust
<heycam> can you imagine if we needed to implement this all in C++ in the timeframe we have
<bholley> heycam: yeah srsly
<bholley> heycam: it's so rare that we get fuzz bugs in rust code
<bholley> heycam: considering all the complex stuff we're doing
* heycam remembers getting a bunch of fuzzer bugs from all kinds of style system stuff in gecko
<bholley> heycam: think about how much time we could save if each one of those annoying compiler errors today was swapped for a fuzz bug tomorrow :-)
<njn> you guys sound like an ad for Rust

Conversation between some long-time Firefox developers.

Thanks to Josh Matthews for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Mike Hoye: Thirty-Five Minutes Ago

Well, that’s done.

mhoye@ANGLACHEL:~/src/planet-content/branches/planet$ git diff | grep "^-name" | wc -l
401
mhoye@ANGLACHEL:~/src/planet-content/branches/planet$ git commit -am "The Great Purge of 2017"

Purging the Planet blogroll felt a lot like being sent to exorcise the ghosts of my own family estate. There were a lot of old names, old memories and more than a few recent ones on the business end of the delete key today.

I’ve pulled out all the feeds that errored out, everyone who isn’t currently involved in some reasonably direct capacity in the Mozilla project and a bunch of maybes that hadn’t put anything up since 2014, and I’d like the record to show that I didn’t enjoy doing any of that.

If you believe your feed was pulled in error, please file a bug so I can reinstate it directly.

Air Mozilla: Mozilla Weekly Project Meeting, 18 Sep 2017

The Monday Project Meeting.

The Firefox Frontier: Ad Blocker Roundup: 5 Ad Blockers That Improve Your Internet Experience

Ad Blockers are a specific kind of software called an extension, a small piece of software that adds new features or functionality to Firefox. Using Ad Blockers, you can eliminate … Read more

David Humphrey: Fixing a bug in TensorBoard

This week I'm talking with my open source students about bugs. Above all, I want them to learn how The Bug is the unit of work of open source. Learning to orient your software practice around the idea that we incrementally improve an existing piece of code (or a new one) by filing, discussing, fixing, and landing bugs is an important step. Doing so makes a number of things possible:

  • it socializes us to the fact that software is inherently buggy: all code has bugs, whether we are aware of them yet or not. Ideally this leads to an increased level of humility
  • it allows us to ship something now that's good enough, and improve it as we go forward. This is in contrast to the idea that we'll wait until things are done or "correct."
  • it provides an interface between the users and creators of software, where we can interact outside purely economic relationships (e.g., buying/selling).
  • connected with the above, it enables a culture of participation. Understanding how this culture works provides opportunities to become involved.

One of the ways that new people can participate in open source projects is through Triaging existing bugs: testing if a bug (still) exists or not, connecting the right people to it, providing more context, etc.

As I teach these ideas this week, I thought I'd work on triaging a bug in a project I haven't touched before. When you're first starting out in open source, the process can be very intimidating and mysterious. Often I find my students look at what goes on in these projects as something they do vs. something I could do. It almost never feels like you have enough knowledge or skill to jump in and join the current developers, who all seem to know so much.

The reality is much more mundane. The magic you see other developers doing turns out to be indistinguishable from trial and error, copy/pasting, asking questions, and failing more than you succeed. It's easy to confuse the end result of what someone else does with the process you'd need to undergo if you wanted to do the same.

Let me prove it to you: let's go triage a bug.

TensorFlow and TensorBoard

One of the projects that's trending right now on GitHub is Google's open source AI and Machine Learning framework, TensorFlow. I've been using TensorFlow in a personal project this year to do real-time image classification from video feeds, and it's been amazing to work with and learn. There's a great overview video on the tensorflow.org web site showing the kinds of things Google and others are doing with TensorFlow to automate all sorts of tasks, along with API docs, tutorials, etc.

TensorFlow is just under 1 million lines of C++ and Python, and has over 1,100 contributors. I've found the quality of the docs and tools to be first class, especially for someone new to AI/ML like myself.

One of those high quality tools is TensorBoard.

TensorBoard

TensorBoard is a Python-based web app that reads log data generated by TensorFlow as it trains a network. With TensorBoard you can visualize your network, understand what's happening with learning and error rates, and gain lots of insight into what's actually going on with your training runs. There's an excellent video from this year's TensorFlow Dev Summit (more videos at that link) showing a lot of the cool things that are possible.

A Bug in TensorBoard

When I started using TensorFlow and TensorBoard this spring, I immediately hit a bug. My default browser is Firefox, and here's what I saw when I tried to view TensorBoard locally:

Firefox running TensorBoard

Notice all the errors in the console related to Polymer and document.registerElement not being a function. It looks like an issue with missing support for Custom Elements. In Chrome, everything worked fine, so I used that while I was iterating on my neural network training.
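As an aside, here is a minimal check (my own, not from TensorBoard or Polymer) that can be pasted into the devtools console to confirm whether the v0 Custom Elements API these errors point at is available:

// Logs whether the v0 Custom Elements API (document.registerElement) exists.
// Polymer builds that depend on it need a polyfill where it is missing.
if (typeof document.registerElement === "function") {
  console.log("Custom Elements v0 is supported here.");
} else {
  console.log("No document.registerElement; a web components polyfill would be required.");
}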

Now, since I have some time, I thought I'd go back and see if this was fixable. The value of having the TensorBoard UI be web based is that you should be able to use it in all sorts of contexts, and in all sorts of browsers.

Finding/Filing the Bug

My first step was to see if this bug was known. If someone has already filed it, then I won't need to; it may even be that someone is already fixing it, or that it's fixed in an updated version.

I begin by looking at the TensorBoard repo's list of Issues. As I said above, one of the amazing things about true open source projects is that more than just the code is open: so too is the process by which the code evolves in the form of bugs being filed/fixed. Sometimes we can obtain a copy of the source for a piece of software, but we can't participate in its development and maintenance. It's great that Google has put both the code and entire project on GitHub.

At the time of writing, there are only 120 open issues, so one strategy would be to just look through them all for my issue. This often won't be possible, though, and a better approach is to search the repo for some unique string. In this case, I have a bunch of error messages that I can use for my search.

I search for document.registerElement and find 1 issue, which is a lovely outcome:

Searching GitHub for my issue

Issue #236: tensor board does not load in safari is basically what I'm looking for, and discusses the same sorts of errors I saw in Firefox, but in the context of Safari.

Lesson: often a bug similar to your own is already filed, but may be hiding behind details different from the one you want to file. In this case, you might unknowingly file a duplicate (dupe), or add your information to an existing bug. Don't be afraid to file the bug: it's better to have it filed in duplicate than for it to go unreported.

Forking and Cloning the repo

Now that I've found Issue #236, I have a few options. First, I might decide that having this bug filed is enough: someone on the team can fix it when they have time. Another possibility is that I might have found that someone was already working on a fix, and a Pull Request was open for this Issue, with code to address the problem. A third option is to fix the bug myself, and this is the route I want to go now.

My first step is to Fork the TensorBoard repo into my own GitHub account. I need a version of the code that I can modify vs. just read.

Forking the TensorBoard Repo

Once that completes, I'll have an exact copy of the TensorBoard repo that I control, and which I can modify. This copy lives on GitHub. To work with it on my laptop, I'll need to Clone it to my local computer as well, so that I can make and test changes:

Clone my fork

Setting up TensorBoard locally

I have no idea how to run TensorBoard from source vs. as part of my TensorFlow installation. I begin by reading their README.md file. In it I notice a useful discussion within the Usage section, which talks about how to proceed. First I'll need to install Bazel.

Lesson: in almost every case where you'll work on a bug in a new project, you'll be asked to install and set up a development environment different from what you already have/know. Take your time with this, and don't give up too easily if things don't go as smoothly as you expect: far fewer people test this setup than test the resulting project it is meant to create.

Bazel is a build/test automation tool built and maintained by Google. It's available for many platforms, and there are good instructions for installing it on your particular OS. I'm on macOS, so I opt for the Homebrew installation. This requires Java, which I also install.

Now I'm able to try the build. I follow the instructions in the README, and within a few seconds get an error:

$ cd tensorboard
$ bazel build tensorboard:tensorboard
Extracting Bazel installation...  
.............
ERROR: /private/var/tmp/_bazel_humphd/d51239168182c03bedef29cd50a9c703/external/local_config_cc/BUILD:49:5: in apple_cc_toolchain rule @local_config_cc//:cc-compiler-darwin_x86_64: Xcode version must be specified to use an Apple CROSSTOOL.  
ERROR: Analysis of target '//tensorboard:tensorboard' failed; build aborted.  
INFO: Elapsed time: 8.965s  

This error is a typical example of the kind of problem one encounters working on a new project. Specifically, it's OS specific, and relates to a first-time setup issue--I don't have XCode set up properly.

I spend a few minutes searching for a solution. I look to see if anyone has filed an issue with TensorBoard on GitHub specifically about this build error--maybe someone has had this problem before, and it got solved? I also Google to see if anyone has blogged about it or asked on StackOverflow: you are almost never the only person who has hit a problem.

I find some help on StackOverflow, which suggests that I don't have XCode properly configured (I know it's installed). It suggests some commands I can try to fully configure things, none of which solve my issue.

It looks like it wants the full version of XCode vs. just the commandline tools. The full XCode is massive to download, and I don't really want to wait, so I do a bit more digging to see if there is any other workaround. This may turn out to be a mistake, and it might be better to just do the obvious thing instead of trying to find a workaround. However, I'm willing to spend an additional 20 minutes of research to save hours of downloading.

Some more searching reveals an interesting issue on the Bazel GitHub repo. Reading through the comments on this issue, it's clear that lots of other people have hit this--it's not just me. Eventually I read this comment, with 6 thumbs-up reactions (i.e., some agreement that it works):

just for future people. sudo xcode-select -s /Applications/Xcode.app/Contents/Developer could do the the trick if you install Xcode and bazel still failing.

This allows Bazel to find my compiler and the build to proceed further...before stopping again with a new error: clang: error: unknown argument: '-fno-canonical-system-headers'.

This still sounds like a setup issue on my side vs. something in the TensorBoard code, so I keep reading. This discussion on the Bazel Google Group seems useful: it sounds like I need to clean my build and regenerate things, now that my XCode toolchain is properly setup. I do that, and my build completes without issue.

Lesson: getting this code to build locally required me to consult GitHub, StackOverflow, and Google Groups. In other words, I needed the community to guide me via asking and answering questions online. Don't be afraid to ask questions in public spaces, since doing so leaves traces for those who will follow in your footsteps.

Running TensorBoard

Now that I've built the source, I'm ready to try running it. TensorBoard is meant to be used in conjunction with TensorFlow. In this case, however, I'm interested in using it on its own, purely for the purpose of reproducing my bug, and testing a fix. I don't actually care about having TensorFlow and real training data to visualize. I notice that the DEVELOPMENT.md file seems to indicate that it's possible to fake some training data and use that in the absence of a real TensorFlow project. I try what it suggests, which fails:

...
line 40, in create_summary_metadata  
   metadata = tf.SummaryMetadata(
AttributeError: 'module' object has no attribute 'SummaryMetadata'  
ERROR: Non-zero return code '1' from command: Process exited with status 1.  

From having programmed with TensorFlow before, I assume here that tf (i.e. the TensorFlow Python module) is missing an expected attribute, namely, SummaryMetadata. I've never heard of it, but Google helps me locate the necessary API docs.

This leads me to conclude that my installed version of TensorFlow (I installed it 4 months earlier) might not have this new API, and the code in TensorBoard now expects it to exist. The API docs I'm consulting are for version 1.3 of the TensorFlow API. What do I have installed?

$ pip search tensorflow
...
 INSTALLED: 1.2.1
 LATEST:    1.3.0

Maybe upgrading from 1.2.1 to 1.3.0 will solve this? I update my laptop to TensorFlow 1.3.0 and am now able to generate the fake data for TensorBoard.

Lesson: running portions of a larger project in isolation often means dealing with version issues and manually installing dependencies. Also, sometimes dependencies are assumed, as was TensorFlow 1.3 in this case. Likely the TensorBoard developers all have TensorFlow installed and/or are developing it at the same time. In cases like this a README may not mention all the implied dependencies.

Using this newly faked data, I try running my version of TensorBoard...which again fails with a new error:

...
   from tensorflow.python.debug.lib import grpc_debug_server
ImportError: cannot import name grpc_debug_server  
ERROR: Non-zero return code '1' from command: Process exited with status 1.  

After some more searching, I find a 10-day-old open bug in TensorBoard itself. This particular bug seems to be another version skew issue between dependencies, TensorFlow, and TensorBoard. The module in question, grpc_debug_server, seems to come from TensorFlow. Looking at the history of this file, the code is pretty new, making me wonder if, once again, I'm running something with an older API. A comment in this issue gives a clue as to a possible fix:

FYI, I ran into the same problem, and I did pip install grpc which seemed to fix the problem.

I give this a try, but TensorBoard still won't run. Further on in this issue I read another comment indicating I need the "nightly version of TensorFlow." I've never worked with the nightly version of TensorFlow before (didn't know such a thing existed), and I have no idea how to install that (the comment assumes one knows how to do this).

A bit more searching reveals the answer, and I install the nightly version:

$ pip install tf-nightly

Once again I try running my TensorBoard, and this time, it finally works.

Lesson: start by assuming that an error you're seeing has been encountered before, and go looking for an existing issue. If you don't find anything, maybe you are indeed the first person to hit it, in which case you should file a new issue yourself so you can start a discussion and work toward a fix. Everyone hits these issues. Everyone needs help.

Reproducing the Bug

With all of the setup now behind us, it's time to get started on our actual goal. My first step in tackling this bug is to make sure I can reproduce it, that is, make sure I can get TensorBoard to fail in Safari and Firefox. I also want to confirm that things work in Chrome, which would give me some assurance that I've got a working source build.

Here's my local TensorBoard running in Chrome:

TensorBoard on Chrome

Next I try Safari:

TensorBoard on Safari

And...it works? I try Firefox too:

TensorBoard on Firefox

And this works too. At this point I have two competing emotions:

  1. I'm pleased to see that the bug is fixed.
  2. I'm frustrated that I've done all this work to accomplish nothing--I was hoping I could fix it.

The Value of Triaging Bugs

It's kind of ironic that I'm upset about this bug being fixed: that's the entire point of my work, right? I would have enjoyed getting to try and fix this myself, to learn more about the code, to get involved in the project. Now I feel like I have nothing to contribute.

Here I need to challenge my own feelings (and yours too if you're agreeing with me). Do I really have nothing to offer after all this work? Was it truly wasted effort?

No, this work has value, and I have a great opportunity to contribute something back to a project that I love. I've been able to discover that a previous bug has been unknowingly fixed, and can now be closed. I've done the difficult work of Confirming and Triaging a bug, and helping the project to close it.

I leave a detailed comment with my findings. This then causes the bug to get closed by a project member with the power to do so.

So the result of my half-day of fighting with TensorBoard is that a bug got closed. That's a great outcome, and someone needed to do this work in order for this to happen. My willingness to put some effort into it was key. It's also paved the way for me to do follow-up work, if I choose: my computer now has a working build/dev environment for this project. Maybe I will work on another bug in the future.

There's more to open source than fixing bugs: people need to file them, comment on them, test them, review fixes, manage them through their lifetime, close them, etc. We can get involved in any/all of these steps, and it's important to realize that your ability to get involved is not limited to your knowledge of how the code works.

QMO: Firefox Developer Edition 56 Beta 12 Testday Results

Hello Mozillians!

As you may already know, last Friday – September 15th – we held a new Testday event, for Developer Edition 56 Beta 12.

Thank you all for helping us make Mozilla a better place – Athira Appu.

From India team: Baranitharan & BaraniCool, Abirami & AbiramiSD, Vinothini.K, Surentharan, vishnupriya.v, krishnaveni.B, Nutan sonawane, Shubhangi Patil, Ankita Lahoti, Sonali Dhurjad, Yadnyesh Mulay, Ankitkumar Singh.

From Bangladesh team: Nazir Ahmed Sabbir, Tanvir Rahman, Maruf Rahman, Saddam Hossain, Iftekher Alam, Pronob Kumar Roy, Md. Raihan Ali, Sontus Chandra Anik, Saheda Reza Antora, Kazi Nuzhat Tasnem, Md. Rahimul Islam, Rahim Iqbal, Md. Almas Hossain, Ali sarif, Md.Majedul islam, JMJ Saquib, Sajedul Islam, Anika Alam, Tanvir Mazharul, Azmina Akter Papeya, sayma alam mow. 

Results:

– several test cases executed for the Preferences Search, CSS Grid Inspector Layout View and Form Autofill features.

– 6 bugs verified: 1219725, 1373935, 1391014, 1382341, 1383720, 1377182

– 1 new bug filed: 1400203

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Rabimba: LinuxCon China 2017: Trip Report


The Linux Foundation held a combination of three events in China as part of its foray into Asia earlier this year. It was a big move for them, since this was supposed to be the first time the Linux Foundation would hold an event in Asia.
I was invited to present a talk on hardening IoT endpoints. The event was held in Beijing, and since I had never been to Beijing before, I was pretty excited. However, it turned out the journey was pretty long and expensive, much more than a student like me could hope to bear. Normally I represent Mozilla in such situations, but the topic of the talk was heavily security-focused and not closely aligned with Mozilla's goals at that moment. Fortunately, the Linux Foundation gave me a scholarship to come and speak at LinuxCon China, which enabled me to attend, and the awesome Mozilla TechSpeakers team, including Michael Ellis and Havi, helped me get ready for the talk.


The event was held at the China National Convention Center. It's a beautiful and enormous convention center right in the middle of Beijing. One big problem I realized soon after reaching China was that most of the services on my phone were not working. The Great Wall (the firewall, not the actual one) was blocking most of the Google services I used; unfortunately, that included two apps I was relying on heavily: Google Maps and Google Translate. There is, of course, a local alternative to Google Maps, Baidu Maps, but since its interface was also in Chinese, it wasn't much help to me. Fortunately, my VPN setup came to my rescue, and it was my source of relief for the next two days in China.

Pro tip: if you have to go to China and you rely on a service that might be blocked there, it's better to use a good VPN, one you know will work there, or roll your own. I rolled my own, since my commercial VPN was also blocked.

The day started with Linus Torvalds having an open discussion about which way Linux is moving, with very interesting aspects and views.
One recurring theme in the discussion was how the core Linux maintainer circle works, and why it relies on only those very few people. The reply was most stimulating.
The very interesting quote from him was
The other talks were interesting as well. I would have really liked to attend three more talks, namely Greg's on serverless computing on the edge, Swati's on Kubernetes and Kai Zhang's on container-based virtualization, but those clashed with my own talk.

My talk was on the second day and at a relatively good time, which was especially important for me as the conference wifi was the only connection on which I could work on my slides.
Lesson Learned: Don't rely on Google Slides in China
Fortunately, courtesy of my VPN, I was able to work on them and have a local backup copy ready for the talk.
That room was pretty big, didn't see this coming
What I did not anticipate was how eager people were for the talk. In a nutshell, this is how the room looked when I took the podium.

My first reaction was: Wow that's a lot of people! Guess they are really interested in the talk!
And then: Shit! I hope my talk is as interesting as all of the super industry-relevant talks going on around me in the other rooms.

Fortunately, the talk went pretty well. I always judge my talks by how many queries and questions I get afterwards, and also by how many reactions there are on Twitter. Judging by the number of queries afterwards, I guessed at least it wasn't that bad. I was, though, super disappointed by the complete radio silence on Twitter regarding my talk, only to realize later that Twitter is also blocked in China.

To Do: Next time come up with better ways to track engagement.

My only complaint: normally every Linux Foundation conference records your talk; LinuxCon didn't. They did upload all our slides, though, so if you want to go over a textual version of what I presented, have a sneak peek here. I'm all ears for your feedback.

SecurityPI - Hardening your IoT endpoints in Home. from LinuxCon ContainerCon CloudOpen China

This would normally have finished my recounting of the event, but this time it didn't. I finally went to a BoF session on Fedora and CentOS, and ended up having a two-hour-long discussion with Brian Exelbierd on the various issues and pain points the Mozilla and Fedora communities face. We temporarily suspended the discussion with no clear path to a solution, but with a notion to touch base with each other again on it.

Conclusion: LinuxCon was a perfect example of how to handle and manage a huge footfall with a multilingual audience and still make the conference good. The quality of the talks, as well as of the speakers, was astounding. I really loved my experience there, made some great friends (I am looking at you, Greg and Swati :D), and had some awesome conversations.

And did I mention that the speakers caught up at the end of the day and decided we needed something to remember it by? Which happens to be us discussing everything from Linux to Mozilla to security in the Forbidden City.
Those, in a nutshell, were the speakers.
Like I said, one hell of a conference.

PS: If you want to talk to me about anything related to the talk, don't hesitate to get in touch using either my email or twitter.

The Rust Programming Language Blog: impl Future for Rust

The Rust community has been hard at work on our 2017 roadmap, but as we come up on the final quarter of the year, we’re going to kick it into high gear—and we want you to join us!

Our goals for the year are ambitious:

To finish off these goals, we intend to spend the rest of the year focused purely on “implementation” work—which doesn’t just mean code! In particular, we are effectively spinning down the RFC process for 2017, after having merged almost 90 RFCs this year!

So here’s the plan. Each Rust team has put together several working groups focused on a specific sub-area. Each WG has a leader who is responsible for carving out and coordinating work, and a dedicated chat channel for getting involved. We are working hard to divvy up work items into many shapes and sizes, and to couple them with mentoring instructions and hands-on mentors. So if you’ve always wanted to contribute to Rust but weren’t sure how, this is the perfect opportunity for you. Don’t be shy—we want and need your help, and, as per our roadmap, our aim is mentoring at all levels of experience. To get started, say hello in the chat rooms for any of the work groups you’re interested in!

A few points of order

There are a few online venues for keeping in the loop with working group activity:

  • There is a dedicated Gitter community with channels for each working group, as well as a global channel for talking about the process as a whole, or getting help finding your way to a working group. For those who prefer IRC, a good bridge is available!

  • The brand-new findwork site, which provides an entry point to a number of open issues across the Rust project, including those managed by working groups (see the “impl period” tab). Thanks, @nrc, for putting this together!

We also plan two in-person events, paired with upcoming Rust conferences. Each of them is a two-day event populated in part by Rust core developers; come hang out and work together!

As usual, all of these venues abide by the Rust code of conduct. But more than that: this “impl period” is a chance for us all to have fun collaborating and helping each other, and those participating in the official venues are expected to meet the highest standards of behavior.

The working groups

Without further ado, here’s the initial lineup! (A few more working groups are expected to arise over time.)

If you find a group that interests you, please say hello in the corresponding chat room!

Compiler team

WG-compiler-errors Make Rust's error messages even friendlier. Learn more Chat
WG-compiler-front Dip your toes in with parsing and syntax sugar. Learn more Chat
WG-compiler-middle Implement features that involve typechecking. Learn more Chat
WG-compiler-traits Want generic associated types? You know what to do. Learn more Chat
WG-compiler-incr Finish incremental compilation; receive undying love. Learn more Chat
WG-compiler-nll Delve into the bowels of borrowck to slay the beast: NLL! Learn more Chat
WG-compiler-const Const generics. Enough said. Learn more Chat

Libs team

WG-libs-blitz Help finish off the Blitz before all the issues are gone! Learn more Chat
WG-libs-cookbook Work on bite-sized examples to get folks cooking with Rust. Learn more Chat
WG-libs-guidelines Take the wisdom from the Blitz and pass it on. Learn more Chat
WG-libs-simd Provide access to hardware parallelism in Rust! Learn more Chat
WG-libs-openssl Want better docs for openssl? So do we. Learn more Chat
WG-libs-rand Craft a stable, core crate for randomness. Learn more Chat

Docs team

WG-docs-rustdoc Help make docs beautiful for everyone! Learn more Chat
WG-docs-rustdoc2 Get in on a bottom-up revamp of rustdoc! Learn more Chat
WG-docs-rbe Teach others Rust in the browser. Learn more Chat

Dev tools team

WG-dev-tools-rls Help make Rust's IDE experience first class. Learn more Chat
WG-dev-tools-vscode Improve Rust's IDE experience for VSCode. Learn more Chat
WG-dev-tools-clients Implement new RLS clients: Atom, Sublime, Visual Studio... Learn more Chat
WG-dev-tools-IntelliJ Polish up an already-rich Rust IDE experience. Learn more Chat
WG-dev-tools-rustfmt Make Rust's code the prettiest! Learn more Chat
WG-dev-tools-rustup Make Rust's first impression even better! Learn more Chat
WG-dev-tools-clippy It looks like you're trying to write a linter. Want help? Learn more Chat
WG-dev-tools-bindgen Make FFI'ing to C and C++ easy, automatic, and robust! Learn more Chat

Cargo team

WG-cargo-native Let's make native dependencies as painless as we can. Learn more Chat
WG-cargo-registries Going beyond crates.io to support custom registries. Learn more Chat
WG-cargo-pub-deps Teach Cargo which of your dependencies affects your users. Learn more Chat
WG-cargo-integration How easy can it be to use Cargo with your build system? Learn more Chat

Infrastructure team

WG-infra-crates.io Try your hand at a production Rust web app! Learn more Chat
WG-infra-perf Let's make sure Rust gets faster. Learn more Chat
WG-infra-crater Regularly testing the compiler against the Rust ecosystem. Learn more Chat
WG-infra-secure Help us implement best practices for Rust's infrastructure! Learn more Chat
WG-infra-host Managing the services that keep the Rust machine running. Learn more Chat
WG-infra-rustbuild Streamline the compiler build process. Learn more Chat

Core team

WG-core-site The web site is getting overhauled; help shape the new content! Learn more Chat

Bryce Van Dyk: Why Does Firefox Use e4 and e5 Values to Fill Memory?

I was once talking to some colleagues about a Firefox crash bug. As we gazed at the crash report, one leaned over and pointed at the value in one of the CPU registers: 0xe5e5e5e9. “Freed memory,” he sagely indicated: “e5”.

Magic debug numbers

Using special numbers to indicate something in memory is an old trick. Wikipedia even has famous examples of such things! Neato! These numbers are often referred to as “poison” or “junk” in the context of filling memory (because they’re supposed to cause the program to fail, or be meaningless garbage).

Mozilla uses this trick (and the “poison” terminology) in Firefox debug builds to indicate uninitialized memory (e4), as well as freed memory (e5). Thus the presence of these values in a crash report, or other failure report, indicate that something has gone wrong with memory handling. But why e4 and e5?

jemalloc

jemalloc is a general purpose implementation of malloc. Firefox utilizes a modified version of jemalloc to perform memory allocation. There's a pretty rich history here, and it would take another blog post to cover how and why Mozilla uses jemalloc. So I'm going to hand wave and say that it is used, and the reasons for doing so are reasonable.

jemalloc can use magic/poison/junk values when performing malloc or free. However, jemalloc will use the value a5 when allocating, and 5a when freeing, so why do we see something different in Firefox?

A different kind of poison

When using poison values, it's possible for the memory with these values to still be used. The hope is that when doing so the program will crash and you can see which memory is poisoned. However, with the 5a value in Firefox there was concern that 1) the program would not crash, and 2) as a result, it could be exploited: see this bug.

As a result of these concerns, it was decided to use the poison values we see today. The code that sets these values has undergone some changes since the above bug, but the same values are used. If you want to take a look at the code responsible here is a good place to start.

Mitchell Baker: Busting the myth that net neutrality hampers investment

This week I had the opportunity to share Mozilla’s vision for an Internet that is open and accessible to all with the audience at MWC Americas.

I took this opportunity because we are at a pivotal point in the debate between the FCC, companies, and users over the FCC’s proposal to roll back protections for net neutrality. Net neutrality is a key part of ensuring freedom of choice to access content and services for consumers.

Earlier this week Mozilla’s Heather West wrote a letter to FCC Chairman Ajit Pai highlighting how net neutrality has fueled innovation in Silicon Valley and can do so still across the United States.

The FCC claims these protections hamper investment and are bad for business. And they may vote to end them as early as October. Chairman Pai calls his rule rollback “restoring internet freedom” but that’s really the freedom of the 1% to make decisions that limit the rest of the population.

At Mozilla we believe the current rules provide vital protections to ensure that ISPs don’t act as gatekeepers for online content and services. Millions of people commented on the FCC docket, including those who commented through Mozilla’s portal that removing these core protections will hurt consumers and small businesses alike.

Mozilla is also very much focused on the issues preventing people coming online beyond the United States. Before addressing the situation in the U.S., journalist Rob Pegoraro asked me what we discovered in the research we recently funded in seven other countries into the impact of zero rating on Internet use:


(Video courtesy: GSMA)

If you happen to be in San Francisco on Monday 18th September please consider joining Mozilla and the Internet Archive for a special night: The Battle to Save Net Neutrality. Tickets are available here.

You’ll be able to watch a discussion featuring former FCC Chairman Tom Wheeler; Representative Ro Khanna; Mozilla Chief Legal and Business Officer Denelle Dixon; Amy Aniobi, Supervising Producer, Insecure (HBO); Luisa Leschin, Co-Executive Producer/Head Writer, Just Add Magic (Amazon); Malkia Cyril, Executive Director of the Center for Media Justice; and Dane Jasper, CEO and Co-Founder of Sonic. The panel will be moderated by Gigi Sohn, Mozilla Tech Policy Fellow and former Counselor to Chairman Wheeler. It will discuss how net neutrality promotes democratic values, social justice and economic opportunity, what the current threats are, and what the public can do to preserve it.

Marco Castelluccio: Overview of the Code Coverage Architecture at Mozilla

Firefox is a huge project, consisting of around 20K source files and 3M lines of code (if you only consider the Linux part!), officially supporting four operating systems, and written in multiple programming languages (C/C++/JavaScript/Rust). We have around 200 commits landing per day in the mozilla-central repository, with developers committing even more often to the try repository. Usually, code coverage analysis is performed for a single language on small/medium-sized projects. Therefore, collecting code coverage information for such a project is not an easy task.

I’m going to present an overview of the current state of the architecture for code coverage builds at Mozilla.

Tests in code coverage builds are slower than in normal builds, especially once we start disabling more compiler optimizations to get more precise results. Moreover, the amount of data generated is quite large, each report being around 20 MB. If we had one report for each test suite and each commit, we would have around ~100 MB x ~200 commits x ~20 test suites = ~400 GB per day. This means we are, at least currently, only running a code coverage build per mozilla-central push (which usually contains around ~50 to ~100 commits), instead of per mozilla-inbound commit.

Figure 1: A linux64-ccov build (B) with associated tests, from https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&filter-searchStr=linux64-ccov&group_state=expanded.

Each test machine, e.g. bc1 (first chunk of the Mochitest browser chrome) in Figure 1, generates gcno/gcda files, which are parsed directly on the test machine to generate an LCOV report.

Because of the scale of Firefox, we could not rely on some existing tools like LCOV. Instead, we had to redevelop some tooling to make sure the whole process would scale. To achieve this goal, we developed grcov, an alternative to LCOV written in Rust (providing performance and parallelism), to parse the gcno/gcda files. With the standard LCOV, parsing the gcno/gcda files takes minutes as opposed to seconds with grcov (and, if you multiply that by the number of test machines we have, it becomes more than 24 hours vs around 5 minutes).

Let’s take a look at the current architecture we have in place:

Figure 2: A high-level view of the architecture.

Both the Pulse Listener and the Uploader Task are part of the awesome Mozilla Release Engineering Services (https://github.com/mozilla-releng/services). The release management team has been contributing to this project to share code and efforts.

Pulse Listener

We are running a pulse listener process on Heroku which listens to the taskGroupResolved message, sent by TaskCluster when a group of tasks finishes (either successfully or not). In our case, the group of tasks is the linux64-ccov build and its tests (note: you can now easily choose this build on trychooser, run your own coverage build and generate your report. See this page for instructions).

The listener, once it receives the “group resolved” notification for a linux64-ccov build and related tests, spawns an “uploader task”.

The source code of the Pulse Listener can be found here.

Uploader Task

The main responsibility of the uploader task is aggregating the coverage reports from the test machines.

In order to do this, the task:

  1. Clones mozilla-central;
  2. Builds Firefox (using artifact builds for speed); this is currently needed in order to generate the mapping from the URLs of internal JavaScript components and modules (which use special protocols, such as chrome:// or resource://) to the corresponding files in the mozilla-central repository (e.g. resource://gre/modules/Services.jsm to toolkit/modules/Services.jsm);
  3. Rewrites the LCOV files generated by the JavaScript engine for JavaScript code, using the mapping generated in step 2 and also resolving preprocessed files (yes, we do preprocess some JavaScript source files with a C-style preprocessor); a simplified sketch of this rewriting appears after this list;
  4. Runs grcov again to aggregate the LCOV reports from the test machines into a single JSON report, which is then sent to codecov.io and coveralls.io.
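To make step 3 more concrete, here is a simplified sketch of that rewriting with a hypothetical one-entry mapping; the real uploader task differs in the details, but the idea is the same:

// Illustration only: rewrite special URLs in LCOV "SF:" (source file) records
// to repository paths, using the kind of mapping generated in step 2.
const urlToPath = new Map([
  ["resource://gre/modules/Services.jsm", "toolkit/modules/Services.jsm"],
]);

function rewriteLcovLine(line) {
  if (!line.startsWith("SF:")) {
    return line; // only source-file records need rewriting
  }
  const url = line.slice(3);
  return urlToPath.has(url) ? "SF:" + urlToPath.get(url) : line;
}

console.log(rewriteLcovLine("SF:resource://gre/modules/Services.jsm"));
// -> "SF:toolkit/modules/Services.jsm"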

Both codecov.io and coveralls.io, in order to show source files with coverage overlay, take the contents of the files from GitHub. So, we can’t directly use our Mercurial repository, but we have to rely on our Git mirror hosted on GitHub (https://github.com/mozilla/gecko-dev). In order to map the mercurial changeset hash associated with the coverage build to a Git hash, we use a Mapper service.

Code coverage results on Firefox code can be seen on:

The source code of the Uploader Task can be found here.

Future Directions

Reports per Test Suite and Scaling Issues

We are interested in collecting code coverage information per test suite. This is interesting for several reasons. First of all, we could suggest developers which suite they should run in order to cover the code they change with a patch. Moreover, we can evaluate the coverage of web platform tests and see how they fare against our built-in tests, with the objective to make web platform tests cover as much as possible.

Both codecov.io and coveralls.io support receiving multiple reports for a single build and showing the information both separately and in aggregate (“flags” on codecov.io, “jobs” on coveralls.io). Unfortunately, both services currently choke when we present them with too much data (our reports are huge, given that our project is huge, and if we send one per test suite instead of one per build… things blow up).

Coverage per Push

Understanding whether the code introduced by a set of patches is covered by tests or not is very valuable for risk assessment. As I said earlier, we are currently only collecting code coverage information for each mozilla-central push, which means around 50-100 commits (e.g. https://hg.mozilla.org/mozilla-central/pushloghtml?changeset=07484bfdb96b), instead of for each mozilla-inbound push (often only one commit). This means we don’t have coverage information for each set of patches pushed by developers.

Given that most mozilla-inbound pushes in the same mozilla-central push will not change the same lines in the same files, we believe we can infer the coverage information for intermediate commits from the coverage information of the last commit.

Windows, macOS and Android Coverage

We are currently only collecting coverage information for Linux 64 bit. We are looking into expanding it to Windows, macOS and Android. Help is appreciated!

Support for Rust

Experimental support for gcov-style coverage collection landed recently in Rust. The feature needs to ship in a stable release of Rust before we can use it; this issue is tracking its stabilization.

Air Mozilla: Webdev Beer and Tell: September 2017, 15 Sep 2017

Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

David Humphrey: Why Good-First-Bugs often aren't

Let me start by saying that I'm a huge fan of open source projects putting a Good-First-Bug type label on issues. From my own experience over the past decade trying to get students engaged in open source projects, it's often a great way to quickly find entry points for new people. Right now I've got 40 students learning open source with me at Seneca, and because I want to get them all working on bugs ASAP, you better believe that I'm paying close attention to good-first-bug labels!

Maintainers

There are a few ways that we tend to interact with good-first-bugs. First, we have project maintainers who add the label when they think they see something that might be a good fit for someone new. I've been this person, and I know that it takes some discipline to not fix every bug. To be honest, it's faster and easier to just fix small things yourself. It's also going to mean you end up having to fix everything yourself--you won't grow new contributors this way.

What you need to do instead is to write large amounts of prose of the type "Here's what you need to do if you're interested in fixing this". You need sample code, links to relevant files, screenshots, etc., so that someone who lands in this bug can readily assess whether their current (or aspirational) skill level meets the bug's requirements.

Sometimes maintainers opt not to do this, and instead say, "I'd be willing to mentor this." The problem with this approach, in my experience, is that it becomes a kind of debt with which you saddle your future self. Are you sure you'll want to mentor this bug in 2 years, when it's no longer on your roadmap? You'd be better off "mentoring" the bug upfront, and just spelling out what has to happen in great detail: "Do this, this, and this." If you can't do that, the reality is it's not a good-first-bug.

New Contributors

The second way we encounter good-first-bugs is as someone looking for an opportunity to contribute. I make a habit of finding/fixing these in various projects so that I can use real examples to show my students the steps. I also tag along with my students as they attempt them, and I've seen it all. It's interesting what you encounter on this side of things. Sometimes it goes exactly as you'd hope: you make the fix and the patch is accepted. However, more often than not you run into snags.

First, before you even get going on a fix, finding a bug that isn't already being worked on can be hard. A lot of people are looking for opportunities to get started, and when you put up a sign saying "Start Here," people will! Read through the comments on many good-first-bugs and you'll find an unending parade of "I'd like to work on this bug!" and "Can you assign this to me?" followed by "Are you still working on this?" and "I'm new, can you help me get started?". That stream of comments often repeats forever, leaving the project maintainers frustrated, the contributors lost, and the bug untouched.

Expiry Dates

Once people do get started on a bug, another common problem I see is that the scope of the bug has shifted such that the problem/fix described no longer makes sense. You see a lot of responses like this: "Thanks for this fix, but we've totally refactored this code, and it's not necessary any more. Closing!" This doesn't feel great, or make you want to put more effort into finding something else to do.

The problem here wasn't that the bug was wrong...when filed. The bug has become obsolete over time. Good-first-bugs really need an expiry date. If a project isn't triaging its good-first-bugs on a somewhat regular basis, it's basically going to end up in this state eventually, with some or all of them being useless, and therefore bad-first-bugs. You're better off closing bugs like this and having no good-first-bugs listed, than to have 50 ancient bugs that no one on the project cares about, wants to review, or has time to discuss.

Good First Experience

This week I've been thinking a lot about ways to address some of the problems above. In their lab this week, I asked my students to build Firefox, and also to make some changes to the code. I had a few goals with this:

  • Build a large open source project to learn about setting up dev environments, obtaining source code, build systems, etc.
  • Gain some experience making a change and rebuilding Firefox, to prove to themselves that they could do it and to remove some of the mystery around how one does this.
  • Learn how to start navigating around in large code, see how things are built (e.g., Firefox uses JS and CSS for its front-end).
  • Have some fun doing something exciting and a bit scary.

I've done this many times in the past, and often I've gone looking for a simple good-first-bug to facilitate these goals. This time I wanted to try something different. Instead of a good-first-bug, I wanted what I'll call a "Good First Experience."

A Good First Experience tries to do the following:

  • It's reproducible by everyone. Where a good-first-bug is destroyed in being fixed, a good-first-experience doesn't lose its potential after someone completes it.
  • It's not tied to the current state of the project, and therefore doesn't become obsolete (as quickly). Where a good-first-bug is always tied to the project's goals, coding practices, and roadmap, a good-first-experience is independent of the current state of the project.
  • It's meant to be fun, exploratory, and safe. Where a good-first-bug is about accomplishing a task, and is therefore defined and limited by that task, a good-first-experience can be the opposite: an unnecessary change producing an outcome whose value is measured by the participant vs. the project.

Toward a Good First Experience with Firefox

I reached out to a half-dozen Mozilla colleagues for ideas on what we could try (thanks to all who replied). I ended up going with some excellent suggestions from Blake Winton (@bwinton). Blake has a history of being whimsical in his approach to his work at Mozilla, and I think he really understood what I was after.

Based on his suggestions, I gave the students some options to try:

  • In browser/base/content/browser.js change the function OpenBrowserWindow to automatically open cat GIFs. You can alter the code like so:
function OpenBrowserWindow(options) {  
  return window.open("http://www.chilloutandwatchsomecatgifs.com");
}
  • Look at the CSS in browser/base/content/browser.css and try changing some of the colours.

  • Modify the way tabs appear by playing with CSS in browser/themes/shared/tabs.inc.css, for example: you could alter things like min-height.

  • You could try adding a background: url(http://i.imgur.com/UkT7jcm.gif); to the #TabsToolbar in browser/themes/windows/browser.css to add something new (a rough CSS sketch follows this list).

  • Modify the labels for menu items like "New Window" in browser/locales/en-US/chrome/browser/browser.dtd to something else.
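
If you want a concrete starting point for the CSS suggestions above, here is a rough sketch of the kind of tweak they describe. The selectors and values are illustrative; the actual rules in your checkout may differ:

/* Illustrative only: rules like these live in browser/themes/shared/tabs.inc.css
   and browser/themes/windows/browser.css, but the exact selectors and values may differ. */
.tabbrowser-tab {
  min-height: 50px; /* make tabs noticeably taller than the default */
}

#TabsToolbar {
  background: url(http://i.imgur.com/UkT7jcm.gif); /* the playful background suggested above */
}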

None of these changes are necessary or prudent, and none of them solve a problem. Instead, they are fun, exploratory, and simple. Already students are having some success, which is great to see.

Example of what students did

Nature via National Park vs. Wilderness

I was reflecting that the real difference between a good-first-experience and a real bug is a lot like experiencing nature by visiting a National Park vs. setting out in the wilderness. There isn't a right or wrong way to do this, and both have obvious advantages and disadvantages. However, what National Parks do well is to make the experience of nature accessible to everyone: manicured paths, maps with established trails to follow, amenities so you can bring your family, information. It's obviously not the same as cutting a trail into a forest, portaging your canoe between lakes, or hiking on the side of a mountain. But it means that more people can try the experience of doing the real thing in relative safety, and without a massive commitment of time or effort. It's also a mostly self-guided experience vs. something you need a guide (maintainer) to accomplish. In the end, this experience might be enough for many people, and will help bring awareness and an enriching experience. For others, it will be the beginning of bolder outings into the unknown.

I don't think my current attempt represents a definitive good-first-experience in Mozilla, but it's got me thinking more about what one might look like, and I wanted to get you thinking about them too. I know I'm not alone in wanting to bring students into projects like Mozilla and Firefox, and needing a repeatable entry point.

Mozilla Addons BlogAdd-ons Update – 2017/09

Here’s your monthly add-ons update.

The Review Queues

In the past month, our team reviewed 2,490 listed add-on submissions:

  • 2,074 in fewer than 5 days (83%).
  • 89 between 5 and 10 days (4%).
  • 327 after more than 10 days (13%).

244 listed add-ons are awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for 56 and the bulk validation has been run. This is the last one of these we’ll do, since compatibility is a much smaller problem with the WebExtensions API.

Firefox 57 is now on the Nightly channel and will soon hit Beta, only accepting WebExtension add-ons by default. Here are some changes we’re implementing on AMO to ease the transition to 57.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • Amola Singh
  • yfdyh000
  • bfred-it
  • Tiago Morais Morgado
  • Divya Rani
  • angelsl
  • Tim Nguyen
  • Atique Ahmed Ziad
  • Apoorva Pandey
  • Kevin Jones
  • ljbousfield
  • asamuzaK
  • Rob Wu
  • Tushar Sinai
  • Trishul Goel
  • zombie
  • tmm88
  • Christophe Villeneuve
  • Hemanth Kumar Veeranki

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/09 appeared first on Mozilla Add-ons Blog.

Ehsan AkhgariQuantum Flow Engineering Newsletter #24

I hope you’re not tired of reading these newsletters so far.  If not, I applaud your patience with me in the past few months.  But next week, as Firefox 57 will merge to the Beta channel, I’m planning to write the last one of this series.

Nightly has been pretty solid on performance.  It is prudent at this point to focus our attention more on other aspects of quality for the 57 release, to make sure that things like the crash rate and regressions are under control.  The triage process that we set up in March to enable everyone to take part in finding and nominating performance problems which they think should be fixed in Firefox 57 was started with the goal of creating a large pool of prioritized bugs that we believed would vastly impact the real world performance of Firefox for the majority of our users.  I think this process worked quite well overall, but it has mostly served its purpose, and participating in the triage takes a lot of time (we sometimes had two meetings per week to be able to deal with the incoming volume of bugs!)  With one week left, it seemed like a good decision to stop the triage meetings now.  We also had a weekly 30-minute standup meeting where people talked about what they had done on Quantum Flow during the past week (and you read about many of those in the newsletters!), and for similar reasons that meeting also will be wound down.  This gives several person-hours back on their calendars to people who really need it, hurray!

The work on the Speedometer benchmark for 57 is more or less wrapped up at this point.  One noteworthy change that happened last week which I should mention here is this jump in the numbers which happened on September 7.  The reason behind it was a change on the benchmark side to switch from reporting the score using arithmetic mean to using geometric mean.  This is a good change in my opinion because it means that the impact of a few of the JS frameworks being tested wouldn’t dominate the overall score.  The unfortunate news is that as a result of this change, Firefox took a bigger hit in numbers than Chrome did, but I’m still very proud of all the great work that happened when optimizing for this benchmark, and I think the right response to this change is for us to optimize more to win back the few percentage points of head-to-head comparison that we lost.  🙂

Speedometer changes as a result of computing the benchmark score using geometric mean

Even though most of the planned performance work for Firefox 57 is done, it doesn’t mean that people are done pouring in their great fixes as things are making it to the finish line last-minute!  So now please allow me to take a moment to thank everyone who helped make Firefox faster in the last week, as usual, I hope I’m not forgetting any names here:

The Firefox FrontierPut your multiple online personalities in Firefox Multi-Account Containers

Our new Multi-Account Containers extension for Firefox means you can finally wrangle multiple email/social accounts. Maybe you’ve got two Gmail or Instagram or Twitter or Facebook accounts (or a few … Read more

The post Put your multiple online personalities in Firefox Multi-Account Containers appeared first on The Firefox Frontier.

Air MozillaMeasuring the Subjective: The Performance Dashboard with Estelle Weyl

Measuring the Subjective: The Performance Dashboard with Estelle Weyl Performance varies quite a bit depending on the site, the environment and yes, the user. And users don't check your performance metrics. Instead, they perceive...

About:CommunityFirefox 56 new contributors

With the upcoming release of Firefox 56, we are pleased to welcome the 37 developers who contributed their first code change to Firefox in this release, 29 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Air MozillaReps Weekly Meeting Sep. 14, 2017

Reps Weekly Meeting Sep. 14, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Hacks.Mozilla.OrgBuilding the DOM faster: speculative parsing, async, defer and preload

In 2017, the toolbox for making sure your web page loads fast includes everything from minification and asset optimization to caching, CDNs, code splitting and tree shaking. However, you can get big performance boosts with just a few keywords and mindful code structuring, even if you’re not yet familiar with the concepts above and you’re not sure how to get started.

The fresh web standard <link rel="preload">, which allows you to load critical resources faster, is coming to Firefox later this month. You can already try it out in Firefox Nightly or Developer Edition, and in the meantime, this is a great chance to review some fundamentals and dive deeper into performance associated with parsing the DOM.

Understanding what goes on inside a browser is the most powerful tool for every web developer. We’ll look at how browsers interpret your code and how they help you load pages faster with speculative parsing. We’ll break down how defer and async work and how you can leverage the new keyword preload.

Building blocks

HTML describes the structure of a web page. To make any sense of the HTML, browsers first have to convert it into a format they understand – the Document Object Model, or DOM. Browser engines have a special piece of code called a parser that’s used to convert data from one format to another. An HTML parser converts data from HTML into the DOM.

In HTML, nesting defines the parent-child relationships between different tags. In the DOM, objects are linked in a tree data structure capturing those relationships. Each HTML tag is represented by a node of the tree (a DOM node).

The browser builds up the DOM bit by bit. As soon as the first chunks of code come in, it starts parsing the HTML, adding nodes to the tree structure.

The DOM has two roles: it is the object representation of the HTML document, and it acts as an interface connecting the page to the outside world, like JavaScript. When you call document.getElementById(), the element that is returned is a DOM node. Each DOM node has many functions you can use to access and change it, and what the user sees changes accordingly.
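
As a quick illustration of that interface (the "headline" id here is made up for the example):

// Grab a DOM node and change it; what the user sees updates to match.
// The "headline" id is hypothetical; substitute any element on your own page.
var headline = document.getElementById("headline");
headline.textContent = "Hello from the DOM";
headline.style.color = "rebeccapurple";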

CSS styles found on a web page are mapped onto the CSSOM – the CSS Object Model. It is much like the DOM, but for the CSS rather than the HTML. Unlike the DOM, it cannot be built incrementally. Because CSS rules can override each other, the browser engine has to do complex calculations to figure out how the CSS code applies to the DOM.
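
As a tiny example of why those calculations aren't trivial, two rules can target the same element and the engine has to work out which one wins (the selectors here are made up):

/* Both rules match <p class="intro">; the second selector is more specific, so it wins. */
p       { color: black; }
p.intro { color: darkblue; }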

 

The history of the <script> tag

As the browser is constructing the DOM, if it comes across a <script>...</script> tag in the HTML, it must execute it right away. If the script is external, it has to download the script first.

Back in the old days, in order to execute a script, parsing had to be paused. It would only start up again after the JavaScript engine had executed code from a script.

Why did the parsing have to stop? Well, scripts can change both the HTML and its product―the DOM. Scripts can change the DOM structure by adding nodes with document.createElement(). To change the HTML, scripts can add content with the notorious document.write() function. It’s notorious because it can change the HTML in ways that can affect further parsing. For example, the function could insert an opening comment tag making the rest of the HTML invalid.
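
Here is a minimal sketch of what document.write() does while the page is still being parsed (don't use this in real pages):

<p>Before the script.</p>
<script>
  // This writes markup directly into the HTML stream at this exact point in parsing.
  // Writing something broken here (say, an unclosed comment) would invalidate the rest of the page.
  document.write("<p>Injected while parsing.</p>");
</script>
<p>After the script.</p>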

Scripts can also query something about the DOM, and if that happens while the DOM is still being constructed, it could return unexpected results.

document.write() is a legacy function that can break your page in unexpected ways and you shouldn’t use it, even though browsers still support it. For these reasons, browsers have developed sophisticated techniques to get around the performance issues caused by script blocking that I will explain shortly.

What about CSS?

JavaScript blocks parsing because it can modify the document. CSS can’t modify the document, so it seems like there is no reason for it to block parsing, right?

However, what if a script asks for style information that hasn’t been parsed yet? The browser doesn’t know what the script is about to execute—it may ask for something like the DOM node’s background-color which depends on the style sheet, or it may expect to access the CSSOM directly.

Because of this, CSS may block parsing depending on the order of external style sheets and scripts in the document. If an external style sheet is placed before a script in the document, the construction of the DOM and CSSOM objects can interfere with each other. When the parser gets to the script tag, DOM construction cannot proceed until the JavaScript finishes executing, and the JavaScript cannot be executed until the CSS is downloaded, parsed, and the CSSOM is available.
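
In other words, with markup like the following (the file names are placeholders), app.js cannot execute until styles.css has been downloaded and the CSSOM built, and DOM construction waits on app.js in turn:

<head>
  <link rel="stylesheet" href="styles.css"> <!-- downloaded and parsed before the script can run -->
  <script src="app.js"></script>            <!-- blocks DOM construction until it executes -->
</head>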

Another thing to keep in mind is that even if the CSS doesn’t block DOM construction, it blocks rendering. The browser won’t display anything until it has both the DOM and the CSSOM. This is because pages without CSS are often unusable. If a browser showed you a messy page without CSS, then a few moments later snapped into a styled page, the shifting content and sudden visual changes would make a turbulent user experience.

See the Pen Flash of Unstyled Content by Milica (@micikato) on CodePen.

That poor user experience has a name – Flash of Unstyled Content or FOUC

To get around these issues, you should aim to deliver the CSS as soon as possible. Recall the popular “styles at the top, scripts at the bottom” best practice? Now you know why it was there!

Back to the future – speculative parsing

Pausing the parser whenever a script is encountered means that every script you load delays the discovery of the rest of the resources that were linked in the HTML.

If you have a few scripts and images to load, for example–

<script src="slider.js"></script>
<script src="animate.js"></script>
<script src="cookie.js"></script>
<img src="slide1.png">
<img src="slide2.png">

–the process used to go like this:

 

That changed around 2008 when IE introduced something they called “the lookahead downloader”. It was a way to keep downloading the files that were needed while the synchronous script was being executed. Firefox, Chrome and Safari soon followed, and today most browsers use this technique under different names. Chrome and Safari have “the preload scanner” and Firefox – the speculative parser.

The idea is: even though it’s not safe to build the DOM while executing a script, you can still parse the HTML to see what other resources need to be retrieved. Discovered files are added to a list and start downloading in the background on parallel connections. By the time the script finishes executing, the files may have already been downloaded.

The waterfall chart for the example above now looks more like this:

The download requests triggered this way are called “speculative” because it is still possible that the script could change the HTML structure (remember document.write ?), resulting in wasted guesswork. While this is possible, it is not common, and that’s why speculative parsing still gives big performance improvements.

While other browsers only preload linked resources this way, in Firefox the HTML parser also runs the DOM tree construction algorithm speculatively. The upside is that when a speculation succeeds, there’s no need to re-parse a part of the file to actually compose the DOM. The downside is that there’s more work lost if and when the speculation fails.

(Pre)loading stuff

This manner of resource loading delivers a significant performance boost, and you don’t need to do anything special to take advantage of it. However, as a web developer, knowing how speculative parsing works can help you get the most out of it.

The set of things that can be preloaded varies between browsers. All major browsers preload:

  • scripts
  • external CSS
  • and images from the <img> tag

Firefox also preloads the poster attribute of video elements, while Chrome and Safari preload @import rules from inlined styles.

There are limits to how many files a browser can download in parallel. The limits vary between browsers and depend on many factors, like whether you’re downloading all files from one or from several different servers and whether you are using HTTP/1.1 or HTTP/2 protocol. To render the page as quickly as possible, browsers optimize downloads by assigning priority to each file. To figure out these priorities, they follow complex schemes based on resource type, position in the markup, and progress of the page rendering.

While doing speculative parsing, the browser does not execute inline JavaScript blocks. This means that it won’t discover any script-injected resources, and those will likely be last in line in the fetching queue.

var script = document.createElement('script');
script.src = "//somehost.com/widget.js";
document.getElementsByTagName('head')[0].appendChild(script);

You should make it easy for the browser to access important resources as soon as possible. You can either put them in HTML tags or include the loading script inline and early in the document. However, sometimes you want some resources to load later because they are less important. In that case, you can hide them from the speculative parser by loading them with JavaScript late in the document.

You can also check out this MDN guide on how to optimize your pages for speculative parsing.

defer and async

Still, synchronous scripts blocking the parser remains an issue. And not all scripts are equally important for the user experience, such as those for tracking and analytics. Solution? Make it possible to load these less important scripts asynchronously.

The defer and async attributes were introduced to give developers a way to tell the browser which scripts to handle asynchronously.

Both of these attributes tell the browser that it may go on parsing the HTML while loading the script “in background”, and then execute the script after it loads. This way, script downloads don’t block DOM construction and page rendering. Result: the user can see the page before all scripts have finished loading.

The difference between defer and async is the moment at which they start executing the scripts.

defer was introduced before async. Its execution starts after parsing is completely finished, but before the DOMContentLoaded event. It guarantees scripts will be executed in the order they appear in the HTML and will not block the parser.

async scripts execute at the first opportunity after they finish downloading and before the window’s load event. This means it’s possible (and likely) that async scripts are not executed in the order in which they appear in the HTML. It also means they can interrupt DOM building.

Wherever they are specified, async scripts load at a low priority. They often load after all other scripts, without blocking DOM building. However, if an async script finishes downloading sooner, its execution can block DOM building and all synchronous scripts that finish downloading afterwards.

Note: Attributes async and defer work only for external scripts. They are ignored if there’s no src.
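
Putting the two attributes side by side (the file names are placeholders):

<!-- Downloads in the background; executes after parsing finishes, in document order -->
<script defer src="ui-widgets.js"></script>

<!-- Downloads in the background; executes as soon as it's ready, order not guaranteed -->
<script async src="analytics.js"></script>

<!-- No src attribute: async and defer are ignored, so this runs synchronously -->
<script>console.log("inline scripts ignore async and defer");</script>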

preload

async and defer are great if you want to put off handling some scripts, but what about stuff on your web page that’s critical for user experience? Speculative parsers are handy, but they preload only a handful of resource types and follow their own logic. The general goal is to deliver CSS first because it blocks rendering. Synchronous scripts will always have higher priority than asynchronous. Images visible within the viewport should be downloaded before those below the fold. And there are also fonts, videos, SVGs… In short – it’s complicated.

As an author, you know which resources are the most important for rendering your page. Some of them are often buried in CSS or scripts and it can take the browser quite a while before it even discovers them. For those important resources you can now use <link rel="preload"> to communicate to the browser that you want to load them as soon as possible.

All you need to write is:

<link rel="preload" href="very_important.js" as="script">

You can link pretty much anything and the as attribute tells the browser what it will be downloading. Some of the possible values are:

  • script
  • style
  • image
  • font
  • audio
  • video

You can check out the rest of the content types on MDN.
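
For example, preloading a stylesheet and a hero image looks like this (the paths are illustrative):

<link rel="preload" href="critical.css" as="style">
<link rel="preload" href="hero.jpg" as="image">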

Fonts are probably the most important thing that gets hidden in the CSS. They are critical for rendering the text on the page, but they don’t get loaded until the browser is sure that they are going to be used. That check happens only after the CSS has been parsed and applied, and the browser has matched CSS rules to the DOM nodes. This happens fairly late in the page loading process and it often results in an unnecessary delay in text rendering. You can avoid it by using preload when you link fonts.

One thing to pay attention to when preloading fonts is that you also have to set the crossorigin attribute even if the font is on the same domain:

<link rel="preload" href="font.woff" as="font" crossorigin>

The preload feature has limited support at the moment as the browsers are still rolling it out, but you can check the progress here.

Conclusion

Browsers are complex beasts that have been evolving since the 90s. We’ve covered some of the quirks from that legacy and some of the newest standards in web development. Writing your code with these guidelines will help you pick the best strategies for delivering a smooth browsing experience.

If you’re excited to learn more about how browsers work here are some other Hacks posts you should check out:

Quantum Up Close: What is a browser engine?
Inside a super fast CSS engine: Quantum CSS (aka Stylo)

The Mozilla BlogPublic Event: The Fate of Net Neutrality in the U.S.

Mozilla is hosting a free panel at the Internet Archive in San Francisco on Monday, September 18. Hear top experts discuss why net neutrality matters and what we can do to protect it.

 

Net neutrality is under siege.

Despite protests from millions of Americans, FCC Chairman Ajit Pai is moving forward with plans to dismantle hard-won open internet protections.

“Abandoning these core protections will hurt consumers and small businesses alike,” Mozilla’s Heather West penned in an open letter to Pai earlier this week, during Pai’s visit to San Francisco.

The FCC may vote to gut net neutrality as early as October. What does this mean for the future of the internet?

Join Mozilla and the nation’s leading net neutrality experts at a free, public event on September 18 to discuss just this. We will gather at the Internet Archive to discuss why net neutrality matters to a healthy internet — and what can be done to protect it.

RSVP: The Battle to Save Net Neutrality

Net neutrality is under siege. Mozilla is hosting a public panel in San Francisco to explore what’s ahead

<WHAT>

The Battle to Save Net Neutrality, a reception and discussion in downtown San Francisco. Register for free tickets

<WHO>

Mozilla Tech Policy Fellow and former FCC Counselor Gigi Sohn will moderate a conversation with the nation’s leading experts on net neutrality, including Mozilla’s Chief Legal and Business Officer, Denelle Dixon, and:

Tom Wheeler, Former FCC Chairman who served under President Obama and was architect of the 2015 net neutrality rules

Representative Ro Khanna, (D-California), who represents California’s 17th congressional district in the heart of Silicon Valley

Amy Aniobi, Supervising Producer of HBO’s “Insecure”

Luisa Leschin, Co-Executive Producer/Head Writer of Amazon’s “Just Add Magic”

Malkia Cyril, Executive Director of the Center for Media Justice

and Dane Jasper, CEO and Co-Founder of Sonic.

<WHEN>

Monday, September 18, 2017 from 6 p.m. to 9 p.m. PT

<WHERE>

The Internet Archive, 300 Funston Avenue San Francisco, CA 94118

RSVP: The Battle to Save Net Neutrality

The post Public Event: The Fate of Net Neutrality in the U.S. appeared first on The Mozilla Blog.

Mike Taylorhyperlinks in buttons are probably not a great idea

Over in web-bug #9726, there's an interesting issue reported against glitch.com (which is already fixed because those peeps are classy):

Basically, they had an HTML <button> that when clicked would display:block a descendant <dialog> element that contained some hyperlinks to help you create a new project.

screenshot of glitch.com button

The simplest test case:

<button>
  <a href="https://example.com">do cool thing</a>
</button>

Problem is, clicking on an anchor with an href inside of a button does nothing in Firefox (and Opera Presto, which only 90s kids remember).

What the frig, web browsers.

But it turns out HTML is explicit on the subject, as it often is, stating that a button's content model must not have an interactive content descendant.

(and <a href> is totally, like, interactive content, itsho*)

Soooo, probably not a good idea to follow this pattern. And who knows what it means for accessibility.

The fix for glitch is simple: just make the <dialog> a sibling, and hide and show it the same way.
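
A rough sketch of that sibling pattern (the ids and toggle logic are illustrative, not glitch.com's actual code):

<button id="new-project">New project</button>
<dialog id="new-project-menu">
  <a href="https://example.com/new">do cool thing</a>
</dialog>
<script>
  // Toggle the sibling <dialog> with display, the same way the nested version did.
  // (Assumes the browser hides a closed <dialog> by default.)
  var button = document.getElementById("new-project");
  var menu = document.getElementById("new-project-menu");
  button.addEventListener("click", function () {
    menu.style.display = (menu.style.display === "block") ? "none" : "block";
  });
</script>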

* in the spec's humble opinion

Robert O'CallahanSome Opinions On The History Of Web Audio

People complain that Web Audio provides implementations of numerous canned processing features, but they very often don't do exactly what you want, and working around those limitations by writing your own audio processing code in JS is difficult or impossible.

This was an obvious pitfall from the moment the Web Audio API was proposed by Chris Rogers (at Google, at that time). I personally fought pretty hard in the Audio WG for an API that would be based on JS audio processing (with allowance for popular effects to be replaced with browser-implemented modules). I invested enough to write a draft spec for my alternative and implement a lot of that spec in Firefox, including Worker-based JS sample processing.

My efforts went nowhere for several reasons. My views on making JS sample manipulation a priority were not shared by the Audio WG. Here's my very first response to Chris Rogers' reveal of the Web Audio draft; you can read the resulting discussion there. The main arguments against prioritizing JS sample processing were that JS sample manipulation would be too slow, and JS GC (or other non-realtime behaviour) would make audio too glitchy. Furthermore, audio professionals like Chris Rogers assured me they had identified a set of primitives that would suffice for most use cases. Since most of the Audio WG were audio professionals and I wasn't, I didn't have much defense against "audio professionals say..." arguments.

The Web Audio API proceeded mostly unchanged because there wasn't anyone other than me trying to make significant changes. After an initial burst of interest Apple's WG participation declined dramatically, perhaps because they were getting Chris Rogers' Webkit implementation "for free" and had nothing to gain from further discussion. I begged Microsoft people to get involved but they never did; in this and other areas they were (are?) apparently content for Mozilla and Google to spend energy to thrash out a decent spec that they later implement.

However, the main reason that Web Audio was eventually standardized without major changes is because Google and Apple shipped it long before the spec was done. They shipped it with a "webkit" prefix, but they evangelized it to developers who of course started using it, and so pretty soon Mozilla had to cave.

Ironically, soon after Web Audio won, the "extensible Web" became a hot buzzword. Web Audio had a TAG review at which it was clear Web Audio was pretty much the antithesis of "extensible Web", but by then it was too late to do anything about it.

What could I have done better? I probably should have reduced the scope of my spec proposal to exclude MediaStream/HTMLMediaElement integration. But I don't think that, or anything else I can think of, would have changed the outcome.

Mozilla Security BlogVerified cryptography for Firefox 57

Traditionally, software is produced in this way: write some code, maybe do some code review, run unit-tests, and then hope it is correct. Hard experience shows that it is very hard for programmers to write bug-free software. These bugs are sometimes caught in manual testing, but many bugs still are exposed to users, and then must be fixed in patches or subsequent versions. This works for most software, but it’s not a great way to write cryptographic software; users expect and deserve assurances that the code providing security and privacy is well written and bug free.

Even innocuous looking bugs in cryptographic primitives can break the security properties of the overall system and threaten user security. Unfortunately, such bugs aren’t uncommon. In just the last year, popular cryptographic libraries have issued dozens of CVEs for bugs in their core cryptographic primitives or for incorrect use of those primitives. These bugs include many memory safety errors, some side-channel leaks, and a few correctness errors, for example, in bignum arithmetic computations… So what can we do?

Fortunately, recent advances in formal verification allow us to significantly improve the situation by building high assurance implementations of cryptographic algorithms. These implementations are still written by hand, but they can be automatically analyzed at compile time to ensure that they are free of broad classes of bugs. The result is that we can have much higher confidence that our implementation is correct and that it respects secure programming rules that would usually be very difficult to enforce by hand.

This is a very exciting development and Mozilla has partnered with INRIA and Project Everest  (Microsoft Research, CMU, INRIA) to bring components from their formally verified HACL* cryptographic library into NSS, the security engine which powers Firefox. We believe that we are the first major Web browser to have formally verified cryptographic primitives.

The first result of this collaboration, an implementation of the Curve25519 key establishment algorithm (RFC7748), has just landed in Firefox Nightly. Curve25519 is widely used for key-exchange in TLS, and was recently standardized by the IETF.  As an additional bonus, besides being formally verified, the HACL* Curve25519 implementation is also almost 20% faster on 64 bit platforms than the existing NSS implementation (19500 scalar multiplications per second instead of 15100) which represents an improvement in both security and performance to our users. We expect to ship this new code as part of our November Firefox 57 release.

Over the next few months, we will be working to incorporate other HACL* algorithms into NSS, and will also have more to say about the details of how the HACL* verification works and how it gets integrated into NSS.

Benjamin Beurdouche, Franziskus Kiefer & Tim Taubert

The post Verified cryptography for Firefox 57 appeared first on Mozilla Security Blog.

Dave TownsendHow do you become a Firefox peer? The answer may surprise you!

So you want to know how someone becomes a peer? Surprisingly the answer is pretty unclear. There is no formal process for peer status, at least for Firefox and Toolkit. I haven’t spotted one for other modules either. What has generally happened in the past is that from time to time someone will come along and say, “Oh hey, shouldn’t X be a peer by now?” to which I will say “Uhhh maybe! Let me go talk to some of the other peers that they have worked with”. After that magic happens and I go and update the stupid wiki pages, write a blog post and mail the new peers to congratulate them.

I’d like to formalise this a little bit and have an actual process that new peers can see and follow along to understand where they are. I’d like feedback on this idea, it’s just a straw-man at this point. With that I give you … THE ROAD TO PEERSHIP (cue dramatic music).

  1. Intro patch author. You write basic patches, request review and get them landed. You might have level 1 commit access, probably not level 3 yet though.
  2. Senior patch author. You are writing really good patches now. Not just simple stuff. Patches that touch multiple files maybe even multiple areas of the product. Chances are you have level 3 commit access. Reviewers rarely find significant issues with your work (though it can still happen). Attention to details like maintainability and efficiency are important. If your patches are routinely getting backed out or failing tests then you’re not here yet.
  3. Intro reviewer. Before being made a full peer you should start reviewing simple patches. Either by being the sole reviewer for a patch written by a peer or doing an initial review before a peer does a final sign-off. Again paying attention to maintainability and efficiency are important. As is being clear and polite in your instructions to the patch author as well as being open to discussion where disagreements happen.
  4. Full peer. You, your manager or a peer reach out to me showing me cases where you’ve completed the previous levels. I double-check with a couple of peers you’ve worked with. Congratulations, you made it! Follow up on review requests promptly. Be courteous. Re-direct reviews that are outside your area of expertise.

Does this sound like a reasonable path? What criteria am I missing? I’m not yet sure what length of time we would expect each step to take, but I imagine that more senior contributors could skip straight to step 2.

Feedback welcome here or in private by email.

Air MozillaThe Joy of Coding - Episode 112

The Joy of Coding - Episode 112 mconley livehacks on real Firefox bugs while thinking aloud.

Will Kahn-GreeneSocorro and Firefox 57

Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro--specifically the Socorro collector.

Teams at Mozilla are feverishly working on Firefox 57. That's super important work and we're getting down to the wire. Socorro is a critical part of that development work as it collects incoming crashes, processes them, and has tools for analysis.

This blog post covers some of the things Socorro engineering has been doing to facilitate that work and what we're planning from now until Firefox 57 release.

This quarter

This quarter, we replaced Snappy with Tecken for more reliable symbol lookup in Visual Studio and other clients.

We built a Docker-based local dev environment for Socorro, making it easier to run Socorro on your local machine configured like crash-stats.mozilla.com. It now takes five steps to get Socorro running on your computer.

We also overhauled the signature generation system in Socorro and slapped on a command-line interface. Now you can test the effects of signature generation changes on specific crashes as well as groups of crashes on your local machine.

We've also been fixing stability issues and bugs and myriad other things.

Now until Firefox 57

Starting today and continuing until after Firefox 57 release, we are:

  1. prioritizing your signature generation changes, getting them landed, and pushing them to -prod
  2. triaging Socorro bugs into "need it right now" and "everything else" buckets
  3. deferring big changes to Socorro until after Firefox 57 including API endpoint deprecation, major UI changes to the crash-stats interface, and other things that would affect your workflow

We want to make sure crash analysis is working as well as it can, so you can do the best you can, so we can have a successful Firefox 57.

Please contact us if you need something!

We hang out on #breakpad on irc.mozilla.org. You can also write up bugs.

Hopefully this helps. If not, let us know!

Mozilla Open Policy & Advocacy BlogAnnouncing the 2017 Ford-Mozilla Open Web Fellows!

At the foundation of our net policy and advocacy platforms at Mozilla is our support for the growing network of leaders all over the world. For the past two years, Mozilla and the Ford Foundation have paired over fourteen organizations with progressive technologists operating at the intersection of open web security and policy; and in 2017-2018 we plan to continue our Open Web Fellows Program with our largest cohort yet! Following months of deliberation, and a recruitment process that included close to 300 competitive applicants from our global community, we’re delighted to introduce you to our 2017-2018 Open Web Fellows:

                      

This year, we’ll host an unprecedented set of eleven fellows embedded in four incumbent and seven new host organizations! These fellows will partner with their host organizations over the next 10 months to work on independent research and project development that amplifies issues of Internet Health, privacy and security, as well as net neutrality and open web policy on/offline.

If you’d like to learn more about our fellows, we encourage you to browse their bios, read up on their host organizations, and follow them on Twitter! We look forward to updating you on our Fellows’ progress, and can’t wait to learn more from them over the coming months. Stay tuned!

The post Announcing the 2017 Ford-Mozilla Open Web Fellows! appeared first on Open Policy & Advocacy.

Mozilla Open Innovation TeamOpen Source Needs Students To Thrive

This past year, thousands of computer science students in the United States were inspired by open source, yet in many cases their flames of interest were doused by the structure of technical education at most colleges. Concerns about students plagiarizing each other’s work, lack of structural support, resources, and community connections are making it hard for students to jump between curious to capable in the world of open source.

As part of our ongoing efforts to engage college students and develop a program to support open source clubs, Mozilla’s Open Innovation Team recently conducted a study to better understand the current state of open source on US Campuses. We also asked ourselves “what can Mozilla do to support and fuel students who are actively engaged in advancing open source?” Read the full research report here.

We ran a broad screening process to identify students with an interest in technology, an interest in open source, and who also represented a diversity of gender identities, academic focuses, locations and schools. We ultimately selected 25 students with whom to conduct an in-depth interview.

Photo distributed with CC BY-NC-ND

We found that open source is usually learned outside the classroom: there is strong interest, but the overall level of open source literacy is low.

Students are excited about open source, but there’s a knowledge gap

Students are generally excited about the idea of open source, citing the control it gives them over the software they use, the opportunities it provides for them to build skills, and the emphasis on community.

However, for many students a lot is still unknown, and there are core aspects of open source that lots of students weren’t aware of. For example, a challenge that many students faced when trying to contribute to an existing open source project was not knowing how to analytically read code. One student described his challenges trying to read a codebase for the first time…

“I looked at a codebase and I had no idea where to begin. It felt like it would take weeks just to come up to speed.” — Eric, Georgia Tech

Students also were worried about how viable open source is as a career path, leading one student to ask, “how can I pay my student loans with open source?”

Another example came from a hackathon attended by our researcher, in the submissions to the “Best Open Source Hack” category: only 5 of the 16 entries correctly licensed their software. 10 of the disqualified teams expressed surprise that a license was required. They had believed that all that was required to make software open source was to release it on Github.

I had been told that being on Github was enough. I had never heard about licensing before!

Open source isn’t taught, it’s learned informally

A major reason for this lack of literacy is that open source is rarely taught as part of university curriculum (except at Portland State University). In fact, the structure and culture of most computer science programs often unintentionally reinforce behaviors that are counter to developing the skills necessary to make contributions to existing open source projects. A large part of this seems to come from a desire to prevent academic dishonesty.

“An [Open Source Club] member recently told me that one of the reasons he joined was that he wanted to be able to code alongside other people and help them solve problems with their code. He didn’t feel like he could normally do that in his classes without being accused of helping people ‘cheat.’” — Wes, Rensselaer Polytechnic

As a result most students learn about open source informally through hobbies, like robotics programming, extracurriculars or their peers.

The reality is that on most college campuses, Open Source is learned in students’ off time and during club times. It’s students teaching students, not professors teaching us. — Semirah, UMass Dartmouth

Implications: Starting their careers with a knowledge and skills gap

A generation of developers are at risk of starting their technical careers without understanding or even knowing about open source or the value of open. Mozilla purposefully designs open products and technologies which can grow and change the Web because of passionate OS contributors, but we need to enable the next generation to drive the mission forward.

“Open source offers an alternative to corporate control of programs and the web. That’s something that needs to be encouraged.” — Casey, Portland State

Opportunities: Filling the need for bottom-up support

As people who care about open source, we can tackle this by supporting organizations like POSSE who are working to get better open source education into the classroom, ensuring that more students are exposed to open source concepts and the basic skills they’d need to participate as a part of their education.

Given the challenges and wait times associated with introducing new curriculum in most universities, there is also an immediate and present need for well-supported, networked, informal structures that help teach, instill and provide access to open source projects and technologies for students. From what we learned so far and from the feedback we got from the students, there is a real opportunity for Mozilla to fill this need and make a difference on campuses interested in open source.

Next Steps for Mozilla’s Open Source Student Network

Based on this research, we are currently working with a team of student leaders to design a program that makes it easy for students to learn about and contribute to open source on their campuses.

We are also working closely with organizations already in this space such as POSSE, Red Hat, and the Open Source Initiative to create educational content and connect with professors and students who share our mission.

Furthermore, we’re partnering with other teams and projects inside Mozilla, such as Add-ons, Rust, Dev Tools, and Mozilla VR/AR, to create activities and challenges that motivate and engage a vast network of students and professors in our products and technology development processes.

Does this reflect your experience? Tell us what it’s like on your campus in the comments here or reach out to us on discourse or via email at campusclubs@mozilla.com!


Open Source Needs Students To Thrive was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Mozilla BlogMozilla Announces 15 New Fellows for Science, Advocacy, and Media

These technologists, researchers, activists, and artists will spend the next 10 months making the Internet a better place

 

Today, Mozilla is announcing 15 new Fellows in the realms of science, advocacy, and media.

Fellows hail from Mexico, Bosnia & Herzegovina, Uganda, the United States, and beyond. They are multimedia artists and policy analysts, security researchers and ethical hackers.

Over the next several months, Fellows will put their diverse abilities to work making the Internet a healthier place. Among their many projects are initiatives to make biomedical research more open; uncover technical solutions to online harassment; teach privacy and security fundamentals to patrons at public libraries; and curtail mass surveillance within Latin American countries.

 

<Meet our Ford-Mozilla Open Web Fellows>

 

The 2017 Ford-Mozilla Open Web Fellows

Ford-Mozilla Open Web Fellows are talented technologists who are passionate about privacy, security, and net neutrality. Fellows embed with international NGOs for 10 months to work on independent research and project development.

Past Open Web Fellows have helped build open-source whistle-blowing software, and analyzed discriminatory police practice data.

Our third cohort of Open Web Fellows was selected from more than 300 applications. Our 11 2017 Fellows and host organizations are:

Sarah Aoun | Hollaback!

Carlos Guerra | Derechos Digitales

Sarah Kiden | Research ICT Africa

Bram Abramson | Citizen Lab

Freddy Martinez | Freedom of the Press Foundation

Rishab Nithyanand | Data & Society

Rebecca Ricks | Human Rights Watch

Aleksandar Todorović | Bits of Freedom

Maya Wagoner | Brooklyn Public Library

Orlando Del Aguila | Majal

Nasma Ahmed | MPower Change

Learn more about our Open Web Fellows.

 

<Meet our Mozilla Fellows in Science>

Mozilla’s Open Science Fellows work at the intersection of research and openness. They foster the use of open data and open source software in the scientific community, and receive training and support from Mozilla to hone their skills around open source, participatory learning, and data sharing.

Past Open Science fellows have developed online curriculum to teach the command line and scripting languages to bioinformaticians. They’ve defined statistical programming best-practices for instructors and open science peers. And they’ve coordinated conferences on the principles of working open.

Our third cohort of Open Science Fellows — supported by the Siegel Family Endowment — was selected from a record pool of 1,090 applications. Our two 2017 fellows are:

Amel Ghouila

A computer scientist by background, Amel earned her PhD in Bioinformatics and is currently a bioinformatician at Institut Pasteur de Tunis. She works on the frame of the pan-African bioinformatics network H3ABionet, supporting researchers and their projects while developing bioinformatics capacity throughout Africa. Amel is passionate about knowledge transfer and working open to foster collaborations and innovation in the biomedical research field. She is also passionate about empowering and educating young girls — she launched the Technovation Challenge Tunisian chapter to help Tunisian girls learn how to address community challenges by designing mobile applications.

Follow Amel on Twitter and Github.

 

Chris Hartgerink

Chris is an applied statistics PhD-candidate at Tilburg University, as part of the Metaresearch group. He has contributed to open science projects such as the Reproducibility Project: Psychology. He develops open-source software for scientists. And he conducts research on detecting data fabrication in science. Chris is particularly interested in how the scholarly system can be adapted to become a sustainable, healthy environment with permissive use of content, instead of a perverse system that promotes unreliable science. He initiated Liberate Science to work towards such a system.

Follow Chris on Twitter and Github.

Learn more about our Open Science Fellows.

 

<Meet our Mozilla Fellows in Media>

This year’s Mozilla Fellows cohort will also be joined by media producers.  These makers and activists have created public education and engagement work that explores topics related to privacy and security.  Their work incites curiosity and inspires action, and over their fellowship year they will work closely with the Mozilla fellows cohort to understand and explain the most urgent issues facing the open Internet. Through a partnership with the Open Society Foundation, these fellows join other makers who have benefited from Mozilla’s first grants to media makers. Our two 2017 fellows are:

Hang Do Thi Duc

Hang Do Thi Duc is a media maker whose artistic work is about the social web and the effect of data-driven technologies on identity, privacy, and society. As a German Fulbright and DAAD scholar, Hang received an MFA in Design and Technology at Parsons in New York City. She most recently created Data Selfie, a browser extension that aims to provide users with a personal perspective on data mining and predictive analytics through their Facebook consumption.

Joana Varon

Joana is Executive Directress and Creative Chaos Catalyst at Coding Rights, a women-run organization working to expose and redress the power imbalances built into technology and its application. Coding Rights focuses on imbalances that reinforce gender and North/South inequalities.

 

Meet more Mozilla fellows. The Mozilla Tech Policy Fellowship, launched in June 2017, brings together tech policy experts from around the world. Tech Policy Fellows participate in policy efforts to improve the health of the Internet. Find more details about the fellowship and individuals involved. Learn more about the Tech Policy Fellows.

The post Mozilla Announces 15 New Fellows for Science, Advocacy, and Media appeared first on The Mozilla Blog.

Dave TownsendNew Firefox and Toolkit module peers

Please join me in welcoming another set of brave souls willing to help shepherd new code into Firefox and Toolkit:

  • Luke Chang
  • Ricky Chien
  • Luca Greco
  • Kate Hudson
  • Tomislav Jovanovic
  • Ray Lin
  • Fischer Liu

While going through this round of peer updates I’ve realised that it isn’t terribly clear how people become peers. I intend to rectify that in a coming blog post.

Cameron KaiserBlueBorne and the Power Mac TL;DR: low practical risk, but assume the worst

Person of Interest, which is one of my favourite shows (Can. You. Hear. Me?) was so very ahead of its time in many respects, and awfully prescient about a lot else. One of those things was taking control of a device for spying purposes via Bluetooth, which the show variously called "forced pairing" or "bluejacking."

Because, thanks to a newly discovered constellation of flaws nicknamed BlueBorne, you can do this for real. Depending on the context and the flaw in question, which varies from operating system to operating system, you can achieve anything from information leaks and man-in-the-middle attacks to full remote code execution without the victim system having to do anything other than merely having their Bluetooth radio on. (And people wonder why I never have Bluetooth enabled on any of my devices and use a wired headset with my phone.)

What versions of OS X are likely vulnerable? The site doesn't say, but it gives us a couple clues with iOS, which shares the XNU kernel. Versions 9.3.5 and prior are all vulnerable to remote code execution, including AppleTV version 7.2.2 which is based on iOS 8.4.2; this correlates with an XNU kernel version of 15.6.0, i.e., El Capitan. Even if we consider there may be some hardening in contemporary desktop versions of macOS, 10.4 and 10.5 are indisputably too old for that, and 10.6 very likely as well. It is therefore reasonable to conclude Power Macs are vulnerable.

As a practical matter, though, an exploit that relies on remote code execution would have to put PowerPC code somewhere it could execute, i.e., the exploit would have to be specific to Power Macs. Unless your neighbour is, well, me, this is probably not a high probability in practice. A bigger risk might be system instability if an OS X exploit is developed and weaponized and tries spraying x86 code at victim systems instead. On a 10.6 system you'd be at real risk of being pwned (more on that below). On a PowerBook G4, they wouldn't be able to take your system over, but it has a good chance of getting bounced up and down and maybe something damaged in the process. This is clearly a greater risk for laptops than desktop systems, since laptops might be in more uncontrolled environments where they could be silently probed by an unobserved attacker.

The solution is obvious: don't leave Bluetooth on, and if you must use it, enable it only in controlled environments. (This would be a good time to look into a wired keyboard or a non-Bluetooth wireless mouse.) My desktop daily drivers, an iMac G4 and my trusty Quad G5, don't have built-in Bluetooth. When I need to push photos from my Pixel, I plug in a USB Bluetooth dongle and physically disconnect it when I'm done. As far as my portable Power Macs in the field, I previously used Bluetooth PAN with my iBook G4 for tethering but I think I'll be switching to WiFi for that even though it uses more power, and leave Bluetooth disabled except if I have no other options. I already use a non-Bluetooth wireless mouse that does not require drivers, so that's covered as well.

Older Intel Mac users, it goes without saying that if you're on anything prior to Sierra you should assume the worst as well. Apple may or may not offer patches for 10.10 and 10.11, but they definitely won't patch 10.9 and earlier, and you are at much greater risk of being successfully exploited than Power Mac users. Don't turn on Bluetooth unless you have to.

Very Soon Now(tm) I will be doing an update to our old post on keeping Power Macs safe online, and this advice will be part of it. Watch for that a little later.

Meanwhile, however, the actual risk to our Power Macs isn't the biggest question this discovery poses. The biggest question is, if the show got this right, what if there's really some sort of Samaritan out there too?

Mozilla VR BlogSHA Hacker Camp: Learning a byte about Virtual Reality on the Web


SHA (Still Hacking Anyways) is a nonprofit, outdoor hacker-camp series organized every four years. SHA2017 was held this August 4-8 in Zeewolde, Netherlands.

Attended by more than 3500 hackers, SHA was a fun, knowledge-packed four-day festival. The festival featured a wide range of talks and workshops, including sessions related to Internet of Things (IoT), hardware and software hacking, security, privacy, and much more!

Ram Dayal Vaishnav, a Tech Speaker from Mozilla’s Indian community, presented a session on WebVR, Building a Virtual-Reality Website using A-Frame. Check out a video recording of Ram’s talk:

Head on over to Ram’s personal blog to catch a few more highlights from SHA2017.

Mike HoyeCleaning House

Current status:



When I was desk-camping in CDOT a few years ago, one thing I took no small joy in was the combination of collegial sysadminning and servers all named after cities or countries that made a typical afternoon’s cubicle chatter sound like a rapidly-developing multinational diplomatic crisis.

Change management when you’re module owner of Planet Mozilla and de-facto administrator of a dozen or so lesser planets is kind of like that. But way, way better.

Over the next two weeks or so I’m going to be cleaning up Planet Mozilla, removing dead feeds and culling the participants list down to people still actively participating in the Mozilla project in some broadly-defined capacity. As well, I’ll be consuming (which is to say, decommissioning) a number of uninhabited, under- or unused lesser planets and rolling any stray debris back into Planet Mozilla proper.

With that in mind, if anything goes missing that you expected to survive a transition like that, feel free to email me or file a bug. Otherwise, if any of your feeds break I am likely to be the cause of that, and if you find a planet you were following has vanished you can take some solace in the fact that it was probably delicious.

Firefox NightlyThese Weeks in Firefox: Issue 23

The team is busy sanding down the last few rough edges, and getting Firefox 57 ready to merge to beta! So busy in fact, that there are no screenshots or GIFs for this blog post. Sorry!

If you’re hankering for a more visual update, check out dolske’s Photon Engineering Newsletter #15!

Highlights

Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

Project Updates

Add-ons

Activity Stream

Browser Architecture

Firefox Core Engineering

  • Bug 1390703 – Flash Click-to-Play being increased to 25% on Release 55, (hopefully) shortly followed by 100%
  • Bug 1397562 – Update staging is now disabled on OSX and Linux (update staging was disabled on Windows in bug 1397562).
    • This is in response to what we think may be an issue with e10s sandboxing.
    • This is why you may suddenly be seeing a flash of the “Nightly is applying updates” (like in bug 1398641).
  • Bug 1380252, bug 1380254 – Optimized data in crash reports and crash ping processing.
  • Open call for ideas/investigation on bug 1276488 — suspected omnijar corruption, but not much to go on.

Form Autofill

Mobile

  • Firefox iOS 8.3 shipped last week and contains primarily bug fixes
  • Firefox iOS 9.0 has been sent to QA for final verification and is expected to ship next week. It is a fantastic release with the following highlights:
    • Support for syncing your mobile bookmarks between all your devices
    • Tracking Protection will be enabled by default for Private mode and can be enabled for Regular Mode
    • Large improvements in our data storage layer that should improve performance and stability
    • Many small bug fixes
    • Compatibility with iOS 11 (which likely ships next week)

Photon

Performance
  • For 57 we had to disable tab warming when hovering tabs because it caused more regressions than we are comfortable fixing for 57. We are now planning to ship this significant perf improvement in 58.
  • All the significant performance improvements we are still working on at this point are at risk for 57 because we are trying to avoid risk.
Structure
Animation
  • Investigation ongoing for bug 1397092 – high CPU usage possibly caused by new 60fps tab loading indicator
  • Fatter download progress bar (bug 1387557) is in for review; it is the last animation feature planned for 57
  • Polishing, please report any glitches you see

Search and Navigation

Test Pilot

  • We reduced our JS bundle size from 2.6 MB to 736 KB
  • Send is working on A/B tests and adding password protection

Air MozillaRust Berlin Meetup September 2017

Rust Berlin Meetup September 2017 Talks: An overview of the Servo architecture by Emilio and rust ❤️ sensors by Claus

Hacks.Mozilla.OrgExperimenting with WebAssembly and Computer Vision

This past summer, four time-crunched engineers with no prior WebAssembly experience began experimenting. The result after six weeks of exploration was WebSight: a real-time face detection demo based on OpenCV.

By compiling OpenCV to WebAssembly, the team was able to reuse a well-tested C/C++ library directly in the browser and achieve performance an order of magnitude faster than a similar JavaScript library.

I asked the team members—Brian Feldman, Debra Do, Yervant Bastikian, and Mark Romano—to write about their experience.

Note: The report that follows was written by the team members mentioned above.

WebAssembly (“wasm”) made a splash this year with its MVP release, and eager to get in on the action, we set out to build an application that made use of this new technology.

We’d seen projects like WebDSP compile their own C++ video filters to WebAssembly, an area where JavaScript has historically floundered due to the computational demands of some algorithms. This got us interested in pushing the limits of wasm, too. We wanted to use an existing, specialized, and time-tested C++ library, and after much deliberation, we landed on OpenCV, a popular open-source computer vision library.

Computer vision is highly demanding on the CPU, and thus lends itself well to wasm. Building off of some incredible work put forward by the UC Irvine SysArch group and Github user njor, we were able to update outdated asm.js builds of OpenCV to compile with modern versions of Emscripten, exposing much of OpenCV’s core functionality in JavaScript callable formats.

Working with these Emscripten builds went much differently than we expected. As Web developers, we’re used to writing code and being able to iterate and test very quickly. Introducing a large C++ library with 10-15 minute build times was a foreign experience, especially when our normal working environments are Webpack, Nodemon, and hot reloading everywhere. Once compiled, we approached the wasm build as a bit of a black box: the module started as an immutable beast of an object, and though we understood it more and more throughout the process, it never became ‘transparent’.

The efforts spent on compiling the wasm file, and then incorporating it into our JavaScript were worthwhile: it outperformed JavaScript with ease, and was significantly quicker than WebAssembly’s predecessor, asm.js.

We compared these formats through the use of a face detection algorithm. The architecture of the functions that drove these algorithms was the same; the only difference was the implementation language for each algorithm. Using web workers, we passed video stream data into the algorithms, which returned with the coordinates of a rectangle that would frame any faces in the image, and calculated an FPS measure. While the range of FPS is dependent on the user’s machine and the browser being used (Firefox takes the cake!), we noted that the FPS of the wasm-powered algorithm was consistently twice as high as the FPS of the asm.js implementation, and twenty times higher than the JS implementation, solidifying the benefits of WebAssembly.

Building in cutting-edge technology can be a pain, but the reward was worth the temporary discomfort. Being able to use native, portable, C/C++ code in the browser, without third-party plugins, is a breakthrough. Our project, WebSight, successfully demonstrated the use of OpenCV as a WebAssembly module for face and eye detection. We’re really excited about the future of WebAssembly, especially the eventual addition of garbage collection, which will make it easier to efficiently run other high-level languages in the browser.

You can view the demo’s GitHub repository at github.com/Web-Sight/WebSight.

Air MozillaMartes Mozilleros, 12 Sep 2017

Martes Mozilleros Bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects.

Mozilla Open Innovation TeamMozilla running into CHAOSS to Help Measure and Improve Open Source Community Health

This week the Linux Foundation announced project CHAOSS, a collaborative initiative focused on creating the analytics and metrics to help define the health of open source communities, and developing tools for analyzing and improving the contributor experience in modern software development.

credit: Chaoss project

Besides Mozilla, initial members contributing to the project include Bitergia, Eclipse Foundation, Jono Bacon Consulting, Laval University (Canada), Linaro, OpenStack, Polytechnique Montreal (Canada), Red Hat, Sauce Labs, Software Sustainability Institute, Symphony Software Foundation, University of Missouri, University of Mons (Belgium), University of Nebraska at Omaha, and University of Victoria.

With the combined expertise from academic researchers and practitioners from industry the CHAOSS metrics committee aims to “define a neutral, implementation-agnostic set of reference metrics to be used to describe communities in a common way.” The analytical work will be complemented by the CHAOSS software committee, “formed to provide a framework for establishing an open source GPLv3 reference implementation of the CHAOSS metrics.”

Mozilla’s Open Innovation strategist Don Marti will be part of the CHAOSS project’s governance board, which is responsible for the overall oversight of the Project and coordination of efforts of the technical committees.

As a member of CHAOSS, Mozilla is committed to supporting research that will help maintainers pick the right open source metrics to focus on — metrics that will help open source projects make great software and provide a rewarding experience for contributors.

If you want to learn more about how to participate in the project have a look at the CHAOSS community website: https://chaoss.community.


Mozilla running into CHAOSS to Help Measure and Improve Open Source Community Health was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Chris H-CTwo Days, or How Long Until The Data Is In

Two days.

It doesn’t seem like long, but that is how long you need to wait before looking at a day’s Firefox data and being sure that 95% of it has been received.

There are some caveats, of course. This only applies to current versions of Firefox (55 and later). This will very occasionally be wrong (like, say, immediately after Labour Day when people finally get around to waking up their computers that have been sleeping for quite some time). And if you have a special case (like trying to count nearly everything instead of just 95% of it) you might want to wait a bit longer.

But for most cases: Two Days.

As part of my 2017 Q3 Deliverables I looked into how long it takes clients to send their anonymous usage statistics to us using Telemetry. This was a culmination of earlier ponderings on client delay, previous work in establishing Telemetry client health, and an eighteen-month (or more!) push to actually look at our data from a data perspective (meta-data).

This led to a meeting in San Francisco where :mreid, :kparlante, :frank, :gfritzsche, and I settled upon a list of metrics that we ought to measure to determine how healthy our Telemetry system is.

Number one on that list: latency.

It turns out there’s a delay between a user doing something (opening a tab, for instance) and them sending that information to us. This is client delay and is broken into two smaller pieces: recording delay (how long from when the user does something until when we’ve put it in a ping for transport), and submission delay (how long it takes that ready-for-transport ping to get to Mozilla).
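
To make that split concrete, here is a minimal Rust sketch (not the actual Telemetry pipeline code) of how the two delays add up into a total client delay, and how a 95th-percentile figure could be computed from a batch of pings. The field names and hour-based units are assumptions for illustration only.

// Illustrative only: hypothetical per-ping delay fields, in hours.
#[derive(Debug)]
struct PingDelays {
    recording_delay: f64,  // user activity -> ping assembled on the client
    submission_delay: f64, // ping assembled -> ping received by Mozilla
}

impl PingDelays {
    // Total client delay is simply the sum of the two components.
    fn client_delay(&self) -> f64 {
        self.recording_delay + self.submission_delay
    }
}

// Returns the delay by which at least `pct` (0.0..=1.0) of pings had arrived.
fn percentile(mut delays: Vec<f64>, pct: f64) -> f64 {
    delays.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let idx = ((delays.len() as f64 - 1.0) * pct).ceil() as usize;
    delays[idx]
}

fn main() {
    let pings = vec![
        PingDelays { recording_delay: 2.0, submission_delay: 5.0 },
        PingDelays { recording_delay: 8.0, submission_delay: 1.0 },
        PingDelays { recording_delay: 1.0, submission_delay: 30.0 },
        PingDelays { recording_delay: 3.0, submission_delay: 12.0 },
    ];
    let totals: Vec<f64> = pings.iter().map(|p| p.client_delay()).collect();
    println!("95th percentile client delay: {:.1} hours", percentile(totals, 0.95));
}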

If you want to know how many tabs were opened on Tuesday, September the 5th, 2017, you couldn’t tell on the day itself. All the tabs people open late at night won’t even be in pings, and anyone who puts their computer to sleep won’t send their pings until they wake their computer in the morning of the 6th.

This is where “Two Days” comes in: On Thursday the 7th you can be reasonably sure that we have received 95% of all pings containing data from the 5th. In fact, by the 7th, you should even have that data in some scheduled datasets like main_summary.

How do we know this? We measured it:

[Chart: Client "main" ping delay for latest version]

(Remember what I said about Labour Day? That’s the exceptional case on beta 56.)

Most data, most days, comes in within a single day. Add a day to get it into your favourite dataset, and there you have it: Two Days.

Why is this such a big deal? Currently the only information circulating in Mozilla about how long you need to wait for data is received wisdom from a pre-Firefox-55 (pre-pingsender) world. Some teams wait up to ten full days (!!) before trusting that the data they see is complete enough to make decisions about.

This slows Mozilla down. If we are making decisions on data, our data needs to be fast and reliably so.

It just so happens that, since Firefox 55, it has been.

Now comes the hard part: communicating that it has changed and changing those long-held rules of thumb and idées fixes to adhere to our new, speedy reality.

Which brings us to this blog post. Consider this your notice that we have looked into the latency of Telemetry Data and it looks pretty darn quick these days. If you want to know about what happened on a particular day, you don’t need to wait for ten days any more.

Just Two Days. Then you can have your answers.

:chutten

(Much thanks to :gsvelto and :Dexter’s work on pingsender and using it for shutdown pings, :Dexter’s analyses on ping delay that first showed these amazing improvements, and everyone in the data teams for keeping the data flowing while I poked at SQL and rearranged words in documents.)

Daniel StenbergThe backdoor threat

— “Have you ever detected anyone trying to add a backdoor to curl?”

— “Have you ever been pressured by an organization or a person to add suspicious code to curl that you wouldn’t otherwise accept?”

— “If a crime syndicate would kidnap your family to force you to comply, what backdoor would you be able to insert into curl that is the least likely to get detected?” (The less grim version of this question would instead offer huge amounts of money.)

I’ve been asked these questions and variations of them when I’ve stood up in front of audiences around the world and talked about curl and how it is one of the most widely used software components in the world, counting way over three billion instances.

Back door (noun)
— a feature or defect of a computer system that allows surreptitious unauthorized access to data.

So how is it?

No. I’ve never seen a deliberate attempt to add a flaw, a vulnerability or a backdoor into curl. I’ve seen bad patches and I’ve seen patches that brought bugs that years later were reported as security problems, but I did not spot any deliberate attempt to do bad in any of them. But if done with skill, would I even have noticed that they were deliberate?

If I had cooperated in adding a backdoor or been threatened to, then I wouldn’t tell you anyway and I’d thus say no to questions about it.

How to be sure

There is only one way to be sure: review the code you download and intend to use. Or get it from a trusted source that did the review for you.

If you have a version you trust, you really only have to review the changes done since then.

Possibly there’s some degree of safety in numbers, and as thousands of applications and systems use curl and libcurl and at least some of them do reviews and extensive testing, one of those could discover mischievous activities if there are any and report them publicly.

Infected machines or owned users

The servers that host the curl releases could be targeted by attackers and the tarballs for download could be replaced by something that carries evil code. There’s no such thing as a fail-safe machine, especially not if someone really wants to and tries to target us. The safeguard there is the GPG signature with which I sign all official releases. No malicious user can (re-)produce them. They have to be made by me (since I package the curl releases). That comes back to trusting me again. There’s of course no safeguard against me being forced to sign evil code with a knife to my throat…

If one of the curl project members with git push rights would get her account hacked and her SSH key password brute-forced, a very skilled hacker could possibly sneak in something, short-term. Although my hopes are that as we review and comment on each other’s code to a very high degree, that would be really hard. And the hacked person herself would most likely react.

Downloading from somewhere

I think the highest risk scenario is when users download pre-built curl or libcurl binaries from various places on the internet that aren’t the official curl web site. How can you know for sure what you’re getting then, when you can’t review the code or the changes made? You just put your trust in a remote person or organization to do what’s right for you.

Trusting other organizations can be totally fine, for example when you download using Linux distro package management systems, since then you can expect that a certain level of checks and vouching has happened, and there will be digital signatures and more involved to minimize the risk of external malicious interference.

Pledging there’s no backdoor

Some people argue that projects could or should pledge for every release that there’s no deliberate backdoor planted so that if the day comes in the future when a three-letter secret organization forces us to insert a backdoor, the lack of such a pledge for the subsequent release would function as an alarm signal to people that something is wrong.

That takes us back to trusting a single person again. A truly evil adversary can of course force such a pledge to be uttered no matter what, even if that then probably is more mafia level evilness and not mere three-letter organization shadiness anymore.

I would be a bit stressed out to have to do that pledge every single release, because if I ever forgot or messed it up, it would lead to a lot of people getting up in arms, and how would such a mistake be fixed? It’s a little too irrevocable for me. And we do quite frequent releases so the risk for mistakes is not insignificant.

Also, if I would pledge that, is that then a promise regarding all my code only, or is that meant to be a pledge for the entire code base as done by all committers? It doesn’t scale very well…

Additionally, I’m a Swede living in Sweden. The American organizations cannot legally force me to backdoor anything, and the Swedish versions of those secret organizations don’t have the legal rights to do so either (caveat: I’m not a lawyer). So, the real threat is not by legal means.

What backdoor would be likely?

It would be very hard to add code, unnoticed, that sends off data to somewhere else. Too much code that would be too obvious.

A backdoor similarly couldn’t really be made to split off data from the transfer pipe and store it locally for other systems to read, as that too is probably too much code that is too different than the current code and would be detected instantly.

No, I’m convinced the most likely backdoor code in curl is a deliberate but hard-to-detect security vulnerability that lets the attacker exploit the program using libcurl/curl by some sort of specific usage pattern. So when triggered it can trick the program to send off memory contents or perhaps overwrite the local stack or the heap. Quite possibly only one step out of several steps necessary for a successful attack, much like how a single-byte overwrite can lead to root access.

Any past security problems on purpose?

We’ve had almost 70 security vulnerabilities reported through the project’s almost twenty years of existence. Since most of them were triggered by mistakes in code I wrote myself, I can be certain that none of those problems were introduced on purpose. I can’t completely rule out that someone else’s patch that modified curl along the way, and by extension maybe made a vulnerability worse or easier to trigger, could have been made on purpose. None of the security problems that were introduced by others have shown any sign of “deliberateness”. (Or they were written cleverly enough to not make me see that!)

Maybe backdoors have been planted that we just haven’t discovered yet?

Discussion

Follow-up discussion/comments on hacker news.

This Week In RustThis Week in Rust 199

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is pikkr, a JSON parser that can extract values without tokenization and is blazingly fast using AVX2 instructions. Thank you, bstrie, for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

99 pull requests were merged in the last week

New Contributors

  • bgermann
  • Douglas Campos
  • Ethan Dagner
  • Jacob Kiesel
  • John Colanduoni
  • Lance Roy
  • Mark
  • MarkMcCaskey
  • Max Comstock
  • toidiu
  • Zaki Manian

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

We're currently writing up the discussions, we'd love some help. Check out the tracking issue for details.

PRs:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

When programmers are saying that there are a lot of bicycles in code that means that it contains reimplementations of freely available libraries instead of using them

Presumably the metric for this would be bicyclomatic complexity?

/u/tomwhoiscontrary on reddit.

Thanks to Matt Ickstadt for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Niko MatsakisCyclic queries in chalk

In my last post about chalk queries, I discussed how the query model in chalk works. Since that writing, there have been some updates, and I thought it’d be nice to do a new post covering the current model. This post will also cover the tabling technique that scalexm implemented for handling cyclic relations and show how that enables us to implement implied bounds and other long-desired features in an elegant way. (Nice work, scalexm!)

What is a chalk query?

A query is simply a question that you can ask chalk. For example, we could ask whether Vec<u32> implements Clone like so (this is a transcript of a cargo run session in chalk):

?- load libstd.chalk
?- Vec<u32>: Clone
Unique; substitution [], lifetime constraints []

As we’ll see in a second, the answer “Unique” here is basically chalk’s way of saying “yes, it does”. Sometimes chalk queries can contain existential variables. For example, we might say exists<T> { Vec<T>: Clone } – in this case, chalk actually attempts to not only tell us if there exists a type T such that Vec<T>: Clone, it also wants to tell us what T must be:

?- exists<T> { Vec<T>: Clone }
Ambiguous; no inference guidance

The result “ambiguous” is chalk’s way of saying “probably it does, but I can’t say for sure until you tell me what T is”.

So you can think of a chalk query as a kind of subroutine like Prove(Goal) = R that evaluates some goal (the query) and returns a result R which has one of the following forms:

  • Unique: indicates that the query is provable and there is a unique value for all the existential variables.
    • In this case, we give back a substitution saying what each existential variable had to be.
    • Example: exists<T> { usize: PartialOrd<T> } would yield unique and return a substitution that T = usize, at least today (since there is only one impl that could apply, and we haven’t implemented the open world modality that aturon talked about yet).
  • Ambiguous: the query may hold but we could not be sure. Typically, this means that there are multiple possible values for the existential variables.
    • Example: exists<T> { Vec<T>: Clone } would yield ambiguous, since there are many T that could fit the bill.
    • In this case, we sometimes give back guidance, which are suggested values for the existential variables. This is not important to this blog post so I’ll not go into the details.
  • Error: the query is provably false.

(The form of these answers has changed somewhat since my previous blog post, because we incorporated some of aturon’s ideas around negative reasoning.)
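
To make those three outcomes concrete, here is a rough Rust sketch of the shape such a result could take. These are illustrative types only, not chalk’s actual API.

// Illustrative result type: the three answer forms described above.
#[allow(dead_code)]
enum QueryResult {
    // Provable, with exactly one value for each existential variable.
    Unique { substitution: Vec<(String, String)> }, // e.g. [("T", "usize")]
    // May hold, but the existential variables are not pinned down;
    // guidance, when present, suggests values for them.
    Ambiguous { guidance: Option<Vec<(String, String)>> },
    // Provably false.
    Error,
}

fn describe(r: &QueryResult) -> &'static str {
    match r {
        QueryResult::Unique { .. } => "unique solution",
        QueryResult::Ambiguous { .. } => "possibly many solutions",
        QueryResult::Error => "no solution",
    }
}

fn main() {
    let r = QueryResult::Unique {
        substitution: vec![("T".to_string(), "usize".to_string())],
    };
    println!("exists<T> {{ usize: PartialOrd<T> }} => {}", describe(&r));
}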

So what is a cycle?

As I outlined long ago in my first post on lowering Rust traits to logic, the way that the Prove(Goal) subroutine works is basically just to iterate over all the possible ways to prove the given goal and try them one at a time. This often requires proving subgoals: for example, when we were evaluating ?- Vec<u32>: Clone, internally, this would also wind up evaluating u32: Clone, because the impl for Vec<T> has a where-clause that T must be clone:

impl<T> Clone for Vec<T>
where
  T: Clone,
  T: Sized,
{ }

Sometimes, this exploration can wind up trying to solve the same goal that you started with! The result is a cyclic query and, naturally, it requires some special care to yield a valid answer. For example, consider this setup:

trait Foo { }
struct S<T> { }
impl<U> Foo for S<U> where U: Foo { }

Now imagine that we were evaluating exists<T> { T: Foo }:

  • Internally, we would process this by first instantiating the existential variable T with an inference variable, so we wind up with something like ?0: Foo, where ?0 is an as-yet-unknown inference variable.
  • Then we would consider each impl: in this case, there is only one.
    • For that impl to apply, ?0 = S<?1> must hold, where ?1 is a new variable. So we can perform that unification.
      • But next we must check that ?1: Foo holds (that is the where-clause on the impl). So we would convert this into “closed” form by replacing all the inference variables with exists binders, giving us something like exists<T> { T: Foo }. We can now perform this query.
        • Only wait: This is the same query we were already trying to solve! This is precisely what we mean by a cycle.

In this case, the right answer for chalk to give is actually Error. This is because there is no finite type that satisfies this query. The only type you could write would be something like

S<S<S<S<...ad infinitum...>>>>: Foo

where there are an infinite number of nesting levels. As Rust requires all of its types to have finite size, this is not a legal type. And indeed if we ask chalk this query, that is precisely what it answers:

?- exists<T> { S<T>: Foo }
No possible solution: no applicable candidates

But cycles aren’t always errors of this kind. Consider a variation on our previous example where we have a few more impls:

trait Foo { }

// chalk doesn't have built-in knowledge of any types,
// so we have to declare `u32` as well:
struct u32 { }
impl Foo for u32 { }

struct S<T> { }
impl<U> Foo for S<U> where U: Foo { }

Now if we ask the same query, we get back an ambiguous result, meaning that there exists many solutions:

?- exists<T> { T: Foo }
Ambiguous; no inference guidance

What has changed here? Well, introducing the new impl means that there is now an infinite family of finite solutions:

  • T = u32 would work
  • T = S<u32> would work
  • T = S<S<u32>> would work
  • and so on.

Sometimes there can even be unique solutions. For example, consider this final twist on the example, where we add a second where-clause concerning Bar to the impl for S<T>:

trait Foo { }
trait Bar { }

struct u32 { }
impl Foo for u32 { }

struct S<T> { }
impl<U> Foo for S<U> where U: Foo, U: Bar { }
//                                 ^^^^^^ this is new

Now if we ask the same query again, we get back yet a different response:

?- exists<T> { T: Foo }
Unique; substitution [?0 := u32], lifetime constraints []

Here, Chalk figured out that T must be u32. How can this be? Well, if you look, it’s the only impl that can apply – for T to equal S<U>, U must implement Bar, and there are no Bar impls at all.

So we see that when we encounter a cycle during query processing, it doesn’t necessarily mean the query needs to result in an error. Indeed, the overall query may result in zero, one, or many solutions. But how should we figure out what is right? And how do we avoid recursing infinitely while doing so? Glad you asked.

Tabling: how chalk is handling cycles right now

Naturally, traditional Prolog interpreters have similar problems. It is actually quite easy to make a Prolog program spiral off into an infinite loop by writing what seem to be quite reasonable clauses (quite like the ones we saw in the previous section). Over time, people have evolved various techniques for handling this. One that is relevant to us is called tabling or memoization – I found this paper to be a particularly readable introduction. As part of his work on implied bounds, scalexm implemented a variant of this idea in chalk.

The basic idea is as follows. When we encounter a cycle, we will actually wind up iterating to find the result. Initially, we assume that a cycle means an error (i.e., no solutions). This will cause us to go on looking for other impls that may apply without encountering a cycle. Let’s assume we find some solution S that way. Then we can start over, but this time, when we encounter the cyclic query, we can use S as the result of the cycle, and we would then check if that gives us a new solution S’.

If you were doing this in Prolog, where the interpreter attempts to provide all possible answers, then you would keep iterating, only this time, when you encountered the cycle, you would give back two answers: S and S’. In chalk, things are somewhat simpler: multiple answers simply means that we give back an ambiguous result.

So the pseudocode for solving then looks something like this:

  • Prove(Goal):
    • If goal is ON the stack already:
      • return stored answer from the stack
    • Else, when goal is not on the stack:
      • Push goal on to the stack with an initial answer of error
      • Loop
        • Try to solve goal yielding result R (which may generate recursive calls to Prove with the same goal)
        • Pop goal from the stack and return the result R if any of the following are true:
          • No cycle was encountered; or,
          • the result was the same as what we started with; or,
          • the result is ambiguous (multiple solutions).
        • Otherwise, set the answer for Goal to be R and repeat.

If you’re curious, the real chalk code is here. It is pretty similar to what I wrote above, except that it also handles “coinductive matching” for auto traits, which I won’t go into now. In any case, let’s apply this to our three examples of proving exists<T> { T: Foo }:

  • In the first example, where we only had impl<U> Foo for S<U> where U: Foo, the cyclic attempt to solve will yield an error (because the initial answer for cyclic calls is errors). There is no other way for a type to implement Foo, and hence the overall attempt to solve yields an error. This is the same as what we started with, so we just return and we don’t have to cycle again.
  • In the second example, where we added impl Foo for u32, we again encounter a cycle and return error at first, but then we see that T = u32 is a valid solution. So our initial result R is Unique[T = u32]. This is not what we started with, so we try again.
    • In the second iteration, when we encounter the cycle trying to process impl<U> Foo for S<U> where U: Foo, this time we will give back the answer U = u32. We will then process the where-clause and issue the query u32: Foo, which succeeds. Thus we wind up yielding a successful possibility, where T = S<u32>, in addition to the result that T = u32. This means that, overall, our second iteration winds up producing ambiguity.
  • In the final example, where we added a where clause U: Bar, the first iteration will again produce a result of Unique[T = u32]. As this is not what we started with, we again try a second iteration.
    • In the second iteration, we will again produce T = u32 as a result for the cycle. This time however we go on to evaluate u32: Bar, which fails, and hence overall we still only get one successful result (T = u32).
    • Since we have now reached a fixed point, we stop processing.
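
As a rough illustration of that loop, here is a minimal Rust sketch of the stack-and-iterate structure (not the real chalk solver, whose goal and answer types are far richer). The “no cycle was encountered” shortcut from the pseudocode is omitted for brevity, and the clause-matching step is stubbed out.

use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct Goal(String);

#[derive(Clone, PartialEq, Debug)]
#[allow(dead_code)]
enum Answer {
    Error,
    Unique(String),
    Ambiguous,
}

struct Solver {
    // Goals currently being solved, each with its provisional answer.
    stack: HashMap<Goal, Answer>,
}

impl Solver {
    fn prove(&mut self, goal: &Goal) -> Answer {
        // If the goal is already on the stack, return the stored answer:
        // this is what cuts a cycle off instead of recursing forever.
        if let Some(answer) = self.stack.get(goal) {
            return answer.clone();
        }
        // Otherwise, push it with an initial answer of Error and iterate
        // until a fixed point is reached (or the result is ambiguous).
        self.stack.insert(goal.clone(), Answer::Error);
        loop {
            let result = self.solve_via_program_clauses(goal);
            let previous = self.stack.get(goal).unwrap().clone();
            if result == previous || result == Answer::Ambiguous {
                self.stack.remove(goal);
                return result;
            }
            // The answer changed: store it and try another iteration.
            self.stack.insert(goal.clone(), result);
        }
    }

    fn solve_via_program_clauses(&mut self, _goal: &Goal) -> Answer {
        // In a real solver this would try each applicable impl, recursively
        // calling `prove` on subgoals (possibly hitting the cycle check
        // above). Stubbed out here.
        Answer::Error
    }
}

fn main() {
    let mut solver = Solver { stack: HashMap::new() };
    println!("{:?}", solver.prove(&Goal("exists<T> { T: Foo }".into())));
}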

Why do we care about cycles anyway?

You may wonder why we’re so interested in handling cycles well. After all, how often do they arise in practice? Indeed, today’s rustc takes a rather more simplistic approach to cycles. However, this leads to a number of limitations where rustc fails to prove things that it ought to be able to do. As we were exploring ways to overcome these obstacles, as well as integrating ideas like implied bounds, we found that a proper handling of cycles was crucial.

As a simple example, consider how to handle “supertraits” in Rust. In Rust today, traits sometimes have supertraits, which are a subset of their ordinary where-clauses that apply to Self:

// PartialOrd is a "supertrait" of Ord. This means that
// I can only implement `Ord` for types that also implement
// `PartialOrd`.
trait Ord: PartialOrd { }

As a result, whenever I have a function that requires T: Ord, that implies that T: PartialOrd must also hold:

fn foo<T: Ord>(t: T) {
  bar(t); // OK: `T: Ord` implies `T: PartialOrd`
}  

fn bar<T: PartialOrd>(t: T) {
  ...
}  

The way that we handle this in the Rust compiler is through a technique called elaboration. Basically, we start out with a base set of where-clauses (the ones you wrote explicitly), and then we grow that set, adding in whatever supertraits should be implied. This is an iterative process that repeats until a fixed-point is reached. So the internal set of where-clauses that we use when checking foo() is not {T: Ord} but {T: Ord, T: PartialOrd}.
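
For illustration, here is a small Rust sketch of that fixed-point elaboration step; the trait names are plain strings and the supertrait table is hand-written for the example, so this is not the compiler’s actual representation.

use std::collections::{HashMap, HashSet};

// Grow the set of where-clauses by adding supertraits until nothing new
// appears, i.e. until a fixed point is reached.
fn elaborate(
    base: &HashSet<&'static str>,
    supertraits: &HashMap<&'static str, Vec<&'static str>>,
) -> HashSet<&'static str> {
    let mut set = base.clone();
    loop {
        let mut added_something = false;
        for clause in set.clone() {
            for &supertrait in supertraits.get(clause).into_iter().flatten() {
                // `insert` returns true if the value was not already present.
                added_something |= set.insert(supertrait);
            }
        }
        if !added_something {
            return set; // fixed point reached
        }
    }
}

fn main() {
    let mut supertraits: HashMap<&'static str, Vec<&'static str>> = HashMap::new();
    supertraits.insert("Ord", vec!["PartialOrd", "Eq"]);
    supertraits.insert("Eq", vec!["PartialEq"]);
    supertraits.insert("PartialOrd", vec!["PartialEq"]);

    let mut base: HashSet<&'static str> = HashSet::new();
    base.insert("Ord");

    // Starting from {T: Ord}, this yields
    // {T: Ord, T: PartialOrd, T: Eq, T: PartialEq}.
    println!("{:?}", elaborate(&base, &supertraits));
}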

This is a simple technique, but it has some limitations. For example, RFC 1927 proposed that we should elaborate not only supertraits but arbitrary where-clauses declared on traits (in general, a common request). Going further, we have ideas like the implied bounds RFC. There are also just known limitations around associated types and elaboration.

The problem is that the elaboration technique doesn’t really scale gracefully to all of these proposals: often times, the fully elaborated set of where-clauses is infinite in size. (We somewhat arbitrarily prevent cycles between supertraits to prevent this scenario in that special case.)

So we tried in chalk to take a different approach. Instead of doing this iterative elaboration step, we push that elaboration into the solver via special rules. The basic idea is that we have a special kind of predicate called a WF (well-formed) goal. The meaning of something like WF(T: Ord) is basically “T is capable of implementing Ord” – that is, T satisfies the conditions that would make it legal to implement Ord. (It doesn’t mean that T actually does implement Ord; that is the predicate T: Ord.) As we lower the Ord and PartialOrd traits to simpler logic rules, then, we can define the WF(T: Ord) predicate like so:

// T is capable of implementing Ord if...
WF(T: Ord) :-
  T: PartialOrd. // ...T implements PartialOrd.

Now, WF(T: Ord) is really an “if and only if” predicate. That is, there is only one way for WF(T: Ord) to be true, and that is by implementing PartialOrd. Therefore, we can define also the opposite direction:

// T must implement PartialOrd if...
T: PartialOrd :-
  WF(T: Ord). // ...T is capable of implementing Ord.

Now if you think this looks cyclic, you’re right! Under ordinary circumstances, this pair of rules doesn’t do you much good. That is, you can’t prove that (say) u32: PartialOrd by using these rules, you would have to use other rules for that (say, rules arising from an impl).

However, sometimes these rules are useful. In particular, if you have a generic function like the function foo we saw before:

fn foo<T: Ord>() { .. }

In this case, we would setup the environment of foo() to contain exactly two predicates {T: Ord, WF(T: Ord)}. This is a form of elaboration, but not the iterative elaboration we had before. We simply introduce WF-clauses. But this gives us enough to prove that T: PartialOrd (because we know, by assumption, that WF(T: Ord)). What’s more, this setup scales to arbitrary where-clauses and other kinds of implied bounds.

Conclusion

This post covers the tabling technique that chalk currently uses to handle cycles, and also the key ideas of how Rust handles elaboration.

The current implementation in chalk is really quite naive. One interesting question is how to make it more efficient. There is a lot of existing work on this topic from the Prolog community, naturally, with the work on the well-founded semantics being among the most promising (see e.g. this paper). I started doing some prototyping in this direction, but I’ve recently become intrigued with a different approach, where we use the techniques from Adapton (or perhaps other incremental computation systems) to enable fine-grained caching and speed up the more naive implementation. Hopefully this will be the subject of the next blog post!

Mozilla Open Policy & Advocacy BlogWelcome to San Francisco, Chairman Pai – We Depend on Net Neutrality

This is an open letter to FCC Chairman Ajit Pai as he arrives in San Francisco for an event. He has said that Silicon Valley is a magically innovative place – and we agree. An open internet makes that possible, and enables other geographical areas to grow and innovate too.

Welcome to San Francisco, Chairman Pai! As you have noted in the past, the Bay Area has been a hub for many innovative companies. Our startups, technology companies, and service providers have added value for billions of users online.

The internet is a powerful tool for the economy and creators. No one owns the internet – we can all create, shape, and benefit from it. And for the future of our society and our economy, we need to keep it that way – open and distributed.

We are very concerned by your proposal to roll back net neutrality protections that the FCC enacted in 2015 and that are currently in place. That enforceable policy framework provides vital protections to ensure that ISPs don’t act as gatekeepers for online content and services. Abandoning these core protections will hurt consumers and small businesses alike.

As network engineers have noted, your proposal mischaracterizes many aspects of the internet, and does not show that the 2015 open internet order would benefit anyone other than major broadband providers. Instead, this seems like a politically loaded decision made about rules that have not been tested, either in the courts or in the field. User rights, the American economy, and free speech should not be used as political footballs. We deserve more from you, an independent regulator.

Broadband providers are in a position to restrict internet access for their own business objectives: favoring their own products, blocking sites or brands, or charging different prices (either to users or to content providers) and offering different speeds depending on content type. Net neutrality prohibits network providers from discriminating based on content, so everyone has equal access to potential users – whether you are a powerful incumbent or an up-and-coming disruptive service. That’s key to a market that works.

The open internet aids free speech, competition, innovation and user choice. We need more than the hollow promises and wishful thinking of your proposal – we must have enforceable rules. And net neutrality enforcement under non-Title II theories has been roundly rejected by the courts.

Politics is a terrible way to decide the future of the internet, and this proceeding increasingly has the makings of a spectator sport, not a serious debate. Protecting the internet should not be a political, or partisan, issue. The internet has long served as a forum where all voices are free to be heard – which is critical to democratic and regulatory processes. These suffer when the internet is used to feed partisan politics. This partisanship also damages the Commission’s strong reputation as an independent agency. We don’t believe that net neutrality, internet access, or the open internet is – or ever should be – a partisan issue. It is a human issue.

Net neutrality is most essential in communities that don’t count giant global businesses as their neighbors like your hometown in Kansas. Without it, consumers and businesses will not be able to compete by building and utilizing new, innovative tools. Proceed carefully – and protect the entire internet, not just giant ISPs.

The post Welcome to San Francisco, Chairman Pai – We Depend on Net Neutrality appeared first on Open Policy & Advocacy.

Justin DolskePhoton Engineering Newsletter #15

I’m back from a vacation to see the eclipse, so it’s time for Newsletter #15! (It’s taking me some time to get caught up, so this update covers the last 2 or so weeks.)

As noted in my previous update, Mike and Jared took over Newsletter duties while I was out. If you somehow missed their excellent updates – Newsletter #13 and Newsletter #14 – please check them out. (Go ahead, I’ll wait.)

We’re getting very close to Firefox 57 entering Beta! Code merges to the Beta on September 20th, and the first Beta release should come on the 26th. The Photon project is targeting the 15th to be ready for Beta, just to make sure there’s a bit of time to spare. We’ll be continuing to fix bugs and improve polish during the Beta, but the type of fixes we make will begin to scale back, as we focus on making sure 57 is a rock-solid release. This means becoming increasingly risk-averse – there will always be bugs (and more releases to fix them in), so we very much want to avoid causing new regressions shortly before 57 ships to everybody. Last-minute firedrills are no fun for anyone. But we’re in really great shape right now – we’re done with feature development, are already shifting to more minor fixes, and there isn’t anything really scary waiting to be fixed.

Recent Changes

Menus/structure:

Animation:

Preferences:

  • One last P1 bug to feature complete!
  • Team to move to help out Onboarding once all P1 and important P3s are fixed.

Visual redesign:

Onboarding:

Performance:


Julien VehentLessons learned from mentoring

Over the last few weeks, a number of enthusiastic students have asked me when the registration for the next edition of Mozilla Winter of Security would open. I've been saddened to inform them that there won't be an edition of MWoS this year. I understand this is disappointing to many who were looking forward to working on cool security projects alongside experienced engineers, but the truth is we simply don't have the time, resources and energy to mentor students right now.


Firefox engineers are cramming through bugs for the Firefox 57 release, planned for November 14th. We could easily say "sorry, too busy making Firefox awesome, kthnksbye", but there is more to the story of not running MWoS this year than the release of 57. In this blog post, I'd like to explore some of these reasons, and maybe share tips with folks who would like to become mentors.


After running MWoS for 3 years, engaging with hundreds of students and personally mentoring about a dozen, I learned two fundamental lessons:

  1. The return on investment is extremely low, when it's not a direct loss to the mentor.
  2. Student engagement is very hard to maintain, and many are just in it for the glory.
Those are hard-learned lessons that somewhat shattered my belief in mentoring. Let's dive into each.

Return on investment

Many mentors will tell you that having an altruistic approach to mentoring is the best way to engage with students. That's true for short engagements, when you spare a few minutes to answer questions and give guidance, but it's utter bullshit for long engagements.
It is simply not realistic to ask engineers to invest two hours a week over four months without getting something out of it. Your time is precious, have some respect for it. When we initially structured MWoS, we made sure that each party (mentors, students and professors) would get something out of it, specifically:
  • Mentors get help on a project they would not be able to complete alone.
  • Students get a great experience and a grade as part of their school curriculum.
  • Professors get interesting projects and offload the mentoring to Mozilla.
Making sure that students received a grade from their professors helped maintain their engagement (but only to some extent; more on that later), and ensured professors approved of the cost a side project would impose on their very busy students.
The part that mattered a lot for us, mentors, besides helping train the next generation of engineers, was getting help on projects we couldn't complete ourselves. After running MWoS for three years and over a few dozen projects, the truth is we would be better off writing the code ourselves in the majority of cases. The time invested in teaching students would be better used implementing the features we're looking for, because even when students completed their projects, the code quality was often too low for the features to be merged without significant rewrites.

There have been exceptions, of course, and some teams have produced code of good quality. But those have been the exceptions, not the rule. The low return on investment (and often negative return when mentors invested time into projects that did not complete), meant that it became increasingly hard for busy engineers to convince their managers to dedicate 5 to 10% of their time supporting teams that will likely produce low quality code, if any code at all.
It could be said that we sized our projects improperly, and made them too complex for students to complete. It's a plausible explanation, but at the same time, we have not observed a correlation between project complexity and completion. This leads into the next point.

Student engagement is hard to maintain

You would imagine that a student who is given the opportunity to work with Mozilla engineers for several months would be incredibly engaged, and drop everything for the opportunity to work on interesting, highly visible, very challenging projects. We've certainly seen students like that, and they have been fantastic to work with. I remain friends with a number of them, and it's been rewarding to see them grow into accomplished professional who know way more about the topics I mentored them on than I do today. Those are the good ones. The exceptions. The ones that keep on going when your other mentoring projects keep on failing.

And then, you have the long tail of students who have very mixed interest in their projects. Some are certainly overwhelmed by their coursework and have little time to dedicate to their projects. I have no issue with overwhelmed students, and have repeatedly told many of my mentees to prioritize their coursework and exams over MWoS projects.

The ones that rub me the wrong way are students that are more interested in getting into MWoS than actually completing their projects. This category of resume-padding students cares more about the notoriety of the program than about the work they accomplish. They are very hard to notice at first, but after a couple years of mentoring, you start to see the patterns: eagerness to name-drop, a GitHub account filled with forks of projects and no authored code, vague technical answers during interview questions, constant mention of their references and people they know, etc.
When you mentor students that are just in it for the glory, the interest in the project will quickly drop. Here's how it usually goes:
  • By week 2, you'll notice students have no plan to implement the project, and you find yourself holding their hands through the roadmap, sometimes explaining concepts so basic you wonder how they could not be familiar with them yet.
  • By week 4, students are still "going through the codebase to understand how it is structured", and have no plans to implement the project yet. You spend meetings explaining how things work, and grow frustrated by their lack of research. Did they even look at this since our last meeting?
  • By week 6, you're pretty much convinced they only work on the project for 30min chunks when you send them a reminder email. The meetings are becoming a drag, a waste of a good half hour in your already busy week. Your tone changes and you become more and more prescriptive, less and less enthusiastic. Students nod, but you have little hope they'll make progress.
  • By week 8, it's the mid-term, and no progress is made for another month.
You end up cancelling the weekly meeting around week 10, and ask students to contact you when they have made progress. You'll hear back from them 3 months later because their professor is about to grade them. You wonder how that's going to work, since the professor never showed up to the weekly meeting, and never contacted you directly for an assessment. Oh well, they'll probably get an A just because they have Mozilla written next to their project...

This is a somewhat overly dramatic account of a failed engagement, but it's not at all unrealistic. In fact, in the dozen projects I mentored, this probably happened on half of them.
The problem with lowly-engaged students is that they are going to drain your motivation away. There is a particular light in the eye of the true nerd-geek-hacker-engaged-student that makes you want to work with them and guide them through their mistakes. That's the reward of a mentor, and it is always missing from students that are not engaged. You learn to notice it after a while, but often long after the damage done by the opportunists has taken away your interest in mentoring.

Will MWoS rise from the ashes?

The combination of low return on investment and poorly engaged students, in addition to a significant increase in workload, made us cancel this year's round. Maybe next year, if we find the time and energy, we will run MWoS again. It's also possible that other folks at Mozilla, and in other organizations, will run similar programs in the future. Should we run it again, we would be a lot stricter on filtering students, and make sure they are ready to invest a lot of time and energy into their projects. This is fairly easy to do: throw them a challenge during the application period, and check the results. "Implement a crude Diffie-Hellman chat on UDP sockets, you've got 48 hours", or anything along those lines, along with a good one hour conversation, ought to do it. We were shy to ask those questions at first, but it became obvious over the years that stronger filtering was desperately needed.

For folks looking to mentor, my recommendation is to open your organization to internships before you do anything else. There's a major difference in productivity between interns and students, mostly because you control 100% of an intern's daily schedule, and can make sure they are working on the tasks you assign to them. Interns often complete their projects and provide direct value to the organization. The same cannot be said of mentees in the MWoS program.

Firefox NightlyDeveloper Tools visual refresh coming to Nightly

Good evening, Nightly friends! As the UX designer for DevTools, I’ve been working on fresh new themes for Firefox 57. My colleague Gabriel Luong is handling the implementation and will be landing new syntax colors in Nightly soon. I want to give you a preview of the new changes and explain some of the reasoning behind them. I’ll also be inviting you to test the new design and give feedback.

57: New icon, new colors, new tabs

Firefox 57’s new design—codenamed Photon—features vibrant colors and bold, modern styling. Aligning with Photon was the main goal of this DevTools restyling, and my hope was to use this opportunity to improve the usability of the tools with cleaner interfaces and more readable text.

The new DevTools tab bar is a simpler version of the new Firefox tab bar. Compared to the old tabs, this means fewer lines, slightly more padding, and subtler use of color.

New DevTools tabs


In dark mode, all the slate blues have been replaced with deep grays, and the sidebars are a darker shade to give more visual priority to the center column.

New dark debugger


Syntax highlighting was the most challenging part of this project due to the abundance of opinions and the lack of solid research. To keep my decisions as data-informed as possible, I referenced the following resources:

  • Each color was checked for accessible contrast levels to keep the new themes AA-compliant.
  • This study on syntax highlighting showed that it’s beneficial to highlight a larger variety of keywords with different colors.
  • This study on computer readability concluded that, while light themes are generally better than dark themes for readability, many people have a good experience with chromatic dark themes that feature the universal favorite color: blue.
  • Using the Sim Daltonism app, the themes were informally checked for color blindness conditions.

In addition, I wanted to move away from the use of red for non-error text, and mostly use cool colors accented with warm colors. After some experimentation in the browser toolbox, a blue/magenta/navy theme emerged based on the Photon design system colors.

The old design used translucency to de-emphasize <head>, <script> tags, and hidden elements, which made them a bit difficult to read. For the new design, head and script tags will be treated normally, since they tend to be some of the most important elements in HTML. Hidden divs and other elements will be desaturated instead of translucent.

Firefox's current syntax highlighting

Old HTML/CSS

New syntax highlighting - HTML/CSS

New HTML/CSS

For the dark theme, I aimed for a slightly lower-contrast, calmer theme, intended for lengthy screen-staring sessions in dimmer rooms. (There’s a huge variety in the contrast levels of popular dark themes, but for this project, it felt important to balance the light theme’s high contrast with a lower-contrast theme.) The bold Photon colors looked too glaring against a dark background, so I created a more pastel version of each color for this theme.

Old HTML/CSS (dark)

Old HTML/CSS (dark)

New HTML/CSS (dark)

New HTML/CSS (dark)

For JavaScript in the Debugger, I added a few extra colors to allow for more variation than what the previous theme had—for example, keywords and properties will now be different colors. These mockups show the general color direction, but exact highlighting patterns are under discussion and will continue to be developed.

New dark and light JS highlighting schemes

New JS colors (tentative)

Feedback Wanted

These changes should be arriving in a few days. Much more polish is planned, so if you have any feedback, I’d love to hear it! I know dealing with UI changes can be jarring, but try it out for a couple days in your usual workflow and let me know what you think. I hope to hear from both developers and designers working in all different kinds of environments, and I’m especially interested in hearing from users with accessibility needs.

You can send me feedback either through this Discourse thread or by talking to me on Twitter. Thank you!

Air MozillaAutomating Web Accessibility Testing

Automating Web Accessibility Testing A conclusion to my internship on automating web accessibility testing.

QMOFirefox 56 Beta 8 Testday Results

As you may already know, last Friday – September 1st – we held a new Testday event, for Firefox 56 Beta 8.

Thank you Fahima Zulfath A,  Surentharan, P Avinash Sharma and Surentharan R.A for helping us make Mozilla a better place.

It seems that due to technical problems the Bangladesh team did not receive a reminder for this event; we hope to see them at our next events.

Note that on September 15th we are organizing Firefox Developer Edition 56 Beta 12 Testday.

Results:
– several test cases executed for Form Autofill and Media Block Autoplay features;

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

The Mozilla BlogA Copyright Vote That Could Change the EU’s Internet

On October 10, EU lawmakers will vote on a dangerous proposal to change copyright law. Mozilla is urging EU citizens to demand better reforms.


On October 10, the European Parliament Committee on Legal Affairs (JURI) will vote on a proposal to change EU copyright law.

The outcome could sabotage freedom and openness online. It could make filtering and blocking online content far more routine, affecting the hundreds of millions of EU citizens who use the internet every day.

Dysfunctional copyright reform is threatening Europe’s internet

Why Copyright Reform Matters

The EU’s current copyright legal framework is woefully outdated. It’s a framework created when the postcard, and not the iPhone, was a reigning communication method.

But the EU’s proposal to reform this framework is in many ways a step backward. Titled “Directive on Copyright in the Digital Single Market,” this backward proposal is up for an initial vote on October 10 and a final vote in December.

“Many aspects of the proposal and some amendments put forward in the Parliament are dysfunctional and borderline absurd,” says Raegan MacDonald, Mozilla’s Senior EU Policy Manager. “The proposal would make filtering and blocking of online content the norm, effectively undermining innovation, competition and freedom of expression.”

Under the proposal:

  • If the most dangerous amendments pass, everything you put on the internet will be filtered, and even blocked. It doesn’t even need to be commercial — some proposals are so broad that even photos you upload for friends and family would be included.

  • Linking to and accessing information online is also at stake: extending copyright to cover news snippets will restrict our ability to learn from a diverse selection of sources. Sharing and accessing news online would become more difficult through the so-called “neighbouring right” for press publishers.

  • The proposal would remove crucial protections for intermediaries, and would force most online platforms to monitor all content you post — like Wikipedia, eBay, software repositories on GitHub, or DeviantArt submissions.

  • Only scientific research institutions would be allowed to mine text and datasets. This means countless other beneficiaries — including librarians, journalists, advocacy groups, and independent scientists — would not be able to make use of mining software to understand large data sets, putting Europe at a competitive disadvantage in the world.

Mozilla’s Role

In the weeks before the vote, Mozilla is urging EU citizens to phone their lawmakers and demand better reform. Our website and call tool — changecopyright.org — makes it simple to contact Members of European Parliament (MEPs).

This isn’t the first time Mozilla has demanded common-sense copyright reform for the internet age. Earlier this year, Mozilla and more than 100,000 EU citizens dropped tens of millions of digital flyers on European landmarks in protest. And in 2016, we collected more than 100,000 signatures calling for reform.

Well-balanced, flexible, and creativity-friendly copyright reform is essential to a healthy internet. Agree? Visit changecopyright.org and take a stand.

Note: This blog has been updated to include a link to the reform proposal.

The post A Copyright Vote That Could Change the EU’s Internet appeared first on The Mozilla Blog.

Tantek ÇelikMy First #Marathon @theSFMarathon

Morning of the race

We woke up so early that morning of July 23rd — I cannot remember exactly when. So early that maybe I blocked it out. My friend Zoe had crashed on my couch the night before — she decided just a month before to race with me in solidarity.

Having laid out our race kits the night before, we quickly got ready. I messaged my friend & neighbor Michele who came over and called a Lyft for us. The driver took us within a couple of blocks of Harrison & Spear street, and we ran the rest of the way. After a quick pitstop we walked out to the Embarcadero to find our starting waves.

I only have a few timestamps from the photos I took before the race.

The Embarcadero and Bay Bridge still lit with lights at dawn

05:32. The Bay Bridge lights with a bit of dawn’s orange glow peeking above the East Bay mountains.

We sent Michele on her way to wave 3, and Zoe & I found our way around the chain link fences to join the crowd in wave 5.

Tantek, a police officer in full uniform & cap, and Zoe lined up in wave 5 of the San Francisco Marathon 2017

05:47. We found a police officer in the middle of the crowd, or rather he found us. He had seen our November Project tagged gear and shouted out “Hey November Project! I used to run that in Boston!” We shared our experiences running the steps at Harvard Stadium. Glancing down we noticed he had a proper race bib and everything. He was doing the whole race in his full uniform, including shoes.

Tantek and Zoe in wave 5 waiting for the start of the San Francisco Marathon

05:52. We took a dawn photo of us in the darkness with the Bay Bridge behind us. Zoe calls this the “When we were young and naïve” shot.

We did a little warm up run (#no_strava) back & forth on the Embarcadero until just minutes before our wave start time.

Sunrise behind the Bay Bridge

05:57. I noticed the clouds were now glowing orange, reflections glistening on the bay, and took another photo. Wave 5 got a proper sunrise send-off.

As we were getting ready to start, Zoe told me to feel free to run ahead, that she didn’t want to slow me down.

In the weeks leading up to the race, all my runner friends had told me: enjoy yourself, don’t focus on time. So that’s what I told Zoe: Don’t worry about time, we’re going to support each other and enjoy ourselves. She quietly nodded. We were ready.

The San Francisco Marathon 2017 Full Marathon Course Map

Map of the 2017 full marathon course from the official website.

Mile 1

06:02. We started with the crowd. That was the last timestamp I remember.

We ran at an easy sub-10-minute/mile pace up The Embarcadero; you could see the colors of everything change by the minute as the sun rose.

It took deliberate concentration to keep a steady pace and not let the excitement get to me. I focused on keeping an even breath, an even pace.

That first mile went by in mere moments. I remember feeling ready, happy, and grateful to be running with a friend. With all that energy and enthusiasm from the crowd, it felt effortless. More like gliding than running. Then poof, there went the Mile 1 sign.

Mile 2

It was like gliding until the cobblestones in Fisherman’s Wharf. The street narrowed and they were hard to avoid. The cobblestones made running awkward and slowed us down. Noted for future races.

Mile 3

Running into Aquatic park was a relief.

Yes this is NPSF Mondays home territory, look out. We got excited and picked up the pace. Latent Monday morning competitive memories of so many burnout sprints on the sand. Turned the cove corner and ran up the ramp to the end of Van Ness.

Hey who just shouted my name from the sidelines? It was Lindsay Bolt at a med station. Stopped for a lightning hug and sprinted back to catch Zoe.

Back on asphalt, made it to the Fort Mason climb. Let’s do this.

Time to stretch those hill climbing legs. Strava segment PR NBD, even with a 1 min walk at the top at our 5/1 run/walk pace. Picked up the run just in time for...

Those smiles, that beard. Yes none other than perhaps two of the most positive people in NPSF, Tony & Holly! This race just keeps getting better and better.

These two are so good at sharing and beaming their joy out into the world, it lifts you off the ground. Seriously. I felt like I was running on air, flying.

Fort Mason downhill, more NPSF Mondays home territory. Glanced at my watch to see low-6min/mile pace (!!!). I know I’m supposed to be taking it easy, but it felt like less work to lean forward and flow with gravity’s downhill pull, rather than resist.

Mile 4

Slight veer to the right then left crossing the mile 3 marker to the Marina flats which brought me back to my sustainable ~10min/mile pace.

Somehow it got really crowded. We had caught up to a slower group and had to slalom back and forth to get through them. It was hard to keep a consistent pace. Slowed to about a 10-11min/mile.

Just as we emerged from the slow cluster, the path narrowed and sent us on a zig towards the beach away from Mason street. Then left, another left, and right back onto Mason street after the mile 4 marker.

What was the point of this momentum-killing jut out towards the bay? They couldn’t figure out some other place to put that distance? Really hope they fix it next year.

Mile 5

The long and fairly straight stretch of Mason street was a nice relief. Though it was at this point that I first felt like I had to pee. I figured I could probably ignore it for a bit, especially with the momentum we had picked up.

I should note that Zoe and I had been run/walking 5min/1min intervals this entire time, maybe fudging them a bit to overlap with the water stations so we could walk at each one. We grabbed a cup of water every time. One cup only.

So it was with the station before the mile 5 marker. That station was particularly well placed, right before one of the biggest hills in the course.

Mile 6

We flew by the mile 5 marker and started the uphill grind towards the bridge. I just ran this hill 3 weeks ago. Piece of cake I thought.

Practicing hills for a race course is a huge confidence booster, because nearly everyone else slows down, even slowing to a walk, because hills seem to intrinsically evoke fear in runners, likely mostly fear of the unknowns. How long is this hill? Am I going to run out of energy/steam/breath trying to run up it? Am I going to tire myself out? Practicing a hill removes such mysteries and you know just how long you’ll have to push how hard to summit it how fast. Then you can run uphill with confidence, knowing full well how much energy it will take to get to the top.

Despite all that, hills are still the hardest thing for me. Zoe quickly outpaced me and pulled ahead. I kept her in sight.

We kept a nice 5/1 run/walk pace. And while running up the hill, I glanced at my heart monitor to pace myself and keep it just under 150bpm.

Now for the bridge. Did I mention the view running up to the bridge? I did not, because there was almost no view of the bridge, just a blanket of fog in the Marina.

On the bridge we could see maybe a few hundred meters in front of us, and just the base of the towers. @karlthefog was out stronger than I’ve seen in any SF Marathon of the past four years. And I was quite grateful because I’d forgotten to put on sunscreen.

Mile 7

That blanket of fog also meant nearly no views, which meant nearly no one stopping to selfie in the middle of the bridge. This was the smoothest I have ever seen a race run over the Golden Gate Bridge.

The initial uphill on the bridge went by faster than I ever remember. As the road flattened approaching the halfway point, it started to feel like it was downhill. I couldn’t tell if that was an illusion from excitement or actually gravity.

Sometime after the midpoint, as the bridge cables started to rise once again, I finally saw my first NP racer wearing a November Project tagged shirt coming the other way. He was a tall guy that I did not recognize, likely visiting from another city. We shouted “NP” and high fived as we passed. Smack.

Mile 8

As we crossed the bridge into Marin, the fog thinned to reveal sunlit hills in front of us. Pretty easy loop around the North Vista Point parking lot, biggest challenge was dodging everyone stopping for gu. It was nice to get a bit of sunshine.

We looped back onto the bridge with just enough momentum to keep up a little speed, with the North tower in sight.

Mile 9

The Golden Gate Bridge felt even faster on the way back, and it actually felt good to run back into the fog. Sunglasses off.

We picked up even more speed as the grade flattened, eventually becoming a downhill as we approached the South Tower. That mile felt particularly fast.

Mile 10

Launching into the tenth mile with quite a bit of momentum, I kept us running a bit longer than the five minutes of our 5/1 run/walk, flying around the turns until the bottom of the downhill right turn onto Lincoln Boulevard.

I didn’t know it at the time, but I had just set PRs for the Strava segments across the bridge, having run it significantly faster than any practice runs.

Flying run turned into fast walk, we shuffled up the Lincoln climb at a good clip, which felt less steep than ever before.

Fast walked right up to the aid station, our run/walk timing had worked out well. After we downed a cup of water each and started running again, we both related that a quick bathroom stop would be a good idea, and agreed to take a pee-break at the next set of porta-potties.

Mile 11

One more run/walk up to the top of the Lincoln hill. Been here many times, whether running the first half of the SF Marathon, or coming the other direction in the Rock & Roll SF half, or running racing friends up to the top. Again it felt less steep than before.

All those Friday NPSF hillsforbreakfast sessions followed by Saturday mornings with SFRC running trails in the Marin headlands had prepared me to keep pushing even after 10 miles. Zoe pulled ahead, stronger on the uphills.

We knew going in that we had different strengths, she was faster up the hills and I was faster down them, so we encouraged each other to go faster when we could, figuring we would sync-up on the flats.

Having reached the end of our 1 minute walk as we crested the hill, we picked up our run, I leaned forward and let gravity pull me through. Zooming down the hill faster than I’d expected, by the time I walked through the water stop at the bottom I had lost sight of Zoe. I kept walking and looking but couldn’t see her.

Apparently I had missed the porta-potties by the aid station, she had not, and had stopped as we had agreed.

Mile 12

Crossing mile marker 11, I turned around and started walking backwards, hoping to see Zoe. A few people looked at me like I was nuts but I didn’t care, I was walking uphill backwards nearly as fast as some were shuffling forwards. And I knew from experience that walking backwards works your muscles very differently, so I looked at it as a kind of super active-recovery.

After walking nearly a half mile backwards I finally spotted Zoe running / fast walking to catch-up; I think she spotted me first.

Just after we sync’d back up, and switched back to walking, a swing-dancing friend of mine who I had not seen in years spotted me and cheered us on at 27th & Clement!

We finally got to the top of the Richmond hill (at Anza street I think), and could see Golden Gate Park downhill in front of us.

Mile 12 was my slowest mile of the race, just after my fastest (mile 11). We picked up the pace once more.

Mile 13

We sped into the park, and slowed once we hit the uphill approaching the aid station there. I remember this point in the course very clearly from last year’s first half. At that point last year my knees were unhappy and I was struggling to finish. This year was a different story. Yes I felt the hill, however, my joints felt solid. Ankles, knees, hips all good. A little bit of soreness in my left hip flexor but nothing unmanageable.

However this hill did not feel easy like the others. Not sure if that was due to being tired or something else.

Making a note to practice this hill in particular if (when) I plan to next run the first half of the SF Marathon (maybe next year).

Speaking of, just after the aid station this is where they divide up first half and full marathon runners. At the JFK intersection, the half runners turn left with a bit more uphill toward their last sprint to the finish, and the marathoners turn right, downhill towards the beach.

I have lost count of the number of times I have run down JFK to the beach, in races like Bay to Breakers, and Sunday training runs in Golden Gate Park. Zoe & I in particular have run this route more times than I can remember. This was super familiar territory and very easy for us to get into a comfortable groove and just go.

Mile 14

As we flew past the mile 13 marker, we high-fived (as we did at every mile marker we passed together), and I told Z hey we’re basically halfway done, we totally got this!

This part of JFK is always so enjoyable — a sweeping curving downhill with broad green meadows and a couple of lakes.

I saw the aid station at Spreckels Lake and gave Z a heads-up that I needed to take a quick pit stop.

Ran back into the fray and while I knew we were passing the bison on our right, I don’t actually remember looking over to see any. I think we were too focused on the road in front of us.

Mile 15

The mile 14 marker seemed to come up even quicker, maybe because we briefly stopped just a half mile or so before. Seeing that “14” had a huge impact on me, a number I had never before run up to in any race.

I remembered from the course map that we were approaching where the second half marathoners were going to start.

We turned left toward MLK drive, right by the second half start, and there was no sign of the second half marathoners.

My dad was running the second half, originally in wave 9, and we had thoughts of somehow trying to cross paths during our races. Not only was he long gone, but he had ended up starting in wave 5, and the second half overall started 15 minutes earlier than expected. Regardless I knew there was very little chance of catching him since all the second half runners were long gone.

MLK drive is a bit of a long uphill slog and we naturally slowed down a bit. It finally started to feel like “work” to get to the mile 15 marker.

Mile 16

Right after the mile 15 marker we zigged left then right onto the forgettably named Middle drive, which I had not run in quite a while, if ever. I vaguely remembered rollerblading on it many years ago.

The pavement was a bit rougher, and the slow uphill slog continued. I decided I would chew half of one of my caffeinated cherry Nuun energy tablets at the next aid station, swallowing it with water.

The half tablet started to fizz as I chewed it so I was happy to wash it down. The fizziness felt a bit odd in my stomach. So far in the race I had had zero stomach problems or weirdnesses, so this was maybe not the greatest idea. Yeah, that thing about don’t change your fuel on raceday, that. I was mostly ok, but I think the fizziness threw me off.

I wasn’t really enjoying this part of the race, despite it being in Golden Gate park. I wasn’t hating it either. It just felt kind of meh.

Mile 17

Crossing the mile 16 marker and high-fiving I remember thinking, only ten-ish miles left, that doesn’t seem so bad. Turning right back onto JFK felt good though, finally we were back in familiar territory.

Then I remembered we still had to run up and around Stow lake. When I saw the course map I remember looking forward to that, but at this point I felt done with hills and was no longer looking forward to it.

After we turned right and started running up towards Stow Lake, I decided to walk and wait to sync up with Z, which was good timing it turns out. My friend Michele (who started a couple of waves before us) was just finishing Stow Lake and on her way down that same street.

She expressed that she wasn’t feeling too good, I told her she looked great and she smiled. We hugged, she told me and Zoe that it was only about 15 minutes to go around the lake and come back down, which made it feel more doable.

Still, it continued to feel like “work”. As we ran past the back (South) side of the lake, it was nice to have a bit of downhill, especially down to the next mile marker.

Mile 18

Crossing the mile 17 marker I turned to Zoe and told her hey, less than ten miles left! Single digits! She managed a smile. We kept pushing up and around the lake.

The backside of the lake felt easier since I knew the downhill to JFK was coming up. Picked up speed again, and then walked once I reached JFK, waiting for Zoe to catch back up.

We could see the first half marathoners finishing to our left, and I had flashbacks to how I felt finishing the first half last year. I was feeling a lot better this year at mile 17+ than last year at mile 13+, and I actually felt pretty good last year. That was a huge confidence boost.

As they got their finishers medals, we had an uphill to climb toward the de Young museum tower. This was really the last major hill. Once we crested it and could see the mile 18 marker, knowing it was mostly downhill made it feel like we didn’t have that far to go.

Mile 19

More familiar territory on JFK. Another aid station as we passed the outdoor "roller rink" on the left. The sun finally started to break through the clouds & fog, and we could see blue skies ahead.

I chatted with Z a bit as we passed the Conservatory of Flowers, about how we have done this run so many times, and how it was mostly downhill from here.

Up ahead I heard a couple of people shouting my name and then saw the sign.

Tim Johnson holding a sign 'Tantek: faster than fiber optics' cheering at mile 19 in the San Francisco Marathon

Photo by Amanda Blauvelt. Tim & Amanda surprised me with a sign at the edge of Golden Gate Park! (you can see me in the orange on the left).

I couldn’t help laughing. Ran up and hugged them both. Background: Last year Amanda ran the SF Marathon (her first full), and I conspired with her best friend from out of town to have her show up and surprise Amanda at around mile 10 by jumping in and running with her. The turnabout surprise was quite appreciated.

In my eager run up to Tim & Amanda, I somehow lost Zoe.

First I paused and looked around: looked ahead to see if she had run past me, and did not see her; looked behind me to see if she was approaching, and also did not see her.

I picked up the pace figuring she may have run past me when I saw Tim and Amanda, or I would figure it out later. (After the race Tim told me they saw Zoe moments after I had left).

The race looped back into Golden Gate park for a bit.

Mile 20

Passing the mile 19 marker, the course took us under a bridge, up to and across Stanyan street onto Haight street, the last noticeable uphill.

This was serious home territory for me, having run up Haight street to the market near Ashbury more times than I can remember.

Tantek running on Haight street just after crossing Ashbury.

Photo by mom. I saw my mom cheering at the intersection of Haight & Ashbury, and positioned myself clear of other runners because I knew she was taking photos. Then I went to her, hugged her, told her I love her, and asked where dad was. An hour ahead of me. No way I’m going to catch him before the finish.

I could see the mile 20 marker, but just as I was passing Buena Vista park on my right, I heard another familiar voice cheering me on. Turning to look I immediately recognized my friend Leah who helped get me into running in the first place, by encouraging me to start with very short distances.

She asked if I wanted company because she had to get in a 6-7 mile run herself and I said sure! Leah asked if I wanted to run quietly, or for her to talk or listen, and I said I was happy to listen to her talk about anything and appreciated the company.

I told her about how I’d lost Zoe earlier. Leah put Zoe’s info into the SF Marathon app on her phone to track Zoe’s progress to see if we could find her as we ran.

We were crushing it down the hill to Divisadero literally passing everyone else around us (downhills are my jam), and she was surprised at how well I looked and sounded so far into the race, at this point farther than I’d ever run before.

Mile 21

As we flew by the mile 20 marker, I remember thinking wow 20 miles and I feel great. I felt like I could just keep running on Clif blocks and Nuun electrolytes for hours. It was an amazing feeling of strength and confidence.

I realized I was doing something I thought I would never do, but more than that, it felt sustainable. I felt unstoppable.

My hip flexors were both a bit sore now, but at least they were evenly sore, which helped both balance things out, and then forget about them. My knees were just a tiny bit sore now, but again, about the same on both sides.

Just as we reached Scott street, they started redirecting racers up Scott to Waller. One more tiny uphill block, I remember complaining and then thinking just gotta push through. Up to Waller street then again a slight eastward downhill.

Once again picking up speed, I really started to enjoy all the cheering from folks who had come out of their houses to cheer us on. There was a family with kids offering small cups of water and snacks to the runners.

As we approached the last block before Buchanan street, I could hear a house on the North side blasting the Chariots of Fire theme song on huge speakers. Louder than the music I was listening to. Brilliant for that last Waller street block which happened to be uphill. Of course it was a boost.

Making the right turn to run down Buchanan street, we only made it a block before they redirected us eastward down Hermann street to the Market street crossing and veering right onto Guerrero.

Running these familiar streets felt so easy and comfortable.

Once again we picked up speed running downhill, barely slowing down to pick up two cups of Nuun at the aid station before the mile 21 marker.

Mile 22

We kept running South on Guerrero until the course turned East again at 16th street.

16th street in the Mission is a bit of a mess. Lots of smells, from various things in the street to the questionable oily meats spewing clouds of smoke from questionable grills. I think this was my least favorite stretch of the race. Literally disgusting.

The smells didn’t clear until about Folsom street. Still relatively flat, I knew we had a climb coming up to Bryant street, so I was mentally ready for it.

Just before we reached Bryant street, they redirected us South one block onto 17th street.

Still no sign of Zoe. With all these race route switches I was worried that we had been switched different ways, and would have difficulty finding each other.

The racer tracking app was also fairly inaccurate. In several places it showed Zoe as being literally right by us, or just ahead or just behind when she was nowhere to be seen.

Mile 23

Slow climb up to Potrero. It’s not very enticing running there. Mostly industrial. Still felt familiar enough, we just pressed on, occasionally looking for Zoe.

Leah kept up a nice friendly distracting dialog that helped this fairly unremarkable part of the course go by quicker than it otherwise would have.

Another aid station, more Nuun. I started to feel I wasn’t absorbing fluids as fast as I had been earlier. Something also felt a bit off about my stomach. Not sure if it was the fizzing from the cherry Nuun tablet I had chewed on. Or the smells of 16th street.

I only sipped half a cup of Nuun and tossed the rest.

We were almost at 280, turned briefly down Mississippi street for a block, then over on Mariposa to cross underneath 280, and I could see the mile 23 marker just on the other side.

Mile 24

Downhill to Indiana street so we flew right by the marker.

Twenty-three miles done. Just a little over 5km left.

Made a hard right onto Indiana street where it flattened out once more. We had entered the industrial backwaters of the Dogpatch.

Still run/walking at about a 5 to 1 split, but I was starting to slowly feel more tired. No “wall” like I have often heard about. I wondered if the feeling was really physical, or just mental.

Maybe it was just the street and the few memories I had associated with it. Some just two years old, some older. Nothing remarkable. Maybe this was my chance to update my memories of Indiana street.

The sun was shining, and I was running. Over 23 miles into my first marathon and I still felt fine. There were scant few out cheering on this stretch. But I knew the @Nov_Project_SF cheerstation wasn’t far.

The sound of two people shouting my name brought my attention back to my surroundings. My friends @Nov_Project Ava and Tara had run backwards along the course from the cheerstation!

They checked in with me, asked how I was doing. I was able to actually answer while running which was a good sign. They ran with me a bit and then sprinted ahead a few blocks to just past the next corner.

Turning onto 22nd street, I grabbed another half cup of Nuun. At this point I did not feel like eating anything, my stomach had an odd half-full not-hungry feeling. I sipped the Nuun and tossed the cup.

There were Ava & Tara again, cheering me on, like a personal traveling cheersquad. So grateful. I’m convinced smiling helps you go faster, and especially when friends are cheering you on. They sprinted on ahead again and I lost sight of them.

Finally the turn onto 3rd street. There is something very powerful about feeling like you are finally heading directly towards the finish.

It was getting warmer, and the sweat was making it harder to see. This is the point where I was glad I had brought my sunglasses with me, despite the thick clouds this morning. No clouds remained. Just clear blue skies.

Kept going through Dog Patch and China Basin, really not the most attractive places to run. Except once again I saw Ava & Tara up ahead at 20th street, and they cheered us through the corner, and then disappeared again.

Just one block East on 20th and then North again onto Illinois street. I could see the next marker.

Mile 25

Just over a couple of miles left. Slight right swerve onto Terry A Francois Boulevard, and I could see and hear the very excited Lululemon Cheerstation waving their signs, shouting, and cheering on all of us runners.

Then perhaps the second best part of the race. Actually maybe tied for best with finishing.

I saw brightly colored neon shirts up ahead and heard a roar. (I’m having trouble even writing this four weeks later without tearing up.)

The November Project San Francisco cheerstation. What a sight even from a distance.

My friend Caity Rogo ran towards me & Leah, and I had this thought like I should be surprised to see her but I couldn’t remember why.

Leah and Tantek running with Caity beside them right before the NPSF cheergang at the San Francisco Marathon 2017

Photo by Kirstie Polentz. I do not remember what I said to Caity. Later I would remember that just the day before she was away running a Ragnar relay race! Somehow she had made it back in time to cheer.

At this point my cognitive bandwidth was starting to drop. I had just enough to focus on the race, and pay attention to the amazing friends cheering and accompanying me.

Tantek running through the November Project cheergang

Photo by Lillian Lauer. So many high fives. So many folks pacing me. I think there were hugs? It was kind of a blur. I asked and found out Zoe was about 2 min ahead of me, so I picked up the pace in an attempt to catch up to her.

Tantek with Nuun cup walking next to Henri asking him how he is doing during the San Francisco Marathon 2017

Photo by Kirstie Polentz. I remember Henry Romeo asking me what I wanted from the next water station, running ahead, bringing me a Nuun electrolyte cup, and keeping me company for a bit.

After snapping a few photos, my pal Krissi ran with me despite a recent calf injury, grinning with infectious joy and confidence. She ran me past the mile 25 marker, checking to make sure I was ok, how I was feeling, etc.

As good as I thought I was feeling before, the cheer station was a massive boost.

Mile 26

Found Zoe again! Or rather she saw me. She was walking slowly or had stopped and was looking for me.

Having reconnected I checked in with her, how was everything feeling. We kept up our run/walk, with still a bit over a mile left.

Apparently there was a ballgame on at AT&T park. I couldn’t help but feel a sharp contrast between the sports fans on one side of the race barrier and runners on the other. Each of us was doing our own thing. A few sports fans cheered us on and reached across to give out high fives which we gladly accepted.

Finally we made it around the ballpark and out to the Embarcadero, our home stretch. Half mile or so to go.

We were all tired, with various body parts aching, and yet did our best to keep up a decent pace.

Leah peeled off at mile 26, shouting encouragements for us to push hard to the finish.

Finish

Past the mile 26 marker we curved a little to the left and could see the finish just a few blocks in front of us.

I talked Zoe into keeping up a regular pace as we approached the finish line. Checking to make sure she was good and still smiling, I picked up the pace with whatever energy I had, just to see how many people I could pass in the last 400 meters.

I actually saw people slowing down, which felt like an enticement to go even faster. I sprinted the last 100m as fast as I could, passing someone with just feet to go to the finish. Maybe a silly bit of competitiveness, but it’s always felt right to push hard to a finish, using any motivation at hand.

5:35:59.

I kept walking and got my finishers medal.

Zoe and Tantek at the finish of the San Francisco Marathon wearing their medals

Turning around I found Zoe. We had someone take our photo. We had done it. Marathon finishers!

We kept walking and found my dad. We picked up waters & fruit cups and saw my mom & youngest sister on the other side of the barriers.

Tantek and Zoe stretching after finishing the San Francisco Marathon

Photo by Ayşan Çelik. We stopped to stretch our legs and take more photos.

We found more @Nov_Project friends. I stopped by the Nuun booth and kept refilling my cup, and Steve gave me a big hug too.

I was a little sore in parts, but nothing was actually hurting. No blood, no limping, no pain. Just a blister on one left toe, and one on my right heel that had already popped. Slight chafing on my right ankle where my shoe had rubbed against it.

I felt better than after most of my past half marathon races. Something was different.

Whether it was all the weekly hours of intense Vinyasa yoga practice, from the March through May yoga teacher training @YogaFlowSF and since, or the months of double-gang workouts @Nov_Project_SF (5:30 & 6:30 every Wednesday morning), or doing nearly all my long runs on Marin trails Saturday mornings hosted by @SFRunCo in Mill Valley, setting new monthly meters climbing records leading up to the race, I was stronger than ever before, physically and mentally. Something had changed.

I had just finished my first marathon, and I felt fine.

Tantek wearing the San Francisco Marathon 52 Club hoodie, finisher medal, and 40 for 40 medal.

I waited til I got home to finally put on my San Francisco Marathon “52 Club” hoodie (for having run the first half last year, and the second half the year before that), with the medals of course.

As much as all the training prepared me as an individual, the experience would not have been the same without the incredible support from fellow @Nov_Project runners, from my family, even just knowing my dad was ahead of me running the second half, Leah and other friends that jumped in and ran alongside, and especially starting & finishing with my pal Zoe, encouraging each other along the way.

Grateful for having the potential, the opportunity to train, and all the community, friends, and family support. Yes it took a lot of personal determination and hard work, but it was all the support that made the difference. And yes, we enjoyed ourselves.

(Thanks to Michele, Zoe, Krissi, and Lillian for reviewing drafts of this post, proofreading, feedback, and corrections! Most photos above were posted previously and link to their permalinks. The few new to this post are also on Instagram.)

Cameron KaiserIrma's silver lining: text is suddenly cool again

In Gopherspace (proxy link if your browser doesn't support it), plain text with low bandwidth was always cool and that's how we underground denizens roll. But as our thoughts and prayers go to the residents of the Caribbean and Florida peninsula being lashed by Hurricane Irma, our obey your media thought overlords newspapers and websites are suddenly realizing that when the crap hits the oscillating storm system, low-bandwidth text is still a pretty good idea.

Introducing text-only CNN. Yes, it's really from CNN. Yes, they really did it. It loads lickety-split in any browser, including TenFourFox and Classilla. And if you're hunkered down in the storm cellar and the radio's playing static and all you can get is an iffy 2G signal from some half-damaged cell tower miles away, this might be your best bet to stay current.

Not to be outdone, there's a Thin National Public Radio site too, though it only seems to have quick summaries instead of full articles.

I hope CNN keeps this running after Irma has passed because we really do need less crap overhead on the web, and in any disaster where communications are impacted, low-bandwidth solutions are the best way to communicate the most information to the most people. Meanwhile, please donate to the American Red Cross and the Salvation Army (or your relief charity of choice) to help the victims of Hurricanes Harvey and Irma today.

Andy McKayMy third Gran Fondo

Yesterday was my third Gran Fondo; the last was in 2016.

Last year was a bit of an odd year, I knew what to face, yet I struggled. I was planning on correcting that this year.

The most important part of the Fondo is the months and months of training beforehand. This year that went well. Up to this point I've been on the bike for 243 hours and 5,050km over 198 bike rides. I only ended up doing Mt Seymour 3 times, but rides with Steed around parts of North and West Vancouver gave me some extra hill practice.

I managed to lose 20lbs over the training, but gained a lot of muscle mass, especially in my legs. I also did the challenge route of the Ride to Conquer Cancer with some awesome Mozilla friends. The weekend before, I did the same route 3 times, and on the last day I hit a pile of personal records.

Two equipment changes also helped. I had a computer to tell me how fast I was going (yeah, should have had one earlier) and I moved from Mountain Bike pedals over to Shimano road pedals.

So, knowing what I was facing, I had a slightly different plan, focusing on my nemesis: the last hour of the ride. To do that I focused on:

  • Drafting on the flats where I can
  • Taking energy gels every hour to replenish electrolytes
  • Not charging up every hill
  • Going for a faster cadence in a lower gear
  • Saving the energy for the last half (same as last year)

As the day arrived a new challenge appeared. It was raining. Pretty much the entire bloody way.

The first part felt good; I knew what time I would have to arrive at each rest stop to beat the last time. I made it to the first stop 13 mins ahead of schedule, but then made it to the next stop only about 10 mins ahead of schedule. Then the sticky piece of plastic with the times on it flew off.

At this point I was getting anxious, I seemed to be slowing down. All I could remember was the time I needed to be at the last rest stop. Then came the hills.

The differences here were: the rain was keeping me cool so I wasn't dehydrating (the energy gels helped too), I knew my pace, and I had energy in my legs. Over the last 20 km I floored it (well, comparatively for me), whereas in previous years I just fell apart. The whole second half of the race was personal records.

The result? I ended up crossing at 4h 44m. That's 17 minutes faster than a younger version me.

Today, my knees, wrists and other parts of my body all hurt and I skipped the Steed ride. But other than that I'm feeling not too bad.

Also, I signed up for the Fondo next year. I'm going to get below 4hr 30min next year.

QMOFirefox Developer Edition 56 Beta 12, September 15th

Hello Mozillians!

We are happy to let you know that on Friday, September 15th, we are organizing the Firefox Developer Edition 56 Beta 12 Testday. We’ll be focusing our testing on the following new features: Preferences Search, CSS Grid Inspector Layout View, and Form Autofill.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Hacks.Mozilla.OrgMeta 2 AR Headset with Firefox

One of the biggest challenges in developing immersive WebVR experiences today is that immersion takes you away from your developer tools. With Meta’s new augmented reality headset, you can work on and experience WebVR content today without ever taking a headset on or off, or connecting developer tools to a remote device. Our friends at Meta have just released their Meta 2 developer kit and it works right out of the box with the latest 64-bit Firefox for Windows.

The Meta 2 is a tethered augmented reality headset with six degrees of freedom (6DOF). Unlike existing 3D mobile experiences like Google Cardboard, the Meta 2 can track both your orientation (three degrees of freedom) and your position (another three degrees). This means that not only can you look at 3D content, you can also move towards and around it. (3+3 = 6DOF).

In the video above, talented Mozilla engineer Kip Gilbert is editing the NYC Snowglobe demo with the A-Frame inspector on his desktop. After he edits the project, he just lifts his head up to see the rendered 3D scene in the air in front of him.  Haven’t tried A-Frame yet? It’s the easiest way for web developers to build interactive 3D apps on the web. Best of all, Kip didn’t have to rewrite the snowglobe demo to support AR. It just works! Meta’s transparent visor combined with Firefox enables this kind of seamless 3D development.

The Meta 2 is stereoscopic and also has a 90-degree field of view, creating a more immersive experience on par with a traditional VR headset. However, because of the see-through visor, you are not isolated from the real world. The Meta 2 attaches to your existing desktop or laptop computer, letting you work at your desk without obstructing your view, then just look up to see virtual windows and objects floating around you.

In this next video, Kip is browsing a Sketchfab gallery. When he sees a model he likes he can simply look up to see the model live in his office. Thanks to the translucent visor optics, anything colored black in the original 3D scene automatically becomes transparent in the Meta 2 headset.

Meta 2 is designed for engineers and other professionals who need to both work at a computer and interact with high performance visualizations like building schematics or a detailed 3D model of a new airplane. Because the Meta 2 is tethered it can use the powerful GPU in your desktop or laptop computer to render high definition 3D content.

Currently, the Meta team has released Steam VR support and is working to add support for hands as controllers. We will be working with the Meta engineers to transform their native hand gestures into JavaScript events that you can interact with in code. This will let you build fully interactive, high-performance 3D apps right from the comfort of your desktop browser. We are also using this platform to help us develop and test proposed extensions for AR devices to the existing WebVR specification.

You can get your own Meta 2 developer kit and headset on the Meta website. WebVR is supported in the latest release version of Firefox for Windows, with other platforms coming soon.

Mozilla Addons BlogLast chance to migrate your legacy user data

If you are working on transitioning your add-on to use the WebExtensions API, you have until about mid-October (a month before Firefox 57 lands to allow time for testing and migrating), to port your legacy user data using an Embedded WebExtension.

This is an important step in giving your users a smooth transition because they can retain their custom settings and preferences when they update to your WebExtensions version. After Firefox 57 reaches the release channel on November 13, you will no longer be able to port your legacy data.

If you release your WebExtensions version after the release of Firefox 57, your add-on will be enabled again for your users, and they will still keep their settings, provided you ported the data beforehand. This is necessary because WebExtensions APIs cannot read legacy user settings, and legacy add-ons are disabled in Firefox 57. In other words, even if your WebExtensions version won’t be ready until after Firefox 57, you should still publish an Embedded WebExtension before Firefox 57 in order to retain user data.
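As a rough illustration of how that hand-off can work, the legacy bootstrap code can start the embedded WebExtension and answer a one-time message with the old preference values, which the WebExtension side then writes into its own storage. This is only a minimal sketch: the file split, the 'import-legacy-settings' message name, and the extensions.myaddon.colour preference are made up for the example.

// bootstrap.js (legacy side): hand the old settings to the embedded WebExtension.
const {utils: Cu} = Components;
Cu.import('resource://gre/modules/Services.jsm');

function startup({webExtension}) {
  webExtension.startup().then(({browser}) => {
    browser.runtime.onMessage.addListener((msg, sender, sendReply) => {
      if (msg === 'import-legacy-settings') {
        // Read whatever legacy preferences your add-on used and reply with them.
        sendReply({colour: Services.prefs.getCharPref('extensions.myaddon.colour')});
      }
    });
  });
}

// background.js (embedded WebExtension side): ask once and persist the answer.
browser.runtime.sendMessage('import-legacy-settings').then(settings => {
  return browser.storage.local.set(settings);
});

The final, pure WebExtensions version can then read the migrated values back with browser.storage.local.get() and never needs to touch the legacy preferences again.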

When updating to your new version, we encourage you to adopt these best practices to ensure a smooth transition for your users.

The post Last chance to migrate your legacy user data appeared first on Mozilla Add-ons Blog.

Mozilla Security BlogMozilla Releases Version 2.5 of Root Store Policy

Recently, Mozilla released version 2.5 of our Root Store Policy, which continues our efforts to improve standards and reinforce public trust in the security of the Web. We are grateful to all those in the security and Certificate Authority (CA) communities who contributed constructively to the discussions surrounding the new provisions.

The changes of greatest note in version 2.5 of our Root Store Policy are as follows:

  • CAs are required to follow industry best practice for securing their networks, for example by conforming to the CA/Browser Forum’s Network Security Guidelines or a successor document.
  • CAs are required to use only those methods of domain ownership validation which are specifically documented in the CA/Browser Forum’s Baseline Requirements version 1.4.1.
  • Additional requirements were added for intermediate certificates that are used to sign certificates for S/MIME. In particular, such intermediate certificates must be name constrained in order to be considered technically-constrained and exempt from being audited and disclosed on the Common CA Database.
  • Clarified that point-in-time audit statements do not replace the required period-of-time assessments. Mozilla continues to require full-surveillance period-of-time audits that must be conducted annually, and successive audit periods must be contiguous.
  • Clarified the information that must be provided in each audit statement, including the distinguished name and SHA-256 fingerprint for each root and intermediate certificate in scope of the audit.
  • CAs are required to follow and be aware of discussions in the mozilla.dev.security.policy forum, where Mozilla’s root program is coordinated, although they are not required to participate.
  • CAs are required at all times to operate in accordance with the applicable Certificate Policy (CP) and Certificate Practice Statement (CPS) documents, which must be reviewed and updated at least once every year.
  • Our policy on root certificates being transferred from one organization or location to another has been updated and included in the main policy. Trust is not transferable; Mozilla will not automatically trust the purchaser of a root certificate to the level it trusted the previous owner.

The differences between versions 2.5 and 2.4.1 may be viewed on Github. (Version 2.4.1 contained exactly the same normative requirements as version 2.4 but was completely reorganized.)

As always, we re-iterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

Mozilla Security Team

The post Mozilla Releases Version 2.5 of Root Store Policy appeared first on Mozilla Security Blog.

Ehsan AkhgariQuantum Flow Engineering Newsletter #23

As was announced earlier today, Firefox 57 will be merged to the Beta channel on September 21, which is two weeks from today. That wraps up the long development cycle that has gone on for about a year now. We had a lot of ambitious plans when we started this effort, and a significant part of what we set out to deliver is either already shipped or landed on Nightly. It is now a good time to focus on making sure that what we have which isn’t shipped yet is as high quality as possible, by ensuring the crash rates are low, regressions are triaged and fixed in time, and remaining rough edges are smoothed out before the release. There are still a lot of ongoing projects in flight, and of course many open Quantum Flow bugs that have already been triaged but aren’t fixed yet. I’ll write more about what we plan to do about those later.

Let’s now have a quick look at where we are in the battle against the synchronous IPC performance bottlenecks. The TL;DR is: things are looking great, and we have solved most of this issue by now! This work has happened in 50+ bugs over the course of the past 8 months. The current telemetry data shows a very different picture of the synchronous IPC messages between our processes compared to how things looked back when we started. These graphs show where things are now on the C++ side and on the JS side. For comparison, you can see the latest graphs I posted about four weeks ago.

Sync IPC Analysis (2017-09-07)JS Sync IPC Analysis (2017-09-07)

Things are looking a lot different now compared to a month ago.  On the C++ side, the highest item on the list now is PAPZCTreeManager::Msg_ReceiveMouseInputEvent, which is an IPC message with a mean time of 0.6ms, so not all that bad.  The reason this appears as #1 on the list is that it occurs a lot.  This is followed by PBrowser::Msg_SyncMessage and PBrowser::Msg_RpcMessage, which are the C++ versions of JS initiated synchronous IPCs, followed by PDocAccessible::Msg_SyncTextChangeEvent which is a super rare IPC message.  After that we have PContent::Msg_ClassifyLocal, which will probably be helped by a fix landed two days ago, followed by PCompositorBridge::Msg_FlushRendering (with a mean time of 2.2ms), PAPZCTreeManager::Msg_ReceiveScrollWheelInputEvent (with a mean time of 0.6ms), PAPZCTreeManager::Msg_ReceiveKeyboardInputEvent (with a mean time of 1.3ms) and PAPZCTreeManager::Msg_ProcessUnhandledEvent (with a mean time of 2.8ms).

On the JavaScript side, the messages in the top ten list that are coming from Firefox are contextmenu, Findbar:Keypress, RemoteLogins:findRecipes (which was recently almost completely fixed), Addons:Event:Run (which is a shim for legacy extensions which we will remove later but has no impact on our release population as of Firefox 57) and WebRequest:ShouldLoad (which was recently fixed).

As you’ll note, there is still the long tail of these messages to go through and keep on top of, and we need to keep watching our telemetry data to make sure that we catch other existing synchronous IPC messages if they turn into performance problems.  But I think at this point we can safely call the large umbrella effort under Quantum Flow to address this aspect of the performance problems we’ve had in Firefox done!  This couldn’t have been done in such a short amount of time without the help of so many people in digging through each one of these bugs, analyzing them, figuring out how to rework the code to avoid the need for the synchronous messaging between our processes, helping with reviews, etc.  I’d like to thank everyone who helped us get to this point.

In other exciting performance news, Stylo is now our default CSS engine and is riding the trains.  It’s hard to capture the impact of this project in terms of Talos improvements only, but we had some nonetheless!  Hopefully all the remaining issues will be addressed in time, to make Stylo part of the Firefox 57 release.  A big congrats to the Stylo team for hitting this milestone.

With this, I’d like to take a moment to thank the hard work of everyone who helped make Firefox faster during the past week.  I hope I’m not forgetting any names.

Mozilla Addons BlogTell your users what to expect in your WebExtensions version

The migration to WebExtensions APIs is picking up steam, with thousands of compatible add-ons now available on addons.mozilla.org (AMO). To ensure a good experience for the growing number of users whose legacy add-ons have been updated to WebExtensions versions, we’re encouraging developers to adopt the following best practices.

(If your new version has the same features and settings as your legacy version, your users should get a seamless transition once you update your listing, and you can safely ignore the rest of this post.)

If your new version has different features, is missing legacy features, or requires additional steps to recover custom settings, please do one or both of the following.

Update your AMO listing description

If your new version did not migrate with all of its legacy features intact, or has different features, please let your users know in the “About this Add-on” section of your listing.

If your add-on is losing some of its legacy features, let your users know if it’s because they aren’t possible with the WebExtensions API, or if you are waiting on bug fixes or new APIs to land before you can provide them. Include links to those bugs, and feel free to send people to the forum to ask about the status of bug fixes and new APIs.

Retaining your users’ settings after upgrade makes for a much better experience, and there’s still time to do it using Embedded WebExtensions. But if this is not possible for you and there is a way to recover them after upgrade, please include instructions on how to do that, and refer to them in the Version notes. Otherwise, let your users know which settings and preferences cannot be recovered.

Add an announcement with your update

If your new version is vastly different from your legacy version, consider showing a new tab to your users when they first get the update. It can be the same information you provide in your listing, but it will be more noticeable if your users don’t have to go to your listing page to see it. Be sure to show it only on the first update so it doesn’t annoy your users.

To do this, you can use the runtime.onInstalled API which can tell you when an update or install occurs:

function update(details) {
  if (details.reason === 'install' || details.reason === 'update') {
    browser.tabs.create({url: 'update-notes.html'});
  }
}

browser.runtime.onInstalled.addListener(update);

This will open the page update-notes.html from the extension when an install or update occurs.

For greater control, the runtime.onInstalled event also lets you know when the user updated and what their previous version was so you can tailor your release notes.
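For instance, here is a minimal sketch of how that could look; the page names and the choice to key the notes off the previous major version number are illustrative assumptions, not requirements of the API:

function handleInstalled(details) {
  if (details.reason === 'install') {
    // First install: show a general welcome page.
    browser.tabs.create({url: 'welcome.html'});
  } else if (details.reason === 'update') {
    // details.previousVersion tells us which version the user is upgrading from,
    // so the notes can focus on what actually changed for them.
    const previousMajor = details.previousVersion.split('.')[0];
    browser.tabs.create({url: 'update-notes.html?from=' + previousMajor});
  }
}

browser.runtime.onInstalled.addListener(handleInstalled);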

Thank you

A big thanks to all the developers who have put in the effort to migrate to the WebExtensions API. We are here to support you, so please reach out if you need help.

The post Tell your users what to expect in your WebExtensions version appeared first on Mozilla Add-ons Blog.

Georg FritzscheRecording new Telemetry from add-ons

One of the successes for Firefox Telemetry has been the introduction of standardized data types: histograms and scalars.

They are well defined and allow teams to autonomously add new instrumentation. As they are listed in machine-readable files, our data pipeline can support them automatically and new probes just start showing up in different tools. A definition like this enables views like this.

Measurements Dashboard for max_concurrent_tab_count.

This works great when shipping probes in the Firefox core code, going through our normal release and testing channels, which takes a few weeks.

Going faster

However, often we want to ship code faster using add-ons: this may mean running experiments through Test Pilot and SHIELD or deploying Firefox features through system add-ons.

When adding new instrumentation in add-ons, there are two options:

  • Instrumenting the code in Firefox core, then waiting a few weeks until it reaches release.
  • Implementing a custom ping and submitting it through Telemetry, requiring additional client and pipeline work.

Neither is satisfactory; both require significant manual effort just to run simple experiments or add features.

Filling the gap

This is one of the main pain points that comes up around adding new data collection, so over the last few months we have been planning how to solve it.

As the scope of an end-to-end solution is rather large, we are currently focused on getting the support built into Firefox first. This can enable some use-cases right away. We can then later add better and automated integration in our data pipeline and tooling.

The basic idea is to use the existing Telemetry APIs and seamlessly allow them to record data from new probes as well. To enable this, we will extend the API with registration of new probes from add-ons at runtime.

The recorded data will be submitted with the main ping, but in a separate bucket to tell them apart.

What we have now

We now support add-on registration of events, starting with Firefox 56. We expect event recording to mostly be used with experiments, so it made sense to start here.

With this new addition, events can be registered at runtime by Mozilla add-ons instead of using a registry file like Events.yaml.

When starting, add-ons call nsITelemetry.registerEvents() with information on the events they want to record:

Services.telemetry.registerEvents("myAddon.ui", {
  "click": {
    methods: ["click"],
    objects: ["redButton", "blueButton"],
  }
});

Now, events can be recorded using the normal Telemetry API:

Services.telemetry.recordEvent("myAddon.ui", "click", "redButton");

These events will be submitted with the next main ping in the “dynamic” process section, and we can inspect them through about:telemetry.

On the pipeline side, the events are available in the events table in Redash. Custom analysis can access them in the main pings under payload/processes/dynamic/events.
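To give a rough idea of the shape of that data, here is a small sketch; the ping object below is a trimmed, illustrative stand-in for a parsed main ping, and it assumes each event is serialized as an array of the form [timestamp, category, method, object, value, extra], with the trailing fields optional:

// Trimmed, illustrative main ping; a real ping contains many more fields.
const ping = {
  payload: {
    processes: {
      dynamic: {
        // Assumed serialization: [timestamp, category, method, object, value, extra].
        events: [
          [1234, "myAddon.ui", "click", "redButton"],
        ],
      },
    },
  },
};

// Pull out the dynamic events belonging to a given category.
function dynamicEventsFor(parsedPing, category) {
  const processes = parsedPing.payload && parsedPing.payload.processes;
  const events = (processes && processes.dynamic && processes.dynamic.events) || [];
  return events.filter(event => event[1] === category);
}

console.log(dynamicEventsFor(ping, "myAddon.ui"));
// [[1234, "myAddon.ui", "click", "redButton"]]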

The larger plan

As mentioned, this is the first step of a larger project that consists of multiple high-level pieces. Not all of them are feasible in the short-term, so we intend to work towards them iteratively.

The main driving goals here are:

  1. Make it easy to submit new Telemetry probes from Mozilla add-ons.
  2. New Telemetry probes from add-ons are easily accessible, with minimal manual work.
  3. Uphold our standards for data quality and data review.
  4. Add-on probes should be discoverable from one central place.

This larger project then breaks down into roughly these main pieces:

Phase 1: Client work.

This is currently happening in Q3 & Q4 2017. We are focusing on adding & extending Firefox Telemetry APIs to register & record new probes.

Events are supported in Firefox 56, scalars will follow in 57 or 58, then histograms on a later train. The add-on probe data is sent out with the main ping.
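To make the direction more concrete, here is a purely hypothetical sketch of what runtime scalar registration might look like, mirroring the shape of the events API above; the registerScalars method name, the option keys and the scalar kind constant are assumptions about an API that has not shipped yet, not a documented interface:

// Hypothetical: runtime scalar registration is not available at the time of writing.
// The shape below simply mirrors registerEvents() and the fields used in Scalars.yaml.
Services.telemetry.registerScalars("myAddon.usage", {
  "toolbar_clicks": {
    kind: Services.telemetry.SCALAR_TYPE_COUNT,  // assumed constant
    keyed: false,
    record_on_release: false,
  }
});

// Recording would then go through the existing scalar API.
Services.telemetry.scalarAdd("myAddon.usage.toolbar_clicks", 1);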

Phase 2: Add-on tooling work.

To enable pipeline automation and data documentation, we want to define a variant of the standard registry formats (like Scalars.yaml). By providing utilities we can make it easier for add-on authors to integrate them.

Phase 3: Pipeline work.

We want to pull the probe registry information from add-ons together in one place, then make it available publicly. This will enable automation of data jobs, data discovery and other use-cases. From there we can work on integrating this data into our main datasets and tools.

The later phases are not set in stone yet, so please reach out if you see gaps or overlap with other projects.

Questions?

As always, if you want to reach out or have questions:


Recording new Telemetry from add-ons was originally published in Georg Fritzsche on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla Localization (L10N)L10n Report: September Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

In the past weeks we’ve added several languages to Pontoon, in particular from the Mozilla Nativo project:

  • Mixteco Yucuhiti (meh)
  • Mixtepec Mixtec (mix)
  • Quechua Chanka (quy)
  • Quichua (qvi)

We’ve also started localizing Firefox in Interlingua (ia), while Shuar (jiv) will be added soon for Firefox for Android.

New content and projects

What’s new or coming up in Firefox desktop

A few deadlines are approaching:

  • September 13 is the last day to make changes to Beta projects.
  • September 20 is merge day, and all strings move from Central to Beta. There are currently a few discussions about moving this date, but nothing has been decided yet. We’ll communicate through all channels if anything changes.

Photon in Nightly is almost ready for Firefox 57; only a few small changes still need to land for the onboarding experience. Please make sure to test your localization on clean profiles, and ask your friends to test and report bugs like mistranslations, strings not displayed completely in the interface, etc.

What’s new or coming up in Test Pilot

Firefox Send, together with SnoozeTabs, holds the record for the highest number of localizations in the Test Pilot family, with 38 languages completely translated.

For those interested in more technical details, Pontoon is now committing localizations for the Test Pilot project in a l10n branch. This also means that the DEV server URL has changed. Note that the link is also available in the project resources in Pontoon.

What’s new or coming up in mobile
  • Have you noticed that Photon is slowly but surely arriving on Firefox for Android Nightly version? The app is getting a visual refresh and things are looking bright and shiny! There’s a new onboarding experience, icons are different, the awesome bar has never been this awesome, tabs have a new look… and the whole experience is much smoother already! Come check it out.
  • Zapoteco and Belarusian will ship with the upcoming Firefox for Android 56 release.
What’s new or coming up in web projects
  • Mozilla.org:
    • This past month, we continued the trend of creating new pages to replace the old ones, with a new layout and color scheme. We will have several new pages in the works in September. Some are customized for certain markets, and others will have two versions for market testing.
    • Thanks to all the communities that have completed the new Firefox pages released for localization in late June. The pages will be moved to the new location at Firefox/… replacing the obsolete pages.
    • Germany is a focus market, with a few more customized pages than other locales.
    • New pages on mobile topics are expected in September and early October. Check the web dashboard and email communications for pending projects.
  • Snippets: We will have a series of snippet campaigns starting in early September, targeting users of many Mozilla products.
  • MOSS: the landing page was made available in Hindi, along with a press release, in time for the partnership announcement on August 31.
  • Legal: the Firefox Privacy Notice will be rewritten. Once localization is complete in a few locales, we will invite communities to review the translations.
What’s new or coming up in Foundation projects
  • Our call tool at changecopyright.org is live! Many thanks to everyone who participated in the localization of this campaign, let’s call some MEPs!
  • The IoT survey has been published, and adding new languages plus snippets made a huge difference. You can learn more in the accomplishments section below.
What’s new or coming up in Pontoon
  • Check out the brand new Pontoon Tools Firefox extension, which you can install from AMO! It brings notifications from Pontoon directly to your Firefox, but that’s just the beginning. It also shows you your team’s statistics and allows you to search for strings straight from Mozilla.org and SUMO. A huge shout out to its creator Michal Stanke, a long time Pontoon user and contributor!
  • We changed the review process by introducing the ability to reject suggestions instead of deleting them. Each suggestion can now be approved, unreviewed or rejected. This will finally make it easy to list all suggestions needing a review using the newly introduced Unreviewed Suggestions filter. To make the filter usable out of the box, existing suggestions were automatically rejected if an approved translation was available that had been approved after the suggestion was submitted. The final step in making unreviewed suggestions truly discoverable is to show them in dashboards. Thanks to Adrian, who only joined the Pontoon team in July and has already managed to contribute this important patch!
  • The Pontoon homepage will now redirect you to the team page you contribute to most. You can also pick a different team page or the default Pontoon homepage in your user settings. Thanks to Jarosław for the patch!
  • Editable team info is here! If you have manager permission, you can now edit the content of the Info tab on your team page:

  • Most teams use this place to give some basic information to newcomers. Thanks to Axel, who started the effort of implementing this feature and Emin, who took over!
  • The notification popup (opened by clicking on the bell icon) is no longer limited to unread notifications. Now it displays the latest 7 notifications, both read and unread. If there are more than 7 unread notifications, all of them are displayed.
  • Sync with version control systems is now 10 times faster and uses 12 times less computing power. Average sync time dropped from around 20 minutes to less than 2.
  • For teams that localize all projects in Pontoon, we no longer pull Machinery suggestions from Transvision, because they are already included in Pontoon’s internal translation memory. This has positive impact on Machinery performance and the overall string navigation performance. Transvision is still enabled for the following locales: da, de, es-ES, it, ja, nl, pl.
  • Thanks to Michal Vašíček, the Pontoon logo now looks much better on HiDPI displays.
  • Background issues have been fixed on in-context pages with a transparent background like the Thimble feature page.
  • What’s coming up next? We’re working on making searching and filtering of strings faster, which will also allow for loading, searching and filtering of strings across projects. We’re also improving the experience of localizing FTL files, adding support for using Microsoft Terminology during the translation process and adding API support.
Newly published localizer facing documentation
  • Community Marketing Kit: showcases ways to leverage existing marketing content, use approved graphic assets, and utilize social channels to market Mozilla products in your language.
  • AMO: details the product development cycle that impacts localization. AMO frontend will be revamped in Q4. The documentation will be updated accordingly.
  • Snippets: illustrates how to create locale-relevant snippets, or launch snippets in languages that are not on the default snippet locale list.
  • SUMO: covers the process to localize the product, which is different from localizing the articles.
Events
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

 

Accomplishments

We would like to share some good results

Responses by country (not locale), for the 32,000 responses to the privacy survey run by the Advocacy team back in March, localized in French and German.

It was good, but now let’s compare that with the responses by country for our IoT survey How connected are you? that received over 190,000 responses! We can see that the survey performed better in France, Germany and Italy than in the US. Spanish is underrepresented because it’s spread across several countries, but we expect the participation to be similar. These major differences are explained by the fact that we added support for three more languages, and promoted it with snippets in Firefox. This will give us way more diverse results, so thanks for your hard work everyone! This also helped get new people subscribed to our newsletter, which is really important for our advocacy activities, to fuel a movement for a healthier Internet.
The survey results might also be reused by scientists and included in the next edition of the Internet Health Report. How cool is that? Stay tuned for the results.

 

Friends of the Lion

Image by Elio Qoshi

  • Kabyle (kab) organized a Kab Mozilla Days on August, 18-19 in Algeria, discussing localization, Mozilla mission, open source and promotion of indigenous languages.
  • The Triqui (trs) community has made significant progress since the Asunción workshop; Triqui is now officially supported on mozilla.org. Congratulations!
  • Wolof (wo): Congrats to Ibra and Ibra (!) who have been keeping up with Firefox for Android work. They have now been added to multi-locale builds, which means they reach release at the same time as Firefox 57! Congrats guys!
  • Eduardo (eo): thanks for catching the mistake in a statement that appeared on mozilla.org. The paragraph has since been corrected, published and localized.
  • Manuel (azz) from Spain and Misael (trs) from Mexico met for the first time at the l10n workshop in Asunción, Paraguay. They bonded instantly! Misael will introduce his friends who are native speakers of Highland Puebla Nahuatl, the language Manuel is working on all by himself. He can’t wait to be connected with these professionals, to collaborate, and promote the language through Mozilla products.

 

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

 

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

 

David TellerBinary AST - Motivations and Design Decisions - Part 1

By Kannan Vijayan, Mike Hoye “The key to making programs fast is to make them do practically nothing.” - Mike Haertel, creator of GNU Grep. Binary AST - “Binary Abstract Syntax Tree” - is Mozilla’s proposal for specifying a binary-encoded syntax for JS with the intent of allowing browsers and other JS-executing environments to parse and load code as much as 80% faster than standard minified JS.

Mark CôtéDecisions, decisions, decisions: Driving change at Mozilla

As the manager responsible for driving the decision and process behind the move to Phabricator at Mozilla, I’ve been asked to write about my recent experiences, including how this decision was implemented, what worked well, and what I might have done differently. I also have a few thoughts about decision making both generally and at Mozilla specifically.

Please note that these thoughts reflect only my personal opinions. They are not a pronouncement of how decision making is or will be done at Mozilla, although I hope that my account and ideas will be useful as we continue to define and shape processes, many of which are still raw years after we became an organization with more than a thousand employees, not to mention the vast number of volunteers.


Mozilla has used Bugzilla as both an issue tracker and a code-review tool since its inception almost twenty years ago. Bugzilla was arguably the first freely available web-powered issue tracker, but since then, many new apps in that space have appeared, both free/open-source and proprietary. A few years ago, Mozilla experimented with a new code-review solution, named (boringly) “MozReview”, which was built around Review Board, a third-party application. However, Mozilla never fully adopted MozReview, leading to code review being split between two tools, which is a confusing situation for both seasoned and new contributors alike.

There were many reasons that MozReview didn’t completely catch on, some of which I’ve mentioned in previous blog and newsgroup posts. One major factor was the absence of a concrete, well-communicated, and, dare I say, enforced decision. The project was started by a small number of people, without a clearly defined scope, no consultations, no real dedicated resources, and no backing from upper management and leadership. In short, it was a recipe for failure, particularly considering how difficult change is even in a perfect world.

Having recognized this failure last year, and with the urging of some people at the director level and above, my team and I embarked on a process to replace both MozReview and the code-review functionality in Bugzilla with a single tool and process. Our scope was made clear: we wanted the tool that offered the best mechanics for code-review at Mozilla specifically. Other bits of automation, such as “push-to-review” support and automatic landings, while providing many benefits, were to be considered separately. This division of concerns helped us focus our efforts and make decisions clearer.

Our first step in the process was to hold a consultation. We deliberately involved only a small number of senior engineers and engineering directors. Past proposals for change have faltered on wide public consultation: by their very nature, you will get virtually every opinion imaginable on how a tool or process should be implemented, which often leads to arguments that are rarely settled, and even when “won” are still dominated by the loudest voices—indeed, the quieter voices rarely even participate for fear of being shouted down. Whereas some more proactive moderation may help, using a representative sample of engineers and managers results in a more civil, focussed, and productive discussion.

I would, however, change one aspect of this process: the people involved in the consultation should be more clearly defined, and not an ad-hoc group. Ideally we would have various advisory groups that would provide input on engineering processes. Without such people clearly identified, there will always be lingering questions as to the authority of the decision makers. There is, however, still much value in also having a public consultation, which I’ll get to shortly.

There is another aspect of this consultation process which was not clearly considered early on: what is the honest range of solutions we are considering? There has been a movement across Mozilla, which I fully support, to maximize the impact of our work. For my team, and many others, this means a careful tradeoff of custom, in-house development and third-party applications. We can use entirely custom solutions, we can integrate a few external apps with custom infrastructure, or we can use a full third-party suite. Due to the size and complexity of Firefox engineering, the latter is effectively impossible (also the topic for a series of posts). Due to the size of engineering-tools groups at Mozilla, the first is often ineffective.

Thus, we really already knew that code-review was a very likely candidate for a third-party solution, integrated into our existing processes and tools. Some thorough research into existing solutions would have further tightened the project’s scope, especially given Mozilla’s particular requirements, such as Mercurial support, which are in part due to a combination of scale and history. In the end, there are few realistic solutions. One is Review Board, which we used in MozReview. Admittedly we introduced confusion into the app by tying it too closely to some process-automation concepts, but it also had some design choices that were too much of a departure from traditional Firefox engineering processes.

The other obvious choice was Phabricator. We had considered it some years ago, in fact as part of the MozReview project. MozReview was developed as a monolithic solution with a review tool at its core, so the fact that Phabricator is written in PHP, a language without much presence at Mozilla today, was seen as a pretty big problem. Our new approach, though, in which the code-review tool is seen as just one component of a pipeline, means that we limit customizations largely to integration with the rest of the system. Thus the choice of technology is much less important.

The fact that Phabricator was virtually a default choice should have been more clearly communicated both during the consultation process and in the early announcements. Regardless, we believe it is in fact a very solid choice, and that our efforts are truly best spent solving the problems unique to Mozilla, of which code review is not.

To sum up, small-scale consultations are more effective than open brainstorming, but it’s important to really pay attention to scope and constraints to make the process as effective and empowering as possible.


Lest the above seem otherwise, open consultation does provide an important part of the process, not in conceiving the initial solution but in vetting it. The decision makers cannot be “the community”, at least, not without a very clear process. It certainly can’t be the result of a discussion on a newsgroup. More on this later.

Identifying the decision maker is a problem that Mozilla has been wrestling with for years. Mitchell has previously pointed out that we have a dual system of authority: the module system and a management hierarchy. Decisions around tooling are even less clear, given that the relevant modules are either nonexistent or sweepingly defined. Thus in the absence of other options, it seemed that this should be a decision made by upper management, ultimately the Senior Director of Engineering Operations, Laura Thomson. My role was to define the scope of the change and drive it forward.

Of course since this decision affects every developer working on Firefox, we needed the support of Firefox engineering management. This has been another issue at Mozilla; the directorship was often concerned with the technical aspects of the Firefox product, but there was little input from them on the direction of the many supporting areas, including build, version control, and tooling. Happily I found out that this problem has been rectified. The current directors were more than happy to engage with Laura and me, backing our decision as well as providing some insights into how we could most effectively communicate it.

One suggestion they had was to set up a small hosted test instance and give accounts to a handful of senior engineers. The purpose of this was to both give them a heads up before the general announcement and to determine if there were any major problems with the tool that we might have missed. We got a bit of feedback, but nothing we weren’t already generally aware of.

At this point we were ready for our announcement. It’s worth pointing out again that this decision had effectively already been made, barring any major issues. That might seem disingenuous to some, but it’s worth reiterating two major points: (a) a decision like this, really, any nontrivial decision, can’t be effectively made by a large group of people, and (b) we did have to be honestly open to the idea that we might have missed some big ramification of this decision and be prepared to rethink parts, or maybe even all, of the plan.

This last piece is worth a bit more discussion. Our preparation for the general announcement included several things: a clear understanding of why we believe this change to be necessary and desirable, a list of concerns we anticipated but did not believe were blockers, and a list of areas that we were less clear on that could use some more input. By sorting out our thoughts in this way, we could stay on message. We were able to address the anticipated concerns but not get drawn into a long discussion. Again, this can seem dismissive, but if nothing new is brought into the discussion, then there is no benefit to debating it. It is of course important to show that we understand such concerns, but it is equally important to demonstrate that we have considered them and do not see them as critical problems. However, we must also admit when we do not yet have a concrete answer to a problem, along with why we don’t think it needs an answer at this point—for example, how we will archive past reviews performed in MozReview. We were open to input on these issues, but also did not want to get sidetracked at this time.

All of this was greatly aided by having some members of Firefox and Mozilla leadership provide input into the exact wording of the announcement. I was also lucky to have lots of great input from Mardi Douglass, this area (internal communications) being her specialty. Although no amount of wordsmithing will ensure a smooth process, the end result was a much clearer explanation of the problem and the reasons behind our specific solution.

Indeed, there were some negative reactions to this announcement, although I have to admit that they were fewer than I had feared there would be. We endeavoured to keep the discussion focussed, employing the above approach. There were a few objections we hadn’t fully considered, and we publicly admitted so and tweaked our plans accordingly. None of the issues raised were deemed to be show-stoppers.

There were also a very small number of messages that crossed a line of civility. This line is difficult to determine, although we have often been too lenient in the past, alienating employees and volunteers alike. We drew the line in this discussion at posts that were disrespectful, in particular those that brought little of value while questioning our motives, abilities, and/or intentions. Mozilla has been getting better at policing discussions for toxic behaviour, and I was glad to see a couple people, notably Mike Hoye, step in when things took a turn for the worse.

There is also a point in which a conversation can start to go in circles, and in the discussion around Phabricator (in fact in response to a progress update a few months after the initial announcement) this ended up being around the authority of the decision makers, that is, Laura and myself. At this point I requested that a Firefox engineering director, in this case Joe Hildebrand, get involved and explain his perspective and voice his support for the project. I wish I didn’t have to, but I did feel it was necessary to establish a certain amount of credibility by showing that Firefox leadership was both involved with and behind this decision.

Although disheartening, it is also not surprising that the issue of authority came up, since as I mentioned above, decision making has been a very nebulous topic at Mozilla. There is a tendency to invoke terms like “open” and “transparent” without in any way defining them, evoking an expectation that everyone shares an understanding of how we ought to make decisions, or even how we used to make decisions in some long-ago time in Mozilla’s history. I strongly believe we need to lay out a decision-making framework that values openness and transparency but also sets clear expectations of how these concepts fit into the overall process. The most egregious argument along these lines that I’ve heard is that we are a “consensus-based organization”. Even if consensus were possible in a decision that affects literally hundreds, if not thousands, of people, we are demonstrably not consensus-driven by having both module and management systems. We do ourselves a disservice by invoking consensus when trying to drive change at Mozilla.

On a final note, I thought it was quite interesting that the topic of decision making, in the sense of product design, came up in the recent CNET article on Firefox 57. To quote Chris Beard, “If you try to make everyone happy, you’re not making anyone happy. Large organizations with hundreds of millions of users get defensive and try to keep everybody happy. Ultimately you end up with a mediocre product and experience.” I would in fact extend that to trying to make all Mozillians happy with our internal tools and processes. It’s a scary responsibility to drive innovative change at Mozilla, to see where we could have a greater impact and to know that there will be resistance, but if Mozilla can do it in its consumer products, we have no excuse for not also doing so internally.

Mozilla Open Innovation TeamBeing Open by Design

“We were born as a radically open, radically participatory organization, unbound to traditional corporate structure. We played a role in bringing the ‘open’ movement into mainstream consciousness.”

Mitchell Baker, Executive Chairwoman of Mozilla

“If external sources of innovation can reliably produce breakthrough and functional and novel ideas, a company has to find ways to bring those to market. They have to have programs that allow them to systematically work with those sources, invest in those programs.”

Karim Lakhani, Member of Mozilla’s Board of Directors

Mozilla’s origins are in the open source movement, and the concept of ‘working in the open’ has always been key to our identity. It’s embedded in our vision for the open Web, and in how we build products and operate as an organization. Mozilla relies upon open, collaborative practices — foremost open source co-development — to bring in external knowledge and contribution to many of our products, technologies, and operations.

However, the landscape of open has changed dramatically in the past years. There are over a thousand open source software projects in the world, and even open source hardware is now a widespread phenomenon. Even companies once considered unlikely to work with open source projects have opened up key properties, such as Microsoft opening .NET and Visual Studio to drive adoption and make them more competitive products. Companies with a longer history in open source continue to apply it strategically: Google’s open sourcing enough of TensorFlow will help them influence the future of AI development, while they continue to crowdsource a huge corpus of machine learning data through the use of their products. But more importantly, beyond these practices, there are now numerous methods for crowdsourcing ideas and expertise, and a worldwide movement around open innovation.

All this means: there’s much out there to learn from — even (or especially) for a pioneer of the open.

Turning the Mental Model into a Strategic Lever

There are many conceptions of Open Innovation in the industry. Mozilla takes a broad definition: the blurring of an organisation’s boundaries to take advantage of the knowledge, diversity, perspectives and ideas that exist beyond its borders. This requires several related things:

  • Being willing to search for ideas outside the organisation: Identify channels to create opportunities and systematically engage with a wide range of external resources.
  • Being willing and capable of acting upon those ideas: Integrating these external resources and ideas into the organisation’s own capabilities.

Mozilla’s Open Innovation Team has been formed to help implement a broad set of open and collaborative practices in our products and technologies. The main guiding principle of the team’s efforts is to foster “openness by design”, rather than by default. The latter is more of a mental model — strong, but abstract, broad and absolute. Often enough “openness by default” reflects an absence of strategic intent: without clarity on why you’re doing something, or what the intended outcomes are, your commitment to openness is likely to diminish over time. In comparison, “open by design” for us means to develop an understanding of how our products and technologies deliver value within an ecosystem. And to intentionally design how we work with external collaborators and contributors, both at the individual and organizational level, for the greatest impact and mutual value.

As part of our ongoing work we partnered with the Copenhagen Institute for Interaction Design (CIID) for a research project looking at how other companies and industries are leveraging open practices. The project reviewed a range of collaborative methods, including but also beyond open source.

Open Practices — uhm, means?

We define open practices as the ways an organization programmatically works with external communities — from hobbyists to professionals — to share knowledge, intellectual property, work, or influence in order to shape a market toward a specific business goal. Although many of these methods are not new, technology has often made them particularly attractive or useful (e.g. crowdsourcing at scale). Some are made possible only through technology (e.g. user telemetry). Used thoughtfully, open practices can simultaneously build vibrant communities and provide competitive advantage.

Together with CIID we identified a wealth of companies and organizations from which we finally picked seven current innovators in “open” to learn from. We tried to avoid examples where community participation was mainly a marketing tactic. Instead we focused on those in which community collaboration was fundamental to the business model. Many of these organisations also share similarities to Mozilla as a mission-driven organisation.

In a series of blog posts we will share insights into how the different companies deliberately apply open practices across their product and technology development. And we will also introduce a framework for open practices that we co-developed with CIID, structuring different methods of collaboration and interaction across organisational boundaries, which serves as a way to stimulate our thinking.

We hope that lessons learned from open and participatory practices in the technology sector are applicable across industries and that the framework and case studies will be useful to other organisations as they evaluate and implement open and participatory strategies.

If you’d like to learn more in the meantime, share thoughts or news about your projects, please reach out to the Mozilla Open Innovation team at openinnovation@mozilla.com.


Being Open by Design was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Robert O'Callahanrr 5.0 Released

I've released rr 5.0. Kyle convinced me that trace stability and portability were worth a major version bump.

Release notes:

  • Introduction of Cap'n Proto to stabilize the trace format. Recordings created in this rr release should be replayable in any future rr release. This is a plan, not a promise, since we don't know what might happen in the future, but I'm hopeful.
  • New rr pack command makes recordings self-contained.
  • Packed recordings from one machine can be replayed on a different machine by trapping CPUID instructions when supported on the replay machine. We don't have much experience with this yet but so far so good.
  • Brotli compression for smaller traces and lower recording overhead.
  • Improvements to the rr command-line arguments to ease debugger/IDE integration. rr replay now accepts a -- argument; all following arguments are passed to the debugger verbatim. Also, the bare rr command is now smarter about choosing a default subcommand; if the following argument is a directory, the default subcommand is replay, otherwise it is record.
  • Performance improvements, especially for pathological cases with lots of switching to and from the rr supervisor process.
  • Syscall support expanded.
  • Many bugs fixed.

Enjoy!

Frédéric WangReview of Igalia's Web Platform activities (H1 2017)

Introduction

For many years Igalia has been committed to, and dedicated efforts toward, the improvement of the Web Platform in all open-source Web Engines (Chromium, WebKit, Servo, Gecko) and JavaScript implementations (V8, SpiderMonkey, ChakraCore, JSC). We have been working on the implementation and standardization of some important technologies (CSS Grid/Flexbox, ECMAScript, WebRTC, WebVR, ARIA, MathML, etc.). This blog post contains a review of these activities performed during the first half (and a bit more) of 2017.

Projects

CSS

A few years ago Bloomberg and Igalia started a collaboration to implement a new layout model for the Web Platform. Bloomberg had complex layout requirements and what the Web provided was not enough and caused performance issues. CSS Grid Layout seemed to be the right choice, a feature that would provide such complex designs with more flexibility than the currently available methods.

We’ve been implementing CSS Grid Layout in Blink and WebKit, initially behind some flags as an experimental feature. This year, after some coordination effort to ensure interoperability (talking to the different parties involved like browser vendors, the CSS Working Group and the web authors community), it has been shipped by default in Chrome 58 and Safari 10.1. This is a huge step for the layout on the web, and modern websites will benefit from this new model and enjoy all the features provided by CSS Grid Layout spec.

Since CSS Grid Layout shares the same alignment properties as the CSS Flexible Box feature, a new spec has been defined to generalize alignment for all layout models. We started implementing this new spec as part of our work on Grid, with Grid being the first layout model to support it.

Finally, we worked on other minor CSS features in Blink such as caret-color or :focus-within and also several interoperability issues related to Editing and Selection.

MathML

MathML is a W3C recommendation to represent mathematical formulae that has been included in many other standards such as ISO/IEC, HTML5, ebook and office formats. There are many tools available to handle it, including various assistive technologies as well as generators from the popular LaTeX typesetting system.

After the improvements we performed in WebKit’s MathML implementation, we have regularly been in contact with Google to see how we can implement MathML in Chromium. Early this year, we had several meetings with Google’s layout team to discuss this in further details. We agreed that MathML is an important feature to consider for users and that the right approach would be to rely on the new LayoutNG model currently being implemented. We created a prototype for a small LayoutNG-based MathML implementation as a proof-of-concept and as a basis for future technical discussions. We are going to follow-up on this after the end of Q3, once Chromium’s layout team has made more progress on LayoutNG.

Servo

Servo is Mozilla’s next-generation web content engine based on Rust, a language that guarantees memory safety. Servo relies on a Rust project called WebRender which replaces the typical rasterizer and compositor duo in the web browser stack. WebRender makes extensive use of GPU batching to achieve very exciting performance improvements in common web pages. Mozilla has decided to make WebRender part of the Quantum Render project.

We’ve had the opportunity to collaborate with Mozilla for a few years now, focusing on the graphics stack. Our work has focused on bringing full support for CSS stacking and clipping to WebRender, so that it will be available in both Servo and Gecko. This has involved creating a data structure similar to what WebKit calls the “scroll tree” in WebRender. The scroll tree divides the scene into independently scrolled elements, clipped elements, and various transformation spaces defined by CSS transforms. The tree allows WebRender to handle page interaction independently of page layout, allowing maximum performance and responsiveness.

WebRTC

WebRTC is a collection of communications protocols and APIs that enable real-time communication over peer-to-peer connections. Typical use cases include video conferencing, file transfer, chat, or desktop sharing. Igalia has been working on the WebRTC implementation in WebKit and this development is currently sponsored by Metrological.

This year we have continued the implementation effort in WebKit for the WebKitGTK and WebKit WPE ports, as well as the maintenance of two test servers for WebRTC: Ericsson’s p2p and Google’s apprtc. Finally, a lot of progress has been done to add support for Jitsi using the existing OpenWebRTC backend.

Since OpenWebRTC development is not an active project anymore and given libwebrtc is gaining traction in both Blink and the WebRTC implementation of WebKit for Apple software, we are taking the first steps to replace the original WebRTC implementation in WebKitGTK based on OpenWebRTC, with a new one based on libwebrtc. Hopefully, this way we will share more code between platforms and get more robust support of WebRTC for the end users. GStreamer integration in this new implementation is an issue we will have to study, as it’s not built in libwebrtc. libwebrtc offers many services, but not every WebRTC implementation uses all of them. This seems to be the case for the Apple WebRTC implementation, and it may become our case too if we need tighter integration with GStreamer or hardware decoding.

WebVR

WebVR is an API that provides support for virtual reality devices in Web engines. Implementation and devices are currently actively developed by browser vendors and it looks like it is going to be a huge thing. Igalia has started to investigate on that topic to see how we can join that effort. This year, we have been in discussions with Mozilla, Google and Apple to see how we could help in the implementation of WebVR on Linux. We decided to start experimenting an implementation within WebKitGTK. We announced our intention on the webkit-dev mailing list and got encouraging feedback from Apple and the WebKit community.

ARIA

ARIA defines a way to make Web content and Web applications more accessible to people with disabilities. Igalia strengthened its ongoing commitment to the W3C: Joanmarie Diggs joined Richard Schwerdtfeger as a co-Chair of the W3C’s ARIA working group, and became editor of the Core Accessibility API Mappings, Digital Publishing Accessibility API Mappings (https://w3c.github.io/aria/dpub-aam/dpub-aam.html), and Accessible Name and Description: Computation and API Mappings specifications. Her main focus over the past six months has been to get ARIA 1.1 transitioned to Proposed Recommendation through a combination of implementation and bugfixing in WebKit and Gecko, creation of automated testing tools to verify platform accessibility API exposure in GNU/Linux and macOS, and working with fellow Working Group members to ensure the platform mappings stated in the various “AAM” specs are complete and accurate. We will provide more information about these activities after ARIA 1.1 and the related AAM specs are further along on their respective REC tracks.

Web Platform Predictability for WebKit

The AMP Project has recently sponsored Igalia to improve WebKit’s implementation of the Web platform. We have worked on many issues, the main ones being:

  • Frame sandboxing: Implementing sandbox values to allow trusted third-party resources to open unsandboxed popups or restrict unsafe operations of malicious ones.
  • Frame scrolling on iOS: Addressing issues with scrollable nodes; trying to move to a more standard and interoperable approach with scrollable iframes.
  • Root scroller: Finding a solution to the old interoperability issue about how to scroll the main frame; considering a new rootScroller API.

This project aligns with Web Platform Predictability which aims at making the Web more predictable for developers by improving interoperability, ensuring version compatibility and reducing footguns. It has been a good opportunity to collaborate with Google and Apple on improving the Web. You can find further details in this blog post.

JavaScript

Igalia has been involved in design, standardization and implementation of several JavaScript features in collaboration with Bloomberg and Mozilla.

In implementation, Bloomberg has been sponsoring implementation of modern JavaScript features in V8, SpiderMonkey, JSC and ChakraCore, in collaboration with the open source community:

  • Implementation of many ES6 features in V8, such as generators, destructuring binding and arrow functions
  • Async/await and async iterators and generators in V8 and some work in JSC
  • Optimizing SpiderMonkey generators
  • Ongoing implementation of BigInt in SpiderMonkey and class field declarations in JSC

On the design/standardization side, Igalia is active in TC39 with Bloomberg’s support.

In partnership with Mozilla, Igalia has been involved in the specification of various JavaScript standard library features for internationalization, in specification, implementation in V8, code reviews in other JavaScript engines, as well as working with the underlying ICU library.

Other activities

Preparation of Web Engines Hackfest 2017

Igalia has been organizing and hosting the Web Engines Hackfest since 2009. This event under an unconference format has been a great opportunity for Web Engines developers to meet, discuss and work together on the web platform and on web engines in general. We announced the 2017 edition and many developers already confirmed their attendance. We would like to thank our sponsors for supporting this event and we are looking forward to seeing you in October!

Coding Experience

Emilio Cobos has completed his coding experience program on implementation of web standards. He has been working in the implementation of “display: contents” in Blink but some work is pending due to unresolved CSS WG issues. He also started the corresponding work in WebKit but implementation is still very partial. It has been a pleasure to mentor a skilled hacker like Emilio and we wish him the best for his future projects!

New Igalians

During this semester we have been glad to welcome new igalians who will help us to pursue Web platform developments:

  • Daniel Ehrenberg joined Igalia in January. He is an active contributor to the V8 JavaScript engine and has been representing Igalia at the ECMAScript TC39 meetings.
  • Alicia Boya joined Igalia in March. She has experience in many areas of computing, including web development, computer graphics, networks, security, and software design with performance which we believe will be valuable for our Web platform activities.
  • Ms2ger joined Igalia in July. He is a well-known hacker of the Mozilla community and has wide experience in both Gecko and Servo. He has noticeably worked in DOM implementation and web platform test automation.

Conclusion

Igalia has been involved in a wide range of Web Platform technologies going from Javascript and layout engines to accessibility or multimedia features. Efforts have been made in all parts of the process:

  • Participation to standardization bodies (W3C, TC39).
  • Elaboration of conformance tests (web-platform-tests, test262).
  • Implementation and bug fixes in all open source web engines.
  • Discussion with users, browser vendors and other companies.

Although some of this work has been sponsored by Google or Mozilla, it is important to highlight how external companies (other than browser vendors) can make good contributions to the Web Platform, playing an important role in its evolution. Alan Stearns already pointed out the responsibility of Web Platform users in the evolution of CSS, while Rachel Andrew emphasized how any company or web author can effectively contribute to the W3C in many ways.

As mentioned in this blog post, Bloomberg is an important contributor of several open source projects and they’ve been a key player in the development of CSS Grid Layout or Javascript. Similarly, Metrological’s support has been instrumental for the implementation of WebRTC in WebKit. We believe others could follow their examples and we are looking forward to seeing more companies sponsoring Web Platform developments!

Mozilla Open Policy & Advocacy BlogMaking Privacy More Transparent

How do you make complex privacy information easily accessible and understandable to users?  At Mozilla, we’ve been thinking through this for the past several months from different perspectives: user experience, product management, content strategy, legal, and privacy.  In Firefox 56 (which releases on September 26), we’re trying a new idea, and we’d love your feedback.

Many companies, including Mozilla, present a Privacy Notice to users prior to product installation.  You’ll find a link to the Firefox Privacy Notice prominently displayed under the Firefox download button on our websites.

Our testing showed that less than 1% of users clicked the link to view the “Firefox Privacy Notice” before downloading Firefox.  Another source of privacy information in Firefox is a notification bar displayed within the first minute of a new installation.  We call this the “Privacy Info Bar.”

User testing showed this was a confusing experience for many users, who often just ignored it.  For users who clicked the button, they ended up in the advanced settings of Firefox.  Once there, some people made unintentional changes that impacted browser performance without understanding the consequences.  And because this confusing experience occurred within the first few minutes of using a brand new browser, it took away from the primary purpose of installing a new browser: to navigate the web.

We know that many Firefox users care deeply about privacy, and we wanted to find a way to increase engagement with our privacy practices.  So we went back to the drawing board to provide users with more meaningful interactions. And after further discovery and iteration, our solution, which we’re implementing in Firefox 56, is a combination of several product and experience changes.  Here are our improvements:

  1. Displaying the Privacy Notice as the second tab of Firefox for all new installs;
  2. Reformatting and improving the Firefox Privacy Notice; and
  3. Improving the language in the preferences menu.

We reformatted the Privacy Notice to make it more obvious what data Firefox uses and sends to Mozilla and others.  Not everyone uses the same features or cares about the same things, so we layered the notice with high-level data topics and expanders to let you dig into details based on your interest.  All of this is now on the second tab of Firefox after a new installation, so it’s much more accessible and user-friendly.  The Privacy Info Bar became redundant with these changes, so we removed it.

We also improved the language in the Firefox preferences menu to make data collection and choices clearer to users, and we used the same data terms in the preferences menu and privacy notice that our engineers use internally for data collection in Firefox.

These are just a few changes we made recently, but we are continuously seeking innovative ways to make the privacy and data aspects of our products more transparent.  Internally at Mozilla, data and privacy are topics we discuss constantly.  We challenge our engineers and partners to find alternative approaches to solving difficult problems with less data.  We have review processes to ensure the end-result benefits from different perspectives.  And we always consider issues from the user perspective so that privacy controls are easy to find and data practices are clear and understandable.

You can join the conversation on GitHub, or comment on our governance mailing list.

Special thanks to Michelle Heubusch, Peter Dolanjski, Tina Hsieh, Elvin Lee, and Brian Smith for their invaluable contributions to our revised privacy notice structure.

The post Making Privacy More Transparent appeared first on Open Policy & Advocacy.

Mozilla Future Releases BlogIt’s your data, we’re just living in it

Let me ask you a question: How often do you think about your Firefox data? I think about your Firefox data every day, like it’s my job. Because it is.  As the head of data science for Firefox, I manage a team of data scientists who contribute to the development of Firefox by shaping the direction of product strategy through the interpretation of the data we collect.  Being a data scientist at Mozilla means that I aim to ensure that Firefox users have meaningful choices when it comes to participating in our data collection efforts, without sacrificing our ability to collect useful, high-quality data that is essential to making smarter product decisions.

To achieve this balance, I’ve been working with colleagues across the organization to simplify and clarify our data collection practices and policies. Our goal is that this will make it easier for you to decide if and when you share data with us.  Recently, you may have seen some updates about planned changes to the data we collect, how we collect it, and how we share the data we collect. These pieces are part of a larger strategy to align our data collection practices with a set of guiding principles that inform how we work with and communicate about data we collect.

The direct impact is that we have made changes to the systems that we use to collect data from Firefox, and we have updated the data collection preferences as a result.  Firefox clients no longer employ two different data collection systems (Firefox Health Report and opt-in Telemetry). Although one was on by default, and the other was opt-in, as a practical matter there was no real difference in the type of data that was being collected by the two different channels in release.  Because of that, we now rely upon a single system called Unified Telemetry that has aspects of both systems combined into a single data collection platform and as a result no longer have separate preferences, as we did for the old systems.

If you are a long-time Firefox user and you previously allowed us to collect FHR data but you refrained from opting into extended telemetry, we will continue to collect the same type of technical and interaction information using Unified Telemetry. We have scaled back all other data collection to either pre-release or in situ opt-in, so you will continue to have choices and control over how Firefox collects your data.

Four Pillars of Our Data Collection Strategy

There are four key areas we focused on when we decided to adjust our data preference settings.  For Firefox, this means that any time we collect data, we want to ensure that the proposal for data collection meets our criteria for:

  • Necessity
  • Transparency
  • Accountability
  • Privacy

Necessity

We don’t collect data “just because we can” or “just because it would be interesting to measure”.  Anyone on the Firefox team who requests data has to be able to answer questions like:

  • Is the data collection necessary for Firefox to function properly? For example, the automatic update check must be sent in order to keep Firefox up to date.
  • Is data collection needed to make a feature of Firefox work well? For example, we need to collect data to make our search suggestion feature work.
  • Is it necessary to take a measurement from Firefox users?  Could we learn what we need from measuring users on a pre-release version of Firefox?
  • Is it necessary to get data from all users, or is it sufficient to collect data from a smaller sample?

Transparency

Transparency at Mozilla means that we publicly share details about what data we collect and ensure that we can answer questions openly about our related decision-making.

Requests for data collection start with a publicly available bug on Bugzilla. A request for new data collection generally works like this: someone indicates that they would like to collect some data according to a specification, they flag a data steward (an employee who is trained to check that requests have publicly documented their intentions and needs) for review, and only those requests that pass review are implemented.

Most simple requests, like new Telemetry probes or experimental tests, are approved within the context of a single bug.  We check that every simple request includes enough detail to answer a standard set of questions about the necessity and accountability of the proposed measurements.  Here’s an illustration of the kind of detail a simple request for new telemetry-based data collection includes.
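
As a rough, hypothetical sketch, a simple request might attach a probe definition along these lines.  The probe name, owner address, bug number, and values are invented for illustration; the field names follow the conventions of Firefox’s Histograms.json:

  "EXAMPLE_FEATURE_USAGE_COUNT": {
    "alert_emails": ["owner@example.com"],
    "bug_numbers": [1111111],
    "expires_in_version": "60",
    "kind": "exponential",
    "high": 1000,
    "n_buckets": 20,
    "releaseChannelCollection": "opt-out",
    "description": "Counts how often the (hypothetical) example feature is used per session."
  }

A data steward reviewing a request like this checks, among other things, that the description is accurate, that an owner and bug are listed, that an expiry version is set, and that the proposed collection fits its data category and the defaults allowed for it.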

More complex requests, such as those that call for a new data collection mechanism or a change to the privacy notice, receive more extensive review than a simple request.  Typically, data stewards or the requesters themselves escalate a request to this level of review when it is clear that a simple review is insufficient.  This review can involve some or all of the following:

  • Privacy analysis: Feedback from the mozilla.dev.privacy mailing list and/or privacy experts within and outside of Mozilla to discuss the feature and its privacy impact.
  • Policy compliance review: An assessment from the Mozilla data compliance team to determine if the request matches the Mozilla data compliance policies and documents.
  • Legal review: An assessment from Mozilla’s legal team, which is necessary for any changes to the privacy policies/notices.

Accountability

Our process includes a set of controls that hold us accountable for our data collection. We ensure that a named person is responsible for following the specification approved in data review, which includes designing and implementing the code as well as analyzing and reporting the data received.  Data stewards check that basic questions about the intent behind, and implementation of, the data we collect can be answered, and that the proposed collection stays within the boundaries of its data category and the defaults allowed for that category.  These controls give us confidence in our ability to explain and justify to our users why we have decided to collect specific data.

Privacy

We can collect many types of data from your use of Firefox, but we don’t consider them all equal. We consider some types of data to be more benign (like what version of Firefox you are using) than others (like the websites you visit). We’ve devised a four-tier system that groups data into clear categories, from less sensitive to highly sensitive, which you can review here in more detail.   Since we developed this four-tier approach, we’ve worked to align its language with our Privacy Policy and with the privacy settings in Firefox.   (You can read more about the legal team’s efforts in a post by my colleagues at Legal and Compliance.)

What does this mean for you?

We hope it means a lot and not much at the same time.  At Firefox, we have long worked to respect your privacy, and we hope this new strategy gives you a clearer understanding of what data we collect and why it’s important to us.  We also want to reassure you that we haven’t dramatically changed what we collect by default.  So while you may not often think about the data you share with Mozilla, we hope that when you do, you feel better informed and more in control.

The post It’s your data, we’re just living in it appeared first on Future Releases.