Hannes Verschore: Tracelogger GUI updates

Tracelogger is one of the tools JIT devs (especially me) use to look into performance issues and to improve the performance of Firefox's JS engine. It traces which functions are executing, together with extra information like which engine is running, how long compilation took, how many times we are GC'ing, and whether we are calling VM functions …

I made the GUI a bit more powerful. First of all, I moved the computation of the overview to a web worker, which should help the usability of the tool. Next to that, I made it possible to toggle the categories on and off, which might make it easier to understand the graphs. I also introduced a settings popup, where you can now choose to see absolute (CPU ticks) or relative (%) timings.

(Screenshot from 2016-05-03 09:36:41)

There are still a lot of possible improvements. Eventually it should be possible to zoom in on graphs, toggle scripts on/off, see full times of scripts (instead of self time only), and maybe show another kind of graph (like a flame chart). Hopefully one day.

This is of course open source and available at:
https://github.com/h4writer/tracelogger/tree/master/website

Mozilla Open Policy & Advocacy Blog: This is what a rightsholder looks like in 2016

In today’s policy discussions around intellectual property, the term ‘rightsholder’ is often misconstrued as someone who supports maximalist protection and enforcement of intellectual property, instead of someone who simply holds the rights to intellectual property. This false assumption can at times create a kind of myopia, in which the breadth and variety of actors, interests, and viewpoints in the internet ecosystem – all of whom are rightsholders to one degree or another – are lost.

This is not merely a process issue – it undermines constructive dialogues aimed at achieving a balanced policy. Copyright law is, ostensibly, designed and intended to advance a range of beneficial goals, such as promoting the arts, growing the economy, and making progress in scientific endeavour. But maximalist protection policies and draconian enforcement benefit the few and not the many, hindering rather than helping these policy goals. For copyright law to enhance creativity, innovation, and competition, and ultimately to benefit the public good, we must all recognise the plurality and complexity of actors in the digital ecosystem, who can be at once IP rightsholders, creators, and consumers.

Mozilla is an example of this complex rightsholder stakeholder. As a technology company, a non-profit foundation, and a global community, we hold copyrights, trademarks, and other exclusive rights. Yet, in the pursuit of our mission, we’ve also championed open licenses to share our works with others. Through this, we see an opportunity to harness intellectual property to promote openness, competition and participation in the internet economy.

We are a rightsholder, but we are far from maximalists. Much of the code produced by Mozilla, including much of Firefox, is licensed using a free and open source software licence called the Mozilla Public License (MPL), developed and maintained by the Mozilla Foundation. We developed the MPL to strike a real balance between the interests of proprietary and open source developers in an effort to promote innovation, creativity and economic growth to benefit the public good.

Similarly, in recognition of the challenges the patent system raises for open source software development, we’re pioneering an innovative approach to patent licensing with our Mozilla Open Software Patent License (MOSPL). Today, the patent system can be used to hinder innovation by other creators. Our solution is to create patents that expressly permit everyone to innovate openly. You can read more in our terms of license here.

While these are just two initiatives from Mozilla amongst many more in the open source community, we need more innovative ideas in order to fully harness intellectual property rights to foster innovation, creation and competition. And we need policy makers to be open (pun intended) to such ideas, and to understand the place they have in the intellectual property ecosystem.

More than just our world of software development, the concept of a rightsholder is in reality broad and nuanced. In practice, we’re all rightsholders – we become rightsholders by creating for ourselves, whether we’re writing, singing, playing, drawing, or coding. And as rightsholders, we all have a stake in this rich and diverse ecosystem, and in the future of intellectual property law and policy that shapes it.

Here is some of our most recent work on IP reform:

Mozilla Addons Blog: May 2016 Featured Add-ons

Pick of the Month: uBlock Origin

by Raymond Hill
Very efficient blocker with a low CPU footprint.

“Wonderful blocker, part of my everyday browsing arsenal, highly recommended.”

Featured: Download Plan

by Abraham
Schedule download times for large files during off-peak hours.

“Absolutely beautiful interface!!”

Featured: Emoji Keyboard

by Harry N.
Input emojis right from the browser.

“This is a good extension because I can input emojis not available in Hangouts, Facebook, and email.”

Featured: Tab Groups

by Quicksaver
A simple way to organize a ton of tabs.

“Awesome feature and very intuitive to use.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there's always an opportunity to participate. Stay tuned to this blog for the next call for applications. Here's further information on AMO's featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!

Wladimir Palant: Adventures porting Easy Passwords to Chrome and back to Firefox

Easy Passwords is based on the Add-on SDK and runs in Firefox. However, people need access to their passwords in all kinds of environments, so I created an online version of the password generator. The next step was porting Easy Passwords to Chrome and Opera. And while at it, I wanted to see whether that port will work in Firefox via Web Extensions. After all, eventually the switch to Web Extensions will have to be done.

Add-on SDK to Chrome APIs

The goal was to use the same codebase for all variants of the extension. Most of the logic is contained in HTML files anyway, so it wouldn't have to be changed. As to the remaining code, it should just work with some fairly minimal implementation of the SDK APIs on top of the Chrome APIs. Why not the other way round? Well, I consider the APIs provided by the Add-on SDK much cleaner and easier to use.

It turned out that Easy Passwords used twelve SDK modules; many of these could be implemented in a trivial way, however. For example, the timers module merely exports functions that are defined anyway (unlike SDK extensions, Chrome extensions run in a window context).
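
To give an idea of how trivial such a shim can be, here is a minimal sketch of a timers module for the Chrome build (assuming the modules are bundled as CommonJS modules, as described below); it simply re-exports the window's own timer functions:

    // Minimal sketch of an SDK-style "timers" shim for the Chrome build:
    // extension pages run in a window context, so the window's own timer
    // functions can simply be re-exported.
    exports.setTimeout = window.setTimeout.bind(window);
    exports.clearTimeout = window.clearTimeout.bind(window);
    exports.setInterval = window.setInterval.bind(window);
    exports.clearInterval = window.clearInterval.bind(window);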

There were a few conceptual differences however. For example, Chrome extensions don’t support modularization — all background scripts execute in a single shared scope of the background page. Luckily, browserify solves this problem nicely by compiling all the various modules into a single background.js script while giving each one its own scope.
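
For illustration, a bundling step along these lines could be done with browserify's Node API; the entry point and output paths here are made up:

    // Hypothetical build step: bundle all CommonJS modules into a single
    // background.js for the Chrome extension.
    var fs = require('fs');
    var browserify = require('browserify');

    browserify('./lib/main.js')                        // assumed entry point
      .bundle()
      .pipe(fs.createWriteStream('./chrome/background.js'));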

The other issue is configuration: Chrome doesn't generate a settings UI automatically the way the simple-prefs module does. There was no way around creating a special page for the two settings. Getting automatic SDK-style localization of HTML pages, on the other hand, was only a matter of a few lines (Chrome makes it a bit more complicated by disallowing dashes in message names).
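
Such a localization helper might look roughly like this; the data-l10n-id attribute name is just an assumption, and the dash-to-underscore mapping works around Chrome's restriction on message names:

    // Rough sketch: fill elements carrying a data-l10n-id attribute with
    // the corresponding localized message. Dashes are mapped to
    // underscores because Chrome disallows dashes in message names.
    var elements = document.querySelectorAll("[data-l10n-id]");
    for (var i = 0; i < elements.length; i++) {
      var key = elements[i].getAttribute("data-l10n-id").replace(/-/g, "_");
      elements[i].textContent = chrome.i18n.getMessage(key);
    }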

A tricky issue was unifying the way scripts are attached to HTML pages. With the Add-on SDK these are content scripts, which are defined in the JavaScript code — otherwise they wouldn't be able to communicate with the extension. In Chrome, however, you use regular <script> tags; the scripts get the necessary privileges automatically. In the end I had to go with conditional comments interpreted by the build system; for the Chrome build these become regular HTML code. This had the advantage that I could have additional scripts for Chrome only, in order to emulate the self variable which is available to SDK content scripts.

Finally, communication turned out to be tricky as well. The Add-on SDK automatically connects a content script to whichever code is responsible for it. Whenever some code creates a panel it gets a panel.port property which can be used to communicate with that panel — and only with that panel. Chrome's messaging, on the other hand, is all-to-all; the code is meant to figure out by itself whether it is supposed to process a particular message or leave it for somebody else. And while Chrome also has a concept of communication ports, these can only be distinguished by their name — so my implementations of the SDK modules had to figure out which SDK object a new communication port was meant for by looking at its name. In the end I implemented a hack: since I had exactly one panel, exactly one page and exactly one page worker, I simply used the port's type as its name. Which object should it be associated with? Who cares, there is only one.
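
The background side of that hack might look roughly like this (the object names and the _setPort helper are made up for illustration):

    // Route each incoming port to the single SDK-like object of the type
    // indicated by the port's name.
    chrome.runtime.onConnect.addListener(function(port) {
      if (port.name == "panel")
        panel._setPort(port);          // hypothetical helper on the panel shim
      else if (port.name == "page")
        page._setPort(port);
      else if (port.name == "pageWorker")
        pageWorker._setPort(port);
    });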

And that’s mostly it as far as issues go. Quite surprisingly, fancy JavaScript syntax is no longer an issue as of Chrome 49 — let statements, for..of loops, rest parameters, destructuring assignments, all of this works. The only restrictions I noticed: node lists like document.forms cannot be used in for..of loops, and calling Array.filter() as opposed to Array.prototype.filter.call() isn’t supported (the former isn’t documented on MDN either, it seems to be non-standard). And a bunch of stuff which requires extra code with the Add-on SDK “just works”: pop-up size is automatically adjusted to content, switching tabs closes pop-up, tooltips and form validation messages work inside the pop-up like in every webpage.

The result was a Chrome extension that works just as well as the one for Firefox, with the exception of not being able to show the Easy Passwords icon in pop-up windows (sadly, I suspect that this limitation is intentional). It works in Opera as well and will be available in their add-on store once it is reviewed.

Chrome APIs to Web Extensions?

And what about running the Chrome port in Firefox now? Web Extensions are compatible with the Chrome APIs, so in theory it shouldn't be a big deal. And in fact, after adding an applications property to manifest.json, the extension could be installed in Firefox. However, after it replaced the version based on the Add-on SDK, all the data was gone. This is bug 1214790 and I wonder what kind of solution the Mozilla developers can come up with.

It wasn’t really working either. Turned out, crypto functionality wasn’t working because the code was running in a context without access to Web Extensions APIs. Also, messages weren’t being received properly. After some testing I identified bug 1269327 as the culprit: proxied objects in messages were being dropped silently. Passing the message through JSON.stringify() and JSON.parse() before sending solved the issue, this would create a copy without any proxies.

And then there were visuals. One issue turned out to be a race condition which didn't occur on Chrome; I guess that I made too many assumptions. Most of the others were due to bug 1225633 — somebody apparently considered it a good idea to apply a random set of CSS styles to unknown content. I filed bug 1269334 and bug 1269336 on the obvious bugs in these CSS styles, and overwrote some of the others in the extension. Finally, the nice pop-up sizing automation doesn't work in Firefox, so the size of the Easy Passwords pop-up is almost always wrong.

Interestingly, pretty much everything that Chrome does better than the Add-on SDK isn't working with Web Extensions right now. It isn't merely the pop-up sizing: HTML tooltips in pop-ups don't show up, and pop-ups aren't being closed when switching tabs. In addition, tabs.query() doesn't allow searching extension pages, and submitting passwords produces bogus error messages.

While most of these issues can be worked around easily, some cannot. So I guess that it will take a while until I replace the SDK-based version of Easy Passwords with one based on Web Extensions.

Air Mozilla: Mozilla Weekly Project Meeting, 02 May 2016

Mozilla Weekly Project Meeting: The Monday Project Meeting

Armen Zambrano: Open Platform Operations’ logo design

Last year, the Platform Operations organization was born and it brought together multiple teams across Mozilla which empower development with tools and processes.

This year, we've decided to create a logo that identifies us as an organization and builds our self-identity.

We've filed this issue for a logo design [1] and we would like to issue a call for community members to propose their designs. We would like to have all submissions in by May 13th. Soon after that, we will figure out a way to narrow it down to one logo! (details to be determined).

We would also like to thank whoever makes the logo we pick at the end (details also to be determined).

Looking forward to collaborating with you and seeing what we create!

[1] https://github.com/mozilla/Community-Design/issues/62


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Daniel Glazman: BlueGriffon officially recommended by the French Government

en-US TL;DR: BlueGriffon is now officially recommended as the HTML editor for the French Administration in its effort to rely on and promote Free Software!

I am very happy to report that BlueGriffon, my cross-platform Wysiwyg Web editor, is officially recommended by the Socle Interministériel de Logiciels Libres for 2016! You will find the official list of recommended software here (PDF document).

Gervase Markham: DNSSEC on gerv.net

My ISP, the excellent Mythic Beasts, has started offering a managed DNSSEC service for domains they control – just click one button, and you’ve got DNSSEC on your domain. I’ve just enabled it on gerv.net (which, incidentally, as of a couple of weeks ago, is also available over a secure channel thanks to MB and Let’s Encrypt).

If you have any problems accessing any resources on gerv.net, please let me know by email – gerv at mozilla dot org should be unaffected by any problems.

Laurent Jouanneau: Release of SlimerJS 0.10

I'm pleased to announce the release of SlimerJS 0.10!

SlimerJS is a scriptable browser. It is a tool like PhantomJS, except it is based on Firefox and it is not (yet) "headless" (if some Mozillians could help me to have a true headless browser ;-)...).

This new release brings new features and compatibility with Firefox 46. Among them:

  • support of PDF export
  • support of Selenium with a "web driver mode"
  • support of stdout, stderr and stdin streams with the system module
  • support of exit code with phantom.exit() and slimer.exit()
  • support of node_modules with require()
  • support of special files (/dev/* etc) with the fs module

This version also fixes many bugs and conformance issues with PhantomJS 1.9.8 and 2.x. It also fixes some issues with running CasperJS 1.1.

See change details in release notes. As usual, you can download SlimerJS from the download page.

Note that there is no longer a "standalone edition" (with XulRunner embedded), because Mozilla has ceased to maintain and build XulRunner. Only the "lightweight" edition is available from now on, and you must install Firefox to run SlimerJS.

Consider this release a "1.0pre". I'll try to release the next major version, 1.0, in a few weeks. It will only fix bugs found in 0.10 (if any) and implement the last few features needed to match the PhantomJS 2.1 API.

Matěj Cepl: On GitLab growing an OStatus extension

Finally, this issue ticket gave me the opportunity to write what I think about OStatus. So, I did.

  1. http://www.joelonsoftware.com/articles/fog0000000018.html -- I am sorry if you like OStatus, but it is the most insane open source example of this disease. After all the astronaut design, we have no working and stable platform for the thing.

    There was old identi.ca, which was scrapped (I know the more polite term is "moved to GNU/Social", yeah … how many users does this social software have? And yes, I know the protocol is named differently, but it is just another version of the same thing from my point of view), and pump.io, which is … I have just upgraded my instance to see whether I can honestly write that it is a well-working proprietary (meaning, used by only one implementation, by the author of the protocol) distributed network, and no, it is broken.

    And even if the damned thing worked, it would not offer me the functionality I really want: which is to connect with my real friends, who are all on Twitter, Facebook, or even some on G+. Heck, pump.io would be useless even if these friends were on Diaspora (no, they are not, nobody is there). So, yes, if you want something which is useless, go and write an OStatus component.

  2. I don't know what happens when we want to share issues, etc. I don't know and I don't care (for example, it seems to me that issues are something which is way more linked to one particular project). And yes, I am the reporter of https://bugzilla.mozilla.org/show_bug.cgi?id=719725 (and author of http://article.gmane.org/gmane.linux.redhat.fedora.devel/79936/), and I think that it is impossible to do it. At least, nobody has managed to do it, and it was not for lack of trying. How is OpenID or any other federated identity doing?

    Besides, Do The Simplest Thing That Could Possibly Work, because You Aren’t Gonna Need It. I vote for a git request-pull(1) parser. And no, not just sending a URL in an HTTP GET; I would like to see a button (shown when the comment is recognized as parseable) next to the comment with the plain-text output of git request-pull.

  3. Actually, a git request-pull(1) parser not only follows YAGNI, but it also lovingly steps around the biggest problem of all federated solutions: broken or missing federated identity.

This Week In Rust: This Week in Rust 128

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

  • Rust project changelog for 2016-04-29. Updates to bitflags, lazy_static, regex, rust-mode, rustup, uuid.
  • Xi Editor. A modern editor with a backend written in Rust.
  • rure. A C API for the regex crate.
  • cassowary-rs. A Rust implementation of the Cassowary constraint solving algorithm.
  • Sapper. A lightweight web framework built on async hyper, implemented in Rust language.
  • servo-vdom. A modified servo browser which accepts content patches over an IPC channel.
  • rustr and rustinr. Rust library for working with R, and an R package to generate Rust interfaces.
  • Rorschach. Pretty print binary blobs based on common layout definition.

Crate of the Week

This week's Crate of the Week is arrayvec, which gives us a Vec-like interface over plain arrays for those instances where you don't want the indirection. Thanks to ehiggs for the suggestion!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

92 pull requests were merged in the last week.

New Contributors

  • Andy Russell
  • Brayden Winterton
  • Demetri Obenour
  • Ergenekon Yigit
  • Jonathan Turner
  • Michael Tiller
  • Timothy McRoy
  • Tomáš Hübelbauer

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week!

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

In general, enough layers of Rc/RefCell will make anything work.

gkoz on TRPLF.

Thanks to birkenfeld for the suggestion.

Submit your quotes for next week!

Maja Frydrychowicz: Not Testing a Firefox Build (Generic Tasks in TaskCluster)

A few months ago I wrote about my tentative setup of a TaskCluster task that was neither a build nor a test. Since then, gps has implemented “generic” in-tree tasks so I adapted my initial work to take advantage of that.

Triggered by file changes

All along I wanted to run some in-tree tests without having them wait around for a Firefox build or any other dependencies they don’t need. So I originally implemented this task as a “build” so that it would get scheduled for every incoming changeset in Mozilla’s repositories.

But forget “builds”, forget “tests” — now there’s a third category of tasks that we’ll call “generic” and it’s exactly what I need.

In base_jobs.yml I say, “hey, here’s a new task called marionette-harness — run it whenever there’s a change under (branch)/testing/marionette/harness”. Of course, I can also just trigger the task with try syntax like try: -p linux64_tc -j marionette-harness -u none -t none.

When the task is triggered, a chain of events follows:

For Tasks that Make Sense in a gecko Source Checkout

As you can see, I made the build.sh script in the desktop-build docker image execute an arbitrary in-tree JOB_SCRIPT, and I created harness-test-linux.sh to run mozharness within a gecko source checkout.

Why not the desktop-test image?

But we can also run arbitrary mozharness scripts thanks to the configuration in the desktop-test docker image! Yes, and all of that configuration is geared toward testing a Firefox binary, which implies downloading tools that my task either doesn’t need or already has access to in the source tree. Now we have a lighter-weight option for executing tests that don’t exercise Firefox.

Why not mach?

In my lazy work-in-progress, I had originally executed the Marionette harness tests via a simple call to mach, yet now I have this crazy chain of shell scripts that leads all the way to mozharness. The mach command didn’t disappear — you can run Marionette harness tests with ./mach python-test .... However, mozharness provides clearer control of Python dependencies, appropriate handling of return codes to report test results to Treeherder, and I can write a job-specific script and configuration.

The Servo Blog: These Weeks In Servo 61

In the last two weeks, we landed 228 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Zhen Zhang and Rahul Sharma were selected as 2016 GSoC students for Servo! They will be working on the File API and foundations for Service Workers respectively.

Notable Additions

  • nox landed Windows support in the upgraded SpiderMonkey - now we just need to land it in Servo!
  • bholley implemented Margin, Padding, font-size, and has_class for the Firefox/Gecko support in Servo’s style system
  • pcwalton fixed a bug that was preventing us from hitting 60fps reliably with browser.html and WebRender!
  • mbrubeck changed Servo to use the line-breaking algorithm from Raph Levien’s xi-unicode project
  • frewsxcv removed the horrific Dock-thrashing while running the WPT and CSS tests on OSX
  • vramana implemented fetch support for file:// URLs
  • fabrice implemented armv7 support across many of our dependencies and in Servo itself
  • larsberg re-enabled gating checkins on Windows builds, now that the Windows Buildbot instance is more reliable
  • asajeffrey added reporting of backtraces to the Constellation during panic!, which will allow better reporting in the UI
  • danl added the style property for flex-basis in Flexbox
  • perlun improved line heights and fonts in input and textarea
  • jdm re-enabled the automated WebGL tests
  • ms2ger updated the CSS tests
  • dzbarsky implemented glGetVertexAttrib
  • jdm made canvas elements scale based on the DOM width and height
  • edunham improved our ability to correctly recognize and validate licenses
  • pcwalton implemented overflow:scroll in WebRender
  • KiChjang added support for multipart/form-data submission
  • fitzgen created a new method for dumping time profile info to an HTML file
  • mrobinson removed the need for StackingLevel info in WebRender
  • ddefisher added initial support for persistent sessions in Servo
  • cgwalters added an option to Homu to support linear commit histories better
  • simonsapin promoted rust-url to version 1.0
  • wafflespeanut made highfive automatically report test failures from our CI infrastructure
  • connorgbrewster finished integrating the experimental XML5 parser
  • emilio added some missing WebGL APIs and parameter validation
  • izgzhen implemented the scrolling-related CSSOM View APIs
  • wafflespeanut redesigned the network error handling code
  • jdm started an in-tree glossary

New Contributors

Get Involved

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

Screenshot of Firefox browsing a very simple page using Servo’s Stylo style system implementation: (screenshot)

Logic error that caused the page to redraw after every HTML parser operation: (screenshot)

Meetings and Mailing List

Nick Fitzgerald made a thread describing his incredibly awesome profiler output for Servo: https://groups.google.com/forum/#!topic/mozilla.dev.servo/KmzdXoaKo9s

Karl Dubost: [worklog] Kusunoki, that smell.

That feeling when things finally get fixed after 2 years of negotiating. Sometimes things take longer. Sometimes policies change on the other side. All in all, that's very good fortune for the Web and its users. It's a bit like that smell in my street for the last two summers: I finally asked the gardener of one of the houses around, and it revealed what I should have known: a camphor tree (楠). Good week. Tune of the week: Carmina Burana - O Fortuna - Carl Orff.

Webcompat Life

Progress this week:

Today: 2016-05-02T09:21:45.583211
368 open issues
----------------------
needsinfo       4
needsdiagnosis  108
needscontact    35
contactready    93
sitewait        119
----------------------

You are welcome to participate

London agenda.

We had a meeting this week: Minutes

Webcompat issues

(a selection of some of the bugs worked on this week).

Webcompat development

Gecko Bugs

Updating Our Webkit Prefixing Policy

This is the big news of the week, and it's a lot of good for the Web. WebKit (aka Apple) is switching from vendor prefixes to feature flags. What does that mean? It means that new features will be available only to developers who activate them. It allows for testing without polluting the feature space.

The current consensus among browser implementors is that, on the whole, prefixed properties have hurt more than they’ve helped. So, WebKit’s new policy is to implement experimental features unprefixed, behind a runtime flag. Runtime flags allow us to continue to get experimental features into developers’ hands while avoiding the various problems vendor prefixes had.

Also

We’ll be evaluating existing features on a case-by-case basis. We expect to significantly reduce the number of prefixed properties supported over time but Web compatibility will require us to keep around prefixed versions of some features.

HTTP Cache Invalidation, Facebook, Chrome and Firefox

Facebook is proposing to change the policy for HTTP Cache invalidation. This thread is really interesting. It started as a simple question on changing the behavior of Firefox to align with changes planned for Chrome, but it is evolving into a discussion about how to do cache invalidation the right way. Really cool.

I remember this study below from a little while ago (March 3, 2012), and I was wondering if we have similar data for Firefox.

For those users who filled up their cache:

  • 25% of them fill it up in 4 hours.
  • 50% of them fill it up within 20 hours.
  • 75% of them fill it up within 48 hours.

Now, that's just wall clock time... but how many hours of "active" browsing does it take to fill the cache?

  • 25% in 1 hour,
  • 50% in 4 hours,
  • and 75% in 10 hours.

Found again through Cache me if you can.

I wonder how many times a resource which is set up with a max-age of 1 year is still around in the cache after 1 year. And if Web developers indeed set a long cache lifetime to mean "never reload it", it seems wise to have something in Cache-Control: to allow this. There is must-revalidate; I was wondering if immutable is another way of saying never-revalidate. Maybe a max-age value is not even necessary at all. Anyway, read the full thread on the bug.

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Support.Mozilla.Org: Mozillian profile: Jayesh

Hello, SUMO Nation!

Do you still remember Dinesh? Turns out he’s not the only Mozillian out there who’s happy to share his story with us. Today, I have the pleasure of introducing Jayesh, one of the many SUMOzillians among you, with a really inspiring story of his engagement in the community to share. Read on!

Jayesh

I’m Jayesh from India. I’ve been contributing to Mozilla as a Firefox Student Ambassador since 2014. I’m a self-made entrepreneur, tech lover, and passionate traveller. I am also an undergraduate with a Computer Science background.

During my university days I used to waste a lot of time playing games as I did not have a platform to showcase my technical skills. I thought working was only useful when you had a “real” job. I had only heard about open source, but in my third year I came to know about open source contributors through my friend Dinesh, who told me about the FSA program – this inspired me a lot. I thought it was the perfect platform for me to kickstart my career as a Mozillian and build a strong, bright future.

Being a techie, I could identify with Mozilla and its efforts to keep the web open. I registered for the FSA program with the guidance of my friend, and found a lot of students and open source enthusiasts from India contributing to Mozilla in many ways. I was very happy to join the Mozilla India Community.

Around 90% of Computer Science students at the university learn the technology but don’t actually try to implement working prototypes using their knowledge, as they don’t know about the possibility of open source contributions – they just believe that showcasing counts only during professional internships and work training. Thus, I thought of sharing my knowledge about open source contributors through the Mozilla community.

I gained experience conducting events for Mozilla in the Tirupati Community, where my friend was seeking help in conducting events as he was the only Firefox Student Ambassador in that region. Later, to learn more, we travelled to many places and attended various events in Bengaluru and Hyderabad, where we met a very well-developed Mozilla community in southern India. We met many Mozilla Representatives and sought help from them. Vineel and Galaxy helped us a lot, guiding us through our first steps.

Later, I found that I was the only Mozillian in my region – Kumbakonam, where I do my undergrad studies – within a 200 miles radius. This motivated me to personally build a new university club – SRCMozillians. I inaugurated the club at my university with the help of the management.

More than 450 students in the university registered for the FSA program in the span of two days, and we have organized more than ten events, including FFOS App days, Moz-Quiz, Web-Development-Learning, Connected Devices-Learning, Moz-Stall, a sponsored fun event, community meet-ups – and more! All this in half a year. For my efforts, I was recognized as FSA of the month, August 2015 & FSA Senior.

The biggest problems we faced while building our club were the studying times, when we’d be having lots of assignments, cycle tests, lab internals, and more – with everyone really busy and working hard, it took time to bridge the gap and realise grades alone are not the key factor to build a bright future.

My contributions to the functional areas in Mozilla varied from time to time. I started with Webmaker by creating educational makes about X-Ray Goggles, App-Maker and Thimble. I'm proud of being recognized as a Webmaker Mentor for that. Later, I focused on Army of Awesome (AoA) by tweeting and helping Firefox users. I even developed two Firefox OS applications (Asteroids – a game and a community application for SRCMozillians), which were available in the Marketplace. After that, I turned my attention to Quality Assurance, as Software Testing was one of the subjects in my curriculum. I started testing tasks in One And Done – this helped me understand the key concepts of software testing easily – especially checking the test conditions and triaging bugs. My name was even mentioned on the Mozilla blog about the Firefox 42.0 Beta 3 Test day for successfully testing and passing all the test cases.

I moved on to start localization for Telugu, my native language. I started translating KB articles – with time, my efforts were recognized, and I became a Reviewer for Telugu. This area of contribution proved to be very interesting, and I even started translating projects in Pontoon.

As you can see from my Mozillian story above, it’s easy to get started with something you like. I guarantee that every individual student with passion to contribute and build a bright career within the Mozilla community, can discover that this is the right platform to start with. The experience you gain here will help you a lot in building your future. I personally think that the best aspect of it is the global connection with many great people who are always happy to support and guide you.

– Jayesh , a proud Mozillian

Thank you, Jayesh! A great example of turning one's passion into a great initiative that enables many people around you to understand and use technology better. We're looking forward to more open source awesomeness from you!

SUMO Blog readers – are you interested in posting on our blog about your open source projects and adventures? Let us know!

Dustin J. Mitchell: Loading TaskCluster Docker Images

When TaskCluster builds a push to a Gecko repository, it does so in a docker image defined in that very push. This is pretty cool for developers concerned with the build or test environment: instead of working with releng to deploy a change, now you can experiment with that change in try, get review, and land it like any other change. However, if you want to actually download that docker image, docker pull doesn’t work anymore.

The image reference in the task description looks like this now:

"image": {
    "path": "public/image.tar",
    "taskId": "UDZUwkJWQZidyoEgVfFUKQ",
    "type": "task-image"
},

This is referring to an artifact of the task that built the docker image. If you want to pull that exact image, there’s now an easier way:

./mach taskcluster-load-image --task-id UDZUwkJWQZidyoEgVfFUKQ

will download that docker image:

dustin@dustin-moz-devel ~/p/m-c (central) $ ./mach taskcluster-load-image --task-id UDZUwkJWQZidyoEgVfFUKQ
Task ID: UDZUwkJWQZidyoEgVfFUKQ
Downloading https://queue.taskcluster.net/v1/task/UDZUwkJWQZidyoEgVfFUKQ/artifacts/public/image.tar
######################################################################## 100.0%
Determining image name
Image name: mozilla-central:f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3
Loading image into docker
Deleting temporary file
Loaded image is named mozilla-central:f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3
dustin@dustin-moz-devel ~/p/m-c (central) $ docker images
REPOSITORY          TAG                                                                IMAGE ID            CREATED             VIRTUAL SIZE
mozilla-central     f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3   51e524398d5c        4 weeks ago         1.617 GB

But if you just want to pull the image corresponding to the codebase you have checked out, things are even easier: give the image name (the directory under testing/docker), and the tool will look up the latest build of that image in the TaskCluster index:

dustin@dustin-moz-devel ~/p/m-c (central) $ ./mach taskcluster-load-image desktop-build
Task ID: TjWNTysHRCSfluQjhp2g9Q
Downloading https://queue.taskcluster.net/v1/task/TjWNTysHRCSfluQjhp2g9Q/artifacts/public/image.tar
######################################################################## 100.0%
Determining image name
Image name: mozilla-central:f5e1b476d6a861e35fa6a1536dde2a64daa2cc77a4b71ad685a92096a406b073
Loading image into docker
Deleting temporary file
Loaded image is named mozilla-central:f5e1b476d6a861e35fa6a1536dde2a64daa2cc77a4b71ad685a92096a406b073

Tim Taubert: A Fast, Constant-time AEAD for TLS

The only TLS v1.2+ cipher suites with a dedicated AEAD scheme are the ones using AES-GCM, a block cipher mode that turns AES into an authenticated cipher. From a cryptographic point of view these are preferable to non-AEAD-based cipher suites (e.g. the ones with AES-CBC) because getting authenticated encryption right is hard without using dedicated ciphers.

For CPUs without the AES-NI instruction set, however, constant-time AES-GCM is slow and also hard to write and maintain. The majority of mobile phones, and most cheaper devices like tablets and notebooks on the market, thus cannot support efficient and safe AES-GCM cipher suite implementations.

Even if we ignored all those aforementioned pitfalls we still wouldn’t want to rely on AES-GCM cipher suites as the only good ones available. We need more diversity. Having widespread support for cipher suites using a second AEAD is necessary to defend against weaknesses in AES or AES-GCM that may be discovered in the future.

ChaCha20 and Poly1305, a stream cipher and a message authentication code, were designed with fast and constant-time implementations in mind. A combination of those two algorithms yields a safe and efficient AEAD construction, called ChaCha20/Poly1305, which allows TLS with a negligible performance impact even on low-end devices.

Firefox 47 will ship with two new ECDHE/ChaCha20 cipher suites as specified in the latest draft. We are looking forward to seeing the adoption of these increase and will, as a next step, work on prioritizing them over AES-GCM suites on devices not supporting AES-NI.

QMO: Firefox 47 Beta 3 Testday, May 6th

Hey everyone,

I am happy to announce that this coming Friday, May 6th, we are organizing a new event – Firefox 47 Beta 3 Testday. The main focus will be on the Synced Tabs Sidebar and YouTube Embedded Rewrite features. The detailed instructions are available via this etherpad.

No previous testing experience is needed, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! 😉

See you all on Friday!

Mozilla Addons Blog: WebExtensions in Firefox 48

We last updated you on our progress with WebExtensions when Firefox 47 landed in Developer Edition (Aurora), and today we have an update for Firefox 48, which landed in Developer Edition this week.

With the release of Firefox 48, we feel WebExtensions are in a stable state. We recommend developers start to use the WebExtensions API for their add-on development. Over the last release more than 82 bugs were closed on WebExtensions alone.

If you have authored an add-on in the past and are curious how it’s affected by the upcoming changes, please use the lookup tool. There is also a wiki page filled with resources to support you through the changes.

APIs Implemented

Many APIs gained improved support in this release, including: alarms, bookmarks, downloads, notifications, webNavigation, webRequest, windows and tabs.

The options v2 API is now supported so that developers can implement an options UI for their users. We do not plan to support the options v1 API, which is deprecated in Chrome. You can see an example of how to use this API in the WebExtensions examples on Github.

image08

In Firefox 48 we pushed hard to make the WebRequest API a solid foundation for privacy and security add-ons such as Ghostery, RequestPolicy and NoScript. With the current implementation of the onErrorOccurred function, it is now possible for Ghostery to be written as a WebExtension.
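
For illustration, registering for that event looks roughly like this; the URL filter and the listener body are merely examples, not code from any of the add-ons mentioned:

    // Log every request that fails at the network level
    // (requires the "webRequest" permission in the manifest).
    chrome.webRequest.onErrorOccurred.addListener(function(details) {
      console.log("Request failed:", details.url, details.error);
    }, {urls: ["<all_urls>"]});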

The addition of reliable origin information was a major requirement for existing Firefox security add-ons performing cross-origin checks such as NoScript or uBlock Origin. This feature is unique to Firefox, and is one of our first expansions beyond parity with the Chrome APIs for WebExtensions.

Although requestBody support is not in Firefox 48 at the time of publication, we hope it will be uplifted. This change to Gecko is quite significant because it will allow NoScript’s XSS filter to perform much better as a WebExtension, with huge speed gains (20 times or more) in some cases over the existing XUL and XPCOM extension for many operations (e.g. form submissions that include file uploads).

We’ve also had the chance to dramatically increase our unit test coverage again across the WebExtensions API, and now our modules have over 92% test coverage.

Content Security Policy Support

By default WebExtensions now use a Content Security Policy, limiting the location of resources that can be loaded. The default policy for Firefox is the same as Chrome’s:

"script-src 'self'; object-src 'self';"

This has many implications, such as the following: eval will no longer work, inline JavaScript will not be executed and only local scripts and resources are loaded. To relax that and define your own, you’ll need to define a new CSP using the content_security_policy entry in the WebExtension’s manifest.

For example, to load scripts from example.com, the manifest would include a policy configuration that would look like this:

"content_security_policy": "script-src 'self' https://example.com; object-src 'self'"

Please note: this will be a backwards incompatible change for any Firefox WebExtensions that did not adhere to this CSP. Existing WebExtensions that do not adhere to the CSP will need to be updated.

Chrome compatibility

To improve the compatibility with Chrome, a change has landed in Firefox that allows an add-on to be run in Firefox without the add-on id specified. That means that Chrome add-ons can now be run in Firefox with no manifest changes using about:debugging and loading it as a temporary add-on.

Support for WebExtensions with no add-on id specified in the manifest is being added to addons.mozilla.org (AMO) and our other tools, and should be in place on AMO for when Firefox 48 lands in release.

Android Support

With the release of Firefox 48 we are announcing Android support for WebExtensions. WebExtensions add-ons can now be installed and run on Android, just like any other add-on. However, because Firefox for Android makes use of a native user interface, anything that involves user interface interaction is currently unsupported (similar to existing extensions on Android).

You can see the full list of APIs supported on Android in the WebExtensions documentation on MDN; these include alarms, cookies, i18n and runtime.

Developer Support

In Firefox 45 the ability to load add-ons temporarily was added to about:debugging. In Firefox 48 several exciting enhancements are added to about:debugging.

If your add-on fails to load for some reason in about:debugging (most commonly due to JSON syntax errors), then you’ll get a helpful message appearing at the top of about:debugging. In the past, the error would be hidden away in the browser console.

image02

The error still appears in the browser console, but now it is also visible right in the page where loading was triggered.

image04

Debugging

You can now debug background scripts and content scripts in the debugging tools. In this example, to debug background scripts I loaded the add-on bookmark-it from the MDN examples. Next click “Enable add-on debugging”, then click “debug”:

image03

You will need to accept the incoming remote debugger session request. Then you’ll have a Web Console for the background page. This allows you to interact with the background page. In this case I’m calling the toggleBookmark API.

image06

This will call the toggleBookmark function and bookmark the page (note the bookmark icon is now blue). If you want to debug the toggleBookmark function, just add the debugger statement at the appropriate line. When you trigger toggleBookmark, you'll be dropped into the debugger:

image09

You can now debug content scripts. In this example I’ve loaded the beastify add-on from the MDN examples using about:debugging. This add-on runs a content script to alter the current page by adding a red border.

All you have to do to debug it is to insert the debugger statement into your content script, open up the Developer Tools debugger and trigger the debug statement:

image05

You are then dropped into the debugger ready to start debugging the content script.

Reloading

As you may know, restarting Firefox and adding in a new add-on can be slow, so about:debugging now allows you to reload an add-on. This will remove the add-on and then re-enable the add-on, so that you don’t have to keep restarting Firefox. This is especially useful for changes to the manifest, which will not be automatically refreshed. It also resets UI buttons.

In the following example the add-on just calls setBadgeText to add “Test” onto the browser action button (in the top right) when you press the button added by the add-on.
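
A minimal background script for an add-on like that might look as follows (a sketch, not the actual example’s code):

    // Set a badge label on the browser action button whenever it is clicked.
    chrome.browserAction.onClicked.addListener(function() {
      chrome.browserAction.setBadgeText({text: "Test"});
    });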

image03

Hitting reload for that add-on clears the state for that button and reloads the add-on from the manifest, meaning that after a reload, the “Test” text has been removed.

image07

This makes developing and debugging WebExtensions really easy. Coming soon, web-ext, the command line tool for developing add-ons, will gain the ability to trigger this each time a file in the add-on changes.

There are also lots of other ways to get involved with WebExtensions, so please check them out!

Update: clarified that no add-on id refers to the manifest as a WebExtension.

Daniel Stenberg: curl 7.49.0 goodies coming

Here’s a closer look at three new features that we’re shipping in curl and libcurl 7.49.0, to be released on May 18th 2016.

connect to this instead

If you’re one of the users who thought --resolve and doing Host: header tricks with --header weren’t good enough, you’ll appreciate that we’re adding yet another option for you to fiddle with the connection procedure. Another “Swiss army knife style” option for you who know what you’re doing.

With --connect-to you basically provide an internal alias for a certain name + port to instead internally use another name + port to connect to.

Instead of connecting to HOST1:PORT1, connect to HOST2:PORT2

It is very similar to --resolve which is a way to say: when connecting to HOST1:PORT1 use this ADDR2:PORT2. --resolve effectively prepopulates the internal DNS cache and makes curl completely avoid the DNS lookup and instead feeds it with the IP address you’d like it to use.

--connect-to doesn’t avoid the DNS lookup, but it will make sure that a different host name and destination port pair is used than what was found in the URL. A typical use case for this would be to make sure that your curl request asks a specific server out of several in a pool of many, where each has a unique name but you normally reach them with a single URL whose host name is otherwise load balanced.

--connect-to can be specified multiple times to add mappings for multiple names, so that even following HTTP redirects to other host names etc can be handled. You don’t even necessarily have to redirect the first used host name.

The libcurl option name for this feature is CURLOPT_CONNECT_TO.

Michael Kaufmann brought this feature.

http2 prior knowledge

In our ongoing quest to provide more and better HTTP/2 support in a world that is slowly but steadily doing more and more transfers over the new version of the protocol, curl now offers --http2-prior-knowledge.

As the name might hint, this is a way to tell curl that you have “prior knowledge” that the URL you specify goes to a host that you know supports HTTP/2. The term prior knowledge is in fact used in the HTTP/2 spec (RFC 7540) for this scenario.

Normally, when given an HTTP:// or HTTPS:// URL, curl makes no assumption that the host supports HTTP/2, but it will then try to upgrade the connection from HTTP/1. The command line tool even tries to upgrade all HTTPS:// URLs by default, and libcurl can be told to do so.

libcurl-wise, you ask for prior-knowledge use by setting CURLOPT_HTTP_VERSION to CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE.

Asking for http2 prior knowledge when the server does in fact not support HTTP/2 will give you an error back.

Diego Bes brought this feature.

TCP Fast Open

TCP Fast Open is documented in RFC 7413 and is basically a way to pass on data to the remote machine earlier in the TCP handshake – already in the SYN and SYN-ACK packets. This of course as a means to get data over faster and reduce latency.

The --tcp-fastopen option is supported on Linux and OS X only for now.

This is an idea and technique that has been around for a while and it is slowly getting implemented and supported by servers. There have been some reports of problems in the wild when “middle boxes” that fiddle with TCP traffic see these packets, that sometimes result in breakage. So this option is opt-in to avoid the risk that it causes problems to users.

A typical real-world case where you would use this option is when sending an HTTP POST to a site you don’t have a connection already established to. Just note that TFO relies on the client having had contact established with the server before and having a special TFO “cookie” stored and non-expired.

TCP Fast Open is so far only used for clear-text TCP protocols in curl. These days more and more protocols switch over to their TLS counterparts (and there’s room for future improvements to add the initial TLS handshake parts with TFO). A related option to speed up TLS handshakes is --false-start (supported with the NSS or the secure transport backends).

With libcurl, you enable TCP Fast Open with CURLOPT_TCP_FASTOPEN.

Alessandro Ghedini brought this feature.

Support.Mozilla.Org: What’s Up with SUMO – 28th April

Hello, SUMO Nation!

Did you know that in Japanese mythology, foxes with nine tails are over 100 years old and have the power of omniscience? I think we could get the same result if we put a handful of SUMO contributors in one room – maybe except for the tails ;-)

Here are the news from the world of SUMO!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 4th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

  • Hackathons everywhere! Find your people and get organized!
  • We have three upcoming iOS articles that will need localization. Their drafts are still in progress (pending review from the product team). Coming your way real soon – watch your dashboards!
  • New l10n milestones coming to your dashboards soon, as well.

Firefox – RELEEEEAAAAASE WEEEEEEK ;-)

What’s your experience of release week? Share with us in the comments or our forums! We are looking forward to seeing you all around SUMO – KEEP ROCKING THE HELPFUL WEB!

Air Mozilla: Web QA Weekly Meeting, 28 Apr 2016

Web QA Weekly Meeting: This is our weekly gathering of Mozilla's Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air Mozilla: Reps weekly, 28 Apr 2016

Reps weekly: This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Chris H-C: In Lighter News…

…Windows XP Firefox users may soon be able to properly render poop.

winxpPoo

Here at Mozilla, we take these things seriously.

:chutten


Ian Bicking: A Product Journal: Data Up and Data Down

I’m blogging about the development of a new product in Mozilla; look here for my other posts in this series.

We’re in the process of reviewing the KPI (Key Performance Indicators) for Firefox Hello (relatedly I joined the Firefox Hello team as engineering manager in October). Mozilla is trying (like everyone else) to make data-driven decisions. Basing decisions on data has some potential to remove or at least reveal bias. It provides a feedback mechanism that can provide continuity even as there are personnel changes. It provides some accountability over time. Data might also provide insight about product opportunities which we might otherwise miss.

Enter the KPI: for Hello (like most products) the key performance indicators are number of users, growth in users over time, user retention, and user sentiment (e.g., we use the Net Promoter Score). But like most projects those are not actually our success criteria: product engagement is necessary but not sufficient for organizational goals. Real goals might be revenue, social or political impact, or improvement in brand sentiment.

The value of KPI is often summarized as “letting us know how we’re doing”. I think the value KPI offers is more select:

  1. When you think a product is doing well, but it’s not, KPI is revealing.
  2. When you know a product isn’t doing well, KPI lets you triage: is it hopeless? Do we need to make significant changes? Do we need to maintain our approach but try harder?
  3. When a product is doing well the KPI gives you a sense of the potential. You can also triage success: Should we invest heavily? Stay the path? Is there no potential to scale the success far enough?

I’m skeptical that KPI can provide the inverse of 1: when you think a product is doing poorly, can KPI reveal that it is doing well? Because there’s another set of criteria that defines “success”, KPI is necessary but not sufficient. It requires a carefully objective executive to revise their negative opinion about the potential of a project based on KPI, and they may have reasonably lost faith that a project’s KPI-defined success can translate into success given organizational goals.

The other theoretical value of KPI is that you could correlate KPI with changes to the product, testing whether each change improves your product’s core value. I’m sure people manage to do this, with both very fine grained measurements and fine grained deployments of changes. But it seems more likely to me that for most projects given a change in KPI you’ll simply have to say “yup” and come up with unverified theories about that change.

The metrics that actually support the development of the product are not “key”, they are “incidental”. These are metrics that find bugs in the product design, hint at unexplored opportunities, confirm the small wins. These are metrics that are actionable by the people making the product: how do people interact with the tool? What do they use it for? Where do they get lost? What paths lead to greater engagement?

What is KPI for?

I’m trying to think more consciously about the difference between managing up and managing down. A softer way of phrasing this is managing in and managing out – but in this case I think the power dynamics are worth highlighting.

KPI is data that goes up. It lets someone outside the project – and above the project – make choices: about investment, redirection, cancellation. KPI data doesn’t go down; it does little to help the people doing the work. Feeling joy or despair about your project based on KPI is not actionable for those people on the inside of a project.

Incentive or support

I would also distinguish two kinds of management here: one perspective on management is that the organization should set up the right incentives and consequences so that rewards are aligned with organizational goals. The right incentives might make people adapt their behavior to get alignment; how they adapt is undefined. The right incentives might also exclude those who aren’t in alignment, culling misalignment from the organization. Another perspective is that the organization should work to support people, that misalignment of purpose between a person and the organization is more likely a bug than a misalignment of intention. Are people black boxes that we can nudge via punishment and reward? Are there less mechanical ways to influence change?

Student performance measurements are another kind of KPI. They let someone on the outside (of the classroom) know if things are going well or poorly for the students. They say little about why, and they don’t support improvement. School reform based on measurement presumes that teachers and schools are able to achieve the desired outcomes, but simply not willing. A risk of top-down reform: the people on the top use a perspective from the top. As an authority figure, how do I make decisions? The resulting reform is disempowering, supporting decisions from above, as opposed to using data to support the empowerment of those making the many day-to-day decisions that might effect a positive outcome.

Of course, having data available to inform decisions at all levels – from the executive to the implementor – would be great. But there’s a better criterion for data: it should support decision-making processes. What are your most important decisions?

As an example from Mozilla, we have data about how much Firefox is used and its marketshare. How much should we pay attention to this data? We certainly don’t have the granularity to connect changes in this KPI to individual changes we make in the project. The only real way to do that is through controlled experiments (which we are trying). We aren’t really willing to triage the project; no one is asking “should we just give up on Firefox?” The only real choice we can make is: are we investing enough in Firefox, or should we invest more? That’s a question worth asking, but we need to keep our attention on the question and not the data. For instance, if we decide to increase investment in Firefox, the immediate questions are: what kind of investment? Over what timescale? Data can be helpful to answer those questions, but not just any data.

Exploratory data

Weeks after I wrote (but didn’t publish) this post I encountered Why Greatness Cannot Be Planned: The Myth of the Objective, a presentation by Kenneth Stanley:

“Setting an objective can block its own achievement. It can be an obstacle to creativity and innovation in general. Without protection of individual autonomy, collaboration can become dangerously objective.”

The example he uses is manually searching a space of nonlinear image generation to find interesting images. The positive example is one where people explore, branching from novel examples until something recognizable emerges.

One negative example is one where an algorithm explores with a goal in mind.

Another negative example is selection by voting, instead of personal exploration; a product of convergent consensus instead of divergent treasure hunting.

If you decide what you are looking for, you are unlikely to find it. This generated image search space is deliberately nonlinear, so it’s difficult to understand how actions affect outcomes. Though artificial, I think the example is still valid: in a competitive environment, the thing you are searching for is hard to find, because if it was not hard then someone would have found it. And it’s probably hard because actions affect outcomes in unexpected ways.

You could describe this observation as another way of describing the pitfalls of hill climbing: getting stuck at local maximums. Maybe an easy fix is to add a little randomness, to bounce around, to see what lies past the hill you’ve found. But the hills themselves can be distractions: each hill supposes a measurement. The divergent search doesn’t just reveal novel solutions, but it can reveal a novel rubric for success.

This is also a similar observation to that in Innovator’s Dilemma: specifically that in these cases good management consistently and deliberately keeps a company away from novelty and onto the established track, and it does so by paying attention to the feedback that defines the company’s (current) success. The disruptive innovation, a term somewhat synonymous with the book, is an innovation that requires a change in metrics, and that a large portion of the innovation is finding the metric (and so finding the market), not implementing the maximizing solution.

But I digress from the topic of data. If we’re going to be driven by data toward entirely new directions, we may need data that doesn’t answer a question, doesn’t support a decision, but just tells us about things we don’t know. To support exploration, not based on a hypothesis which we confirm or reject based on the data, because we are still trying to discover our hypothesis. We use the data to look for the hidden variable, the unsolved need, the desire that has not been articulated.

I think we look for this kind of data more often than we would admit. Why else would we want complex visualizations? The visualizations are our attempt at finding a pattern we don’t expect to find.

In Conclusion

I’m lousy at conclusions. All those words up there are like data, and I’m curious what they mean, but I haven’t figured it out yet.

Geoff LankowDoes Firefox update despite being set to "never check for updates"? This might be why.

If, like me, you have set Firefox to "never check for updates" for some reason, and yet it does sometimes anyway, this could be your problem: the chrome debugger.

The chrome debugger uses a separate profile, with the preferences copied from your normal profile. But, if your prefs (such as app.update.enabled) have changed, they remain in the debugger profile as they were when you first opened the debugger.

App update can be started by any profile using the app, so the debugger profile sees the pref as it once was, and goes looking for updates.

Solution? Copy the app update prefs from the main profile to the debugger profile (mine was at ~/.cache/mozilla/firefox/31392shv.default/chrome_debugger_profile), or just destroy the debugger profile and have a new one created next time you use it.
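
If you go the copying route, a rough sketch of it from a shell could look like this. It is not from the original post; the paths are examples only (yours will differ), and Firefox should not be running while you touch prefs.js:

# example paths - substitute your own profile directories
MAIN=~/.mozilla/firefox/xxxxxxxx.default
DEBUG=~/.cache/mozilla/firefox/xxxxxxxx.default/chrome_debugger_profile
# append the app.update.* prefs from the main profile to the debugger profile
grep '"app\.update\.' "$MAIN/prefs.js" >> "$DEBUG/prefs.js"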

Just thought you might like to know.

Air MozillaPrivacy Lab - April 2016 - Encryption vs. the FBI

Privacy Lab - April 2016 - Encryption vs. the FBI Riana Pfefferkorn, Cryptography Fellow at the Stanford Center for Internet and Society, will talk about the FBI's dispute with Apple over encrypted iPhones.

Mike HommeyAnnouncing git-cinnabar 0.3.2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.
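
If you haven’t used it before, everyday usage looks roughly like this (the repository URL is just an example):

$ git clone hg::https://hg.mozilla.org/mozilla-central
# inside the resulting clone, pull and push then work like with any git remote
$ git pull
$ git push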

Get it on github.

These release notes are also available on the git-cinnabar wiki.

This is mostly a bug and regression-fixing release.

What’s new since 0.3.1?

  • Fixed a performance regression when cloning big repositories on OSX.
  • git configuration items with line breaks are now supported.
  • Fixed a number of issues with corner cases in mercurial data (such as, but not limited to, nodes with no first parent, malformed .hgtags, etc.)
  • Fixed a stack overflow, a buffer overflow and a use-after-free in cinnabar-helper.
  • Better work with git worktrees, or when called from subdirectories.
  • Updated git to 2.7.4 for cinnabar-helper.
  • Properly remove all refs meant to be removed when using a git version lower than 2.1.

Mozilla Addons BlogJoin the Featured Add-ons Community Board

Are you a big fan of add-ons? Think you can help identify the best content to spotlight on AMO? Then let’s talk!

All the add-ons featured on addons.mozilla.org (AMO) are selected by a board of community members. Each board consists of 5-8 members who nominate and select featured add-ons once a month for six months. Featured add-ons help users discover what’s new and useful, and downloads increase dramatically in the months they’re featured, so your participation really makes an impact.

And now the time has come to assemble a new board for the months of July through December.

Anyone from the add-ons community is welcome to apply: power users, theme designers, developers, and evangelists. Priority will be given to applicants who have not served on the board before, followed by those from previous boards, and finally those from the outgoing board. This page provides more information on the duties of a board member. To be considered, please email us at amo-featured@mozilla.org with your name, and tell us how you’re involved with AMO. The deadline is Tuesday, May 10, 2016 at 23:59 PDT. The new board will be announced about a week after that.

We look forward to hearing from you!

Michael KaplyBroken Add-ons in Firefox 46

A lot of add-ons are being broken by a subtle change in Firefox 46, in particular the removal of legacy array/generator comprehensions.

Most of these add-ons (including mine) did not use array comprehension intentionally, but they copied some code from this page on developer.mozilla.org for doing an md5 hash of a string. It looked like this:

var s = [toHexString(hash.charCodeAt(i)) for (i in hash)].join("");

You should search through your source code for toHexString and make sure you aren’t using this. MDN was updated in January to fix this. Here’s what the new code looks like:

var s = Array.from(hash, (c, i) => toHexString(hash.charCodeAt(i))).join("");

The new code will only work in Firefox 32 and beyond. If for some reason you need to support an older version, you can go through the history of the page to find the array-based version.
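
If you do have to keep supporting those older versions, a plain loop is one option that avoids both comprehensions and Array.from. This is my own sketch, not the historical MDN code, but it produces the same string (toHexString and hash come from the surrounding md5 example):

var parts = [];
for (var i = 0; i < hash.length; i++) {
  parts.push(toHexString(hash.charCodeAt(i)));
}
var s = parts.join("");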

Using this old code will cause a syntax error, so it will cause much more breakage than you realize. You’ll want to get it fixed sooner rather than later, because Firefox 46 started rolling out yesterday.

As a side note, Giorgio Maone caught this in January, but unfortunately all that was updated was the MDN page.

Air MozillaThe Joy of Coding - Episode 55

The Joy of Coding - Episode 55 mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaApril 2016 Speaker Series: When Change is the Only Constant, Org Structure Doesn't Matter - Kirsten Wolberg

April 2016 Speaker Series: When Change is the Only Constant, Org Structure Doesn't Matter - Kirsten Wolberg Regardless of whether an organization is decentralized or command & control, large-scale changes are never simple nor straightforward. There are no silver bullets. And yet, when...

Rail AliievFirefox 46.0 and SHA512SUMS

In my previous post I introduced the new release process we have been adopting in the 46.0 release cycle.

Release build promotion has been in production since Firefox 46.0 Beta 1. We have discovered some minor issues; some of them are already fixed, some still waiting.

One of the visible bugs is Bug 1260892. We generate a big SHA512SUMS file, which should contain all important checksums. With numerous changes to the process the file doesn't represent all required files anymore. Some files are missing, some have different names.

We are working on fixing the bug, but you can use the following work around to verify the files.

For example, if you want to verify http://ftp.mozilla.org/pub/firefox/releases/46.0/win64/ach/Firefox%20Setup%2046.0.exe, you need to use the following two files:

http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums

http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums.asc

Example commands:

# download all required files
$ wget -q http://ftp.mozilla.org/pub/firefox/releases/46.0/win64/ach/Firefox%20Setup%2046.0.exe
$ wget -q http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums
$ wget -q http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums.asc
$ wget -q http://ftp.mozilla.org/pub/firefox/releases/46.0/KEY
# Import Mozilla Releng key into a temporary GPG directory
$ mkdir .tmp-gpg-home && chmod 700 .tmp-gpg-home
$ gpg --homedir .tmp-gpg-home --import KEY
# verify the signature of the checksums file
$ gpg --homedir .tmp-gpg-home --verify firefox-46.0.checksums.asc && echo "OK" || echo "Not OK"
# calculate the SHA512 checksum of the file
$ sha512sum "Firefox Setup 46.0.exe"
c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479  Firefox Setup 46.0.exe
# lookup for the checksum in the checksums file
$ grep c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479 firefox-46.0.checksums
c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479 sha512 46275456 install/sea/firefox-46.0.ach.win64.installer.exe

This is just a temporary workaround and the bug will be fixed ASAP.

Air MozillaSuMo Community Call 27th April 2016

SuMo Community Call 27th April 2016 This is the sumo weekly call We meet as a community every Wednesday 17:00 - 17:30 UTC The etherpad is here: https://public.etherpad-mozilla.org/p/sumo-2016-04-27

Niko MatsakisNon-lexical lifetimes: introduction

Over the last few weeks, I’ve been devoting my free time to fleshing out the theory behind non-lexical lifetimes (NLL). I think I’ve arrived at a pretty good point and I plan to write various posts talking about it. Before getting into the details, though, I wanted to start out with a post that lays out roughly how today’s lexical lifetimes work and gives several examples of problem cases that we would like to solve.

The basic idea of the borrow checker is that values may not be mutated or moved while they are borrowed. But how do we know whether a value is borrowed? The idea is quite simple: whenever you create a borrow, the compiler assigns the resulting reference a lifetime. This lifetime corresponds to the span of the code where the reference may be used. The compiler will infer this lifetime to be the smallest lifetime that it can that still encompasses all the uses of the reference.

Note that Rust uses the term lifetime in a very particular way. In everyday speech, the word lifetime can be used in two distinct – but similar – ways:

  1. The lifetime of a reference, corresponding to the span of time in which that reference is used.
  2. The lifetime of a value, corresponding to the span of time before that value gets freed (or, put another way, before the destructor for the value runs).

This second span of time, which describes how long a value is valid, is of course very important. We refer to that span of time as the value’s scope. Naturally, lifetimes and scopes are linked to one another. Specifically, if you make a reference to a value, the lifetime of that reference cannot outlive the scope of that value. Otherwise, your reference would be pointing into freed memory.

To better see the distinction between lifetime and scope, let’s consider a simple example. In this example, the vector data is borrowed (mutably) and the resulting reference is passed to a function capitalize. Since capitalize does not return the reference back, the lifetime of this borrow will be confined to just that call. The scope of data, in contrast, is much larger, and corresponds to a suffix of the fn body, stretching from the let until the end of the enclosing scope.

fn foo() {
    let mut data = vec!['a', 'b', 'c']; // --+ 'scope
    capitalize(&mut data[..]);          //   |
//  ^~~~~~~~~~~~~~~~~~~~~~~~~ 'lifetime //   |
    data.push('d');                     //   |
    data.push('e');                     //   |
    data.push('f');                     //   |
} // <---------------------------------------+

fn capitalize(data: &mut [char]) {
    // do something
}

This example also demonstrates something else. Lifetimes in Rust today are quite a bit more flexible than scopes (if not as flexible as we might like, hence this RFC):

  • A scope generally corresponds to some block (or, more specifically, a suffix of a block that stretches from the let until the end of the enclosing block) [1].
  • A lifetime, in contrast, can also span an individual expression, as this example demonstrates. The lifetime of the borrow in the example is confined to just the call to capitalize, and doesn’t extend into the rest of the block. This is why the calls to data.push that come below are legal.

So long as a reference is only used within one statement, today’s lifetimes are typically adequate. Problems arise however when you have a reference that spans multiple statements. In that case, the compiler requires the lifetime to be the innermost expression (which is often a block) that encloses both statements, and that is typically much bigger than is really necessary or desired. Let’s look at some example problem cases. Later on, we’ll see how non-lexical lifetimes fixes these cases.

Problem case #1: references assigned into a variable

One common problem case is when a reference is assigned into a variable. Consider this trivial variation of the previous example, where the &mut data[..] slice is not passed directly to capitalize, but is instead stored into a local variable:

fn bar() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..]; // <-+ 'lifetime
    capitalize(slice);         //   |
    data.push('d'); // ERROR!  //   |
    data.push('e'); // ERROR!  //   |
    data.push('f'); // ERROR!  //   |
} // <------------------------------+

The way that the compiler currently works, assigning a reference into a variable means that its lifetime must be as large as the entire scope of that variable. In this case, that means the lifetime is now extended all the way until the end of the block. This in turn means that the calls to data.push are now in error, because they occur during the lifetime of slice. It’s logical, but it’s annoying.

In this particular case, you could resolve the problem by putting slice into its own block:

fn bar() {
    let mut data = vec!['a', 'b', 'c'];
    {
        let slice = &mut data[..]; // <-+ 'lifetime
        capitalize(slice);         //   |
    } // <------------------------------+
    data.push('d'); // OK
    data.push('e'); // OK
    data.push('f'); // OK
}

Since we introduced a new block, the scope of slice is now smaller, and hence the resulting lifetime is smaller. Of course, introducing a block like this is kind of artificial and also not an entirely obvious solution.

Problem case #2: conditional control flow

Another common problem case is when references are used in only one match arm. This most commonly arises around maps. Consider this function, which, given some key, processes the value found in map[key] if it exists, or else inserts a default value:

fn process_or_default<K,V:Default>(map: &mut HashMap<K,V>,
                                   key: K) {
    match map.get_mut(&key) { // -------------+ 'lifetime
        Some(value) => process(value),     // |
        None => {                          // |
            map.insert(key, V::default()); // |
            //  ^~~~~~ ERROR.              // |
        }                                  // |
    } // <------------------------------------+
}

This code will not compile today. The reason is that the map is borrowed as part of the call to get_mut, and that borrow must encompass not only the call to get_mut, but also the Some branch of the match. The innermost expression that encloses both of these expressions is the match itself (as depicted above), and hence the borrow is considered to extend until the end of the match. Unfortunately, the match encloses not only the Some branch, but also the None branch, and hence when we go to insert into the map in the None branch, we get an error that the map is still borrowed.

This particular example is relatively easy to workaround. One can (frequently) move the code for None out from the match like so:

fn process_or_default1<K,V:Default>(map: &mut HashMap<K,V>,
                                    key: K) {
    match map.get_mut(&key) { // -------------+ 'lifetime
        Some(value) => {                   // |
            process(value);                // |
            return;                        // |
        }                                  // |
        None => {                          // |
        }                                  // |
    } // <------------------------------------+
    map.insert(key, V::default());
}

When the code is adjusted this way, the call to map.insert is not part of the match, and hence it is not part of the borrow. While this works, it is of course unfortunate to require these sorts of manipulations, just as it was when we introduced an artificial block in the previous example.

Problem case #3: conditional control flow across functions

While we were able to work around problem case #2 in a relatively simple, if irritating, fashion, there are other variations of conditional control flow that cannot be so easily resolved. This is particularly true when you are returning a reference out of a function. Consider the following function, which returns the value for a key if it exists, and inserts a new value otherwise (for the purposes of this section, assume that the entry API for maps does not exist):

fn get_default<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                               key: K)
                               -> &'m mut V {
    match map.get_mut(&key) { // -------------+ 'm
        Some(value) => value,              // |
        None => {                          // |
            map.insert(key, V::default()); // |
            //  ^~~~~~ ERROR               // |
            map.get_mut(&key).unwrap()     // |
        }                                  // |
    }                                      // |
}                                          // v

At first glance, this code appears quite similar to the code we saw before. And indeed, just as before, it will not compile. But in fact the lifetimes at play are quite different. The reason is that, in the Some branch, the value is being returned out to the caller. Since value is a reference into the map, this implies that the map will remain borrowed until some point in the caller (the point 'm, to be exact). To get a better intuition for what this lifetime parameter 'm represents, consider some hypothetical caller of get_default: the lifetime 'm then represents the span of code in which that caller will use the resulting reference:

fn caller() {
    let mut map = HashMap::new();
    ...
    {
        let v = get_default(&mut map, key); // -+ 'm
          // +-- get_default() -----------+ //  |
          // | match map.get_mut(&key) {  | //  |
          // |   Some(value) => value,    | //  |
          // |   None => {                | //  |
          // |     ..                     | //  |
          // |   }                        | //  |
          // +----------------------------+ //  |
        process(v);                         //  |
    } // <--------------------------------------+
    ...
}

If we attempt the same workaround for this case that we tried in the previous example, we will find that it does not work:

fn get_default1<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                                key: K)
                                -> &'m mut V {
    match map.get_mut(&key) { // -------------+ 'm
        Some(value) => return value,       // |
        None => { }                        // |
    }                                      // |
    map.insert(key, V::default());         // |
    //  ^~~~~~ ERROR (still)                  |
    map.get_mut(&key).unwrap()             // |
}                                          // v

Whereas before the lifetime of value was confined to the match, this new lifetime extends out into the caller, and therefore the borrow does not end just because we exited the match. Hence it is still in scope when we attempt to call insert after the match.

The workaround for this problem is a bit more involved. It relies on the fact that the borrow checker uses the precise control-flow of the function to determine what borrows are in scope.

fn get_default2<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                                key: K)
                                -> &'m mut V {
    if map.contains_key(&key) {
    // ^~~~~~~~~~~~~~~~~~~~~~ 'n
        return match map.get_mut(&key) { // + 'm
            Some(value) => value,        // |
            None => unreachable!()       // |
        };                               // v
    }

    // At this point, `map.get_mut` was never
    // called! (As opposed to having been called,
    // but its result no longer being in use.)
    map.insert(key, V::default()); // OK now.
    map.get_mut(&key).unwrap()
}

What has changed here is that we moved the call to map.get_mut inside of an if, and we have set things up so that the if body unconditionally returns. What this means is that a borrow begins at the point of get_mut, and that borrow lasts until the point 'm in the caller, but the borrow checker can see that this borrow will not have even started outside of the if. So it does not consider the borrow in scope at the point where we call map.insert.

This workaround is more troublesome than the others, because the resulting code is actually less efficient at runtime, since it must do multiple lookups.

It’s worth noting that Rust’s hashmaps include an entry API that one could use to implement this function today. The resulting code is both nicer to read and more efficient even than the original version, since it avoids extra lookups on the not present path as well:

fn get_default3<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                                key: K)
                                -> &'m mut V {
    map.entry(key)
       .or_insert_with(|| V::default())
}

Regardless, the problem exists for other data structures besides HashMap, so it would be nice if the original code passed the borrow checker, even if in practice using the entry API would be preferable. (Interestingly, the limitation of the borrow checker here was one of the motivations for developing the entry API in the first place!)

Conclusion

This post looked at various examples of Rust code that do not compile today, and showed how they can be fixed using today’s system. While it’s good that workarounds exist, it’d be better if the code just compiled as is. In an upcoming post, I will outline my plan for how to modify the compiler to achieve just that.

Endnotes

1. Scopes always correspond to blocks with one exception: the scope of a temporary value is sometimes the enclosing statement.
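
A tiny illustration of that exception (my own example, not from the original post):

fn main() {
    // The String returned by format!() is a temporary: it lives just long
    // enough for len() to borrow it, and is dropped at the end of this
    // statement rather than at the end of the enclosing block.
    let n = format!("{}{}", "ab", "cd").len();
    println!("{}", n);
}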

Air MozillaBay Area Rust Meetup April 2016

Bay Area Rust Meetup April 2016 Rust meetup on the subject of operating systems.

Air MozillaConnected Devices Weekly Program Review, 26 Apr 2016

Connected Devices Weekly Program Review Weekly project updates from the Mozilla Connected Devices team.

Richard NewmanDifferent kinds of storage

I’ve been spending most of my time so far on Project Tofino thinking about how a user agent stores data.

A user agent is software that mediates your interaction with the world. A web browser is one particular kind of user agent: one that fetches parts of the web and shows them to you.

(As a sidenote: browsers are incredibly complicated, not just for the obvious reasons of document rendering and navigation, but also because parts of the web need to run code on your machine and parts of it are actively trying to attack and track you. One of a browser’s responsibilities is to keep you safe from the web.)

Chewing on Redux, separation of concerns, and Electron’s process model led to us drawing a distinction between a kind of ‘profile service’ and the front-end browser itself, with ‘profile’ defined as the data stored and used by a traditional browser window. You can see the guts of this distinction in some of our development docs.

The profile service stores full persistent history and data like it. The front-end, by contrast, has a pure Redux data model that’s much closer to what it needs to show UI — e.g., rather than all of the user’s starred pages, just a list of the user’s five most recent.

The front-end is responsible for fetching pages and showing the UI around them. The back-end service is responsible for storing data and answering questions about it from the front-end.

To build that persistent storage we opted for a mostly event-based model: simple, declarative statements about the user’s activity, stored in SQLite. SQLite gives us durability and known performance characteristics in an embedded database.

On top of this we can layer various views (materialized or not). The profile service takes commands as input and pushes out diffs, and the storage itself handles writes by logging events and answering queries through views. This is the CQRS concept applied to an embedded store: we use different representations for readers and writers, so we can think more clearly about the transformations between them.
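
As a purely illustrative sketch (mine, not Tofino code, and in-memory rather than SQLite) of that reader/writer split: commands append events to a log, and queries are answered from a view derived from the log:

// the write side: an append-only event log
const events = [];

// a "command": record a declarative statement about user activity
function recordVisit(url, ts) {
  events.push({ type: "visit", url: url, ts: ts });
}

// a "query": answered from a read model derived from the log
function recentVisits(limit) {
  return events
    .filter(e => e.type === "visit")
    .sort((a, b) => b.ts - a.ts)
    .slice(0, limit)
    .map(e => e.url);
}

recordVisit("https://example.com/", Date.now());
console.log(recentVisits(5));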

Where next?

One of the reasons we have a separate service is to acknowledge that it might stick around when there are no browser windows open, and that it might be doing work other than serving the immediate needs of a browser window. Perhaps the service is pre-fetching pages, or synchronizing your data in the background, or trying to figure out what you want to read next. Perhaps you can interact with the service from something other than a browser window!

Some of those things need different kinds of storage. Ad hoc integrations might be best served by a document store; recommendations might warrant some kind of graph database.

When we look through that lens we no longer have just a profile service wrapping profile storage. We have a more general user agent service, and one of the data sources it manages is your profile data.

Mozilla Addons BlogMigrating Popup ALT Attribute from XUL/XPCOM to WebExtensions

Today’s post comes from Piro, the developer of Popup ALT Attribute, in addition to 40 other add-ons. He shares his thoughts about migrating XUL/XPCOM add-ons to WebExtensions, and shows us how he did it with Popup ALT Attribute. You can see the full text of this post on his personal blog.

***

Hello, add-on developers. My name is YUKI Hiroshi aka Piro, a developer of Firefox add-ons. For many years I developed Firefox and Thunderbird add-ons personally and for business, based on XUL and XPCOM.

I recently started to research which APIs are required to migrate my add-ons to WebExtensions, because Mozilla announced that XUL/XPCOM add-ons will be deprecated at the end of 2017. I realized that only some add-ons can be migrated with currently available APIs, and Popup ALT Attribute is one such add-on.

Here is the story of how I migrated it.

What’s the add-on?

Popup ALT Attribute is an ancient add-on started in 2002, to show what is written in the alt attribute of img HTML elements on web pages. By default, Firefox shows only the title attribute as a tooltip.

Initially, the add-on was implemented to replace an internal function FillInHTMLTooltip() of Firefox itself.

In February 2016, I migrated it to be e10s-compatible. It is worth noting that depending on your add-on, if you can migrate it directly to WebExtensions, it will be e10s-compatible by default.

Re-formatting in the WebExtensions style

I read the tutorial on how to build a new simple WebExtensions-based add-on from scratch before migration, and I realized that bootstrapped extensions are similar to WebExtensions add-ons:

  • They are dynamically installed and uninstalled.
  • They are mainly based on JavaScript code and some static manifest files.

My add-on was easily re-formatted as a WebExtensions add-on, because I had already migrated it to a bootstrapped extension.

This is the initial version of the manifest.json I wrote. There was no localization or options UI yet:

{
  "manifest_version": 2,
  "name": "Popup ALT Attribute",
  "version": "4.0a1",
  "description": "Popups alternate texts of images or others like NetscapeCommunicator(Navigator) 4.x, and show long descriptions in the multi-row tooltip.",
  "icons": { "32": "icons/icon.png" },
  "applications": {
    "gecko": { "id": "{61FD08D8-A2CB-46c0-B36D-3F531AC53C12}",
               "strict_min_version": "48.0a1" }
  },
  "content_scripts": [
    { "all_frames": true,
      "matches": ["<all_urls>"],
      "js": ["content_scripts/content.js"],
      "run_at": "document_start" }
  ]
}

I had already separated the main script into a frame script and a loader for it. On the other hand, manifest.json can have some manifest keys to describe how scripts are loaded. It means that I don’t need to put my custom loaders in the package anymore. Actually, a script for any web page can be loaded with the content_scripts rule in the above sample. See the documentation for content_scripts for more details.

So finally only 3 files were left.

Before:

+ install.rdf
+ icon.png
+ [components]
+ [modules]
+ [content]
    + content-utils.js

And after:

+ manifest.json (migrated from install.rdf)
+ [icons]
|   + icon.png (moved)
+ [content_scripts]
    + content.js (moved and migrated from content-utils.js)

And I still had to isolate my frame script from XPCOM.

  • The script touched nsIPrefBranch and some XPCOM components via XPConnect, so they were temporarily commented out.
  • User preferences were not available and only default configurations were there as fixed values.
  • Some constant properties accessed, like Ci.nsIDOMNode.ELEMENT_NODE, had to be replaced as Node.ELEMENT_NODE.
  • The listener for mousemove events from web pages was attached to the global namespace for a frame script, but it was re-attached to the document itself of each web page, because the script was now executed on each web page directly.

Localization

For the old install.rdf I had a localized description. In WebExtensions add-ons I had to do it in a different way. See how to localize messages for details. In short, I did the following:

Added files to define localized descriptions:

+ manifest.json
+ [icons]
+ [content_scripts]
+ [_locales]
    + [en_US]
    |   + messages.json (added)
    + [ja]
        + messages.json (added)

Note, en_US is different from en-US in install.rdf.

English locale, _locales/en_US/messages.json was:

{
  "name": { "message": "Popup ALT Attribute" },
  "description": { "message": "Popups alternate texts of images or others like NetscapeCommunicator(Navigator) 4.x, and show long descriptions in the multi-row tooltip." }
}

Japanese locale, _locales/ja/messages.json was also included. And, I had to update my manifest.json to embed localized messages:

{
  "manifest_version": 2,
  "name": "__MSG_name__",
  "version": "4.0a1",
  "description": "__MSG_description__",
  "default_locale": "en_US",
  ...

__MSG_****__ in string values are automatically replaced with localized messages. You need to specify the default locale manually via the default_locale key.

Sadly, Firefox 45 does not support the localization feature, so you need to use Nightly 48.0a1 or newer to try localization.

User preferences

Currently, WebExtensions does not provide any feature completely compatible with nsIPrefBranch. Instead, there are simple storage APIs. They can be used as an alternative to nsIPrefBranch to set and get user preferences. This add-on had no configuration UI but had some secret preferences to control its advanced features, so I did it for future migrations of my other add-ons, as a trial.
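
For example (a minimal sketch with a made-up preference name, not the add-on’s actual code), reading and writing a value from a background script looks like this; get() accepts an object whose values serve as defaults:

// save a value (returns a Promise)
browser.storage.local.set({ "show-tooltip": true });

// read it back later, with a default for when it was never set
browser.storage.local.get({ "show-tooltip": true }).then(prefs => {
  console.log(prefs["show-tooltip"]);
});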

Then I encountered a large limitation: the storage API is not available in content scripts. I had to create a background script just to access the storage, and communicate with it via the inter-sandboxes messaging system. [Updated 4/27/16: bug 1197346 has been fixed on Nightly 49.0a1, so now you don’t need any hack to access the storage system from content scripts anymore. Now, my library (Configs.js) just provides easy access for configuration values instead of the native storage API.]

Finally, I created a tiny library to do that. I don’t describe how I did it here, but if you hope to know details, please see the source. There are just 177 lines.

I had to update my manifest.json to use the library from both the background page and the content script, like:

  "background": {
    "scripts": [
      "common/Configs.js", /* the library itself */
      "common/common.js"   /* codes to use the library */
    ]
  },
  "content_scripts": [
    { "all_frames": true,
      "matches": ["<all_urls>"],
      "js": [
        "common/Configs.js", /* the library itself */
        "common/common.js",  /* codes to use the library */
        "content_scripts/content.js"
      ],
      "run_at": "document_start" }
  ]

Scripts listed in the same section share a namespace for the section. I didn’t have to write any code like require() to load a script from others. Instead, I had to be careful about the listing order of scripts, and wrote a script requiring a library after the library itself, in each list.

One last problem was: how to do something like the about:config or the MCD — general methods to control secret preferences across add-ons.

For my business clients, I usually provide add-ons and use MCD to lock their configurations. (There are some common requirements for business use of Firefox, so combinations of add-ons and MCD are more reasonable than creating private builds of Firefox with different configurations for each client.)

I think I still have to research around this point.

Options UI

WebExtensions provides a feature to create options pages for add-ons. It is also not supported on Firefox 45, so you need to use Nightly 48.0a1 for now. As I previously said, this add-on didn’t have its configuration UI, but I implemented it as a trial.

In XUL/XPCOM add-ons, rich UI elements like <checkbox>, <textbox>, <menulist>, and more are available, but these are going away at the end of next year. So I had to implement a custom configuration UI based on pure HTML and JavaScript. (If you need more rich UI elements, some known libraries for web applications will help you.)

In this step I created two libraries:

Conclusion

I’ve successfully migrated my Popup ALT Attribute add-on from XUL/XPCOM to WebExtensions. Now it is just a branch but I’ll release it after Firefox 48 is available.

Here are reasons why I could do it:

  • It was a bootstrapped add-on, so I had already isolated the add-on from all destructive changes.
  • The core implementation of the add-on was similar to a simple user script. Essential actions of the add-on were enclosed inside the content area, and no privilege was required to do that.

However, it is a rare case for me. My other 40+ add-ons require some privilege, and/or they work outside the content area. Most of my cases are such non-typical add-ons.

I have to do triage, plan, and request new APIs not only for myself but also for other XUL/XPCOM add-on developers.

Thank you for reading.

The Mozilla BlogUpdate to Firefox Released Today

The latest version of Firefox was released today. It features an improved look and feel for Linux users, a minor security improvement and additional updates for all Firefox users.

The update to Firefox for Android features minor changes, including an improvement to user notifications and clearer homescreen shortcut icons.

More information:

Air MozillaMartes mozilleros, 26 Apr 2016

Martes mozilleros Bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects.

Marcia KnousNightly is where I will live

After some time working on Firefox OS and Connected Devices, I am moving back to Desktop land. Going forward I will be working with the Release Management Team as the Nightly Program Manager. That means I would love to work with all of you to identify any potential issues in Nightly and help bring them to resolution. To that end, I have done a few things. First, we now have a Telegram Group for Nightly Testers. Feel free to join that group if you want to keep up with issues we are

David LawrenceHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1195736] intermittent internal error: “file error – nav_link: not found” (also manifests as fields_lhs: not found)

discuss these changes on mozilla.tools.bmo.


Daniel GlazmanFirst things first

Currently implementing many new features into Postbox, I carefully read (several times) Mark Surman's recent article on Thunderbird's future. I also read Simon Phipps's report twice. Then the contract offer for a Thunderbird Architect posted by Mozilla must be read too:

... Thunderbird is facing a number of technical challenges, including but not limited to:

  • ...
  • The possible future deprecation of XUL, its current user interface technology and XPCOM, its current component technology, by Mozilla
  • ...

In practice, the last line above means for Thunderbird:

  1. rewrite the whole UI and the whole JS layer with it
  2. most probably rewrite the whole SMTP/MIME/POP/IMAP/LDAP/... layer
  3. most probably have a new Add-on layer or, far worse, no more Add-ons

Well, sorry to say, but that’s a bit of a « technical challenge »... So yes, that’s indeed a « fork in the road », but let’s be serious for a second: it’s unfortunately this kind of fork; rewriting the app is not a question of if but only a question of when. Unless Thunderbird dies entirely, of course.

Evaluating potential hosts for Thunderbird, and a fortiori choosing one, seems to me rather difficult without first discussing the XUL/XPCOM-less future of the app, i.e. without having in hand the second milestone delivered by the Thunderbird Architect. First things first. I would also be interested in knowing how many people MoCo will dedicate to the deXULXPCOMification of Firefox; that would allow some extrapolations and some pretty solid (and probably rather insurmountable...) requirements for TB's host.

Last but not least and from a more personal point of view, I feel devastated confronting Mark's article and the Mozilla Manifesto.

Daniel StenbergAbsorbing 1,000 emails per day

Some people say email is dead. Some people say there are “email killers” and bring up a bunch of chat and instant messaging services. I think those people communicate far too little to understand how email can scale.

I receive up to around 1,000 emails per day. I average on a little less but I do have spikes way above.

Why do I get a thousand emails?

Primarily because I participate on a lot of mailing lists. I run a handful of open source projects myself, each with at least one list. I follow a bunch more projects; more mailing lists. We have a whole set of mailing lists at work (Mozilla) and I participate and follow several groups in the IETF. Lists and lists. I discuss things with friends on a few private mailing lists. I get notifications from services about things that happen (commits, bugs submitted, builds that break, things that need to get looked at). Mails, mails and mails.

Don’t get me wrong. I prefer email to web forums and stuff because email allows me to participate in literally hundreds of communities from a single spot in an asynchronous manner. That’s a good thing. I would not be able to do the same thing if I had to use one of those “email killers” or web forums.

Unwanted email

I unsubscribe from lists that I grow tired from. I stamp down on spam really hard and I run aggressive filters and blacklists that actually make me receive rather few spam emails these days, percentage wise. There are nowadays about 3,000 emails per month addressed to me that my mail server accepts that are then classified as spam by spamassassin. I used to receive a lot more before we started using better blacklists. (During some periods in the past I received well over a thousand spam emails per day.) Only 2-3 emails per day out of those spam emails fail to get marked as spam correctly and subsequently show up in my inbox.

Flood management

My solution to handling this steady, high-paced stream of incoming data is prioritization and putting things in different bins. Different inboxes.

  1. Filter incoming email. Save the email into its corresponding mailbox. At this very moment, I have about 30 named inboxes that I read. I read them in order, top to bottom, as they’re sorted in rough order of importance (to me).
  2. Mails that don’t match an existing mailing list or topic (and so don’t get stored into one of the 28 “topic boxes”) run into another check: is the sender a known “friend”? That’s a loose term I use, but it basically means that the mail is from an email address that I have had conversations with before or that I know or trust, etc. Mails from “friends” get the honor of getting put in mailbox 0. The primary one. If the mail comes from someone not listed as a friend, it’ll end up in my “suspect” mailbox. That’s mailbox 1.
  3. Some of the emails get the honor of getting forwarded to a cloud email service for which I have an app in my phone so that I can get a sense of important mail that arrive. But I basically never respond to email using my phone or using a web interface.
  4. I also use the “spam level” in spams to save them in different spam boxes. The mailbox receiving the highest spam level emails is just erased at random intervals without ever being read (unless I’m tracking down a problem or something) and the “normal” spam mailbox I only check every once in a while just to make sure my filters are not hiding real mails in there.

Reading

I monitor my incoming mails pretty frequently all through the day – every day. My wife calls me obsessed and maybe I am. But I find it much easier to handle the emails a little at a time rather than to wait and have it pile up to huge lumps to deal with.

I receive mail at my own server and I read/write my email using Alpine, a text based mail client that really excels at allowing me to plow through vast amounts of email in a short time – something I can’t say that any UI or web based mail client I’ve tried has managed to do at a similar degree.

A snapshot from my mailbox from a while ago looked like this, with names and some topics blurred out. This is ‘INBOX’, which is the main and highest prioritized one for me.

alpine screenshot

I have my mail client set to automatically go to the next inbox when I’m done reading this one. That makes me read them in priority order. I start with the INBOX one where supposedly the most important email arrives, then I check the “suspect” one and then I go down the topic inboxes one by one (my mail client moves on to the next one automatically). Until either I get overwhelmed and just return to the main box for now, or I finish them all up.

I tend to try to deal with mails immediately, or I mark them as ‘important’ and store them in the main mailbox so that I can find them again easily and quickly.

I try to only keep mails around in my mailbox that concern ongoing topics, discussions or current matters of concern. Everything else should get stored away. It is hard work to keep the number of emails in there low. As you all know.

Writing email

I averaged at less than 200 emails written per month during 2015. That’s 6-7 per day.

That makes over 150 received emails for every email sent.

Allen Wirfs-BrockSlide Bite: Survival of the Fittest

incrementalevolution

The first ten or fifteen years of a computing era is a period of chaotic experimentation. Early product concepts rapidly evolve via both incremental and disruptive innovations. Radical ideas are tried. Some succeed and some fail. Survival of the fittest prevails. By mid-era, new stable norms should be established. But we can’t predict the exact details.

Andrew TruongExperience, Learn, Revitalize and Share: The Adventures of High School

High school was an adventure. This time around, I was in courses that I picked, not ones determined by someone else based on how you ranked the offered courses. At the end of junior high, we were left with the phrase from every teacher that we would no longer be with the people we usually hang out with. How true can that be? I am not able to say.


I started my first year of high school off rough. However, I was able to adapt quite easily by attending leadership seminars every week. I started to get a little more involved with events around the school and eventually around the community. The teachers were far different from what our junior high teachers described them as. They weren't uncaring, they didn't leave you on your own, and they were helpful with finding your way around. They were the exact opposite of what our junior high teachers told us. Perhaps they told us that "lie" to prepare us, or maybe they went through something completely different during their time.

First year in, I had an assortment of classes and I felt good and at ease with them. I was fortunate enough to have every other day free in the second semester, where I was able to go to leadership and further enhance my life skills. On the regular days, I had a class where I was able to do homework and receive additional help when I needed it, due to the fact that I didn't do too well in junior high. Nonetheless, I excelled in the main course, wasting most of my time in the additional help class.


Grade 11 rolled by and I took a block (there are 4 blocks in a day) of my day during the first semester to go to leadership. There I was able to further enhance my abilities, be assigned responsibilities and earn the trust of the department head. Furthermore, I ran for students' union president, though I was not successful - it may have benefited me instead. There's nothing much to say, as things went a certain direction and it worked out quite well.

Into my last year of high school, there was a new development in our family and household. This year was extremely important, as I had to pass all my courses in order to graduate and move on to post-secondary. I was satisfied with my first semester, where my courses went pretty well. I still took a block of my day out of the first semester to go to leadership. But this time, I took on the position of chairperson of Spirit Wear for the school year. Designing, advertising, and promoting what we had to sell was a wonderful journey. I also met some great people during my spare time in leadership and I learned a lot more about myself and what I was socially doing wrong. That realization of what I was doing wrong dawned upon me and led me to become who I am today.

The second semester came around the corner and it was a roller coaster for me. For some odd reason, the course I had excelled in continuously for the two years before was now giving me trouble. It was partially a leap from what I knew and had learned to something completely different. Part of the blame for this goes to the instructor; I knew from how others had struggled with this particular teacher in the past that I would too - even though I told myself I wouldn't. I got through it with my ups and downs, despite being worried about whether or not I would be able to graduate and move on to post-secondary. In the end, I graduated and received my high school diploma.

Mark SurmanFirefox and Thunderbird: A Fork in the Road

Firefox and Thunderbird have reached a fork in the road: it’s now the right time for them to part ways on both a technical and organizational level.

In line with the process we started in 2012, today we’re taking another step towards the independence of Thunderbird. We’re posting a report authored by open source leader Simon Phipps that explores options for a future organizational home for Thunderbird. We’ve also started the process of helping the Thunderbird Council chart a course forward for Thunderbird’s future technical direction, by posting a job specification for a technical architect.

In this post, I want to take the time to go over the origins of Thunderbird and Firefox, the process for Thunderbird’s independence and update you on where we are taking this next. For those close to Mozilla, both the setting and the current process may already be clear. For those who haven’t been following the process, I wanted to write a longer post with all the context. If you are interested in that context, read on.

Summary

Much of Mozilla, including the leadership team, believes that focusing on the web through Firefox offers a vastly better chance of moving the Internet industry to a more open place than investing further in Thunderbird—or continuing to attend to both products.

Many of us remain committed Thunderbird users and want to see Thunderbird remain a healthy community and product. But Firefox and Thunderbird face different challenges, and have different goals and different measures of success. Our actions regarding Thunderbird should be viewed in this light.

Success for Firefox means continued relevance in the mass consumer market as a way for people to access, shape and feel safe across many devices. With hundreds of millions of users on both desktop and mobile, we have the raw material for this success. However, if we want Firefox to continue to have an impact on how developers and consumers interact with the Internet, we need to move much more quickly to innovate on mobile and in the cloud. Mozilla is putting the majority of its human and financial resources into Firefox product innovation.

In contrast, success for Thunderbird means remaining a reliable and stable open source desktop email client. While many people still value the security and independence that come with desktop email (I am one of them), the overall number of such people in the world is shrinking. In 2012, around when desktop email first became the exception rather than the rule, Mozilla started to reduce its investment and transitioned Thunderbird into a fully volunteer-run open source project.

Given these different paths, it should be no surprise that tensions have arisen as we’ve tried to maintain Firefox and Thunderbird on top of a common underlying code base and common release engineering system. In December, we started a process to deal with those release engineering issues, and also to find a long-term organizational home for Thunderbird.

The Past

On a technical level, Firefox and Thunderbird have common roots, emerging from the browser and email components of the Mozilla Application Suite nearly 15 years ago. When they were turned into separate products, they also maintained a common set of underlying software components, as well as a shared build and release infrastructure. Both products continue to be intertwined in this manner today.

Firefox and Thunderbird also share common organizational roots. Both were incorporated by the Mozilla Foundation in 2003, and from the beginning, the Foundation aimed to make these products successful in the mainstream consumer Internet market. We believed—and still believe—mass-market open source products are our biggest lever in our efforts to ensure the Internet remains a public resource, open and accessible to all.

Based on this belief, we set up Mozilla Corporation (MoCo) and Mozilla Messaging (MoMo) as commercial subsidiaries of the Mozilla Foundation. These organizations were each charged with innovating and growing a market: one in web access, the other in messaging. We succeeded in making the browser a mass market success, but we were not able to grow the same kind of market for email or messaging.

In 2012, we shut down Mozilla Messaging. That’s when Thunderbird became a purely volunteer-run project.

The Present

Since 2012, we have been doggedly focused on how to take Mozilla’s mission into the future.

In the Mozilla Corporation, we have tried to innovate and sustain Firefox’s relevance in the browser market while breaking into new product categories—first with smartphones, and now in a variety of connected devices.

In the Mozilla Foundation, we have invested in a broader global movement of people who stand for the Internet as a public resource. In 2016, we are focused on becoming a loud and clear champion on open internet issues. This includes significant investments in fuelling the open internet movement and growing a next generation of leaders who will stand up for the web.

These are hard and important things to do—and we have not yet succeeded at them to the level that we need to.

During these shifts, we invested less and less of Mozilla’s resources in Thunderbird, with the volunteer community developing and sustaining the product. MoCo continues to provide the underlying code and build and release infrastructure, but there are no dedicated staff focused on Thunderbird.

Many people who work on Firefox care about Thunderbird and do everything they can to accommodate Thunderbird as they evolve the code base, which slows down Firefox development when it needs to be speeding up. People in the Thunderbird community also remain committed to building on the Firefox codebase. This puts pressure on a small, dedicated group of volunteer coders who struggle to keep up. And people in the Mozilla Foundation feel similar pressure to help the Thunderbird community with donations and community management, which distracts them from the education and advocacy work that’s needed to grow the open internet movement on a global level.

Everyone has the right motivations, and yet everyone is stretched thin and frustrated. And Mozilla’s strategic priorities are elsewhere.

The Future

In late 2015, Mozilla leadership and the Thunderbird Council jointly agreed to:

a) take a new approach to release engineering, as a first step towards putting Thunderbird on the path towards technical independence from Firefox; and

b) identify the organizational home that will best allow Thunderbird to thrive as a volunteer-run project.

Mozilla has already posted a proposal for separating Thunderbird from Firefox release engineering infrastructure. In order to move the technical part of this plan further ahead and address some of the other challenges Thunderbird faces, we agreed to contract for a short period of time with a technical architect who can support the Thunderbird community as they decide what path Thunderbird should take. We have a request for proposals for this position here.

On the organizational front, we hired open source leader Simon Phipps to look at different long-term options for a home for Thunderbird, including: The Document Foundation, Gnome, Mozilla Foundation, and The Software Freedom Conservancy. Simon’s initial report will be posted today in the Thunderbird Planning online forum and is currently being reviewed by both Mozilla and the Thunderbird Council.

With the right technical and organizational paths forward, both Firefox and Thunderbird will have a better chance at success. We believe Firefox will evolve into something consumers need and love for a long time—a way to take the browser into experiences across all devices. But we need to move fast to be effective.

We also believe there’s still a place for stable desktop email, especially if it includes encryption. The Thunderbird community will attract new volunteers and funders, and we’re digging in to help make that happen. We will provide more updates as things progress further.

The post Firefox and Thunderbird: A Fork in the Road appeared first on Mark Surman.

Mike TaylorString.prototype.contains, use your judgement

I was lurking on the darkweb (stackoverflow) looking for old bugs when I ran into this gem: "How can I check if one string contains another substring?".

Pretty normal question for people new to programming (like myself), and the #3 answer contains the following suggestion:

String.prototype.contains = function(it) { return this.indexOf(it) != -1; };

Totally does what the person was asking for. Good stuff.

(And as a result, the person who gave the answer is swimming in stackoverflow points—which is how you buy illegal things on the darkweb.)

The spooky part is that back in 2011, in his response to this answer, zachleat linked to a classic Zakas post, "Don’t modify objects you don’t own".

From the article,

Maintainable code is code that you don’t need to modify when the browser changes. You don’t know how browser developers will evolve existing browsers and the rate at which those evolutions will take place.

You might remember that ES6 tried to add String.prototype.contains, but it broke a number of sites (especially those using MooTools because the two implementations had different semantics) and had to be renamed to String.prototype.includes.

To the OP's credit, they came back with an edit:

Note: see the comments below for a valid argument for not using this. My advice: use your own judgement.

The morals to this story are obvious: the darkweb is as scary as they say. And Zach Leatherman might be a witch.

This Week In RustThese Weeks in Rust 127

Hello and welcome to another multi-week issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

Crate of the Week

This week's Crate of the Week is owning_ref, which contains a reference type that can carry its owner with it. Thanks to Diwic for the suggestion!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

186 pull requests were merged in the last two weeks.

Notable changes

New Contributors

  • Alec S
  • Andrey Tonkih
  • c4rlo
  • David Hewitt
  • David Tolnay
  • Deepak Kannan
  • Gigih Aji Ibrahim
  • jocki84
  • Jonathan Turner
  • Kaiyin Zhong
  • Lukas Kalbertodt
  • Lukas Pustina
  • Maxim Samburskiy
  • Raph Levien
  • rkjnsn
  • Sander Maijers
  • Szabolcs Berecz

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Cow is still criminally underused in a lot of code bases

I suggest we make a new slogan to remedy this: "To err is human, to moo bovine." (I may or may not have shamelessly stolen this from this bug report)

so_you_like_donuts on reddit.

Thanks to killercup for the suggestion.

Submit your quotes for next week!

Daniel Stenbergfcurl is fread and friends for URLs

This whole family of functions – fopen, fread, fwrite, fgets, fclose and more – has been defined in the C standard since C89. You can’t really call yourself a C programmer without knowing them and probably even using them in at least a few places.

The charm of these is that they’re standard, they’re easy to use and they’re available everywhere there’s a C compiler.

A basic example that just reads a file from disk and writes it to stdout could look like this:

FILE *file;

file = fopen("hello.txt", "r");
if(file) {
  char buffer[256];
  while(1) {
    /* read up to sizeof(buffer) bytes; rc is the number of bytes read */
    size_t rc = fread(buffer, 1, sizeof(buffer), file);
    if(rc > 0)
      fwrite(buffer, 1, rc, stdout);
    else
      break;
  }
  fclose(file);
}

Imagine you’d like to switch this example, or one of your actual real-world programs that uses the fopen() family of functions to read or write files, to instead read and write files from and to the Internet using your favorite Internet protocols. How would you do that without having to change your code a lot and do a major refactoring job?

Enter fcurl

I’ve started to work on a library that provides a look-alike API with matching functions and behaviors, but that allows you to specify a URL instead of a file name in fopen(). I call it fcurl. (Much inspired by the libcurl example fopen.c, which I wrote the first version of back in 2002!)

It is of course open source and is powered by libcurl.

The project is in its early infancy. I think it would be interesting to try it out and I’ve mentioned the idea to a few people that have shown interest. I really can’t make this happen all on my own anyway so while I’ve created a first embryo, it will take some time before it gets truly useful. Help from others would be greatly appreciated of course.

Using this API, a version of the above example that reads data from an HTTPS site instead of a local file could look like:

FCURL *file;

file = fcurl_open("https://daniel.haxx.se/", "r");
if(file) {
  char buffer[256];
  while(1) {
    /* same pattern as above: rc is the number of bytes read */
    size_t rc = fcurl_read(buffer, 1, sizeof(buffer), file);
    if(rc > 0)
      fwrite(buffer, 1, rc, stdout);
    else
      break;
  }
  fcurl_close(file);
}

And it could even read a local file using the file:// scheme.

Drop-in replacement

The idea here is to make the alternative functions have new names but as far as possible accept the same input arguments, return the same return codes and so on.

If we do it right, you could possibly even convert an existing program with just a set of #defines at the top without even having to change the code!

Something like this:

#define FILE FCURL
#define fopen(x,y) fcurl_open(x, y)
#define fclose(x) fcurl_close(x)

I think it is worth considering a way to provide an official macro set like that for those who’d like to switch easily (?) and quickly.

Fun things to consider

1. for non-scheme input, use normal fopen?

An interesting take is probably to make fcurl_open() treat input specified without a “scheme://” as a local file, and then pass it to fopen() under the hood. That would then enable even more code to switch to fcurl, since all the existing use cases with local file names would just continue to work.
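
As a small, hedged sketch of that idea (the helper below is invented for illustration and is not part of fcurl), the check itself is cheap:

#include <string.h>

/* Illustration only: does the name look like "scheme://something"?
   fcurl_open() could use a test like this to route plain file names
   to the regular fopen() code path under the hood. */
static int has_scheme(const char *name)
{
  const char *sep = strstr(name, "://");
  return sep != NULL && sep != name; /* need at least one scheme character */
}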

2. LD_PRELOAD

An interesting area of deeper research around this could be to provide LD_PRELOAD replacements for the functions, so that not even any source code would need to be changed and already-built existing binaries could be given this functionality.
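
To illustrate what such a replacement could look like, here is the classic interposition pattern – not anything fcurl ships today. A preloaded library can override fopen() and still reach the original through dlsym(); a real shim would hand “scheme://” names over to fcurl instead of just printing a message:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <dlfcn.h>

/* Build with: cc -shared -fPIC -o preload.so preload.c -ldl
   Run with:   LD_PRELOAD=./preload.so ./existing-binary */
FILE *fopen(const char *name, const char *mode)
{
  static FILE *(*real_fopen)(const char *, const char *);
  if (!real_fopen)
    real_fopen = (FILE *(*)(const char *, const char *))
                 dlsym(RTLD_NEXT, "fopen");

  if (strstr(name, "://"))
    fprintf(stderr, "would hand %s over to fcurl here\n", name);

  return real_fopen(name, mode);
}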

3. fopencookie

There’s also the GNU libc’s fopencookie concept to figure out if that is something for fcurl to support/use. BSD and OS X have something similar called funopen.
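
As a hedged sketch of the glibc route (it assumes the FCURL type and the fcurl_read()/fcurl_close() calls from the example above; the "fcurl.h" header name and the wrapper function are made up for illustration), fopencookie lets a real FILE* be backed by an fcurl handle:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/types.h>
#include "fcurl.h"  /* hypothetical header providing FCURL, fcurl_read, fcurl_close */

/* glibc cookie callbacks that simply forward to fcurl */
static ssize_t cookie_read(void *cookie, char *buf, size_t size)
{
  return (ssize_t)fcurl_read(buf, 1, size, (FCURL *)cookie);
}

static int cookie_close(void *cookie)
{
  fcurl_close((FCURL *)cookie);
  return 0;
}

/* wrap an already opened fcurl handle in a standard, read-only FILE* */
FILE *fcurl_to_stdio(FCURL *handle)
{
  cookie_io_functions_t io = { cookie_read, NULL, NULL, cookie_close };
  return fopencookie(handle, "r", io);
}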

4. merge in official libcurl

If this turns out to be useful, appreciated and good, we could consider moving the API in under the curl project’s umbrella and possibly eventually even making it part of the actual libcurl. But hey, we’re far away from that and I’m not saying that is even the best idea…

Your input is valuable

Please file issues or pull-requests. Let’s see where we can take this!

Michael KohlerReps Council Working Days Berlin 2016

From April 15th through April 17th the Mozilla Reps Council met in Berlin together with the Participation Team to discuss the Working groups and overall strategy topics. Unfortunately I couldn’t attend on Friday (working day 1) since I had to take my exams. Therefore I could only attend Saturday and Sunday. Nevertheless I think I could help out a lot and definitely learned a lot doing this :) This blog post reflects my personal opinions; the others will write a blog post as well to give you a more concise view of this weekend.

 

Alignment Working Group

The first session on Saturday was about the Alignment WG. Before the weekend we (more or less) finished the proposal. This allowed us to discuss the last few open questions, which are now all integrated in the proposal. This will only need review by Konstantina to make sure I haven’t forgotten to add anything from the session and then we can start implementing it. We are sure that this will formalize the interaction between Mozilla goals and Reps goals, stay tuned for more information, we’re currently working on a communication strategy for all the RepsNext changes to make it easier and more fun for you to get informed about the changes.

Meta Working Group

For the Meta Working Group we had more open questions and therefore decided to do brainstorming in three teams. The questions were:

  • Who can join Council?
  • Which recognition mechanisms should be implemented now?
  • How does accountability look in Reps?

We’re currently documenting the findings in the Meta working group working proposal, but we probably will need some more time to figure out everything perfectly. Keep an eye out on the Discourse topic in case we’ll need more feedback from you all!

Identity Working Group

A new working group? As you see, I didn’t believe it at first and Rara was visibly shocked!

Fun aside, yes, we’ll start a new Working group around the topics of outwards communication and the Rep program’s image. During our discussions on Saturday, we came up with a few questions that we will need to answer. This Friday we had our first call, follow us in the Discourse topic and it’s not too late to help out here! Please get involved as soon as possible to shape the future of Reps!

Communication Session

On Sunday we ran a joint session with the rest of the Participation team around the topic “How we work together”. We came up with the questions above and let those be answered / brainstormed in groups. I started to document the findings yesterday, but this is not yet in a state where it will be useful for anybody. Stay tuned for more communication around this (communication about communication, isn’t it fun? :)). The last question, around “How might we improve the communication between the Participation-Team and the Council?”, is already documented in the Alignment Working group proposal. The Identity working group will also tackle and further elaborate the question around visibility.

Reps Roadmap for 2016

Wait, there is a roadmap?

Yes!

At the end of our sessions we put up a timeline for Reps for all our different initiatives on a wall. Within the next days we’ll work on this to have it digitally per months. For now, we have started to create GitHub issues in the Reps repo. Stay tuned for more information about this, the current information might confuse you since we haven’t updated all issues yet! It basically includes everything from RepsNext proposal implementations to London Work Week preparations to Council elections.

Conclusion

This weekend showed that we currently have an amazing, hard-working Council. It also showed that we’re on track with all the RepsNext work and that we can do a lot once we all work together and have Working Groups to involve all Reps as well.

Looking forward to the next months! If you haven’t yet, have a look at the Reps Discourse category, to keep yourself updated on Reps related topics and the working groups!

The other Council members will write their blog posts in the next few days as well, so keep an eye out for the links on our Reps issues. Once again, there are a lot of changes to be implemented and discussed, and we are working on a strategy for that. We believe that just pointing to all the proposals is not easy enough, so we will come up with fun ways to chime into these and fully understand them. Nevertheless, if you have questions about anything I wrote here, feel free to reach out to me!

Credit: all pictures were taken by our amazing photographer Christos!

Michael KohlerMozilla Switzerland IoT Hackathon in Lausanne

On April 2nd 2016 we held a small IoT Hackathon in Lausanne to brainstorm about the Web and IoT. This was aligned with the new direction that Mozilla is taking.

Preparation
We started to organize the Hackathon on GitHub, so everyone could participate. Geoffroy was really helpful in organizing the space for it at Liip.ch. Thanks a lot to them; without them, organizing our events would be way harder!

The Hackathon
We expected more people to come, but as mentioned above, this was our first self-organized event in the French-speaking part of Switzerland. Nevertheless, we were four people with an interest in hacking something together.

Geoffroy and Paul started to have a look at Vaani.iot, one of the projects that Mozilla is currently pushing on. They started to build it on their laptops, unfortunately the Vaani documentation is not good enough yet to see the full picture and what you could do with it. We’re planning to send some feedback regarding that to the Vaani team.

In the meantime Martin and I set up my Raspberry Pi and started to write a small script together that reads out the temperature from one of the sensors. Once we’ve done that, I created a small API to have the temperature returned in JSON format.

At this point, we decided we wanted to connect those two pieces and create a Web app to read out the temperature and announce it through voice. Since we couldn’t get Vaani working, we decided to use the WebSpeech API for this. The voice output part is available in Firefox and Chrome right now, therefore we could achieve this goal without using any non-standard APIs. After that Geoffroy played around with the voice input feature of this API. This is currently only working in Chrome, but there is a bug to implement it in Firefox as well. In the spirit of the open web, we decided to ignore the fact that we need to use Chrome for now, and create a feature that is built on Web standards that are on track to standardization.

After all, we could achieve something together and definitely had some good learnings during that.

Lessons learned

  • Organizing a hackathon for the first time in a new city is not easy
  • We probably need to establish an “evening-only” meetup series first, so we can attract participants that identify with us
  • We could use this opportunity to document the Liip space in Lausanne for future events on our Events page on the wiki
  • Not all projects are well documented, we need to work on this!

After the Hackathon

Since I needed to do a project for my studies that involves hardware as well, I could take the opportunity and take the sensors for my project.

You can find the source code on the MozillaCH GitHub organization. It currently regularly reads out the two temperature sensors and checks if there is any movement registered by the movement sensor. If the temperature difference is too high, it sends an alarm to the NodeJS backend. The same goes for the situation where it detects movement. I see this as a first step towards my own take on a smart home; it would need a lot of work and more sensors to be completely useful though.

 

 

 

Daniel PocockLinuxWochen, MiniDebConf Vienna and Linux Presentation Day

Over the coming week, there are a vast number of free software events taking place around the world.

I'll be at the LinuxWochen Vienna and MiniDebConf Vienna, the events run over four days from Thursday, 28 April to Sunday, 1 May.

At MiniDebConf Vienna, I'll be giving a talk on Saturday (schedule not finalized yet) about our progress with free Real-Time Communications (RTC) and welcoming 13 new GSoC students (and their mentors) working on this topic under the Debian umbrella.

On Sunday, Iain Learmonth and I will be collaborating on a workshop/demonstration on Software Defined Radio from the perspective of ham radio and the Debian Ham Radio Pure Blend. If you want to be an active participant, an easy way to get involved is to bring an RTL-SDR dongle. It is highly recommended that instead of buying any cheap generic dongle, you buy one with a high quality temperature compensated crystal oscillator (TCXO), such as those promoted by RTL-SDR.com.

Saturday, 30 April is also Linux Presentation Day in many places. There is an event in Switzerland organized by the local FSFE group in Basel.

DebConf16 is only a couple of months away now. Registration is still open and the team is keenly looking for additional sponsors. Sponsors are a vital part of such a large event; if your employer or any other organization you know benefits from Debian, please encourage them to contribute.

Hal WineEnterprise Software Writers R US

Someone just accused me of writing Enterprise Software!!!!!

Well, the “someone” is Mahmoud Hashemi from PayPal, and I heard him on the Talk Python To Me podcast (episode 54). That whole episode is quite interesting - go listen to it.

Mahmoud makes a good case, presenting nine “hallmarks” of enterprise software (the more that apply, the more “enterprisy” your software is). Most of the work RelEng does easily hits 7 of the points. You can watch Mahmoud define Enterprise Software for free by following the link from his blog entry (link is 2.1 in table of contents). (It’s part of his “Enterprise Software with Python” course offered on O’Reilly’s Safari.) One advantage of watching his presentation is that PayPal’s “Mother of all Diagrams” makes ours look simple! (Although “blue spaghetti” is probably tastier.)

Do I care about “how enterprisy” my work is? Not at all. But I do like the way Mahmoud explains the landscape and challenges of enterprise software. He makes it clear, in the podcast, how acknowledging the existence of those challenges can inform various technical decisions. Such as choice of language. Or need to consider maintenance. Or – well, just go listen for yourself.

Myk MelezProject Positron

Along with several colleagues, I recently started working on Project Positron, an effort to build an Electron-compatible runtime on top of the Mozilla technology stack (Gecko and SpiderMonkey). Mozilla has long supported building applications on its stack, but the process is complex and cumbersome. Electron development, by comparison, is a dream. We aim to bring the same ease-of-use to Mozilla.

Positron development is proceeding along two tracks. In the SpiderNode repository, we’re working our way up from Node, shimming the V8 API so we can run Node on top of SpiderMonkey. Ehsan Akhgari details that effort in his Project SpiderNode post.

In the Positron repository, we’re working our way down from Electron, importing Electron (and Node) core modules, stubbing or implementing their native bindings, and making the minimal necessary changes (like basing the <webview> element on <iframe mozbrowser>) so we can run Electron apps on top of Gecko. Eventually we aim to join these tracks, even though we aren’t yet sure exactly where the last spike will be located.

It’s early days. As Ehsan noted, SpiderNode doesn’t yet link the Node executable successfully, since we haven’t shimmed all the V8 APIs it accesses. Meanwhile, Positron supports only a tiny subset of the Electron and Node APIs.

Nevertheless, we reached a milestone today: the tip of the Positron trunk now runs the Electron Quick Start app described in the Electron tutorial. That means it can open a BrowserWindow, hook up a DevTools window to it (with Firefox DevTools integration contributed by jryans), and handle basic application lifecycle events. We’ve imported that app into the Positron repository as our “hello world” example.

Clone and build Positron to see it in action!

 

About:CommunityFirefox 46 new contributors

With the release of Firefox 46, we are pleased to welcome the 37 developers who contributed their first code change to Firefox in this release, 31 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Andrew TruongExperience, Learn, Revitalize and Share: Junior High in a Nutshell

The transition into junior high was a somewhat complex one for me. I had trouble adapting to a new environment where there were more students, classrooms, teachers, etc. I struggled through the first two of the three years. The style of teaching I was used to in elementary was different from junior high, and the treatment from teachers was absolutely different, and not in a positive way. I tried hard to adapt, to bring my marks to a level where I wanted them to be, and to have a balance between school and social life.


Part of what made it difficult is that I was keen on following the same way of doing things I was used to in elementary. I was also reluctant to change, and to even see the need for change. The most crucial part was that I was telling myself to just be myself, but I did that in the wrong way, which caused more harm than good. More so, I started doing things that were unacceptable, but not embarrassing. It resulted in me being down in the office speaking with the AP or Principal on a few occasions. One of the incidents that was frowned upon then wouldn't be frowned upon now, in our digital age. There are times where I wish I could just go back and change things up, but at the same time: life moves on. What has happened, happened.

The other part which made it difficult was that I didn't have wonderful teachers, in my opinion. Not all of them were bad, but a handful of them just didn't click for me. I could say, to a certain extent, that they picked on me at times because I simply did not like class discussions (I still don't). More so, in grade 8, teachers would know every student's name in the class except mine; I'm not sure what the issue was, or what the deal with that was. It wasn't just one, but two teachers that did it all the time. They were corrected from time to time, but it never clicked for them that this was the reason other classmates laughed out loud every time it happened.


What changed me, however, took place over the summer break I had before going into grade 9. At that point, I discovered the opportunity to volunteer and contribute to Mozilla. I started off with live chat on SUMO, which paved the way for me to improve my writing skills and grammar by contributing to the knowledge base.

As I started my last year of junior high, the other thing that I was lucky to be part of was leadership. I was fortunate enough to be enrolled in that class, so that I was able to find myself and be myself. In leadership, students are encouraged to help each other out, work in groups/teams, build their enthusiasm and self-esteem, and help organize school events. I loved it! I was able to see my potential, and what potential others had. These two factors allowed me to become more successful in my studies and day-to-day life, along with my contributions to Mozilla. Things started to dawn on me, and so I was able to figure out what I did wrong, and how I could take a different and better approach the next time if similar situations arose again.


Unfortunately, even though there are positives, there will be negatives. Not everything worked out to be a miracle. There were 2 situations where I had issues with my teachers.

The first of which was a teacher who wanted things done her way only. If you found a different way to solve a homework or test question but arrived at the same answer, and you could do that for other questions as well, you were still wrong. You had to do it a certain, specific way in order for it to be right. Now, as always, there are two ways of thinking about this, but as we've progressed, we find that there are multiple approaches to achieving or reaching something, and it doesn't have to be done in a set-in-stone way.

The second was where I knew that I didn't have the talent or ability to complete something and required help. I tried and tried through the whole semester to achieve what was being taught in the class. On the very last day, I didn't expect to take it any further, but somehow one thing led to another and I wasn't happy with the teacher, nor was he happy with me. In the end, I spoke with my favourite AP, who was also a teacher of mine, and she agreed with what I said and we ended it there.


There are always two parts to a story, but I can only reveal so much without it hurting me in the long run. I'm being really vague as I don't want to hurt my reputation, nor do I want an investigation to be launched. The sole purpose of this blog post is to share what I experienced in junior high and how I was able to progress and find myself simply through the power of leadership.

Christian HeilmannTurning a community into evangelism helpers – DevRelCon Notes

These are the notes of my talk at DevRelCon in San Francisco. “Turning a community into evangelism helpers” covered how you can scale your evangelism/advocacy efforts. The trick is to give up some of the control and share your materials with the community. Instead of being the one who brings the knowledge, you’re the one who shares it and coaches people on how to use it.


Why include the community?

First of all, we have to ask ourselves why we should include the community in our evangelism efforts. Being the sole source of information about your products can be beneficial. It is much easier to control the message and create materials that way. But, it limits you to where you can physically be at one time. Furthermore, your online materials only reach people who already follow you.

Sharing your materials and evangelism efforts with a community reaps a lot of benefits:

  • You cut down on travel – whilst it is glamorous to rack up the air miles and live the high life of lounges and hotels it also burns you out. Instead of you traveling everywhere, you can nurture local talent to present for you. A lot of conferences will want the US or UK presenter to come to attract more attendees. You can use this power to introduce local colleagues and open doors for them.
  • You reach audiences that are beyond your reach – often it is much more beneficial to speak in the language and the cultural background of a certain place. You can do your homework and get translations. But, there is nothing better than a local person delivering in the right format.
  • You avoid being a parachute presenter – instead of dropping out of the sky, giving your talk and then vanishing without being able to keep up with the workload of answering requests, you introduce a local counterpart. That way people get answers to their requests after you left in a language and format they understand. It is frustrating when you have no time to answer people or you just don’t understand what they want.

Share, inspire, explain

Start by making yourself available beyond the “unreachable evangelist”. You’re not a rockstar, so don’t act like one. Share your materials and the community will take them on. That way you can share your workload. Break down the barrier between you and your community by sharing everything you do. Break down your community’s fears by listening and amplifying things that impress you.

Make yourself available and show you listen

  • Have a repository of slide decks in an editable format – besides telling your community where you will be and sharing the videos of your talks also share your slides. That way the community can re-use and translate them – either in part or as a whole.
  • Share out interesting talks and point out why they are great – that way you show that there is more out there than your company materials. And you advertise other presenters and influencers for your community to follow. Give a lot of details here to show why a talk is great. In Mozilla I did this as a minute-by-minute transcript.
  • Create explanations for your company products, including demo code and share it out with the community – the shorter and cleaner you can keep these, the better. Nobody wants to talk over a 20 minute screencast.
  • Share and comment on great examples from community members – this is the big one. It encourages people to do more. It shows that you don’t only throw content over the wall, but that you expect people to make it their own.

Record and teach recording

Keeping a record of everything you do is important. It helps you to get used to your own voice and writing style and see how you can improve over time. It also means that when people ask you later about something you have a record of it. Ask for audio and video recordings of your community presenting to prepare for your one on one meetings with them. It also allows you to share these with your company to show how your community improves. You can show them to conference organisers to promote your community members as prospective speakers.

Recordings are great

  • They show how you deliver some of the content you talked about
  • They give you an idea of how much coaching a community member needs to become a presenter
  • They allow people to get used to seeing themselves as they appear to others
  • You create reusable content (screencasts, tutorials), that people can localise and talk over in presentations

Often you will find that a part of your presentation can inspire people. It makes them aware of how to deliver a complex concept in an understandable manner. And it isn’t hard to do – get Camtasia or Screenflow or even use Quicktime. YouTube is great for hosting.

Avoid the magical powerpoint

One thing both your company and your community will expect you to create is a “reusable PowerPoint presentation”. One that people can deliver over and over again. This is a mistake we’ve been making for years. Of course, there are benefits to having one of those:

  • You have a clear message – a PowerPoint reviewed by HR, PR and branding makes sure there are no communication issues.
  • You have a consistent look and feel – and no surprises of copyrighted material showing up in your talks
  • People don’t have to think about coming up with a talk – the talking points are there, the soundbites hidden, the tweetable bits available.

All these are good things, but they also make your presentations boring as toast. They don’t challenge the presenter to own the talk and perform. They become readers of slides and notes. If you want to inspire, you need to avoid that at all cost.

You can have the cake of good messaging and eat it, too. Instead of having a full powerpoint to present, offer your community a collection of talking points. Add demos and screencasts to remix into their own presentations.

There is merit in offering presentation templates though. It can be daunting to look at a blank screen and having to choose fonts, sizes and colours. Offering a simple, but beautiful template to use avoids that nuisance.

What I did in the past was offer an HTML slide deck on GitHub that had introductory slides for different topics, followed by annotated content slides showing how to present parts of that topic. Putting it up on GitHub helped the community add to it, translate it and fork their own presentations. In other words, I helped them on the way but expected them to find their own story arc and to make it relevant for the audience and their style of presenting.

Delegate and introduce

Delegation is the big win whenever you want to scale your work. You can’t reap the rewards of the community helping you without trusting them. So, stop doing everything yourself and instead delegate tasks. What is annoying and boring to you might be a great new adventure for someone else. And you can see them taking your materials into places you hadn’t thought of.

Delegate tasks early and often

Here are some things you can easily delegate:

  • Translation / localisation – you don’t speak all the languages. You may not be aware that your illustration or your use of colour is offensive in some countries.
  • Captioning and transcription of demo videos – this takes time and effort. It is annoying for you to describe your own work, but it is a great way for future presenters to memorise it.
  • Demo code cleanup / demo creation – you learn by doing, it is that simple.
  • Testing and recording across different platforms/conditions – your community has different setups from what you have. This is a good opportunity to test and fix your demos with their hardware.
  • Maintenance of resources – in the long run, you don’t want to be responsible for maintaining everything. The earlier you get people involved, the smoother the transition will be.

Introduce local community members

Sharing your content is one thing. The next level is to also share your fame. You can use your schedule and bookings to help your community:

  • Mention them in your talks and as a resource to contact – you avoid disappointing people by never coming back to them. And it shows your company cares about the place you speak at.
  • Co-present with them at events – nothing better to give some kudos than to share the stage
  • Introduce local companies/influencers to your local counterpart – the next step in the introduction cycle. This way you have something tangible to show to your company. It may be the first step for that community member to get hired.
  • Once trained up, tell other company departments about them – this is the final step to turn volunteers into colleagues.

Set guidelines and give access

You give up a lot of control and you show a lot of trust when you start scaling by converting your community. In order not to cheapen that, make sure you also define guidelines. Being part of this should not be a medal for showing up – it should become something to aim for.

  • Define a conference playbook – if someone speaks on behalf of your company using your materials, they should also have deliveries. Failing to deliver them means they get less or no support in the future.
  • Offer 1:1 training in various levels as a reward – instead of burning yourself out by training everyone, have self-training materials that people can use to get themselves to the next level
  • Have a defined code of conduct – your reputation is also at stake when one of your community members steps out of line
  • Define benefits for participation – giving x number of talks gets you y, writing x demos that y people use gets you the same, and so on.

Official channels > Personal Blogs

Often people you train want to promote their own personal channels in their work. That is great for them. But it is dangerous to mix their content with content created on work time by someone else. This needs good explanation. Make sure to point out to your community members that their own brand will grow with the amount of work they delivered and the kudos they got for it. Also explain that by separating their work from your company’s, they have a chance to separate themselves from bad things that happen on a company level.

Giving your community members access to the official company channels and making sure their content goes there has a lot of benefits:

  • You separate personal views from company content
  • You control the platform (security, future plans…)
  • You enjoy the reach and give kudos to the community member.

You don’t want to be in the position to explain a hacked blog or outrageous political beliefs of a community member mixed with your official content. Believe me, it isn’t fun.

Communicate sideways and up

This is the end game. To make this sustainable, you need full support from your company.

For sustainability, get company support

The danger of programs like this is that they cost a lot of time and effort and don’t yield immediate results. This is why you have to be diligent in keeping your company up-to-date on what’s happening.

  • Communicate out successes company-wide – find the right people to tell about successful outreach into markets you couldn’t reach but the people you trained could. Tell all about it – from engineering to marketing to PR. Any of them can be your ally in the future.
  • Get different company departments to maintain and give input to the community materials – once you got community members to talk about products, try to get a contact in these departments to maintain the materials the community uses. That way they will be always up to date. And you don’t run into issues with outdated materials annoying the company department.
  • Flag up great community members for hiring as full-time devrel people

The perfect outcome of this is to convert community members into employees. This is important to the company as people getting through the door is expensive. Already trained up employees are more effective to hit the ground running. It also shows that using your volunteer time on evangelism pays off in the long run. It can also be a great career move for you. People hired through this outreach are likely to become your reports.

Mark CôtéHow MozReview helps

A great post on code review is making its rounds. It’s started some discussion amongst Mozillians, and it got me thinking about how MozReview helps with the author’s points. It’s particularly interesting because apparently Twitter uses Review Board for code reviews, which is a core element of the whole MozReview system.

The author notes that it’s very important for reviewers to know what reviews are waiting on them, but also that Review Board itself doesn’t do a good job of this. MozReview fixes this problem by piggybacking on Bugzilla’s review flags, which have a number of features built around them: indicators, dashboards, notification emails, and reminder emails. People can even subscribe to the reminders for other reviewers; this is a way managers can ensure that their teams are responding promptly to review requests. We’ve also toyed around with the idea of using push notifications to notify people currently using Bugzilla that they have a new request (also relevant to the section on being “interrupt-driven”).

On the submitter side, MozReview’s core support for microcommits—a feature we built on top of Review Board, within our extensions—helps “keep reviews as small as possible”. While it’s impossible to enforce small commits within a tool, we’ve tried to make it as painless as possible to split up work into a series of small changes.

The MozReview team has made progress on automated static analysis (linters and the like), which helps submitters verify that their commits follow stylistic rules and other such conventions. It will also shorten review time, as the reviewer will not have to spend time pointing out these issues; when the review bots have given their r+s, the reviewer will be able to focus solely on the logic. As we continue to grow the MozReview team, we’ll be devoting some time to finishing up this feature.

Armen ZambranoThe Joy of Automation

This post is to announce The Joy of Automation YouTube channel. On this channel you should be able to watch presentations about automation work by Mozilla's Platform Operations. I hope more folks than just me will share their videos here.

This follows the idea that mconley started with The Joy of Coding and his livehacks.
At the moment there are only "Unscripted" videos of me hacking away. I hope one day to do live hacks, but for now they're offline videos.

Mistakes I made, in case any Platform Ops member wanting to contribute wants to avoid them:

  • Lower the volume of the background music
  • Find a source of music without ads, and one that would not cause the video to be blocked in certain countries (e.g. Germany)
  • Do not record in .flv format since most video editing software do not handle it
  • Add an intro screen so you don't see me hiding OBS
  • Have multiple bugs to work on in case you get stuck in the first one



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Benjamin BouvierMaking asm.js/WebAssembly compilation more parallel in Firefox

In December 2015, I worked on reducing the startup time of asm.js programs in Firefox by making compilation more parallel. As our JavaScript engine, SpiderMonkey, uses the same compilation pipeline for both asm.js and WebAssembly, this also benefitted WebAssembly compilation. Now is a good time to talk about what that meant, how it was achieved and what the next ideas are to make it even faster.

What does it mean to make a program "more parallel"?

Parallelization consists of splitting a sequential program into smaller independent tasks, then having them run on different CPUs. If your program is using N cores, it can be up to N times faster.

Well, in theory. Let's say you're in a car, driving on a 100 Km long road. You've already driven the first 50 Km in one hour. Let's say your car can have unlimited speed from now on. What is the maximal average speed you can reach, once you get to the end of the road?

People intuitively answer "It can go as fast as I want, so something near light speed sounds plausible". But this is not true! In fact, even if you could teleport from your current position to the end of the road, you'd still have traveled 100 Km in one hour, so your maximal theoretical average speed is 100 Km per hour. This result is a consequence of Amdahl's law. Getting back to our initial problem, this means you can expect an N times speedup when running your program on N cores if, and only if, your program can be entirely run in parallel. This is usually not the case, which is why most wording refers to speedups of up to N times faster when it comes to parallelization.
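
For reference, this is just Amdahl's law written out. With p the fraction of the program that can run in parallel and N the number of cores, the achievable speedup is:

S(N) = \frac{1}{(1 - p) + \frac{p}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}

In the car example, half of the trip (p = 1/2) is already behind you at a fixed speed, so even infinite speed on the remaining half caps the overall improvement at a factor of 2 - going from a 50 Km per hour average to the 100 Km per hour limit above.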

Now, say your program is already running some portions in parallel. To make it faster, one can identify some parts of the program that are sequential, and make them independent so that you can run them in parallel. With respect to our car metaphor, this means augmenting the portion of the road on which you can run at unlimited speed.

This is exactly what we have done with parallel compilation of asm.js programs under Firefox.

A quick look at the asm.js compilation pipeline

I recommend to read this blog post. It clearly explains the differences between JIT (Just In Time) and AOT (Ahead Of Time) compilation, and elaborates on the different parts of the engines involved in the compilation pipeline.

As a TL;DR, keep in mind that asm.js is a strictly validated, highly optimizable, typed subset of JavaScript. Once validated, it guarantees high performance and stability (no garbage collector involved!). That is ensured by mapping every single JavaScript instruction of this subset to a few CPU instructions, if not only a single instruction. This means an asm.js program needs to get compiled to machine code, that is, translated from JavaScript to the language your CPU directly manipulates (like what GCC would do for a C++ program). If you haven't heard, the results are impressive and you can run video games directly in your browser, without needing to install anything. No plugins. Nothing more than your usual, everyday browser.

Because asm.js programs can be gigantic in size (in number of functions as well as in number of lines of code), the first compilation of the entire program is going to take some time. Afterwards, Firefox uses a caching mechanism that prevents the need for recompilation and almost instantaneously loads the code, so subsequent loadings matter less*. The end user will mostly wait for the first compilation, thus this one needs to be fast.

Before the work explained below, the pipeline for compiling a single function (out of an asm.js module) would look like this:

  • parse the function, and as we parse, emit intermediate representation (IR) nodes for the compiler infrastructure. SpiderMonkey has several IRs, including the MIR (middle-level IR, mostly loaded with semantic) and the LIR (low-level IR closer to the CPU memory representation: registers, stack, etc.). The one generated here is the MIR. All of this happens on the main thread.
  • once the entire IR graph is generated for the function, optimize the MIR graph (i.e. apply a few optimization passes). Then, generate the LIR graph before carrying out register allocation (probably the most costly task of the pipeline). This can be done on supplementary helper threads, as the MIR optimization and LIR generation for a given function doesn't depend on other ones.
  • since functions can call between themselves within an asm.js module, they need references to each other. In assembly, a reference is merely an offset to somewhere else in memory. In this initial implementation, code generation is carried out on the main thread, at the cost of speed but for the sake of simplicity.

So far, only the MIR optimization passes, register allocation and LIR generation were done in parallel. Wouldn't it be nice to be able to do more?

* There are conditions for benefitting from the caching mechanism. In particular, the script should be loaded asynchronously and it should be of a substantial size.

Doing more in parallel

Our goal is to do more work in parallel: can we take MIR generation off the main thread? And can we take code generation off it as well?

The answer happens to be yes to both questions.

For the former, instead of emitting a MIR graph as we parse the function's body, we emit a small, compact, pre-order representation of the function's body. In short, a new IR. As work was starting on WebAssembly (wasm) at this time, and since asm.js semantics and wasm semantics mostly match, the IR could just be the wasm encoding, consisting of the wasm opcodes plus a few specific asm.js ones*. Then, wasm is translated to MIR in another thread.

Now, instead of parsing and generating MIR in a single pass, we parse and generate wasm IR in one pass, and generate the MIR out of the wasm IR in another pass. The wasm IR is very compact and much cheaper to generate than a full MIR graph, because generating a MIR graph needs some algorithmic work, including the creation of Phi nodes (join values after any form of branching). As a result, it is expected that compilation time won’t suffer. This was a large refactoring: taking every single asm.js instruction, encoding it in a compact way and later decoding it into the equivalent MIR nodes.

For the second part, could we generate code on other threads? One structure in the code base, the MacroAssembler, is used to generate all the code and it contains all necessary metadata about offsets. By adding more metadata there to abstract internal calls **, we can describe the new scheme in terms of a classic functional map/reduce:

  • the wasm IR is sent to a thread, which will return a MacroAssembler. That is a map operation, transforming an array of wasm IR into an array of MacroAssemblers.
  • When a thread is done compiling, we merge its MacroAssembler into one big MacroAssembler. Most of the merge consists in taking all the offset metadata in the thread MacroAssembler, fixing up all the offsets, and concatenate the two generated code buffers. This is equivalent to a reduce operation, merging each MacroAssembler within the module's one.

At the end of the compilation of the entire module, there is still some light work to be done: offsets of internal calls need to be translated to their actual locations. All this work has been done in this bugzilla bug.
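
To make that merge step a bit more concrete, here is a toy sketch in C of the "reduce" operation described above: appending one thread's generated code onto the module's buffer and rebasing the recorded internal-call offsets by the amount of code that came before. The structure and function names are invented for illustration and are not SpiderMonkey's actual MacroAssembler API.

#include <stdlib.h>
#include <string.h>

typedef struct {
  unsigned char *code;   /* generated machine code */
  size_t length;         /* number of bytes in code */
  size_t *call_offsets;  /* positions of internal call sites within code */
  size_t num_calls;
} CodeChunk;

/* Append src to dst; every call-site offset recorded in src gets shifted
   by the length of the code already present in dst ("fixing up" offsets). */
static int merge_chunk(CodeChunk *dst, const CodeChunk *src)
{
  size_t base = dst->length;
  size_t total_calls = dst->num_calls + src->num_calls;

  unsigned char *code = realloc(dst->code, base + src->length);
  if (!code && base + src->length > 0)
    return -1;
  dst->code = code;
  memcpy(dst->code + base, src->code, src->length);
  dst->length = base + src->length;

  size_t *calls = realloc(dst->call_offsets, total_calls * sizeof *calls);
  if (!calls && total_calls > 0)
    return -1;
  for (size_t i = 0; i < src->num_calls; i++)
    calls[dst->num_calls + i] = base + src->call_offsets[i]; /* rebase */
  dst->call_offsets = calls;
  dst->num_calls = total_calls;
  return 0;
}

In this picture, the per-function results produced by the "map" step would be folded into the module's single buffer with repeated calls like this.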

* In fact, at the time when this was being done, we used a different superset of wasm. Since then, work has been done so that our asm.js frontend is really just another wasm emitter.

** referencing functions by their appearance-order index in the module, rather than an offset to the actual start of the function. This order is indeed stable from one function to the other.

Results

Benchmarking has been done on a Linux x64 machine with 8 cores clocked at 4.2 Ghz.

First, compilation times of a few asm.js massive games:

The X scale is the compilation time in seconds, so lower is better. Each value point is the best one of three runs. For the new scheme, the corresponding relative speedup (in percentage) has been added:

Compilation times of various benchmarks

For all games, compilation is much faster with the new parallelization scheme.

Now, let’s go a bit deeper. The Linux CLI tool perf has a stat command that gives you an average of the number of utilized CPUs during the program execution. This is a great measure of threading efficiency: the more the CPUs are utilized, the less they sit idle waiting for other results to come in, and thus the more useful work they do. For a constant amount of work, the more CPUs are utilized, the more quickly the program is likely to execute.

The X scale is the number of utilized CPUs, according to the perf stat command, so higher is better. Again, each value point is the best one of three runs.

CPU utilized on DeadTrigger2

With the older scheme, the number of utilized CPUs rises quickly from 1 to 4 cores, then more slowly from 5 cores onward. Intuitively, this means that with 8 cores we had almost reached the theoretical limit of the portion of the program that can be made parallel (not considering the overhead introduced by parallelization, or altering the scheme).

But with the newer scheme, we get much more CPU usage even beyond 6 cores! Growth then slows down a bit, although it remains steeper than the slow rise of the older scheme. So it is likely that with even more threads, we could get even better speedups than the ones mentioned above. In fact, we have pushed the theoretical limit mentioned above a bit further: we have expanded the portion of the program that can be made parallel. Or, to keep using the initial car/road metaphor, we've shortened the constant-speed portion of the road to the benefit of the unlimited-speed portion, resulting in a shorter trip overall.

Future steps

Despite these improvements, compilation time can still be a pain, especially on mobile. This is mostly because we're running a whole multi-million-line codebase through the backend of a compiler to generate optimized code. Following this work, the next bottleneck in the compilation process is parsing, which matters for asm.js in particular, whose source is plain text. Decoding WebAssembly is an order of magnitude faster, though, and it can be made faster still. Moreover, we have even more load-time optimizations coming down the pipeline!

In the meantime, we keep improving the WebAssembly backend. Keep track of our progress on bug 1188259!

Cameron Kaiser38.8.0 available

38.8.0 is available (downloads, hashes, release notes). There are no major changes, only a bustage fix for one of the security updates that does not compile under gcc 4.6. Although I built the browser and did all the usual custodial tasks remotely from a hotel room in Sydney, assuming no major showstoppers I will actually take a couple minutes on my honeymoon to flip the version indicator Monday Pacific time (and, in a good sign for the marriage, she accepts this as a necessary task).

Don't bother me on my honeymoon.

David BurnsSelenium WebDriver and Firefox 46

As of Firefox 46, the extension-based FirefoxDriver will no longer work. This is because of the new add-on policy that Mozilla is enforcing to help protect end users from installers inserting add-ons that the user does not want. This version is due for release next week.

This does not mean that your tests need to stop working entirely as there are options to keep them working.

Marionette

Firstly, you can use Marionette, the Mozilla version of FirefoxDriver, to drive Firefox. It has been in Firefox since about version 24, and we have slowly, while working around other Mozilla priorities, been bringing it up to Selenium's level. Currently Marionette passes ~85% of the Selenium test suite.

I have written up some documentation on how to use Marionette on MDN.
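For example, with the Selenium Python bindings of that era the switch to Marionette is essentially a one-line capability change. This is a minimal sketch, assuming the geckodriver/wires executable is on your PATH and a Marionette-capable Firefox (46+) is installed; see the MDN page for the authoritative instructions.

```python
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

caps = DesiredCapabilities.FIREFOX.copy()
caps["marionette"] = True  # route commands through Marionette instead of the add-on driver

driver = webdriver.Firefox(capabilities=caps)
driver.get("https://www.mozilla.org")
print(driver.title)
driver.quit()
```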

I am not expecting everything to work, but below is a quick list of things I know don't work.

  • No support for self-signed certificates
  • No support for actions
  • No support for the logging endpoint
  • I am sure there are other things we don't remember

It would be great if we could raise bugs.

Firefox 45 ESR

If you don't want to worry about Marionette, the other option is to downgrade to Firefox 45, preferably the ESR, as it won't update to 46 and will only update to Firefox 52 in about 6-9 months' time, at which point you will need to use Marionette.

Marionette will be turned on by default from Selenium 3, which is currently being worked on by the Selenium community. Ideally, when Firefox 52 comes around you will just update to Selenium 3 and, fingers crossed, everything will work as planned.

Support.Mozilla.OrgGet inspired! Reaching 100% SUMO Localization with the Czech team

Hey there, SUMO Nation! We’re back to sharing more awesomeness from you, by you, for all the users. This time I have the pleasure of passing the screen over to Michal, our Czech locale leader. Michal and his trusted team of Czech l10ns reached all the possible KB milestones ever and are maintaining the Czech KB with grace and ease. Learn how they did it and get inspired!

The years 2015 and 2016 were a great success for our Czech localization team. We have grown in number, improved our suggestion & reviewing workflow, moved all projects to a single place (Pontoon) and finished all milestones for SUMO l10n – both for UI and articles. But there is much more that we gained when making all dashboards green than just “getting the work done”.

But who is the Czech team?

That’s a very good question. The Czech team has not been involved much in the global SUMO life. So, if you do not know us, let me introduce everyone.

  • First there is me ;) – Michal. I primarily focus on product localization, but “as a hobby” I am trying to help the SUMO heroes too.
  • Our biggest hero and record breaker, Jiří! If you open any Czech article, he either worked on it directly, or reviewed and polished it to perfection. His counter recently passed 730 updated articles.
  • Miroslav is our long-time contributor and his updates and translations are considered approved in advance.
  • Tomáš does irreplaceable work keeping Kitsune UI localization in great shape, and he started that in the old ages of Verbatim.
  • I almost forgot our former leader Pavel. Many thanks to him for the outstanding work on both Kitsune UI l10n and the very first help articles as well.
  • I also want to highlight the contributions of other brave volunteers. Their updates and translations, even of just a few articles, helped us conquer our dashboard.

Nice to meet all of you, guys.

Thank you, SUMO! So, the story… At the end of last year, Michał looked closely at the locale statuses and assigned the milestones we should smash this year. Our milestone was to localize all articles globally. That was something I didn't believe we could do easily. Then in February or March a new set of milestones appeared, and the updated one for Czech reduced the “requirement” to 700 localized articles. Well, from that point in time, we cheated a little. ;)

As you may notice, there are only 697 articles now. During localization we noticed some articles were pretty outdated, containing links to pages that no longer exist, etc. So we got in touch with the team, reported them and… they were magically archived. But do not think we are just bloody cheaters achieving milestones by asking for content deletion. No, we made almost 400 updates to articles this year (50% of the total we did in the whole of 2015)!

I have personally found the cooperation between us (localizers) and the SUMO team (Michał, Joni, Madalina, and others) very beneficial. On one side, they put a huge effort into supporting us, introducing new tools or explaining article content, as well as Firefox release notes and news. During localization we read each article in full at least once or twice, so giving them feedback or suggesting updates is the least we can offer from our side.

Amazing! But you mentioned you learned something new too?

True. We learned that communication is very important. Within the team, we learned to share new ideas on terminology and also opened a discussion on the theme of screenshots, which are our next target. Did you notice there is no dedicated way to mark that your localization revision is missing localized screenshots? Our workaround is quite simple, in fact. We add a “[scr]” tag to the revision comment each time we have not had enough time to take localized screenshots and only translated the article content. It's then very easy to filter them out in the “Recent revisions” list once you have the time and mood for some “screenshotting”.

Equally important is communication outside the localization team. A lot of strings in the Kitsune UI would have been translated blindly without any consultation, especially in the support forum areas; we hadn't been using those pages ourselves until the beginning of April (yes, we do forum support, too!).

In light of our success, we do not want to rest on our laurels. It's time to look forward: our screenshots are not perfect, and we are still dividing our efforts between articles and the Kitsune UI. I hope that Kitsune can support us a little more in both areas; for example, there are no tools for finding the actual location of the strings, but that's something we can help fix. We are actually quite new to Kitsune. But just as it's great to help people by bringing knowledge into your language, it's also quite important to start a discussion, even if we might think the questions we have are trivial. Just do not be afraid to say what you think is important for the project.

Karl Dubost[worklog] Mabuigumi, the soul shifting

We had two powerful earthquakes in the south of Japan, both registered at 7, the maximum on the Japanese earthquake scale. Previously only 3 other earthquakes in Japan had been rated that high.

Tune of the week: Boom! Shake the room.

Webcompat Life

Progress this week:

Today: 2016-04-22T10:25:49.831867
376 open issues
----------------------
needsinfo       4
needsdiagnosis  132
needscontact    27
contactready    95
sitewait        116
----------------------

You are welcome to participate

London agenda.

Webcompat issues

(a selection of some of the bugs worked on this week).

  • -webkit-overflow-scrolling: touch; and a missing width: 100vw; creates a scrolling issue for a menu. If you speak Chinese you can help us find the contact and reach out to them.
  • One of the issues with Web compatibility work is that people change lives, jobs, and assignments, as is happening on this Amazon bug; but after finding another person in charge, the dialogue is still going on. And it's a healthy one, where constraints and needs are discussed on both sides. That's the best you can hope for when discussing features and bugs.
  • User-Agent override is never an easy choice. When deciding to do user agent override, aka faking the user agent so you receive the proper user experience from the Web site, what else will you break in the process?
  • A selection which creates a jump in reading the next article on the NYTimes Web site.
  • There is something to write about user agent sniffing, websites, and the long term. Orbis is obviously a site which is not maintained anymore BUT still works, at least in some browsers. The user agent sniffing strategies are still in place and reject the browser even when it would work.
  • There should be a special category for Daniel Holbert's Web compatibility bug reports… Holy cow. This is just butter on top of grilled bread. A barber shop

Webcompat development

Need to increase my dev rate.

Gecko Bugs

  • Brian Grinstead [fixed] an issue which was created by the interaction of User Agent Switcher add-ons and the developer tools.

Meaning of WebCompat, err, Web Compatibility

In two tweets, Jen Simmons got pretty interesting answers:

  • “web compatibility” is HTML content which is accessible via interconnected URLs independent of further technologies.
  • another buzzword.
  • Web compat means pages that work for everyone regardless of browser, screen size, network speed, language, physical ability.
  • Something that can safely be loaded in a browser?
  • more seriously, https://webcompat.com is a cool initiative. Wasn’t aware until you asked :)
  • Spiderman vs. Bizarro Spiderman
  • Web Compatibility, but then in a slightly puzzled manner I wonder why they decided to shorten compatibility to compat.
  • it means someone didn't finish their sentence

I like this series of answers. "Web Compat" as a buzzword is interesting. We usually use webcompat among ourselves in the team without realizing that it might not be understood outside of our circle. Taking notes.

The other answers spread out across something which is more about "does it work" than about interoperability, i.e. focusing more on the universality of Web technologies. In the London agenda, we have an item open for discussing the meaning of Web Compatibility and what the classes of issues are. It should be fun.

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Air MozillaConnected Devices Speakers and Open Forum

This is the Connected Devices Meetup where we will have 3 speakers presenting their slides or demos and answering questions.

David LawrenceHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1264207] add support for the hellosplat tracker to ‘see also’
  • [1195736] intermittent internal error: “file error – nav_link: not found” (also manifests as fields_lhs: not found)
  • [1265432] backport upstream bug 1263923 to bmo/4.2 – X-Bugzilla-Who header is not set for flag mails
  • [1266117] I have found a bug in the section 2.6.1 in the user guide(2.6) of BMO documentation. The bug identified is a grammatical error committed in one of the sentences.
  • [1239838] Don’t see a way to redirect a needinfo request (in Experimental UI)
  • [1266167] clickjacking is possible on “view all” and “details” attachment pages

discuss these changes on mozilla.tools.bmo.


Support.Mozilla.OrgWhat’s Up with SUMO – 21st April

Hello, SUMO Nation!

Let’s get the big things out of the way – we met last week in Berlin to talk about where we are and what’s ahead of us. While you will see changes and updates appearing here and there around SUMO, the most crucial result is the start of a discussion about the future of our platform – a discussion about the technical future of SUMO. We need your feedback about it as soon as possible. Read more details here – and tell us what you think.

This is just the beginning of a discussion about one aspect of SUMO, so please – don’t panic and remember: we are not going away, no content will be lost (but we may be archiving some obsolete stuff), no user will be left behind, and (on a less serious note) no chickens will have to cross the road – we swear!

Now, let’s get to the updates…

A glimpse of Berlin, courtesy of Roland

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 27th of April – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

  • Hackathons everywhere! Well, at least in Stockholm, Sweden (this Friday) and Prague, Czech Republic (next Friday). Contact information in the meeting notes!
  • A guest post all about a certain group of our legendary l10ns coming your way – it will be a great read, I guarantee!
  • An update post about SUMO l10n coming over the weekend, because there ain’t no rest for the wicked.

Firefox

…and that’s it for today! We hope you enjoyed the update and will stick around for more news this (and next) week. We are looking forward to seeing you all around SUMO – KEEP ROCKING THE HELPFUL WEB!