Adrian Gaudebert: Rethinking Socorro's Web App

[Image: rewrite-cycle.jpg, credits @lxt]

I have been thinking a lot over the last few months about what we could do better with Socorro's webapp (in fact, the first discussions I had about this with phrawzty date back to spring of last year). Recently, in a meeting with Lonnen (my manager), I said "this is what I would do if I were to rebuild Socorro's webapp from scratch today". In this post I want to write down what I said and elaborate on it, in the hope that it will serve as a starting point for upcoming discussions with my colleagues.

State of the Art

First, let's take a look at the current state of the webapp. According to our analytics, there are 5 parts of the app that are heavily consulted, and a bunch of other, less-used pages. Those heavily-used parts are the core features of Socorro's front-end, the ones we know people are looking at a lot.

Then there are other pages, like Crashes per User, Top Changers, Explosive Crashes, GC Crashes and so on, that are used from "a lot less" to "almost never". And finally there's the public API, on which we don't have much analytics, but which we know is being used for many different things (for example: Spectateur, crash-stats-api-magic, Are we shutting down yet?, Such Comments).

The next important thing to take into account is that our users oftentimes ask us for some specific dataset or report. Those are useful at a given point in time for a few people, but soon become useless to everyone. We used to try to build such reports into the webapp (and I suppose the ones from above that are not used anymore fall into that category), but that costs us time to build and time to maintain. It also means that the report has to be built by someone from the Socorro team who has time for it, it has to go through review and testing, and by the time it hits our production site it might not be so useful anymore. We have all been working on trying to reduce that "time to production", which resulted in the public API and Super Search. And I'm quite sure we can do even better.

Building Reports


Every report is, at its core, a query of one or several API endpoints, some logic applied to the data from the API, and a view generated from that data. Some reports require very specific data, asking for dedicated API endpoints, but most of them could be done using either Super Search alone or some combination of it with other API endpoints. So maybe we could facilitate the creation of such reports?

Let us put aside the authentication and ACL features, the API, the admin panel, and a few very specific features of the web app, to focus on the user-facing features. Those can simply be considered a collection of reports: they all call one or several models, have a controller that does some logic, and are then displayed via a Django template. I think what we want to give our users is a way to easily build their own reports. I would like them to be able to address their needs as fast as possible, without depending on the Socorro team.

The basic brick of a fresh web app would thus be a report builder. It would be split into three parts:

  • the model controls what data should be fetched from the API;
  • the controller gets that data and performs logic on it, transforming it to fit the needs of the user;
  • and the view takes the transformed data and turns it into something pretty, like a table or a graph.

Each report could be saved, bookmarked, shared with others, forked, modified, and so on. Spectateur is a prototype of such a report builder.
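
To make this a little more concrete, here is a minimal sketch of what a saved report definition could look like, in the spirit of Spectateur. It is an illustration only: the endpoint, the parameter names and the overall shape are assumptions, not an actual Socorro API contract.

// Hypothetical report definition; names and endpoint are illustrative only.
interface ReportDefinition {
  // Model: what data to fetch from the API.
  model: {
    endpoint: string;                 // e.g. a Super Search-like endpoint
    params: Record<string, string>;   // query parameters
  };
  // Controller: logic applied to the raw API response, stored as source
  // so a report can be saved, forked and edited like any other data.
  controller: string;
  // View: how the transformed data gets displayed.
  view: { type: "table" | "graph"; title: string };
}

const topSignatures: ReportDefinition = {
  model: {
    endpoint: "/api/SuperSearch/",
    params: { product: "Firefox", version: "41.0a1", _facets: "signature" },
  },
  controller: "return data.facets.signature.slice(0, 10);",
  view: { type: "table", title: "Top 10 signatures for Firefox 41.0a1" },
};

Because the whole report is just data, saving, forking and sharing it becomes trivial, and the view part can be reused elsewhere, which is what the dashboard idea below builds on.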

We developers of Socorro would use that report system to build the core features of the app (top crashers, home page graphs, etc.), maybe with some privileges. Users would then be able to build reports for their own use or to share with teammates. We know that users have different needs depending on what they are working on (someone working on Firefox OS will not look at the same reports as someone working on Thunderbird), so this would be one step towards allowing them to customize their Socorro.

One Dashboard to Rule Them All

So users can build their own reports. Now what if we pushed customization even further? Each report has a view part, and that's what would be of interest to people most of the time. Maybe we could make it easy for a user to quickly see the main reports that are of interest to them? My second proposal would be to build a dashboard system, which would show the views of various reports on a single page.

A dashboard is a collection of reports. It is possible to add reports to a dashboard, remove them, and move them around. A user can also create several dashboards: for example, one for Firefox Nightly, one for Thunderbird, one for an ongoing investigation... Dashboards only show the view part of a report, with links to inspect it further or modify it.
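
Continuing the illustrative sketch from above, a dashboard would then be little more than an ordered list of references to saved reports (again, the names are made up):

// Hypothetical dashboard definition referencing saved reports.
interface Dashboard {
  name: string;        // e.g. "Firefox Nightly", "Thunderbird"
  owner: string;
  reports: Array<{
    reportId: string;                          // id of a saved ReportDefinition
    position: { row: number; column: number }; // where its view is placed
  }>;
}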

[Image: dashboard-example.png, an example of what a dashboard could look like.]

Socorro As A Platform

The overall idea of this new Socorro is to make it a platform where people can find what they want very quickly, personalize their tool, and build whatever feature they need that does not exist yet. I would like it to be a better tool for our users, to help them be even more efficient crash killers.

I can see several advantages to such a platform:

  • time to create new reports is shorter;
  • people can collaborate on reports;
  • users can tweak existing reports to better fit their needs;
  • people can customize the entire app to be focused on what they want;
  • when you give data to people, they build things that you did not even dream about. I expect that will happen on Socorro, and people will come up with incredibly useful reports.

I Need Feedback


Concretely, the plan would be to build a brand new app along the existing one. The goal won't be to replace it right away, but instead to build the tools that would then be used to replace what we currently have. We would keep both web apps side by side for a time, continuing to fix bugs in the Django app, but investing all development time in the new app. And we would slowly push users towards the new one, probably by removing features from the Django app once the equivalent is ready.

I would love to discuss this with anyone interested. The upcoming all-hands meeting in Whistler is probably going to be the perfect occasion to have a beer and share opinions, but other options would be fine (email, IRC...). Let me know what you think!

Kaustav Das Modak: Decentralizing Mozilla India Evangelism Task Force

Some of the key goals of the Mozilla India Task Force Meetup ’15 included ensuring an organizational structure which promotes larger inclusion, allows wider participation and better recognition of contributors. We had precious little time to discuss the Evangelism Task Force (ETF) in this year’s meetup, but the focus was to build an inter-disciplinary […]

Ted Clancy: RAII helper macro

One of the most important idioms in C++ is “Resource Acquisition Is Initialization” (RAII), where the constructor of a class acquires a resource (like locking a mutex, or opening a file) and the corresponding destructor releases the resource.

Such classes are almost always used as local variables. The lifetime of the variable lasts from its point of declaration to the end of its innermost containing block.

A classic example is something like:

extern int n;
extern Mutex m;

void f() {
    Lock l(&m); // This is an RAII class.
    n++;
}

The problem is, code written with these classes can easily be misunderstood.

1) Sometimes the variable is used solely for the side-effects of its constructor and destructor. To a naive coder, this can look like an unused variable, and they might be tempted to remove it.

2) In order to control where the object is destroyed, the coder sometimes needs to add another pair of braces (curly brackets) to create a new block scope. Something like:

void f() {
    [...]

    {
        Lock l(&m);
        n++;
    }

    [...]
}

The problem is, it’s not always obvious why that extra block scope is there. (For trivial code, like the above, it’s obvious. But in real code, ‘l’ might not be the only variable defined in the block.) To a naive coder, it might even look like an unnecessary scope, and they might be tempted to remove it.

The usual solutions to this situation are: (a) Write a comment, or (b) trust people to understand what you meant. Those are both bad options.

That’s why I’m fond of the following MACRO.

// How it works: the outer loop only declares the flag __f and runs once;
// the inner loop declares the caller's RAII variable, runs the attached
// block once, then sets __f to false so neither loop repeats. The RAII
// variable is destroyed when the inner loop finishes.
#define with(decl) \
for (bool __f = true; __f; ) \
for (decl; __f; __f = false)

This allows you to write:

void f() {
    [...]

    with (Lock l(&m)) {
        n++;
    }

    [...]
}

This creates a clear association between the RAII object and the statements which depend on its existence. The code is less likely to be misunderstood.

I like to call this a with-statement (analogous to an if-statement). Any kind of variable declaration can go in the head of the with-statement (as long as it can appear in the head of a for-statement). The body of the with-statement executes once. The variable declared in the head of the statement is destroyed after the body executes.

Your code editor might even highlight ‘with‘ as a keyword, since it’s a keyword in JavaScript (where it has a different — and deprecated — purpose).

I didn’t invent this kind of MACRO. (I think I first read about something similar in a Dr Dobb’s article.) I just find it really useful, and I hope you do too.


Planet Mozilla Interns: Jonathan Wilde: Gossamer Sprint Two, Days One and Two

I’m currently working with Lyre Calliope on a project to improve tooling for developing and sharing web browser features.

I’ll be documenting my progress on this Mediapublic-style.

First, a Little Demo

In order to tinker with your web browser’s source today, you need to download a working copy of the source, set up a build environment, and have your text editor selected and configured. It can take hours, even for people who’ve done it before.

Why can’t we just edit and share web browser UI changes from a web application, like we can with documents and other things?

Well, now we can. As an example, let’s say we want to change the color of the flask icon in the upper-right corner of the browser.

We open up the GitHub web interface (even from the browser you’re trying to edit), edit the color, and when the update popup appears in the web browser, click “Apply”. Boom, there’s the update.

And other people testing the same Gossamer branch receive that update, too.

In case you’re curious, here’s the commit I made in the demo.

What I Did Thursday and Friday

  • Removed the analytics and news feed for now, to strip away as much UI as possible.
  • Restructured the build process. Our one cheap build transform is performed as late as possible, and not muddled in with our cache of repo data.
  • Removed the need to explicitly register branches as experiments.
  • Added a “base” branch that is used when you don’t have a branch explicitly selected, and dropped the requirement to be logged in to access builds.
  • Added webhook support for receiving push notifications.

Next

  • Standing up Webpack and react-hot-loader locally with minimal changes to browser.html.

Stay tuned for more!

Ted Clancy: ICANN, wtf?

If you care about privacy on the internet, you might want to check out this link: icann.wtf


Mozilla Reps Community: Rep of the month – May 2015

Please join us in congratulating Mahmood Qudah of Jordan for being selected as Mozilla Rep of the Month for May 2015.

Mahmood is not just an active member of the Jordan local community but also very active in the bigger Mozilla Arabic community. He gives lectures at universities in Jordan (many FSA events too), participates in the marketing team for the Arabic community, and helps fix RTL (right-to-left) issues in Firefox OS. In addition, he has given a few lectures teaching students how to start localizing Firefox OS.

Every month he has one or two events, either organizing them or being part of them. We’re happy to see that he’s blasting both communities with new ideas every day.

Congratulations, Mahmood! Keep on rocking.

Don’t forget to congratulate him on Discourse!

Carsten Book: 7 Years at Mozilla!

Hi,

as of last month, I have now been at Mozilla for 7 years as a full-time employee \o/

Of course I’m longer around because i started as Community Member in QA years before. And its a long way from my first steps at QA to my current role as Code Sheriff @ Mozilla.

I never actively planned to join the Mozilla community, it just happened :) Back in 2001 I worked at a German email provider as a 2nd-level support engineer, and as part of my job (and also to support customers) we used different email programs to find out how to set them up and so on.

Some friends who were already involved in open source (some Linux fans) pointed me to this Mozilla program (at that time M1 or so) and I liked the idea of this "Nightly". Having a brand new program every day was something really cool, and so my way into the community started, without me even knowing that I was now part of it.

So over the years with Mozilla I finally filed my first bug and was scared like hell (all these new fields, in a non-native language), not really knowing what I had signed up for when I clicked that "submit" button in Bugzilla :) (I was not even sure if I was NOW supposed to fix the bug :)

And now I file dozens of bugs every day while on sheriff duty or doing other tasks :)

I have learned a lot of stuff over the years, I still love being part of Mozilla, and it's the best place to work for me! So here's to the next years at Mozilla!

– Tomcat

Carsten Book: a day in sheriffing

Hi,

since I talk with a lot of people about sheriffing and what we do, here is what a typical day looks like for me:

We take care of the code trees, watching for things like test failures.

I usually start the day by checking the trees we are responsible for for test failures, using Treeherder. This gives me a first overview of the current status, and also makes sure that everything is OK for the Asian and European communities, which are online at that time.

This task is ongoing until the end of my duty shift. From time to time it means we have to do backouts for code/test regressions.
Besides this, I do things like checkin-neededs, uplifts and other tasks, and of course I'm always available for questions on IRC :)

I was also thinking about some parts of my day-to-day experience:

Backouts and Tree Closures:

While backing out code for test failures/bustages (and managing the tree closures related to this) is one important task of sheriffs, it's always a mixed feeling to back out someone's work (and no one wants to cause a bustage), but it's important to ensure the quality of our products.

Try Server!!!

Tree closures due to backouts can have the side effect that others are blocked from checking in. So if you're in doubt whether your patch compiles or could cause test regressions, please consider a try run; this helps a lot to keep tree closures for code issues to a minimum.

And last but not least, sheriffing is a community task! So if you want to be part of the sheriff team as a community sheriff, please send me a mail at tomcat at mozilla dot com.

Thanks!

– Tomcat

About:Community: MDN at Whistler

The MDN community was well-represented at the Mozilla “Coincidental Work Week” in Whistler, British Columbia, during the last week in June. All of the content staff, a couple of the development staff, and quite a few volunteers were there. Meetings were met, code was hacked, docs were sprinted, and fun was funned.

Cross-team conversations

One of the big motivations for the “Coincidental Work Week” is the opportunity for cross-pollination among teams, so that teams can have high-bandwidth conversations about their work with others. MDN touches many other functional groups within Mozilla, so we had a great many of these conversations. Some MDN staff were also part of “durable” (i.e., cross-functional) teams within the Engagement department, meeting with product teams about their marketing needs. Among others, MDN folks met with:

  • Add-ons, about future plans for add-ons.
  • Mozilla Foundation, about MDN’s role in broader learning initiatives, and about marketing themes for the last half of the year.
  • Firefox OS, about their plans in the next six months, and about increasing participation in Firefox OS.
  • Developer Relations and Platform Engineering, about improving coordination and information sharing.
  • Firefox Developer Tools, about integrating more MDN content into the tools, and making the dev-tools codebase more accessible to contributors.
  • Participation, to brainstorm ways to increase retention of MDN contributors.

Internal conversations

The MDN community members at Whistler spent some time as a group reflecting on the first half of the year, and planning and prioritizing for the second half of the year. Sub-groups met to discuss specific projects, such as the compatibility data service, or HTML API docs.

Hacking and sprinting

Lest the “Work Week” be all meetings, we also scheduled time for heads-down productivity. MDN was part of a web development Hack Day on Wednesday, and we held doc sprints for most of the day on Thursday and Friday. These events resulted in some tangible outputs, as well as some learning that will likely pay off in the future.

  • Heather wrote glossary entries and did editorial reviews.
  • Sebastian finished a new template for CSS syntax.
  • Sheppy worked on an article and code sample about WebRTC.
  • Justin finished a prototype feature for helpfulness ratings for MDN articles.
  • Saurabh prototyped an automation for badge nominations, and improved CSS reference pages’ structure and syntax examples.
  • Klez got familiar with the compatibility service codebase and development workflow; he also wrote glossary entries and other learning content.
  • Mark learned about Kuma by failing to get it running on Windows.
  • Will finished a patch to apply syntax highlighting to the CSS content from MDN in Dev Tools.

And fun!

Of course, the highlight of any Mozilla event is the chance to eat, drink, and socialize with other Mozillians. Planned dinners and parties, extracurricular excursions, and spontaneous celebrations rounded out the week. Many in the MDN group stayed at a hotel that happened to be a 20-minute walk from most of the other venues, so those of us with fitness trackers blew out our step-count goals all week. A few high points:

  • Chris celebrated his birthday at the closing party on Friday, at the top of Whistler mountain.
  • Mark saw a bear, from the gondola on the way up to the mountain-top party.
  • Saurabh saw snow for the first time. In June, no less.

    Nick Alexander: Build Fennec frontend fast with mach artifact!

    Nota bene: this post supersedes Build Fennec frontend fast!

    Quick start

    It’s easy! But there is a pre-requisite: you need to enable Gregory Szorc’s mozext Mercurial extension [1] first. mozext is part of Mozilla’s version-control-tools repository; run mach mercurial-setup to make sure your local copy is up-to-date, and then add the following to the .hg/hgrc file in your source directory:

    [extensions]
    mozext = /PATH/TO/HOME/.mozbuild/version-control-tools/hgext/mozext
    

    Then, run hg pushlogsync. Mercurial should show a long (and slow) progress bar [2]. From now on, each time you hg pull, you’ll also maintain your local copy of the pushlog.

    Now, open your mozconfig file and add:

    ac_add_options --disable-compile-environment
    mk_add_options MOZ_OBJDIR=./objdir-frontend
    

    (That last line uses a different object directory — it’s worth experimenting with a different directory so you can go back to your old flow if necessary.)

    Then mach build and mach build mobile/android as usual. When it’s time to package an APK, use:

    mach artifact install && mach package
    

    instead of mach package [3]. Use mach install like normal to deploy to your device!

    After running mach artifact install && mach package once, you should find that mach gradle-install, mach gradle app:installDebug, and developing with IntelliJ (or Android Studio) work like normal as well.

    Disclaimer

    This only works when you are building Fennec (Firefox for Android) and developing JavaScript and/or Fennec frontend Java code! If you’re building Firefox for Desktop, this won’t help you. If you’re building C++ code, this won’t help you.

    The integration currently requires Mercurial. Mozilla’s release engineering runs a service mapping git commit hashes to Mercurial commit hashes; mach artifact should be able to use this service to provide automatic binary artifact management for git users.

    Discussion

    mach artifact install is your main entry point: run this to automatically inspect your local repository, determine good candidate revisions, talk to the Task Cluster index service to identify suitable build artifacts, and download them from Amazon S3. The command caches heavily, so it should be fine to run frequently; and the command avoids touching files except when necessary, so it shouldn’t invalidate builds arbitrarily.

    The reduction in build time comes from --disable-compile-environment: this tells the build system to never build C++ libraries (libxul.so and friends) [4]. On my laptop, a clobber build with this configuration completes in about 3 minutes [5]. This configuration isn’t well tested, so please file tickets blocking Bug 1159371.

    Troubleshooting

    Run mach artifact to see help.

    I’m seeing problems with pip

    Your version of pip may be too old. Upgrade it by running pip install --upgrade pip.

    I’m seeing problems with hg

    Does hg log -r pushhead('fx-team') work? If not, there’s a problem with your mozext configuration. Check the pre-requisites again.

    What version of the downloaded binaries am I using?

    mach artifact last displays the last artifact installed. You can see the local file name; the URL the file was fetched from; the Task Cluster job URL; and the corresponding Mercurial revision hash. You can use this to get some insight into the system.

    Where are the downloaded binaries cached?

    Everything is cached in ~/.mozbuild/package-frontend. The commands purge old artifacts as new artifacts are downloaded, keeping a small number of recently used artifacts.

    I’m seeing weird errors and crashes!

    Since your local build and the upstream binaries may diverge, lots of things can happen. If the upstream binaries change a C++ XPCOM component, you may see a binary incompatibility. Such a binary incompatibility looks like:

    E GeckoConsole(5165)          [JavaScript Error: "NS_ERROR_XPC_GS_RETURNED_FAILURE: Component returned failure code: 0x80570016 (NS_ERROR_XPC_GS_RETURNED_FAILURE) [nsIJSCID.getService]" {file: "resource://gre/modules/Services.jsm" line: 23}]
    

    You should update your tree (using hg pull -u --rebase or similar) and run mach build && mach artifact install && mach package again.

    How can I help debug problems?

    There are two commands to help with debugging: print-cache and clear-cache. You shouldn’t need either; these are really just to help me debug issues in the wild.

    Acknowledgements

    This work builds on the contributions of a huge number of people. First, @indygreg supported this effort from day one and reviewed the code. He also wrote mozext and made it easy to access the pushlog locally. None of this happens without Greg. Second, the Task Cluster Index team deserves kudos for making it easy to download artifacts built in automation. Anyone who’s written a TBPL scraper knows how much better the new system is. Third, I’d like to thank @liucheia for testing this with me in Whistler, and Vivek Balakrishnan for proof-reading this blog post.

    Conclusion

    In my blog post The Firefox for Android build system in 2015, the first priority was making it easier to build Firefox for Android the first time. The second priority was reducing the edit-compile-test cycle time. The mach artifact work described here drastically reduces the first compile-test cycle time, and subsequent compile-test cycles after pulling from the upstream repository. It’s hitting part of the first priority, and part of the second priority. Baby steps.

    The Firefox for Android team is always making things better for contributors! Get involved with Firefox for Android.

    Discussion is best conducted on the mobile-firefox-dev mailing list and I’m nalexander on irc.mozilla.org/#mobile and @ncalexander on Twitter.

    Changes

    • Wed 1 July 2015: Initial version.

    Notes

    [1]I can’t find documentation for mozext anywhere, but http://gregoryszorc.com/blog/2013/07/22/mercurial-extension-for-gecko-development/ includes a little information. mach artifact uses mozext to manage the pushlog.
    [2]The long (and slow) download is fetching a local copy of the pushlog, which records who pushed what commits when to the Mozilla source tree. mach artifact uses the pushlog to determine good candidate revisions (and builds) to download artifacts for.
    [3]We should make this happen automatically.
    [4]In theory, --disable-compile-environment also means we don’t need a host C++ toolchain (e.g., gcc targeting Mac OS X) nor a target C++ toolchain (e.g., the Android NDK). This is not my primary motivation but I’m happy to mentor a contributor who wanted to test this and make sure it works! It would be a nice win: you could get a working Fennec build with fewer (large!) dependencies.
    [5]I intend to profile mach build in this case and try to improve it. Much of the build is essentially single-threaded in this configuration, including compiling the Java sources for Fennec. Splitting Fennec into smaller pieces and libraries would help, but that is hard. See for example Bug 1104203.

    Will Kahn-Greene: Input: Thank You project Phase 1: Part 1

    Summary

    Beginning

    When users click on "Submit feedback..." in Firefox, they end up on our Input site where they can let us know whether they're happy or sad about Firefox and why. This data gets collected and we analyze it in aggregate looking for trends and correlations between sentiment and subject matter, releases, events, etc. It's one of the many ways that users directly affect the development of Firefox.

    One of the things that has always bugged me about this process is that some number of users are leaving feedback about issues they have with Firefox that aren't problems with the product at all, or that are known issues with workarounds. It bugs me because those users go out of their way to leave us this kind of feedback, and then they get a Thank You page that isn't remotely helpful to them.

    I've been thinking about this problem since the beginning of 2014, but hadn't had a chance to really get into it until the end of 2014 when I wrote up a project plan and some bugs.

    In the first quarter of 2015, Adam worked on this project with me as part of the Outreachy program. I took the work that he did and finished it up in the second quarter of 2015.

    Surprise ending!

    The code has been out there for a little under a month now and early analysis suggests SUCCESS!

    But keep reading for the riveting middle!

    This blog post is a write-up for the Thank You project phase 1. It's long because I wanted to go through the entire project beginning to end as a case study.


    Mozilla Open Policy & Advocacy Blog: Decisive moment for net neutrality in Europe

    After years of negotiations, the E.U. Telecom Single Market Regulation (which includes proposed net neutrality rules) is nearing completion. If passed, the Regulation will be binding on all E.U. member states. The policymakers – the three European governmental bodies:  the Parliament, the Commission, and the Council – are at a crossroads: implement real net neutrality into law, or permit net discrimination and in doing so threaten innovation and competition. We urge European policymakers to stand strong, adopt clear rules to protect the open Internet, and set an example for the world.

    At Mozilla, we’ve taken a strong stance for real net neutrality, because it is central to our mission and to the openness of the Internet. Just as we have supported action in the United States and in India, we support the adoption of net neutrality rules in Europe. Net neutrality fundamentally protects competition and innovation, to the benefit of both European Internet users and businesses. We want an Internet where everyone can create, participate, and innovate online, all of which is at risk if discriminatory practices are condoned by law or through regulatory indifference.

    The final text of European legislation is still being written, and the details are still taking shape. We have called for strong, enforceable rules against blocking, discrimination, and fast lanes, which are critical to protecting the openness of the Internet. To accomplish this, the European Parliament needs to hold firm to the position it has taken in five votes over the last five years in favour of real net neutrality. Members of the European Parliament must resist internal and external pressures to build in loopholes that would threaten those rules.

    Two issues stand out as particularly important in this final round of negotiations: specialized services and zero-rating. On the former, specialized services – or “services other than Internet access services” – represent a complex and unresolved set of market practices, including very few current ones and many speculative future possibilities. While there is certainly potential for real value in these services, absent any safeguards, such services risk undermining the open Internet. It’s important to maintain a baseline of robust access, and prevent relegating the open Internet to a second tier of quality.

    Second, earlier statements from the E.U. included language that appeared to endorse zero-rating business practices. Our view is that zero-rating as currently implemented in the market is not the right path forward for the open Internet. However, we do not believe it is necessary to address this issue in the context of the Telecom Single Market Regulation. As such, we’re glad to see such language removed from more recent drafts and we encourage European policymakers to leave it out of the final text.

    The final text that emerges from the European process will set a standard not only for Europe but for the rest of the world. It’s critical for European policymakers to stand with the Internet and get it right.

    Chris Riley, Head of Public Policy
    Jochai Ben-Avie, Internet Policy Manager

    The Mozilla Blog: New Sharing Features in Firefox

    Whichever social network you choose, it’s undeniable that being social is a key part of why you enjoy the Web. Firefox is built to put you in control, including making it easier to share anything you like on the Web’s most popular social networks. Today, we’re announcing that Firefox Share has been integrated into Firefox Hello. We introduced Firefox Share to offer a simple way of sharing Web content across popular services such as Facebook, Twitter, Tumblr, LinkedIn and Google+, as well as other social and email services (full list here), to help you share anything on the Web with any or all of your friends.

    Firefox Hello link sharing

    Firefox Hello, which we’ve been developing in beta with our partner, Telefonica, is the only in-browser video chat tool that doesn’t require an account or extra software downloads. We recently added screen sharing to Firefox Hello to make it easier to share anything you’re looking at in your video call. Now you can also invite friends to a Firefox Hello video call by sharing a link via the social network or email account of your choice, all without leaving your browser tab. That includes a newly added Yahoo Mail integration in Firefox Share that lets Yahoo Mail users share Hello conversation links or other Web content directly from Firefox Share.

    For more information:
    Release Notes for Firefox for Windows, Mac, Linux
    Release Notes for Android
    Download Firefox

    Air Mozilla: German speaking community bi-weekly meeting

    German speaking community bi-weekly meeting https://wiki.mozilla.org/De/Meetings

    Mozilla Addons Blog: T-Shirt Form Data Exposure

    On Monday, June 15, 2015 Mozilla announced on the Add-ons blog a free special edition t-shirt for eligible AMO developers. Eligible developers were asked to sign up via a Google Form and to provide their full name, full address, telephone number and T-shirt size.

    This document was mistakenly configured to allow potential public access for less than 24 hours, exposing the response data for 70 developers. As soon as the incident was discovered, we immediately changed the permission level to private access. Other than the developer who discovered and reported this incident, we are not aware of anyone without authorization accessing the spreadsheet.

    We have notified the affected individuals. We regret any inconvenience or concern this incident may have caused our AMO developer community.

    Air Mozilla: Web QA Weekly Meeting

    Web QA Weekly Meeting: This is our weekly gathering of Mozilla's Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.

    Roberto A. Vitillo: Telemetry metrics roll-ups

    Our Telemetry aggregation system has been serving us well for quite some time. As Telemetry evolved though, maintaining the codebase and adding new features such as keyed histograms has proven to be challenging. With the introduction of unified FHR/Telemetry, we decided to rewrite the aggregation pipeline with an updated set of requirements in mind.

    Metrics

    A ping is the data payload that clients submit to our server. The payload contains, among other things, over a thousand metrics of the following types:

    • numerical, e.g. startup time
    • categorical, e.g. operating system name
    • distributional, e.g. garbage collection timings

    Distributions are implemented with histograms that come in different shapes and sizes. For example, a keyed histogram represents a collection of labelled histograms. It’s not rare for keyed histograms to have thousands of possible labels. For instance, MISBEHAVING_ADDONS_JANK_LEVEL, which measures the longest blocking operation performed by an add-on, potentially has a label for each extension.

    The main objective of the aggregator is to create time-based or build-id-based aggregates by a set of dimensions:

    • channel, e.g. nightly
    • build-id or submission date
    • metric name, e.g. GC_MS
    • label, for keyed histograms
    • application name, e.g. Fennec
    • application version, e.g. 41
    • CPU architecture, e.g. x86_64
    • operating system, e.g. Windows
    • operating system version, e.g. 6.1
    • e10s enabled
    • process type, e.g. content or parent

    As scalar and categorical metrics are converted to histograms during the aggregation, ultimately we display only distributions in our dashboard.

    Raw Storage

    We receive millions of pings each day over all our channels. A raw uncompressed ping has a size of over 100KB. Pings are sent to our edge servers and end up being stored in an immutable chunk of up to 300MB on S3, partitioned by submission date, application name, update channel, application version, and build id.

    As we are currently collecting v4 submissions only on pre-release channels, we store about 700 GB per day, and that's considering only saved_session pings, as those are the ones being aggregated. Once we start receiving data on the release channel as well, we are likely going to double that number.

    As soon as an immutable chunk is stored on S3, an AWS lambda function adds a corresponding entry to a SimpleDB index. The index allows Spark jobs to query the set of available pings by different criteria without the need of performing an expensive scan over S3.

    Spark Aggregator

    A daily scheduled Spark job performs the aggregation, on the data received the day before, by the set of dimensions mentioned above. We are likely going to move from a batch job to a streaming one in the future to reduce the latency from the time a ping is stored on S3 to the time its data appears in the dashboard.

    Two kinds of aggregates are produced by the aggregator:

    • submission date based
    • build-id based

    Aggregates by build-id computed for a given submission date have to be added to the historical ones. As long as there are submissions coming from an old build of Firefox, we will keep receiving and aggregating data for it. The aggregation of the historical aggregates with the daily computed ones (i.e. partial aggregates) happens within a PostgreSQL database.
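
    Conceptually, merging a daily partial aggregate into the historical aggregate for the same combination of dimensions is just an element-wise sum of the histogram buckets (plus the ping counts). Here is a minimal sketch of that logic in TypeScript, with made-up field names; the actual pipeline performs this merge inside PostgreSQL, as described below.

    // Illustrative only: the shape of an aggregate and the merge logic.
    type Histogram = number[];

    interface PartialAggregate {
      key: string;          // serialized dimensions (channel, metric, os, ...)
      count: number;        // number of pings contributing to the aggregate
      histogram: Histogram; // summed histogram buckets
    }

    function mergeIntoHistory(
      history: Map<string, PartialAggregate>,
      partials: PartialAggregate[]
    ): void {
      for (const partial of partials) {
        const existing = history.get(partial.key);
        if (!existing) {
          // First time this combination of dimensions is seen.
          history.set(partial.key, { ...partial, histogram: [...partial.histogram] });
          continue;
        }
        // Same dimensions: add counts and sum histograms bucket by bucket.
        existing.count += partial.count;
        existing.histogram = existing.histogram.map(
          (value, i) => value + (partial.histogram[i] ?? 0)
        );
      }
    }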

    Database

    There is only one type of table within the database, partitioned by channel, version and build-id (or submission date, depending on the aggregation type).

    As PostgreSQL natively supports JSON blobs and arrays, it came naturally to express each row as just a couple of fields, one being a JSON object containing a set of dimensions and the other being an array representing the histogram. Adding a new dimension in the future should be rather painless, as dimensions are not represented with columns.

    When a new partial aggregate is pushed to the database, PostgreSQL finds the current historical entry for that combination of dimensions, if it exists, and updates the current histogram by adding the partially aggregated histogram to it. In reality, a temporary table containing all partial aggregates is pushed to the database and then merged with the historical aggregates, but the underlying logic remains the same.

    As the database is usually queried by submission date or build-id, and as there are millions of partial aggregates per day deriving from the possible combinations of dimensions, the table is partitioned the way it is to allow the upsert operation to be performed as fast as possible.

    API

    An inverted index on the JSON blobs allows us to efficiently retrieve, and aggregate, all histograms matching a given filtering criteria.

    For example,

    select aggregate_histograms(histograms)
    from build_id_nightly_41_20150602
    where dimensions @> '{"metric": "SIMPLE_MEASURES_UPTIME", 
                          "application": "Firefox",
                          "os": "Windows"}'::jsonb;
    
    

    retrieves a list of histograms, one for each combination of dimensions matching the where clause, and adds them together producing a final histogram that represents the distribution of the uptime measure for Firefox users on the nightly Windows build created on the 2nd of June 2015.

    Aggregates are made available through a HTTP API. For example, to retrieve the aggregated histogram for the GC_MS metric on Windows for build-ids of the 2015/06/15 and 2015/06/16:

    curl -X GET "http://SERVICE/aggregates_by/build_id/channels/nightly/?version=41&dates=20150615,20150616&metric=GC_MS&os=Windows_NT"
    
    

     which returns

    {"buckets":[0, ..., 10000],
     "data":[{"date":"20150615",
              "count":239459,
              "histogram":[309, ..., 5047],
              "label":""},
             {"date":"20150616",
              "count":233688,
              "histogram":[306, ..., 7875],
              "label":""}],
     "kind":"exponential",
     "description":"Time spent running JS GC (ms)"}
    
    

    Dashboard

    Our intern, Anthony Zhang, did a phenomenal job creating a nifty dashboard to display the aggregates. Even though it’s still under active development, it’s already functional and thanks to it we were able to spot a serious bug in the v2 aggregation pipeline.

    It comes with two views: the histogram view, designed for viewing distributions of measures, and an evolution view, for viewing the evolution of aggregate values for measures over time.

    As we started aggregating data at the beginning of June, the evolution plot looks rightfully wacky before that date.


    Air Mozilla: Reps weekly

    Reps weekly Weekly Mozilla Reps call

    Byron Jones: happy bmo push day!

    the following changes have been pushed to bugzilla.mozilla.org:

    • [1174057] Authentication Delegation should add an App ID column to associate api keys with specific callbacks
    • [1163761] Allow MozReview to skip user consent screen for Authentication Delegation
    • [825946] tracking flags should be cleared when a bug is moved to a product/component where they are not valid
    • [1149593] Hovering over “Copy Summary” changes the button to a grey box
    • [1171523] Change Loop product to Hello
    • [1175928] flag changes made at the same time as a cc change are not visible without showing cc changes
    • [1175644] The cpanfile created by checksetup.pl defines the same feature multiple times, breaking cpanm
    • [1176362] [Voting] When a user votes enough to confirm an individual bug, the bug does not change to CONFIRMED properly
    • [1176368] [Voting] When updating votestoconfirm to a new value, bugs with enough votes are not moved to CONFIRMED properly
    • [1161797] Use document.execCommand(“copy”) instead of flash where it is available
    • [1177239] Please create a “Taskcluster Platform” product
    • [1144485] Adapt upstream Selenium test suite to BMO
    • [1178301] webservice_bug_update.t Parse errors: Bad plan. You planned 927 tests but ran 921
    • [1163170] Giving firefox-backlog-drivers rights to edit the Rank field anywhere it appears in bugzilla
    • [1171758] Persistent xss is possible on Firefox

    discuss these changes on mozilla.tools.bmo.


    Filed under: bmo, mozilla

    Mark Finkle: Random Thoughts on Management

    I have ended up managing people at the last three places I’ve worked, over the last 18 years. I can honestly say that only in the last few years have I really started to embrace the job of managing. Here’s a collection of thoughts and observations:

    Growth: Ideas and Opinions and Failures

    Expose your team to new ideas and help them create their own voice. When people get bored or feel they aren’t growing, they’ll look elsewhere. Give people time to explore new concepts, while trying to keep results and outcomes relevant to the project.

    Opinions are not bad. A team without opinions is bad. Encourage people to develop opinions about everything. Encourage them to evolve their opinions as they gain new experiences.

    “Good judgement comes from experience, and experience comes from bad judgement” – Frederick P. Brooks

    Create an environment where differing viewpoints are welcomed, so people can learn multiple ways to approach a problem.

    Failures are not bad. Failing means trying, and you want people who try to accomplish work that might be a little beyond their current reach. It’s how they grow. Your job is keeping the failures small, so they can learn from the failure, but not jeopardize the project.

    Creating Paths: Technical versus Management

    It’s important to have an opinion about the ways a management track is different than a technical track. Create a path for managers. Create a different path for technical leaders.

    Management tracks have highly visible promotion paths. Organization structure changes, company-wide emails, and being included in more meetings and decision making. Technical track promotions are harder to notice if you don’t also increase the person’s responsibilities and decision making role.

    Moving up either track means more responsibility and more accountability. Find ways to delegate decision making to leaders on the team. Make those leaders accountable for outcomes.

    Train your engineers to be successful managers. There is a tradition in software development to use the most senior engineer to fill openings in management. This is wrong. Look for people that have a proclivity for working with people. Give those people management-like challenges and opportunities. Once they (and you) are confident in taking on management, promote them.

    Snowflakes: Engineers are Different

    Engineers, even great ones, have strengths and weaknesses. As a manager, you need to learn these for each person on your team. People can be very strong at starting new projects, building something from nothing. Others can be great at finishing, making sure the work is ready to release. Some excel at user-facing code, others love writing back-end services. Leverage your team’s strengths to efficiently ship products.

    “A 1:1 is your chance to perform weekly preventive maintenance while also understanding the health of your team” – Michael Lopp (rands)

    The better you know your team, the less likely you will create bored, passionless drones. Don’t treat engineers as fungible, swappable resources. Set them, and the team, up for success. Keep people engaged and passionate about the work.

    Further Reading

    The Role of a Senior Developer
    On Being A Senior Engineer
    Want to Know Difference Between a CTO and a VP of Engineering?
    Thoughts on the Technical Track
    The Update, The Vent, and The Disaster
    Bored People Quit
    Strong Opinions, Weakly Held

    Karl Dubost: Web Components, Stories Of Scars

    Chris Heilmann has written about Web Components.

    If you want to see the mess that is the standardisation effort around web components right now in all its ugliness, Wilson Page wrote a great post on that on Mozilla Hacks. Make sure to also read the comments – lots of good stuff there.

    Indeed a very good blog post to read. Then Chris went on saying:

    Web Components are a great idea. Modules are a great idea. Together, they bring us hours and hours of fun debating where what should be done to create a well-performing, easy to maintain and all around extensible complex app for the web.

    This has been twitching in the back of my mind for the last couple of weeks. And I kind of remember a wicked pattern from 10 years ago. Enter Compound Document Formats (CDF) with their WICD (read: wicked) specifications. If you think I'm silly, check the CDF FAQ:

    When combining content from arbitrary sources, a number of problems present themselves, including how rendering is handled when crossing from one markup language to another, or how events propagate across the same boundaries, or how to interpret the meaning of a piece of content within an unanticipated context.

    and

    Simply put, a compound document is a mixture of content in any number of formats. Compound documents range from static (say, XHTML that includes a simple SVG illustration) to very dynamic (a full-fledged Web Application). A compound document may include its parts directly (such as when you include an SVG image in an XHTML file) or by reference (such as when you embed a separate SVG document in XHTML using an <object> element). There are benefits to both, and the application should determine which one you use. For instance, inclusion by reference facilitates reuse and eases maintenance of a large number of resources. Direct inclusion can improve portability or offline use. W3C will support both modes, called CDR ("compound documents by reference") and CDI ("compound documents by inclusion").

    At that time, the Web and the W3C were full throttle on XML and namespaces. Now, the cool kids on the block are full HTML, JSON, polymers and JS frameworks. But if you look carefully and set aside the syntax and architecture parts, the narrative is the same. And with the narratives of the battle and its scars, Web Components sound very similar to the Compound Document Format.

    Still by Chris

    When it comes to componentising the web, the rabbit hole is deep and also a maze.

    Note that not everything was lost from WICD. It helped develop a couple of things, and reimagine the platform. Stay tuned, I think we will have surprises in this story. It's not over yet.

    Modularity already has a couple of scars when it comes to distribution at a large scale. Remember OpenDoc and OLE. I still remember using Cyberdog. Fun times.

    Otsukare!

    Planet Mozilla Interns: Jonathan Wilde: A Project Called Gossamer

    A few summers back, I worked on the Firefox Metro project. The big challenge I ran into the first summer–back when the project was at a very early stage–was figuring out how to distribute early builds.

    I wanted to quickly test work-in-progress builds across different devices on my desk without having to maintain a working copy and rebuild on every device. Later on, I also wanted to quickly distribute builds to other folks, too.

    I had a short-term hack based on Dropbox, batch scripts, and hope. It was successful at getting rapid builds out, but janky and unscalable.

    The underlying problem space – how do you build, distribute, and test experimental prototypes rapidly? – is one that I’ve been wanting to revisit for a while.

    So, Gossamer

    This summer, Lyre Calliope and I have had some spare time to tinker on this for fun.

    We call this project Gossamer, in honor of the Gossamer Albatross, a success story in applying rapid prototyping methodology to building a human-powered airplane.

    We’re working to enable the following development cycle:

    1. Build a prototype in a few hours or maybe a couple days, and at the maximum fidelity possible–featuring real user data, instead of placeholders.
    2. Share the prototype with testers as easily as sharing a web page.
    3. Understand how the prototype is performing in user testing relative to the status quo, qualitatively and quantitatively.
    4. Polish and move ideas that work into the real world in days or possibly weeks, instead of months or years.

    A First Proof-of-Concept

    We started by working to build a simple end-to-end demonstration of a lightweight prototyping workflow:

    (Yeah, it took longer than two weeks due to personal emergencies on my end.)

    We tinkered around with a few different ways to do this.

    Our proof-of-concept is a simple distribution service that wraps Mozilla’s browser.html project. It’s a little bit like TestFlight or HockeyApp, but for web browsers.

    To try an experimental build, you log in via GitHub, and pick the build that you want to test…and presto!

    Sequence shortened.

    About the login step: When you pick an experiment, you’re picking it for all of your devices logged in via that account.

    This makes cross-device feature testing a bit easier. Suppose you have a feature you want to test on different form factors because the feature is responsive to screen dimensions or input methods. Or suppose you’re building a task continuity feature that you need to test on multiple devices. Having the same experiment running on all the devices of your account makes this testing much easier.

    It also enables us to have a remote one-click escape hatch in case something breaks in the experiment you’re running. (It happens to the best developers!)

    To ensure that you can trust experiments on Gossamer, we integrated the login system with Mozillians. Only vouched Mozillians can ship experimental code via Gossamer.

    To ship an experimental build…you click the “Ship” button. Boom. The user gets a message asking them if they want to apply the update.

    And the cool thing about browser.html being a web application…is that when the user clicks the “Apply” button to accept the update…all we have to do is refresh the window.

    We did some lightweight user testing by having Lyre (who hadn’t seen any of the implementation yet) step through the full install process and receive a new updated build from me remotely.

    We learned a few things from this.

    What We’re Working On Next

    There’s three big points we want to focus on in the next milestone:

    1. Streamline every step. The build service web app should fade away and just be hidden glue around a web browser- and GitHub-centric workflow.
    2. Remove the refresh during updates. Tooling for preserving application state while making hot code changes to web applications based on React (such as browser.html!) is widely available.
    3. Make the build pipeline as fast as possible. Let’s see how short we can make the delay from pushing new code to GitHub (or editing through GitHub’s web interface) to updates appearing on your machine.

    We also want to shift our mode of demo from screencasts to working prototypes.

    Get Involved

    This project is still at a very early stage, but if you’d like to browse the code, it’s in three GitHub repositories:

    • gossamer - Our fork of browser.html.
    • gossamer-server - The build and distribution server.
    • gossamer-larch-patches - Tweaks to Mozilla’s larch project branch containing the graphene runtime. We fixed a bug and made a configuration tweak.

    Most importantly, we’d love your feedback:

    There’s a lot of awesome in the pipeline. Stay tuned!

    Mozilla Addons Blog: Add-ons Update – Week of 2015/07/01

    I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

    Add-ons Forum

    As we announced before, there’s a new add-ons community forum for all topics related to AMO or add-ons in general. The Add-ons category is already the most active one on the community forum, so thank you all for your contributions! The old forum is still available in read-only mode.

    The Review Queues

    • Most nominations for full review are taking less than 10 weeks to review.
    • 272 nominations in the queue awaiting review.
    • Most updates are being reviewed within 8 weeks.
    • 159 updates in the queue awaiting review.
    • Most preliminary reviews are being reviewed within 10 weeks.
    • 295 preliminary review submissions in the queue awaiting review.

    A number of factors have led to the current state of the queues: increased submissions, decreased volunteer reviewer participation, and a Mozilla-wide event that took most of our attention last week. We’re back and our main focus is the review queues. We have a new reviewer on our team, who will hopefully make a difference in the state of the queues.

    If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

    Firefox 40 Compatibility

    The Firefox 40 compatibility blog post is up. The automatic compatibility validation will be run in a few weeks.

    As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition (formerly known as Aurora) to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

    Extension Signing

    We announced that we will require extensions to be signed in order for them to continue to work in release and beta versions of Firefox. The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions.

    There’s a small change to the timeline: Firefox 40 will only warn about unsigned extensions (for all channels), Firefox 41 will disable unsigned extensions  by default unless a preference is toggled (on Beta and Release), and Firefox 42 will not have the preference. This means that we’ll have an extra release cycle before signatures are enforced by default.

    Electrolysis

    Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will now run multiple processes, with content code running in a different process than browser code. This should improve responsiveness and overall stability, but it also means many add-ons will need to be updated to support this.

    We will be talking more about these changes in this blog in the future. For now we recommend you start looking at the available documentation.

    Christian Heilmann: Over the Edge: Web Components are an endangered species

    Last week I ran the panel and the web components/modules breakout session of the excellent Edge Conference in London, England and I think I did quite a terrible job. The reason was that the topic is too large and too fragmented and broken to be taken on as a bundle.

    If you want to see the mess that is the standardisation effort around web components right now in all its ugliness, Wilson Page wrote a great post on that on Mozilla Hacks. Make sure to also read the comments – lots of good stuff there.

    Web Components are a great idea. Modules are a great idea. Together, they bring us hours and hours of fun debating what should be done where to create a well-performing, easy to maintain and all around extensible complex app for the web. Along the way we can throw around lots of tools and ideas like NPM and ES6 imports or – as Alex Russell said on the panel: “tooling will save you”.

    It does. But that was always the case. When browsers didn’t support CSS, we had Dreamweaver to create horribly nested tables that achieved the same effect. There is always a way to make browsers do what we want them to do. In the past, we did a lot of convoluted things client-side with libraries. With the advent of node and others we now have even more environments to innovate and release “not for production ready” impressive and clever solutions.

    When it comes to componentising the web, the rabbit hole is deep and also a maze. Many developers don’t have time to even start digging and instead use libraries like Polymer or React, call it a day, and call that the “de facto standard” (a term that makes my toenails crawl up – layout tables were a “de facto standard”, so was Flash video).

    React did a genius thing: by virtualising the DOM, it avoided a lot of the problems with browsers. But it also means that you forfeit all the good things the DOM gives you in terms of accessibility and semantics/declarative code. It simply is easier to write a <super-button> than to create a fragment for it or write it in JavaScript.

    Of course, either is easy for us clever and amazing developers, but the fact is that the web is not for developers. It is a publishing platform, and we are moving away from that concept at a ridiculous pace.

    And whilst React gives us all the goodness of Web Components now, it is also a library by a commercial company. That it is open source doesn’t make much of a difference. YUI showed that a truckload of innovation can go into “maintenance mode” very quickly when a company’s direction changes. I have high hopes for React, but I am also worried about dependencies on a single company.

    Let’s rewind and talk about Web Components

    Let’s do away with modules and imports for now, as I think this is a totally different discussion.

    I always loved the idea of Web Components – allowing me to write widgets in the browser that work with it rather than against it is an incredible idea. Years of widget frameworks trying to get the correct performance out of a browser whilst empowering maintainers would come to a fruitful climax. Yes, please, give me a way to write my own controls, inherit from existing ones and share my independent components with other developers.

    However, in four years, we haven’t got much to show. When we asked the very captive and elite audience of EdgeConf about Web Components, nobody raised their hand to say they are using them in real products. People either used React or Polymer as there is still no way to use Web Components in production otherwise. When we tried to find examples in the wild, the meager harvest was GitHub’s time element. I do hope that this was not all that has been written and that many a company is ready to go with Web Components. But most discussions I had ended up the same way: people are interested, tried them out once and had to bail out because of lack of browser support.

    Web Components are a chicken and egg problem where we are currently trying to define the chicken and have many a different idea what an egg could be. Meanwhile, people go to chicken-meat based fast food places to get quick results. And others increasingly mention that we should hide the chicken and just give people the eggs leaving the chicken farming to those who also know how to build a hen-house. OK, I might have taken that metaphor a bit far.

    We all agreed that XHTML2 sucked, was overly complicated, and defined without the input of web developers. I get the weird feeling that Web Components and modules are going in the same direction.

    In 2012 I wrote a longer post as an immediate response to Google’s big announcement of the foundation of the web platform following Alex Russell’s presentation at Fronteers 11 showing off what Web Components could do. In it I kind of lamented the lack of clean web code and the focus on developer convenience over clarity. Last year, I listed a few dangers of web components. Today, I am not too proud to admit that I lost sight of what is going on. And I am not alone. As Wilson’s post on Mozilla Hacks shows, the current state is messy to say the least.

    We need to enable web developers to use “vanilla” web components

    What we need is a base to start from. In the browser – a browser that users already have, and that doesn’t ask them to turn on a flag. Without that, Web Components are doomed to become a “too complex” standard that nobody implements but instead relies on libraries.

    During the breakout session, one of the interesting proposals was to turn Bootstrap components into web components and start with that. Tread the cowpath of what people use and make it available to see how it performs.

    Of course, this is a big gamble and it means consensus across browser makers. But we had that with HTML5. Maybe there is a chance for harmony amongst competitors for the sake of an extensible and modularised web that is not dependent on ES6 availability across browsers. We’re probably better off with implementing one sci-fi idea at a time.

    I wish I could be more excited or positive about this. But it left me with a sour taste in my mouth to see that EdgeConf, that hot-house of web innovation and think-tank of many very intelligent people, was as confused as I was.

    I’d love to see a “let’s turn it on and see what happens” instead of “but, wait, this could happen”. Of course, it isn’t that simple – and the Mozilla Hacks post explains this well – but a boy can dream, right? Remember when using HTML5 video was just a dream?

    David BurnsWho wants to be an alpha tester for Marionette?

    Are you an early adopter type? Are you an avid user of WebDriver and want to use the latest and greatest technology? Then you are most definitely in luck.

    Marionette, the Mozilla implementation of the FirefoxDriver, is ready for a very limited outing. There are a lot of things that have not been implemented or, since we are implementing things against the WebDriver Specification, might not have enough prose yet to implement (this has been a great way to iron out spec bugs).

    Getting Started

    At the moment, since things are still being developed and we are trying to do things with new technologies (like writing part of this project in Rust), we are starting out by supporting Linux and OS X first. Windows support will be coming in the future!

    Getting the driver

    We have binaries that you can download, for Linux and for OS X. The only bindings currently updated to work are the Python bindings, which are available in a branch on my fork of the Selenium project. Do the following to get it into a virtualenv:
    1. Create a virtualenv
    2. Activate your virtualenv
    3. cd to where you have cloned my repository
    4. In a terminal type the following: ./go py_install

    Running tests

    Running tests against Marionette requires the following changes (which will hopefully remain small):

    Update the desired capabilities to have marionette:true and add binary:/path/to/Firefox/DeveloperEdition/or/Nightly. We are only supporting those two versions at the moment because we have had a couple of incompatibility issues that we have since fixed, which makes speaking to Marionette in the beta or release versions of the browser quite difficult.
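
    For illustration only, here is a minimal sketch of what those capability changes might look like with the Python bindings (the Firefox path is a placeholder you would point at your own Developer Edition or Nightly install):

    from selenium import webdriver
    from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

    # Ask the bindings to speak to Marionette rather than the old FirefoxDriver,
    # and point them at a Developer Edition or Nightly binary.
    caps = DesiredCapabilities.FIREFOX.copy()
    caps["marionette"] = True
    caps["binary"] = "/path/to/Firefox/DeveloperEdition/or/Nightly"  # placeholder path

    driver = webdriver.Firefox(capabilities=caps)
    driver.get("https://www.mozilla.org")
    driver.quit()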

    Since you are awesome early adopters, it would be great if you could raise bugs. I am not expecting everything to work, but below is a quick list of things that I know don't work.

    • No support for self-signed certificates
    • No support for actions
    • No support for Proxy (but will be there soon)
    • No support for the logging endpoint
    • I am sure there are other things we don't remember

    Thanks for being an early adopter and thanks for raising bugs as you find them!

    Will Kahn-GreeneDitching ElasticUtils on Input for elasticsearch-dsl-py

    What was it?

    ElasticUtils was a Python library for building and executing Elasticsearch searches. I picked up maintenance after the original authors moved on with their lives, did a lot of work, released many versions and eventually ended the project in January 2015.

    Why end it? A bunch of reasons.

    It started at PyCon 2014, when I had a long talk with Rob, Jannis, and Erik about ElasticUtils and the new library Honza was working on, which became elasticsearch-dsl-py.

    At the time, I knew that ElasticUtils had a bunch of architectural decisions that turned out to make life really hard. Doing some things was hard. It was built for a pre-1.0 Elasticsearch and it would have been a monumental effort to upgrade it past the Elasticsearch 1.0 hump. The code wasn't structured particularly well. I was tired of working on it.

    Honza's library had a lot of promise and did the sorts of things that ElasticUtils should have done and did them better--even at that ridiculously early stage of development.

    By the end of PyCon 2014, I was pretty sure elasticsearch-dsl-py was the future. The only question was whether to gut ElasticUtils and make it a small shim on top of elasticsearch-dsl-py or end it.

    In January 2015, I decided to just end it because I didn't see a compelling reason to keep it around or rewrite it into something on top of elasticsearch-dsl-py. Thus I ended it.

    Now to migrate to something different.

    Read more… (7 mins to read)

    Hannah KaneWhistler Wrap-up

    What an amazing week!

    Last week members of the Mozilla community met in beautiful Whistler, BC to celebrate, reflect, brainstorm, and plan (and eat snacks). While much of the week was spent in functional teams (that is, designers with designers and engineers with engineers), the Mozilla Learning Network website (known informally as “Teach”) team was able to convene for two meetings—one focused on our process, and the other on our roadmap and plans for the future.

    Breakthroughs

    From my perspective, the week inspired a few significant breakthroughs:

    1. The Mozilla Learning Network is one, unified team with several offerings. Those offerings can be summarized in the MLN Programs image: Networks, Groups, and Convenings. The breakthrough was realizing that it’s urgent that the site reflects the full spectrum of offerings as soon as possible. We’ve adjusted our roadmap accordingly. First up: incorporate the Hive content in a way that makes sense to our audience, and provides clear pathways for engagement.
    2. Our Clubs pipeline is a bit off-balance. We have more interested Club Captains than our current (amazing) Regional Coordinators can support. This inspired an important conversation about changes to our strategy to better test out our model. We’ll be talking about how these changes are reflected on the site soon.
    3. The most important content to localize is our curriculum content. To be fair, we knew this before the work week, but it was definitely crystallized in Whistler. This gives useful shape to our localization plan.
    4. We also identified a few areas where we can begin the process of telling the full “Mozilla Learning” story. By that I mean the work that goes beyond what we call the Mozilla Learning Network—for example, we can highlight our Fellowship programs, curriculum from other teams (starting with Mozilla Science Lab!), and additional peer learning opportunities.
    5. Finally, we identified a few useful, targeted performance indicators that will help us gauge our success: 1) the # of curriculum hits, and 2) the % of site visitors who take the pledge to teach.

    Site Updates

    I also want to share a few site updates that have happened since I wrote last:

      • The flow for Clubs has been adjusted to reflect the “apply, connect, approve” model described in an earlier post.
      • We’ve added a Protect Your Data curriculum module with six great activities.
      • We added the “Pledge to Teach” action on the homepage. Visitors to the site can choose to take the pledge, and are then notified about an optional survey they can take. We’ll follow up with tailored offerings based on their survey responses.

    Questions? Ideas? Share ’em in the comments!


    Gervase MarkhamTop 50 DOS Problems Solved: Squashing Files

    Q: I post files containing DTP pages and graphics on floppy disks to a bureau for printing. Recently I produced a file that was too big to fit on the disk and I know that I will be producing more in the future. What’s the best way round the problem?

    A: There are a number of solutions, most of them expensive. For example, both you and the bureau could buy modems. A modem is a device that allows computers to be connected via a phone line. You would need software, known as a comms program, to go with the modems. This will allow direct PC-to-PC transfer of files without the need for floppy disks. Since your files are so large, you would need a fast 9600 baud modem [Ed: approx 1 kilobyte per second] with MNP5 compression/error correction to make this a viable option.

    In this case, however, I would get hold of a utility program called LHA which is widely available from the shareware/PD libraries that advertise in PC Answers. In earlier incarnations it was known as LHarc. LHA enables you to squash files into less space than they occupied before.

    The degree of compression depends on the nature of the file. Graphics and text work best, so for you this is a likely solution. The bureau will need a copy of LHA to un-squash the files before it can use them, or you can use LHA in a special way that makes the compressed files self-unpacking.

    LHA has a great advantage over rival utilities in that the author allows you to use it for free. There is no registration fee, as with the similar shareware program PKZip, for example.

    Every time they brought out a new, larger hard disk, they used to predict the end of the need for compression…

    Botond BalloC++ Concepts TS could be voted for publication on July 20

    In my report on the C++ standards meeting this May, I described the status of the Concepts TS as of the end of the meeting:

    • The committee’s Core Working Group (CWG) was working on addressing comments received from national standards bodies.
    • CWG planned to hold a post-meeting teleconference to complete the final wording including the resolutions of the comments.
    • The committee would then have the option of holding a committee-wide teleconference to vote the final wording for publication, or else delay this vote until the next face-to-face meeting in October.

    I’m excited to report that the CWG telecon has taken place, final wording has been produced, and the committee-wide telecon to approve the final wording has been scheduled for July 20.

    If this vote goes through, the final wording of the Concepts TS will be sent to ISO for publication, and the TS will be officially published within a few months!


    Nicholas NethercoteFirefox 41 will use less memory when running AdBlock Plus

    Last year I wrote about AdBlock Plus’s effect on Firefox’s memory usage. The most important part was the following.

    First, there’s a constant overhead just from enabling ABP of something like 60–70 MiB. […] This appears to be mostly due to additional JavaScript memory usage, though there’s also some due to extra layout memory.

    Second, there’s an overhead of about 4 MiB per iframe, which is mostly due to ABP injecting a giant stylesheet into every iframe. Many pages have multiple iframes, so this can add up quickly. For example, if I load TechCrunch and roll over the social buttons on every story […], without ABP, Firefox uses about 194 MiB of physical memory. With ABP, that number more than doubles, to 417 MiB.

    An even more extreme example is this page, which contains over 400 iframes. Without ABP, Firefox uses about 370 MiB. With ABP, that number jumps to 1960 MiB.

    (This description was imprecise; the overhead is actually per document, which includes both top-level documents in a tab and documents in iframes.)

    Last week Mozilla developer Cameron McCormack landed patches to fix bug 77999, which was filed more than 14 years ago. These patches enable sharing of CSS-related data — more specifically, they add data structures that share the results of cascading user agent style sheets — and in doing so they entirely fix the second issue, which is the more important of the two.

    For example, on the above-mentioned “extreme example” (a.k.a. the Vim Color Scheme Test) memory usage dropped by 3.62 MiB per document. There are 429 documents on that page, which is a total reduction of about 1,550 MiB, reducing memory usage for that page down to about 450 MiB, which is not that much more than when AdBlock Plus is absent. (All these measurements are on a 64-bit build.)

    I also did measurements on various other sites and confirmed the consistent saving of ~3.6 MiB per document when AdBlock Plus is enabled. The number of documents varies widely from page to page, so the exact effect depends greatly on workload. (I wanted to test TechCrunch again, but its front page has been significantly changed so it no longer triggers such high memory usage.) For example, for one of my measurements I tried opening the front page and four articles from each of nytimes.com, cnn.com and bbc.co.uk, for a total of 15 tabs. With Cameron’s patches applied Firefox with AdBlock Plus used about 90 MiB less physical memory, which is a reduction of over 10%.

    Even when AdBlock Plus is not enabled this change has a moderate benefit. For example, in the Vim Color Scheme Test the memory usage for each document dropped by 0.09 MiB, reducing memory usage by about 40 MiB.

    If you want to test this change out yourself, you’ll need a Nightly build of Firefox and a development build of AdBlock Plus. (Older versions of AdBlock Plus don’t work with Nightly due to a recent regression related to JavaScript parsing). In Firefox’s about:memory page you’ll see the reduction in the “style-sets” measurements. You’ll also see a new entry under “layout/rule-processor-cache”, which is the measurement of the newly shared data; it’s usually just a few MiB.

    This improvement is on track to make it into Firefox 41, which is scheduled for release on September 22, 2015. For users on other release channels, Firefox 41 Beta is scheduled for release on August 11, and Firefox 41 Developer Edition is scheduled to be released in the next day or two.

    Mark BannerFirefox Hello Desktop: Behind the Scenes – UI Showcase

    This is the third of some posts I’m writing about how we implement and work on the desktop and standalone parts of Firefox Hello. You can find the previous posts here.

    The Showcase

    One of the most useful parts of development for Firefox Hello is the User Interface (UI) showcase. Since all of the user interface for Hello is written in html and JavaScript, and is displayed in the content scope, we are able to display it within a “normal” web page with very little adjustment.

    So what we do is to put almost all our views onto a single html page at representative sizes. The screen-shot below shows just one view from the page, but those buttons at the top give easy access, and in reality there are lots of them (about 55 at the time of writing).

    UI Showcase showing a standalone (link-clicker) view

    Faster Development

    The showcase has various advantages that help us develop faster:

    • Since it is a web page, we have all the developer tools available to us – inspector, css layout etc.
    • We don’t have to restart Firefox to pick up changes to the layout, nor do we have to recompile – a simple reload of the page is enough to pick up changes.
    • We also don’t have to go through the flow each time, e.g. if we’re changing some of the views which show the media (like the one above), we avoid needing to go through the conversation setup routines for each code/css change until we’re pretty sure it’s going to work the way we expect.
    • Almost all the views are shown – if the css is broken for one view it’s much easier to detect than having to go through the user flow to get to the view you want.
    • We’ve recently added an RTL mode so that we can easily see what the views look like in RTL languages. Hence no awkward forcing of Firefox into RTL mode to check the views.

    There’s one other “feature” of the showcase as we’ve got it today – we don’t pick up the translated strings, but rather the raw string label. This tends to give us longer strings than are used normally for English, which it turns out is an excellent way of being able to detect some of the potential issues for locales which need longer strings.

    Structure of the showcase

    The showcase is a series of iframes. We load individual react components into each iframe, sometimes loading the same component multiple times with different parameters or stores to get the different views. The rest of the page is basically just structure around display of the views.

    The iframes do have some downsides – you can’t live edit css in the inspector and have it applied across all the views, but that’s minor compared to the advantages we get from this one page.

    Future improvements

    We’re always looking for ways we can improve how we work on Hello. We’ve recently improved the UI showcase quite a bit, so I don’t think we have too much on our outstanding list at the moment.

    The only thing I’ve just remembered is that we’ve commented it would be nice to have some sort of screen-shot comparison, so that we can make changes and automatically check for side-effects on other views.

    We’d also certainly be interested in hearing about similar tools which could do a similar job – sharing and re-using code is definitely a win for everyone involved.

    Interested in learning more?

    If you’re interested in learning more about the UI-showcase, then you can find the code here, try it out for yourself, or come and ask us questions in #loop on irc.

    If you want to help out with Hello development, then take a look at our wiki pages, our mentored bugs or just come and talk to us.

    Roberto A. VitilloSpark best practices

    We have been running Spark for a while now at Mozilla and this post is a summary of things we have learned about tuning and debugging Spark jobs.

    Spark execution model

    Spark’s simplicity makes it all too easy to ignore its execution model and still manage to write jobs that eventually complete. With larger datasets, having an understanding of what happens under the hood becomes critical to reduce run-time and avoid out of memory errors.

    Let’s start by taking our good old word-count friend as a starting example:

    rdd = sc.textFile("input.txt")\
            .flatMap(lambda line: line.split())\
            .map(lambda word: (word, 1))\
            .reduceByKey(lambda x, y: x + y, 3)\
            .collect()
    

    RDD operations are compiled into a Directed Acyclic Graph of RDD objects, where each RDD points to the parent it depends on:

    Figure 1: DAG of RDD objects

    At shuffle boundaries, the DAG is partitioned into so-called stages that are going to be executed in order, as shown in figure 2. The shuffle is Spark’s mechanism for re-distributing data so that it’s grouped differently across partitions. This typically involves copying data across executors and machines, making the shuffle a complex and costly operation.

    Figure 2: stages

    To organize data for the shuffle, Spark generates sets of tasks – map tasks to organize the data and reduce tasks to aggregate it. This nomenclature comes from MapReduce and does not directly relate to Spark’s map and reduce operations. Operations within a stage are pipelined into tasks that can run in parallel, as shown in figure 3.

    Figure 3: tasks

    Stages, tasks and shuffle writes and reads are concrete concepts that can be monitored from the Spark shell. The shell can be accessed from the driver node on port 4040, as shown in figure 4.

    Figure 4: Spark shell

    Best practices

    Spark Shell

    Running Spark jobs without the Spark Shell is like flying blind. The shell allows you to monitor and inspect the execution of jobs. To access it remotely, a SOCKS proxy is needed, as the shell also connects to the worker nodes.

    Using a proxy management tool like FoxyProxy allows you to automatically filter URLs based on text patterns and to limit the proxy settings to domains that match a set of rules. The browser add-on automatically handles turning the proxy on and off when you switch between viewing websites hosted on the master node and those on the Internet.

    Assuming that you launched your Spark cluster with the EMR service on AWS, type the following command to create a proxy:

    ssh -i ~/mykeypair.pem -N -D 8157 hadoop@ec2-...-compute-1.amazonaws.com
    

    Finally, import the following configuration into FoxyProxy:

    <?xml version="1.0" encoding="UTF-8"?>
    <foxyproxy>
      <proxies>
        <proxy name="emr-socks-proxy" notes="" fromSubscription="false" enabled="true" mode="manual" selectedTabIndex="2" lastresort="false" animatedIcons="true" includeInCycle="true" color="#0055E5" proxyDNS="true" noInternalIPs="false" autoconfMode="pac" clearCacheBeforeUse="false" disableCache="false" clearCookiesBeforeUse="false" rejectCookies="false">
          <matches>
            <match enabled="true" name="*ec2*.amazonaws.com*" pattern="*ec2*.amazonaws.com*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
            <match enabled="true" name="*ec2*.compute*" pattern="*ec2*.compute*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
            <match enabled="true" name="10.*" pattern="http://10.*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
            <match enabled="true" name="*10*.amazonaws.com*" pattern="*10*.amazonaws.com*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
            <match enabled="true" name="*10*.compute*" pattern="*10*.compute*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
            <match enabled="true" name="*localhost*" pattern="*localhost*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
          </matches>
          <manualconf host="localhost" port="8157" socksversion="5" isSocks="true" username="" password="" domain="" />
        </proxy>
      </proxies>
    </foxyproxy>

    Once the proxy is enabled you can open the Spark Shell by visiting localhost:4040.

    Use the right level of parallelism

    Clusters will not be fully utilized unless the level of parallelism for each operation is high enough. Spark automatically sets the number of partitions of an input file according to its size and for distributed shuffles, such as groupByKey and reduceByKey, it uses the largest parent RDD’s number of partitions. You can pass the level of parallelism as a second argument to an operation. In general, 2-3 tasks per CPU core in your cluster are recommended. That said, having tasks that are too small is also not advisable as there is some overhead paid to schedule and run a task.

    As a rule of thumb tasks should take at least 100 ms to execute; you can ensure that this is the case by monitoring the task execution latency from the Spark Shell. If your tasks take considerably longer than that keep increasing the level of parallelism, by say 1.5, until performance stops improving.
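
    As a minimal sketch (the partition count and core count below are made-up numbers), the same word count with an explicit level of parallelism might look like this:

    pairs = sc.textFile("input.txt")\
              .flatMap(lambda line: line.split())\
              .map(lambda word: (word, 1))

    # Default: the shuffle inherits the partition count of the largest parent RDD.
    counts_default = pairs.reduceByKey(lambda x, y: x + y)

    # Explicit: pass the level of parallelism as the second argument,
    # e.g. roughly 2-3 tasks per core on a hypothetical 16-core cluster.
    counts_tuned = pairs.reduceByKey(lambda x, y: x + y, 48)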

    Reduce working set size

    Sometimes, you will get terrible performance or out of memory errors, because the working set of one of your tasks, such as one of the reduce tasks in groupByKey, was too large. Spark’s shuffle operations (sortByKey, groupByKey, reduceByKey, join, etc) build a hash table within each task to perform the grouping, which can often be large.

    Even though those tables spill to disk, getting to the point where the tables need to be spilled increases the memory pressure on the executor incurring the additional overhead of disk I/O and increased garbage collection. If you are using pyspark, the memory pressure will also increase the chance of Python running out of memory.

    The simplest fix here is to increase the level of parallelism, so that each task’s input set is smaller.
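
    As a small sketch of the same idea applied to groupByKey (the partition count is a made-up number):

    # Grouping into more, smaller partitions shrinks each task's hash table.
    grouped = pairs.groupByKey(numPartitions=200)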

    Avoid groupByKey for associative operations

    Both reduceByKey and groupByKey can be used for the same purposes but reduceByKey works much better on a large dataset. That’s because Spark knows it can combine output with a common key on each partition before shuffling the data.

    In reduce tasks, key-value pairs are kept in a hash table that can spill to disk, as mentioned in “Reduce working set size“. However, the hash table flushes out the data to disk one key at a time. If a single key has more values than can fit in memory, an out of memory exception occurs. Pre-combining the keys on the mappers before the shuffle operation can drastically reduce the memory pressure and the amount of data shuffled over the network.
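
    A short sketch of the difference, assuming an RDD of (word, 1) pairs:

    # groupByKey ships every single value across the network before summing:
    counts_slow = pairs.groupByKey().mapValues(lambda ones: sum(ones))

    # reduceByKey pre-combines values on each partition before the shuffle,
    # so far less data crosses the network:
    counts_fast = pairs.reduceByKey(lambda x, y: x + y)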

    Avoid reduceByKey when the input and output value types are different

    Consider the job of creating a set of strings for each key:

    rdd.map(lambda p: (p[0], {p[1]}))\
        .reduceByKey(lambda x, y: x | y)\
        .collect()
    

    Note how the input values are strings and the output values are sets. The map operation creates lots of temporary small objects. A better way to handle this scenario is to use aggregateByKey:

    def seq_op(xs, x):
        xs.add(x)
        return xs
    
    def comb_op(xs, ys):
        return xs | ys
    
    rdd.aggregateByKey(set(), seq_op, comb_op).collect()
    

    Avoid the flatMap-join-groupBy pattern

    When two datasets are already grouped by key and you want to join them and keep them grouped, you can just use cogroup. That avoids all the overhead associated with unpacking and repacking the groups.
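
    A hedged sketch, assuming two hypothetical RDDs a and b that are already grouped by key, i.e. (key, [values]) pairs on each side:

    # Anti-pattern: flatten both sides, join, then group again - two shuffles
    # plus the cost of unpacking and repacking the groups.
    slow = (a.flatMapValues(lambda vs: vs)
             .join(b.flatMapValues(lambda vs: vs))
             .groupByKey())

    # cogroup joins while keeping both sides grouped; each result is
    # (key, (values_from_a, values_from_b)).
    fast = a.cogroup(b)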

    Python memory overhead

    The spark.executor.memory option, which determines the amount of memory to use per executor process, is JVM specific. If you are using pyspark you can’t set that option to be equal to the total amount of memory available to an executor node, as the JVM might eventually use all the available memory, leaving nothing behind for Python.
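
    As an illustrative sketch with made-up numbers, on a node with 16 GB available per executor you might leave headroom for Python like this:

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .set("spark.executor.memory", "12g")        # JVM heap per executor
            .set("spark.python.worker.memory", "2g"))   # spill threshold for Python workers

    sc = SparkContext(conf=conf)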

    Use broadcast variables

    Using the broadcast functionality available in SparkContext can greatly reduce the size of each serialized task, and the cost of launching a job over a cluster. If your tasks use any large object from the driver program, like a static lookup table, consider turning it into a broadcast variable.
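
    A minimal sketch, using a made-up lookup table:

    lookup = sc.broadcast({"Darwin": "mac", "Linux": "linux", "Windows_NT": "windows"})

    # Tasks read the table via lookup.value instead of dragging a copy of the
    # dict along in every serialized closure.
    platforms = rdd.map(lambda os_name: lookup.value.get(os_name, "other"))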

    Cache judiciously

    Just because you can cache a RDD in memory doesn’t mean you should blindly do so. Depending on how many times the dataset is accessed and the amount of work involved in doing so, recomputation can be faster than the price paid by the increased memory pressure.

    It should go without saying that if you only read a dataset once there is no point in caching it; it will actually make your job slower. The size of cached datasets can be seen from the Spark Shell.
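
    A small sketch of when caching pays off (a hypothetical dataset read by two actions):

    filtered = rdd.filter(lambda x: x is not None).cache()

    total = filtered.count()    # the first action computes and caches the data
    sample = filtered.take(10)  # the second action reads from the cache instead of recomputing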

    Don’t collect large RDDs

    When a collect operation is issued on a RDD, the dataset is copied to the driver, i.e. the master node. A memory exception will be thrown if the dataset is too large to fit in memory; take or takeSample can be used to retrieve only a capped number of elements instead.
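
    For example (sketch):

    preview = rdd.take(100)              # only the first 100 elements reach the driver
    sample = rdd.takeSample(False, 100)  # or a random sample of 100, without replacement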

    Minimize amount of data shuffled

    A shuffle is an expensive operation since it involves disk I/O, data serialization, and network I/O. As illustrated in figure 3, each reducer in the second stage has to pull data across the network from all the mappers.

    As of Spark 1.3, these files are not cleaned up from Spark’s temporary storage until Spark is stopped, which means that long-running Spark jobs may consume all available disk space. This is done so that shuffles don’t have to be re-computed.

    Know the standard library

    Avoid re-implementing existing functionality as it’s guaranteed to be slower.

    Use dataframes

    A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a pandas DataFrame in Python.

    props = get_pings_properties(pings,
                                 ["environment/system/os/name",
                                  "payload/simpleMeasurements/firstPaint",
                                  "payload/histograms/GC_MS"],
                                 only_median=True)
    
    frame = sqlContext.createDataFrame(props.map(lambda x: Row(**x)))
    frame.groupBy("environment/system/os/name").count().show()
    

    yields:

    environment/system/os/name count 
    Darwin                     2368  
    Linux                      2237  
    Windows_NT                 105223
    

    Before any computation on a DataFrame starts, the Catalyst optimizer compiles the operations that were used to build the DataFrame into a physical plan for execution. Since the optimizer generates JVM bytecode for execution, pyspark users will experience the same high performance as Scala users.


    Michael KaplyThe End of Firefox 31 ESR

    With the release of Firefox 39 today also comes the final release of the Firefox 31 ESR (barring any security updates in the next six weeks).
    That means you have six weeks to manage your switch over to the Firefox 38 ESR.

    If you've been wondering if you should use the ESR instead of keeping up with current Firefox releases, now might be a good time to switch. That's because there are a couple features coming in the Firefox mainline that might affect you. These include the removal of the distribution/bundles directory as well as the requirement for all add-ons to be signed by Mozilla.

    It's much easier going from Firefox 38 to the Firefox 38 ESR than going from Firefox 39 to the Firefox 38 ESR.

    If you want to continue on the Firefox mainline, you can use the CCK2 to bring back some of the distribution/bundles functionality, but I won't be able to do anything about the signing requirement.

    Mark BannerFirefox Hello Desktop: Behind the Scenes – Architecture

    This is the second of some posts I’m writing about how we implement and work on the desktop and standalone parts of Firefox Hello. The first post was about our use of Flux and React, this second post is about the architecture.

    In this post, I will give an overview of the Firefox browser software architecture for Hello, which includes the standalone UI.

    User-visible parts of Hello

    Although there are many small parts to Hello, most of it is shaped by what is user visible:

    Firefox Hello Desktop UI (aka Link-Generator)

    Hello Standalone UI (aka Link-clicker)

    Firefox Browser Architecture for Hello

    The in-browser part of Hello is split into three main areas:

    • The panel which has the conversation and contact lists. This is a special about: page that is run in the content process with access to additional privileged APIs.
    • The conversation window where conversations are held. Within this window, similar to the panel, is another about: page that is also run in the content process.
    • The backend runs in the privileged space within gecko. This ties together the communication between the panels and conversation windows, and provides access to other gecko services and parts of the browser with which Hello integrates.

    Outline of Hello’s Desktop Architecture

    MozLoopAPI is our way of exposing small bits of the privileged gecko (link) code to the about: pages running in content. We inject a navigator.mozLoop object into the content pages when they are loaded. This allows various functions and facilities to be exposed, e.g. access to a backend cache of the rooms list (which avoids multiple caches per window), and a similar backend store of contacts.

    Standalone Architecture

    The Standalone UI is simply a web page that’s shown in any browser when a user clicks a conversation link.

    The conversation flow in the standalone UI is very similar to that of the conversation window, so most of the stores and supporting files are shared. Most of the views for the Standalone UI are currently different to those on the desktop – there’s a different layout, so we need different structures.

    Outline of Hello’s Standalone UI Architecture

    File Architecture as applied to the code

    The authoritative location for the code is mozilla-central; it lives in the browser/components/loop directory. Within that we have:

    • content/ – This is all the code relating to the panel and conversation window that is shipped in the Firefox browser
    • content/shared/ – This code is used in the browser as well as in the standalone UI
    • modules/ – This is the backend code that runs in the browser
    • standalone/ – Files specific to the standalone UI

    Future Work

    There are a couple of parts of the architecture that we’re likely to rework soon.

    Firstly, with the current push to electrolysis, we’re replacing the current exposed MozLoopAPI with a message-based RPC mechanism. This will then let us run the panel and conversation window in the separated content process.

    Secondly, we’re currently reworking some of the UX and it is moving to be much more similar between desktop and standalone. As a result, we’re likely to be sharing more of the view code between the two.

    Interested in learning more?

    If you’re interested in learning more about Hello’s architecture, then feel free to dig into the code, or come and ask us questions in #loop on irc.

    If you want to help out with Hello development, then take a look at our wiki pages, our mentored bugs or just come and talk to us.

    About:CommunityFirefox 39 new contributors

    With the release of Firefox 39, we are pleased to welcome the 64 developers who contributed their first code change to Firefox in this release, 55 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

  • agrigas: 1135270
  • bmax1337: 967319
  • leo: 1134993
  • schtroumps31: 1130045
  • zmiller12: 1138873
  • George Duan: 1135293
  • Abhinav Koppula: 732688, 951695, 1127337
  • Alex Verstak: 1113431, 1144816
  • Alexandre Ratchov: 1144087
  • Andrew Overholt: 1127552
  • Anish: 1135091, 1135383
  • Anush: 418517, 1113761
  • Bhargav Chippada: 1112605, 1130372
  • Boris Kudryavtsev: 1135364, 1144613
  • Cesar Guirao: 1139132
  • Chirag Bhatia: 1133211
  • Danilo Cesar Lemes de Paula: 1146020
  • Daosheng Mu: 1133391
  • Deepak: 1039540
  • Felix Janda: 1130164, 1130175
  • Gareth Aye: 1145310
  • Geoffroy Planquart: 942475, 1042859
  • Gerald Squelart: 1121774, 1135541, 1137583
  • Greg Arndt: 1142779
  • Henry Addo: 1084663
  • Jason Gersztyn: 1132673
  • Jeff Griffiths: 1138545
  • Jeff Lu: 1098415, 1106779, 1140044
  • Johan K. Jensen: 1096294
  • Johannes Vogel: 1139594
  • John Giannakos: 1134568
  • John Kang: 1144782, 1146252
  • Jorg K: 756984
  • Kyle Thomas: 1137004
  • Léon McGregor: 1115925, 1130741, 1136708
  • Manraj Singh [:manrajsingh|Away till 21st June]: 1120408
  • Mantaroh Yoshinaga: 910634, 1106905, 1130614
  • Markus Jaritz: 1139174
  • Massimo Gervasini: 1137756, 1144695
  • Matt Hammerly: 1124271
  • Matt Spraggs: 1036454
  • Michael Weisz : 736572, 782623, 935434
  • Mitchell Field: 987902
  • Mohamed Waleed: 1106938
  • NiLuJe: 1143411
  • Perry Wagle: 1122941
  • Ponç Bover: 1126978
  • Quentin Pradet: 1092544
  • Ravi Shankar: 1109608
  • Rishi Baldawa: 1143196
  • Stéphane SCHMIDELY: 935259, 1144619
  • Sushrut Girdhari (sg345): 1137248
  • Thomas Baquet: 1132078
  • Titi_Alone : 1133063
  • Tyler St. Onge: 1134927
  • Vaibhav Bhosale: 1135009
  • Vidit23: 1121317
  • Wickie Lee: 1136253
  • Zimon Dai: 983469, 1135435
  • atlanto: 1137615
  • farhaan: 1073234
  • pinjiz: 1124943, 1142260, 1142268
  • qasim: 1123431
  • ronak khandelwal: 1122767
  • uelis: 1047529

    Andy McKayCycling

    In January I finally decided to do something I'd wanted to do for a long time, and that's the Gran Fondo from Vancouver to Whistler.

    I've been cycling for a while, to and from work, but it was time to get more serious and hopefully lose some weight in the process. So I signed up, and in September I'll be racing up to Whistler with a few thousand other people.

    Last Saturday I was in Whistler for the Mozilla work week. I got to do some riding, including a quick trip to Pemberton:

    I had the opportunity to ride down from Whistler as a practice. I've enjoyed my cycling, but I looked at this ride with a mixture of anticipation and dread.

    Getting back is a 135.8km ride that looks like this:

    It also meant hanging out at a party with Mozilla on top of the mountain without drinking very much. Something that, if you know me, I tend to find a little difficult.

    As it turns out the ride was great fun. Riding in the sun with views of the Elaho and Howe Sound. The main annoyance was having to stop for a traffic light in Squamish after cycling continuously for so long.

    After Horseshoe Bay I got a flat tyre. When I cycle to and from work I don't carry spares - if anything happens I get on a bus. Repeating that habit here was an error, but fortunately multiple people kindly stopped and offered to help. About 15 minutes later I was up and cycling again.

    Later on, as I was crossing North Vancouver with some other people, a truck pulled up beside us and shouted "You're doing 39km/h!". Somehow after 5 hours on the road I was still having fun and cycling fast.

    I've gone from doing a few days a week, to over 300k on a bike a week and still loving it. I've gone from just being happy to get to Whistler alive to thinking about setting a time for my race. We'll see how that goes.

    Karl DubostWatch A Computing Engineer Live Coding

    Mike (Taylor) has been telling me about the live coding session of Mike Conley for a little while. So yesterday I decided to give it a try with the first episode of The Joy of Coding (mconley livehacks on Firefox). As mentioned in the description:

    Unscripted, unplanned, uncensored, and true to life, watch what a Firefox Desktop engineer does to close bugs and get the job done.

    And that is what is written on the jar. And sincerely this is good.

    Why Should You Watch Live Coding?

    I would recommend to watch this for:

    • Beginner devs: To understand that more experienced developers struggle and make mistakes too. To also learn a couple of good practices when coding.
    • Experienced devs: To see how someone else is coding and learn a couple of new tricks and habits when coding. To encourage them to do the same kind of things that Mike is doing.
    • Managers: If you are working as a manager in a Web agency, or you are a project manager, watch this. You will not understand most of it, but focus exactly on what Mike is doing in terms of thought process and work organization. Even without a knowledge of programming, we discover the struggles, the trials and errors, and the successes.

    There's a full series of them, currently 19.

    My Own (Raw) Notes When Watching The Video

    • Watching the first video of Live Coding
    • He started with 3 video scenes and switched to his main full screen
    • He's introducing himself
    • mconley: "Nothing is prepared"
    • He's introducing the bug and explained it in demonstrating it
    • mconley: "I like to take notes when working on bugs" (taken in evernote)
    • mconley: "dxr is better than mxr."
    • He's not necessarily remembering everything. So he goes through other parts of the code to understand what others did.
    • Sometimes he just doesn't know, doesn't understand and he says it.
    • mconley: "What other people do?"
    • He's taking notes including some TODOs for the future to explore, understand, do.
    • He's showing his fails in compiling, in coding, etc.
    • (personal thoughts) It's hard to draw on a computer. Paper provides some interesting features for quickly drawing something. Computer loses, paper wins.
    • When recording, thinking with a loud voice gives context on what is happening.
    • Write comments in the code for memory even if you remove them later.
    • In your notes, cut and paste the code from the source. Paper loses, computer wins.
    • (personal thoughts): C++ code is ugly to read.
    • (personal thoughts): Good feeling for your own job after watching this. It shows you are not the only one struggling when doing stuff.

    Some Additional Thoughts

    We met Mike Conley in Whistler, Canada last week. He explained he used Open Broadcasting Project for recording his sessions. I'm tempted to do something similar for Web Compatibility work. I'm hesitating between French and English. Since Mike is already doing this in English, maybe I would do it in French, so people in the French community could benefit from it.

    So thanks Mike for telling me about this in the last couple of weeks.

    Otsukare!

    Air MozillaDXR 2.0 (Part 2: Discussion)

    DXR 2.0 (Part 2: Discussion) A discussion of the roadmap for DXR after 2.0

    Air MozillaDXR 2.0 (Part 1: Dog & Pony Show)

    DXR 2.0 (Part 1: Dog & Pony Show) Demo of features new in the upcoming 2.0 release of DXR, Mozilla's search and analysis tool for large codebases

    Daniel GlazmanCSS Working Group's future

    Hello everyone.

    Back in March 2008, I was extremely happy to announce my appointment as Co-chairman of the CSS Working Group. Seven years and a half later, it's time to move on. There are three main reasons to that change, that my co-chair Peter and I triggered ourselves with W3C Management's agreement:

    1. We never expected to stay in that role 7.5 years. Chris Lilley chaired the CSS Working Group 1712 days from January 1997 (IIRC) to 2001-oct-10 and that was at that time the longest continuous chairing in W3C's history. Bert Bos chaired it 2337 days from 2001-oct-11 to 2008-mar-05. Peter and I started co-chairing it on 2008-mar-06 and it will end at TPAC 2015. That's 2790 days so 7 years 7 months and 20 days! I'm not even sure those 2790 days hold a record, Steven Pemberton probably chaired longer. But it remains that our original mission to make the WG survive and flourish is accomplished, and we now need fresher blood. Stability is good, but smart evolution and innovation are better.
    2. Co-chairing a large, highly visible Working Group like the CSS Working Group is not a burden, far from it. But it's not a light task either. We start feeling the need for a break.
    3. There were good candidates for the role, unanimously respected in the Working Group.

    So the time has come. The new co-chairs, Rossen Atanassov from Microsoft and Alan Stearns from Adobe, will take over during the Plenary Meeting of the W3C held in Sapporo, Japan, at the end of October and that is A Good Thing™. You'll find below a copy of my message to W3C.

    To all the people I've been in touch with while holding my co-chair's hat: thank you, sincerely and deeply. You, the community around CSS, made everything possible.

    Yours truly.

    Dear Tim, fellow ACs, fellow Chairs, W3C Staff, CSS WG Members,

    After seven years and a half, it's time for me to pass the torch of the CSS Working Group's co-chairmanship. 7.5 years is a lot and fresh blood will bring fresh perspectives and new chairing habits. At a time the W3C revamps its activities and WGs, the CSS Working Group cannot stay entirely outside of that change even if its structure, scope and culture are retained. Peter and I decided it was time to move on and, with W3M's agreement, look for new co-chairs.

    I am really happy to leave the Group in Alan's and Rossen's smart and talented hands, I'm sure they will be great co-chairs and I would like to congratulate and thank them for accepting to take over. I will of course help the new co-chairs on request for a smooth and easy transition, and I will stay in the CSS WG as a regular Member.

    I'd like to deeply thank Tim for appointing me back in 2008, still one of the largest surprises of my career!

    I also wish to warmly thank my good friends Chris Lilley, Bert Bos and Philippe Le Hégaret from W3C Staff for their crucial daily support during all these years. Thank you Ralph for the countless transition calls! I hope the CSS WG still holds the record for the shortest positive transition call!

    And of course nothing would have been possible without all the members of the CSS Working Group, who tolerated me for so long and accepted the changes we implemented in 2008, and all our partners in the W3C (in particular the SVG WG) or even outside of it, so thank you all. The Membership of the CSS WG is a powerful engine and, as I often say, us co-chairs have only been a drop of lubricant allowing that engine to run a little bit better, smoother and without too much abrasion.

    Last but not least, deep thanks to my co-chair and old friend Peter Linss for these great years; I accepted that co-chair's role to partner with Peter and enjoyed every minute of it. A long ride but such a good one!

    I am confident the CSS Working Group is and will remain a strong and productive Group, with an important local culture. The CSS Working Group has both style and class (pun intended), and it has been an honour to co-chair it.

    Thank you.

    </Daniel>

    Air MozillaMozilla Weekly Project Meeting

    Mozilla Weekly Project Meeting The Monday Project Meeting

    Mozilla WebDev CommunityBeer and Tell – June 2015

    Once a month, web developers from across the Mozilla Project get together to try and transmute a fresh Stanford graduate into a 10x engineer. Meanwhile, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

    There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

    Osmose: SpeedKills and Advanced Open File

    First up was Osmose (that’s me!) presenting two packages for Atom, a text editor. SpeedKills is a playful package that plays a guitar solo and sets text on fire when the user types fast enough. Advanced Open File is a more useful package that adds a convenient dialog for browsing the file system and opening files by path rather than using the fuzzy finder. Both are available for install through the Atom package repository.

    new_one: Tab Origin

    Next was new_one, who shared Tab Origin, a Firefox add-on that lets you return to the webpage that launched the current tab, even if the parent tab has since been closed. It’s activated via a keyboard shortcut that can be customized.

    Potch: WONTFIX and Presentation Mode

    Continuing a fine tradition of batching projects, Potch stopped by to show off two Firefox add-ons. The first was WONTFIX, which adds a large red WONTFIX stamp to any Bugzilla bug that has been marked as WONTFIX. The second was Presentation Mode, which allows you to full-screen any content in a web page while hiding the browser chrome. This is especially useful when giving web-based presentations.

    Peterbe: premailer.io

    Peterbe shared premailer.io, which is a service wrapping premailer. Premailer takes a block of HTML with a style tag and applies the styles within as style attributes on each matching tag. This is mainly useful for HTML emails, which generally don’t support style tags that apply to the entire email.
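
    As a quick hedged sketch of how the underlying premailer library is typically used (the HTML snippet here is made up):

    from premailer import transform

    html = """
    <html>
      <head><style>p { color: red; }</style></head>
      <body><p>Hello</p></body>
    </html>
    """

    # Inlines the <style> rules as style="" attributes on the matching tags,
    # which is what most HTML email clients require.
    print(transform(html))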

    ErikRose: Spam-fighting Tips

    ErikRose learned a lot about the current state of spam-fighting while redoing his mail server:

    • Telling Postfix to be picky about RFCs is a good first pass. It eliminates some spam without having to do much computation.
    • spamassassin beats out dspam, which hasn’t seen an update since 2012.
    • Shared-digest detectors like Razor help a bit but aren’t sufficient on their own without also greylisting to give the DBs a chance to catch up.
    • DNS blocklists are a great aid: they reject 3 out of 4 spams without taking much CPU.
    • Bayes is still the most reliable (though the most CPU-intense) filtration method. Bayes poisoning is infeasible, because poisoners don’t know what your ham looks like, so don’t worry about hand-picking spam to train on. Train on an equal number of spams and hams: 400 of each works well. Once your bayes is performing well, crank up your BAYES_nn settings so spamassassin believes it.
    • Crank up spamc’s --max-size to 1024000, because spammers are now attaching images > 512K to mails to bypass spamc’s stock 512K threshold. This will cost extra CPU.

    With this, he gets perhaps a spam a week, with over 400 attempts per day.


    We were only able to get a 3x engineer this month, but at least they were able to get a decent job working on enterprise software.

    If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

    See you next month!

    Mozilla Security BlogDharma



    As soon as a developer at Mozilla starts integrating a new WebAPI feature, the Mozilla Security team begins working to help secure that API. Subtle programming mistakes in new code can introduce annoying crashes and even serious security vulnerabilities that can be triggered by malformed input, leading to headaches for the user and security exposure.

    WebAPIs start life as a specification in the form of an Interface Description Language, or IDL. Since this is essentially a grammar, a grammar-based fuzzer becomes a valuable tool in finding security issues in new WebAPIs because it ensures that expected semantics are followed most of the time, while still exploring enough undefined behavior to produce interesting results.

    We came across a grammar fuzzer Ben Hawkes released in 2011 called “Dharma.” Sadly, only one version was ever made public. We liked Ben’s approach, but Dharma was missing some features which were important for us and its wider use for API fuzzing. We decided to sit down with our fuzzing mates at BlackBerry and rebuild Dharma, giving the results back to the public, open source and licensed as MPL v2.

    We redesigned how Dharma parses grammars and optimized the speed of parsing and the generating of fuzzed output, added new grammar features to the grammar specification, added support for serving testcases over a WebSocket server, and made it Python 3 ready. It comes with no dependencies and runs out of the box.

    In theory Dharma can be used with any data that can be represented as a grammar. At Mozilla we typically use it for APIs like WebRTC, WebAudio, or WebCrypto.



    Dharma has no integrated harness. Feel free to check out the Quokka project which provides an easy way for launching a target with Dharma, monitoring the process and bucketing any faults.

    Dharma is actively in use and maintained at Mozilla and more features are planned for the future. Ideas for improvements are always greatly welcomed.

    Dharma is available via GitHub (preferred and always up-to-date) or via PyPi by running “pip install dharma”.

    References
    https://github.com/mozillasecurity/dharma
    https://github.com/mozillasecurity/quokka
    https://code.google.com/p/dharma/

    Aaron ThornburghNot Working in Whistler

    A brief retrospective
    Whistler Rock - Shot with my Moto X 2nd Gen

    I just returned from Whistler, BC, along with 1300 other Mozillians.

    Because we’re a global organization, it’s important to get everyone together every now and then. We matched faces with IRC channels, talked shop over finger-food, and went hiking with random colleagues. Important people gave speeches. Teams aligned. There was a massive party at the top of a mountain.

    +++++

    Typically, I learn the most from experiences like these only after they have ended. Now that I’ve had 48 hours to process (read: recover), some themes have emerged…

    1. Organizing and mobilizing a bunch of smart, talented, opinionated people is hard work.
    2. It’s easy to say “you’re doing it wrong.” It takes courage to ask “how could we do it better?”
    3. Anyone can find short-term gains. The best leaders define a long-term vision, then enable individuals to define their own contribution.
    4. Success is always relative, and only for a moment in time.
    5. Accelerated change requires rapid adaptation. Being small is our advantage.
    6. The market is what you make it. So make it better (for everyone).
    7. It’s been said that when technology works, it’s magic. I disagree. Magic is watching passionate people – from all over the world – create technology together.

    Also, it has now been confirmed by repeatable experiment: I’m not a people person – especially when lots of them are gathered in small spaces. I’m fucking exhausted.

    Officially signing back on,

    -DCROBOT


    About:CommunityRecap of Participation at Whistler

    [View the story “Participation at Whistler” on Storify: http://storify.com/lucyeharris/participation-at-whistler]

    Bogomil ShopovVoice-controlled UI plus a TTS engine.

    A few days ago I attended the launch of SohoXI – Hook and Loop’s latest product that aims to change the look and feel of enterprise applications forever. I am close to believing that, and not only because I work for the same company as they do. They have delivered an extremely revolutionary UI in … Continue reading Voice-controlled UI plus a TTS engine.

    Dave HuntWhistler Work Week 2015

    Last week was Mozilla’s first work week of 2015 in Whistler, BC. It was my first visit to Whistler, having joined Mozilla shortly after their last summit there in 2010, and it was everything I needed it to be. Despite currently feeling jetlagged, I have been recharged, I have renewed enthusiasm for the mission, and I’m more than a little excited about Firefox OS again! I’d like to share a few of my highlights from last week…

    • S’mores – The dinner on Wednesday evening was followed by a street party, where I had my first S’more. It was awesome!
    • Firefox OS – Refreshing honesty over past mistakes and a coherent vision for the future has actually made me enthusiastic about this project again. I’m no longer working directly on Firefox OS, but I’ve signed up for the dogfoxfooding program and I’m excited about making a difference again.
    • LEGO – I got to build a LEGO duck, and we heard from David Robertson about the lessons LEGO learned from near bankruptcy.
    • Milkshake – A Firefox Q&A was made infinitely better by taking a spontaneous walk to Cow’s for milkshakes and ice cream with my new team!
    • Running – I got to run with #running friends old and new on Tuesday morning around Lost Lake. Then on Thursday morning I headed back and took on the trails with Matt. These were my first runs since my marathon, and running through the beautiful scenery was exactly what I needed to get me back into it.
    • Istanbul – After dinner on Tuesday night, Stephen and I sat down with Bob to play the board game Istanbul.
    • Hacking – It’s always hard to get actual code written during these team events, but I’m pleased to say we thought through some challenging problems, and actually even managed to land some code.
    • Hike – On Friday morning I joined Justin and Matt on a short hike up Whistler mountain. We didn’t have long before breakfast, but it was great to spend more time with these guys.
    • Whistler Mountain – The final party was at the top of Whistler Mountain, which was just breathtaking. I can’t possibly do the experience justice – so I’m not even going to try.

    Thank you Whistler for putting up with a thousand Mozillians, and thank you Mozilla for organising such a perfect week. We’re going to keep rocking the free web!

    Yunier José Sosa VázquezcuentaFox 3.1.2 available with long-awaited features

    Faster than we expected, a new version of cuentaFox is already here, with interesting features that users had been asking for for some time.

    Install cuentaFox 3.1.2

    With this release we no longer have to worry about “the certificate isn’t added and x tabs open on me”: the UCICA.pem certificate is now added automatically with its required trust levels. This is undoubtedly a big step forward, as it takes a big weight off our shoulders.

    Over time some people forget to update the extension and keep using old versions that contain bugs. For this reason, we have decided to alert users when new versions are available by showing a notification and opening the update URL in a new tab.

    v3.1.2-alerta-de-actualizacion

    Our goal is for this process to happen transparently for the user, but we cannot do that yet.

    Rounding out the list of new features:

    • When an error or quota alert is shown, it now displays the user that generated it (#21).
    • Fixed: the real user behind an error was not shown when fetching data for several users (#18).
    • If several users are stored, the toolbar button shows the quota usage of the user who is logged in (#19).
    • If an error occurs while fetching a user’s data, we always try to remove their data from Firefox’s password manager (#26).
    • Updated jQuery to v2.1.4.

    Last but not least: in the add-on’s configuration options (not the interface’s), you can decide whether to show one or more users at the same time and choose to hide used quotas.

    v3.1.2-opciones-de-configuracion

    Let this article serve as a thank-you to everyone who has come by the project page on GitLab and left us their ideas or the bugs they found. We hope many more will join in.

    If you would like to help develop the add-on, you can go to GitLab (UCI) and clone the project or leave a suggestion. Maybe you’ll find one that motivates you.

    Install cuentaFox 3.1.2

    John O'Duinn“We are ALL Remoties” (Jun2015 edition)

    Since my last post on “remoties”, I’ve done several more presentations, some more consulting work for private companies, and even started writing this down more explicitly (exciting news coming here soon!). While I am always refining these slides, this latest version is the first major “refactor” of this presentation in a long time. I think this restructuring makes the slides even easier to follow – there’s a lot of material to cover here, so this is always high on my mind.

    Without further ado – you can get the latest version of these slides, in handout PDF format, by clicking on the thumbnail image.

    Certainly, the great responses and enthusiastic discussions every time I go through this encourage me to keep working on it. As always, if you have any questions, suggestions or good/bad stories about working remotely or as part of a geo-distributed team, please let me know (either by email or in the comments below) – I’d love to hear them.

    Thanks
    John.

    Cameron KaiserHello, 38 beta, it's nice to meet you

    And at long last the 38 beta is very happy to meet you too (release notes, downloads, hashes). Over the next few weeks I hope you and the new TenFourFox 38 will become very fond of each other, and if you don't, then phbbbbt.

    There are many internal improvements to 38. The biggest one specific to us, of course, is the new IonPower JavaScript JIT backend. I've invested literally months in making TenFourFox's JavaScript the fastest available for any PowerPC-based computer on any platform, not just because every day websites lard up on more and more crap we have to swim through (viva Gopherspace) but also because a substantial part of the browser is written in JavaScript: the chrome, much of the mid-level plumbing and just about all those addons you love to download and stuff on in there. You speed up JavaScript, you speed up all those things. So now we've sped up many browser operations by about 11 times over 31.x -- obviously the speed of JavaScript is not the only determinant of browser speed, but it's a big part of it, and I think you'll agree that responsiveness is much improved.

    JavaScript also benefits in 38 from a compacting, generational garbage collector (generational garbage collection was supposed to make 31 but was turned off at the last minute). This means recently spawned objects will typically be helplessly slaughtered in their tender youth in a spasm of murderous efficiency based on the empiric observation that many objects are created for brief usage and then never used again, reducing the work that the next-stage incremental garbage collector (which we spent a substantial amount of time tuning in 31 as you'll recall, including backing out background finalization and tweaking the timeslice for our slower systems) has to do for objects that survive this pediatric genocide. The garbage collector in 38 goes one step further and compacts the heap as well, which is to say, it moves surviving objects together contiguously in memory instead of leaving gaps that cannot be effectively filled. This makes both object cleanup and creation much quicker in JavaScript, which relies heavily on the garbage collector (the rest of the browser uses more simplistic reference counting to determine object lifetime), to say nothing of a substantial savings in memory usage: on my Quad G5 I'm seeing about 200MB less overhead with 48 tabs open.

    I also spent some time working on font enumeration performance because of an early showstopper where sites that loaded WOFF fonts spun and spun and spun. After several days of tearing my hair out in clumps the problem turned out to be a glitch in reference counting caused by the unusual way we load platform fonts: since Firefox went 10.6+ it uses CoreText exclusively, but we use almost completely different font code based on the old Apple Type Services which is the only workable choice on 10.4 and the most stable choice on 10.5. ATS is not very fast at instantiating lots of fonts, to say the least, so I made the user font cache stickier (please don't read that as "leaky" -- it's sticky because things do get cleaned up, but less aggressively to improve cache hit percentage) and also made a global font cache where the font's attribute tag directory is cached browser-wide to speed up loading font tables from local fonts on your hard disk. Previously this directory was cached per font entry, meaning if the font entry was purged for re-enumeration it had to be loaded all over again, which usually happened when the browser was hunting for a font with a particular character. This process used to take about fifteen to twenty seconds for the 700+ font faces on my G5. With the global font cache it now takes less than two.

    Speaking of showstoppers, here's an interesting one which I'll note here for posterity. nsChildView, the underlying system view which connects Cocoa/Carbon to Gecko, implements the NSTextInput protocol which allows it to accept Unicode input without (as much) mucking about with the Carbon Text Services Manager (Firefox also implements NSTextInputClient, which is the new superset protocol, but this doesn't exist in 10.4). To accept Unicode input, under the hood the operating system actually manipulates a special undocumented TSM input context called, surprisingly, NSTSMInputContext (both this and the undocumented NSInputContext became the documented NSTextInputContext in 10.6), and it gets this object from a previously undocumented method on NSView called (surprise again) inputContext. Well, turns out if you override this method you can potentially cause all sorts of problems, and Mozilla had done just that to handle complex text input for plugins. Under the 10.4 SDK, however, their code ended up returning a null input context and Unicode input didn't work, so since we don't support plugins anyhow the solution was just to remove it completely ... which took several days more to figure out. The moral of the story is, if you have an NSView that is not responding to setMarkedText or other text input protocol methods, make sure you haven't overridden inputContext or screwed it up somehow.

    I also did some trivial tuning to the libffi glue library to improve the speed of its calls and force it to obey our compiler settings (there was a moment of panic when the 7450 build did not start on the test machines because dyld said XUL was a 970 binary -- libffi had seen it was being built on a G5 and "helpfully" compiled it for that target), backed out some portions of browser chrome that were converted to CoreUI (not supported on 10.4), and patched out the new tab tile page entirely; all new tabs are now blank, like they used to be in previous versions of Firefox and as intended by God Himself. There are also the usual cross-platform HTML5 and CSS improvements you get when we leap from ESR to ESR like this, and graphics are now composited off-main-thread to improve display performance on multiprocessor systems.

    That concludes most of the back office stuff. What about user facing improvements? Well, besides the new blank tabs "feature," we have built-in PDF viewing as promised (I think you'll find this more useful to preview documents and load them into a quicker viewer to actually read them, but it's still very convenient) and Reader View as the biggest changes. Reader View, when the browser believes it can attempt it, appears in the address bar as a little book icon. Click on it and the page will render in a simplified view like you would get from a tool such as Readability, cutting out much of the extraneous formatting. This is a real godsend on slower computers, lemme tell ya! Click the icon again to go back. Certain pages don't work with this, but many will. I have also dragged forward my MP3 decoder support, but see below first, and we have prospectively landed Mozilla bug 1151345 to fix an issue with the application menu (modified for the 10.4 SDK).

    You will also note the new, in-content preferences (i.e., preferences appears in a browser tab now instead of a window, a la, natch, Chrome), and that the default search engine is now Yahoo!. I have not made this default to anything else since we can still do our part this way to support MoCo (but you can change it from the preferences, of course).

    I am not aware of any remaining showstopper bugs, so therefore I'm going ahead with the beta. However, there are some known issues ("bugs" or "features" mayhaps?) which are not critical. None of these will hold up final release currently, but for your information, here they are:

    • If you turn on the title bar, private browsing windows have the traffic light buttons in the wrong position. They work; they just look weird. This is somewhat different than issue 247 and probably has a separate, though possibly related, underlying cause. Since this is purely cosmetic and does not occur in the browser's default configuration, we can ship with this bug present but I'll still try to fix it since it's fugly (plus, I personally usually have the title bar on).

    • MP3 support is still not enabled by default because seeking within a track (except to the beginning) does not work yet. This is the last thing to do to get this support off the ground. If you want to play with it in its current state, however, set tenfourfox.mp3.enabled to true (you will need to create this pref). If I don't get this done by 38.0.2, the pref will stay off until I do, but the rest of it is working already and I have a good idea how to get this last piece functional.

    • I'm not sure whether to call this a bug or a feature, but scaling now uses a quick and dirty algorithm for many images and some non-.ico favicons, apparently because we don't have Skia support. It's definitely lower quality, but it has a lot less latency. Images displayed by themselves still use the high-quality built-in scaler, which is not really amenable to the other uses as far as I can tell. Your call on which is better, though I'm not sure I know how to go back to the old method or if it's even possible anymore.

    • To reduce memory pressure, 31 had closed tab and window undos substantially reduced. I have not done that yet for 38 -- near as I can determine, the more efficient memory management means it is no longer worth it, so we're back to the default 10 and 3. See what you think.

    Builders: take note that you will need to install a modified strip ("strip7") if you intend to make release binaries due to what is apparently a code generation bug in gcc 4.6. If you want to use a different (later) compiler, you should remove the single changeset with the gcc 4.6 compatibility shims -- in the current changeset pack it's numbered 260681, but this number increments in later versions. See our new HowToBuildRightNow38 for the gory details and where to get strip7.

    Localizers: strings are frozen, so start your language pack engines one more time in issue 42. We'd like to get the same language set for 38 that we had for 31, and your help makes it possible. Thank you!

    As I mentioned before, it's probably 70-30 against there being a source parity version after 38ESR because of the looming threat of Electrolysis, which will not work as-is on 10.4 and is not likely to perform well or even correctly on our older systems. (If Firefox 45, the next scheduled ESR, still allows single process operation then there's a better chance. We still need to get a new toolchain up and a few other things, though, so it won't be a trivial undertaking.) But I'm pleased with 38 so far and if we must go it means we go out on a high note, and nothing says we can't keep improving the browser ourselves separate from Mozilla after we split apart (feature parity). Remember, that's exactly what Classilla does, except that we're much more advanced than Classilla will ever be, and in fact Pale Moon recently announced they're doing the same thing. So if 38 turns out to be our swan song as a full-blooded Mozilla tier 3 port, that doesn't mean it's the end of TenFourFox as a browser. I promise! Meanwhile, let's celebrate another year of updates! PowerPC forever!

    Finally, looking around the Power Mac enthusiast world, it appears that SeaMonkeyPPC has breathed its last -- there have been no updates in over a year. We will pour one out for them. On the other hand, Leopard Webkit continues with regular updates from Tobias, and our friendly builder in the land of the Rising Sun has been keeping up with Tenfourbird. We have the utmost confidence that there will be a Tenfourbird 38 in your hands soon as well.

    Some new toys to play with are next up in a couple days.

    This Week In RustThis Week in Rust 85

    Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

    This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

    From the Blogosphere

    Tips & Tricks

    In the News

    New Releases & Project Updates

    • rust-timsort. Rust implementation of the modified MergeSort used in Python and Java.
    • trust. Rust automated test runner.
    • mongo-rust-driver. Mongo Rust driver built on top of the Mongo C driver.
    • rust-ffi-omnibus. A collection of examples of using code written in Rust from other languages.
    • hyper is now at v0.6. An HTTP/S library for Rust.
    • rust-throw. A new experimental rust error handling library, meant to assist and build on existing error handling systems.
    • burrito. A monadic IO interface in Rust.
    • mimty. Fast, safe, self-contained MIME Type Identification for C and Rust.

    What's cooking on master?

    95 pull requests were merged in the last week.

    Breaking Changes

    Now you can follow breaking changes as they happen!

    Other Changes

    New Contributors

    • Andy Grover
    • Brody Holden
    • Christian Persson
    • Cruz Julian Bishop
    • Dirkjan Ochtman
    • Gulshan Singh
    • Jake Hickey
    • Makoto Kato
    • Yongqian Li

    Final Comment Period

    Every week the teams announce a 'final comment period' for RFCs which are reaching a decision. Express your opinions now. This week's RFCs entering FCP are:

    New RFCs

    Upcoming Events

    If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

    Marco BonardoAdd-ons devs heads-up: History API removals in Places

    Alex ClarkPillow 2-9-0 Is Almost Out

    Pillow 2.9.0 will be released on July 1, 2015.

    Pre-release

    Please help the Pillow Fighters prepare for the Pillow 2.9.0 release by downloading and testing this pre-release:

    Report issues

    As you might expect, we'd like to avoid the creation of a 2.9.1 release within 24-48 hours of 2.9.0 due to any unforeseen circumstances. If you suspect such an issue to exist in 2.9.0.dev2, please let us know:

    Thank you!

    John O'DuinnMozilla’s Release Engineering now on Dr Dobbs!

    book cover

    Long time readers of this blog will remember when The Architecture of Open Source Applications (vol2) was published, containing a chapter describing the tools and mindsets used when re-building Mozilla’s Release Engineering infrastructure. (More details about the book, about the kindle and nook versions, and about the Russian version(!).)

    Dr Dobbs recently posted an article here which is an edited version of the Mozilla Release Engineering chapter. As a long time fan of Dr Dobbs, seeing this was quite an honor, even with the sad news here.

    Obviously, Mozilla’s release automation continues to evolve, as new product requirements arise, or new tools help further streamline things. There is still lots of interesting work being done here – for me, top of mind is Task Cluster, and ScriptHarness (v0.1.0 and v0.2.0). Release Engineering at scale is both complex, and yet very interesting – so you should keep watching these sites for more details, and consider if they would also help in your current environment. As they are all open source, you can of course join in and help!

    For today, I just re-read the Dr. Dobbs article with a fresh cup of coffee, and remembered the various different struggles we went through as we scaled Mozilla’s infrastructure up so we could quickly grow the company, and the community. And then in the middle of it all, found time with armenzg, catlee and lsblakk to write about it all. While some of the technical tools have changed since the chapter was written, and some will doubtless change again in the future, the needs of the business, the company and the community still resonate.

    For anyone doing Release Engineering at scale, the article is well worth a quiet read.

    Roberto A. VitilloA glance at unified FHR/Telemetry

    Lots is changing in Telemetry land. If you do occasionally run data analyses with our Spark infrastructure you might want to keep reading.

    Background

    The Telemetry and FHR collection systems on desktop are in the process of being unified. Both systems will be sending their data through a common data pipeline which has some features of both the current Telemetry pipeline as well as the Cloud Services one that we use to ingest server logs.

    The goals of the unification are to:

    • avoid measuring the same metric in multiple systems on the client side;
    • reduce the latency from the time a measurement occurs until it can be analyzed on the server;
    • increase the accuracy of measurements so that they can be better correlated with factors in the user environment such as the specific build, enabled add-ons, and other hardware or software characteristics;
    • use a common data pipeline for client telemetry and service log data.

    The unified pipeline is currently sending data for Nightly, Aurora and Beta. The classic FHR and Telemetry pipelines are going to keep sending data at the very least until the new unified pipeline has been fully validated. The plan is to land this feature in the Firefox 40 release. We’ll also continue to respect existing user preferences. If the user has opted out of FHR or Telemetry, we’ll continue to respect that for the equivalent data sets. Similarly, the opt-out and opt-in defaults will remain the same for equivalent data sets.

    Data format

    A Telemetry ping, stored as JSON object on the client, encapsulates the data sent to our backend. The main differences between the new unified Telemetry ping format (v4) and the classic Telemetry one (v2) are that:

    • multiple ping types are supported beyond the classic saved-session ping, like the main ping;
    • pings have a common top-level which contains basic information shared between types, like build-id and channel;
    • pings have an optional environment field which consists of data that is expected to be characteristic for performance and other behavior.

    From an analysis point of view, the most important addition is the main ping which includes the very same histograms and other performance and diagnostic data as the v2 saved-session pings. Unlike in “classic” Telemetry though, there can be multiple main pings during a single session. A main ping is triggered by different scenarios, which are documented by the reason field:

    • aborted-session: periodically saved to disk and deleted at shutdown – if a previous aborted session ping is found at startup it gets sent to our backend;
    • environment-change: generated when the environment changes;
    • shutdown: triggered when the browser session ends;
    • daily: a session split triggered at 24-hour intervals at local midnight; this is needed to make sure we also keep receiving data from clients that have very long sessions.

    Data access through Spark

    Once you connect to a Spark enabled IPython notebook launched from our self-service dashboard, you will be prompted with a new tutorial based on the v4 dataset. The v4 data is fetched through the get_pings function by passing “v4″ as the schema parameter. The following parameters are valid for the new data format:

    • app: an application name, e.g. “Firefox”;
    • channel: a channel name, e.g. “nightly”;
    • version: the application version, e.g. “40.0a1”;
    • build_id: a build id or a range of build ids, e.g. “20150601000000” or (“20150601000000”, “20150610999999”);
    • submission_date: a submission date or a range of submission dates, e.g. “20150601” or (“20150601”, “20150610”);
    • doc_type: the ping type, e.g. “main”; set to “saved_session” by default;
    • fraction: the fraction of pings to return; set to 1.0 by default.

    Once you have an RDD, you can further filter the pings down by reason, as in the sketch below. There is also a new experimental API that returns the history of submissions for a subset of profiles, which can be used for longitudinal analyses.
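
    As a rough sketch (assuming the notebook’s SparkContext sc, the moztelemetry helper that the self-service notebooks provide, and the parameter names listed above), fetching and filtering v4 pings might look like this:

        # Sketch only: the exact helper import and the ping layout may differ
        # in your notebook environment.
        from moztelemetry import get_pings

        pings = get_pings(sc,
                          schema="v4",
                          app="Firefox",
                          channel="nightly",
                          version="40.0a1",
                          build_id=("20150601000000", "20150610999999"),
                          submission_date=("20150601", "20150610"),
                          doc_type="main",
                          fraction=0.1)

        # Filter the RDD down to a single reason, e.g. pings generated at shutdown.
        shutdown_pings = pings.filter(
            lambda p: p.get("payload", {}).get("info", {}).get("reason") == "shutdown")
        print(shutdown_pings.count())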


    Zack WeinbergGoogle Voice Search and the Appearance of Trustworthiness

    Last week there were several bug reports [1] [2] [3] about how Chrome (the web browser), even in its fully-open-source Chromium incarnation, downloads a closed-source, binary extension from Google’s servers and installs it, without telling you it has done this, and moreover this extension appears to listen to your computer’s microphone all the time, again without telling you about it. This got picked up by the trade press [4] [5] [6] and we rapidly had a full-on Internet panic going.

    If you dig into the bug reports and/or the open source part of the code involved, which I have done, it turns out that what Chrome is doing is not nearly as bad as it looks. It does download a closed-source binary extension from Google, install it, and hide it from you in the list of installed extensions (technically there are two hidden extensions involved, only one of which is closed-source, but that’s only a detail of how it’s all put together). However, it does not activate this extension unless you turn on the voice search checkbox in the settings panel, and this checkbox has always (as far as I can tell) been off by default. The extension is labeled, accurately, as having the ability to listen to your computer’s microphone all the time, but of course it does not get to do this until it is activated.

    As best anyone can tell without access to the source, what the closed-source extension actually does when it’s activated is monitor your microphone for the code phrase OK Google. When it detects this phrase it transmits the next few words spoken to Google’s servers, which convert it to text and conduct a search for the phrase. This is exactly how one would expect a voice search feature to behave. In particular, a voice-activated feature intrinsically has to listen to sound all the time, otherwise how could it know that you have spoken the magic words? And it makes sense to do the magic word detection with code running on the local computer, strictly as a matter of efficiency. There is even a non-bogus business reason why the detector is closed source; speech recognition is still in the land where tiny improvements lead to measurable competitive advantage.

    So: this feature is not actually a massive privacy violation. However, Google could and should have put more care into making this not appear to be a massive privacy violation. They wouldn’t have had mud thrown at them by the trade press about it, and the general public wouldn’t have had to worry about it. Everyone wins. I will now dissect exactly what was done wrong and how it could have been done better.

    It was a diagnostic report, intended for use by developers of the feature, that gave people the impression the extension was listening to the microphone all the time. Below is a screen shot of this diagnostic report (click for full width). You can see it on your own copy of Chrome by typing chrome://voicesearch into the URL bar; details will probably differ a little (especially if you’re not using a Mac).

    Screen shot of Google Voice Search diagnostic report, taken on Chrome 43 running on MacOS X. The most important lines of text are ‘Microphone: Yes’, ‘Audio Capture Allowed: Yes’, ‘Hotword Search Enabled: No’, and ‘Extension State: ENABLED’.

    Google’s first mistake was not having anyone check this over for what it sounds like it means to someone who isn’t familiar with the code. It is very well known that when faced with a display like this, people who aren’t familiar with the code will pick out whatever bits they think they understand and ignore everything else, even if that means they completely misunderstand it. [7] In this case, people see Microphone: Yes and Audio Capture Allowed: Yes and maybe also Extension State: ENABLED and assume that this means the extension is actively listening right now. (What the developers know it means is this computer has a microphone, the extension could listen to it if it had been activated, and it’s connected itself to the checkbox in the preferences so it can be activated. And it’s hard for them to realize that anyone could think it would mean something else.)

    They didn’t have anyone check it because they thought, well, who’s going to look at this who isn’t a developer? Thing is, it only takes one person to look at it, decide it looks hinky, mention it online, and now you have a media circus on your hands. Obscurity is no excuse for not doing a UX review.

    Now, mistake number two becomes evident when you consider what this screen ought to say in order not to scare people who haven’t turned the feature on (and maybe this is the first they’ve heard of it even): something like

    Voice Search is inactive.

    (A couple of sentences about what Voice Search is and why you might want it.) To activate Voice Search, go to the preferences screen and check the box.

    It would also be okay to have a duplicate checkbox right there on this screen, and to have all the same debugging information show up after you check the box. But wait—how do developers diagnose problems with downloading the extension, which happens before the box has been checked? And that’s mistake number two. The extension should not be downloaded until the box is checked. I am not aware of any technical reason why that couldn’t have been the way it worked in the first place, and it would go a long way to reassure people that this closed-source extension can’t listen to them unless they want it to. Note that even if the extension were open source it might still be a live question whether it does anything hinky. There’s an excellent chance that it’s a generic machine recognition algorithm that’s been trained to detect OK Google, which training appears in the code as a big lump of meaningless numbers—and there’s no way to know whether those numbers train it to detect anything besides OK Google. Maybe if you start talking about bombs the computer just quietly starts recording…

    Mistake number three, finally, is something they got half-right. This is not a core browser feature. Indeed, it’s hard for me to imagine any situation where I would want this feature on a desktop computer. Hands-free operation of a mobile device, sure, but if my hands are already on a keyboard, that’s faster and less bothersome for other people in the room. So, Google implemented this frill as a browser extension—but then they didn’t expose that in the user interface. It should be an extension, and it should be visible as such. Then it needn’t take up space in the core preferences screen, even. If people want it they can get it from the Chrome extension repository like any other extension. And that would give Google valuable data on how many people actually use this feature and whether it’s worth continuing to develop.

    Chris CooperReleng & Relops weekly highlights - June 26, 2015

    Friday, foxyeah!

    It’s been a very busy and successful work week here in beautiful Whistler, BC. People are taking advantage of being in the same location to meet, plan, hack, and socialize. A special thanks to Jordan for inviting us to his place in beautiful Squamish for a BBQ!

    (Note: No release engineering folks were harmed by bears in the making of this work week.)

    tl;dr

    Whistler: Keynotes were given by our exec team and we learned we’re focusing on quality, dating our users to get to know them better, and that WE’RE GOING TO SPACE!! We also discovered that at LEGO, Everything is Awesome now that they’re thinking around the box instead of inside or outside of it. Laura’s GoFaster project sounds really exciting, and we got a shoutout from her on the way we manage the complexity of our systems. There should be internal videos of the keynotes up next week if you missed them.

    Internally, we talked about Q3 planning and goals, met with our new VP, David, met with our CEO, Chris, presented some lightning talks, and did a bunch of cross-group planning/hacking. Dustin, Kim, and Morgan talked to folks at our booth at the Science Fair. We had a cool banner and some cards (printed by Dustin) that we could hand out to tell people about try. SHIP IT!

    Taskcluster: Great news; the TaskCluster team is joining us in Platform! There was lots of evangelism about TaskCluster and interest from a number of groups. There were some good discussions about operationalizing taskcluster as we move towards using it for Firefox automation in production. Pete also demoed the Generic Worker!

    Puppetized Windows in AWS: Rob got the nxlog puppet module done. Mark is working on hg and NSIS puppet modules in lieu of upgrading to MozillaBuild 2.0. Jake is working on the metric-collective module. The windows folks met to discuss the future of windows package management. Q is finishing up the performance comparison testing in AWS. Morgan, Mark, and Q deployed runner to all of the try Windows hosts and one of the build hosts.

    Operational: Amy has been working on some additional nagios checks. Ben, Rail, and Nick met and came up with a solid plan for release promotion. Rail and Nick worked on releasing Firefox 39 and two versions of Firefox ESR. Hal spent much of the week working with IT. Dustin and catlee got some work done on migrating treestatus to relengapi. Hal, Nick, Chris, and folks from IT, sheriffs, dev-services debugged problems with b2g jobs. Callek deployed a new version of slaveapi. Kim, Jordan, Chris, and Ryan worked on a plan for addons. Kim worked with some new buildduty folks to bring them up to speed on operational procedures.

    Thank you all, and have a safe trip home!

    And here are all the details:

    Taskcluster

    • We got to spend some quality time with our new TaskCluster teammates, Greg, Jonas, Wander, Pete, and John. We’re all looking forward to working together more closely.
    • Morgan convinced lots of folks that Taskcluster is super amazing, and now we have a lot of people excited to start hacking on it and moving their workloads to it.
    • We put together a roadmap for TaskCluster in Trello and identified the blockers to turning Buildbot Scheduling off.

    Puppetized Windows in AWS

    • Rob has pushed out the nxlog puppet module to get nxlog working in scl3 (bug 1146324). He has a follow-on bug to modify the ec2config file for AWS to reset the log-aggregator host so that we’re aggregating to the local region instead of where we instantiate the instance (like we do with linux). This will ensure we have Windows system logs in AWS (bug 1177577).
    • The new version of MozillaBuild was released, and our plan was to upgrade to that on Windows (bug 1176111). An attempt at that showed that the way hg was compiled requires an external dll (likely something from cygwin), and needs to be run from bash. Since this would require significant changes, we’re going to install the old version of MozillaBuild and put upgrades of hg (bug 1177740) and NSIS on top of that (like we’re doing with GPO now). Future work will include splitting out all the packages and not using MozillaBuild. Jake is working on the puppet module for metric-collective, our host-level stats gathering software for windows (similar to collectd on linux/OS X). This will give us Windows system metrics in graphite in AWS (bug 1097356).
    • We met to talk about Windows packaging and how to best integrate with puppet. Rob is starting to investigate using NuGet and Chocolatey to handle this (bugs 1175133 and 1175107).
    • Q spun up some additional instance types in AWS and is in the process of getting some more data for Windows performance after the network modifications we made earlier (bug 1159384).
    • Jordan added a new puppetized path for all windows jobs, fixing a problem we were seeing with failing sendchanges on puppetized machines (bug 1175701).
    • Morgan, Mark, and Q deployed runner to all of the try Windows hosts (bug 1055794).

    Operational

    • The relops team met to perform a triage of their two bugzilla queues and closed almost 20% of the open bugs as either already done or wontfix based on changes in direction.
    • Amy has been working on some additional nagios checks for some Windows services and for AWS subnets filling up (bugs 1164441 and 793293).
    • Ben, Rail, and Nick met and came up with a solid plan for the future of release promotion.
    • Rail and Nick worked on getting Firefox 39 (and the related ESR releases) out to our end users.
    • Hal spent lots of time working with IT and the MOC, improving our relationships and workflow.
    • Dustin and catlee did some hacking to start the porting of treestatus to relengapi (one of the blockers to moving us out of PHX1).
    • Hal, Nick, Chris, and folks from IT, sheriffs, dev-services tracked down an intermittent problem with the repo-tool impacting only b2g jobs (bug 1177190).
    • Callek deployed the new version of slaveapi to support slave loans using the AWS API (bug 1177932).
    • Kim, Jordan, Chris, and Ryan discussed the initial steps for future addon support.
    • Coop (hey, that’s me) held down the buildduty fort while everyone else was in Whistler.

    See you next week!

    Cameron Kaiser31.8.0 available (say goodbye)

    31.8.0 is available, the last release for the 31 series (release notes, downloads, hashes). Download it and give it one last spin. 31 wasn't a high water mark for us in terms of features or performance, but it was pretty stable and did the job, so give it a salute as it rides into the sunset. It finalizes Monday PM Pacific time as usual.

    I'm trying very hard to get you the 38.0.1 beta by sometime next week, probably over the July 4th weekend assuming the local pyros don't burn my house down with errant illegal fireworks, but I keep hitting showstoppers while trying to dogfood it. First it was fonts and then it was Unicode input, and then the newtab crap got unstuck again, and then the G5 build worked but the 7450 build didn't, and then, and then, and then. I'm still working on the last couple of these major bugs and then I've got some additional systems to test on before I introduce them to you. There are a couple minor bugs that I won't fix before the beta because we need enough time for the localizers to do their jobs, and MP3 support is present but is still not finished, but there will be a second beta that should address most of these problems prior to our launch with 38.0.2. Be warned of two changes right away: no more tiles in the new tab page (I never liked them anyway, but they require Electrolysis now, so that's a no-no), and Check for Updates is now moved to the Help menu, congruent with regular Firefox, since keeping it in its old location now requires substantial extra code that is no longer worth it. If you can't deal with these changes, I will hurt you very slowly.

    Features that did not make the cut: Firefox Hello and Pocket, and the Cisco H.264 integration. Hello and Pocket are not in the ESR, and I wouldn't support them anyway; Hello needs WebRTC, which we still don't really support, and you can count me in with the people who don't like a major built-in browser component depending exclusively on a third-party service (Pocket). As for the Cisco integration, there will never be a build of those components for Tiger PowerPC, so there. Features that did make the cut, though, are pdf.js and Reader View. Although PDF viewing is obviously pokier compared to Preview.app, it's still very convenient, generally works well enough now that we have IonPower backing it, and is much safer. Reader View, on the other hand, works very well on our old systems. You'll really like it especially on a G3 because it cuts out a lot of junk.

    After that there are two toys you'll get to play with before 38.0.2 since I hope to introduce them widely with the 38 launch. More on that after the beta, but I'll whet your appetite a little: although the MacTubes Enabler is now officially retired, since as expected the MacTubes maintainer has thrown in the towel, thanks to these projects the MTE has not one but two potential successors, and one of them has other potential applications. (The QuickTime Enabler soldiers on, of course.)

    Last but not least, I have decided to move the issues list and the wiki from Google Code to Github, and leave downloads with SourceForge. That transition will occur sometime late July before Google Code goes read-only on August 24th. (Classilla has already done this invisibly but I need to work on a stele so that 9.3.4 will be able to use Github effectively.) In the meantime, I have already publicly called Google a bunch of meaniepants and poopieheads for their shameful handling of what used to be a great service, so my work here is done.

    Gervase MarkhamPromises: Code vs. Policy

    A software organization wants to make a promise, for example about its data practices: “We don’t store information on your location”. They can keep that promise in two ways: code or policy.

    If they were keeping it in code, they would need to be open source, and would simply make sure the code didn’t transmit location information to the server. Anyone can review the code and confirm that the promise is being kept. (It’s sometimes technically possible for the company to publish source code that does one thing, and binaries which do another, but if that was spotted, there would be major reputational damage.)

    If they were keeping it in policy, they would add “We don’t store information on your location” to their privacy policy or Terms of Service. The documents can be reviewed, but in general you have to trust the company that they are sticking to their word. This is particularly so if the policy states that it does not create a binding obligation on the company. So this is a function of your view of the company’s reputation.

    Geeks like promises kept in code. They can’t be worked around using ambiguities in English, and they can’t be changed without the user’s consent (to a software upgrade). I suspect many geeks think of them as superior to promises kept in policy – “that’s what they _say_, but who knows?”. This impression is reinforced when companies are caught sticking to the letter but not the spirit of their policies.

    But some promises can’t be kept in code. For example, you can’t simply not send the user’s IP address, which normally gives coarse location information, when making a web request. More complex or time-bound promises (“we will only store your information for two weeks”) also require policy by their nature. Policy is also more flexible, and using a policy promise rather than a code promise can speed time-to-market due to reduced software complexity and increased ability to iterate.

    Question: is this distinction, about where to keep your promises, useful when designing new features?

    Question: is it reasonable or misguided for geeks to prefer promises kept in code?

    Question: if Mozilla or its partners are using promises kept in policy for e.g. a web service, how can we increase user confidence that such a policy is being followed?

    Vladan DjericAnnouncing the Content Performance program

    Introduction

    Aaron Klotz, Avi Halachmi and I have been studying Firefox’s performance on Android & Windows over the last few weeks as part of an effort to evaluate Firefox “content performance” and find actionable issues. We’re analyzing and measuring how well Firefox scrolls pages, loads sites, and navigates between pages. At first, we’re focusing on 3 reference sites: Twitter, Facebook, and Yahoo Search.

    We’re trying to find reproducible, meaningful, and common use cases on popular sites which result in noticeable performance problems or where Firefox performs significantly worse than competitors. These use cases will be broken down into tests or profiles, and shared with platform teams for optimization. This “Content Performance” project is part of a larger organizational effort to improve Firefox quality.

    I’ll be regularly posting blog posts with our progress here, but you can also track our efforts on our mailing list and IRC channel:

    Mailing list: https://mail.mozilla.org/listinfo/contentperf
    IRC channel: #contentperf
    Project wiki page: Content_Performance_Program

    Summary of Current Findings (June 18)

    Generally speaking, desktop and mobile Firefox scroll as well as other browsers on reference sites when there is only a single tab loaded in a single window.

    • We compared Firefox vs Chrome and IE:
      • Desktop Firefox scrolling can badly deteriorate when the machine is in power-saver mode [1] (Firefox performance relative to other browsers depends on the site)
      • Heavy activity in background tabs badly affects desktop Firefox’s scrolling performance [1] (much worse than other browsers — we need E10S)
      • Scrolling on infinitely-scrolling pages only appears janky when the page is waiting on additional data to be fetched
    • Inter-page navigation in Firefox can exhibit flicker, similar to other browsers
    • The Firefox UI locks up during page loading, unlike other browsers (need E10S)
    • Scrolling in desktop E10S (with heavy background tab activity) is only as good as the other browsers [1] when Firefox is in the process-per-tab configuration (dom.ipc.processCount >> 1)

    [1] You can see Aaron’s scrolling measurements here: http://bit.ly/1K1ktf2

    Potential scenarios to test next:

    • Check impact of different Firefox configurations on scrolling smoothness:
      • Hardware acceleration disabled
      • Accessibility enabled & disabled
      • Maybe: Multiple monitors with different refresh rate (test separately on Win 8 and Win 10)
      • Maybe: OMTC, D2D, DWrite, display & font scaling enabled vs disabled
        • If we had a Telemetry measurement of scroll performance, it would be easier to determine relevant characteristics
    • Compare Firefox scrolling & page performance on Windows 8 vs Windows 10
      • Compare Firefox vs Edge on Win 10
    • Test other sites in Alexa top 20 and during random browsing
    • Test the various scroll methods on reference sites (Avi has done some of this already): mouse wheel, mouse drag, arrow key, page down, touch screen swipe and drag, touchpad drag, touchpad two finger swipe, trackpoints (special casing for ThinkPads should be re-evaluated).
      • Check impact of pointing device drivers
    • Check performance inside Google web apps (Search, Maps, Docs, Sheets)
      • Examine benefits of Chrome’s network pre-fetcher on Google properties (e.g. Google search)
      • Browse and scroll simple pages when top Google apps are loaded in pinned tabs
    • Compare Firefox page-load & page navigation performance on HTTP/2 sites (Facebook & Twitter, others?)
    • Check whether our cache and pre-connector benefit perceived performance, compare vs competition

    Issues to report to Platform teams

    • Worse Firefox scrolling performance with laptop in power-save mode
    • Scrolling Twitter feed with YouTube HTML5 videos is jankier in Firefox
    • bug 1174899: Scrolling on Facebook profile with many HTML5 videos eventually causes 100% CPU usage on a Necko thread + heavy CPU usage on main thread + the page stops loading additional posts (videos)

    Tooling questions:

    • Find a way to measure when the page is “settled down” after loading, i.e. the time until the last page-loading event. This could be measured by the page itself (similar to Octane), which would allow us to compare different browsers
    • How to reproduce dynamic websites offline?
    • Easiest way to record demos of bad Firefox & Fennec performance vs other browsers?

    Decisions made so far:

    • Exclusively focus on Android 5.0+ and Windows 7, 8.1 & 10
    • Devote the most attention to single-process Nightly on desktop, but do some checks of E10S performance as well
    • Desktop APZC and network pre-fetcher are a long time away, don’t wait

    Doug BelshawWeb Literacy Map v2.0

    I’m delighted to see that development of Mozilla’s Web Literacy Map is still continuing after my departure a few months ago.

    Read, Write, Participate

    Mark Surman, Executive Director of the Mozilla Foundation, wrote a blog post outlining the way forward and a working group has been put together to drive forward further activity. It’s great to see Mark Lesser being used as a bridge to previous iterations.

    Another thing I’m excited to see is the commitment to use Open Badges to credential Web Literacy skills. We tinkered with badges a little last year, but hopefully there’ll be a new impetus around this.

    The approach to take the Web Literacy Map from version 1.5 to version 2.0 is going to be different from the past few years. It’s going to be a ‘task force’ approach with people brought in to lend their expertise rather than a fully open community approach. That’s probably what’s needed at this point.

    I’m going to give myself some space to, as my friend and former colleague Laura Hilliger said, 'disentangle myself’ from the Web Literacy Map and wider Mozilla work. However, I wish them all the best. It’s important work.


    Comments? Questions? I’m @dajbelshaw on Twitter or you can email: mail@dougbelshaw.com

    Planet Mozilla InternsWillie Cheong: Maximum Business Value; Minimum Effort

    enhanced-4875-1433865112-1

    Dineapple is an online food delivery gig that I have been working on recently. In essence, a new food item is introduced periodically, and interested customers place orders online to have their food delivered the next day.

    Getting down to the initial build of the online ordering site, I started to think about the technical whats and hows. For this food delivery service, a customer places an order by making an online payment. The business then needs to know of this transaction, and have it linked to the contact information of the customer.

    Oh okay, easy. Of course I’ll set up a database. I’ll store the order details inside a few tables. Then I’ll build a mini application to extract this information and generate a daily report for the cooks and delivery people to operate on. Then I started to build these things in my head. But wait, there is a simpler way to get the operations people aware of orders. We could just send an email to the people on every successful transaction to notify them of a new incoming order. But this means the business loses visibility and data portability. Scraping for relational data from a bunch of automated emails, although possible, will be a nightmare. The business needs to prepare to scale, and that means analytics.

    Then I saw something that now looks so obvious I feel pretty embarrassed. Payments on the ordering service are processed using Stripe. When the HTTP request to process a payment is made, Stripe provides an option to submit additional metadata that will be tagged to the payment. There is a nice interface on the Stripe site that allows business owners to do some simple analytics on the payment data. There is also the option to export all of that data (and metadata) to CSV for more studying.
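
    For illustration, attaching the order details as metadata when creating the charge looks roughly like this with Stripe’s Python library (the metadata field names here are made up for the example):

        import stripe

        stripe.api_key = "sk_test_..."  # placeholder secret key

        # Card token produced by Stripe.js on the checkout page (placeholder).
        token_from_checkout_form = "tok_..."

        # Tag the payment with everything the operations people need, instead
        # of mirroring the order in a separate database.
        charge = stripe.Charge.create(
            amount=1500,                 # $15.00, in cents
            currency="usd",
            source=token_from_checkout_form,
            description="Dineapple order",
            metadata={
                "customer_name": "Jane Doe",            # hypothetical field names
                "delivery_address": "123 Example St",
                "menu_item": "Thai green curry",
                "delivery_date": "2015-07-02",
            },
        )

    Those metadata fields then show up alongside the payment in Stripe’s dashboard and in the CSV export, which is what makes the built-in reporting usable for operations.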

    Forget about ER diagrams, forget about writing custom applications, forget about using automated emails to generate reports. Stripe is capable of doing the reporting for Dineapple, we just had to see a way to adapt the offering to fit the business’s use case.

    Beyond operations reporting through Stripe, there are so many existing web services out there that can be integrated into Dineapple. Just to name a few, an obvious one would be to use Google Analytics to study site traffic. Customers’ reviews on food and services could (and probably should) be somehow integrated to work using Yelp. Note that none of these outsourced alternatives, although significantly easier to implement, compromise on the quality of the solution for the business. Because at the end of the day, all that really matters is that the business gets what it needs.

    So here’s a reminder to my future self. Spend a little more time looking around for simpler alternatives that you can take advantage of before jumping into development for a custom solution.

    Engineers are builders by instinct, but that isn’t always a good thing.

    Emma Irwin#Mozlove for Tad

    I truly believe, that to make Mozilla a place worth ‘hanging your hat‘, we need to get better at being ‘forces of good for each other’.  I like to think this idea is catching on, but only time will tell.

    This month’s #mozlove post is for Tom Farrow AKA ‘Tad’, a long-time contributor to numerous initiatives across Mozilla.  Although Tad’s contribution focus is in Community Dev Ops, it’s his interest in teaching youth digital literacy that first led to a crossing of our paths. You’ll probably find it interesting to know that despite being in his sixth(!!) year of contribution to Mozilla, Tad is still a high school student in Solihull, Birmingham, UK.

    Tad started contributing to Mozilla after helping a friend install Firefox on their government-issued laptop, which presented some problems. He found help on SUMO, and through being helped was inspired to become a helper and contributor himself.  Tad speaks fondly of starting with SUMO, of finding friends, training and mentorship.

    Originally drawn to IT and DevOps contribution for the opportunity of ‘belonging to something’, Tad has become a fixture in this space, helping design hosting platforms and evolve a multi-tenant WordPress hosting setup. When I asked what was most rewarding about contributing to Community Dev Ops, he shared the pride he takes in innovating a quality solution.

    I’m also increasingly curious about the challenges of participation and asked about this as well.  Tad expressed some frustration around ‘access and finding the right people to unlock resources’.  I think that’s probably something that speaks to the greater challenges for the Mozilla community in understanding pathways for support.

    Finally my favorite question:  “How do your friends and family relate to your volunteer efforts? Is it easy or hard to explain volunteering at Mozilla?”.

    I don’t really try to explain it – my parents get the general idea, and are happy I’m gaining skills in web technology.

    I think it’s very cool that, in a world of ‘learn to code’ merchandising, Tad found his opportunity to learn and grow technical skills through participation at Mozilla :)

    I want to thank Tad for taking the time to chat with me, for being such an amazing contributor, and inspiration to others around the project.

    * I set a reminder in my calendar every month, which this month happens to be during Mozilla’s Work Week in Whistler.  Tad is also in Whistler, make sure you look out for him – and say hello!


    Aki Sasakion configuration

    A few people have suggested I look at other packages for config solutions. I thought I'd record some of my thoughts on the matter. Let's look at requirements first.

    Requirements

    1. Commandline argument support. When running scripts, it's much faster to specify some config via the commandline than always requiring a new config file for each config change.

    2. Default config value support. If a script assumes a value works for most cases, let's make it default, and allow for overriding those values in some way.

    3. Config file support. We need to be able to read in config from a file, and in some cases, several files. Some config values are too long and unwieldy to pass via the commandline, and some config values contain characters that would be interpreted by the shell. Plus, the ability to use diff and version control on these files is invaluable.

    4. Multiple config file type support. json, yaml, etc.

    5. Adding the above three solutions together. The order should be: default config value -> config file -> commandline arguments. (The rightmost value of a configuration item wins; a minimal sketch of this precedence follows this list.)

    6. Config definition and validation. Commandline options are constrained by the options that are defined, but config files can contain any number of arbitrary key/value pairs.

    7. The ability to add groups of commandline arguments together. Sometimes families of scripts need a common set of commandline options, but also need the ability to add script-specific options. Sharing the common set allows for consistency.

    8. The ability to add config definitions together. Sometimes families of scripts need a common set of config items, but also need the ability to add script-specific config items.

    9. Locking and/or logging any changes to the config. Changing config during runtime can wreak havoc on the debugability of a script; locking or logging the config helps avoid or mitigate this.

    10. Python 3 support, and python 2.7 unicode support, preferably unicode-by-default.

    11. Standardized solution, preferably non-company and non-language specific.

    12. All-in-one solution, rather than having to use multiple solutions.
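    To make requirement 5 concrete, here is a minimal sketch of that precedence using plain argparse and json; the option names and defaults are illustrative only, not taken from any of the packages discussed below.

        import argparse
        import json

        def build_config(argv=None):
            """Merge defaults, an optional config file, and commandline args.
            The rightmost source wins: defaults -> config file -> commandline."""
            defaults = {"log_level": "info", "work_dir": "."}

            parser = argparse.ArgumentParser()
            parser.add_argument("--config-file", help="path to a JSON config file")
            parser.add_argument("--log-level")
            parser.add_argument("--work-dir")
            args = parser.parse_args(argv)

            config = dict(defaults)
            if args.config_file:
                with open(args.config_file) as config_fh:
                    config.update(json.load(config_fh))
            # The commandline options carry no argparse defaults, so any non-None
            # value was explicitly set and should override the other sources.
            cmdline = {key: value for key, value in vars(args).items()
                       if value is not None and key != "config_file"}
            config.update(cmdline)
            return config

    Because the commandline options have no argparse defaults here, an unset option stays None and never overrides the file or the built-in defaults; putting defaults on the parser itself runs into the argparse limitation described under item 5 in the next section.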

    Packages and standards

    argparse

    Argparse is the standardized python commandline argument parser, which is why configman and scriptharness have wrapped it to add further functionality. Its main drawbacks are lack of config file support and limited validation.

    1. Commandline argument support: yes. That's what it's written for.

    2. Default config value support: yes, for commandline options.

    3. Config file support: no.

    4. multiple config file type support: no.

    5. Adding the above three solutions together: no. The default config value and the commandline arguments are placed in the same Namespace, and you have to use the parser.get_default() method to determine whether it's a default value or an explicitly set commandline option.

    6. Config definition and validation: limited. It only covers commandline option definition+validation; there's the required flag, but no "if foo is set, bar is required" type of validation. It's possible to roll your own, but that would be script-specific rather than part of the standard.

    7. Adding groups of commandline arguments together: yes. You can take multiple parsers and make them parent parsers of a child parser, if the parent parsers have specified add_help=False. (See the sketch after this list.)

    8. Adding config definitions together: limited, as above.

    9. The ability to lock/log changes to the config: no. argparse.Namespace will take changes silently.

    10. Python 3 + python 2.7 unicode support: yes.

    11. Standardized solution: yes, for python. No for other languages.

    12. All-in-one solution: no, for the above limitations.
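    As a minimal sketch of the parent-parser pattern from item 7 (the option names are made up for illustration):

        import argparse

        # Common options shared by a family of scripts; add_help=False is needed
        # so the child parser can define its own -h/--help.
        common = argparse.ArgumentParser(add_help=False)
        common.add_argument("--work-dir", default=".")
        common.add_argument("--log-level", default="info")

        # A script-specific parser that inherits the common options.
        parser = argparse.ArgumentParser(parents=[common])
        parser.add_argument("--upload-url")

        args = parser.parse_args(["--log-level", "debug", "--upload-url", "https://example.com"])
        print(args.work_dir, args.log_level, args.upload_url)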

    configman

    Configman is a tool written to deal with configuration in various forms, and adds the ability to transform configs from one type to another (e.g., commandline to ini file). It also adds the ability to block certain keys from being saved or output. Its argparse implementation is deeper than scriptharness' ConfigTemplate argparse abstraction.

    Its main drawbacks for scriptharness usage appear to be the lack of python 3 + py2-unicode-by-default support, and being yet another non-standardized solution. I've given python3 porting two serious attempts so far, and I've hit a wall on the dotdict __getattr__ hack working differently on python 3. My wip is here if someone else wants a stab at it.

    1. Commandline argument support: yes.

    2. Default config value support: yes.

    3. Config file support: yes.

    4. Multiple config file type support: yes.

    5. Adding the above three solutions together: not as far as I can tell, but since you're left with the ArgumentParser object, I imagine wrapping configman would take the same solution as wrapping argparse.

    6. Config definition and validation: yes.

    7. Adding groups of commandline arguments together: yes.

    8. Adding config definitions together: not sure, but seems plausible.

    9. The ability to lock/log changes to the config: no. configman.namespace.Namespace will take changes silently.

    10. Python 3 support: no. Python 2.7 unicode support: there are enough str() calls that it looks like unicode is a second class citizen at best.

    11. Standardized solution: no.

    12. All-in-one solution: no, for the above limitations.

    docopt

    Docopt simplifies the commandline argument definition and prettifies the help output. However, it's purely a commandline solution, and doesn't support adding groups of commandline options together, so it appears to be oriented towards relatively simple script configuration. It could potentially be combined with json-schema definition and validation, as could the argparse-based commandline solutions, for an all-in-two solution. More on that below.
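    For reference, a docopt-based script defines its options entirely in the module docstring; the script name and options below are hypothetical:

        """A hypothetical report script.

        Usage:
          report.py <crash-id> [--format=<fmt>] [--verbose]
          report.py (-h | --help)

        Options:
          --format=<fmt>  Output format [default: json].
          --verbose       Print progress information.
        """
        from docopt import docopt

        if __name__ == "__main__":
            # docopt parses the usage section above and validates sys.argv against it.
            args = docopt(__doc__)
            print(args)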

    json-schema

    This looks very promising for an overall config definition + validation schema. The main drawback, as far as I can see so far, is the lack of commandline argument support.

    A commandline parser could generate a config object to validate against the schema. (Bonus points for writing a function to validate a parser against the schema before runtime.) However, this would require at least two definitions: one for the schema, one for the hopefully-compliant parser. Alternately, the schema could potentially be extended to support argparse settings for various items, at the expense of full standards compatibility.

    There's already a python jsonschema package.
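    As a small illustration of definition + validation with the jsonschema package (the schema and config values here are made up):

        from jsonschema import ValidationError, validate

        schema = {
            "type": "object",
            "properties": {
                "log_level": {"type": "string", "enum": ["debug", "info", "warning"]},
                "work_dir": {"type": "string"},
            },
            "required": ["work_dir"],
        }

        config = {"log_level": "debug", "work_dir": "/tmp/build"}

        try:
            validate(instance=config, schema=schema)  # raises on a non-compliant config
        except ValidationError as e:
            print("invalid config:", e.message)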

    1. Commandline argument support: no.

    2. Default config value support: yes.

    3. Config file support: I don't think directly, but anything that can be converted to a dict can be validated.

    4. Multiple config file type support: no.

    5. Adding the above three solutions together: no.

    6. Config definition and validation: yes.

    7. Adding groups of commandline arguments together: no.

    8. Adding config definitions together: sure, you can add dicts together via update().

    9. The ability to lock/log changes to the config: no.

    10. Python 3 support: yes. Python 2.7 unicode support: I'd guess yes since it has python3 support.

    11. Standardized solution: yes, even cross-language.

    12. All-in-one solution: no, for the above limitations.

    scriptharness 0.2.0 ConfigTemplate + LoggingDict or ReadOnlyDict

    Scriptharness currently extends argparse and dict for its config. It checks off the most boxes in the requirements list currently. My biggest worry with the ConfigTemplate is that it isn't fully standardized, so people may be hesitant to port all of their configs to it.

    An argparse/json-schema solution with enough glue code in between might be a good solution. I think ConfigTemplate is sufficiently close to that that adding jsonschema support shouldn't be too difficult, so I'm leaning in that direction right now. Configman has some nice behind-the-scenes and cross-file-type support, but the python3 and __getattr__ issues are currently blockers, and it seems like a lateral move in terms of standards.

    An alternate solution may be BYOC (bring your own config). If the scriptharness Script takes a config object that you built from somewhere, and gives you tools that you can choose to use to build that config, that may allow for enough flexibility that people can use their preferred style of configuration in their scripts. The cost of that flexibility is less familiarity across scriptharness scripts.

    1. Commandline argument support: yes.

    2. Default config value support: yes, both through argparse parsers and script initial_config.

    3. Config file support: yes. You can define multiple required config files, and multiple optional config files.

    4. Multiple config file type support: no. Mozharness had .py and .json. Scriptharness currently only supports json because I was a bit iffy about execfile()ing python again, and PyYAML doesn't always install cleanly everywhere. It's on the list to add more formats, though. We probably need at least one dynamic type of config file (e.g. python or yaml) or a config-file builder tool.

    5. Adding the above three solutions together: yes.

    6. Config definition and validation: yes.

    7. Adding groups of commandline arguments together: yes.

    8. Adding config definitions together: yes.

    9. The ability to lock/log changes to the config: yes. By default Scripts use a LoggingDict that logs runtime changes; StrictScript uses a ReadOnlyDict (same as mozharness) that prevents any changes after locking. (A rough sketch of the logging idea follows this list.)

    10. Python 3 and python 2.7 unicode support: yes.

    11. Standardized solution: no. Extended/abstracted argparse + extended python dict.

    12. All-in-one solution: yes.
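    To illustrate the logging half of requirement 9, here is a rough sketch of a dict that logs runtime changes. This is an illustration of the idea only, not scriptharness's actual LoggingDict.

        import logging

        class LoggingDict(dict):
            """Log every runtime change to a config dict (illustration only)."""

            def __setitem__(self, key, value):
                logging.info("config change: %r: %r -> %r", key, self.get(key), value)
                super(LoggingDict, self).__setitem__(key, value)

            def __delitem__(self, key):
                logging.info("config delete: %r (was %r)", key, self.get(key))
                super(LoggingDict, self).__delitem__(key)

        logging.basicConfig(level=logging.INFO)
        config = LoggingDict({"log_level": "info"})
        config["log_level"] = "debug"   # logged as: config change: 'log_level': 'info' -> 'debug'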

    Corrections, additions, feedback?

    As far as I can tell there is no perfect solution here. Thoughts?




    Sean McArthurhyper v0.6

    A bunch of goodies are included in version 0.6 of hyper.

    Highlights

    • Experimental HTTP2 support for the Client! Thanks to the tireless work of @mlalic.
    • Redesigned Ssl support. The Server and Client can accept any implementation of the Ssl trait. By default, hyper comes with an implementation for OpenSSL, but this can now be disabled via the ssl cargo feature.
    • A thread safe Client. As in, Client is Sync. You can share a Client over multiple threads, and make several requests simultaneously.
    • Just about 90% test coverage. @winding-lines has been bumping the number ever higher.

    Also, as a reminder, hyper has been following semver more closely, and so, breaking changes mean bumping the minor version (until 1.0). So, to reduce unplanned breakage, you should probably depend on a specific minor version, such as 0.6, and not *.

    The Rust Programming Language BlogRust 1.1 stable, the Community Subteam, and RustCamp

    We’re happy to announce the completion of the first release cycle after Rust 1.0: today we are releasing Rust 1.1 stable, as well as 1.2 beta.

    Read on for details on the releases, as well as some exciting new developments within the Rust community.

    What’s in 1.1 Stable

    One of the highest priorities for Rust after its 1.0 release has been improving compile times. Thanks to the hard work of a number of contributors, Rust 1.1 stable provides a 32% improvement in compilation time over Rust 1.0 (as measured by bootstrapping).

    Another major focus has been improving error messages throughout the compiler. Again thanks to a number of contributors, a large portion of compiler errors now include extended explanations accessible using the --explain flag.

    Beyond these improvements, the 1.1 release includes a number of important new features:

    • New std::fs APIs. This release stabilizes a large set of extensions to the filesystem APIs, making it possible, for example, to compile Cargo on stable Rust.
    • musl support. It’s now possible to target musl on Linux. Binaries built this way are statically linked and have zero dependencies. Nightlies are on the way.
    • cargo rustc. It’s now possible to build a Cargo package while passing arbitrary flags to the final rustc invocation.

    More detail is available in the release notes.

    What’s in 1.2 Beta

    Performance improvements didn’t stop with 1.1 stable. Benchmark compilations are showing an additional 30% improvement from 1.1 stable to 1.2 beta; Cargo’s main crate compiles 18% faster.

    In addition, parallel codegen is working again, and can substantially speed up large builds in debug mode; it gets another 33% speedup on bootstrapping on a 4 core machine. It’s not yet on by default, but will be in the near future.

    Cargo has also seen some performance improvements, including a 10x speedup on large “no-op” builds (from 5s to 0.5s on Servo), and shared target directories that cache dependencies across multiple packages.

    In addition to all of this, 1.2 beta includes our first support for MSVC (Microsoft Visual C): the compiler is able to bootstrap, and we have preliminary nightlies targeting the platform. This is a big step for our Windows support, making it much easier to link Rust code against code built using the native toolchain. Unwinding is not yet available – code aborts on panic – but the implementation is otherwise complete, and all rust-lang crates are now testing on MSVC as a first-tier platform.

    Rust 1.2 stable will be released six weeks from now, together with 1.3 beta.

    Community news

    In addition to the above technical work, there’s some exciting news within the Rust community.

    In the past few weeks, we’ve formed a new subteam explicitly devoted to supporting the Rust community. The team will have a number of responsibilities, including aggregating resources for meetups and other events, supporting diversity in the community through leadership in outreach, policies, and awareness-raising, and working with our early production users and the core team to help guide prioritization.

    In addition, we’ll soon be holding the first official Rust conference: RustCamp, on August 1, 2015, in Berkeley, CA, USA. We’ve received a number of excellent talk submissions, and are expecting a great program.

    Contributors to 1.1

    As with every release, 1.1 stable is the result of work from an amazing and active community. Thanks to the 168 contributors to this release:

    • Aaron Gallagher
    • Aaron Turon
    • Abhishek Chanda
    • Adolfo Ochagavía
    • Alex Burka
    • Alex Crichton
    • Alexander Polakov
    • Alexis Beingessner
    • Andreas Tolfsen
    • Andrei Oprea
    • Andrew Paseltiner
    • Andrew Straw
    • Andrzej Janik
    • Aram Visser
    • Ariel Ben-Yehuda
    • Avdi Grimm
    • Barosl Lee
    • Ben Gesoff
    • Björn Steinbrink
    • Brad King
    • Brendan Graetz
    • Brian Anderson
    • Brian Campbell
    • Carol Nichols
    • Chris Morgan
    • Chris Wong
    • Clark Gaebel
    • Cole Reynolds
    • Colin Walters
    • Conrad Kleinespel
    • Corey Farwell
    • David Reid
    • Diggory Hardy
    • Dominic van Berkel
    • Don Petersen
    • Eduard Burtescu
    • Eli Friedman
    • Erick Tryzelaar
    • Felix S. Klock II
    • Florian Hahn
    • Florian Hartwig
    • Franziska Hinkelmann
    • FuGangqiang
    • Garming Sam
    • Geoffrey Thomas
    • Geoffry Song
    • Graydon Hoare
    • Guillaume Gomez
    • Hech
    • Heejong Ahn
    • Hika Hibariya
    • Huon Wilson
    • Isaac Ge
    • J Bailey
    • Jake Goulding
    • James Perry
    • Jan Andersson
    • Jan Bujak
    • Jan-Erik Rediger
    • Jannis Redmann
    • Jason Yeo
    • Johann
    • Johann Hofmann
    • Johannes Oertel
    • John Gallagher
    • John Van Enk
    • Jordan Humphreys
    • Joseph Crail
    • Kang Seonghoon
    • Kelvin Ly
    • Kevin Ballard
    • Kevin Mehall
    • Krzysztof Drewniak
    • Lee Aronson
    • Lee Jeffery
    • Liigo Zhuang
    • Luke Gallagher
    • Luqman Aden
    • Manish Goregaokar
    • Marin Atanasov Nikolov
    • Mathieu Rochette
    • Mathijs van de Nes
    • Matt Brubeck
    • Michael Park
    • Michael Rosenberg
    • Michael Sproul
    • Michael Wu
    • Michał Czardybon
    • Mike Boutin
    • Mike Sampson
    • Ms2ger
    • Nelo Onyiah
    • Nicholas
    • Nicholas Mazzuca
    • Nick Cameron
    • Nick Hamann
    • Nick Platt
    • Niko Matsakis
    • Oliver Schneider
    • P1start
    • Pascal Hertleif
    • Paul Banks
    • Paul Faria
    • Paul Quint
    • Pete Hunt
    • Peter Marheine
    • Philip Munksgaard
    • Piotr Czarnecki
    • Poga Po
    • Przemysław Wesołek
    • Ralph Giles
    • Raphael Speyer
    • Ricardo Martins
    • Richo Healey
    • Rob Young
    • Robin Kruppe
    • Robin Stocker
    • Rory O’Kane
    • Ruud van Asseldonk
    • Ryan Prichard
    • Sean Bowe
    • Sean McArthur
    • Sean Patrick Santos
    • Shmuale Mark
    • Simon Kern
    • Simon Sapin
    • Simonas Kazlauskas
    • Sindre Johansen
    • Skyler
    • Steve Klabnik
    • Steven Allen
    • Steven Fackler
    • Swaroop C H
    • Sébastien Marie
    • Tamir Duberstein
    • Theo Belaire
    • Thomas Jespersen
    • Tincan
    • Ting-Yu Lin
    • Tobias Bucher
    • Toni Cárdenas
    • Tshepang Lekhonkhobe
    • Ulrik Sverdrup
    • Vadim Chugunov
    • Valerii Hiora
    • Wangshan Lu
    • Wei-Ming Yang
    • Wojciech Ogrodowczyk
    • Xuefeng Wu
    • York Xiang
    • Young Wu
    • bors
    • critiqjo
    • diwic
    • gareins
    • inrustwetrust
    • jooert
    • klutzy
    • kwantam
    • leunggamciu
    • mdinger
    • nwin
    • parir
    • pez
    • robertfoss
    • sinkuu
    • tynopex
    • らいどっと

    Yunier José Sosa VázquezcuentaFox 3.1.1 now available

    Just a few days after the release of the version that fixed the problem with the service used to retrieve quota status, cuentaFox 3.1.1 is here.

    What's new?

    • The list of all users who have stored their passwords in Firefox is now displayed.

    v31.1-userlist

    • Usage alerts are now displayed, but without icons, since adding an icon kept them from showing at all (tested on Linux).

    v3.1.1-alertas

    • Several minor bugs were also fixed.

    Signing the add-on

    Starting with Firefox 41, some changes to how the browser manages add-ons will be introduced, and only add-ons signed by Mozilla will be installable. An add-on signed by Mozilla means more security for users against malicious extensions and third-party programs that try to install add-ons into Firefox.

    To be ready when Firefox 41 arrives, we have submitted cuentaFox for review on AMO and will have it available here shortly.

    Many people still use old versions; update to cuentaFox 3.1.1

    From the AMO statistics dashboard we have noticed that many people keep using old versions that do not work and are not recommended. We call on everyone to update and to spread the news about the new release.

    That said, once the add-on is approved, Firefox will update it according to each user's settings. Our idea is for the add-on to be updated from Firefoxmanía rather than from Mozilla, but the self-signed certificate and other problems prevent us from doing that.

    stats-cuentafox

    The username or password is incorrect

    Many people have reported that when they try to retrieve their data, an alert appears saying “The user is not valid or the password is incorrect”, and they ask us to fix this, but we cannot. We are not responsible for the service provided by cuotas.uci.cu, nor do we know what it uses to verify that those credentials are correct.

    If you would like to help develop the add-on, you can go to GitLab (UCI) and clone the project, or leave a suggestion.

    Install cuentaFox 3.1.1

    Matt ThompsonUpdating our on-ramps for contributors

    I got to sit in on a great debrief / recap of the recent Webmaker go-to-market strategy. A key takeaway: we’re having promising early success recruiting local volunteers to help tell the story, evangelize for the product, and (crucially) feed local knowledge into making the product better. In short:

    Volunteer contribution is working. But now we need to document, systematize and scale up our on-ramps.

    Documenting and systematizing

    It’s been a known issue that we need to update and improve our on-ramps for contributors across MoFo. They’re fragmented, out of date, and don’t do enough to spell out the value for contributors. Or celebrate their stories and successes.

    We should prioritize this work in Q3. Our leadership development work, local user research, social marketing for Webmaker, Mozilla Club Captains and Regional Co-ordinators recruitment, the work the Participation Lab is doing — all of that is coming together at an opportune moment.

    Ryan is a 15-year-old volunteer contributor to Webmaker for Android — and currently the second-most-active Java developer on the entire project.

    Get the value proposition right

    A key learning is: we need to spell out the concrete value proposition for contributors. Particularly in terms of training and gaining relevant work experience.

    Don’t assume we know in advance what contributors actually want. They will tell us.

    We sometimes assume contributors want something like certification or a badge — but what if what they *really* want is a personalized letter of recommendation, on Mozilla letterhead, from an individual mentor at Mozilla that can vouch for them and help them get a job, or get into a school program? Let’s listen.

    An on-boarding and recruiting checklist

    Here’s some key steps in the process the group walked through. We can document / systematize / remix these as we go forward.

    • Value proposition. Start here first. What’s in it for contributors? (e.g., training, a letter of recommendation, relevant work experience?) Don’t skip this! It’s the foundation for doing this in a real way.
    • Role description. Get good at describing those skills and opportunities, in language people can imagine adding to their CV, personal bio or story, etc.
    • Open call. Putting the word out. Having the call show up in the right channels, places and networks where people will see and hear about it.
    • Application / matching. How do people express interest? How do we sort and match them?
    • On-boarding and training. These processes exist, but aren’t well-documented. We need a playbook for how newcomers get on-boarded and integrated.
    • Assigning to a specific team and individual mentor. So that they don’t feel disconnected or lost. This could be an expectation for all MoFo: each staff member will mentor at least one apprentice each quarter.
    • Goal-setting / tasking. Tickets or some other way to surface and co-ordinate the work they’re doing.
    • A letter of recommendation. Once the work is done. Written by their mentor. In a language that an employer / admission officer / local community members understand and value.
    • Certification. Could eventually also offer something more formal: badging, a certificate, something you could share on your LinkedIn profile, etc.

    Next steps

    Myk MelezIntroducing PluotSorbet

    PluotSorbet is a J2ME-compatible virtual machine written in JavaScript. Its goal is to enable users to run J2ME apps (i.e. MIDlets) in web apps without a native plugin. It does this by interpreting Java bytecode and compiling it to JavaScript code. It also provides a virtual filesystem (via IndexedDB), network sockets (through the TCPSocket API), and other common J2ME APIs, like Contacts.

    The project reuses as much existing code as possible, to minimize its surface area and maximize its compatibility with other J2ME implementations. It incorporates the PhoneME reference implementation, numerous tests from Mauve, and a variety of JavaScript libraries (including jsbn, Forge, and FileSaver.js). The virtual machine is originally based on node-jvm.

    PluotSorbet makes it possible to bring J2ME apps to Firefox OS. J2ME may be a moribund platform, but it still has non-negligible market share, not to mention a number of useful apps. So it retains residual value, which PluotSorbet can extend to Firefox OS devices.

    PluotSorbet is also still under development, with a variety of issues to address. To learn more about PluotSorbet, check out its README, clone its Git repository, peruse its issue tracker, and say hello to its developers in irc.mozilla.org#pluotsorbet!

    About:CommunityMozilla Tech Speakers: A pilot for technical evangelism

    The six-week pilot version of the Mozilla Tech Speakers program wrapped up at the end of May. We learned a lot, made new friends on several continents, and collected valuable practical feedback on how to empower and support volunteer Mozillians who are already serving their regional communities as technical evangelists and educators. We’ve also gathered some good ideas for how to scale a speaker program that’s relevant and accessible to technical Mozillians in communities all over the world. Now we’re seeking your input and ideas as well.

    During the second half of 2015, we’ll keep working with the individuals in our pilot group (our pilot pilots) to create technical workshops and presentations that increase developer awareness and adoption of Firefox, Mozilla, and the Open Web platform. We’ll keep in touch as they submit talk proposals and develop Content Kits during the second half of the year, work with them to identify relevant conferences and events, fund speaker travel as appropriate, make sure speakers have access to the latest information (and the latest swag to distribute), and offer them support and coaching to deliver and represent!

    Why we did it

    Our aim is to create a strong community-driven technical speaker development program in close collaboration with Mozilla Reps and the teams at Mozilla who focus on community education and participation. From the beginning we benefited from the wisdom of Rosana Ardila, Emma Irwin, Soumya Deb, and other Mozillian friends. We decided to stand up a “minimum viable” program with trusted, invited participants—Mozillians who are active technical speakers and are already contributing to Mozilla by writing about and presenting Mozilla technology at events around the world. We were inspired by the ongoing work of the Participation Team and Speaker Evangelism program that came before us, thanks to the efforts of @codepo8, Shezmeen Prasad, and many others.

    We want this program to scale and stay sustainable, as individuals come and go, and product and platform priorities evolve. We will incorporate the feedback and learnings from the current pilot into all future iterations of the Mozilla Tech Speaker program.

    What we did

    Participants met together weekly on a video call to practice presentation skills and impromptu storytelling, contributed to the MDN Content Kit project for sharing presentation assets, and tried out some new tools for building informative and inspiring tech talks.

    Each participant received one session of personalized one-to-one speaker coaching, using “techniques from applied improvisation and acting methods” delivered by People Rocket’s team of coaching professionals. For many participants, this was a peak experience, a chance to step out of their comfort zone, stretch their presentation skills, build their confidence, and practice new techniques.

    In our weekly meetings, we worked with the StoryCraft technique, and hacked it a little to make it more geek- and tech speaker-friendly. We also worked with ThoughtBox, a presentation building tool to “organize your thoughts while developing your presentation materials, in order to maximize the effectiveness of the content.” Dietrich took ThoughtBox from printable PDF to printable web-based form, but we came to the conclusion it would be infinitely more usable if it were redesigned as an interactive web app. (Interested in building this? Talk to us on IRC. You’ll find me in #techspeakers or #devrel, with new channels for questions and communication coming soon.)

    We have the idea that an intuitive portable tool like ThoughtBox could be useful for any group of Mozillians anywhere in the world who want to work together on practicing speaking and presentation skills, especially on topics of interest to developers. We’d love to see regional communities taking the idea of speaker training and designing the kind of programs and tools that work locally. Let’s talk more about this.

    What we learned

    The pilot was ambitious, and combined several components—speaker training, content development, creating a presentation, proposing a talk—into an aggressive six-week ‘curriculum.’ The team, which included participants in eight timezones, spanning twelve+ hours, met once a week on a video call. We kicked off the program with an introduction by People Rocket and met regularly for the next six weeks.

    Between scheduled meetings, participants hung out in Telegram, a secure cross-platform messaging app, sharing knowledge, swapping stickers (the virtual kind) and becoming friends. Our original ambitious plan might have been feasible if our pilots were not also university students, working developers, and involved in multiple projects and activities. But six weeks turned out to be not quite long enough to get it all done, so we focused on speaking skills—and, as it turned out, on building a global posse of talented tech speakers.

    What’s next

    We’re still figuring this out. We collected feedback from all participants and discovered that there’s a great appetite to keep this going. We are still fine-tuning some of the ideas around Content Kits, and the first kits are becoming available for use and re-use. We continue to support Tech Speakers as they present at conferences, organize workshops and trainings in their communities, and create their own Mozilla Tech Speakers groups with local flavor and focus.

    Stay tuned: we’ll be opening a Discourse category shortly, to expand the conversation and share new ideas.

    And now for some thank yous…

    I’d like to quickly introduce you to the Mozilla Tech Speakers pilot pilots. You’ll be hearing from them directly in the days, weeks, months ahead, but for today, huge thanks and hugs all around, for the breadth and depth of their contributions, their passion, and the friendships we’ve formed.

    Adrian Crespo, Firefox Marketplace reviewer, Mozilla Rep, student, and technical presenter from Madrid, Spain, creator of the l10n.js Content Kit, for learning and teaching localization through the native JavaScript method.

    Ahmed Nefzaoui, @AhmedNefzaoui, recent graduate and active Mozillian, Firefox OS contributor, Arabic Mozilla localizer, RTL (right-to-left) wizard, and web developer from Tozeur, Tunisia.

    Andre Garzia, @soapdog, Mozilla Rep from Rio de Janeiro, Brazil, web developer, app developer and app reviewer, who will be speaking about Web Components at Expotec at the end of this month. Also, ask him about the Webmaker team LAN Houses program just getting started now in Rio.

    Andrzej Mazur, @end3r, HTML5 game developer, active Hacks blog and MDN contributor, creator of a content kit on HTML5 Game Development for Beginners, active Firefox app developer, Captain Rogers creator, and frequent tech speaker, from Warsaw, Poland.

    István “Flaki” Szmozsánszky, @slsoftworks, Mozillian and Mozilla Rep, web and mobile developer from Budapest, Hungary. Passionate about Rust, Firefox OS, the web of things. If you ask him anything “mildly related to Firefox OS, be prepared with canned food and sleeping bags, because the answer might sometimes get a bit out of hand.”

    Kaustav Das Modak, @kaustavdm, Mozilla Rep from Bengaluru, India; web and app developer; open source evangelist; co-founder of Applait. Ask him about Grouphone. Or, catch his upcoming talk at the JSChannel conference in Bangalore in July.

    Michaela R. Brown, @michaelarbrown, self-described “feisty little scrapper,” Internet freedom fighter, and Mozillian from Michigan. Michaela will share skills in San Francisco next week at the Library Freedom Project: Digital Rights in Libraries event.

    Rabimba Karanjai, @rabimba, a “full-time graduate researcher, part-time hacker and FOSS enthusiast,” and 24/7 Mozillian. Before the month is out, Rabimba will speak about Firefox OS at OpenSourceBridge in Portland and at the Hong Kong Open Source conference.

    Gracias. شكرا. धन्यवाद. Köszönöm. Obrigada. Dziękuję. Thank you. #FoxYeah.

    Robert O'CallahanWhistler Hike

    I'm at the Mozilla all-hands week in Whistler. Today (Monday) was a travel day, but many of us arrived yesterday, so today I had most of the day free and chose to go on a long hike organized by Sebastian --- because I like hiking, but also because lots of exercise outside should help me adjust to the time zone. We took a fairly new trail, the Skywalk South trail: starting in the Alpine Meadows settlement at the Rick's Roost trailhead at the end of Alpine Way, walking up to connect with the Flank trail, turning up 19 Mile Creek to wind up through forest to Iceberg Lake above the treeline, then south up and over a ridge on the Skywalk South route, connecting with the Rainbow Ridge Loop route, then down through Switchback 27 to finally reach Alta Lake Rd. This took us a bit over 8 hours including stops. We generally hiked quite fast, but some of the terrain was tough, especially the climb up to and over the ridge heading south from Iceberg Lake, which was more of a rock-climb than a hike in places! We had to get through snow in several places. We had a group of eight: four of us did the long version and four did a slightly shorter version by returning from Iceberg Lake the way we came. Though I'm tired, I'm really glad we did this hike the way we did it; the weather was perfect, the scenery was stunning, and we had a good workout. I even went for a dip in Iceberg Lake, which was a little bit crazy and well worth it!

    Liz HenryTrip to Whistler for Mozilla’s work week

    Our work week hasn’t started yet, but since I got to Whistler early I have had lots of adventures.

    First the obligatory nostril-flaring over what it is like to travel with a wheelchair. As we started the trip to Vancouver, I had an interesting experience with United Airlines as I tried to persuade them that it was OK for me to fold up my mobility scooter and put it into the overhead bin on the plane. Several gate agents and other people got involved, telling me many reasons why this could not, should not, never had and never would happen:

    * It would not fit
    * It is illegal
    * The United Airlines handbook says no
    * The battery has to go into the cargo hold
    * Electric wheelchairs must go in the cargo hold
    * The scooter might fall out and people might be injured
    * People need room for their luggage in the overhead bins
    * Panic!!

    The Air Carrier Access Act of 1986 says,

    Assistive devices do not count against any limit on the number of pieces of carry-on baggage. Wheelchairs and other assistive devices have priority for in-cabin storage space over other passengers’ items brought on board at the same airport, if the disabled passenger chooses to preboard.

    In short I boarded the airplane, and my partner Danny folded up the scooter and put it in the overhead bin. Then, the pilot came out and told me that he could not allow my battery on board. One of the gate agents had told him that I have a wet cell battery (like a car battery). It is not… it is a lithium ion battery. In fact, airlines do not allow lithium batteries in the cargo hold! The pilot, nicely, did not demand proof it is a lithium battery. He believed me, and everyone backed down.

    The reason I am stubborn about this is that I specifically have a very portable, foldable electric wheelchair so that I can fold it up and take it with me. Twice in the past few years, I have had my mobility scooters break in the cargo hold of a plane. That made my traveling very difficult! The airlines never reimbursed me for the damage. Another reason is that the baggage handlers may lose the scooter, or bring it to the baggage pickup area rather than to the gate of the plane.

    Onward to Whistler! We took a shuttle and I was pleasantly (and in a way, sadly) surprised that the shuttle liaison, and the driver, both just treated me like any other human being. What a relief! It is not so hard! This experience is so rare for me that I am going to email the shuttle company to compliment them and their employees.

    The driver, Ivan, took us through Vancouver, across a bridge that is a beautiful turquoise color with stone lions at its entrance, and through Stanley Park. I particularly noticed the tiny beautiful harbor or lagoon full of boats as we got off the bridge. Then, we went up Highway 99, or the Sea to Sky Highway, to Squamish and then Whistler.

    Sea to sky highway

    When I travel to new places I get very excited about the geology and history and all the geography! I love to read about it beforehand or during a trip.

    The Sea to Sky Highway was improved in preparation for the Winter Olympics and Paralympics in 2010. Before it was rebuilt it was much twistier with more steeply graded hills and had many bottlenecks where the road was only 2 lanes. I believe it must also have been vulnerable to landslides or flooding or falling rocks in places. As part of this deal the road signs are bilingual in English and Squamish. I read a bit on the way about the ongoing work to revitalize the Squamish language.

    The highway goes past Howe Sound, on your left driving up to Squamish. It is a fjord, carved by glaciers that retreated around 11,000 years ago. Take my geological knowledge with a grain of salt (or a cube of ice), but here is a basic narrative of the history. At some point it was a shallow sea here, but a quite muddy one, not one with much of a coral reef system, and the mountains were an archipelago of island volcanoes. So there are ocean floor sediments around, somewhat metamorphosed; a lot of shale.

    There is a little cove near the beginning of the highway with some boats and tumble-down buildings, called Porteau Cove. Interesting history there. Then you will notice a giant building up the side of a hill, the Britannia Mining Museum. That was once the Britannia Mines, producing billions of dollars’ worth of copper, gold, and other metals. The entire hill behind the building is honeycombed with tunnels! A lot of polluted groundwater has come out of this mine, damaging the coast and the bay waters, but it was recently plugged with concrete (the Millennium Plug), and that improved water quality a lot, so that shellfish, fish, and marine mammals are returning to the area. The creek also has trout and salmon returning. That’s encouraging!

    Then you will see huge granite cliffs and Shannon Falls. The giant monolith made me think of El Capitan in Yosemite. And also of Enchanted Rock, a huge pink granite dome in central Texas. Granite weathers and erodes in very distinctive ways. Once you know them you can recognize a granite landform from far away! I haven’t had a chance to look close up at any rocks on this trip…. Anyway, there is a lot of granite and also basalt or some other igneous extrusive rock. Our shuttle driver told me that there is columnar basalt near by at a place called French Fry Hill.

    The mountain is called Stawamus Chief Mountain. Squamish history tells us it was a longhouse turned to stone by the Transformer Brothers. I want to read more about that! Sounds like a good story! Rock climbers love this mountain.

    There are some other good stories, I think one about two sisters turned to stone lions. Maybe that is why there are stone lions on the Vancouver bridge.

    The rest of the drive brought us up into the snowy mountains! Whistler is only 2000 feet above sea level but the mountains around it are gorgeous!

    The “village” where tourists stay is sort of a giant, upscale, outdoor shopping mall with fake streets in a dystopian labyrinth. It is very nice and pretty but it can also feel, well, weird and artificial! I have spent some time wandering around with maps, backtracking a lot when I come to dead ends and stairways. I am also playing Ingress (in the Resistance) so I have another geographical overlay on the map.

    Whistler bridge lost lake

    On Sunday I got some groceries and went down paved and then gravel trails to Lost Lake. It was about an hour-long trip to get there. The lake was beautiful, cold, and full of people sunbathing, having picnics, and swimming. Lots of bikes and hikers. I (nearly) ran out of battery, then realized that the lake is next to a parking lot. I got a taxi back to the Whistler Village hotel! Better for me anyway, since the hour-long scooter trip over gravel just about killed me (I took painkiller halfway there and then was just laid flat with pain anyway). Too ambitious of an expedition, sadly. I had many thoughts about the things I enjoyed when I was younger (going down every trail, and the hardest trails, and swimming a lot). Now I can think of those memories, and I can look at beautiful things and also read all the information about an area, which is enjoyable in a different way. This is just how life is and you will all come to it when you are old. I have this sneak preview…. at 46…. When I am actually old, I will have a lot of practice and will be really good at it. Have you thought about what kind of old person you would like to be, and how you will become that person?

    Today I stayed closer to home just going out to Rebagliati Park. This was fabulous since it wasn’t far away, seriously 5 minutes away! It was very peaceful. I sat in a giant Adirondack chair in a flower garden overlooking the river and a covered bridge. Watching the clouds, butterflies, bees, birds, and a bear! And of course hacking the portals (Ingress again). How idyllic! I wish I had remembered to bring my binoculars. I have not found a shop in the Whistler Mall-Village that stocks binoculars. If I find some, I will buy them.

    I also went through about 30 bugs tracked for Firefox 39, approved some for uplift, wontfixed others, emailed a lot of people for work, and started the RC build going. Releng was heroic in fixing some issues with the build infrastructure! But, we planned for coverage for all of us. Good planning! I was working Sunday and Monday while everyone else travelled to get here…. Because of our release schedule for Firefox it made good sense for me to get here early. It also helps that I am somewhat rested from the trip!

    I went to the conference center, found the room that is the home base for the release management and other platform teams, and got help from a conference center setup guy to lay down blue tape on the floor of the room from the doorway to the back of the room. The tape marks off a corridor to be kept clear, not full of backpacks or people standing and talking in groups, so that everyone can freely get in and out of the room. I hope this works to make the space easy for me to get around in, in my wheelchair, and it will surely benefit other people as well.

    Travel lane

    At this work week I hope to learn more about what other teams are doing, any cool projects etc, especially in release engineering and in testing and automated tools and to catch up with the Bugzilla team too. And will be talking a bunch about the release process, how we plan and develop new Firefox features, and so on! Looking forward now to the reception and seeing everyone who I see so much online!


    Ted ClancyThe Canadian, Day 5

    Today I woke up on the outskirts of Greater Vancouver, judging by the signs in this industrial area. The steward has just announced we’ll be arriving in Vancouver in one hour.

    Thus concludes my journey. Though the journey spanned five calendar days, I left late on Thursday night and I’m arriving early Monday morning, so it’s more like four nights and three days. (A total travelling time of 3 days and 14.5 hours.) Because I travelled over a weekend, I only lost one day of office time and gained a scenic weekend.

    The trip roughly breaks down to one day in the forests of Ontario, one day across the plains of Manitoba and Saskatchewan, and one day across the mountains of Alberta and British Columbia.

    In retrospect, I should have done my work on the second day, when my mobile internet connection was good and the scenery was less interesting. (Sorry, Manitoba. Sorry, Saskatchewan.)

    From here I take a bus to Whistler to attend what Mozilla calls a “work week” — a collection of presentations, team meetings, and planning sessions. It ends with a party on Friday night, the theme of which is “lumberjack”. (Between the bears and the hipsters, don’t I already get enough of people dressing like lumberjacks back in Toronto?)

    Because I’m a giant hypocrite, I’ll be flying back to Toronto. But I heartily recommend travel by Via Rail. Their rail passes (good for visiting multiple destinations, or doing a round trip) are an especially good deal.

    I wonder what Kylie Minogue has to say about rail travel.


    QMOFirefox 39 Beta 7 Testday Results

    Hey Mozillians!

    As you already may know, last Friday – June 19th – we held another Testday event, for Firefox 39 Beta 7.

    We’d like to take this opportunity to thank everyone for getting involved in the proposed testing activities and in general, for helping us make Firefox better.
    Many thanks go out to the Bangladesh QA Community, which gets bigger and better every day (Hossain Al Ikram, Nazir Ahmed Sabbir, Forhad Hossain, Ashickur Rahman, Meraj Kazi, Md. Ehsanul Hassan, Wahiduzzaman Hridoy, Rezaul Huque Nayeem, Towkir Ahmed, Md. Rahimul Islam, Mohammad Maruf Islam, Fahmida Noor, Sayed Mohammad Amir, Saheda Reza Antora), Moin, nt, Vairus and AaronRaimist for their efforts and contributions, and to all our moderators. Thanks a bunch!

    Keep an eye on QMO for the upcoming events! :)

    John O'DuinnThe canary in the coal mine

    After my recent “We are ALL Remoties” presentation at Wikimedia, I had some really great followup conversations with Arthur Richards at WikiMedia Foundation. Arthur has been paying a lot of attention to scrum and agile methodologies – both in the wider industry and also specifically in the context of his work at Wikimedia Foundation, which has people in different locations. As you can imagine, we had some great fun conversations – about remoties, about creating culture change, and about all-things-scrum – especially the rituals and mechanics of doing daily standups with a distributed team.

    Next time you see a group of people standing together looking at a wall, and moving postit notes around, ask yourself: “how do remote people stay involved and contribute?” Taking photographs of the wall of postit notes, or putting the remote person on a computer-with-camera-on-wheeled-cart feels like a duct-tape workaround; a MacGyver fix done quickly, with the best of intentions, genuinely wanting to help the remote person be involved, but still not-a-great experience for remoties.

    There has to be a better way.

    We both strongly agree that having people in different locations is just a way to uncover the internal communication problems you didn’t know you already have… the remote person is the canary in the coal mine. Having a “we are all remoties” mindset helps everyone become more organized in their communications, which helps remote people *and* also the people sitting near each other in the office.

    Arthur talked about this idea in his recent (and lively and very well attended!) presentation at the Annual Scrum Alliance “Global Scrum Gathering” event in Phoenix, Arizona. His slides are now visible here and here.

    If you work in an agile / scrum style environment, especially with a geo-distributed team of humans, it’s well worth your time to read Arthur’s presentation! Thought provoking stuff, and nice slides too!