Robert O'Callahan: rr Talk Video From TCE 2015

The good people at the Technion have uploaded video of the TCE 2015 talks, in particular video of my rr talk. My slides are also available. This is a fairly high-level talk describing the motivation and design of rr, delving into a few interesting details and concluding with a discussion of some of the exciting possibilities that could be enabled by rr and similar technology.

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1178231] When the tracking section is collapsed, keywords are displayed like they are in the whiteboard
  • [1180449] The modal user interface has changed the behavior of ctype=xml URLs
  • [1180711] invalid_cookies_or_token should have a real error code
  • [1172968] Move the scripts we want to keep from contrib/* and place them in scripts/ directory. Remove contrib from repo
  • [1180788] b.m.o modal UI “fixed in” label isn’t quite right for B2G
  • [1180776] Mozilla Recruiting Requisition Opening Process Template: Job Description Update

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Ehsan Akhgari: Local Autoland

It has been a while since I’ve asked myself: “Is the tree open?”

These days, when I want to land something on mozilla-inbound, I switch to my git-workdir[1], I cherry-pick the commit that I want to land, and I type the following in my terminal:

$ land

Land is a very sophisticated bot that tries to land your commits for you!  It assumes you use git-cinnabar, which you should if you use git.  Porting it to Mercurial is left as an exercise.

FAQ

  • Isn’t this wasteful for the infrastructure?
    The polling frequency of the tool is adjustable, and I sometimes tune it back to poll less frequently. However, our infrastructure should be quite capable of handling load at this level (please let me know if this assumption is flawed!).
  • Is this how we sometimes see you push immediately after the tree opens?
    Of course!

Cameron Kaiser: Beta 1 aftermath

So let's sum up the first 38 beta.

Confirmed bugs: Facesuck seems to be totally whacked in Ion mode (it works fine in Baseline-only mode). IonPower passes all the JIT tests, though, so this must be something that the test suite does not cover. I'm investigating some other conformance test suites and have corrected a couple of other variances between Baseline and Ion so far, but none of them appears to be what's ailing Faceblech yet.

Also, we have another web font that makes ATSUI puke, except it has an inconveniently null PostScript name, so we can't filter it with the existing method. Fortunately, Tobias came up with an alternative font filter system some time ago that should work with 10.4.

Not confirmed (yet?): a few people have reported that memory usage skyrockets upon quit and the browser crashes (inevitably after exceeding its addressing space), on a variety of systems, both 10.4 and 10.5. I can't reproduce this on any of the test machines.

I need to do more looking into the stored passwords question.

Since we're out of runway, i.e., ESR31, and we need one more beta before release, I'm going to keep working on the Facebork problem (or at least try to fix it by fixing something else) until July 24. If we can't do it by then, I guess we launch without IonPower, which is unfortunate and will regress JavaScript performance, but we will still at least have Baseline. Faceburp is just too big to debug in place, so I need you folks to find another site that has similar problems. I haven't been able to yet myself.

Nicholas Nethercote: Compacting GC

Go read Jon Coppeard’s description of the compacting GC algorithm now used by SpiderMonkey!

John O'Duinn: “Hot Seat: The CEO Guidebook” by Dan Shapiro

This book just came out and I loved it. If you are starting a company, or thinking of it, you need to read this book. Period.

Dan covered a whole range of topics very succinctly, and in easy-to-follow language. When and how to raise funds. What all those terms mean. Who should (and should not!) be on your board, and why. How to allocate shares and ownership between co-founders. Where to incorporate your company (Dan has strong opinions on this!). How to create (and then also maintain) company culture. A great section on decision making. A section on “Hiring” in the context of the Manhattan Project vs the moon-shot Apollo project that I think every engineering hiring manager should read before building a team. Several true stories about startups where co-founder mismatches caused company-threatening problems (trivia: 6 of 10 startups lose a co-founder in the early days). And some good (and bad!) stories of how important trust was.

Some great quotes that resonated with me:

“You have limited resources of time and money. When they run out, you go bankrupt. The important thing is not cost/benefit: it’s opportunity cost.”

(in the context of how much travel was needed for all the in-person meetings with investors when raising funding) “…Alaska Airlines gave me MVP status for my efforts. In January.”

“Entrepreneurship is the pursuit of opportunity without regard to the resources currently controlled.” – Prof. Stevenson, Harvard.

In a variation of the “fail fast” mantra in developer circles, Dan notes that “…while it might seem like cold comfort now, the sooner you fail, the sooner you can try again.” Oh, and he’s not just saying it – that was the ending of a chapter where he detailed the failure of one of his startups.

His tolerance for large volumes of coffee and his pointer to the suggested reading “Coffee, CYP1A2 Genotype, and Risk of Myocardial Infarction” was a great and unexpected tangent for me personally. (More info in the Journal of the American Medical Association.)

“Startups don’t out think their competitors; they out-execute them.”

“If leadership is the forest, then management is the trees. Day to day, it’s what consumes your time, and it’s imperative that you get it right.”

It takes skill and seasoned-experience-in-the-field to have one person cover all these different topics. Even more skill to do so clearly, and concisely. Putting them all together in a way that makes sense was great. Just great. If you are starting a company, or thinking of it, you need to read this book. Period.

Aside: Having this on my Kindle app, on my trusty Nexus 5 phone, was quite a good reading experience. The book was written in short, digestible chapters, which I could quickly complete standing in a store line, or in the back of a taxi between meetings. It also encouraged me to think more about the chapter I had just finished before I next stopped to read some more. A nice way to digest the many lessons in here. I’m still experimenting with which books work best on phone+Kindle vs ink-on-paper, but at least for this book, reading on Kindle worked for me.

(Disclaimer: I bought this book because I’m starting my own company, and that is the basis of the above review. As this book is published by O’Reilly Press, it feels important to disclose that I am also currently doing some work with O’Reilly… which did not influence anything I wrote here.)

Chris Cooper: Releng & Relops weekly highlights - July 3, 2015

Welcome to the weekly releng Friday update, Whistler hangover edition.

Half the team took time off after Whistler. With a few national holidays sprinkled in, things were pretty slow last week. Still, those of us who were around took advantage of the lull to get stuff done.

tl;dr

Taskcluster: Our new intern, Anthony Miyaguchi, started in San Francisco and will be working on crash symbol uploads in TaskCluster. Our other intern, Anhad, has almost finished his work migrating SpiderMonkey to TaskCluster. Morgan and Jonas are investigating task graph creation directly from GitHub. Dustin continues to make efficiency improvements in the Fennec Taskcluster builds.

Modernize infrastructure: Mark, Q, and Rob continue to work on standing up our new Windows build platform in AWS. This includes measuring some unexpected performance improvements.

Improve release pipeline: We’re standing up a staging version of Ship-It to make it easier to iterate. Ben’s working on a new-and-improved checksum builder for S3, and Mike fixed a problem with l10n updates.

Improve CI pipeline: Jordan pushed the archiver relengapi endpoint and client live. They are now being actively used for mozharness on the ash project branch. catlee deployed the hg bundleclone extension to our Mac and Linux platforms, and Rail deployed a new version of funsize with many integrity improvements.

Release: Firefox 39.0 is in the wild!

Tune in again next week!

And here are all the details:

Taskcluster

  • Our intern, Anhad, has nearly finished porting SpiderMonkey to TaskCluster (https://bugzil.la/1164656). He’ll have a blog post with details coming up shortly.
  • Morgan decided it would be a good idea if the container we used to build 32-bit Linux builds was under our direct control, so we spent some time this week putting one together. (https://bugzil.la/1178161)
  • Morgan and Jonas began sketching out how we can create task graphs directly from GitHub by using organization hooks (https://bugzil.la/1179458). This will be an important piece for autoland.
  • Dustin continues to make efficiency improvements in the Fennec TC builds. Last week, he worked on installing Java for Android builds via tooltool rather than baking it into the base image for the build host (https://bugzil.la/1161075). He also filed a bug to make sure the tooltool client deletes downloads after unpacking, to avoid some of the existing overhead incurred by using tooltool for an increasing number of things (https://bugzil.la/1179777).
  • Dustin also wrote a blog post about how to run ad-hoc tasks using TaskCluster: http://code.v.igoro.us/posts/2015/07/ad-hoc-tasks-in-taskcluster.html

Operational work

  • Coop was in California to onboard our new intern, Anthony Miyaguchi. Anthony’s first task will be figuring out how to upload symbols to the Socorro API from a separate task in TaskCluster (https://bugzil.la/1168979). Welcome, Anthony!

Modernize infrastructure

  • Microsoft is deprecating the use of SHA-1 Authenticode signatures at the end of the year. This is good for the safety of internet users in general, unless you happen to still be on Windows XP SP2 or earlier, which does _not_ support SHA-2 or other newer signature algorithms. This means that if Mozilla needs to ship to XP SP2 _and_ XP SP3 (and higher) after 2016-01-01, we may not be able to ship them the same binaries. Ben spent some time looking at potential solutions this week. (https://bugzil.la/1079858)
  • Mark has been adding new xml-based config support options for our new Windows instances in ec2 (https://bugzil.la/1164943).
  • Having Windows builds in AWS is a win from a scalability standpoint, but early measurements indicate that builds in AWS might also be faster than our current hardware builders by up to 30%. Q has been capturing build time comparisons for build slaves in AWS versus colo hardware to get a better picture of the potential wins (https://bugzil.la/1159384).
  • Rob deployed an updated version of runslave.py to all Windows slaves, finishing off a round of standardization and correctness work begun a few months ago (https://bugzil.la/115406).

Improve release pipeline

  • As part of our ongoing migration off of our monolithic ftp server, Ben has been iterating on getting checksum builders working with S3 (https://bugzil.la/117414). The eventual goal will be to run these as tasks in Taskcluster.
  • In Whistler, we decided it was important to maintain a staging version of release runner / ship it to allow us to iterate more quickly and safely on these tools. Ben spent some time this week planning how to facilitate supporting a staging version alongside the production deployment (https://bugzil.la/1178324).
  • Mike landed a fix to a branding issue that was preventing developer edition l10n builds from correctly identifying themselves to the update server (https://bugzil.la/1178785).

Improve CI pipeline

  • At the start of the quarter, Jordan was tasked with moving mozharness into the gecko tree. Through discussion with Dustin and others, this gradually morphed into creating a relengapi endpoint that allows you to get a tarball of any repo and rev subdirectory, upload it to s3, and download/unpack it locally. This would pave the way for being able to put *anything* in the gecko tree by reference, and still being able to deploy it or use it without checking out the tree. The archiver relengapi endpoint and client are now live and being actively used on the ash project branch. We hope to uplift it to mozilla-central by the end of this week (https://bugzil.la/1131856).
  • coop added buildprops.json to the list of files uploaded to TaskCluster as part of the build process (https://bugzil.la/117709). This makes it easier for developers to replicate buildbot builds locally or on loaned machines because now they have access to the same seed variables as buildbot.
  • catlee deployed the hg bundleclone extension to our Mac and Linux platforms last week (https://bugzil.la/1144872). Coupled with the server-side improvements deployed by dev services recently, we are gradually reducing the complexity and overhead of VCS operations that we need to maintain in releng code.
  • Rail deployed a new version of funsize (0.11), our on-demand update generation service. The new version includes improvements to use whitelisted domains, and it now performs virus scanning and signature verification on complete MAR files before creating partial MARs (https://bugzil.la/1176428).

Releases

See you next week!

Air Mozilla: Tech Talk: The Power of Emotion and Delight

Ricardo Vazquez will be speaking on "The Power of Emotion and Delight: Microinteractions."

Air Mozilla: Mozilla Weekly Project Meeting

The Monday Project Meeting

About:Community: MDN Fellows Successfully Oriented

The first-ever cohort of MDN Fellows convened at the Mozilla Vancouver space the weekend of June 20. This MDN Fellowship Pilot is an experiment for Mozilla to engage advanced web developers in strategic projects at Mozilla to advance our teaching and learning objectives.

Day 1: Learning About Learning

One of the first things we did was to collectively identify shared goals for the weekend:

  • Welcome our Fellows into the Mozilla fold.
  • Create ties between our Fellows and their project mentors.
  • Build familiarity with key learning and curriculum design principles.
  • Set up our Fellows for success in creating Content Kits, a new framework designed by both MDN and the Mozilla Foundation to facilitate wide teaching of web content.
  • Understand how the Fellows’ work ties into the broader mission and efforts at Mozilla.

And Day 1 was an exercise in integrity: because one of the least effective ways people learn is by lecture – and since we wanted our fellows to learn about learning – we all jumped in and engaged with learning content. Bill Mills, Community Manager for Mozilla’s Science Lab, conveyed several principles. A few nuggets that our teams have already started to apply to their projects:

  • Structure curriculum into pieces as small and manageable as possible. This allows instructors and students to customize and adapt the content to their specific needs and learning pace. It also helps avoid the common pitfall of underestimating how much time is required to teach material.
  • Employ techniques to identify gaps in learning. For example, it’s possible to design multiple-choice answers to flag specific learning errors: if the question is “What is 23 + 28?” and a student selects the incorrect answer “41”, you can assume the student did not properly ‘carry’ in their math (3 + 8 gives 1, carry 1; dropping the carry yields 41 instead of the correct 51).
  • Provide multiple approaches to explain the same issue to avoid the common pitfall of simply repeating the information more slowly, or more loudly ;).

Day 2: Getting to Brass Tacks

Day 2 had our Fellows applying their new knowledge to their own projects. They developed a plan of attack for their respective work for the remainder of the Fellowship. Some highlights:

The Curriculum team was well-served by referencing the Dunning-Kruger effect in designing its pre-requisites list. Specifically, they decided to parse this out using a “get information as you need it” approach for the pre-reqs rather than present their potential instructors with one long daunting list.

Both the Service Workers team and the WebGL team are embracing the above-mentioned concept of modularizing their content to make it more manageable. Specifically, Service Workers will create different approaches for different use cases to accommodate the evolving nature of its nascent technology; and WebGL will parse out different components so instructors and students can create reusable hackable code samples.

The Test The Web Forward team is employing “reverse instructional design” so its instructors can help others understand how problems are solved on a step-by-step basis that students can dissect, rather than simply seeing the final ‘answers.’ If you’ve heard of “reverse engineering,” then “reverse instructional design” should make sense.

The Web App Performance team, taking into consideration the time spent on both network as well as front-end efforts, is advocating best practices and reverse engineering the problems in the code with proper documentation and examples.

How MDN Fellows Support the Mozilla Mission

Last year MDN began working with our colleagues at the Mozilla Foundation to see how we might partner to advance our common goals of growing web literacy. The work MDN is doing to expand beyond documentation and into teaching and learning dovetails nicely with the Foundation’s efforts to harmonize Mozilla’s learning and fellowship programs. This is a work in progress and we expect our MDN Fellows to play a key role in informing this.

Doug Belshaw: How I've achieved notification nirvana with a smartphone / smartband combo

TL;DR I’m using a cheap Sony SmartBand SWR10 to get selective vibrating notifications on my wrist from my Android phone. I never miss anything important, and I’m not constantly checking my devices.

Sony SmartBand SWR10 - image via Digital Trends

Every year I take between one and two months away from social media and blogging. I call this period Belshaw Black Ops. One of the things I’ve really enjoyed during these periods is not being constantly interrupted by notifications.

The problem with notification systems on smartphones is that they’re still reasonably immature. You’re never really sure which notifications are unmissable and which are just fairly meaningless social updates. When a pre-requisite of your job is ‘keeping up to date’, it’s difficult to flick the binary switch to off.

Thankfully, I’ve come across a cheap and easy way to simplify all of this. After finding out about the existence of Sony smartbands via HotUKDeals (a goldmine of knowledge as well as deals) I bought the SWR10 for about £20. It connects via NFC and Bluetooth to Android smartphones.

The battery life of the smartband is about 3-4 days. I wear it almost all of the time - including in bed as I use the vibrating alarm to wake me up when I start to stir. I choose not to use the Sony lifelogging app as I’m not really interested in companies having that many details about me. It’s the reason I stopped wearing a Fitbit.

Sony SmartBand SWR10 notifications

My wife and I use Telegram to message each other, so my wrist vibrates discreetly when she sends me a message. I’ve also got it configured to vibrate on calendar events, phone calls, and standard SMS messages.

All of this means that my smartphone is almost permanently in Silent mode. The vibration on my wrist is enough for me to feel, but not for others to hear. It’s pretty much the perfect system for me - notification nirvana!


Comments? Questions? I’m @dajbelshaw or you can email me: mail@dougbelshaw.com

Armen Zambrano: mozci 0.8.2 - Allow using TreeHerder as a query source

In this release we have added an experimental feature where you can use Treeherder as your source for job information instead of using BuildApi/Buildjson.
My apologies, as this should have been a minor release (0.9.0) instead of a patch release (0.8.2).

Contributors

Thanks to @adusca, @vaibhavmagarwal, and @chmanchester for their contributions.
Our latest new contributor is @priyanklodha - thank you!

How to update

Run "pip install -U mozci" to update

Major highlights

  • Added --query-source option to get data from Treeherder or Buildapi
  • Improved usage of OOP to allow for different data sources seamlessly

Minor improvements

  • Better documentation of --times
  • Cleaning up old builds-*.js files
  • Enforced line character limit

All changes

You can see all changes in here:
0.8.1...0.8.2

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Mike Taylor: Upcoming changes to the Firefox for Android UA string

If there's one thing that developers love more than the confusing nature of User Agent strings, it's when they change.

Great news, everybody.

Beginning in Firefox for Android 41, the default UA string will contain the Android version in the platform token (bug here):

Mozilla/5.0 (Android <Android version>; Mobile; rv:<Gecko version>) Gecko/<Gecko version> Firefox/<Gecko version>

And perhaps the only thing that developers love more than changing UA strings is when they change conditionally.

So, for interoperability with the wild and crazy web, if a user is on a version of Android lower than 4 (which we still support), we will report the Android version as 4.4. Versions 4 and above will accurately reflect the Android version.

And in case you've forgotten, Firefox for Android is the same across Android versions, so sniffing the version won't tell you if we do or do not support the latest cool feature.

As always, send all complaints to our head of customer satisfaction.

This Week In Rust: This Week in Rust 86

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

From the Blogosphere

New Releases & Project Updates

  • capgun. A simple utility that watches files and fires a specified command when they change.
  • pirate. A command-line arrrrguments parser, written in Rust.
  • rust-worldgen. Noise and World Generation library for Rust.
  • plex. A parser and lexer generator as a Rust syntax extension.

What's cooking on nightly?

107 pull requests were merged in the last week.

New Contributors

  • Adam Heins
  • Alex Newman
  • Christian Persson
  • Eljay
  • Kagami Sascha Rosylight

Approved RFCs

Final Comment Period

Every week the teams announce a 'final comment period' for RFCs which are reaching a decision. Express your opinions now. This week's RFCs entering FCP are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

"Greek constitution to be rewritten in #rustlang to deal with their ownership and borrowing problem."@bigthingist

Submit your quotes for next week!

Planet Mozilla Interns: Jonathan Wilde: Gossamer Sprint Two, Day Three

Want more context? See the introduction to Gossamer and previous update.

Another Demo

Let’s say you make a code change to your browser and you want it today. After making your change, you need to restart the app or, in the case of browser.html, clear caches and refresh the page.

With our experimental fork of browser.html, we can now apply a lot of different types of changes without a refresh.

Let’s say we want to change the experiments icon in the upper right of our browser and make it red and larger. You just make the change and hit save. The changes appear in your running browser, without any loss of state.

We’re doing this with Webpack Hot Module Replacement and React Hot Loader.

In the demo, I’m running browser.html from Webpack’s development server. It watches and serves the browser.html files from my working copy, performs incremental module builds, and has an open socket.io connection to the browser notifying it of build status.

When the working copy changes, it performs an incremental build and notifies the browser of new code. The browser can apply the changes without a restart.

What I Did on Saturday

  • Restructured browser.html so that there is one component exported per file, which plays better with React Hot Loader at the moment.
  • Moved browser.html to CommonJS, which plays better with the early builds of Webpack 2 at the moment.
  • Shifted the npm start script from ecstatic to Webpack’s development server.

Next Steps

  • Graft webpack onto the Gossamer build server.

Adrian Gaudebert: Rethinking Socorro's Web App

rewrite-cycle.jpg
Credits @lxt

I have been thinking a lot about what we could do better with Socorro's webapp over the last few months (and even longer: the first discussions I had about this with phrawzty date from spring of last year). Recently, in a meeting with Lonnen (my manager), I said "this is what I would do if I were to rebuild Socorro's webapp from scratch today". In this post I want to write down what I said and elaborate on it, in the hope that it will serve as a starting point for upcoming discussions with my colleagues.

State of the Art

First, let's take a look at the current state of the webapp. According to our analytics, there are 5 parts of the app that are heavily consulted, and a bunch of other less used pages. The core features of Socorro's front-end are:

Those we know people are looking at a lot. Then there are other pages, like Crashes per User, Top Changers, Explosive Crashes, GC Crashes and so on that are used from "a lot less" to "almost never". And finally there's the public API, on which we don't have much analytics, but which we know is being used for many different things (for example: Spectateur, crash-stats-api-magic, Are we shutting down yet?, Such Comments).

The next important thing to take into account is that our users oftentimes ask us for some specific dataset or report. Those are useful at a point in time for a few people, but will soon become useless to anyone. We used to try and build such reports into the webapp (and I suppose the ones from above that are not used anymore fall into that category), but that costs us time to build and time to maintain. And that also means that the report will have to be built by someone from the Socorro team who has time for it, it will go through review and testing, and by the time it hits our production site it might not be so useful anymore. We have all been working on trying to reduce that "time to production", which resulted in the public API and Super Search. And I'm quite sure we can do even better.

Building Reports

bob-the-builder.jpg

Every report is, at its core, a query of one or several API endpoints, some logic applied to the data from the API, and a view generated from that data. Some reports require very specific data, asking for dedicated API endpoints, but most of them could be done using either Super Search alone or some combination of it with other API endpoints. So maybe we could facilitate the creation of such reports?

Let us put aside the authentication and ACL features, the API, the admin panel, and a few very specific features of the web app, to focus on the user-facing features. Those can be simply considered as a collection of reports: they all call one or several models, have a controller that does some logic, and then are displayed via a Django template. I think what we want to give our users is a way to easily build their own reports. I would like them to be able to answer their needs as fast as possible, without depending on the Socorro team.

The basic brick of a fresh web app would thus be a report builder. It would be split in 3 parts:

  • the model controls the data that should be fetched from the API;
  • the controller gets that data and performs logic on it, transforming it to fit the needs of the user;
  • and the view will take the transformed data and turn it into something pretty, like a table or a graph.

Each report could be saved, bookmarked, shared with others, forked, modified, and so on. Spectateur is a prototype of such a report builder.

We developers of Socorro would use that report system to build the core features of the app (top crashers, home page graphs, etc.), maybe with some privileges. And then users will be able to build reports for their own use or to share with teammates. We know that users have different needs depending on what they are working on (someone working on Firefox OS will not look at the same reports as someone working on Thunderbird), so this would be one step towards allowing them to customize their Socorro.

One Dashboard to Rule Them All

So users can build their own reports. Now what if we pushed customization even further? Each report has a view part, and that's what would be of interest to people most of the time. Maybe we could make it easy for a user to quickly see the main reports that are of interest to them? My second proposal would be to build a dashboard system, which would show the views of various reports on a single page.

A dashboard is a collection of reports. It is possible to remove or add new reports to a dashboard, and to move them around. A user can also create several dashboards: for example, one for Firefox Nightly, one for Thunderbird, one for an ongoing investigation... Dashboards only show the view part of a report, with links to inspect it further or modify it.

dashboard-example.png

An example of what a dashboard could look like.

Socorro As A Platform

The overall idea of this new Socorro is to make it a platform where people can find what they want very quickly, personalize their tool, and build whatever feature they need that does not exist yet. I would like it to be a better tool for our users, to help them be even more efficient crash killers.

I can see several advantages to such a platform:

  • time to create new reports is shorter;
  • people can collaborate on reports;
  • users can tweak existing reports to better fit their needs;
  • people can customize the entire app to be focused on what they want;
  • when you give data to people, they build things that you did not even dream about. I expect that will happen on Socorro, and people will come up with incredibly useful reports.

I Need Feedback

feedback-everywhere.jpg

Concretely, the plan would be to build a brand new app along the existing one. The goal won't be to replace it right away, but instead to build the tools that would then be used to replace what we currently have. We would keep both web apps side by side for a time, continuing to fix bugs in the Django app, but investing all development time in the new app. And we would slowly push users towards the new one, probably by removing features from the Django app once the equivalent is ready.

I would love to discuss this with anyone interested. The upcoming all-hands meeting in Whistler is probably going to be the perfect occasion to have a beer and share opinions, but other options would be fine (email, IRC...). Let me know what you think!

Kaustav Das Modak: Decentralizing Mozilla India Evangelism Task Force

Some of the key goals of the Mozilla India Task Force Meetup ’15 included ensuring an organizational structure which promotes larger inclusion, allows wider participation and better recognition of contributors. We had precious little time to discuss about the Evangelism Task Force (ETF) in this year’s meetup, but the focus was to build an inter-disciplinary […]

Ted Clancy: RAII helper macro

One of the most important idioms in C++ is “Resource Acquisition Is Initialization” (RAII), where the constructor of a class acquires a resource (like locking a mutex, or opening a file) and the corresponding destructor releases the resource.

Such classes are almost always used as local variables. The lifetime of the variable lasts from its point of declaration to the end of its innermost containing block.

A classic example is something like:

extern int n;
extern Mutex m;

void f() {
    Lock l(&m); // This is an RAII class.
    n++;
}

The problem is, code written with these classes can easily be misunderstood.

1) Sometimes the variable is used solely for the side-effects of its constructor and destructor. To a naive coder, this can look like an unused variable, and they might be tempted to remove it.

2) In order to control where the object is destroyed, the coder sometimes needs to add another pair of braces (curly brackets) to create a new block scope. Something like:

void f() {
    [...]

    {
        Lock l(&m);
        n++;
    }

    [...]
}

The problem is, it’s not always obvious why that extra block scope is there. (For trivial code, like the above, it’s obvious. But in real code, ‘l’ might not be the only variable defined in the block.) To a naive coder, it might even look like an unnecessary scope, and they might be tempted to remove it.

The usual solutions to this situation are: (a) Write a comment, or (b) trust people to understand what you meant. Those are both bad options.

That’s why I’m fond of the following MACRO.

#define with(decl) \
for (bool __f = true; __f; ) \
for (decl; __f; __f = false)

This allows you to write:

void f() {
    [...]

    with (Lock l(&m)) {
        n++;
    }

    [...]
}

This creates a clear association between the RAII object and the statements which depend on its existence. The code is less likely to be misunderstood.

I like to call this a with-statement (analogous to an if-statement). Any kind of variable declaration can go in the head of the with-statement (as long as it can appear in the head of a for-statement). The body of the with-statement executes once. The variable declared in the head of the statement is destroyed after the body executes.

Your code editor might even highlight ‘with’ as a keyword, since it’s a keyword in JavaScript (where it has a different — and deprecated — purpose).

I didn’t invent this kind of MACRO. (I think I first read about something similar in a Dr. Dobb’s article.) I just find it really useful, and I hope you do too.


Planet Mozilla Interns: Jonathan Wilde: Gossamer Sprint Two, Days One and Two

I’m currently working with Lyre Calliope on a project to improve tooling for developing and sharing web browser features.

I’ll be documenting my progress on this Mediapublic-style.

First, a Little Demo

In order to tinker with your web browser’s source today, you need to download a working copy of the source, set up a build environment, and have your text editor selected and configured. It can take hours, even for people who’ve done it before.

Why can’t we just edit and share web browser UI changes from a web application, like we can with documents and other things?

In our experimental fork of browser.html, we can open up the GitHub web interface (even from the browser you’re trying to edit), edit the color, and when the update popup appears in the web browser, click “Apply”.

We don’t have to configure Gossamer to continuously build and ship our branches, and other people testing the same Gossamer branch receive that update, too.

In case you’re curious, here’s the commit I made in the demo.

What I Did Thursday and Friday

  • Removed analytics and the news feed for now; the goal is to remove as much UI as possible.
  • Restructured the build process. Our one cheap build transform is performed as late as possible, and not muddled in with our cache of repo data.
  • Removed the need to explicitly register branches as experiments.
  • Added a “base” branch that is used when you don’t have a branch explicitly selected, and dropped the requirement to be logged in to access builds.
  • Added webhook support for receiving push notifications.

Next

  • Standing up Webpack and react-hot-loader locally with minimal changes to browser.html.

Stay tuned for more!

Ted Clancy: ICANN, wtf?

If you care about privacy on the internet, you might want to check out this link: icann.wtf


Mozilla Reps Community: Rep of the month – May 2015

Please join us in congratulating Mahmood Qudah of Jordan for being selected as Mozilla Rep of the Month for May 2015.

Mahmood is not just an active member of the Jordan local community but also very active in the bigger Mozilla Arabic community. He gives lectures at universities in Jordan (many FSA events too), participates in the marketing team for the Arabic community, and helps fix RTL (right-to-left) issues in Firefox OS. In addition, he has given a few lectures teaching students how to start localizing Firefox OS.

Every month he has one or two events, either organizing them or taking part in them. We’re happy to see that he’s blasting both communities with new ideas every day.

Congratulations, Mahmood! Keep on rocking.

Don’t forget to congratulate him on Discourse!

Carsten Book: 7 Years at Mozilla!

Hi,

as of last month, I’ve now been at Mozilla for 7 years as a full-time employee \o/

Of course, I’ve been around longer than that, because I started as a community member in QA years before. And it’s been a long way from my first steps in QA to my current role as Code Sheriff @ Mozilla.

I never actively planned to join the Mozilla community; it just happened :) Back in 2001 I worked at a German email provider as a 2nd-level support engineer, and as part of my job (and also to support customers) we used different email programs to find out how to set each one up and so on.

Some friends already involved in open source (some Linux fans) pointed me to this Mozilla program (at that time M1 or so), and I liked the idea of this “Nightly”: having a brand-new program every day was something really cool. And so my way into the community started, without me even knowing that I was now part of the community.

So over the years with Mozilla I finally filed my first bug and was scared like hell (all these new fields, in a non-native language), not really knowing what I had signed up for when I clicked that “submit” button in Bugzilla :) (I was not even sure if I was now supposed to fix the bug myself :)

And now I file dozens of bugs every day while on sheriff duty or doing other tasks :)

I have learned a lot over the last years and still love being part of Mozilla; it’s the best place to work for me! So on to the next years at Mozilla!

– Tomcat

Carsten Book: a day in sheriffing

Hi,

since I talk with a lot of people about sheriffing and what we do, here is what a typical day looks like for me:

We take care of the code trees, watching for test failures and similar issues.

I usually start the day by checking the trees we are responsible for for test failures, using Treeherder. This gives me a first overview of the current status, and also makes sure that everything is OK for the Asian and European community members who are online at that time.

This task is ongoing until the end of my duty shift. From time to time this means we have to do backouts for code/test regressions.
Besides this, I do things like checkin-neededs, uplifts, and other tasks, and of course I’m always available for questions on IRC :)

I’ve also been thinking about some parts of my day-to-day experience:

Backouts and Tree Closures:

While backing out code for test failures/bustages is one important task for sheriffs (along with managing the related tree closures), it’s always a mixed feeling to back out someone’s work (and no one wants to cause a bustage), but it’s important to ensure the quality of our products.

Try Server!!!

Tree closures due to backouts can have the side effect of blocking others from landing. So if you are in doubt whether your patch compiles or could cause test regressions, please consider a try run; it helps a lot to keep tree closures for code issues to a minimum.

And last but not least, sheriffing is a community task! So if you want to be part of the sheriff team as a community sheriff, please send me a mail at tomcat at mozilla dot com.

Thanks!

– Tomcat

About:Community: MDN at Whistler

The MDN community was well-represented at the Mozilla “Coincidental Work Week” in Whistler, British Columbia, during the last week in June. All of the content staff, a couple of the development staff, and quite a few volunteers were there. Meetings were met, code was hacked, docs were sprinted, and fun was funned.

Cross-team conversations

One of the big motivations for the “Coincidental Work Week” is the opportunity for cross-pollination among teams, so that teams can have a high-bandwidth conversations about their work with others. MDN touches many other functional groups within Mozilla, so we had a great many of these conversations. Some MDN staff were also part of “durable” (i.e., cross-functional) teams within the Engagement department, meeting with product teams about their marketing needs. Among others, MDN folks met with:

  • Add-ons, about future plans for add-ons.
  • Mozilla Foundation, about MDN’s role in broader learning initiatives, and about marketing themes for the last half of the year.
  • Firefox OS, about their plans in the next six months, and about increasing participation in Firefox OS.
  • Developer Relations and Platform Engineering, about improving coordination and information sharing.
  • Firefox Developer Tools, about integrating more MDN content into the tools, and making the dev-tools codebase more accessible to contributors.
  • Participation, to brainstorm ways to increase retention of MDN contributors.

Internal conversations

The MDN community members at Whistler spent some time as a group reflecting on the first half of the year, and planning and prioritizing for the second half of the year. Sub-groups met to discuss specific projects, such as the compatibility data service, or HTML API docs.

Hacking and sprinting

Lest the “Work Week” be all meetings, we also scheduled time for heads-down productivity. MDN was part of a web development Hack Day on Wednesday, and we held doc sprints for most of the day on Thursday and Friday. These events resulted in some tangible outputs, as well as some learning that will likely pay off in the future.

  • Heather wrote glossary entries and did editorial reviews.
  • Sebastian finished a new template for CSS syntax.
  • Sheppy worked on an article and code sample about Web RTC.
  • Justin finished a prototype feature for helpfulness ratings for MDN articles.
  • Saurabh prototyped an automation for badge nominations, and improved CSS reference pages’ structure and syntax examples.
  • Klez got familiar with the compatibility service codebase and development workflow; he also wrote glossary entries and other learning content.
  • Mark learned about Kuma by failing to get it running on Windows.
  • Will finished a patch to apply syntax highlighting to the CSS content from MDN in Dev Tools.

And fun!

Of course, the highlight of any Mozilla event is the chance to eat, drink, and socialize with other Mozillians. Planned dinners and parties, extracurricular excursions, and spontaneous celebrations rounded out the week. Many in the MDN group stayed at a hotel that happened to be a 20-minute walk from most of the other venues, so those of us with fitness trackers blew out our step-count goals all week. A few high points:

  • Chris celebrated his birthday at the closing party on Friday, at the top of Whistler mountain.
  • Mark saw a bear, from the gondola on the way up to the mountain-top party.
  • Saurabh saw snow for the first time. In June, no less.

    Nick Alexander: Build Fennec frontend fast with mach artifact!

    Nota bene: this post supersedes Build Fennec frontend fast!

    Quick start

    It’s easy! But there is a pre-requisite: you need to enable Gregory Szorc’s mozext Mercurial extension [1] first. mozext is part of Mozilla’s version-control-tools repository; run mach mercurial-setup to make sure your local copy is up-to-date, and then add the following to the .hg/hgrc file in your source directory:

    [extensions]
    mozext = /PATH/TO/HOME/.mozbuild/version-control-tools/hgext/mozext
    

    Then, run hg pushlogsync. Mercurial should show a long (and slow) progress bar [2]. From now on, each time you hg pull, you’ll also maintain your local copy of the pushlog.

    Now, open your mozconfig file and add:

    ac_add_options --disable-compile-environment
    mk_add_options MOZ_OBJDIR=./objdir-frontend
    

    (That last line uses a different object directory — it’s worth experimenting with a different directory so you can go back to your old flow if necessary.)

    Then mach build and mach build mobile/android as usual. When it’s time to package an APK, use:

    mach artifact install && mach package
    

    instead of mach package [3]. Use mach install like normal to deploy to your device!

    After running mach artifact install && mach package once, you should find that mach gradle-install, mach gradle app:installDebug, and developing with IntelliJ (or Android Studio) work like normal as well.

    Disclaimer

    This only works when you are building Fennec (Firefox for Android) and developing JavaScript and/or Fennec frontend Java code! If you’re building Firefox for Desktop, this won’t help you. If you’re building C++ code, this won’t help you.

    The integration currently requires Mercurial. Mozilla’s release engineering runs a service mapping git commit hashes to Mercurial commit hashes; mach artifact should be able to use this service to provide automatic binary artifact management for git users.

    Discussion

    mach artifact install is your main entry point: run this to automatically inspect your local repository, determine good candidate revisions, talk to the Task Cluster index service to identify suitable build artifacts, and download them from Amazon S3. The command caches heavily, so it should be fine to run frequently; and the command avoids touching files except when necessary, so it shouldn’t invalidate builds arbitrarily.

    The reduction in build time comes from --disable-compile-environment: this tells the build system to never build C++ libraries (libxul.so and friends) [4]. On my laptop, a clobber build with this configuration completes in about 3 minutes [5]. This configuration isn’t well tested, so please file tickets blocking Bug 1159371.

    Troubleshooting

    Run mach artifact to see help.

    I’m seeing problems with pip

    Your version of pip may be too old. Upgrade it by running pip install --upgrade pip.

    I’m seeing problems with hg

    Does hg log -r pushhead('fx-team') work? If not, there’s a problem with your mozext configuration. Check the pre-requisites again.

    What version of the downloaded binaries am I using?

    mach artifact last displays the last artifact installed. You can see the local file name; the URL the file was fetched from; the Task Cluster job URL; and the corresponding Mercurial revision hash. You can use this to get some insight into the system.

    Where are the downloaded binaries cached?

    Everything is cached in ~/.mozbuild/package-frontend. The commands purge old artifacts as new artifacts are downloaded, keeping a small number of recently used artifacts.

    I’m seeing weird errors and crashes!

    Since your local build and the upstream binaries may diverge, lots of things can happen. If the upstream binaries change a C++ XPCOM component, you may see a binary incompatibility. Such a binary incompatibility looks like:

    E GeckoConsole(5165)          [JavaScript Error: "NS_ERROR_XPC_GS_RETURNED_FAILURE: Component returned failure code: 0x80570016 (NS_ERROR_XPC_GS_RETURNED_FAILURE) [nsIJSCID.getService]" {file: "resource://gre/modules/Services.jsm" line: 23}]
    

    You should update your tree (using hg pull -u --rebase or similar) and run mach build && mach artifact install && mach package again.

    How can I help debug problems?

    There are two commands to help with debugging: print-cache and clear-cache. You shouldn’t need either; these are really just to help me debug issues in the wild.

    Acknowledgements

    This work builds on the contributions of a huge number of people. First, @indygreg supported this effort from day one and reviewed the code. He also wrote mozext and made it easy to access the pushlog locally. None of this happens without Greg. Second, the Task Cluster Index team deserves kudos for making it easy to download artifacts built in automation. Anyone who’s written a TBPL scraper knows how much better the new system is. Third, I’d like to thank @liucheia for testing this with me in Whistler, and /u/vivek for proof-reading this blog post.

    Conclusion

    In my blog post The Firefox for Android build system in 2015, the first priority was making it easier to build Firefox for Android the first time. The second priority was reducing the edit-compile-test cycle time. The mach artifact work described here drastically reduces the first compile-test cycle time, and subsequent compile-test cycles after pulling from the upstream repository. It’s hitting part of the first priority, and part of the second priority. Baby steps.

    The Firefox for Android team is always making things better for contributors! Get involved with Firefox for Android.

    Discussion is best conducted on the mobile-firefox-dev mailing list and I’m nalexander on irc.mozilla.org/#mobile and @ncalexander on Twitter.

    Changes

    • Wed 1 July 2015: Initial version.
    • Mon 6 July 2015: fix typo in link to Vivek. Thanks, sfink!

    Notes

    [1]I can’t find documentation for mozext anywhere, but http://gregoryszorc.com/blog/2013/07/22/mercurial-extension-for-gecko-development/ includes a little information. mach artifact uses mozext to manage the pushlog.
    [2]The long (and slow) download is fetching a local copy of the pushlog, which records who pushed what commits when to the Mozilla source tree. mach artifact uses the pushlog to determine good candidate revisions (and builds) to download artifacts for.
    [3]We should make this happen automatically.
    [4]In theory, --disable-compile-environment also means we don’t need a host C++ toolchain (e.g., gcc targeting Mac OS X) nor a target C++ toolchain (e.g., the Android NDK). This is not my primary motivation but I’m happy to mentor a contributor who wanted to test this and make sure it works! It would be a nice win: you could get a working Fennec build with fewer (large!) dependencies.
    [5]I intend to profile mach build in this case and try to improve it. Much of the build is essentially single-threaded in this configuration, including compiling the Java sources for Fennec. Splitting Fennec into smaller pieces and libraries would help, but that is hard. See for example Bug 1104203.

    Aaron Thornburgh: Divining the Future of New Tab

    What’s next on New Tab for Firefox Desktop users?
    The future of New Tab?

    New Tab has come a long way since earlier last year.

    It started with rounded corners and a few tweaked buttons. Directory Sites landed for new users shortly thereafter, seeding their Firefox experience with content from Mozilla and a sponsored partner. Soon, Firefox 40 Beta users will begin noticing Suggested Sites related to their browsing history, along with a restyled interface and updated page controls.

    But there’s so much more to the story.

    The following are some of the experiments we’ve been thinking about for New Tab later this year. All of them are focused on user control, feedback, and discovery. We hope to land many of these features; others may get tossed entirely. Ultimately, aggressive user research will help us determine which ones are worth shipping.

    +++++

    Experiment #1: More Control.

    View the full presentation: Enhanced User Controls for New Tab on Firefox (PDF – 5.9 MB)

    Deeper insights.

    Transparency + control = trust. These days, everything I design is based on this formula. When it comes to Suggested Sites, users should understand why they’re seeing a particular suggestion, and have the ability to manipulate their preferences.

    Interest category flyout menu: All suggestions include a corresponding interest category. Clicking the category (ex: “automotive”) reveals more controls.

    Part of the solution is obvious: include clear labels or explanations where and when appropriate. Naturally, any Suggested Site will include a label. More importantly, though, the interest category a suggestion relates to should allow for more control as well.

    Interest category layover, with options: Clicking “View all categories” launches a control panel with any options available – including the option to turn off suggestions altogether.

    Combined, these functions would provide users both the context and transparency we’ve been promising. (OK, it’s a start. We still have so much more to learn about this.)

    Add a site of your own.

    Users have been able to delete sites since New Tab’s introduction, but it was never evident how to add a site of their own. After deleting an unwanted site, it should instead be super-easy to choose a new one. Additionally, users should have the ability to see a logo, the homepage, or the last page they visited.

    Deleting a Suggested Site: Don’t like a suggestion? Select “Not interested” from the menu (available on rollover). Boom. Gone forever.
    Add a site - default: After deleting a site, a user can add one of their own by clicking the giant “+” button. Doing so launches a new control panel.
    Adding a site - defined: Once the user enters a URL in the Website field, they can then determine how the site should display on New Tab.
    Top Sites get some love.

    Logo or thumbnail images of destination pages may help users identify each Top Site, but what if they want to know more about their activity related to a particular site? The History feature on Firefox has always been difficult to navigate, and requires the user to engage with multiple functions of the browser.

    See more information about any site on New Tab via the controls menu (available on rollover).

    By selecting “About this site” from the tile control menu, a user could perhaps see information regarding the site’s purpose, the interest it relates to, and the user’s most recent browsing history – all in one view.

    After clicking “About this site”, an overlay reveals the related interest category, a brief description, and all recent browsing history.

    Which got me thinking: why not just put all of their history right on New Tab? Those looking for a certain, recently visited page could search via a simple dropdown, which would then list their most visited sites and corresponding browsing history.

    By clicking on the clock icon, a user can view their recent browsing activity, sorted by their top destinations. Clicking on a site will reveal its full history.

    Finally, no more digging! Just click and scroll.

    Experiment #2: More Value.

    View the full presentation: Feeds, Groups & User Feedback for New Tab on Firefox (PDF – 8.5 MB)

    Feed that need.

    When a new user downloads Firefox and tries New Tab, they see a bunch of Mozilla stuff. When current users view New Tab, they can see their recent sites… but not their “stuff” contained therein. If anything, they might see a single content page headline.

    Not for long. One day soon, users may be able to add feeds from their favorite destinations on the Web.

    Directory Sites - default view for new Firefox users: New Firefox users will see sites from Mozilla, a partner, and an empty tile. Rollover and click the “+” to “Add a site”…
    The “Add a site” control panel displays. Once the user starts typing an address in the Website field, URLs are automatically suggested…
    Selecting a URL populates the “tile” shown on the left. In this case, a content feed is available, and is selected by default.
    Content feed added: Clicking “Save” adds the new content feed from the user’s preferred site on New Tab. Rollover the site to scroll through recent headlines.

    People who want to keep a tidy New Tab could do so. Those who prefer frequent updates from their favorite sites could find them all in one place. In this way, the user decides entirely how much – or how little – they want to see.

    Interesting Groups.

New Tab is all about getting users onto their next task online efficiently, yet there is currently no way to organize it around common tasks. Personally, I visit 25+ different sites on any given day, but they’re all related to only a handful of core interests (car blogs, news sites, technology research, etc.).

    To fix this, I imagine offering users the ability to create a “meta-group”, based on a core interest. Unlike “folders”, the group contents would become accessible “buttons” that link to their preferred sites in that category.

Click and hold a site to grab: Grouping made easy: Click and grab any tile…
Drag and drop one site onto another: Drag one site onto another to automatically create a new group.
Editing a group: After creating a new group, a control panel will display. Build the rest of the group in a single view…
Group added: Click “Save” to add the group to New Tab. Done.

    Essentially, this makes room for hundreds of possible destinations one could see on New Tab (not that anyone would want to). And if creating interest groups were easy, it could transform the way people use Firefox altogether.

    “How did that content make you feel?”

Say a user sees a Suggested Site on New Tab. It looks interesting, so they click on it. They’re taken to a content page on a site they have never seen before.

Click a suggested site to see the destination page: A user sees a Suggested Site (lower left) that looks interesting. Clicking the site takes them to a destination page.

    From the publisher’s perspective, the user clicked. Success!

    From a user’s perspective, they’ve donated their time. Was there a payoff?

    Now, after they’ve viewed that content – and after they’ve returned once again to New Tab – the user may be thinking one of two things: 1.) “Worth it!” or 2.) “Totally not worth it.”

Rating a Suggested Site: When the user opens a New Tab again, they will have the option to rate the Suggested Site they viewed.
Feedback received: If the rating is positive, the Suggested Site automatically becomes a History Site. In this case, a content feed is available.

    Just by adding a bare-bones rating system for all Suggested Content, users would instantly have the ability to communicate something beyond their click: their actual reaction.

    Creators of outstanding content experiences would be rewarded. Content which fails to meet the standards of everyday users would be flagged and purged. The ecosystem could have a real incentive to make content truly better – just by harnessing real user feedback (for FREE!).

    Experiment #3: More Discovery.

    View the full presentation: Combating Pervasive Boredom on Firefox New Tab (PDF – 2.1 MB)

    Bursting bubbles. Finding new ones.

When you’re home, you’re comfortable. Everything is familiar. Everything is in its place.

    That sounds terribly boring to me. I suspect others feel the same way.

    The same could be said about New Tab.

New Tab - default view: Default view of New Tab. Boring.

    What if New Tab could offer a break from the normal? What if it wasn’t so dang task-oriented?

What if the user wanted to experience entirely new things that were all about his or her top interests?

    What if-

    CATS!

New Tab - CATS!: New Tab Cat takeover. Not boring.

    That’s all for now. As New Tab evolves, so will the creative thinking.


    Will Kahn-GreeneInput: Thank You project Phase 1: Part 1

    Summary

    Beginning

    When users click on "Submit feedback..." in Firefox, they end up on our Input site where they can let us know whether they're happy or sad about Firefox and why. This data gets collected and we analyze it in aggregate looking for trends and correlations between sentiment and subject matter, releases, events, etc. It's one of the many ways that users directly affect the development of Firefox.

One of the things that's always bugged me about this process is that some number of users are leaving feedback about issues they have with Firefox that aren't problems with the product at all, or that are known issues with workarounds. It bugs me because these users go out of their way to leave us feedback and then get a Thank You page that isn't remotely helpful to them.

    I've been thinking about this problem since the beginning of 2014, but hadn't had a chance to really get into it until the end of 2014 when I wrote up a project plan and some bugs.

    In the first quarter of 2015, Adam worked on this project with me as part of the Outreachy program. I took the work that he did and finished it up in the second quarter of 2015.

    Surprise ending!

    The code has been out there for a little under a month now and early analysis suggests SUCCESS!

    But keep reading for the riveting middle!

    This blog post is a write-up for the Thank You project phase 1. It's long because I wanted to go through the entire project beginning to end as a case study.

    Read more… (17 mins to read)

    Mozilla Open Policy & Advocacy BlogDecisive moment for net neutrality in Europe

After years of negotiations, the E.U. Telecom Single Market Regulation (which includes proposed net neutrality rules) is nearing completion. If passed, the Regulation will be binding on all E.U. member states. The policymakers – the three European governmental bodies: the Parliament, the Commission, and the Council – are at a crossroads: implement real net neutrality into law, or permit net discrimination and in doing so threaten innovation and competition. We urge European policymakers to stand strong, adopt clear rules to protect the open Internet, and set an example for the world.

    At Mozilla, we’ve taken a strong stance for real net neutrality, because it is central to our mission and to the openness of the Internet. Just as we have supported action in the United States and in India, we support the adoption of net neutrality rules in Europe. Net neutrality fundamentally protects competition and innovation, to the benefit of both European Internet users and businesses. We want an Internet where everyone can create, participate, and innovate online, all of which is at risk if discriminatory practices are condoned by law or through regulatory indifference.

The final text of the European legislation is still being written, and the details are still taking shape. We have called for strong, enforceable rules against blocking, discrimination, and fast lanes, all of which are critical to protecting the openness of the Internet. To accomplish this, the European Parliament needs to hold firm to its five votes in the last five years for real net neutrality. Members of the European Parliament must resist internal and external pressures to build in loopholes that would threaten those rules.

    Two issues stand out as particularly important in this final round of negotiations: specialized services and zero-rating. On the former, specialized services – or “services other than Internet access services” – represent a complex and unresolved set of market practices, including very few current ones and many speculative future possibilities. While there is certainly potential for real value in these services, absent any safeguards, such services risk undermining the open Internet. It’s important to maintain a baseline of robust access, and prevent relegating the open Internet to a second tier of quality.

    Second, earlier statements from the E.U. included language that appeared to endorse zero-rating business practices. Our view is that zero-rating as currently implemented in the market is not the right path forward for the open Internet. However, we do not believe it is necessary to address this issue in the context of the Telecom Single Market Regulation. As such, we’re glad to see such language removed from more recent drafts and we encourage European policymakers to leave it out of the final text.

    The final text that emerges from the European process will set a standard not only for Europe but for the rest of the world. It’s critical for European policymakers to stand with the Internet and get it right.

    Chris Riley, Head of Public Policy
    Jochai Ben-Avie, Internet Policy Manager

    The Mozilla BlogNew Sharing Features in Firefox

Whichever social network you choose, it’s undeniable that being social is a key part of why you enjoy the Web. Firefox is built to put you in control, including making it easier to share anything you like on the Web’s most popular social networks. Today, we’re announcing that Firefox Share has been integrated into Firefox Hello. We introduced Firefox Share to offer a simple way of sharing Web content across popular services such as Facebook, Twitter, Tumblr, LinkedIn and Google+ and other social and email services (full list here) to help you share anything on the Web with any or all of your friends.

    Firefox Hello link sharing

    Firefox Hello, which we’ve been developing in beta with our partner, Telefonica, is the only in-browser video chat tool that doesn’t require an account or extra software downloads. We recently added screen sharing to Firefox Hello to make it easier to share anything you’re looking at in your video call. Now you can also invite friends to a Firefox Hello video call by sharing a link via the social network or email account of your choice, all without leaving your browser tab. That includes a newly added Yahoo Mail integration in Firefox Share that lets Yahoo Mail users share Hello conversation links or other Web content directly from Firefox Share.

    For more information:
    Release Notes for Firefox for Windows, Mac, Linux
    Release Notes for Android
    Download Firefox

    Air MozillaGerman speaking community bi-weekly meeting

    German speaking community bi-weekly meeting https://wiki.mozilla.org/De/Meetings

    Mozilla Addons BlogT-Shirt Form Data Exposure

On Monday, June 15, 2015 Mozilla announced on the Add-ons blog a free special edition t-shirt for eligible AMO developers. Eligible developers were asked to sign up via a Google Form and to input their full name, full address, telephone number and T-shirt size.

    This document was mistakenly configured to allow potential public access for less than 24 hours, exposing the response data for 70 developers. As soon as the incident was discovered, we immediately changed the permission level to private access. Other than the developer who discovered and reported this incident, we are not aware of anyone without authorization accessing the spreadsheet.

    We have notified the affected individuals. We regret any inconvenience or concern this incident may have caused our AMO developer community.

    Air MozillaWeb QA Weekly Meeting

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

    Roberto A. VitilloTelemetry metrics roll-ups

    Our Telemetry aggregation system has been serving us well for quite some time. As Telemetry evolved though, maintaining the codebase and adding new features such as keyed histograms has proven to be challenging. With the introduction of unified FHR/Telemetry, we decided to rewrite the aggregation pipeline with an updated set of requirements in mind.

    Metrics

A ping is the data payload that clients submit to our server. The payload contains, among other things, over a thousand metrics of the following types:

• numerical, e.g. startup time
• categorical, e.g. operating system name
• distributional, e.g. garbage collection timings

Distributions are implemented with histograms that come in different shapes and sizes. For example, a keyed histogram represents a collection of labelled histograms. It’s not rare for keyed histograms to have thousands of possible labels. For instance, MISBEHAVING_ADDONS_JANK_LEVEL, which measures the longest blocking operation performed by an add-on, potentially has a label for each extension.
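
To make that concrete, a keyed histogram can be pictured as a plain mapping from labels to ordinary histograms. The snippet below is a hypothetical, heavily simplified payload fragment (the add-on IDs and bucket values are invented):

# Hypothetical, simplified sketch of a keyed histogram in a ping payload.
# Each label (here, an add-on ID) gets its own histogram of bucket counts.
keyed_histogram = {
    "MISBEHAVING_ADDONS_JANK_LEVEL": {
        "some-addon@example.com":  {"buckets": [0, 1, 3, 7], "counts": [12, 5, 2, 0]},
        "other-addon@example.com": {"buckets": [0, 1, 3, 7], "counts": [80, 1, 0, 0]},
    }
}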

    The main objective of the aggregator is to create time or build-id based aggregates by a set of dimensions:

    • channel, e.g. nightly
    • build-id or submission date
    • metric name, e.g. GC_MS
    • label, for keyed histograms
    • application name, e.g. Fennec
    • application version, e.g. 41
    • CPU architecture, e.g. x86_64
    • operating system, e.g. Windows
    • operating system version, e.g. 6.1
    • e10s enabled
    • process type, e.g. content or parent

    As scalar and categorical metrics are converted to histograms during the aggregation, ultimately we display only distributions in our dashboard.
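
As a minimal sketch of that conversion (the real pipeline’s bucketing logic is more involved), a scalar can be folded into bucketed counts like this:

from collections import Counter

def scalar_to_histogram(values, buckets):
    # Fold raw scalar values into counts over the given bucket lower
    # bounds, so scalars aggregate the same way distributions do.
    # Assumes buckets[0] is a lower bound for all values.
    counts = Counter()
    for v in values:
        counts[max(b for b in buckets if b <= v)] += 1
    return [counts[b] for b in buckets]

# E.g. startup times in milliseconds:
scalar_to_histogram([120, 340, 90, 2000], [0, 100, 250, 500, 1000])
# -> [1, 1, 1, 0, 1]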

    Raw Storage

    We receive millions of pings each day over all our channels. A raw uncompressed ping has a size of over 100KB. Pings are sent to our edge servers and end up being stored in an immutable chunk of up to 300MB on S3, partitioned by submission date, application name, update channel, application version, and build id.

As we are currently collecting v4 submissions only on pre-release channels, we store about 700 GB per day, and that’s counting only saved_session pings, as those are the ones being aggregated. Once we start receiving data on the release channel as well, we are likely going to double that number.

As soon as an immutable chunk is stored on S3, an AWS Lambda function adds a corresponding entry to a SimpleDB index. The index allows Spark jobs to query the set of available pings by different criteria without the need to perform an expensive scan over S3.
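
For illustration, the indexing step might look roughly like the boto sketch below (the domain name, attributes and region are invented; the actual Lambda function is part of our infrastructure and may well differ):

import boto.sdb

def index_chunk(s3_key):
    # Record an S3 chunk in a SimpleDB domain so that Spark jobs can
    # look up pings by dimension instead of scanning the bucket.
    conn = boto.sdb.connect_to_region("us-west-2")
    domain = conn.get_domain("telemetry-published-v4")  # hypothetical name
    # The attributes mirror the S3 partitioning scheme described above.
    domain.put_attributes(s3_key, {
        "submissionDate": "20150616",
        "appName": "Firefox",
        "appUpdateChannel": "nightly",
        "appVersion": "41",
        "appBuildId": "20150602000000",
    })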

    Spark Aggregator

    A daily scheduled Spark job performs the aggregation, on the data received the day before, by the set of dimensions mentioned above. We are likely going to move from a batch job to a streaming one in the future to reduce the latency from the time a ping is stored on S3 to the time its data appears in the dashboard.

Two kinds of aggregates are produced by the aggregator:

    • submission date based
    • build-id based

    Aggregates by build-id computed for a given submission date have to be added to the historical ones. As long as there are submissions coming from an old build of Firefox, we will keep receiving and aggregating data for it. The aggregation of the historical aggregates with the daily computed ones (i.e. partial aggregates) happens within a PostgreSQL database.

    Database

There is only one type of table within the database, which is partitioned by channel, version and build-id (or submission date, depending on the aggregation type).

As PostgreSQL natively supports JSON blobs and arrays, it came naturally to express each row as just a couple of fields: one a JSON object containing a set of dimensions, the other an array representing the histogram. Adding a new dimension in the future should be rather painless, as dimensions are not represented with columns.

When a new partial aggregate is pushed to the database, PostgreSQL finds the current historical entry for that combination of dimensions, if it exists, and updates the current histogram by adding the partially aggregated histogram to it. In reality, a temporary table containing all partial aggregates is pushed to the database and then merged with the historical aggregates, but the underlying logic remains the same.
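
The underlying merge is just an element-wise sum over histograms that share the same bucket layout; a minimal Python sketch of the idea (the actual merge runs inside PostgreSQL):

def merge_histograms(historical, partial):
    # Add a partially aggregated histogram to the historical one,
    # bucket by bucket.
    assert len(historical) == len(partial)
    return [h + p for h, p in zip(historical, partial)]

merge_histograms([309, 12, 5047], [306, 40, 7875])  # -> [615, 52, 12922]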

As the database is usually queried by submission date or build-id, and as there are millions of partial aggregates per day deriving from the possible combinations of dimensions, the table is partitioned the way it is to allow the upsert operation to be performed as fast as possible.

    API

An inverted index on the JSON blobs allows us to efficiently retrieve, and aggregate, all histograms matching a given filtering criterion.

    For example,

    select aggregate_histograms(histograms)
    from build_id_nightly_41_20150602
where dimensions @> '{"metric": "SIMPLE_MEASURES_UPTIME", 
                          "application": "Firefox",
                          "os": "Windows"}'::jsonb;
    
    

    retrieves a list of histograms, one for each combination of dimensions matching the where clause, and adds them together producing a final histogram that represents the distribution of the uptime measure for Firefox users on the nightly Windows build created on the 2nd of June 2015.

Aggregates are made available through an HTTP API. For example, to retrieve the aggregated histogram for the GC_MS metric on Windows for build-ids from 2015/06/15 and 2015/06/16:

    curl -X GET "http://SERVICE/aggregates_by/build_id/channels/nightly/?version=41&dates=20150615,20150616&metric=GC_MS&os=Windows_NT"
    
    

     which returns

    {"buckets":[0, ..., 10000],
     "data":[{"date":"20150615",
              "count":239459,
              "histogram":[309, ..., 5047],
              "label":""},
             {"date":"20150616",
              "count":233688,
              "histogram":[306, ..., 7875],
              "label":""}],
     "kind":"exponential",
     "description":"Time spent running JS GC (ms)"}
    
    

    Dashboard

    Our intern, Anthony Zhang, did a phenomenal job creating a nifty dashboard to display the aggregates. Even though it’s still under active development, it’s already functional and thanks to it we were able to spot a serious bug in the v2 aggregation pipeline.

    It comes with two views, the histogram view designed for viewing distributions of measures:

and an evolution view for viewing the evolution of aggregate values for measures over time:

As we started aggregating data at the beginning of June, the evolution plot looks rightfully wacky before that date.


    Air MozillaReps weekly

    Reps weekly Weekly Mozilla Reps call

    Byron Joneshappy bmo push day!

    the following changes have been pushed to bugzilla.mozilla.org:

    • [1174057] Authentication Delegation should add an App ID column to associate api keys with specific callbacks
    • [1163761] Allow MozReview to skip user consent screen for Authentication Delegation
    • [825946] tracking flags should be cleared when a bug is moved to a product/component where they are not valid
    • [1149593] Hovering over “Copy Summary” changes the button to a grey box
    • [1171523] Change Loop product to Hello
    • [1175928] flag changes made at the same time as a cc change are not visible without showing cc changes
    • [1175644] The cpanfile created by checksetup.pl defines the same feature multiple times, breaking cpanm
    • [1176362] [Voting] When a user votes enough to confirm an individual bug, the bug does not change to CONFIRMED properly
    • [1176368] [Voting] When updating votestoconfirm to a new value, bugs with enough votes are not moved to CONFIRMED properly
    • [1161797] Use document.execCommand(“copy”) instead of flash where it is available
    • [1177239] Please create a “Taskcluster Platform” product
    • [1144485] Adapt upstream Selenium test suite to BMO
    • [1178301] webservice_bug_update.t Parse errors: Bad plan. You planned 927 tests but ran 921
    • [1163170] Giving firefox-backlog-drivers rights to edit the Rank field anywhere it appears in bugzilla
    • [1171758] Persistent xss is possible on Firefox

    discuss these changes on mozilla.tools.bmo.


    Filed under: bmo, mozilla

    Mark FinkleRandom Thoughts on Management

    I have ended up managing people at the last three places I’ve worked, over the last 18 years. I can honestly say that only in the last few years have I really started to embrace the job of managing. Here’s a collection of thoughts and observations:

    Growth: Ideas and Opinions and Failures

    Expose your team to new ideas and help them create their own voice. When people get bored or feel they aren’t growing, they’ll look elsewhere. Give people time to explore new concepts, while trying to keep results and outcomes relevant to the project.

    Opinions are not bad. A team without opinions is bad. Encourage people to develop opinions about everything. Encourage them to evolve their opinions as they gain new experiences.

    “Good judgement comes from experience, and experience comes from bad judgement” – Frederick P. Brooks

    Create an environment where differing viewpoints are welcomed, so people can learn multiple ways to approach a problem.

    Failures are not bad. Failing means trying, and you want people who try to accomplish work that might be a little beyond their current reach. It’s how they grow. Your job is keeping the failures small, so they can learn from the failure, but not jeopardize the project.

    Creating Paths: Technical versus Management

    It’s important to have an opinion about the ways a management track is different than a technical track. Create a path for managers. Create a different path for technical leaders.

    Management tracks have highly visible promotion paths. Organization structure changes, company-wide emails, and being included in more meetings and decision making. Technical track promotions are harder to notice if you don’t also increase the person’s responsibilities and decision making role.

    Moving up either track means more responsibility and more accountability. Find ways to delegate decision making to leaders on the team. Make those leaders accountable for outcomes.

    Train your engineers to be successful managers. There is a tradition in software development to use the most senior engineer to fill openings in management. This is wrong. Look for people that have a proclivity for working with people. Give those people management-like challenges and opportunities. Once they (and you) are confident in taking on management, promote them.

    Snowflakes: Engineers are Different

Engineers, even great ones, have strengths and weaknesses. As a manager, you need to learn these for each person on your team. People can be very strong at starting new projects, building something from nothing. Others can be great at finishing, making sure the work is ready to release. Some excel at user-facing code, others love writing back-end services. Leverage your team’s strengths to efficiently ship products.

    “A 1:1 is your chance to perform weekly preventive maintenance while also understanding the health of your team” – Michael Lopp (rands)

The better you know your team, the less likely you will create bored, passionless drones. Don’t treat engineers as fungible, swappable resources. Set them, and the team, up for success. Keep people engaged and passionate about the work.

    Further Reading

    The Role of a Senior Developer
    On Being A Senior Engineer
    Want to Know Difference Between a CTO and a VP of Engineering?
    Thoughts on the Technical Track
    The Update, The Vent, and The Disaster
    Bored People Quit
    Strong Opinions, Weakly Held

    Karl DubostWeb Components, Stories Of Scars

    Chris Heilmann has written about Web Components.

    If you want to see the mess that is the standardisation effort around web components right now in all its ugliness, Wilson Page wrote a great post on that on Mozilla Hacks. Make sure to also read the comments – lots of good stuff there.

    Indeed a very good blog post to read. Then Chris went on saying:

    Web Components are a great idea. Modules are a great idea. Together, they bring us hours and hours of fun debating where what should be done to create a well-performing, easy to maintain and all around extensible complex app for the web.

    This is twitching in the back of my mind for the last couple of weeks. And I kind of remember a wicked pattern from 10 years ago. Enter Compound Document Formats (CDF) with its WICD (read wicked) specifications. If you think I'm silly, check the CDF FAQ:

    When combining content from arbitrary sources, a number of problems present themselves, including how rendering is handled when crossing from one markup language to another, or how events propagate across the same boundaries, or how to interpret the meaning of a piece of content within an unanticipated context.

    and

Simply put, a compound document is a mixture of content in any number of formats. Compound documents range from static (say, XHTML that includes a simple SVG illustration) to very dynamic (a full-fledged Web Application). A compound document may include its parts directly (such as when you include an SVG image in an XHTML file) or by reference (such as when you embed a separate SVG document in XHTML using an <object> element). There are benefits to both, and the application should determine which one you use. For instance, inclusion by reference facilitates reuse and eases maintenance of a large number of resources. Direct inclusion can improve portability or offline use. W3C will support both modes, called CDR ("compound documents by reference") and CDI ("compound documents by inclusion").

At that time, the Web and W3C were full throttle on XML and namespaces. Now, the cool kids on the block are full HTML, JSON, polymers and JS frameworks. But if you look carefully and remove the syntax and architecture parts, the narrative is the same. And with the narratives of the battle and its scars, Web Components sound very similar to the Compound Document Format.

    Still by Chris

    When it comes to componentising the web, the rabbit hole is deep and also a maze.

Note that not everything was lost from WICD. It helped develop a couple of things, and reimagine the platform. Stay tuned, I think we will have surprises in this story. It's not over yet.

Modularity already has a couple of scars when it comes to large-scale distribution. Remember OpenDoc and OLE. I still remember using Cyberdog. Fun times.

    Otsukare!

    Planet Mozilla InternsJonathan Wilde: A Project Called Gossamer

    A few summers back, I worked on the Firefox Metro project. The big challenge I ran into the first summer–back when the project was at a very early stage–was figuring out how to distribute early builds.

    I wanted to quickly test work-in-progress builds across different devices on my desk without having to maintain a working copy and rebuild on every device. Later on, I also wanted to quickly distribute builds to other folks, too.

    I had a short-term hack based on Dropbox, batch scripts, and hope. It was successful at getting rapid builds out, but janky and unscalable.

The underlying problem space – how do you build, distribute, and test experimental prototypes rapidly? – is one that I’ve been wanting to revisit for a while.

    So, Gossamer

    This summer, Lyre Calliope and I have had some spare time to tinker on this for fun.

    We call this project Gossamer, in honor of the Gossamer Albatross, a success story in applying rapid prototyping methodology to building a human-powered airplane.

    We’re working to enable the following development cycle:

    1. Build a prototype in a few hours or maybe a couple days, and at the maximum fidelity possible–featuring real user data, instead of placeholders.
    2. Share the prototype with testers as easily as sharing a web page.
    3. Understand how the prototype is performing in user testing relative to the status quo, qualitatively and quantitatively.
    4. Polish and move ideas that work into the real world in days or possibly weeks, instead of months or years.

    A First Proof-of-Concept

    We started by working to build a simple end-to-end demonstration of a lightweight prototyping workflow:

    (Yeah, it took longer than two weeks due to personal emergencies on my end.)

    We tinkered around with a few different ways to do this.

    Our proof-of-concept is a simple distribution service that wraps Mozilla’s browser.html project. It’s a little bit like TestFlight or HockeyApp, but for web browsers.

    To try an experimental build, you log in via GitHub, and pick the build that you want to test…and presto!

    Sequence shortened.

    About the login step: When you pick an experiment, you’re picking it for all of your devices logged in via that account.

    This makes cross-device feature testing a bit easier. Suppose you have a feature you want to test on different form factors because the feature is responsive to screen dimensions or input methods. Or suppose you’re building a task continuity feature that you need to test on multiple devices. Having the same experiment running on all the devices of your account makes this testing much easier.

    It also enables us to have a remote one-click escape hatch in case something breaks in the experiment you’re running. (It happens to the best developers!)

    To ensure that you can trust experiments on Gossamer, we integrated the login system with Mozillians. Only vouched Mozillians can ship experimental code via Gossamer.

    To ship an experimental build…you click the “Ship” button. Boom. The user gets a message asking them if they want to apply the update.

    And the cool thing about browser.html being a web application…is that when the user clicks the “Apply” button to accept the update…all we have to do is refresh the window.

    We did some lightweight user testing by having Lyre (who hadn’t seen any of the implementation yet) step through the full install process and receive a new updated build from me remotely.

    We learned a few things from this.

    What We’re Working On Next

    There’s three big points we want to focus on in the next milestone:

    1. Streamline every step. The build service web app should fade away and just be hidden glue around a web browser- and GitHub-centric workflow.
    2. Remove the refresh during updates. Tooling for preserving application state while making hot code changes to web applications based on React (such as browser.html!) is widely available.
    3. Make the build pipeline as fast as possible. Let’s see how short we can make the delay from pushing new code to GitHub (or editing through GitHub’s web interface) to updates appearing on your machine.

    We also want to shift our mode of demo from screencasts to working prototypes.

    Get Involved

    This project is still at a very early stage, but if you’d like to browse the code, it’s in three GitHub repositories:

    • gossamer - Our fork of browser.html.
    • gossamer-server - The build and distribution server.
    • gossamer-larch-patches - Tweaks to Mozilla’s larch project branch containing the graphene runtime. We fixed a bug and made a configuration tweak.

    Most importantly, we’d love your feedback:

    There’s a lot of awesome in the pipeline. Stay tuned!

    Mozilla Addons BlogAdd-ons Update – Week of 2015/07/01

    I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

    Add-ons Forum

    As we announced before, there’s a new add-ons community forum for all topics related to AMO or add-ons in general. The Add-ons category is already the most active one on the community forum, so thank you all for your contributions! The old forum is still available in read-only mode.

    The Review Queues

    • Most nominations for full review are taking less than 10 weeks to review.
    • 272 nominations in the queue awaiting review.
    • Most updates are being reviewed within 8 weeks.
    • 159 updates in the queue awaiting review.
    • Most preliminary reviews are being reviewed within 10 weeks.
    • 295 preliminary review submissions in the queue awaiting review.

A number of factors have led to the current state of the queues: increased submissions, decreased volunteer reviewer participation, and a Mozilla-wide event that took most of our attention last week. We’re back and our main focus is the review queues. We have a new reviewer on our team, who will hopefully make a difference in the state of the queues.

    If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

    Firefox 40 Compatibility

    The Firefox 40 compatibility blog post is up. The automatic compatibility validation will be run in a few weeks.

    As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition (formerly known as Aurora) to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

    Extension Signing

    We announced that we will require extensions to be signed in order for them to continue to work in release and beta versions of Firefox. The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions.

There’s a small change to the timeline: Firefox 40 will only warn about unsigned extensions (for all channels), Firefox 41 will disable unsigned extensions by default unless a preference is toggled (on Beta and Release), and Firefox 42 will not have the preference. This means that we’ll have an extra release cycle before signatures are enforced by default.

    Electrolysis

    Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will run on multiple processes now, running content code in a different process than browser code. This should improve responsiveness and overall stability, but it also means many add-ons will need to be updated to support this.

    We will be talking more about these changes in this blog in the future. For now we recommend you start looking at the available documentation.

    Christian HeilmannOver the Edge: Web Components are an endangered species

    Last week I ran the panel and the web components/modules breakout session of the excellent Edge Conference in London, England and I think I did quite a terrible job. The reason was that the topic is too large and too fragmented and broken to be taken on as a bundle.

    If you want to see the mess that is the standardisation effort around web components right now in all its ugliness, Wilson Page wrote a great post on that on Mozilla Hacks. Make sure to also read the comments – lots of good stuff there.

    Web Components are a great idea. Modules are a great idea. Together, they bring us hours and hours of fun debating where what should be done to create a well-performing, easy to maintain and all around extensible complex app for the web. Along the way we can throw around lots of tools and ideas like NPM and ES6 imports or – as Alex Russell said it on the panel: “tooling will save you”.

    It does. But that was always the case. When browsers didn’t support CSS, we had Dreamweaver to create horribly nested tables that achieved the same effect. There is always a way to make browsers do what we want them to do. In the past, we did a lot of convoluted things client-side with libraries. With the advent of node and others we now have even more environments to innovate and release “not for production ready” impressive and clever solutions.

When it comes to componentising the web, the rabbit hole is deep and also a maze. Many developers don’t have time to even start digging, so they use libraries like Polymer or React instead, call it a day, and call that the “de facto standard” (a term that makes my toenails crawl up – layout tables were a “de facto standard”, so was Flash video).

    React did a genius thing: by virtualising the DOM, it avoided a lot of the problems with browsers. But it also means that you forfeit all the good things the DOM gives you in terms of accessibility and semantics/declarative code. It simply is easier to write a <super-button> than to create a fragment for it or write it in JavaScript.

    Of course, either are easy for us clever and amazing developers, but the fact is that the web is not for developers. It is a publishing platform, and we are moving away from that concept at a ridiculous pace.

And whilst React gives us all the goodness of Web Components now, it is also a library by a commercial company. That it is open source doesn’t make much of a difference. YUI showed that a truckload of innovation can go into “maintenance mode” very quickly when a company’s direction changes. I have high hopes for React, but I am also worried about dependencies on a single company.

    Let’s rewind and talk about Web Components

    Let’s do away with modules and imports for now, as I think this is a totally different discussion.

    I always loved the idea of Web Components – allowing me to write widgets in the browser that work with it rather than against it is an incredible idea. Years of widget frameworks trying to get the correct performance out of a browser whilst empowering maintainers would come to a fruitful climax. Yes, please, give me a way to write my own controls, inherit from existing ones and share my independent components with other developers.

However, in four years, we haven’t got much to show. When we asked the very captive and elite audience of EdgeConf about Web Components, nobody raised their hand to say they are using them in real products. People either used React or Polymer, as there is still no way to use Web Components in production otherwise. When we tried to find examples in the wild, the meager harvest was GitHub’s time element. I do hope that this was not all, and that many a company is ready to go with Web Components. But most discussions I had ended the same way: people are interested, tried them out once and had to bail out because of the lack of browser support.

    Web Components are a chicken and egg problem where we are currently trying to define the chicken and have many a different idea what an egg could be. Meanwhile, people go to chicken-meat based fast food places to get quick results. And others increasingly mention that we should hide the chicken and just give people the eggs leaving the chicken farming to those who also know how to build a hen-house. OK, I might have taken that metaphor a bit far.

    We all agreed that XHTML2 sucked, was overly complicated, and defined without the input of web developers. I get the weird feeling that Web Components and modules are going in the same direction.

    In 2012 I wrote a longer post as an immediate response to Google’s big announcement of the foundation of the web platform following Alex Russell’s presentation at Fronteers 11 showing off what Web Components could do. In it I kind of lamented the lack of clean web code and the focus on developer convenience over clarity. Last year, I listed a few dangers of web components. Today, I am not too proud to admit that I lost sight of what is going on. And I am not alone. As Wilson’s post on Mozilla Hacks shows, the current state is messy to say the least.

    We need to enable web developers to use “vanilla” web components

    What we need is a base to start from. In the browser and in a browser that users have and doesn’t ask them to turn on a flag. Without that, Web Components are doomed to become a “too complex” standard that nobody implements but instead relies on libraries.

    During the breakout session, one of the interesting proposals was to turn Bootstrap components into web components and start with that. Tread the cowpath of what people use and make it available to see how it performs.

    Of course, this is a big gamble and it means consensus across browser makers. But we had that with HTML5. Maybe there is a chance for harmony amongst competitors for the sake of an extensible and modularised web that is not dependent on ES6 availability across browsers. We’re probably better off with implementing one sci-fi idea at a time.

I wish I could be more excited or positive about this. But it left me with a sour taste in my mouth to see that EdgeConf, that hot-house of web innovation and think-tank of many very intelligent people, was as confused as I was.

    I’d love to see a “let’s turn it on and see what happens” instead of “but, wait, this could happen”. Of course, it isn’t that simple – and the Mozilla Hacks post explains this well – but a boy can dream, right? Remember when using HTML5 video was just a dream?

    David BurnsWho wants to be an alpha tester for Marionette?

    Are you an early adopter type? Are you an avid user of WebDriver and want to use the latest and great technology? Then you are most definitely in luck.

Marionette, the Mozilla implementation of the FirefoxDriver, is ready for a very limited outing. There are a lot of things that have not been implemented or, since we are implementing things against the WebDriver Specification, might not yet have enough prose to implement (this has been a great way to iron out spec bugs).

    Getting Started

At the moment, since things are still being developed and we are trying to do things with new technologies (like writing part of this project using Rust), we are starting out with supporting Linux and OS X first. Windows support will be coming in the future!

    Getting the driver

We have binaries that you can download, for Linux and for OS X. The only bindings currently updated to work are the python bindings that are available in a branch on my fork of the Selenium project. Do the following to get it into a virtualenv:
    1. Create a virtualenv
    2. activate your virtualenv
    3. cd to where you have cloned my repository
    4. In a terminal type the following: ./go py_install

    Running tests

Running tests against Marionette requires that you make the following changes (which will hopefully remain small):

Update the desired capabilities to have marionette:true and add binary:/path/to/Firefox/DeveloperEdition/or/Nightly. We are only supporting those two versions at the moment because we have had a couple of incompatibility issues, since fixed, which make speaking to Marionette in the beta or release versions quite difficult.
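
With the python bindings from the branch above, that looks roughly like the following (the binary path is a placeholder; treat this as a sketch rather than the final API):

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Tell the bindings to speak to Marionette, and point at a Nightly or
# Developer Edition binary (placeholder path).
caps = DesiredCapabilities.FIREFOX.copy()
caps["marionette"] = True
caps["binary"] = "/path/to/firefox-nightly/firefox"

driver = webdriver.Firefox(capabilities=caps)
driver.get("https://www.mozilla.org")
driver.quit()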

Since you are awesome early adopters, it would be great if you could raise bugs. I am not expecting everything to work, but below is a quick list of what I know doesn't work.

    • No support for self-signed certificates
    • No support for actions
    • No support for Proxy (but will be there soon)
• No support for the logging endpoint
    • I am sure there are other things we don't remember

    Thanks for being an early adopter and thanks for raising bugs as you find them!

    Will Kahn-GreeneDitching ElasticUtils on Input for elasticsearch-dsl-py

    What was it?

    ElasticUtils was a Python library for building and executing Elasticsearch searches. I picked up maintenance after the original authors moved on with their lives, did a lot of work, released many versions and eventually ended the project in January 2015.

    Why end it? A bunch of reasons.

It started at PyCon 2014, when I had a long talk with Rob, Jannis, and Erik about ElasticUtils and the new library Honza was working on, which became elasticsearch-dsl-py.

At the time, I knew that ElasticUtils had a bunch of architectural decisions that turned out to make life really hard; doing some things was just plain difficult. It was built for a pre-1.0 Elasticsearch and it would have been a monumental effort to upgrade it past the Elasticsearch 1.0 hump. The code wasn't structured particularly well. I was tired of working on it.

    Honza's library had a lot of promise and did the sorts of things that ElasticUtils should have done and did them better--even at that ridiculously early stage of development.
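
To give a flavor of why, here's the kind of query building elasticsearch-dsl-py makes pleasant (a hedged sketch; the index and field names are invented):

from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search

client = Elasticsearch()
# Build the search lazily; nothing hits the cluster until execute().
s = Search(using=client, index="input") \
    .query("match", description="crash") \
    .filter("term", product="firefox")

for hit in s.execute():
    print(hit.description)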

    By the end of PyCon 2014, I was pretty sure elasticsearch-dsl-py was the future. The only question was whether to gut ElasticUtils and make it a small shim on top of elasticsearch-dsl-py or end it.

    In January 2015, I decided to just end it because I didn't see a compelling reason to keep it around or rewrite it into something on top of elasticsearch-dsl-py. Thus I ended it.

    Now to migrate to something different.

    Read more… (7 mins to read)

    Hannah KaneWhistler Wrap-up

    What an amazing week!

    Last week members of the Mozilla community met in beautiful Whistler, BC to celebrate, reflect, brainstorm, and plan (and eat snacks). While much of the week was spent in functional teams (that is, designers with designers and engineers with engineers), the Mozilla Learning Network website (known informally as “Teach”) team was able to convene for two meetings—one focused on our process, and the other on our roadmap and plans for the future.

    Breakthroughs

    From my perspective, the week inspired a few significant breakthroughs:

1. The Mozilla Learning Network is one, unified team with several offerings. Those offerings can be summarized in this image: Networks, Groups, and Convenings (MLN Programs). The breakthrough was realizing that it’s urgent that the site reflects the full spectrum of offerings as soon as possible. We’ve adjusted our roadmap accordingly. First up: incorporate the Hive content in a way that makes sense to our audience, and provides clear pathways for engagement.
    2. Our Clubs pipeline is a bit off-balance. We have more interested Club Captains than our current (amazing) Regional Coordinators can support. This inspired an important conversation about changes to our strategy to better test out our model. We’ll be talking about how these changes are reflected on the site soon.
    3. The most important content to localize is our curriculum content. To be fair, we knew this before the work week, but it was definitely crystallized in Whistler. This gives useful shape to our localization plan.
    4. We also identified a few areas where we can begin the process of telling the full “Mozilla Learning” story. By that I mean the work that goes beyond what we call the Mozilla Learning Network—for example, we can highlight our Fellowship programs, curriculum from other teams (starting with Mozilla Science Lab!), and additional peer learning opportunities.
    5. Finally, we identified a few useful, targeted performance indicators that will help us gauge our success: 1) the # of curriculum hits, and 2) the % of site visitors who take the pledge to teach.

    Site Updates

    I also want to share a few site updates that have happened since I wrote last:

      • The flow for Clubs has been adjusted to reflect the “apply, connect, approve” model described in an earlier post.
      • We’ve added a Protect Your Data curriculum module with six great activities.
  • We added the “Pledge to Teach” action on the homepage. Visitors to the site can choose to take the pledge, and are then notified about an optional survey they can take. We’ll follow up with tailored offerings based on their survey responses.

    Questions? Ideas? Share ’em in the comments!


    Gervase MarkhamTop 50 DOS Problems Solved: Squashing Files

    Q: I post files containing DTP pages and graphics on floppy disks to a bureau for printing. Recently I produced a file that was too big to fit on the disk and I know that I will be producing more in the future. What’s the best way round the problem?

    A. There are a number of solutions, most of them expensive. For example, both you and the bureau could buy modems. A modem is a device that allows computers to be connected via a phone line. You would need software, known as a comms program, to go with the modems. This will allow direct PC-to-PC transfer of files without the need for floppy disks. Since your files are so large, you would need a fast 9600 baud modem [Ed: approx 1 kilobyte per second] with MNP5 compression/error correction to make this a viable option.

    In this case, however, I would get hold of a utility program called LHA which is widely available from the shareware/PD libraries that advertise in PC Answers. In earlier incarnations it was known as LHarc. LHA enables you to squash files into less space than they occupied before.

    The degree of compression depends on the nature of the file. Graphics and text work best, so for you this is a likely solution. The bureau will need a copy of LHA to un-squash the files before it can use them, or you can use LHA in a special way that makes the compressed files self-unpacking.

    LHA has a great advantage over rival utilities in that the author allows you to use it for free. There is no registration fee, as with the similar shareware program PKZip, for example.

    Every time they brought out a new, larger hard disk, they used to predict the end of the need for compression…

    Botond BalloC++ Concepts TS could be voted for publication on July 20

    In my report on the C++ standards meeting this May, I described the status of the Concepts TS as of the end of the meeting:

    • The committee’s Core Working Group (CWG) was working on addressing comments received from national standards bodies.
    • CWG planned to hold a post-meeting teleconference to complete the final wording including the resolutions of the comments.
    • The committee would then have the option of holding a committee-wide teleconference to vote the final wording for publication, or else delay this vote until the next face-to-face meeting in October.

    I’m excited to report that the CWG telecon has taken place, final wording has been produced, and the committee-wide telecon to approve the final wording has been scheduled for July 20.

    If this vote goes through, the final wording of the Concepts TS will be sent to ISO for publication, and the TS will be officially published within a few months!


    Nicholas NethercoteFirefox 41 will use less memory when running AdBlock Plus

    Last year I wrote about AdBlock Plus’s effect on Firefox’s memory usage. The most important part was the following.

    First, there’s a constant overhead just from enabling ABP of something like 60–70 MiB. […] This appears to be mostly due to additional JavaScript memory usage, though there’s also some due to extra layout memory.

    Second, there’s an overhead of about 4 MiB per iframe, which is mostly due to ABP injecting a giant stylesheet into every iframe. Many pages have multiple iframes, so this can add up quickly. For example, if I load TechCrunch and roll over the social buttons on every story […], without ABP, Firefox uses about 194 MiB of physical memory. With ABP, that number more than doubles, to 417 MiB.

    An even more extreme example is this page, which contains over 400 iframes. Without ABP, Firefox uses about 370 MiB. With ABP, that number jumps to 1960 MiB.

    (This description was imprecise; the overhead is actually per document, which includes both top-level documents in a tab and documents in iframes.)

    Last week Mozilla developer Cameron McCormack landed patches to fix bug 77999, which was filed more than 14 years ago. These patches enable sharing of CSS-related data — more specifically, they add data structures that share the results of cascading user agent style sheets — and in doing so they entirely fix the second issue, which is the more important of the two.

    For example, on the above-mentioned “extreme example” (a.k.a. the Vim Color Scheme Test) memory usage dropped by 3.62 MiB per document. There are 429 documents on that page, which is a total reduction of about 1,550 MiB, reducing memory usage for that page down to about 450 MiB, which is not that much more than when AdBlock Plus is absent. (All these measurements are on a 64-bit build.)

    I also did measurements on various other sites and confirmed the consistent saving of ~3.6 MiB per document when AdBlock Plus is enabled. The number of documents varies widely from page to page, so the exact effect depends greatly on workload. (I wanted to test TechCrunch again, but its front page has been significantly changed so it no longer triggers such high memory usage.) For example, for one of my measurements I tried opening the front page and four articles from each of nytimes.com, cnn.com and bbc.co.uk, for a total of 15 tabs. With Cameron’s patches applied Firefox with AdBlock Plus used about 90 MiB less physical memory, which is a reduction of over 10%.

    Even when AdBlock Plus is not enabled this change has a moderate benefit. For example, in the Vim Color Scheme Test the memory usage for each document dropped by 0.09 MiB, reducing memory usage by about 40 MiB.

    If you want to test this change out yourself, you’ll need a Nightly build of Firefox and a development build of AdBlock Plus. (Older versions of AdBlock Plus don’t work with Nightly due to a recent regression related to JavaScript parsing). In Firefox’s about:memory page you’ll see the reduction in the “style-sets” measurements. You’ll also see a new entry under “layout/rule-processor-cache”, which is the measurement of the newly shared data; it’s usually just a few MiB.

    This improvement is on track to make it into Firefox 41, which is scheduled for release on September 22, 2015. For users on other release channels, Firefox 41 Beta is scheduled for release on August 11, and Firefox 41 Developer Edition is scheduled to be released in the next day or two.

    Mark BannerFirefox Hello Desktop: Behind the Scenes – UI Showcase

    This is the third of some posts I’m writing about how we implement and work on the desktop and standalone parts of Firefox Hello. You can find the previous posts here.

    The Showcase

One of the most useful parts of development for Firefox Hello is the User Interface (UI) showcase. Since all of the user interface for Hello is written in html and JavaScript, and is displayed in the content scope, we are able to display the views within a “normal” web page with very little adjustment.

    So what we do is to put almost all our views onto a single html page at representative sizes. The screen-shot below shows just one view from the page, but those buttons at the top give easy access, and in reality there’s lots of them (about 55 at the time of writing).

UI Showcase showing a standalone (link-clicker) view

    Faster Development

    The showcase has various advantages that help us develop faster:

    • Since it is a web page, we have all the developer tools available to us – inspector, css layout etc.
    • We don’t have to restart Firefox to pick up changes to the layout, nor do we have to recompile – a simple reload of the page is enough to pick up changes.
• We also don’t have to go through the flow each time, e.g. if we’re changing some of the views which show the media (like the one above), we avoid needing to go through the conversation setup routines for each code/css change until we’re pretty sure it’s going to work the way we expect.
• Almost all the views are shown – if the css is broken for one view it’s much easier to detect than having to go through the user flow to get to the view you want.
    • We’ve recently added an RTL mode so that we can easily see what the views look like in RTL languages. Hence no awkward forcing of Firefox into RTL mode to check the views.

    There’s one other “feature” of the showcase as we’ve got it today – we don’t pick up the translated strings, but rather the raw string label. This tends to give us longer strings than are used normally for English, which it turns out is an excellent way of being able to detect some of the potential issues for locales which need longer strings.

    Structure of the showcase

    The showcase is a series of iframes. We load individual react components into each iframe, sometimes loading the same component multiple times with different parameters or stores to get the different views. The rest of the page is basically just structure around display of the views.

The iframes do have some downsides – you can’t live edit css in the inspector and have it applied across all the views, but that’s minor compared to the advantages we get from this one page.

    Future improvements

    We’re always looking for ways we can improve how we work on Hello. We’ve recently improved the UI showcase quite a bit, so I don’t think we have too much on our outstanding list at the moment.

    The only thing I’ve just remembered is that we’ve commented it would be nice to have some sort of screen-shot comparison, so that we can make changes and automatically check for side-effects on other views.

    We’d also certainly be interested in hearing about similar tools which could do a similar job – sharing and re-using code is definitely a win for everyone involved.

    Interested in learning more?

    If you’re interested in learning more about the UI-showcase, then you can find the code here, try it out for yourself, or come and ask us questions in #loop on irc.

    If you want to help out with Hello development, then take a look at our wiki pages, our mentored bugs or just come and talk to us.

    Roberto A. VitilloSpark best practices

    We have been running Spark for a while now at Mozilla and this post is a summary of things we have learned about tuning and debugging Spark jobs.

    Spark execution model

Spark’s simplicity makes it all too easy to ignore its execution model and still manage to write jobs that eventually complete. With larger datasets, having an understanding of what happens under the hood becomes critical to reducing run-time and avoiding out of memory errors.

Let’s start by taking our good old word-count friend as a starting example:

    rdd = sc.textFile("input.txt")\
            .flatMap(lambda line: line.split())\
            .map(lambda word: (word, 1))\
            .reduceByKey(lambda x, y: x + y, 3)\
            .collect()
    

RDD operations are compiled into a Directed Acyclic Graph (DAG) of RDD objects, where each RDD points to the parent it depends on:

Figure 1

    At shuffle boundaries, the DAG is partitioned into so-called stages that are going to be executed in order, as shown in figure 2. The shuffle is Spark’s mechanism for re-distributing data so that it’s grouped differently across partitions. This typically involves copying data across executors and machines, making the shuffle a complex and costly operation.

Figure 2

    To organize data for the shuffle, Spark generates sets of tasks – map tasks to organize the data and reduce tasks to aggregate it. This nomenclature comes from MapReduce and does not directly relate to Spark’s map and reduce operations. Operations within a stage are pipelined into tasks that can run in parallel, as shown in figure 3.

Figure 3

    Stages, tasks and shuffle writes and reads are concrete concepts that can be monitored from the Spark shell. The shell can be accessed from the driver node on port 4040, as shown in figure 4.

Figure 4

    Best practices

    Spark Shell

Running Spark jobs without the Spark Shell is like flying blind. The shell allows you to monitor and inspect the execution of jobs. To access it remotely, a SOCKS proxy is needed, as the shell also connects to the worker nodes.

Using a proxy management tool like FoxyProxy allows you to automatically filter URLs based on text patterns and to limit the proxy settings to domains that match a set of rules. The browser add-on automatically handles turning the proxy on and off when you switch between viewing websites hosted on the master node and those on the Internet.

    Assuming that you launched your Spark cluster with the EMR service on AWS, type the following command to create a proxy:

    ssh -i ~/mykeypair.pem -N -D 8157 hadoop@ec2-...-compute-1.amazonaws.com
    

    Finally, import the following configuration into FoxyProxy:

    <?xml version="1.0" encoding="UTF-8"?>
    <foxyproxy>
      <proxies>
        <proxy name="emr-socks-proxy" notes="" fromSubscription="false" enabled="true" mode="manual" selectedTabIndex="2" lastresort="false" animatedIcons="true" includeInCycle="true" color="#0055E5" proxyDNS="true" noInternalIPs="false" autoconfMode="pac" clearCacheBeforeUse="false" disableCache="false" clearCookiesBeforeUse="false" rejectCookies="false">
          <matches>
            <match enabled="true" name="*ec2*.amazonaws.com*" pattern="*ec2*.amazonaws.com*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
            <match enabled="true" name="*ec2*.compute*" pattern="*ec2*.compute*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
            <match enabled="true" name="10.*" pattern="http://10.*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
            <match enabled="true" name="*10*.amazonaws.com*" pattern="*10*.amazonaws.com*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
            <match enabled="true" name="*10*.compute*" pattern="*10*.compute*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
            <match enabled="true" name="*localhost*" pattern="*localhost*" isRegEx="false" isBlackList="false" isMultiLine="false" caseSensitive="false" fromSubscription="false" />
          </matches>
          <manualconf host="localhost" port="8157" socksversion="5" isSocks="true" username="" password="" domain="" />
        </proxy>
      </proxies>
    </foxyproxy>

    Once the proxy is enabled you can open the Spark Shell by visiting localhost:4040.

    Use the right level of parallelism

Clusters will not be fully utilized unless the level of parallelism for each operation is high enough. Spark automatically sets the number of partitions of an input file according to its size, and for distributed shuffles, such as groupByKey and reduceByKey, it uses the largest parent RDD’s number of partitions. You can pass the level of parallelism as a second argument to an operation. In general, 2-3 tasks per CPU core in your cluster are recommended. That said, having tasks that are too small is not advisable either, as there is some overhead paid to schedule and run each task.

As a rule of thumb, tasks should take at least 100 ms to execute; you can ensure that this is the case by monitoring the task execution latency from the Spark Shell. If your tasks take considerably longer than that, keep increasing the level of parallelism, by say a factor of 1.5, until performance stops improving.
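As an illustrative sketch (the partition count here is made up, not a recommendation from this post), here is the word-count job from above with the shuffle parallelism raised:

# Same job as before, but reduceByKey now shuffles into 48 partitions
# instead of 3; tune towards 2-3 tasks per CPU core in your cluster.
rdd = sc.textFile("input.txt")\
        .flatMap(lambda line: line.split())\
        .map(lambda word: (word, 1))\
        .reduceByKey(lambda x, y: x + y, 48)\
        .collect()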

    Reduce working set size

    Sometimes, you will get terrible performance or out of memory errors, because the working set of one of your tasks, such as one of the reduce tasks in groupByKey, was too large. Spark’s shuffle operations (sortByKey, groupByKey, reduceByKey, join, etc) build a hash table within each task to perform the grouping, which can often be large.

Even though those tables spill to disk, getting to the point where the tables need to be spilled increases the memory pressure on the executor, incurring the additional overhead of disk I/O and increased garbage collection. If you are using pyspark, the memory pressure will also increase the chance of Python running out of memory.

    The simplest fix here is to increase the level of parallelism, so that each task’s input set is smaller.
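A minimal sketch, assuming pairs is a key-value RDD:

# An explicit partition count keeps each reduce task's hash table small
# enough to avoid spilling; 200 is an illustrative value, not a rule.
grouped = pairs.groupByKey(200)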

    Avoid groupByKey for associative operations

    Both reduceByKey and groupByKey can be used for the same purposes but reduceByKey works much better on a large dataset. That’s because Spark knows it can combine output with a common key on each partition before shuffling the data.

    In reduce tasks, key-value pairs are kept in a hash table that can spill to disk, as mentioned in “Reduce working set size“. However, the hash table flushes out the data to disk one key at a time. If a single key has more values than can fit in memory, an out of memory exception occurs. Pre-combining the keys on the mappers before the shuffle operation can drastically reduce the memory pressure and the amount of data shuffled over the network.
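To make the difference concrete, here is a minimal sketch, assuming pairs is an RDD of (word, 1) tuples; both expressions yield the same per-key counts:

# reduceByKey pre-combines values on each partition before the shuffle.
counts = pairs.reduceByKey(lambda x, y: x + y)

# groupByKey ships every single pair across the network first, then sums.
counts = pairs.groupByKey().mapValues(lambda values: sum(values))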

    Avoid reduceByKey when the input and output value types are different

    Consider the job of creating a set of strings for each key:

    rdd.map(lambda p: (p[0], {p[1]}))\
        .reduceByKey(lambda x, y: x | y)\
        .collect()
    

    Note how the input values are strings and the output values are sets. The map operation creates lots of temporary small objects. A better way to handle this scenario is to use aggregateByKey:

    def seq_op(xs, x):
        xs.add(x)
        return xs
    
    def comb_op(xs, ys):
        return xs | ys
    
    rdd.aggregateByKey(set(), seq_op, comb_op).collect()
    

    Avoid the flatMap-join-groupBy pattern

    When two datasets are already grouped by key and you want to join them and keep them grouped, you can just use cogroup. That avoids all the overhead associated with unpacking and repacking the groups.
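A minimal sketch, assuming rdd_a and rdd_b are key-value RDDs:

# For each key, cogroup yields (key, (values from rdd_a, values from rdd_b))
# without ever flattening and regrouping either side.
joined = rdd_a.cogroup(rdd_b)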

    Python memory overhead

The spark.executor.memory option, which determines the amount of memory to use per executor process, is JVM specific. If you are using pyspark, you can’t set that option to the total amount of memory available on an executor node, as the JVM might eventually use all the available memory, leaving nothing behind for Python.
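For instance, a hedged sketch with made-up numbers – on a node with 16 GB usable per executor you might leave a few gigabytes to the Python workers:

from pyspark import SparkConf

# Cap the JVM at 12g so the Python worker processes have headroom;
# the exact split is an assumption to adapt to your own workload.
conf = SparkConf().set("spark.executor.memory", "12g")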

    Use broadcast variables

    Using the broadcast functionality available in SparkContext can greatly reduce the size of each serialized task, and the cost of launching a job over a cluster. If your tasks use any large object from the driver program, like a static lookup table, consider turning it into a broadcast variable.
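A minimal sketch, assuming a large lookup table that lives in the driver:

# The table is shipped once per executor as a broadcast variable,
# instead of being serialized into every single task closure.
lookup_table = {"Darwin": "mac", "Linux": "linux", "Windows_NT": "windows"}
bc = sc.broadcast(lookup_table)
labels = rdd.map(lambda os_name: bc.value.get(os_name, "other"))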

    Cache judiciously

Just because you can cache a RDD in memory doesn’t mean you should blindly do so. Depending on how many times the dataset is accessed and the amount of work involved in recomputing it, recomputation can be faster than the price paid by the increased memory pressure.

It should go without saying that if you only read a dataset once there is no point in caching it; it will actually make your job slower. The size of cached datasets can be seen from the Spark Shell.
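A minimal sketch of caching only what is actually re-read (file name and fields are illustrative):

parsed = sc.textFile("input.txt").map(lambda line: line.split(","))
parsed.cache()        # worthwhile only because parsed is used twice below
total = parsed.count()
errors = parsed.filter(lambda fields: fields[0] == "ERROR").count()
parsed.unpersist()    # release the memory once the re-reads are done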

    Don’t collect large RDDs

    When a collect operation is issued on a RDD, the dataset is copied to the driver, i.e. the master node. A memory exception will be thrown if the dataset is too large to fit in memory; take or takeSample can be used to retrieve only a capped number of elements instead.
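A minimal sketch of the capped alternatives:

preview = rdd.take(100)                       # first 100 elements only
sample = rdd.takeSample(False, 100, seed=42)  # 100 random elements, without replacement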

    Minimize amount of data shuffled

    A shuffle is an expensive operation since it involves disk I/O, data serialization, and network I/O. As illustrated in figure 3, each reducer in the second stage has to pull data across the network from all the mappers.

As of Spark 1.3, these files are not cleaned up from Spark’s temporary storage until Spark is stopped, which means that long-running Spark jobs may consume all available disk space. This is done in order to avoid re-computing shuffles.

    Know the standard library

    Avoid re-implementing existing functionality as it’s guaranteed to be slower.

    Use dataframes

A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a pandas DataFrame in Python.

    props = get_pings_properties(pings,
                                 ["environment/system/os/name",
                                  "payload/simpleMeasurements/firstPaint",
                                  "payload/histograms/GC_MS"],
                                 only_median=True)
    
    frame = sqlContext.createDataFrame(props.map(lambda x: Row(**x)))
    frame.groupBy("environment/system/os/name").count().show()
    

    yields:

    environment/system/os/name count 
    Darwin                     2368  
    Linux                      2237  
    Windows_NT                 105223
    

    Before any computation on a DataFrame starts, the Catalyst optimizer compiles the operations that were used to build the DataFrame into a physical plan for execution. Since the optimizer generates JVM bytecode for execution, pyspark users will experience the same high performance as Scala users.


    Michael KaplyThe End of Firefox 31 ESR

    With the release of Firefox 39 today also comes the final release of the Firefox 31 ESR (barring any security updates in the next six weeks).
    That means you have six weeks to manage your switch over to the Firefox 38 ESR.

    If you've been wondering if you should use the ESR instead of keeping up with current Firefox releases, now might be a good time to switch. That's because there are a couple features coming in the Firefox mainline that might affect you. These include the removal of the distribution/bundles directory as well as the requirement for all add-ons to be signed by Mozilla.

It's much easier going from Firefox 38 to the Firefox 38 ESR than going from Firefox 39 to the Firefox 38 ESR.

    If you want to continue on the Firefox mainline, you can use the CCK2 to bring back some of the distribution/bundles functionality, but I won't be able to do anything about the signing requirement.

    Mark BannerFirefox Hello Desktop: Behind the Scenes – Architecture

    This is the second of some posts I’m writing about how we implement and work on the desktop and standalone parts of Firefox Hello. The first post was about our use of Flux and React, this second post is about the architecture.

    In this post, I will give an overview of the Firefox browser software architecture for Hello, which includes the standalone UI.

    User-visible parts of Hello

Although there are many small parts to Hello, most of it is shaped by what is user visible:

    Firefox Hello Desktop UI (aka Link-Generator)

    Hello Standalone UI (aka Link-clicker)

    Firefox Browser Architecture for Hello

    The in-browser part of Hello is split into three main areas:

    • The panel which has the conversation and contact lists. This is a special about: page that is run in the content process with access to additional privileged APIs.
    • The conversation window where conversations are held. Within this window, similar to the panel, is another about: page that is also run in the content process.
• The backend runs in the privileged space within gecko. This ties together the communication between the panels and conversation windows, and provides access to other gecko services and parts of the browser with which Hello integrates.
Outline of Hello’s Desktop Architecture

MozLoopAPI is our way of exposing small bits of the privileged gecko code to the about: pages running in content. We inject a navigator.mozLoop object into the content pages when they are loaded. This allows various functions and facilities to be exposed, e.g. access to a backend cache of the rooms list (which avoids multiple caches per window), and a similar backend store of contacts.

    Standalone Architecture

    The Standalone UI is simply a web page that’s shown in any browser when a user clicks a conversation link.

The conversation flow in the standalone UI is very similar to that of the conversation window, so most of the stores and supporting files are shared. Most of the views for the Standalone UI are currently different from those on desktop – the layout has been different, so we need different structures.

Outline of Hello’s Standalone UI Architecture

    File Architecture as applied to the code

The authoritative location for the code is mozilla-central, where it lives in the browser/components/loop directory. Within that we have:

• content/ – This is all the code relating to the panel and conversation window that is shipped in the Firefox browser
• content/shared/ – This code is used in the browser as well as in the standalone UI
    • modules/ – This is the backend code that runs in the browser
    • standalone/ – Files specific to the standalone UI

    Future Work

    There’s a couple of likely parts of the architecture that we’re going to rework soon.

    Firstly, with the current push to electrolysis, we’re replacing the current exposed MozLoopAPI with a message-based RPC mechanism. This will then let us run the panel and conversation window in the separated content process.

    Secondly, we’re currently reworking some of the UX and it is moving to be much more similar between desktop and standalone. As a result, we’re likely to be sharing more of the view code between the two.

    Interested in learning more?

    If you’re interested in learning more about Hello’s architecture, then feel free to dig into the code, or come and ask us questions in #loop on irc.

    If you want to help out with Hello development, then take a look at our wiki pages, our mentored bugs or just come and talk to us.

    About:CommunityFirefox 39 new contributors

    With the release of Firefox 39, we are pleased to welcome the 64 developers who contributed their first code change to Firefox in this release, 55 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

  • agrigas: 1135270
  • bmax1337: 967319
  • leo: 1134993
  • schtroumps31: 1130045
  • zmiller12: 1138873
  • George Duan: 1135293
  • Abhinav Koppula: 732688, 951695, 1127337
  • Alex Verstak: 1113431, 1144816
  • Alexandre Ratchov: 1144087
  • Andrew Overholt: 1127552
  • Anish: 1135091, 1135383
  • Anush: 418517, 1113761
  • Bhargav Chippada: 1112605, 1130372
  • Boris Kudryavtsev: 1135364, 1144613
  • Cesar Guirao: 1139132
  • Chirag Bhatia: 1133211
  • Danilo Cesar Lemes de Paula: 1146020
  • Daosheng Mu: 1133391
  • Deepak: 1039540
  • Felix Janda: 1130164, 1130175
  • Gareth Aye: 1145310
  • Geoffroy Planquart: 942475, 1042859
  • Gerald Squelart: 1121774, 1135541, 1137583
  • Greg Arndt: 1142779
  • Henry Addo: 1084663
  • Jason Gersztyn: 1132673
  • Jeff Griffiths: 1138545
  • Jeff Lu: 1098415, 1106779, 1140044
  • Johan K. Jensen: 1096294
  • Johannes Vogel: 1139594
  • John Giannakos: 1134568
  • John Kang: 1144782, 1146252
  • Jorg K: 756984
  • Kyle Thomas: 1137004
  • Léon McGregor: 1115925, 1130741, 1136708
  • Manraj Singh [:manrajsingh|Away till 21st June]: 1120408
  • Mantaroh Yoshinaga: 910634, 1106905, 1130614
  • Markus Jaritz: 1139174
  • Massimo Gervasini: 1137756, 1144695
  • Matt Hammerly: 1124271
  • Matt Spraggs: 1036454
  • Michael Weisz : 736572, 782623, 935434
  • Mitchell Field: 987902
  • Mohamed Waleed: 1106938
  • NiLuJe: 1143411
  • Perry Wagle: 1122941
  • Ponç Bover: 1126978
  • Quentin Pradet: 1092544
  • Ravi Shankar: 1109608
  • Rishi Baldawa: 1143196
  • Stéphane SCHMIDELY: 935259, 1144619
  • Sushrut Girdhari (sg345): 1137248
  • Thomas Baquet: 1132078
  • Titi_Alone : 1133063
  • Tyler St. Onge: 1134927
  • Vaibhav Bhosale: 1135009
  • Vidit23: 1121317
  • Wickie Lee: 1136253
  • Zimon Dai: 983469, 1135435
  • atlanto: 1137615
  • farhaan: 1073234
  • pinjiz: 1124943, 1142260, 1142268
  • qasim: 1123431
  • ronak khandelwal: 1122767
  • uelis: 1047529
Andy McKayCycling

In January I finally decided to do something I'd wanted to do for a long time and that's the Gran Fondo from Vancouver to Whistler.

I've been cycling to and from work for a while, but it was time to get more serious and hopefully lose some weight in the process. So I signed up, and in September I'll be racing up to Whistler with a few thousand other people.

    Last Saturday I was in Whistler for the Mozilla work week. I got to do some riding, including a quick trip to Pemberton:

    I had the opportunity to ride down from Whistler as a practice. I've enjoyed my cycling, but I looked at this ride with a mixture of anticipation and dread.

    Getting back is a 135.8km ride that looks like this:

    It also meant hanging out at a party with Mozilla on top of the mountain without drinking very much. Something that, if you know me, I tend to find a little difficult.

As it turns out the ride was great fun. Riding in the sun with views of the Elaho and Howe Sound. The main annoyance was when I had to stop for a traffic light in Squamish after cycling continually for so long.

After Horseshoe Bay I got a flat tyre. When I cycle to and from work I don't carry spares - if anything happens I get on a bus. Repeating that habit here was an error, but fortunately multiple people kindly stopped and offered to help. About 15 minutes later I was up and cycling again.

Later on, as I was crossing North Vancouver with some other people, a truck pulled up beside us and shouted "You're doing 39km/h!". Somehow after 5 hours on the road I was still having fun and cycling fast.

I've gone from riding a few days a week to over 300km on a bike each week, and I'm still loving it. I've gone from just being happy to get to Whistler alive to thinking about setting a time for my race. We'll see how that goes.

    Karl DubostWatch A Computing Engineer Live Coding

Mike (Taylor) has been telling me about the live coding sessions of Mike Conley for a little while. So yesterday I decided to give it a try with the first episode of The Joy of Coding (mconley livehacks on Firefox). As mentioned in the description:

    Unscripted, unplanned, uncensored, and true to life, watch what a Firefox Desktop engineer does to close bugs and get the job done.

    And that is what is written on the jar. And sincerely this is good.

    Why Should You Watch Live Coding?

I would recommend watching this for:

    • Beginner devs: To understand that more experienced developers struggle and make mistakes too. To also learn a couple of good practices when coding.
• Experienced devs: To see how someone else is coding and learn a couple of new tricks and habits when coding. To encourage them to do the same kind of things as Mike is doing.
• Managers: You are working as a manager in a Web agency, you are a project manager, watch this. You will not understand most of it, but focus exactly on what Mike is doing in terms of thought process and work organization. Even without a knowledge of programming, we discover the struggles, the trials and errors, and the successes.

    There's a full series of them, currently 19.

    My Own (Raw) Notes When Watching The Video

    • Watching the first video of Live Coding
    • He started with 3 video scenes and switched to his main full screen
    • He's introducing himself
    • mconley: "Nothing is prepared"
    • He's introducing the bug and explained it in demonstrating it
    • mconley: "I like to take notes when working on bugs" (taken in evernote)
    • mconley: "dxr is better than mxr."
• He's not necessarily remembering everything. So he goes through other parts of the code to understand what others did.
    • Sometimes he just doesn't know, doesn't understand and he says it.
    • mconley: "What other people do?"
    • He's taking notes including some TODOs for the future to explore, understand, do.
    • He's showing his fails in compiling, in coding, etc.
    • (personal thoughts) It's hard to draw on a computer. Paper provides some interesting features for quickly drawing something. Computer loses, paper wins.
    • When recording, thinking with a loud voice gives context on what is happening.
    • Write comments in the code for memory even if you remove them later.
    • In your notes, cut and paste the code from the source. Paper loses, computer wins.
    • (personal thoughts): C++ code is ugly to read.
    • (personal thoughts): Good feeling for your own job after watching this. It shows you are not the only one struggling when doing stuff.

    Some Additional Thoughts

We met Mike Conley in Whistler, Canada last week. He explained he used Open Broadcasting Project for recording his sessions. I'm tempted to do something similar for Web Compatibility work. I'm hesitating between French and English. Maybe if Mike was doing something in English, I might do it in French. So people in the French community could benefit from it.

    So thanks Mike for telling me about this in the last couple of weeks.

    Otsukare!

    Air MozillaDXR 2.0 (Part 2: Discussion)

    DXR 2.0 (Part 2: Discussion) A discussion of the roadmap for DXR after 2.0

    Air MozillaDXR 2.0 (Part 1: Dog & Pony Show)

    DXR 2.0 (Part 1: Dog & Pony Show) Demo of features new in the upcoming 2.0 release of DXR, Mozilla's search and analysis tool for large codebases

    Daniel GlazmanCSS Working Group's future

    Hello everyone.

Back in March 2008, I was extremely happy to announce my appointment as Co-chairman of the CSS Working Group. Seven and a half years later, it's time to move on. There are three main reasons for that change, which my co-chair Peter and I triggered ourselves with W3C Management's agreement:

    1. We never expected to stay in that role 7.5 years. Chris Lilley chaired the CSS Working Group 1712 days from January 1997 (IIRC) to 2001-oct-10 and that was at that time the longest continuous chairing in W3C's history. Bert Bos chaired it 2337 days from 2001-oct-11 to 2008-mar-05. Peter and I started co-chairing it on 2008-mar-06 and it will end at TPAC 2015. That's 2790 days so 7 years 7 months and 20 days! I'm not even sure those 2790 days hold a record, Steven Pemberton probably chaired longer. But it remains that our original mission to make the WG survive and flourish is accomplished, and we now need fresher blood. Stability is good, but smart evolution and innovation are better.
    2. Co-chairing a large, highly visible Working Group like the CSS Working Group is not a burden, far from it. But it's not a light task either. We start feeling the need for a break.
    3. There were good candidates for the role, unanimously respected in the Working Group.

So the time has come. The new co-chairs, Rossen Atanassov from Microsoft and Alan Stearns from Adobe, will take over during the Plenary Meeting of the W3C held in Sapporo, Japan, at the end of October and that is A Good Thing™. You'll find below a copy of my message to W3C.

    To all the people I've been in touch with while holding my co-chair's hat: thank you, sincerely and deeply. You, the community around CSS, made everything possible.

    Yours truly.

    Dear Tim, fellow ACs, fellow Chairs, W3C Staff, CSS WG Members,

    After seven years and a half, it's time for me to pass the torch of the CSS Working Group's co-chairmanship. 7.5 years is a lot and fresh blood will bring fresh perspectives and new chairing habits. At a time the W3C revamps its activities and WGs, the CSS Working Group cannot stay entirely outside of that change even if its structure, scope and culture are retained. Peter and I decided it was time to move on and, with W3M's agreement, look for new co-chairs.

    I am really happy to leave the Group in Alan's and Rossen's smart and talented hands, I'm sure they will be great co-chairs and I would like to congratulate and thank them for accepting to take over. I will of course help the new co-chairs on request for a smooth and easy transition, and I will stay in the CSS WG as a regular Member.

    I'd like to deeply thank Tim for appointing me back in 2008, still one of the largest surprises of my career!

    I also wish to warmly thank my good friends Chris Lilley, Bert Bos and Philippe Le Hégaret from W3C Staff for their crucial daily support during all these years. Thank you Ralph for the countless transition calls! I hope the CSS WG still holds the record for the shortest positive transition call!

    And of course nothing would have been possible without all the members of the CSS Working Group, who tolerated me for so long and accepted the changes we implemented in 2008, and all our partners in the W3C (in particular the SVG WG) or even outside of it, so thank you all. The Membership of the CSS WG is a powerful engine and, as I often say, us co-chairs have only been a drop of lubricant allowing that engine to run a little bit better, smoother and without too much abrasion.

    Last but not least, deep thanks to my co-chair and old friend Peter Linss for these great years; I accepted that co-chair's role to partner with Peter and enjoyed every minute of it. A long ride but such a good one!

    I am confident the CSS Working Group is and will remain a strong and productive Group, with an important local culture. The CSS Working Group has both style and class (pun intended), and it has been an honour to co-chair it.

    Thank you.

    </Daniel>

    Air MozillaMozilla Weekly Project Meeting

    Mozilla Weekly Project Meeting The Monday Project Meeting

    Mozilla WebDev CommunityBeer and Tell – June 2015

    Once a month, web developers from across the Mozilla Project get together to try and transmute a fresh Stanford graduate into a 10x engineer. Meanwhile, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

    There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

    Osmose: SpeedKills and Advanced Open File

    First up was Osmose (that’s me!) presenting two packages for Atom, a text editor. SpeedKills is a playful package that plays a guitar solo and sets text on fire when the user types fast enough. Advanced Open File is a more useful package that adds a convenient dialog for browsing the file system and opening files by path rather than using the fuzzy finder. Both are available for install through the Atom package repository.

    new_one: Tab Origin

    Next was new_one, who shared Tab Origin, a Firefox add-on that lets you return to the webpage that launched the current tab, even if the parent tab has since been closed. It’s activated via a keyboard shortcut that can be customized.

    Potch: WONTFIX and Presentation Mode

    Continuing a fine tradition of batching projects, Potch stopped by to show off two Firefox add-ons. The first was WONTFIX, which adds a large red WONTFIX stamp to any Bugzilla bug that has been marked as WONTFIX. The second was Presentation Mode, which allows you to full-screen any content in a web page while hiding the browser chrome. This is especially useful when giving web-based presentations.

    Peterbe: premailer.io

    Peterbe shared premailer.io, which is a service wrapping premailer. Premailer takes a block of HTML with a style tag and applies the styles within as style attributes on each matching tag. This is mainly useful for HTML emails, which generally don’t support style tags that apply to the entire email.
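As a rough sketch of what that transformation looks like (transform is premailer's entry point; the HTML here is made up for illustration):

from premailer import transform

# premailer moves the <style> rules onto matching tags as style attributes.
html = """<html><head><style>p { color: red }</style></head>
<body><p>Hello</p></body></html>"""
print(transform(html))  # the <p> now carries an inline style="color:red"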

    ErikRose: Spam-fighting Tips

    ErikRose learned a lot about the current state of spam-fighting while redoing his mail server:

    • Telling Postfix to be picky about RFCs is a good first pass. It eliminates some spam without having to do much computation.
    • spamassassin beats out dspam, which hasn’t seen an update since 2012.
    • Shared-digest detectors like Razor help a bit but aren’t sufficient on their own without also greylisting to give the DBs a chance to catch up.
    • DNS blocklists are a great aid: they reject 3 out of 4 spams without taking much CPU.
    • Bayes is still the most reliable (though the most CPU-intense) filtration method. Bayes poisoning is infeasible, because poisoners don’t know what your ham looks like, so don’t worry about hand-picking spam to train on. Train on an equal number of spams and hams: 400 of each works well. Once your bayes is performing well, crank up your BAYES_nn settings so spamassassin believes it.
    • Crank up spamc’s –max-size to 1024000, because spammers are now attaching images > 512K to mails to bypass spamc’s stock 512K threshold. This will cost extra CPU.

    With this, he gets perhaps a spam a week, with over 400 attempts per day.


    We were only able to get a 3x engineer this month, but at least they were able to get a decent job working on enterprise software.

    If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

    See you next month!

    Mozilla Security BlogDharma


    dharma

As soon as a developer at Mozilla starts integrating a new WebAPI feature, the Mozilla Security team begins working to help secure that API. Subtle programming mistakes in new code can introduce annoying crashes and even serious security vulnerabilities that can be triggered by malformed input, leading to headaches for the user and security exposure.

    WebAPIs start life as a specification in the form of an Interface Description Language, or IDL. Since this is essentially a grammar, a grammar-based fuzzer becomes a valuable tool in finding security issues in new WebAPIs because it ensures that expected semantics are followed most of the time, while still exploring enough undefined behavior to produce interesting results.

    We came across a grammar fuzzer Ben Hawkes released in 2011 called “Dharma.” Sadly, only one version was ever made public. We liked Ben’s approach, but Dharma was missing some features which were important for us and its wider use for API fuzzing. We decided to sit down with our fuzzing mates at BlackBerry and rebuild Dharma, giving the results back to the public, open source and licensed as MPL v2.

We redesigned how Dharma parses grammars and optimized the speed of parsing and the generation of fuzzed output, added new features to the grammar specification, added support for serving testcases over a WebSocket server, and made it Python 3 ready. It comes with no dependencies and runs out of the box.

    In theory Dharma can be used with any data that can be represented as a grammar. At Mozilla we typically use it for APIs like WebRTC, WebAudio, or WebCrypto.


    dharma_demo

    Dharma has no integrated harness. Feel free to check out the Quokka project which provides an easy way for launching a target with Dharma, monitoring the process and bucketing any faults.

    Dharma is actively in use and maintained at Mozilla and more features are planned for the future. Ideas for improvements are always greatly welcomed.

    Dharma is available via GitHub (preferred and always up-to-date) or via PyPi by running “pip install dharma”.

    References
    https://github.com/mozillasecurity/dharma
    https://github.com/mozillasecurity/quokka
    https://code.google.com/p/dharma/

    Aaron ThornburghNot Working in Whistler

    A brief retrospective
    Whistler Rock - Shot with my Moto X 2nd Gen

    I just returned from Whistler, BC, along with 1300 other Mozillians.

    Because we’re a global organization, it’s important to get everyone together every now and then. We matched faces with IRC channels, talked shop over finger-food, and went hiking with random colleagues. Important people gave speeches. Teams aligned. There was a massive party at the top of a mountain.

    +++++

    Typically, I learn the most from experiences like these only after they have ended. Now that I’ve had 48 hours to process (read: recover), some themes have emerged…

    1. Organizing and mobilizing a bunch of smart, talented, opinionated people is hard work.
    2. It’s easy to say “you’re doing it wrong.” It takes courage to ask “how could we do it better?”
    3. Anyone can find short-term gains. The best leaders define a long-term vision, then enable individuals to define their own contribution.
    4. Success is always relative, and only for a moment in time.
    5. Accelerated change requires rapid adaptation. Being small is our advantage.
    6. The market is what you make it. So make it better (for everyone).
7. It’s been said that when technology works, it’s magic. I disagree. Magic is watching passionate people – from all over the world – create technology together.

    Also, it has now been confirmed by repeatable experiment: I’m not a people person – especially when lots of them are gathered in small spaces. I’m fucking exhausted.

    Officially signing back on,

    -DCROBOT


    About:CommunityRecap of Participation at Whistler

View the story “Participation at Whistler” on Storify: http://storify.com/lucyeharris/participation-at-whistler

    Bogomil ShopovVoice-controlled UI plus a TTS engine.

A few days ago I attended the launch of SohoXI – Hook and Loop’s latest product, which aims to change the look and feel of enterprise applications forever. I am close to believing that, and not only because I work for the same company as they do. They have delivered an extremely revolutionary UI in …

    Dave HuntWhistler Work Week 2015

    Last week was Mozilla’s first work week of 2015 in Whistler, BC. It was my first visit to Whistler having joined Mozilla shortly after their last summit there in 2010, and it was everything I needed it to be. Despite currently feeling jetlagged, I have been recharged and I have renewed enthusiasm for the mission and I’m even more than a little excited about Firefox OS again! I’d like to share a few of my highlights from last week…

    • S’mores – The dinner on Wednesday evening was followed by a street party, where I had my first S’more. It was awesome!
• Firefox OS – Refreshing honesty over past mistakes and a coherent vision for the future have actually made me enthusiastic about this project again. I’m no longer working directly on Firefox OS, but I’ve signed up for the dogfoxfooding program and I’m excited about making a difference again.
    • LEGO – I got to build a LEGO duck, and we heard from David Robertson about the lessons LEGO learned from near bankruptcy.
    • Milkshake – A Firefox Q&A was made infinitely better by taking a spontaneous walk to Cow’s for milkshakes and ice cream with my new team!
• Running – I got to run with #running friends old and new on Tuesday morning around Lost Lake. Then on Thursday morning I headed back and took on the trails with Matt. These were my first runs since my marathon, and running through the beautiful scenery was exactly what I needed to get me back into it.
    • Istanbul – After dinner on Tuesday night, Stephen and I sat down with Bob to play the board game Istanbul.
    • Hacking – It’s always hard to get actual code written during these team events, but I’m pleased to say we thought through some challenging problems, and actually even managed to land some code.
    • Hike – On Friday morning I joined Justin and Matt on a short hike up Whistler mountain. We didn’t have long before breakfast, but it was great to spend more time with these guys.
    • Whistler Mountain – The final party was at the top of Whistler Mountain, which was just breathtaking. I can’t possibly do the experience justice – so I’m not even going to try.

    Thank you Whistler for putting up with a thousand Mozillians, and thank you Mozilla for organising such a perfect week. We’re going to keep rocking the free web!

Yunier José Sosa VázquezcuentaFox 3.1.2 now available with long-awaited features

Sooner than we expected, a new version of cuentaFox is here, with interesting features that users have been asking for over quite some time.

Install cuentaFox 3.1.2

With this release we no longer have to worry about “the certificate isn’t added and x tabs open on me”; now UCICA.pem is added automatically with its required trust levels. This is undoubtedly a great step forward, as it takes a big weight off our shoulders.

As time goes by, some people forget to update the extension, so they keep using old versions that contain some bug. For this reason, we have decided to alert users when new versions are available, showing an alert and opening the update URL in a new tab.

    v3.1.2-alerta-de-actualizacion

Our goal is for this process to happen transparently to the user, but for now we cannot do that.

Rounding out the list of new features:

• When an error or usage alert is shown, the user who generated it is displayed #21.
• Fixed: the real user behind an error was not shown when fetching data for several users #18.
• If several users are stored, the button on the toolbar shows the usage of the user who logged in #19.
• If an error occurs while fetching a user’s data, we always try to delete their data from Firefox’s password manager #26.
• Updated jQuery to v2.1.4

Last but not least: in the add-on’s configuration options (not the interface’s) you can decide whether to show one or more users at the same time and choose to hide the spent quotas.

    v3.1.2-opciones-de-configuracion

Let this article serve as a thank you to everyone who has come to the project page on GitLab and left us their ideas or the errors they found. We hope many more will join in.

If you want to help develop the add-on, you can go to GitLab (UCI) and clone the project or leave a suggestion. Perhaps you’ll find one that motivates you.

Install cuentaFox 3.1.2

    John O'Duinn“We are ALL Remoties” (Jun2015 edition)

    Since my last post on “remoties”, I’ve done several more presentations, some more consulting work for private companies, and even started writing this down more explicitly (exciting news coming here soon!). While I am always refining these slides, this latest version is the first major “refactor” of this presentation in a long time. I think this restructuring makes the slides even easier to follow – there’s a lot of material to cover here, so this is always high on my mind.

    Without further ado – you can get the latest version of these slides, in handout PDF format, by clicking on the thumbnail image.

    Certainly, the great responses and enthusiastic discussions every time I go through this encourages me to keep working on this. As always, if you have any questions, suggestions or good/bad stories about working remotely or as part of a geo-distributed teams, please let me know (either by email or in the comments below) – I’d love to hear them.

    Thanks
    John.

    Cameron KaiserHello, 38 beta, it's nice to meet you

    And at long last the 38 beta is very happy to meet you too (release notes, downloads, hashes). Over the next few weeks I hope you and the new TenFourFox 38 will become very fond of each other, and if you don't, then phbbbbt.

    There are many internal improvements to 38. The biggest one specific to us, of course, is the new IonPower JavaScript JIT backend. I've invested literally months in making TenFourFox's JavaScript the fastest available for any PowerPC-based computer on any platform, not just because every day websites lard up on more and more crap we have to swim through (viva Gopherspace) but also because a substantial part of the browser is written in JavaScript: the chrome, much of the mid-level plumbing and just about all those addons you love to download and stuff on in there. You speed up JavaScript, you speed up all those things. So now we've sped up many browser operations by about 11 times over 31.x -- obviously the speed of JavaScript is not the only determinant of browser speed, but it's a big part of it, and I think you'll agree that responsiveness is much improved.

    JavaScript also benefits in 38 from a compacting, generational garbage collector (generational garbage collection was supposed to make 31 but was turned off at the last minute). This means recently spawned objects will typically be helplessly slaughtered in their tender youth in a spasm of murderous efficiency based on the empiric observation that many objects are created for brief usage and then never used again, reducing the work that the next-stage incremental garbage collector (which we spent a substantial amount of time tuning in 31 as you'll recall, including backing out background finalization and tweaking the timeslice for our slower systems) has to do for objects that survive this pediatric genocide. The garbage collector in 38 goes one step further and compacts the heap as well, which is to say, it moves surviving objects together contiguously in memory instead of leaving gaps that cannot be effectively filled. This makes both object cleanup and creation much quicker in JavaScript, which relies heavily on the garbage collector (the rest of the browser uses more simplistic reference counting to determine object lifetime), to say nothing of a substantial savings in memory usage: on my Quad G5 I'm seeing about 200MB less overhead with 48 tabs open.

    I also spent some time working on font enumeration performance because of an early showstopper where sites that loaded WOFF fonts spun and spun and spun. After several days of tearing my hair out in clumps the problem turned out to be a glitch in reference counting caused by the unusual way we load platform fonts: since Firefox went 10.6+ it uses CoreText exclusively, but we use almost completely different font code based on the old Apple Type Services which is the only workable choice on 10.4 and the most stable choice on 10.5. ATS is not very fast at instantiating lots of fonts, to say the least, so I made the user font cache stickier (please don't read that as "leaky" -- it's sticky because things do get cleaned up, but less aggressively to improve cache hit percentage) and also made a global font cache where the font's attribute tag directory is cached browser-wide to speed up loading font tables from local fonts on your hard disk. Previously this directory was cached per font entry, meaning if the font entry was purged for re-enumeration it had to be loaded all over again, which usually happened when the browser was hunting for a font with a particular character. This process used to take about fifteen to twenty seconds for the 700+ font faces on my G5. With the global font cache it now takes less than two.

    Speaking of showstoppers, here's an interesting one which I'll note here for posterity. nsChildView, the underlying system view which connects Cocoa/Carbon to Gecko, implements the NSTextInput protocol which allows it to accept Unicode input without (as much) mucking about with the Carbon Text Services Manager (Firefox also implements NSTextInputClient, which is the new superset protocol, but this doesn't exist in 10.4). To accept Unicode input, under the hood the operating system actually manipulates a special undocumented TSM input context called, surprisingly, NSTSMInputContext (both this and the undocumented NSInputContext became the documented NSTextInputContext in 10.6), and it gets this object from a previously undocumented method on NSView called (surprise again) inputContext. Well, turns out if you override this method you can potentially cause all sorts of problems, and Mozilla had done just that to handle complex text input for plugins. Under the 10.4 SDK, however, their code ended up returning a null input context and Unicode input didn't work, so since we don't support plugins anyhow the solution was just to remove it completely ... which took several days more to figure out. The moral of the story is, if you have an NSView that is not responding to setMarkedText or other text input protocol methods, make sure you haven't overridden inputContext or screwed it up somehow.

    I also did some trivial tuning to the libffi glue library to improve the speed of its calls and force it to obey our compiler settings (there was a moment of panic when the 7450 build did not start on the test machines because dyld said XUL was a 970 binary -- libffi had seen it was being built on a G5 and "helpfully" compiled it for that target), backed out some portions of browser chrome that were converted to CoreUI (not supported on 10.4), and patched out the new tab tile page entirely; all new tabs are now blank, like they used to be in previous versions of Firefox and as intended by God Himself. There are also the usual cross-platform HTML5 and CSS improvements you get when we leap from ESR to ESR like this, and graphics are now composited off-main-thread to improve display performance on multiprocessor systems.

    That concludes most of the back office stuff. What about user facing improvements? Well, besides the new blank tabs "feature," we have built-in PDF viewing as promised (I think you'll find this more useful to preview documents and load them into a quicker viewer to actually read them, but it's still very convenient) and Reader View as the biggest changes. Reader View, when the browser believes it can attempt it, appears in the address bar as a little book icon. Click on it and the page will render in a simplified view like you would get from a tool such as Readability, cutting out much of the extraneous formatting. This is a real godsend on slower computers, lemme tell ya! Click the icon again to go back. Certain pages don't work with this, but many will. I have also dragged forward my MP3 decoder support, but see below first, and we have prospectively landed Mozilla bug 1151345 to fix an issue with the application menu (modified for the 10.4 SDK).

    You will also note the new, in-content preferences (i.e., preferences appears in a browser tab now instead of a window, a la, natch, Chrome), and that the default search engine is now Yahoo!. I have not made this default to anything else since we can still do our part this way to support MoCo (but you can change it from the preferences, of course).

    I am not aware of any remaining showstopper bugs, so therefore I'm going ahead with the beta. However, there are some known issues ("bugs" or "features" mayhaps?) which are not critical. None of these will hold up final release currently, but for your information, here they are:

    • If you turn on the title bar, private browsing windows have the traffic light buttons in the wrong position. They work; they just look weird. This is somewhat different than issue 247 and probably has a separate, though possibly related, underlying cause. Since this is purely cosmetic and does not occur in the browser's default configuration, we can ship with this bug present but I'll still try to fix it since it's fugly (plus, I personally usually have the title bar on).

    • MP3 support is still not enabled by default because seeking within a track (except to the beginning) does not work yet. This is the last thing to do to get this support off the ground. If you want to play with it in its current state, however, set tenfourfox.mp3.enabled to true (you will need to create this pref). If I don't get this done by 38.0.2, the pref will stay off until I do, but the rest of it is working already and I have a good idea how to get this last piece functional.

• I'm not sure whether to call this a bug or a feature, but scaling now uses a quick and dirty algorithm for many images and some non-.ico favicons, apparently because we don't have Skia support. It's definitely lower quality, but it has a lot less latency. Images displayed by themselves still use the high-quality built-in scaler, which is not really amenable to the other uses that I can tell. Your call on which is better, though I'm not sure I know how to go back to the old method or if it's even possible anymore.

    • To reduce memory pressure, 31 had closed tab and window undos substantially reduced. I have not done that yet for 38 -- near as I can determine, the more efficient memory management means it is no longer worth it, so we're back to the default 10 and 3. See what you think.

    Builders: take note that you will need to install a modified strip ("strip7") if you intend to make release binaries due to what is apparently a code generation bug in gcc 4.6. If you want to use a different (later) compiler, you should remove the single changeset with the gcc 4.6 compatibility shims -- in the current changeset pack it's numbered 260681, but this number increments in later versions. See our new HowToBuildRightNow38 for the gory details and where to get strip7.

    Localizers: strings are frozen, so start your language pack engines one more time in issue 42. We'd like to get the same language set for 38 that we had for 31, and your help makes it possible. Thank you!

    As I mentioned before, it's probably 70-30 against there being a source parity version after 38ESR because of the looming threat of Electrolysis, which will not work as-is on 10.4 and is not likely to perform well or even correctly on our older systems. (If Firefox 45, the next scheduled ESR, still allows single process operation then there's a better chance. We still need to get a new toolchain up and a few other things, though, so it won't be a trivial undertaking.) But I'm pleased with 38 so far and if we must go it means we go out on a high note, and nothing says we can't keep improving the browser ourselves separate from Mozilla after we split apart (feature parity). Remember, that's exactly what Classilla does, except that we're much more advanced than Classilla will ever be, and in fact Pale Moon recently announced they're doing the same thing. So if 38 turns out to be our swan song as a full-blooded Mozilla tier 3 port, that doesn't mean it's the end of TenFourFox as a browser. I promise! Meanwhile, let's celebrate another year of updates! PowerPC forever!

    Finally, looking around the Power Mac enthusiast world, it appears that SeaMonkeyPPC has breathed its last -- there have been no updates in over a year. We will pour one out for them. On the other hand, Leopard Webkit continues with regular updates from Tobias, and our friendly builder in the land of the Rising Sun has been keeping up with Tenfourbird. We have the utmost confidence that there will be a Tenfourbird 38 in your hands soon as well.

    Some new toys to play with are next up in a couple days.

    This Week In RustThis Week in Rust 85

    Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

    This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

    From the Blogosphere

    Tips & Tricks

    In the News

    New Releases & Project Updates

    • rust-timsort. Rust implementation of the modified MergeSort used in Python and Java.
    • trust. Rust automated test runner.
    • mongo-rust-driver. Mongo Rust driver built on top of the Mongo C driver.
    • rust-ffi-omnibus. A collection of examples of using code written in Rust from other languages.
    • hyper is now at v0.6. An HTTP/S library for Rust.
    • rust-throw. A new experimental rust error handling library, meant to assist and build on existing error handling systems.
    • burrito. A monadic IO interface in Rust.
    • mimty. Fast, safe, self-contained MIME Type Identification for C and Rust.

    What's cooking on master?

    95 pull requests were merged in the last week.

    Breaking Changes

    Now you can follow breaking changes as they happen!

    Other Changes

    New Contributors

    • Andy Grover
    • Brody Holden
    • Christian Persson
    • Cruz Julian Bishop
    • Dirkjan Ochtman
    • Gulshan Singh
    • Jake Hickey
    • Makoto Kato
    • Yongqian Li

    Final Comment Period

    Every week the teams announce a 'final comment period' for RFCs which are reaching a decision. Express your opinions now. This week's RFCs entering FCP are:

    New RFCs

    Upcoming Events

    If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

    Marco BonardoAdd-ons devs heads-up: History API removals in Places

    Alex ClarkPillow 2-9-0 Is Almost Out

    Pillow 2.9.0 will be released on July 1, 2015.

    Pre-release

    Please help the Pillow Fighters prepare for the Pillow 2.9.0 release by downloading and testing this pre-release:

    Report issues

    As you might expect, we'd like to avoid the creation of a 2.9.1 release within 24-48 hours of 2.9.0 due to any unforeseen circumstances. If you suspect such an issue to exist in 2.9.0.dev2, please let us know:

    Thank you!

    John O'DuinnMozilla’s Release Engineering now on Dr Dobbs!

Long time readers of this blog will remember when The Architecture of Open Source Applications (vol2) was published, containing a chapter describing the tools and mindsets used when re-building Mozilla’s Release Engineering infrastructure. (More details about the book, about the kindle and nook versions, and about the Russian version(!).)

    Dr Dobbs recently posted an article here which is an edited version of the Mozilla Release Engineering chapter. As a long time fan of Dr Dobbs, seeing this was quite an honor, even with the sad news here.

    Obviously, Mozilla’s release automation continues to evolve, as new product requirements arise, or new tools help further streamline things. There is still lots of interesting work being done here – for me, top of mind is Task Cluster, and ScriptHarness (v0.1.0 and v0.2.0). Release Engineering at scale is both complex, and yet very interesting – so you should keep watching these sites for more details, and consider if they would also help in your current environment. As they are all open source, you can of course join in and help!

    For today, I just re-read the Dr. Dobbs article with a fresh cup of coffee, and remembered the various struggles we went through as we scaled Mozilla’s infrastructure up so we could quickly grow the company and the community. And then in the middle of it all, found time with armenzg, catlee and lsblakk to write about it all. While some of the technical tools have changed since the chapter was written, and some will doubtless change again in the future, the needs of the business, the company and the community still resonate.

    For anyone doing Release Engineering at scale, the article is well worth a quiet read.

    Roberto A. VitilloA glance at unified FHR/Telemetry

    Lots is changing in Telemetry land. If you do occasionally run data analyses with our Spark infrastructure you might want to keep reading.

    Background

    The Telemetry and FHR collection systems on desktop are in the process of being unified. Both systems will be sending their data through a common data pipeline which has features of both the current Telemetry pipeline and the Cloud Services one that we use to ingest server logs.

    The goals of the unification are to:

    • avoid measuring the same metric in multiple systems on the client side;
    • reduce the latency from the time a measurement occurs until it can be analyzed on the server;
    • increase the accuracy of measurements so that they can be better correlated with factors in the user environment such as the specific build, enabled add-ons, and other hardware or software characteristics;
    • use a common data pipeline for client telemetry and service log data.

    The unified pipeline is currently sending data for Nightly, Aurora and Beta. The classic FHR and Telemetry pipelines are going to keep sending data at the very least until the new unified pipeline has been fully validated. The plan is to land this feature in 40 Release. We’ll also continue to respect existing user preferences. If the user has opted out of FHR or Telemetry, we’ll continue to respect that for the equivalent data sets. Similarly, the opt-out and opt-in defaults will remain the same for equivalent data sets.

    Data format

    A Telemetry ping, stored as a JSON object on the client, encapsulates the data sent to our backend. The main differences between the new unified Telemetry ping format (v4) and the classic Telemetry one (v2) are that:

    • multiple ping types are supported beyond the classic saved-session ping, like the main ping;
    • pings have a common top-level section which contains basic information shared between types, like build-id and channel;
    • pings have an optional environment field which consists of data that is expected to be characteristic of the client's performance and other behavior.

    From an analysis point of view, the most important addition is the main ping, which includes the very same histograms and other performance and diagnostic data as the v2 saved-session pings. Unlike in “classic” Telemetry though, there can be multiple main pings during a single session. A main ping is triggered by different scenarios, which are documented by the reason field (a simplified example ping follows this list):

    • aborted-session: periodically saved to disk and deleted at shutdown – if a previous aborted session ping is found at startup it gets sent to our backend;
    • environment-change: generated when the environment changes;
    • shutdown: triggered when the browser session ends;
    • daily: a session split triggered every 24 hours at local midnight; this is needed to make sure we keep receiving data from clients that have very long sessions.
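
    To make the structure concrete, below is a heavily simplified sketch of what a v4 main ping might look like, written as a Python dict. The field names and nesting here are illustrative assumptions rather than the authoritative schema; consult the ping format documentation for the real thing.

        # Hypothetical, heavily simplified v4 "main" ping.
        # Field names and nesting are illustrative assumptions only.
        example_main_ping = {
            "type": "main",                    # ping type: "main", "saved-session", ...
            "application": {                   # common top-level data shared by all ping types
                "buildId": "20150601000000",
                "channel": "nightly",
            },
            "environment": {},                 # optional: build, settings, add-ons, hardware (elided)
            "payload": {
                "info": {"reason": "daily"},   # why this ping was generated
                "histograms": {},              # same histograms as v2 saved-session pings (elided)
            },
        }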

    Data access through Spark

    Once you connect to a Spark-enabled IPython notebook launched from our self-service dashboard, you will be prompted with a new tutorial based on the v4 dataset. The v4 data is fetched through the get_pings function by passing "v4" as the schema parameter. The following parameters are valid for the new data format:

    • app: an application name, e.g. "Firefox";
    • channel: a channel name, e.g. "nightly";
    • version: the application version, e.g. "40.0a1";
    • build_id: a build id or a range of build ids, e.g. "20150601000000" or ("20150601000000", "20150610999999");
    • submission_date: a submission date or a range of submission dates, e.g. "20150601" or ("20150601", "20150610");
    • doc_type: the ping type, e.g. "main"; set to "saved_session" by default;
    • fraction: the fraction of pings to return; set to 1.0 by default.

    Once you have an RDD, you can further filter the pings down by reason. There is also a new experimental API that returns the history of submissions for a subset of profiles, which can be used for longitudinal analyses.
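
    As a concrete illustration, here is a minimal sketch of such an analysis. It assumes the notebook environment provides a SparkContext named sc and that get_pings is importable from moztelemetry as in the tutorial; the exact module path and signature may differ.

        # Minimal sketch: fetch v4 "main" pings for a Nightly build-id range,
        # then keep only the pings generated at shutdown.
        # Assumes the Spark notebook provides `sc`; module path may differ.
        from moztelemetry.spark import get_pings

        pings = get_pings(sc,
                          app="Firefox",
                          channel="nightly",
                          build_id=("20150601000000", "20150610999999"),
                          doc_type="main",
                          schema="v4",
                          fraction=0.1)   # sample 10% of pings to keep the job small

        # Pings are JSON-like dicts; filter on the payload's reason field.
        shutdown_pings = pings.filter(
            lambda p: p.get("payload", {}).get("info", {}).get("reason") == "shutdown")

        print(shutdown_pings.count())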


    Zack WeinbergGoogle Voice Search and the Appearance of Trustworthiness

    Last week there were several bug reports [1] [2] [3] about how Chrome (the web browser), even in its fully-open-source Chromium incarnation, downloads a closed-source, binary extension from Google’s servers and installs it, without telling you it has done this, and moreover this extension appears to listen to your computer’s microphone all the time, again without telling you about it. This got picked up by the trade press [4] [5] [6] and we rapidly had a full-on Internet panic going.

    If you dig into the bug reports and/or the open source part of the code involved, which I have done, it turns out that what Chrome is doing is not nearly as bad as it looks. It does download a closed-source binary extension from Google, install it, and hide it from you in the list of installed extensions (technically there are two hidden extensions involved, only one of which is closed-source, but that’s only a detail of how it’s all put together). However, it does not activate this extension unless you turn on the voice search checkbox in the settings panel, and this checkbox has always (as far as I can tell) been off by default. The extension is labeled, accurately, as having the ability to listen to your computer’s microphone all the time, but of course it does not get to do this until it is activated.

    As best anyone can tell without access to the source, what the closed-source extension actually does when it’s activated is monitor your microphone for the code phrase OK Google. When it detects this phrase it transmits the next few words spoken to Google’s servers, which convert it to text and conduct a search for the phrase. This is exactly how one would expect a voice search feature to behave. In particular, a voice-activated feature intrinsically has to listen to sound all the time, otherwise how could it know that you have spoken the magic words? And it makes sense to do the magic word detection with code running on the local computer, strictly as a matter of efficiency. There is even a non-bogus business reason why the detector is closed source; speech recognition is still in the land where tiny improvements lead to measurable competitive advantage.
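    For illustration only, a hotword-gated capture loop might be structured something like the sketch below. This is emphatically not Google’s code (which is closed source); every name here is an invented placeholder.

        # Hypothetical sketch of a hotword-gated voice search loop.
        # Not Google's actual code; all names are invented placeholders.
        def voice_search_loop(mic, detector, backend):
            """Listen locally; transmit audio only after the hotword fires."""
            while True:
                frame = mic.read_frame()             # audio is examined locally...
                if detector.matches_hotword(frame):  # ...and only the local detector sees it
                    query = mic.record_seconds(5)    # capture the next few spoken words
                    backend.transcribe_and_search(query)  # only this snippet is uploaded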

    So: this feature is not actually a massive privacy violation. However, Google could and should have put more care into making this not appear to be a massive privacy violation. They wouldn’t have had mud thrown at them by the trade press about it, and the general public wouldn’t have had to worry about it. Everyone wins. I will now dissect exactly what was done wrong and how it could have been done better.

    It was a diagnostic report, intended for use by developers of the feature, that gave people the impression the extension was listening to the microphone all the time. Below is a screen shot of this diagnostic report. You can see it on your own copy of Chrome by typing chrome://voicesearch into the URL bar; details will probably differ a little (especially if you’re not using a Mac).

    [Screenshot: the Google Voice Search diagnostic report, taken on Chrome 43 running on Mac OS X. The most important lines of text are 'Microphone: Yes', 'Audio Capture Allowed: Yes', 'Hotword Search Enabled: No', and 'Extension State: ENABLED'.]

    Google’s first mistake was not having anyone check this over for what it sounds like it means to someone who isn’t familiar with the code. It is very well known that when faced with a display like this, people who aren’t familiar with the code will pick out whatever bits they think they understand and ignore everything else, even if that means they completely misunderstand it. [7] In this case, people see Microphone: Yes and Audio Capture Allowed: Yes and maybe also Extension State: ENABLED and assume that this means the extension is actively listening right now. (What the developers know it means is this computer has a microphone, the extension could listen to it if it had been activated, and it’s connected itself to the checkbox in the preferences so it can be activated. And it’s hard for them to realize that anyone could think it would mean something else.)

    They didn’t have anyone check it because they thought, well, who’s going to look at this who isn’t a developer? Thing is, it only takes one person to look at it, decide it looks hinky, mention it online, and now you have a media circus on your hands. Obscurity is no excuse for not doing a UX review.

    Now, mistake number two becomes evident when you consider what this screen ought to say in order not to scare people who haven’t turned the feature on (and maybe this is the first they’ve heard of it even): something like

    Voice Search is inactive.

    (A couple of sentences about what Voice Search is and why you might want it.) To activate Voice Search, go to the preferences screen and check the box.

    It would also be okay to have a duplicate checkbox right there on this screen, and to have all the same debugging information show up after you check the box. But wait—how do developers diagnose problems with downloading the extension, which happens before the box has been checked? And that’s mistake number two. The extension should not be downloaded until the box is checked. I am not aware of any technical reason why that couldn’t have been the way it worked in the first place, and it would go a long way to reassure people that this closed-source extension can’t listen to them unless they want it to. Note that even if the extension were open source it might still be a live question whether it does anything hinky. There’s an excellent chance that it’s a generic machine recognition algorithm that’s been trained to detect OK Google, which training appears in the code as a big lump of meaningless numbers—and there’s no way to know whether those numbers train it to detect anything besides OK Google. Maybe if you start talking about bombs the computer just quietly starts recording…

    Mistake number three, finally, is something they got half-right. This is not a core browser feature. Indeed, it’s hard for me to imagine any situation where I would want this feature on a desktop computer. Hands-free operation of a mobile device, sure, but if my hands are already on a keyboard, that’s faster and less bothersome for other people in the room. So, Google implemented this frill as a browser extension—but then they didn’t expose that in the user interface. It should be an extension, and it should be visible as such. Then it needn’t take up space in the core preferences screen, even. If people want it they can get it from the Chrome extension repository like any other extension. And that would give Google valuable data on how many people actually use this feature and whether it’s worth continuing to develop.

    Chris CooperReleng & Relops weekly highlights - June 26, 2015

    Friday, foxyeah!

    It’s been a very busy and successful work week here in beautiful Whistler, BC. People are taking advantage of being in the same location to meet, plan, hack, and socialize. A special thanks to Jordan for inviting us to his place in beautiful Squamish for a BBQ!

    (Note: No release engineering folks were harmed by bears in the making of this work week.)

    tl;dr

    Whistler: Keynotes were given by our exec team and we learned we’re focusing on quality, dating our users to get to know them better, and that WE’RE GOING TO SPACE!! We also discovered that at LEGO, Everything is Awesome now that they’re thinking around the box instead of inside or outside of it. Laura’s GoFaster project sounds really exciting, and we got a shoutout from her on the way we manage the complexity of our systems. There should be internal videos of the keynotes up next week if you missed them.

    Internally, we talked about Q3 planning and goals, met with our new VP, David, met with our CEO, Chris, presented some lightning talks, and did a bunch of cross-group planning/hacking. Dustin, Kim, and Morgan talked to folks at our booth at the Science Fair. We had a cool banner and some cards (printed by Dustin) that we could hand out to tell people about try. SHIP IT!

    Taskcluster: Great news: the TaskCluster team is joining us in Platform! There was lots of evangelism about TaskCluster and interest from a number of groups. There were some good discussions about operationalizing TaskCluster as we move towards using it for Firefox automation in production. Pete also demoed the Generic Worker!

    Puppetized Windows in AWS: Rob got the nxlog puppet module done. Mark is working on hg and NSIS puppet modules in lieu of upgrading to MozillaBuild 2.0. Jake is working on the metric-collective module. The windows folks met to discuss the future of windows package management. Q is finishing up the performance comparison testing in AWS. Morgan, Mark, and Q deployed runner to all of the try Windows hosts and one of the build hosts.

    Operational: Amy has been working on some additional nagios checks. Ben, Rail, and Nick met and came up with a solid plan for release promotion. Rail and Nick worked on releasing Firefox 39 and two versions of Firefox ESR. Hal spent much of the week working with IT. Dustin and catlee got some work done on migrating treestatus to relengapi. Hal, Nick, Chris, and folks from IT, sheriffs, and dev-services debugged problems with b2g jobs. Callek deployed a new version of slaveapi. Kim, Jordan, Chris, and Ryan worked on a plan for addons. Kim worked with some new buildduty folks to bring them up to speed on operational procedures.

    Thank you all, and have a safe trip home!

    And here are all the details:

    Taskcluster

    • We got to spend some quality time with our new TaskCluster teammates, Greg, Jonas, Wander, Pete, and John. We’re all looking forward to working together more closely.
    • Morgan convinced lots of folks that Taskcluster is super amazing, and now we have a lot of people excited to start hacking on it and moving their workloads to it.
    • We put together a roadmap for TaskCluster in Trello and identified the blockers to turning Buildbot Scheduling off.

    Puppetized Windows in AWS

    • Rob has pushed out the nxlog puppet module to get nxlog working in scl3 (bug 1146324). He has a follow-on bug to modify the ec2config file for AWS to reset the log-aggregator host so that we’re aggregating to the local region instead of where we instantiate the instance (like we do with linux). This will ensure we have Windows system logs in AWS (bug 1177577).
    • The new version of MozillaBuild was released, and our plan was to upgrade to that on Windows (bug 1176111). An attempt at that showed that the way hg was compiled requires an external dll (likely something from cygwin), and needs to be run from bash. Since this would require significant changes, we’re going to install the old version of MozillaBuild and put upgrades of hg (bug 1177740) and NSIS on top of that (like we’re doing with GPO now). Future work will include splitting out all the packages and not using MozillaBuild.
    • Jake is working on the puppet module for metric-collective, our host-level stats-gathering software for Windows (similar to collectd on linux/OS X). This will give us Windows system metrics in graphite in AWS (bug 1097356).
    • We met to talk about Windows packaging and how to best integrate with puppet. Rob is starting to investigate using NuGet and Chocolatey to handle this (bugs 1175133 and 1175107).
    • Q spun up some additional instance types in AWS and is in the process of getting some more data for Windows performance after the network modifications we made earlier (bug 1159384).
    • Jordan added a new puppetized path for all windows jobs, fixing a problem we were seeing with failing sendchanges on puppetized machines (bug 1175701).
    • Morgan, Mark, and Q deployed runner to all of the try Windows hosts (bug 1055794).

    Operational

    • The relops team met to perform a triage of their two bugzilla queues and closed almost 20% of the open bugs as either already done or wontfix based on changes in direction.
    • Amy has been working on some additional nagios checks for some Windows services and for AWS subnets filling up (bugs 1164441 and 793293).
    • Ben, Rail, and Nick met and came up with a solid plan for the future of release promotion.
    • Rail and Nick worked on getting Firefox 39 (and the related ESR releases) out to our end users.
    • Hal spent lots of time working with IT and the MOC, improving our relationships and workflow.
    • Dustin and catlee did some hacking to start the porting of treestatus to relengapi (one of the blockers to moving us out of PHX1).
    • Hal, Nick, Chris, and folks from IT, sheriffs, and dev-services tracked down an intermittent problem with the repo-tool impacting only b2g jobs (bug 1177190).
    • Callek deployed the new version of slaveapi to support slave loans using the AWS API (bug 1177932).
    • Kim, Jordan, Chris, and Ryan discussed the initial steps for future addon support.
    • Coop (hey, that’s me) held down the buildduty fort while everyone else was in Whistler.

    See you next week!

    Cameron Kaiser31.8.0 available (say goodbye)

    31.8.0 is available, the last release for the 31 series (release notes, downloads, hashes). Download it and give it one last spin. 31 wasn't a high water mark for us in terms of features or performance, but it was pretty stable and did the job, so give it a salute as it rides into the sunset. It finalizes Monday PM Pacific time as usual.

    I'm trying very hard to get you the 38.0.1 beta by sometime next week, probably over the July 4th weekend assuming the local pyros don't burn my house down with errant illegal fireworks, but I keep hitting showstoppers while trying to dogfood it. First it was fonts and then it was Unicode input, and then the newtab crap got unstuck again, and then the G5 build worked but the 7450 build didn't, and then, and then, and then. I'm still working on the last couple of these major bugs and then I've got some additional systems to test on before I introduce them to you. There are a couple minor bugs that I won't fix before the beta because we need enough time for the localizers to do their jobs, and MP3 support is present but is still not finished, but there will be a second beta that should address most of these problems prior to our launch with 38.0.2. Be warned of two changes right away: no more tiles in the new tab page (I never liked them anyway, but they require Electrolysis now, so that's a no-no), and Check for Updates is now moved to the Help menu, congruent with regular Firefox, since keeping it in its old location now requires substantial extra code that is no longer worth it. If you can't deal with these changes, I will hurt you very slowly.

    Features that did not make the cut: Firefox Hello and Pocket, and the Cisco H.264 integration. Hello and Pocket are not in the ESR, and I wouldn't support them anyway; Hello needs WebRTC, which we still don't really support, and you can count me in with the people who don't like a major built-in browser component depending exclusively on a third-party service (Pocket). As for the Cisco integration, there will never be a build of those components for Tiger PowerPC, so there. Features that did make the cut, though, are pdf.js and Reader View. Although PDF viewing is obviously pokier compared to Preview.app, it's still very convenient, generally works well enough now that we have IonPower backing it, and is much safer. Reader View, on the other hand, works very well on our old systems. You'll really like it especially on a G3 because it cuts out a lot of junk.

    After that there are two toys you'll get to play with before 38.0.2 since I hope to introduce them widely with the 38 launch. More on that after the beta, but I'll whet your appetite a little: although the MacTubes Enabler is now officially retired, since as expected the MacTubes maintainer has thrown in the towel, thanks to these projects the MTE has not one but two potential successors, and one of them has other potential applications. (The QuickTime Enabler soldiers on, of course.)

    Last but not least, I have decided to move the issues list and the wiki from Google Code to Github, and leave downloads with SourceForge. That transition will occur sometime late July before Google Code goes read-only on August 24th. (Classilla has already done this invisibly but I need to work on a stele so that 9.3.4 will be able to use Github effectively.) In the meantime, I have already publicly called Google a bunch of meaniepants and poopieheads for their shameful handling of what used to be a great service, so my work here is done.

    Gervase MarkhamPromises: Code vs. Policy

    A software organization wants to make a promise about, for example, its data practices: say, “We don’t store information on your location”. They can keep that promise in two ways: code or policy.

    If they were keeping it in code, they would need to be open source, and would simply make sure the code didn’t transmit location information to the server. Anyone can review the code and confirm that the promise is being kept. (It’s sometimes technically possible for the company to publish source code that does one thing, and binaries which do another, but if that was spotted, there would be major reputational damage.)
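
    As a toy illustration (invented for this post, not any real product's code), a reviewer auditing an open-source client could look for something like the following whitelist and confirm that location data can never reach the wire:

        # Invented example of a promise kept in code: the report sent to the
        # server is built from an explicit whitelist, so a reviewer can verify
        # that location is never transmitted.
        ALLOWED_FIELDS = {"app_version", "locale", "crash_count"}

        def build_report(client_data):
            """Return only whitelisted fields; location never leaves the client."""
            return {k: v for k, v in client_data.items() if k in ALLOWED_FIELDS}

        report = build_report({
            "app_version": "39.0",
            "locale": "en-US",
            "crash_count": 0,
            "location": (48.4284, -123.3656),  # present locally, but never sent
        })
        assert "location" not in report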

    If they were keeping it in policy, they would add “We don’t store information on your location” to their privacy policy or Terms of Service. The documents can be reviewed, but in general you have to trust the company that they are sticking to their word. This is particularly so if the policy states that it does not create a binding obligation on the company. So this is a function of your view of the company’s reputation.

    Geeks like promises kept in code. They can’t be worked around using ambiguities in English, and they can’t be changed without the user’s consent (to a software upgrade). I suspect many geeks think of them as superior to promises kept in policy – “that’s what they _say_, but who knows?”. This impression is reinforced when companies are caught sticking to the letter but not the spirit of their policies.

    But some promises can’t be kept in code. For example, you can’t simply not send the user’s IP address, which normally gives coarse location information, when making a web request. More complex or time-bound promises (“we will only store your information for two weeks”) also require policy by their nature. Policy is also more flexible, and using a policy promise rather than a code promise can speed time-to-market due to reduced software complexity and increased ability to iterate.

    Question: is this distinction, about where to keep your promises, useful when designing new features?

    Question: is it reasonable or misguided for geeks to prefer promises kept in code?

    Question: if Mozilla or its partners are using promises kept in policy for e.g. a web service, how can we increase user confidence that such a policy is being followed?