Doug Belshaw: Today is my last day at Mozilla

TL;DR: I’m leaving Mozilla as a paid contributor because, as of next week, I’ll be a full-time consultant! I’ll write about that in a separate blog post.


Around four years ago, I stumbled across a project that the Mozilla Foundation was running with P2PU. It was called ‘Open Badges’ and it really piqued my interest. I was working in Higher Education at the time and finishing off my doctoral thesis. The prospect of being able to change education by offering a different approach to credentialing really intrigued me.

I started investigating further, blogging about it, and getting more people interested in the Open Badges project. A few months later, the people behind MacArthur’s Digital Media and Learning (DML) programme asked me to be a judge for the badges-focused DML Competition. While I was in San Francisco for the judging process I met Erin Knight, then Director of Learning at Mozilla, in person. She asked if I was interested in working on her team. I jumped at the chance!

During my time at Mozilla I’ve worked on Open Badges, speaking and running keynotes at almost as many events as there are weeks in the year. I’ve helped bring a Web Literacy Map (originally ‘Standard’) into existence, and I’ve worked on various projects and with people who have changed my outlook on life. I’ve never come across a community with such a can-do attitude.

This June would have marked three years as a paid contributor to the Mozilla project. It was time to move on so as not to let the grass grow under my feet. Happily, because Mozilla is a global non-profit with a strong community that works openly, I’ll still be a volunteer contributor. And because of the wonders of the internet, I’ll still have a strong connection to the network I built up over the last few years.

I plan to write more about the things I learned and the things I did at Mozilla over the coming weeks. For now, I just want to thank all of the people I worked with over the past few years, and wish them all the best for the future. As of next week I’ll be a full-time consultant. More about that in an upcoming post!

Nicholas Nethercote: On vacation for a month

I’m taking a month of vacation. Today is my last working day for March, and I will be back on April 30th. While I won’t be totally incommunicado, for the most part I won’t be reading email. While I’m gone, any management-type inquiries can be passed on to Naveed Ihsannullah.

Air Mozilla: VR Cinema Meetup #3

VR Cinema Meetup #3 VR Cinema's third event will showcase new and exciting films for virtual reality. Come see these groundbreaking projects before they're released to the public and...

Morgan Phillips: Whoop, Whoop: Pull Up!

Since December 1st, 1975, by FAA mandate, no plane has been allowed to fly without a "Ground Proximity Warning System" (GPWS) or one of its successors.[1] For good reason, too: it has been estimated that 75% of the fatalities just one year prior (1974) could have been prevented by the system.[2]

In a slew of case studies, reviewers reckoned that a GPWS may have prevented crashes by giving pilots additional time to act before they smashed into the ground. Often, the GPWS's signature "Whoop, Whoop: Pull Up!" would have sounded a full fifteen seconds before any other alarms triggered.[3]

Instruments like this are indispensable to aviation because pilots operate in an environment outside of any realm where human intuition is useful. Lacking augmentation, our bodies and minds are simply not suited to the task of flying airliners.

For the same reason, thick layers of instrumentation and early warning systems are necessary for managing technical infrastructure. Like pilots, without proper tooling, system administrators often plow their vessels into the earth....

The St. Patrick's Day Massacre

Case in point, on Saint Patrick's Day we suffered two outages which could have likely been avoided via some additional alerts and a slightly modified deployment process.

The first outage was caused by the accidental removal of a variable from a config file which one of our utilities depends on. Our utilities are all managed by a dependency system called runner, and when any task fails the machine is prevented from doing work until it succeeds. This all-or-nothing behavior is correct, but should not lead to closed trees....

On our runner dashboards, the whole event looked like this (the smooth decline on the right is a fix being rolled out with ansible):


The second, and most severe, outage was caused by an insufficient wait time between retries upon failing to pull from our mercurial repositories.

There was a temporary disruption in service which led to slaves failing to clone a repository, and, unfortunately, a large number of slaves were affected and began retrying the task. As a result, we ended up DDoSing ourselves.
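
The usual mitigation for this failure mode is capped exponential backoff with random jitter between retries, so a fleet of slaves doesn't hammer the server in lockstep. Here's a rough sketch of the idea in Python (a generic illustration, not runner's actual code; clone_repo and the timing constants are made up):

    import random
    import time

    def retry_with_backoff(task, attempts=8, base=30, cap=900):
        """Retry task(), sleeping longer after each failure (seconds)."""
        for attempt in range(attempts):
            try:
                return task()
            except Exception:
                if attempt == attempts - 1:
                    raise
                delay = min(cap, base * (2 ** attempt))
                time.sleep(random.uniform(0, delay))  # "full jitter" spreads slaves out

    # Hypothetical usage: clone_repo is whatever pulls from mercurial.
    # retry_with_backoff(lambda: clone_repo("https://hg.mozilla.org/mozilla-central"))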

From the repository's point of view, the explosion looked like this:


Then, from runner's point of view, the retrying task:



In both of these cases, despite having the data (via runner logging), we missed the opportunity to catch the problem before it caused system downtime. Furthermore, especially in the first case, we could have avoided the issue even earlier by testing our updates and rolling them out gradually.

Avoiding Future Massacres

After these fires went out, I started working on a RelEng version of the Ground Proximity Warning System, to keep us from crashing in the future. Here's the plan:

1.) Bug 1146974 - Add automated alerting for abnormally high retries (in runner).

In both of the above cases, we realized that things had gone amiss based on job backlog alerts. The problem is, once we have a large enough backlog to trigger those alarms, we're already hosed.

The good news is, the backlog is preceded by a spike in runner retries. Setting up better alerting here should buy us as much as an extra hour to respond to trouble.

We're already logging all task results to influxdb, but alerting on that data would require a custom nagios script. Instead of stringing that together, I opted to write runner output to syslog, where it's aggregated by papertrail.
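
Getting structured task results into syslog from Python is only a few lines; a minimal sketch (not runner's actual logging code, and the message format here is invented):

    import logging
    import logging.handlers

    log = logging.getLogger("runner")
    log.setLevel(logging.INFO)
    # /dev/log is the local syslog socket on most Linux hosts; papertrail
    # then picks these lines up from the aggregated syslog stream.
    log.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))

    def report(task_name, result, attempt):
        # e.g. "task=hg_clone result=RETRY attempt=3"
        log.info("task=%s result=%s attempt=%d", task_name, result, attempt)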

Using papertrail, I can grep for runner retries and build alarms from the data. Below is a screenshot of our runner data in the papertrail dashboard:



2.) Add automated testing and tiered roll-outs to golden AMI generation

Finally, when we update our slave images, the new version is not rolled out in a controlled fashion. Instead, as old images die (3 hours after the new image is released), new ones are launched on the latest version. Because of this, every deploy is an all-or-nothing affair.

By the time we notice a problem, almost all of our hosts are using the bad instance and rolling back becomes a huge pain. We also do rollbacks by hand. Nein, nein, nein.

My plan here is to launch new instances with a weighted chance of picking up the latest ami. As we become more confident that things aren't breaking -- by monitoring the runner logs in papertrail/influxdb -- we can increase the percentage.

The new process will work like this:

  • 00:00 - new AMI generated
  • 00:01 - new slaves launch with a 12.5% chance of taking the latest version.
  • 00:45 - new slaves launch with a 25% chance of taking the latest version.
  • 01:30 - new slaves launch with a 50% chance of taking the latest version.
  • 02:15 - new slaves launch with a 100% chance of taking the latest version.

Lastly, if we want to roll back, we can just lower the percentage down to zero while we figure things out. This also means that we can create sanity checks which roll back bad AMIs without any human intervention whatsoever.

The intention being, any failure within the first 90 minutes will trigger a rollback and keep the doors open....
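
The weighted choice at the heart of this is tiny; here is a sketch with hypothetical names (the real thing would live in the instance-launching code and read the weight from a config that can be dialed up, or straight back down to zero for a rollback):

    import random

    # Ramp-up schedule from above: minutes since the AMI was generated -> weight.
    SCHEDULE = [(0, 0.125), (45, 0.25), (90, 0.5), (135, 1.0)]

    def latest_weight(minutes_since_release):
        weight = 0.0
        for start, w in SCHEDULE:
            if minutes_since_release >= start:
                weight = w
        return weight

    def choose_ami(latest_ami, previous_ami, minutes_since_release):
        """Pick the AMI for a newly launched slave."""
        if random.random() < latest_weight(minutes_since_release):
            return latest_ami
        return previous_ami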

Daniel Pocock: WebRTC: DruCall in Google Summer of Code 2015?

I've offered to help mentor a Google Summer of Code student to work on DruCall. Here is a link to the project details.

The original DruCall was based on SIPml5 and released in 2013 as a proof-of-concept.

It was later adapted to use JSCommunicator as the webphone implementation. JSCommunicator itself was updated by another GSoC student, Juliana Louback, in 2014.

It would be great to take DruCall further in 2015. Here are some of the possibilities that are achievable in GSoC:

  • Updating it for Drupal 8
  • Support for logged-in users (currently it just makes anonymous calls, like a phone box)
  • Support for relaying shopping cart or other session cookie details to the call center operative who accepts the call

Help needed: could you be a co-mentor?

My background is in real-time and server-side infrastructure and I'm providing all the WebRTC SIP infrastructure that the student may need. However, for the project to have the most impact, it would also be helpful to have some input from a second mentor who knows about UI design, the Drupal way of doing things and maybe some Drupal 8 experience. Please contact me ASAP if you would be keen to participate either as a mentor or as a student. The deadline for student applications is just hours away but there is still more time for potential co-mentors to join in.

WebRTC at mini-DebConf Lyon in April

The next mini-DebConf takes place in Lyon, France on April 11 and 12. On the Saturday morning, there will be a brief WebRTC demo and there will be other opportunities to demo or test it and ask questions throughout the day. If you are interested in trying to get WebRTC into your web site, with or without Drupal, please see the RTC Quick Start guide.

Armen Zambrano: mozci 0.4.0 released - Many bug fixes and improved performance

For the release notes with all the hyperlinks, go here.

NOTE: I did a 0.3.1 release but the right number should have been 0.4.0

This release does not add any major features, however, it fixes many issues and has much better performance.

Many thanks to @adusca, @jmaher and @vaibhavmagarwal for their contributions.

Features:

  • An alltalos.py script has been added
  • Issue #69 - Generate graph of builds to testers
  • Added flake8 support - Remove pyflakes and pep8
  • Allow skipping revisions on a list (09f7138)
  • Issue #61 - Rename trigger_range.py to trigger.py

Fixes:

  • All the documentation and roadmap have been polished
  • Issue #90 - Do not trigger builds multiple times if we are intending the test jobs to be triggered multiple times
  • Issue #94 - Load list of repositories from disk only once
  • Issue #117 - gaia-try builders are always upstream builders
  • Determine a running job correctly (068b5ee)
  • Issue #142 - Loading buildjson files from disk is now only done once
  • Issue #135 - Remove buildjson files which have fallen out of date
  • Issue #146 - If the buildapi information about a build is corrupted, trigger that build again
  • Some DONTBUILD pushes can have buildapi support (dcb942f)
  • Issue #120 - Prevent triggering more build jobs than necessary


For all changes visit: 0.3.0...0.4.0


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air Mozilla: German speaking community bi-weekly meeting

German speaking community bi-weekly meeting. Bi-weekly meeting of the German-speaking community (Zweiwöchentliches Meeting der deutschsprachigen Community).

Air Mozilla: Flowbox.io & Luna

Flowbox.io & Luna Woyciech Danilo from flowbox.io talks about their new programming language, Luna. Flowbox develops professional video compositing software, which is powered by a new programming language...

Emma Irwin: Opensource.com Article on Mozilla Community Education

Super excited to share my post published on opensource.com for Open Education Week: “Mozilla cares for community with educational resources“.

 

Mozilla Reps Community: New Rep Mentors, welcome!

Dear Reps Planet,

The council is excited to share with you our second group of new Mozilla Rep Mentors this year.

These are Reps the council has recognized as being as good at inspiring and empowering others as they are at leading, both globally and locally, in their communities.

As mentorship is core to the program, we are very grateful they have agreed to take on this new responsibility.

A crucial role in the Mozilla Reps ecosystem is that of a mentor. We strive for every Rep to become a mentor for the program to become self-sustaining and for Reps to play a central role in our ambitious goals for growing and enabling the Mozilla Community. We’ve just accepted eight new mentors, bringing the current total to 54.

Our new mentors are:

Please join us in congratulating our new Mozilla Rep Mentors – via this thread on Discourse
:)

Reps Mentor Role Description:

  • Mozilla Reps recognizes that our primary goals are best reached through the support, encouragement, and empowerment of community through mentorship. Mentoring is a process for the informal transmission of knowledge, made possible through regular and supportive interaction.
  • We encourage mentors to be as open to learning from their mentees as they are to teaching, for the benefit and growth of both individuals and the program as a whole.

Welcome New Reps Mentors!

Air Mozilla: Community Education Call

Community Education Call The Community Education Working Group exists to merge ideas, opportunities, efforts and impact across the entire project through Education & Training.

Monty Montgomery: Daala Blog-Like Update: Bug or feature? [or, the law of Unintentionally Intentional Behaviors]

Codec development is often an exercise in tracking down examples of "that's funny... why is it doing that?" The usual hope is that unexpected behaviors spring from a simple bug, and finding bugs is like finding free performance. Fix the bug, and things usually work better.

Often, though, hunting down the 'bug' is a frustrating exercise in finding that the code is not misbehaving at all; it's functioning exactly as designed. Then the question becomes a thornier issue of determining if the design is broken, and if so, how to fix it. If it's fixable. And the fix is worth it.

[continue reading at Xiph.Org....]

Matt Thompson: Mozilla Foundation March 2015 Board Meeting

What’s happening at the Mozilla Foundation? This post contains the presentation slides from our recent Board Meeting, plus an audio interview with Executive Director Mark Surman. It provides highlights from 2014, a brief summary of Mozilla’s 2015 plan, and a progress report on what we’ve achieved over the past three months.

What we did in 2014

  • Grew contributors and ground game. (10,077 active contributors total.)
  • Prototyped new Webmaker mobile product
  • Expanded community programs by 3x

[Slides 4, 5 and 7 from the March 2015 board deck]

Mozilla’s 2015 Plan

Mozilla-wide goals: grow long-term relationships that help people and promote the open web. By building product and empowering people.

Webmaker+ goal: Expand participation in Webmaker through new software and on the ground clubs.

Building Mozilla Learning

By 2017, we’ve built Mozilla Learning: a global classroom and lab for the citizens of the web. Part community, part academy, people come to Mozilla Learning to unlock the power of the web for themselves, their organizations and the world.

2015 Mozilla Foundation goals

  • Deepen learning networks (500 cities)
  • Build mass appeal learning product (250k Monthly Active Users)
  • Craft ambitious Mozilla Learning and community strategy

Q1 Mozilla Foundation highlights

  • Major victory in US net neutrality, with Mozilla getting 330k people to sign a petition.
  • Launched Webmaker app at Mobile World Congress. Strong interest from partners, possible link to Orange FirefoxOS launch in Africa and Middle East.

[Slides 19, 20 and 21 from the March 2015 board deck]

Gervase Markham: Top 50 DOS Problems Solved: Shrinking Hard Disk

Q: My hard disk seems to be getting smaller! There is a megabyte less free space than there was a month ago, yet I have not saved anywhere near 1MB’s worth of files. What’s going on?

A: This is quite a common problem, but most sufferers don’t realise they’ve got it. What happens is that some of the free space gets allocated to a non-existent file.

In other words the disk filing system has, in your case, a megabyte allocated to one or more files that don’t have a directory entry. They cannot therefore be seen with the DIR command, nor deleted.

Fortunately it is possible to turn these lost chains, as they are called, back into real files which can then be seen and deleted in the normal way. Simply type this command:

CHKDSK /F

If you have any lost chains, Chkdsk will tell you so and ask you if you want to convert them into files. Answer ‘Y’.

The recovered data will appear in the root directory as files named FILE0000.CHK, FILE0001.CHK, FILE0002.CHK… which you can then examine and delete in the normal way.

Mozilla Science Lab: Ask Us Anything: Lessons from the Local Research Coding Groups Panel Discussion

On Tuesday, the Science Lab conducted its first Ask Us Anything on the MSL Forum, organized by Noam Ross. The topic was lessons learned in running local study groups, users' groups, hacky hours and other meetups for researchers using and writing code; many thanks go out to the seven panelists who were available to answer questions:

This was a tremendously successful event, with a sustained conversation of more than a post per minute for two full hours. A lot of interesting ideas came out of the discussion; I summarize a few of them here, with more detailed discussion below. Also, be sure to check out the full thread.

Summary

A few great ideas for study groups can be distilled from this event:

  • Get your event information in front of a few hundred people regularly; 10% attendance is normal.
  • Involve your institution in communicating about your event if possible.
  • Always have an exit strategy: how will you pass the baton to the next cohort of organizers?

And for online AMA-style events:

  • Make it a panel discussion; this creates the space for the community to ask questions, but keeps the thread lively and substantial with discussions between experienced community members.
  • Make it a time-limited event; this encourages active participation and re-creates some of the energy usually found only at conferences and in-person meetups.

Sticking Power

One of the first things the thread discussed was one of the most common problems for any regular community event: sustainability. How do we get people coming out and participating, month after month, and maintain momentum?

The panel quickly zeroed in on an interesting challenge: will interest be inspired and sustained by highly targeted skills and tools trainings, or will keeping things as general as possible appeal to a wider audience? Highly specific material will be most attractive to the small group of people already interested in it, while general topics can leave a potential attendee unclear on how relevant they'll be, even if they apply, in principle, to a wider range of people.

This led to an important observation: the bigger the pool of people a study group is communicating with, the better its attendance will be. Panelists seemed to have a bit more success with specific, clearly applicable material; what allowed these groups to keep attendance up despite getting into the nitty gritty was developing a large audience of people aware of their activities. Numbers seemed to hover around 10% attendance if we compare the number of actual attendees to the size of mailing lists; but with a large audience (critical mass seemed to be around 200 people), there's sure to be a cohort of people interested in whichever specific topic the group wants to take up.

But what about the early days, before a new group has gotten in front of that first 200? Fiona and Jeff made a key observation: stick to it, even if the first couple of events are just you and one or two other people. It takes time for word of mouth to spread, time for people to make up their minds that they’re comfortable dipping their toe into something like a meetup group – and, worst case, you’ve set aside some time to get some of your own work done and have a beer.

Finally on the topic of sustainability, another common concern that came up was the relationship of organizers to the host institution; post-docs and students move on after only a few short years, and without someone to pick up the torch, efforts can fizzle out. The panel agreed that it’s crucial for senior organizers to cultivate relationships with people who they can hand off to in future, but this calls out another key design question: how can we design a really smooth hand-off procedure between generations of organizers? This is a long term goal a bit beyond the concerns of groups just getting started, but I think with some savvy design, this process can be made quite smooth; more of my own ideas on this will be forthcoming on this blog very soon.

Communication

We need that pool of 200 people thinking about our event – how do we assemble them to begin with?

Organizers found, perhaps surprisingly, that their attendees were pretty quiet on Twitter, and didn’t generate much conversation there, although Twitter might be more effective as a ‘push’ platform, to let people know about events and content. More successful were blogs and mailing lists; panelists cited the familiarity of these formats to most researchers.

A novel approach that a few of the university-based groups took was to approach departments for inclusion in departmental newsletters and welcome packages for new students. Not only do these communication channels typically already exist in most institutions, they can put a group in front of a large number of potentially interested people quickly, and they lend a degree of inclusion in the establishment that helps catch people's attention.

Novel Ideas

One thing I love about getting a bunch of people together to talk is that novel ideas always come out. One of my favorites was a whole other flavor of event that a study group could put on: Fiona Tweedie described 'Research Speed Dating', an event where a bunch of people set up short demos of tools they use in their research, and attendees circulate among them, exploring the tools in short, five-minute introductions to see if they might be interested in looking deeper into them at a future meetup. Topics that garner a lot of interest are chosen for deeper dives at future events, and prospective participants get to meet organizers and start developing connections in a relatively no-pressure atmosphere.

Another observation I found compelling from the discussion came from Rayna Harris – graduate school often involves working on the same project for years, and the singular focus can be maddening after a while. It’s really refreshing to have a project that comes in little, month-long bites; from announcing a meetup to delivering can easily occupy only a few weeks, giving a sense of delivery and completion on a much faster cadence than research naturally provides.

Meta-AMA

A number of people also asked me about the AMA format itself; I think it was a big success, and it was largely thanks to some design decisions Noam Ross made when we were setting this event up:

  • Have a panel of people to discuss the topic at hand. This worked very well, since even when there weren’t newcomers to ask questions, the panelists all talked amongst themselves, which led to some really deep and insightful questions and answers from old hands in the space. We had a seven-person panel, and everyone participated and seemed heard.
  • Put it all on one thread. I admit, I had some misgivings about having seven parallel conversations in one thread. It was about as chaotic as I imagined, but Noam was right, it was actually a good thing; it enhanced the panel’s ability to interact and make this as much a panel discussion as an AMA – call it an open panel discussion.

A remarkable thing about this event was that the same sort of skill and knowledge sharing that happens so naturally at a conference, and that I've been trying to produce online, came out here too; by sitting a half dozen people down around a topic in a finite time window (we did two hours and it didn't drag at all), the same sort of connections and mutual understanding emerged.

Conclusion

A number of interesting ideas, metrics and goals for study groups came out of this conversation, which we’ll be folding in to our forthcoming support for setting up your own meetup – watch this space for news and opportunities in that project coming very soon, and in the meantime, make sure your local study group is on the map!

Map of Study Groups & Hacky Hours

Given what a great time and what a productive discussion everyone had on the forum on Tuesday, I’m looking forward to making these panel AMAs a regular event at the Lab; if you have a topic you’d like to suggest, post it in the Events section of the forum, or tweet it to us at @MozillaScience and @billdoesphysics. I hope you’ll join us!

Mozilla Release Management Team: Firefox 37 beta7 to rc

Due to the short cycle (5 weeks instead of 6), we landed more changes than usual in the RC build.

We took some stability fixes for graphics issues.

  • 22 changesets
  • 229 files changed
  • 724 insertions
  • 4356 deletions

Extension   Occurrences
h           17
cpp         17
ini         7
py          3
list        2
js          2
html        2
sh          1
json        1
ipdlh       1
hgtags      1
build       1

Module      Occurrences
storage     17
dom         14
mobile      12
gfx         8
widget      4
testing     3
layout      3
docshell    2
security    1
editor      1
browser     1

List of changesets:

Chris Manchester: Bug 1145444. r=jmaher, a=test-only - 1efc8c39543c
Jeff Gilbert: Bug 1143218 - Use mochitest subsuites to specify webgl tests. r=jmaher, r=gbrown, a=test-only - a58b8b594396
Kyle Huey: Bug 1145870. r=bz, a=lmandel - 0725e4cfa3c3
Cykesiopka: Bug 1121117 - Add fuzz time to workaround non-monotonicity of Date(). r=keeler, a=test-only - 8358c6c2c417
Tim Taubert: Bug 1088163 - Fix intermittent browser_offlineQuotaNotification.js timeouts by properly waiting for a notification to show. r=markh, a=test-only - 72912a71fb98
Ehsan Akhgari: Bug 1142360 - Move the mochitests for bugs 441782, 467672 and 570378 to the reftest framework. r=dbaron, a=test-only - 62a72d33d16b
Neil Deakin: Bug 942411 - Change the frame height to force a reflow and renable the test on Linux to see if it helps. r=smaug, a=test-only - b8ec30b0a437
James Willcox: Bug 1090300 - Repopulate input buffers when necessary in Android media decoder. r=gcp, a=lmandel - 2cca5b090036
Ryan VanderMeulen: Bug 1146061 - Skip test_peerConnection_basicH264Video.html on Windows debug. a=test-only - 19b630388dda
Ryan VanderMeulen: Backed out changeset 72912a71fb98 (Bug 1088163) because it depends on BrowserTestUtils, which isn't available on 37. - 196c6575593d
Matt Woodrow: Backed out changeset 0c23dcbc6bf7 (Bug 1138967) for causing crashes - 6d7a2555b021
Matt Woodrow: Backed out changeset 0c23dcbc6bf7 (Bug 1138967) for causing crashes. CLOSED TREE - 2592523e1eb0
Olli Pettay: Bug 1146339 - Do anchor scrolling right before dispatching popstate/hashchange. r=bz, a=lmandel - 4d306a83ae1b
Marco Bonardo: Bug 1005991 - mozStorage should not use XPCVariant off the main thread. r=asuth, a=lmandel - b8c1a399905d
Marco Bonardo: Bug 1005991 - Trivial fixes for non-unified builds. r=me, a=lmandel - fadc9f270e9f
Ryan VanderMeulen: Merge beta to m-r. a=merge - 07c827be741f
Steven Michaud: Bug 1137229 - Keyboard input can stop working in a window. r=smaug a=lmandel CLOSED TREE - 45961b7d67dc
Shih-Chiang Chien: Bug 1080130 - Unreferenced socket might be closed before opened. r=khuey, a=test-only - b6a4dca0edc9
Jeff Muizelaar: Bug 1137716 - Try blacklisting Optimus w/ Intel Ironlake Graphics. r=bas, a=lmandel - d56b6d648c01
Matt Woodrow: Bug 1145585 - Hold a ref to the right texture. r=jmuizelaar, a=lmandel - e35deaa85d21
Ehsan Akhgari: Bug 1146883 - Null check the node passed to GetGoodSelPointForNode. r=smaug, a=lmandel - 8fda35675a3f
Ryan VanderMeulen: Merge beta to m-r. a=merge - 7ec23d08cf32

Karl Dubost: Refresh HTTP Header

Through discussions on whatwg, I learned (or I had just forgotten) about the Refresh HTTP header. Let's cut straight to the syntax:

HTTP/1.1 200 OK
Refresh: 5; url=http://www.example.org/fresh-as-a-summer-breeze

where

  • 5 here means 5 seconds.
  • url= gives the destination where the client should head after 5 seconds.

Simon Pieters (Opera) is saying in that mail:

I think Refresh as an HTTP header is not specified anywhere, so per spec
it shouldn't work. However I think browsers all support it, so it would be
good to specify it.

Eric Law (ex-Microsoft) has written about The Performance Impact of META REFRESH. If we express the previous HTTP header in HTML, we get:

<meta http-equiv="refresh" content="5;url=http://www.example.org/fresh-as-a-summer-breeze" />

In his blog post, Eric talks about people using refresh to… well, refresh the page. He means loading the exact same page over and over again. And indeed it means the browser creates a certain number of "unconditional and conditional HTTP requests to revalidate the page's resources" for each reload (refresh).

On the Web Compatibility side of things, I see the <meta http-equiv="refresh" …/> used quite often.

<meta http-equiv="refresh" content="0;url=http://example.com/there" />

Note the 0. Probably the result of sysadmins unwilling to touch the configuration of the servers, so front-end developers take the lead to "fix it" instead of using HTTP 302 or HTTP 301. Anyway, it is most of the time used for redirecting to another domain name or URI. The Refresh HTTP header, on the other hand, I don't remember seeing that often.

Should it be documented?

Simon is saying: "it would be good to specify it." I'm not so sure. First things first.

Testing

Let's create a test, by making a page sending a Refresh.

Header set Refresh "0;url=https://www.youtube.com/watch?v=sTJ1XwGDcA4"

which gives

HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Length: 200
Content-Type: text/html; charset=utf-8
Date: Thu, 26 Mar 2015 05:48:57 GMT
ETag: "c8-5122a67ec0240"
Expires: Thu, 02 Apr 2015 05:48:57 GMT
Keep-Alive: timeout=5, max=100
Last-Modified: Thu, 26 Mar 2015 05:37:05 GMT
Refresh: 0;url=https://www.youtube.com/watch?v=sTJ1XwGDcA4

This should redirect to this Fresh page

  • Yes - Firefox 36.0.4
  • Yes - Opera 29.0.1795.26
  • Yes - Safari 8.0.4 (10600.4.10.7)

If someone could test for IE and Chrome at least.
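
If you want to reproduce the test without an Apache config, a few lines of Python's standard http.server can send the same header (my own quick sketch, not part of the original setup):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RefreshHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<!DOCTYPE html><title>Refresh test</title><p>Redirecting...</p>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            # Same header the Apache config above sets.
            self.send_header("Refresh", "0;url=https://www.youtube.com/watch?v=sTJ1XwGDcA4")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8000), RefreshHandler).serve_forever()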

Browser Bugs?

On the Mozilla bug tracker, there are a certain number of bugs around refresh. This bug about inline resources is quite interesting and might indeed need to be addressed if there were documentation. The bug asks what the browser should do when the Refresh HTTP header is set on an image included in a Web page (this could be another test). For now, the refresh is not done for inline resources. Then what about scripts, stylesheets, JSON files, HTML documents in iframes, etc.? For the SetupRefreshURIFromHeader code, there are Web Compatibility hacks in the source code of Firefox. We can read:

// Also note that the seconds and URL separator can be either
// a ';' or a ','. The ',' separator should be illegal but CNN
// is using it."

also:

// Note that URI should start with "url=" but we allow omission

and… spaces!

// We've had at least one whitespace so tolerate the mistake
// and drop through.
// e.g. content="10 foo"

Good times…
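
To make those quirks concrete, here is a little parser sketch that tolerates the same mistakes the comments describe: ',' as well as ';' as the separator, an omitted "url=" prefix, and stray whitespace. It is an illustration only, not Firefox's actual algorithm:

    import re

    def parse_refresh(value):
        """Parse "5; url=http://example.org/" -> (5, "http://example.org/")."""
        match = re.match(r'\s*(\d+)[;,\s]*(.*)', value)
        if not match:
            return None
        seconds = int(match.group(1))
        rest = match.group(2).strip()
        if rest.lower().startswith("url"):
            # Tolerate "url=", "URL =", or a bare "url" prefix.
            rest = rest[3:].lstrip().lstrip("=").lstrip()
        return seconds, (rest or None)

    # parse_refresh("5; url=http://www.example.org/x") -> (5, 'http://www.example.org/x')
    # parse_refresh("0,http://example.com/there")      -> (0, 'http://example.com/there')
    # parse_refresh("10 foo")                          -> (10, 'foo')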

On the WebKit bug tracker, I found another couple of bugs, but about meta refresh and not specifically the Refresh: header. And I'm not sure whether it's handled by WebCore or elsewhere in Mac OS X (NSURLRequest, NSURLConnection, …). If someone knows, tell me. I haven't explored the source code yet.

On the Chromium bug tracker, there are another couple of bugs about meta refresh, with some interesting ones such as this person complaining that a space doesn't work instead of a ;. This is also tracked on WebKit. Something like:

<meta http-equiv="refresh" content="0 url=http://example.com/there" />

Also what should be done with a relative URL.

<meta http-equiv="refresh" content="0;url=/there" />

But for Chromium, I have not found anything really specific to the Refresh header. I haven't explored the source code yet.

The Opera bug tracker, meanwhile, is still closed. We tried to open it when I was working there, and it didn't work.

Competition Of Techniques

Then you can also imagine the hierarchy of commands in a case like this:

HTTP/1.1 301 Permanent Redirect
Refresh: 0;url=http://example.net/refresh-header
Location: http://example.net/location

<!DOCTYPE html>
<html>
<title>Fresh</title>
<meta http-equiv="refresh" content="0;url=http://example.net/meta" />
<body onload="document.location.replace('http://example.net/body')">

</body>
</html>

My guess is that the 301 with the Location HTTP header always wins, or at least that's what I hope.

History

I can find very early references of meta refresh such as in Netscape Developer documentation.

The earliest mention seems to be An Exploration Of Dynamic Documents. I can't find the documentation for the Refresh HTTP header anywhere on old Netscape Web sites. (Thanks to SecuriTeam Web site and Amit Klein)

So another thing you obviously want to do, in addition to causing the current document to reload, is to cause another document to be reloaded in n seconds in place of the current document. This is easy. The HTTP response header will look like this:

Refresh: 12; URL=http://foo.bar/blatz.html

In June 1996, Jerry Jongerius posted about HTTP/1.1 Refresh header field comments

My concern with "Refresh" is that I do not want it to be a global concept (a browser can only keep track of one refresh)--it looks to be implemented this way in Netscape 2.x. I would like "Refresh" to apply to individual objects (RE: the message below to netscape).

which Roy T. Fielding replied to:

Refresh is no longer in the HTTP/1.1 document -- it has been deferred to HTTP/1.2 (or later).

Should it be documented? Well, there are plenty of issues and plenty of hacks around it; I have just touched the surface. Maybe it would indeed be worth documenting how it works as implemented now, and how it is supposed to work where there is no interoperability. If I were silly enough, maybe I would do this. HTTP, archeology and Web Compatibility issues: that seems close enough to my vices.

Otsukare!

Robert O'Callahan: Paper Titles

A few tips on computer science paper titles:

Titles of the form Catchy Project Name: What Our Project Is About are stilted. Show some imagination.

Titles of the form Towards [Some Goal We Totally Failed To Reach] are an obvious attempt to dress up failure as success. Don't do that.

Do write bold papers about negative results. Call your paper [Our Idea] Doesn't Work (And Here's Why) and I'll be excited to read it.

[Goal] Is Harder Than You Think would also get my attention.

If your paper title contains the word Aristotelian, I will never read your work again and skip the conference too --- but you get points for chutzpah.

Note: following this advice may harm your career. Consider a career where you don't have to publish or perish.

The Mozilla Blog: Please welcome Allison Banks, Vice President of People

We’re thrilled to announce that Allison Banks is joining the leadership team at Mozilla today as our new Vice President of People.

As the leader of our global human resource team at Mozilla, Allison will be responsible, above all, for ensuring our people have what they need to help move our mission forward. Specifically, her team will develop and execute the people-related strategies and activities that will help to foster growth, innovation, and our overall organizational effectiveness.

With over 20 years of experience, Allison joins us most recently from GoPro where she served as Sr. Director of HR overseeing the hiring of 900 people, opening offices in seven countries, integrating acquisitions and building the HR processes and systems required to support a dynamic global organization. Prior to GoPro, she developed her HR expertise and track record for inspiring and supporting people at Perforce Software, Citibank, and Ingres.

Allison’s background, experience and passion for the human side of business is an exceptional fit for Mozilla.

She will be based in the Bay Area, working out of our Mozilla Space in San Francisco and our headquarters in Mountain View.

Please join me in welcoming Allison to Mozilla!

chris

Background:

Allison Banks, Vice President of People, Mozilla

Bio & Mozillians profile

LinkedIn profile

High-res photo

Air Mozilla: Product Coordination Meeting

Product Coordination Meeting Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.

Mozilla WebDev Community: Beer and Tell – March 2015

Once a month, web developers from across the Mozilla Project get together to design the most dangerous OSHA-compliant workstation possible. While searching for loopholes, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Michael Kelly: dxr-cmd

A certain blog post author was first with dxr-cmd, a command-line client for making queries to DXR, Mozilla’s source code browser. The tool is installed via pip and supports any query you can make via the web interface. Output can be run through a pager utility such as less, and you can also control the syntax highlighting applied to the output.

Daniel Maher: AudioAddict plugin for Plex

Next up was phrawzty, who was not present but shared a link to AudioAddict.bundle, a Plex plugin that allows you to play music from AudioAddict-based services (such as radiotunes.com, di.fm, and more).

Peter Bengtsson: Redunter

peterbe shared Redunter, a web service that helps hunt down unused CSS on your website. By embedding a small snippet of JS into your page and browsing through your website, Redunter will analyze the HTML being rendered and compare it to the CSS being served. The end result is a list of CSS rules that did not match any HTML that was delivered to the user. Redunter even works with sites that modify the DOM by watching for mutation events and tracking the altered HTML.
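
For a feel of the technique, a drastically simplified, static version of the idea might look like this (my sketch, not Redunter's code; it only checks a single HTML snapshot and ignores the dynamic-DOM tracking that makes Redunter useful):

    import re
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    def unused_selectors(html, css):
        """Return CSS selectors that match nothing in the given HTML snapshot."""
        soup = BeautifulSoup(html, "html.parser")
        unused = []
        # Very naive: grab "selector { ... }" pairs; @media blocks are ignored.
        for group in re.findall(r'([^{}]+)\{[^{}]*\}', css):
            for selector in group.split(","):
                selector = selector.strip()
                try:
                    if selector and not soup.select(selector):
                        unused.append(selector)
                except Exception:
                    pass  # skip selectors soup.select() can't handle (e.g. :hover)
        return unused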

Scott Michaud: GPU-Accelerated Audio

ScottMichaud returns with more fun stuff using the WebCL extension! Scott shared a demo of WebCL-powered audio where a virtual microphone was surrounded by individual raindrop sounds. By controlling the rate of raindrops, you can simulate a higher audio load and see the difference that pushing audio processing to the GPU can make.

Les Orchard: Parsec Patrol

Senior Space Cadet lorchard shared Parsec Patrol, a vector-based space game for the web. While there’s no full game made yet, there is a webpage with several demos showing collision detection, spaceship navigation, missiles, point-defense systems, and more!

Matthew Claypotch: a9r

Have you ever seen an abbreviation like l10n or i18n and had no idea what it meant? Have no fear, Uncle Potch is here with a9r, the answer to the abbreviation problem! Simply install the command and enter in an abbreviation to receive a list of all words in the SOWPODS word list that match. Got a word that you need to abbreviate? Not only can a9r decipher abbreviations, it can create them!
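
The matching itself is basically a one-liner over the word list; here is a guess at the rule (first letter, number of elided letters, last letter) rather than a9r's actual code, with a hypothetical sowpods.txt path:

    def expand(abbr, wordlist_path="sowpods.txt"):
        """Expand a numeronym like "l10n" or "a9r" against a word list."""
        first, count, last = abbr[0].lower(), int(abbr[1:-1]), abbr[-1].lower()
        length = count + 2
        with open(wordlist_path) as f:
            words = (line.strip().lower() for line in f)
            return [w for w in words
                    if len(w) == length and w[0] == first and w[-1] == last]

    # expand("a9r") should include "abbreviator"; expand("l10n"), "localization".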

Matthew Claypotch: socketpeer

In a slightly-less-whimsical vein, potch also shared socketpeer, a simple JavaScript library for 1:1 messaging via WebRTC Data Channels and WebSockets. Extracted from the Tanx demo that Mozilla showed at GDC 2015, socketpeer contains both a server API for establishing peer connections between users and a client API to handle the client-side communication. Potch also shared a demo of a peer-to-peer chat application using socketpeer.

Chris Van Wiemeersch: PhantomHAR

Next up was cvan, who shared PhantomHAR, a PhantomJS and SlimerJS script that generates an HTTP Archive (or HAR) for a URL. A HAR is an archive of data about HTTP transactions that can be used to export detailed performance data for tools to consume and analyze, and PhantomHAR allows you to easily generate the HAR for use by these tools.

Chris Van Wiemeersch: fetch-manifest

Next, cvan shared fetch-manifest, a small library that takes a URL, locates the W3C web app manifest for the page, fixes any relative URLs in the manifest, and returns it. This is useful for things like app marketplaces that want to allow people to submit web apps by submitting a single URL to the app they want to submit.

Bill Walker: robot-threejs

Last up was bwalker, who shared robot-threejs, an experimental steampunk robot game powered by three.js and WebGL. The game currently allows you to fly around a 3D environment that has 3D positional audio emitting from an incredibly mysterious cube. CAN YOU SOLVE THE CUBE MYSTERY?


This month we think we’ve really got something special with our Seki Edge keyboard-and-mouse combo. Order now and get a free box of Band-aids at no additional cost!

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Air Mozilla: Bugzilla Development Meeting

Bugzilla Development Meeting Help define, plan, design, and implement Bugzilla's future!

Advancing Content: Content Services Team Adds New Talent With Partnerships (and Mozilla) Experience

Earlier this year I wrote about how 2015 will be a big year for Mozilla to scale and build better personalized experiences as we help move the ad industry forward. Today, I’m excited to announce two new additions to our Content Services team as we continue our mission to create innovative content offerings while always upholding Mozilla’s commitment to user privacy.

Accomplished interactive advertising expert Aaron Lasilla has joined Mozilla and our Content Services team as head of content partnerships. Aaron comes to us from EA Games, where he served as the global director of brand solutions and co-founded the in-game advertising group. Aaron was instrumental in negotiating and securing a number of strategic partnerships for EA's publishing division as he built the group into a new business and revenue channel for EA, including the largest EA Online partnership ever (within Pogo.com's casual games offering, in 2003). During his tenure, EA was established as the number one publisher of integrated advertising placements and partnerships in and around games. Aaron previously managed Microsoft's Premium Games Advertising offering and also worked in sales and sponsorship capacities for Double Fusion, Clear Channel Entertainment and Kemper Sports Marketing.

As we continue to develop and refine our new offerings like Firefox Tiles, Aaron will be focusing on engagement and value exchange for Mozilla’s offerings while maintaining the same quality and standards of user experience that Mozilla is known for.

In addition, I'm excited to formally announce that long-time Mozillian Patrick Finch joined our group late last year as director of marketing. Patrick has been with Mozilla for over seven years, based out of Sweden, and has worked in a number of strategic roles on Mozilla's desktop and mobile projects over that time. Prior to joining Mozilla, Patrick spent over ten years at Sun Microsystems in a variety of capacities, including working on numerous open source projects.

As we continue the rollout of Firefox Tiles and bring on new partners, you’ll probably be seeing more of Aaron and Patrick on this blog. If you’re interested in partnering with us in our mission or if you’d just like to drop our team a line, feel free to reach out to us at contentservices@mozilla.com.

[Photo: Aaron Lassila]

[Photo: Patrick Finch]

Mozilla Release Management Team: Firefox 36.0.2 to 36.0.4

Last Friday and Saturday, we shipped two minor releases to fix the issues found during the pwn2own hacking contest.

  • 6 changesets
  • 8 files changed
  • 25 insertions
  • 16 deletions

Extension   Occurrences
txt         2
py          2
sh          1
json        1
h           1
cpp         1

Module      Occurrences
testing     3
docshell    2
mobile      1
config      1
browser     1

List of changesets:

Steve Fink: Bug 1145255. r=luke, a=lmandel - 2b1ecc5fad12
Jordan Lund: Bug 1113460 - Bump mozharness.json to revision 75c435ef19ca. a=test-only - 3d681d747053
Jordan Lund: Bug 1142743 - Increase chunks for Android 2.3 mochitest-gl, in-tree cfg total chunk fix. r=kmoir, a=test-only - 7d23a45099ee
Geoff Brown: Bug 1142552 - Update robocop chunking to fix rc10. r=ahal, a=test-only - e2ba5973e4bb
Olli Pettay: Bug 1144988 - Don't let other pages to load while doing scroll-to-anchor. r=bz, a=lmandel - d5a003cc284a
Kyle Huey: Bug 1145870. r=bz a=lmandel - 57cc76236bd7

Mozilla Release Management Team: Firefox 37 beta6 to beta7

In this beta release, we continued to land some patches for MSE. We disabled MSE under Mac OS X for now.

We also took the fixes for the pwn2own hacking contest in this release.

  • 54 changesets
  • 109 files changed
  • 2315 insertions
  • 1248 deletions

Extension   Occurrences
cpp         54
h           39
cc          3
py          2
js          2
html        2
json        1
jsm         1
ipdlh       1
ipdl        1
inc         1
c           1
build       1

Module      Occurrences
dom         26
gfx         18
layout      15
media       14
widget      9
toolkit     5
image       5
ipc         4
testing     3
security    2
js          2
docshell    2
xpfe        1
modules     1
caps        1
browser     1

List of changesets:

Matt Woodrow: Bug 1131638 - Discard video frames that fail to sync. r=cpearce, a=lmandel - 297e2e626fe9
Matt Woodrow: Bug 1131638 - Record invalid frames as dropped for video playback stats. r=ajones, a=lmandel - f88fcb8ccc27
Matt Woodrow: Bug 1131638 - Disable hardware decoding if too many frames are invalid. r=cpearce, a=lmandel - edb24ca59d13
Mike Hommey: Backout the part of changeset 8044e5199fe2 (Bug 1080319) that removed -remote. a=sledru - 29eac8276b62
Matt Woodrow: Bug 1139503 - Backlist ATI driver version for DXVA on windows 10 since it's causing crashes. r=cpearce, a=sledru - 5a8085d3a0fe
Bill McCloskey: Back out Bug 1103036 to resolve shutdown hangs a=backout - 2cc99febbda0
Ryan VanderMeulen: No bug - Bump mozharness.json to revision fde96e1730cc. a=NPOTB - d16fe93d2755
Jordan Lund: Bug 1142743 - Increase chunks for Android 2.3 mochitest-gl, in-tree cfg total chunk fix. r=kmoir, a=test-only - 2d55d8220616
Geoff Brown: Bug 1142552 - Update robocop chunking to fix rc10. r=ahal, a=test-only - 0e0204877015
Ralph Giles: Bug 1141349 - Pref off MSE on Mac. r=ajones, a=lmandel - c8f377118985
Jan Varga: Bug 1067568 - Fix intermittent "ASSERTION: We don't know anyting about this file handle?!: 'Error', file dom/filehandle/FileService.cpp, line 234". r=bent, a=lsblakk - 199e30cb18f3
Margaret Leibovic: Bug 1141550 - Register an AsyncShutdown blocker to persist download changes. r=paolo, a=lsblakk - 3eeb35bbafd2
Jean-Yves Avenard: Bug 1139271 - Part 1: Add logging when encountering invalid atoms. r=k17e, a=lsblakk - 202177831c59
Jean-Yves Avenard: Bug 1139271 - Part 2: Ignore partial moof. r=k17e, a=lsblakk - 21384861c447
Jean-Yves Avenard: Bug 1139271 - Part 3: Only consider a Box to be available if entire content is available. r=k17e, a=lsblakk - f4c0cec35772
Paul Adenot: Bug 1141781 - Grip the VideoFrameContainer when queing a call to invalidate in the MediaStreamGraph. r=roc, a=lsblakk - 6a4e68222995
Matthew Gregan: Bug 1142746 - Make unexpected SL_PLAYEVENT_HEADATMARKER notification non-fatal. r=brsun, a=lsblakk - 067f83e99f66
Ryan VanderMeulen: Backed out changeset 6a4e68222995 (Bug 1141781) for mochitest crashes/asserts. - 6a9120be7216
Ethan Hugg: Bug 1144157 - Add ciscospark.com to the screenshare default whitelist r=jesup a=lmandel - bd028b4c3b95
Simon Montagu: Bug 1114239 patch 1: Backout Bug 1105137, r=jfkthame, a=lmandel - 0837b7d1188b
Simon Montagu: Bug 1114239 patch 2: Backout Bug 1079139, r=jfkthame, a=lmandel - 8fca3694654a
Simon Montagu: Bug 1114239 patch 3: Backout Bug 1062963 patch 3, r=jfkthame, a=lmandel - 470cd8c739c5
Olli Pettay: backout Bug 1121406 which enabled WebSocket in Workers in beta, a=abillings - f0a0d5d2d525
David Major: Bug 1138794: Use an alternate crash report server on Windows XP SP2. r=ted a=lmandel - caf324dbb13f
Matthew Gregan: Bug 1124542 - WebrtcGmpVideoDecoder shouldn't crash when GMP completion callbacks are received. r=rjesup, a=lmandel - c54687cb7086
Ethan Hugg: Bug 1125047 - GMP should catch decoder failures. r=jesup, a=lmandel - 5598a289b442
Chris Pearce: Bug 1140797 - Make gmp-clearkey buildable outside of mozilla-central. r=edwin, a=lmandel - a49b40d229df
Chris Pearce: Bug 1140797 - Prevent fatal assert when doing base64 decode in gmp-clearkey. r=edwin, a=lmandel - 29333933d6d6
Jordan Lund: Bug 1113460 - Bump mozharness.json to revision 75c435ef19ca. a=test-only - 938177ece421
Cameron McCormack: Bug 1143953 - Fix typo in test_font_loading_api.html where it incorrectly returns document.fonts.read. r=jdaggett, a=test-only - 37be317efc7a
Cameron McCormack: Bug 1143995 - Remove unnecessary layout flushes from test_font_loading_api.html. r=jdaggett, a=test-only - e84f65c3a6aa
Cameron McCormack: Bug 1144507 - Fix incorrect Promise usage in test_font_loading_api.html. r=jdaggett, a=test-only - 9fc579f7bf3a
Tim Taubert: Bug 1124409 - Fix intermittent browser_bug1015721.js failures by ensuring the EventStateManager has a document before trying to dispatch ZoomChangeUsingMouseWheel. r=smaug, a=test-only - 46cfbcfb58c5
Tim Taubert: Bug 1124409 - Fix test_bug659071.html to properly reset page zoom before finishing. r=smaug, a=test-only - e4f1cc6f63a3
Nicolas B. Pierron: Bug 1137624 - Disable Array.join optimization. r=jandem, a=abillings - 968fa2b32612
Aaron Klotz: Bug 1141081 - Ensure nsPluginInstanceOwner::Destroy is called before returning from failed plugin instantiation. r=jimm, a=lmandel - 2710769c40a5
Aaron Klotz: Bug 1128064 - Check for null mContent in nsPluginInstanceOwner::GetDocument. r=jimm, a=abillings - e92558fa59eb
Byron Campen [:bwc]: Bug 1141749 - Prevent collisions in local SSRCs. r=mt, a=abillings - d76c709556bb
Nicolas Silva: Bug 1125848 - Reduce the likelyhood of a CompositorParent being destroyed without the proper shutdown sequence. r=sotaro a=lmandel - 45897d27ef82
Avi Halachmi: Bug 1142079 - Disable refresh driver telemetry on Android. r=froydnj, a=lmandel - 17adc07baf56
Matt Woodrow: Bug 1138967 - Part 1: Remove ISharedImage. r=nical, a=lmandel - c1356c27fa1b
Matt Woodrow: Bug 1138967 - Part 2: Create IMFYCbCrImage so that image data copying happens off the decoder thread. r=nical, r=cpearce, a=lmandel - 07e266d45703
Matt Woodrow: Bug 1138967 - Part 3: Add D3D11 YCbCr texture clients and upload on the client side. r=nical, a=lmandel - 0c23dcbc6bf7
Masatoshi Kimura: Bug 1133187 - Update fallback whitelist. r=keeler, a=lmandel - 02b9c74353ad
Seth Fowler: Bug 1142849 - Upliftable fix for imgRequest TSan violations. r=tn, a=lmandel - 9b7aa96d0e11
Karsten Düsterloh: Bug 1116952 - Treelines fragments after Bug 1105104. r=jwatt, a=lmandel - 5bd29483f85e
Jeff Muizelaar: Bug 1130978 - Fix VisitEdges. r=kats, a=lmandel - fb9ae74a783a
Seth Fowler: Bug 1137058 - Increment RasterImage::mLockCount to ensure that non-discardable images don't eventually become unlocked. r=tn, a=lmandel - 52b55d9c1d61
Matt Woodrow: Bug 1145029 - Disable DXVA for 4k videos on AMD hardware since it performs poorly. r=jya a=lmandel - 2445fcfe99d4
Steve Fink: Bug 1145255. r=luke, a=lmandel - aabde7671ac0
Jed Davis: Bug 1111079 - Backport some IPC message/channel fixes. r=bent, a=lmandel - 5bb1bb65cc28
Jed Davis: Bug 1111065 - Backport some upstream IPC serialization fixes. r=bent, a=lmandel - a2295cc0de06
Boris Zbarsky: Bug 1144991 - Be a bit more restrictive about when a URI_IS_UI_RESOURCE source is allowed to link to a URI_IS_UI_RESOURCE URI that doesn't have the same scheme. r=bholley, a=lmandel - 2e6977da201e
Olli Pettay: Bug 1144988 - Don't let other pages to load while doing scroll-to-anchor. r=bz, a=lmandel - 9b93e6033d5d

François Marier: Keeping up with noisy blog aggregators using PlanetFilter

I follow a few blog aggregators (or "planets") and it's always a struggle to keep up with the amount of posts that some of these get. The best strategy I have found so far is to filter them so that I remove the blogs I am not interested in, which is why I wrote PlanetFilter.

Other options

In my opinion, the first step in starting a new free software project should be to look for a reason not to do it :) So I started by looking for another approach and by asking people around me how they dealt with the firehoses that are Planet Debian and Planet Mozilla.

It seems like a lot of people choose to "randomly sample" planet feeds and only read a fraction of the posts that are sent through there. Personally however, I find there are a lot of authors whose posts I never want to miss so this option doesn't work for me.

A better option that other people have suggested is to avoid subscribing to the planet feeds, but rather to subscribe to each of the author feeds separately and prune them as you go. Unfortunately, this whitelist approach is a high maintenance one since planets constantly add and remove feeds. I decided that I wanted to follow a blacklist approach instead.

PlanetFilter

PlanetFilter is a local application that you can configure to fetch your favorite planets and filter the posts you see.

If you get it via Debian or Ubuntu, it comes with a cronjob that looks at all configuration files in /etc/planetfilter.d/ and outputs filtered feeds in /var/cache/planetfilter/.

You can either:

  • add file:///var/cache/planetfilter/planetname.xml to your local feed reader
  • serve it locally (e.g. http://localhost/planetname.xml) using a webserver, or
  • host it on a server somewhere on the Internet.

The software will fetch new posts every hour and overwrite the local copy of each feed.

A basic configuration file looks like this:

[feed]
url = http://planet.debian.org/atom.xml

[blacklist]

Filters

There are currently two ways of filtering posts out. The main one is by author name:

[blacklist]
authors =
  Alice Jones
  John Doe

and the other one is by title:

[blacklist]
titles =
  This week in review
  Wednesday meeting for

In both cases, if a blog entry contains one of the blacklisted authors or titles, it will be discarded from the generated feed.
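
Under the hood, that amounts to dropping feed entries whose author or title matches the blacklist. A simplified sketch of the idea using feedparser (not PlanetFilter's actual code, which also rewrites and re-serializes the feed, and whose exact matching rules may differ):

    import feedparser  # pip install feedparser

    BLACKLISTED_AUTHORS = {"Alice Jones", "John Doe"}
    BLACKLISTED_TITLES = ["This week in review", "Wednesday meeting for"]

    def keep(entry):
        author = entry.get("author", "")
        title = entry.get("title", "")
        if author in BLACKLISTED_AUTHORS:
            return False
        return not any(t in title for t in BLACKLISTED_TITLES)

    feed = feedparser.parse("http://planet.debian.org/atom.xml")
    for entry in filter(keep, feed.entries):
        print(entry.title)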

Tor support

Since blog updates happen asynchronously in the background, they can work very well over Tor.

In order to set that up in the Debian version of planetfilter:

  1. Install the tor and polipo packages.
  2. Set the following in /etc/polipo/config:

     proxyAddress = "127.0.0.1"
     proxyPort = 8008
     allowedClients = 127.0.0.1
     allowedPorts = 1-65535
     proxyName = "localhost"
     cacheIsShared = false
     socksParentProxy = "localhost:9050"
     socksProxyType = socks5
     chunkHighMark = 67108864
     diskCacheRoot = ""
     localDocumentRoot = ""
     disableLocalInterface = true
     disableConfiguration = true
     dnsQueryIPv6 = no
     dnsUseGethostbyname = yes
     disableVia = true
     censoredHeaders = from,accept-language,x-pad,link
     censorReferer = maybe
    
  3. Tell planetfilter to use the polipo proxy by adding the following to /etc/default/planetfilter:

     export http_proxy="localhost:8008"
     export https_proxy="localhost:8008"
    

Bugs and suggestions

The source code is available on repo.or.cz.

I've been using this for over a month and it's been working quite well for me. If you give it a go and run into any problems, please file a bug!

I'm also interested in any suggestions you may have.

Mozilla Open Policy & Advocacy Blog: Information sharing debates continuing in problematic directions

Recently, the U.S. Senate Select Committee on Intelligence held a closed-door hearing to mark up the Cybersecurity Information Sharing Act (CISA). Mozilla has previously opposed CISA and its predecessor CISPA, and these changes do not alleviate our concerns. Simultaneously, in neighboring Canada, an aggressive counterterrorism bill would introduce similarly problematic surveillance provisions, among other harms.

But first, CISA. While the newly marked up version includes some improvements over the discussion draft circulated earlier this year, the substantive dangers remain. In particular, the bill:

  • Is still overbroad in scope, allowing near limitless sharing of private user data for a vague and expansive list of purposes that fall well outside the realm of cybersecurity;
  • Continues to require information to be automatically shared with “relevant agencies” including the NSA, which severely limits the power of the Department of Homeland Security (a civilian agency) to oversee information sharing practices and policies;
  • Allows for dangerous “defensive measures” (a rebranding of the previous version’s “countermeasures”) which could legitimize and permit “hacking back” in a manner that seriously harms the Internet; and
  • Provides blanket immunity for sharing private user information with still insufficient privacy safeguards, denying users both effective protection and remedy.

But the flaws of CISA are more than just the sum of its problematic provisions. The underlying paradigm of information sharing as a means to “detect and respond” or “detect and prevent” cybersecurity attacks lends itself more to advancing surveillance than to improving the security of the Web or its users. The primary threat we face is not a dearth of information shared with or by the government, but rather is often a lack of proactive, common sense security measures.

Moreover, data collected is data at risk, from the government’s failures to secure its own systems to the abuses revealed by the Snowden revelations. Putting more and more information into the hands of the government puts more user data in danger. Nevertheless, after passing the Senate Select Committee on Intelligence 14-1, CISA is scheduled to move to the full Senate floor imminently. This is a bad step forward for the future of the open Web.

Meanwhile in Canada, the Canadian Parliament is considering an even more concerning bill, C-51, the Anti-Terrorism Act of 2015. C-51 is sweeping in scope, including granting Canadian intelligence agencies CSIS and CSE new authority for offensive online attacks, as well as allowing these agencies to obtain significant amounts of information held by the Canadian government. The open-ended internal information-sharing exceptions contained in the bill erode the relationship between individuals and their government by removing the compartmentalization that allows Canadians to provide the government some of their most private information (for census, tax compliance, health services, and a range of other purposes) and trust that that information will be used for only its original purposes. This compartmentalization, currently a requirement of the Privacy Act, will not exist after Bill C-51 comes into force.

The Bill further empowers CSIS to take unspecified and open-ended “measures,” which may include the overt takedown of websites, attacks on Internet infrastructure, introduction of malware, and more all without any judicial oversight. These kinds of attacks on the integrity and availability of the Web make us all less secure.

We hope that both the Canadian Parliament and the U.S. Congress will take the time to hear from users and experts before pushing any further with C-51 and CISA respectively. Both of these bills emphasize nearly unlimited information sharing, without adequate privacy safeguards, and alarmingly provide support for cyberattacks. This is an approach to cybersecurity that only serves to undermine user trust, threaten the openness of the Web, and reduce the security of the Internet and its users. For these reasons, we strongly oppose both C-51 and CISA.

Carsten BookFirst overview from the sheriff survey!

Hi,

thanks for all the replies we got for the Sheriff Survey! If you haven’t already taken part, it’s still online and you can still do so!

We will close the survey in a few days and I will provide a comprehensive overview then, but I felt I could already give a quick overview of what we have so far.

One big takeaway is how important checkin-needed requests are and how many people depend on them. We are very sorry if there are delays in picking up checkin-needed requests, but since it’s a human task, it depends on how much is going on with the trees, etc.

But work is being done on Autoland; see https://wiki.mozilla.org/Auto-tools/Projects/Autoland :)

Also, to follow up on two concrete things (which you might or might not already know):

Question: How do I find out why the tree is closed (when we have a tree closure) on Treeherder?

Answer: Just hover over the repo name in Treeherder (for example, mozilla-inbound) or click on the info button right next to the repo name.

Question: When I land something on, say, mozilla-inbound, it’s a mess to manually copy and paste the hg changeset URL into the bug.

Answer: We have a tool called mcmerge. It’s right next to every push, in the drop-down arrow action menu, and despite its name it’s not just for marking merges. During the survey we found out that the name is misleading, so we are trying to find a new one – https://bugzilla.mozilla.org/show_bug.cgi?id=1145836

Thanks,

 

– Tomcat

Jim ChenBack from leave

Back in January, I left on a two-month-long leave from Mozilla, in order to do some traveling in China and Japan. Now I'm finally back! I was in China for 1.5 months and in Japan for 2 weeks, and it was amazing! I made a short video highlighting parts of my trip:

Being a mobile developer, I naturally paid some attention to mobile phone usage in China, and how it's different from what I'm used to in the U.S. The cellular infrastructure was impressive. It was fairly cheap, and I was getting full 3G/4G service in small villages and along high-speed rail routes. It seemed like everyone had a smartphone, too. I would see grandmas standing on the side of the road checking their phones.

I never use QR codes in the U.S., but I actually used them quite often in China. For example, you would scan another person's QR code to add them as friends on Wechat. In some places, you could scan a merchant's QR code to pay that merchant using Alipay, a wallet app. Many types of tickets like train tickets and movie tickets also use QR codes over there.

Everyone used Wechat, a messaging app that's “way better than anything else in the U.S.” according to my American friend living in China. It's more than just a messaging app though – you have a “friend circle” that you can post to, a la Facebook; you can also follow “public accounts”, a la Twitter. The app has integrated wallet functionality: I paid for a train ticket and topped up my phone using the app; during Chinese New Year, people were sending each other cash gifts through it.

For some reason, you see a lot of these “all-in-one” apps in China. I used Baidu Maps during my travels, which does maps and navigation. However, you can also call taxis from within the app or hire a “private car”, a la Uber. You can use the app like Yelp to find nearby restaurants by type and reviews. While you’re at it, the app lets you find “group buy” discounts at these restaurants, a la Groupon. I have to say it was super convenient. After I came back to the States, I wasn’t used to using Google Maps anymore because it didn’t do as much.

Of course, on the flip side, these apps probably would be less popular without the Internet censorship that's so prevalent over there. By creating a barrier for foreign companies to enter the Chinese market, it provided opportunities for domestic companies to create and adapt copycat products. I found it amusing that Android is so prevalent in the Chinese smartphone market, but everything Google is blocked. As a result, you have all these third-party markets that may or may not be legitimate. Mobile malware seems to be a much larger issue in China than in the U.S., because people have to find their apps off of random markets/websites. It was strange to see an apps market promising “safe, no malware” with every download link. Also amusingly, every larger app I saw came with its own updater, again because these apps could not count on having a market to provide update service.

Overall, the trip was quite eye-opening, to see China's tremendous development from multiple angles. I loved Japan, too; I felt it was a lot different from both China and the U.S. Maybe I'll write about Japan in another post.


David Weir (satdav)Windows Nightly 64-bit test day

Why not come along to the Windows Nightly 64-bit test day this Saturday, from 9am to 3pm?

PS: we are looking for moderators for the event.

https://etherpad.mozilla.org/testday-20150328


Smokey ArdissonWhat year is it again?

The other day, my brother asked me to log in to his account on his employer’s1 “HR system” in order to make him some backup copies of information presented there (his existing copies of which he had needed to provide to his supervisor). On the login screen, I was still slightly shocked2 to see the following message:

For an optimal experience, we recommend using these browsers:

Unexpected results may occur when using other browsers.

(If you view the source, you can see that each of the <a>s has an id="ielink_001" attribute—not only incorrect, but perhaps a holdover from the days this particular website “supported” only IE?)

Seriously? It’s 2015 and your website is not only not compatible with any version of Safari, but it is only compatible with versions of Chrome and Firefox that are four3 versions out-of-date!? (Kudos for supporting versions of IE dating back six years, though!)

I forged ahead, because if the site claimed to work properly in a six-year-old version of Internet Explorer, it surely would work in a current two-year-old version of Safari (the just-released version 6.2.4 on 10.8/Mountain Lion). Nothing I had to look at seemed to look or function incorrectly—until it came time to look for his timesheets. When I clicked on the tab entitled “Timesheets”, a page loaded with no “content” below the row of tabs, except for a link to help me return to the site I was already on. Indeed, unexpected results may occur when using a browser other than the last four versions of IE or versions of Chrome and Firefox four versions out-of-date! Eventually, I realized that the problem was that loading the page was triggering a pop-up window(!?) with the website for the company’s scheduling system, and Safari was (silently) blocking said pop-up.4

Allowing pop-ups and forging ahead again, I looked at the scheduling system’s website, and it reminded me of a poor knockoff of the web as rendered by Firebird 0.6 or 0.7 more than a decade ago (eerie, that poorly-rendered, overly-fat Helvetica—perhaps it’s Verdana or Tahoma?—and <table>s, lots of <table>s!) Also, there was a menu that seemed to have no useful functions. Finally relenting, I launched Firefox 36, discovered the functional part of the menu was indeed missing (according to the Web Inspector in Safari, that part of the menu was being rendered off-screen and I think zero-height; given that Blink and WebKit supposedly haven’t diverged that much, I wonder if this critical piece of the menu would have appeared in Chrome, either, supported version or otherwise?), found the link I needed, and returned to Safari to print out pages of multi-page <table>s.

These are websites/systems that are created and installed to be used by every employee of this company, from the convenience of each employee’s personal computing device, not systems that are to be used solely by the HR department on company computers where IT can mandate a certain browser and software combination. This is software whose purpose is to be used by everyone; why is it not designed to be used by everyone—compatible with current versions of the major rendering engines, avoiding unfriendly and abused technologies like pop-ups, and so on?

If the software is intended to be used by everyone (or, generally, people beyond those whose computer configuration you can dictate by supplying said computer) and it’s web-based software (or has a web front-end), then the company (or the company’s software vendor) needs to continually test the software/web front-end with new versions of major rendering engines, making changes (or reporting bugs in the rendering engine) in the unlikely event something breaks, so that they aren’t requiring employees to use six-month-old versions of browsers in order for the corporate software to work properly.

As for the integration between the main HR system and the scheduling system, if the two can’t talk to each other directly behind the scenes, then why not embed the scheduling system into the “Timesheets” tab with an <iframe> (<iframe>s are already present in some of the other tabs). If an <iframe> won’t work for some technical or security reasons, why not include a button on the “Timesheets” tab that the user can click to trigger the pop-up window with the scheduling system, thus escaping the pop-up blocker? It’s not as elegant in some ways as automatically launching, but pop-ups are already not as elegant as showing the data inline (and pop-ups are arguably not elegant at all), and manually-triggered pop-ups are more friendly since the human involved knows he or she is triggering some action and isn’t annoyed by blocked pop-up notifications. You also then get Safari compatibility “for free” without requiring users to change settings (and without having to tell them how to do so). If there are still legitimate reasons not to use a button or link or similar element, at the very least some explanatory text in the “content” section of the “Timesheets” tab is far more useful to anyone than a link to return to the very site you’re already viewing.

When I encounter software like this, I often wonder how it was built. Was there a user experience or human interface designer as part of the team? Was there any testing? Any quality assurance team involved? Or did some product manager just throw a spec sheet from marketing at the software engineers and tell them, “Not only do you have to write the code to make it do these things, but you have to determine how it’s going to do these things, too.” Or did management decide to ship as-is, perhaps over the objections of team members, in order to meet some deadline?

Design is how things work. Not everyone is a good designer, just like not everyone is a good programmer or tester (they’re not necessarily mutually exclusive, but many times excelling in one field means not learning as much about another), but every good piece of software needs all three skillsets, working in concert, whether in one body or more. Too often, “corporate software” like this seems to be missing one or more of the three, and that’s a shame, because with a little more effort, every interaction with the software could be improved. Then the vendor sells better software, the employees who use the software have a faster, easier experience and can get back to doing what they love and are good at, and the company installing the software can have happier employees. Everyone wins.

        

1 An unnamed major American restaurant group.
2 I know, I know, I really shouldn’t be. :P
3 In fairness, Firefox 31 is at least still in the ESR support window until May.
4 Question 1: Why, in 2015, does Safari still not support a per-site control for pop-ups (and, at least as of version 6, still not provide any sort of notification of blocked pop-ups; granted the UI balance there is hard—and a subject for another post—but still!)?
Question 2: The better question is, why, in 2015, are sites still using non-user-triggered pop-up windows for critical functions, or any functions at all?

Michael KaplyFirefox ESR Only Changes

There are a few changes that are coming for Firefox that will be major headaches for enterprise, educational, government and other institutional deployments. These include the removal of the distribution/bundles directory as well as the requirement for all add-ons to be signed by Mozilla.

Given that these two changes are not needed for enterprise, there has been some discussion of not putting these changes into the Firefox ESR.

So I'm curious: besides these two changes, what other things do you think should be different between regular Firefox and the Firefox ESR? I'm not talking about creating new features for the ESR, I'm only talking about enabling and/or disabling features.

Put your suggestions in the comments. I'll put mine there as well.

Daniel PocockThe easiest way to run your own OpenID provider?

A few years ago, I was looking for a quick and easy way to run OpenID on a small web server.

A range of solutions were available but some appeared to be slightly more demanding than what I would like. For example, one solution required a servlet container such as Tomcat and another one required some manual configuration of Python with Apache.

I came across the SimpleID project. As the name implies, it is simple. It is written in PHP and works with the Apache/PHP environment on just about any Linux web server. It allows you to write your own plugin for a user/password database or just use flat files to get up and running quickly with no database at all.

This seemed like the level of simplicity I was hoping for so I created the Debian package of SimpleID. SimpleID is also available in Ubuntu.

Help needed

Thanks to a contribution from Jean-Michel Nirgal Vourgère, I've just whipped up a 0.8.1-14 package that should fix Apache 2.4 support in jessie. I also cleaned up a documentation bug and the control file URLs.

Nonetheless, it may be helpful to get feedback from other members of the community about the future of this package:

  • Is it considered secure enough?
  • Have other people found it relatively simple to install or was I just lucky when I tried it?
  • Are there other packages that now offer such a simple way to get OpenID for a vanilla Apache/PHP environment?
  • Would anybody else be interested in helping to maintain this package?
  • Would anybody like to see this packaged in other distributions such as Fedora?
  • Is anybody using it for any online community?

Works with HOTP one-time-passwords and LDAP servers

One reason I chose SimpleID is because of dynalogin, the two-factor authentication framework. I wanted a quick and easy way to use OTP with OpenID so I created the SimpleID plugin for dynalogin, also available as a package.

I also created the LDAP backend for SimpleID, that is available as a package too.

Works with Drupal

I tested SimpleID for logging in to a Drupal account with OpenID support enabled in Drupal, and it worked seamlessly. I’ve also tested it with a few public web sites that support OpenID.

Adam Lofting2015 Mozilla Foundation Metrics Strategy(ish) & Roadmap(ish)

I wrote a version of this strategy in January but hadn’t published it as I was trying to remove those ‘ish‘s from the title. But the ‘ish’ is actually a big part of my day-to-day work, so this version embraces the ‘ish’.

MoFo Metrics Measures of Success:

These are, ironically, more qualitative than quantitative.

  1. Every contributor (paid or volunteer) knows at any given time what number they (or we) are trying to move, where that number is right now, and how they hope to influence it.
  2. We consider metrics (i.e. measures of success) before, during and after each project.
  3. We articulate the stories behind the metrics we aim for, so their relevance isn’t lost in the numbers.
  4. A/B style testing practice has a significant impact on the performance of our ‘mass audience’ products and campaigns.

1. Every contributor (paid or volunteer) knows at any given time what number they (or we) are trying to move, where that number is right now, and how they hope to influence it.

  • “Every” is ambitious, but it sets the right tone.
  • This includes:
    • Public dashboards, like those at https://metrics.webmaker.org
    • Updates and storytelling throughout the year
    • Building feedback loops between the process, the work and the results (the impact)

2. We consider metrics (i.e. measures of success) before, during and after each piece of work.

  • This requires close integration into our organizational planning process
  • This work is underway, but it will take time (and many repetitions) before it becomes habit

3. We articulate the stories behind the metrics we aim for, so their relevance isn’t lost in the numbers.

  • The numbers should be for navigation, rather than fuel

4. A/B style testing practice has a significant impact on the performance of our ‘mass audience’ products and campaigns.

  • This is the growth hacking part of the plan
  • We’ve had some successes (e.g. Webmaker and Fundraising)
  • This needs to become a continuous process

Those are my goals.

In many cases, the ultimate measure of success is when this work is done by the team rather than by me for the team.

We’re working on Process AND Culture

Process and culture feed off of and influence each other. Processes must suit the culture being cultivated. A data-driven culture can blinker creativity – it doesn’t have to, but it can. And a culture that doesn’t care for data won’t care for processes related to data. This strategy aims to balance the needs of both.

A roadmap?

I tried to write one, but basically this strategy will respond to the roadmaps of each of the MoFo teams.

So, what does Metrics work look like in 2015?

  • Building the tools and dashboards to provide the organisational visibility we need for our KPIs
  • ‘Instrumenting’ our products so that we can accurately measure how they are being used
  • Running Optimization experiments against high profile campaigns
  • Running training and support for Google Analytics, Optimizely, and other tools
  • Running project level reporting and analysis to support iterative development
  • Consulting to the Community Development Team to plan experimental initiatives

Plus: supporting teams to implement our data practices, and of course, the unknown unknowns.

…ish

Nigel BabuDino Cufflinks

Recently, in a moment of weakness, I made an order on Etsy for custom cufflinks. I had no idea how it would turn out, so it was a huge leap of faith. I got it the other day and it looks gorgeous!

They do look quite good!

For those of you wondering, I ordered it from LogiCuff. So, when can we get cufflinks on Mozilla Gear? :)

Ben KellyService Workers in Firefox Nightly

I’m pleased to announce that we now recommend normal Nightly builds for testing our implementation of Service Workers. We will not be posting any more custom builds here.

Now that bug 1110814 has landed in mozilla-central, Nightly has roughly the same functionality as the last sw-build. Just enable these preferences in about:config:

  • Set dom.caches.enabled to true.
  • Set dom.serviceWorkers.enabled to true.

Please note that on Firefox OS you must enable an additional preference as well. See bug 1125961 for details.
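Once those preferences are flipped, a quick sanity check from a page’s developer console might look something like this sketch (the script URL and cache name below are just placeholders, not something Nightly ships):

// Register a (placeholder) service worker script served from the site root.
navigator.serviceWorker.register('/sw.js').then(function(registration) {
  console.log('Service worker registered with scope: ' + registration.scope);
}, function(error) {
  console.log('Registration failed: ' + error);
});

// Exercise the Cache API that dom.caches.enabled turns on.
caches.open('test-cache').then(function(cache) {
  return cache.add('/');
}).then(function() {
  console.log('Cache API appears to be working');
});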

In addition, we’ve decided to move forward with enabling the Service Worker and Cache API preferences by default in non-release builds. We expect the Cache preference to be enabled in the tree today. The Service Worker preference should be enabled within the next week once bug 931249 is complete.

When Nightly merges to Aurora (Developer Edition), these preferences will also be enabled by default there. They will not, however, ride the trains to Beta or Release yet. We feel we need more time stabilizing the implementation before that can occur.

So, unfortunately, I cannot tell you exactly which Firefox release will ship with Service Workers yet. It will definitely not be Firefox 39. It’s possible Service Workers will ship in Firefox 40, but it’s more likely they will finally be enabled in Firefox 41.

Developer Edition 39, however, will have Cache enabled and will likely also have Service Workers enabled.

Finally, while the code is stabilizing you may see Service Worker registrations and Cache data be deleted when you update the browser. If we find that the data format on disk needs to change, we will simply be resetting the relevant storage area in your profile. Once the decision to ship is made, any future changes will properly migrate data without any loss. Again, this only affects Service Worker registrations and data stored in Cache.

As always we appreciate your help testing, reporting bugs, and implementing code.

Gervase MarkhamHow to Responsibly Publish a Misissued SSL Certificate

I woke up this morning wanting to write a blog post, then I found that someone else had already written it. Thank you, Andrew.

If you succeed in getting a certificate misissued to you, then that has the opportunity to be a great learning experience for the site, the CA, the CAB Forum, or all three. Testing security is, to my mind, generally a good thing. But publishing the private key turns it from a great learning experience into a browser emergency update situation (at least at the moment, in Firefox, although we are working to make this easier with OneCRL).

Friends don’t publish private keys for certs for friends’ domain names. Don’t be that guy. :-)

QMOFirefox 38 Aurora Testday Results

Hello everyone!

Last Friday, March 20th, we held the Firefox 38 Aurora Testday. We’d like to take this opportunity to thank everyone for getting involved in the proposed testing activities and in general, for your hard work in helping us make Firefox even better.

Many thanks to doublex, Aleksej, Hossain Al Ikram and kenkon for their efforts and contributions and to all our moderators. Your help is greatly appreciated!

We look forward to seeing you at the next Testday. Keep an eye on QMO for upcoming events and schedule announcements!

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1145502] cf_crash_signature missing in fresh BMO install
  • [1145689] Adding “Content Services” into Key Initiatives
  • [1146219] the ‘abuse’ comment tag should function like the ‘abusive’ tag and trigger automatic account disabling
  • [1141165] Enforce mandatory field on form.reps.mentorship
  • [1146434] renaming of “Distribution/Bundling” component broke one of the project kickoff form’s sub-bugs
  • [1096798] prototype modal show_bug view

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Chris DoubleContributing to Servo

Servo is a web browser engine written in the Rust programming language. It is being developed by Mozilla. Servo is open source and the project is developed on github.

I was looking for a small project to do some Rust programming and Servo being written in Rust seemed likely to have tasks that were small enough to do in my spare time yet be useful contributions to the project. This post outlines how I built Servo, found issues to work on, and got them merged.

Preparing Servo

The Servo README has details on the pre-requisites needed. Installing the pre-requisites and cloning the repository on Ubuntu was:

$ sudo apt-get install curl freeglut3-dev \
   libfreetype6-dev libgl1-mesa-dri libglib2.0-dev xorg-dev \
   msttcorefonts gperf g++ cmake python-virtualenv \
   libssl-dev libbz2-dev libosmesa6-dev 
...
$ git clone https://github.com/servo/servo

Building Rust

The Rust programming language has been fairly volatile in terms of language and library changes. Servo deals with this by requiring a specific git commit of the Rust compiler to build. The Servo source is periodically updated for new Rust versions. The commit id for Rust that is required to build is stored in the rust-snapshot-hash file in the Servo repository.

If the Rust compiler isn’t installed already there are two options for building Servo. The first is to build the required version of Rust yourself, as outlined below. The second is to let the Servo build system, mach, download a binary snapshot and use that. If you wish to do the latter, and it may make things easier when starting out, skip this step to build Rust.

$ cat servo/rust-snapshot-hash
d3c49d2140fc65e8bb7d7cf25bfe74dda6ce5ecf/rustc-1.0.0-dev
$ git clone https://github.com/rust-lang/rust
$ cd rust
$ git checkout -b servo d3c49d2140fc65e8bb7d7cf25bfe74dda6ce5ecf
$ ./configure --prefix=/home/myuser/rust
$ make
$ make install

Note that I configure Rust to be installed in a directory off my home directory. I do this out of preference to enable managing different Rust versions. The build will take a long time and once built you need to add the prefix directories to the PATH:

$ export PATH=$PATH:/home/myuser/rust/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/myuser/rust/lib

Building Servo

There is a configuration file used by the Servo build system to store information on what Rust compiler to use, whether to use a system wide Cargo (Rust package manager) install and various paths. This file, .servobuild, should exist in the root of the Servo source that was cloned. There is a sample file that can be used as a template. The values I used were:

[tools]
system-rust = true
system-cargo = false

[build]
android = false
debug-mozjs = false

If you want to use a downloaded binary snapshot of Rust to build Servo you should set the system-rust setting to false. With it set to true as above it will expect to find a Rust of the correct version in the path.

Servo uses the mach command line interface that is used to build Firefox. Once the .servobuild is created then Servo can be built with:

$ ./mach build

Servo can be run with:

$ ./mach run http://bluishcoder.co.nz

To run the test suite:

$ ./mach test

Finding something to work on

The github issue list has three useful labels for finding work. They are:

For my first task I searched for E-easy issues that were not currently assigned (using the C-assigned label). I commented in the issue asking if I could work on it and it was then assigned to me by a Servo maintainer.

Submitting the Fix

Fixing the issue involved:

  • Forking the Servo repository on github.
  • Cloning my fork locally and making the changes required to the source in a branch I created for the issue I was working on.
  • Committing the changes locally and pushing them to my fork on github.
  • Raising a pull request for my branch.

Raising the pull request runs a couple of automated actions on the Servo repository. The first is an automated response thanking you for the changes followed by a link to the external critic review system.

Reviews

The Servo project uses the Critic review tool. This will contain data from your pull request and any reviews made by Servo reviewers.

To address reviews I made the required changes and committed them to my local branch as separate commits, using the fixup flag to git commit. This associates the new commit with the original commit that contained the change. It allows easier squashing later.

$ git commit --fixup=<commit id of original commit>

The changes are then pushed to the github fork and the previously made pull request is automatically updated. The Critic review tool also automatically picks up the change and will associate the fix with the relevant lines in the review.

With some back and forth the changes get approved and a request might be made to squash the commits. If fixup was used to record the review changes then they will be squashed into the correct commits when you rebase:

$ git fetch origin
$ git rebase --autosquash origin/master

Force pushing this to the fork will result in the pull request being updated. When the reviewer marks this as r+ the merge to master will start automatically, along with a build and test runs. If test failures happen these get added to the pull request and the review process starts again. If tests pass and it merges then it will be closed and the task is done.

A full overview of the process is available on the github wiki under Github and Critic PR handling 101.

Conclusion

The process overhead of committing to Servo is quite low. There are plenty of small tasks that don’t require a deep knowledge of Rust. The first task I worked on was basically a search/replace. The second was more involved, implementing the view-source protocol and text/plain handling. The latter allows the following to work in Servo:

$ ./mach run view-source:http://bluishcoder.co.nz
$ ./mach run http://cd.pn/plainttext.txt

The main issues I encountered working with Rust and Servo were:

  • Compiling Servo is quite slow. Even changing private functions in a module would result in other modules rebuilding. I assume this is due to cross module inlining.
  • I’d hoped to get away from intermittent test failures like there are in Gecko but there seems to be the occasional intermittent reftest failure.

The things I liked:

  • Very helpful Servo maintainers on IRC and in github/review comments.
  • Typechecking in Rust helped find errors early.
  • I found it easier comparing Servo code to HTML specifications and following them together than I do in Gecko.

I hope to contribute more as time permits.

Dave TownsendMaking communicating with chrome from in-content pages easy

As Firefox increasingly switches to running in multiple processes, we’ve been finding common problems. Where we can, we are designing nice APIs to make solving them easy. One problem is that we often want to run in-content pages like about:newtab and about:home in the child process without privileges, making them safer and less likely to bring down Firefox in the event of a crash. These pages still need to get information from and pass information to the main process though, so we have had to come up with ways to handle that. Often we use custom code in a frame script acting as a middle-man, using things like DOM events to listen for requests from the in-content page and then messaging the main process.

We recently added a new API to make this problem easier to solve. Instead of needing code in a frame script the RemotePageManager module allows special pages direct access to a message manager to communicate with the main process. This can be useful for any page running in the content area, regardless of whether it needs to be run at low privileges or in the content process since it takes care of listening for documents and hooking up the message listeners for you.

There is a low-level API available but the higher-level API is probably more useful in most cases. If your code wants to interact with a page like about:myaddon just do this from the main process:

Components.utils.import("resource://gre/modules/RemotePageManager.jsm");
let manager = new RemotePages("about:myaddon");

The manager object is now something resembling a regular process message manager. It has sendAsyncMessage and addMessageListener methods but unlike the regular e10s message managers it only communicates with about:myaddon pages. Unlike the regular message managers there is no option to send synchronous messages or pass cross-process wrapped objects.

When about:myaddon is loaded it has sendAsyncMessage and addMessageListener functions defined in its global scope for regular JavaScript to call. Anything that can be structured-cloned can be passed between the processes.
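As a rough sketch of how the two sides might talk to each other (the message names and payload below are made up for illustration, not part of the API), the main process could do:

// Main process: reply whenever an about:myaddon page asks for settings.
manager.addMessageListener("MyAddon:GetSettings", function(message) {
  manager.sendAsyncMessage("MyAddon:Settings", { enabled: true });
});

and the about:myaddon page itself could do:

// In-content page: these functions are defined in the global scope.
addMessageListener("MyAddon:Settings", function(message) {
  console.log("enabled: " + message.data.enabled);
});
sendAsyncMessage("MyAddon:GetSettings");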

The module documentation has more in-depth examples showing message passing between the page and the main process.

The RemotePageManager module is available in nightlies now and you can see it in action with the simple change I landed to switch about:plugins to run in the content process. For the moment the APIs only support exact URL matching but it would be possible to add support for regular expressions in the future if that turns out to be useful.

Mozilla Science LabMozilla Science Lab Week in Review, March 16-22

The Week in Review is our weekly roundup of what’s new in open science from the past week. If you have news or announcements you’d like passed on to the community, be sure to share on Twitter with @mozillascience and @billdoesphysics, or join our mailing list and get in touch there.

  • Andrew Nesbitt has launched libraries.io, a project to help tackle the discoverability challenge in open source and open science software. By leveraging the PageRank algorithm, Nesbitt hopes to represent what is actually being used (rather than what is simply admired) to better represent the true workhorses of open source.
  • The Center for Open Science began composing a wiki on the arguments and motivations for open science as part of a “growing open source, open science” meeting that Titus Brown and Kaitlin Thaney co-organized last week. The new wiki explores ways we can work together better across open science initiatives – watch their space for developments, and get in touch there to contribute!
  • The Wikimedia Foundation has adopted an open access policy to support the free reuse of research produced with their support.
  • Stephanie Hampton et al have submitted a preprint of ‘The Tao of Open Science for Ecology‘, a paper outlining a roadmap to understanding and participating in open science. This paper got its start as a collaborative discussion at the NCEAS Codefest in 2014.
  • GitHub added PDF rendering to their services last week.
  • PLOS Biology published recommendations for the role of publishers in the dissemination of open data.
  • Jojo Scoble wrote a great blog post for Digital Science describing her experiences sharing her data openly, and why other researchers should consider it. On the common worry of whether a dataset is ‘good’ enough to publish, Scoble quoted her former supervisor:

    “You could spend years trying to collect the perfect data set when you should be publishing what you have, which is enough.”

  • The National Science Foundation in the US announced a plan to accommodate comprehensive public access to research results; in it, the “NSF will require that articles in peer-reviewed scholarly journals and papers in juried conference proceedings or transactions be deposited in a public access compliant repository and be available for download, reading and analysis within one year of publication.”
  • The Fair Access to Research and Technology (FASTR) act was reintroduced recently to the US Congress. Successor to the Federal Research Public Access Act, FASTR introduces, among other provisions, an emphasis on reuse and correct attribution.
  • PLOS Collections has showcased a collection of negative results, underscoring the importance of publishing such studies to the broader scientific community.
  • In a similar vein, submissions are open for the ERROR conference, highlighting negative results in Munich, Germany on 3-4 September.
  • UNESCO recently put its open access curriculum online; the content is targeted at librarians and researchers, and emphasizes topics from introducing open access, to intellectual property rights to how to share your work in an open access model.
  • Also in partnership with UNESCO, Foster Open Science is hosting a two-day Open Science Workshop for European graduate school administrators in order to ‘construct a roadmap for making Open Science certifiable and standard training for future graduates’.
  • Tom Baden et al recently published an article on 3D printing your own lab equipment, in order to mitigate the costs and hurdles to setting up a research program.
  • The Scholarly Kitchen recently interviewed two of the founders of Advancing Research and Communication Scholarship (ARCS), a new conference coming April 26-28 in Philadelphia ‘designed to provide a broad and collaborative forum for addressing and affecting scholarly and scientific communication.’ (- Alice Meadows, Scholarly Kitchen)
  • Chris Parr wrote an article for Times Higher Education on Carol Goble’s work and comments on the hurdles created by ostentation in scholarly communication and questions raised by the failure to distribute both data and code.
  • The Su Lab is holding a hackathon on biomedical big data, May 7-9.
  • Finally, don’t miss our map of hacky hours and study groups – and if you know of anyone running a meetup about coding for researchers, let us know so we can add you to the map!

 

Daniel StenbergFixing the Func KB-460 ‘-key

I use a Func KB-460 keyboard with Nordic layout – that basically means it is a qwerty design with the Nordic keys for “åäö” on the right side as shown on the picture above. (yeah yeah Swedish has those letters fairly prominent in the language, don’t mock me now)

The most annoying part with this keyboard has been that the key repeat on the apostrophe key has been sort of broken. If you pressed it and then another key, it would immediately generate another (or more than one) apostrophe. I’ve sort of learned to work around it with some muscle memory and treating the key with care but it hasn’t been ideal.

This problem is apparently only happening on Linux someone told me (I’ve never used it on anything else) and what do you know? Here’s how to fix it on a recent Debian machine that happens to run and use systemd so your mileage will vary if you have something else:

1. Edit the file “/lib/udev/hwdb.d/60-keyboard.hwdb”. It contains keyboard mappings of scan codes to key codes for various keyboards. We will add a special line for a single scan code and for this particular keyboard model only. The line includes the USB vendor and product IDs in uppercase and you can verify that it is correct with lsusb -v and check your own keyboard.

So, add something like this at the end of the file:

# func KB-460
keyboard:usb:v195Dp2030*
KEYBOARD_KEY_70031=reserved

2. Now update the database:

$ udevadm hwdb --update

3. … and finally reload the tweaks:

$ udevadm trigger

4. Now you should have a better working key and life has improved!

With a slightly older Debian without systemd, the instructions I got that I have not tested myself but I include here for the world:

1. Find the relevant input for the device by “cat /proc/bus/input/devices”

2. Make a very simple keymap. Make a file with only a single line like this:

$ cat /lib/udev/keymaps/func
0x70031 reserved

3. Map the key with ‘keymap’:

$ sudo /lib/udev/keymap -i /dev/input/eventX /lib/udev/keymaps/func

where X is the event number you figured out in step 1.

The related kernel issue.

Pierros PapadeasMultiple emails on mozillians.org

tl;dr version: You can now associate multiple emails to your mozillians.org profile

Background

Since the start, users of mozillians.org were able to associate only one email per profile. This was used both as the email displayed on your profile (depending on your privacy settings) and, most importantly, as the email used to log in with Persona.

Rationale

Most of us own and use multiple emails every day: personal, business, aliases, and any combination in between. Even within various Mozilla properties, people have associated different profiles with different emails (e.g. SuMo account vs. Bugzilla email). Although we need to recognize and respect the will of some people to use different emails as different (separate) online personas, we also need to find ways to make identity management and consolidation easier for people who choose to use multiple emails under the same name.
Being able to associate multiple emails with one mozillians.org profile presents us with really interesting advantages. For one, you can log in to websites that check for your mozillians.org account using any email associated with your Persona account. Also, other mozillians will be able to look you up using any of your emails. Finally, from a metrics standpoint, we will be able to effectively deduplicate accounts and metrics/statistics across Mozilla’s different systems.

Implementation

  • Main email is being used for communication with mozillians in Mozillians.org
  • Alternate emails are mostly being used for identity deduplication
  • API v2 exposes alternate emails

What should I do?

  • Login to mozillians.org
  • Click “Edit your profile”
  • Click “Edit E-mail addresses”

There we provide all the functionality to manage your profile’s emails:

  • Add/delete alternate email address
  • Change your primary email address
  • Manage email visibility

Multiple Accounts?

We don’t expect many people to have multiple profiles on mozillians.org; we cannot know for sure, only anecdotally. People with multiple accounts should contact us (#commtools on IRC, or open a bug here) for help merging them, or they can choose to keep one and delete the others.

What is next?

The mozillians.org dev team is working tirelessly on new features and enhancements that will make mozillians.org even easier to use and more robust as a source of truth about all things mozillians. You can check out our roadmap here, follow our development and contribute on github, and join our discussions here.

Cameron KaiserPwn2Own this Power Mac (plus: IonPower's time is running out)

All of the mighties fell at this year's Pwn2Own, including Firefox, where clever minds find gaping holes for fun and a small amount of profit (but then it also makes a great resume builder, which may be a more reliable paycheque). The holes last year were quickly patched, and after I determined we were also vulnerable we followed suit. As usual, for this year's naughtiness Mozilla already has patched versions available, including ESR 31.5.3.

However, the two holes used for this year’s marvelous and somewhat alarming crack are not exploitable in TenFourFox directly: the SVG navigation fault cannot be effectively used to escalate privileges in TenFourFox’s default configuration, and we don’t even build the code that uses JavaScript bounds checking. The navigation fault may have other weaponizable vectors and we do want to roll that fix, but the good news is 31.6 will come out this weekend, so no urgent chemspill is necessary unless I discover some odd way of busting through it between now and then.

I lost about a week of hacking time to one of my occasional bouts of bronchitis, which is pushing IonPower’s timing very close to the wire. We need two cycles for 38 to allow localizers to catch up and people to test, and of course somewhere in that timeframe we also have to finish the move off Google Code (a.k.a. Eric Schmidt is a Poopypants Land). Over the weekend I got IonPower to pass the test suite in Baseline mode, which is very encouraging, but some of the same problems that doomed PPCBC’s Ion work are starting to rear up again.

The biggest problem that recurred is an old one: Ion’s allocator is not endian-safe. I get bad indices off it for stack slots and other in-memory boxed values, and all but the simplest scripts either assert deep within Ion’s bowels (not our new PowerPC backend) or generate code that is verifiably wrong. Unfortunately, Mozilla doesn’t really document Ion’s guts anywhere, so I don’t know where to start with fixing it, and all the extant Ion backends, even MIPS, are little-endian. Maybe some Mozilla JIT folks are reading and can comment? (See also the post in the JavaScript engine internals group.)

One old problem with bad bailout stack frames, however, is partially solved with IonPower. I say partially because even though the stack frame is sane now, it still crashes, but I have a few ideas about that. However, the acid test is getting Ion code to jump to Baseline, run a bit in Baseline, and then jump back to Ion to finish execution. PPCBC could never manage this without crashing. If IonPower can do no better, there is no point in continuing the porting effort.

Even if this effort runs aground again, that doesn't make IonPower useless. PPCBC may pass the test suite, but some reproducible bugs in Tenderapp indicate that it goes awry in certain extremely-tough-to-debug edge cases, and IonPower (in Baseline mode) does pass the test suite as well now. If I can get IonPower to be as fast or faster than PPCBC even if it can't execute Ion code either, we might ship it anyway as "PPCBC II" in Baseline-only mode to see if it fixes those problems -- I have higher confidence that it will, because it generates much more sane and "correct" output and doesn't rely on the hacks and fragile glue code that PPCBC does in 24 and 31. I have to make this decision sometime mid-April, though, because we're fast approaching EOL for 31.

Also, as of Firefox 38 Mozilla no longer supports gcc 4.6, the compiler which we build with. However, I'm not interested in forcing a compiler change so close to the next ESR, and it appears that we should still be able to get it working on 4.6 with some minor adjustments. That won't be the case for Fx39, if we're even going to bother with that, but fortunately there is a gcc 4.8 in MacPorts and we might even use Sevan's gcc from pkgsrc. Again, the decision to continue will be based on feasibility and how close Electrolysis is to becoming mandatory before 45ESR, which is the next jump after that. For now, TenFourFox 38 is the highest priority.

This Week In RustThis Week in Rust 75

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

What's cooking on master?

79 pull requests were merged in the last week, and 9 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors

  • Johannes Oertel
  • kjpgit
  • Nicholas
  • Paul ADENOT
  • Sae-bom Kim
  • Tero Hänninen

Approved RFCs

New RFCs

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

<mbrubeck> the 5 stages of loss and rust
<mbrubeck> 1. type check. 2. borrow check. 3. anger. 4. acceptance. 5. rust upgrade

Thanks to jdm for the tip. Submit your quotes for next week!

Anthony JonesYouTube, MSE and Firefox 37

Being invisible is important when you build infrastructure. You don't notice what your browser does for you unless it is doing a poor job. We have been busy making Firefox video playback more robust, more asynchronous, faster and better.

You may have some memory of selecting between high and low quality on YouTube. When you switched it would stop the video and buffer the video at the new quality. Now it defaults to Auto but allows you to manually override. You may have noticed that the Auto mode doesn't stop playing when it changes quality. Nobody really noticed, but a tiny burden was lifted from users. You need to know exactly one less thing to watch videos on YouTube and many other sites.

This Auto mode is otherwise known as DASH, which stands for Dynamic Adaptive Streaming over HTTP. Flash has supported adaptive streaming for some time. In HTML5 video, DASH is supported on top of an API called MSE (Media Source Extensions). MSE allows Javascript to directly control the data going into the video element. This allows DASH to be supported in Javascript, along with some other things that I'm not going to go into.
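For the curious, the heart of MSE is quite small. A minimal sketch, assuming a single MP4 segment at a placeholder URL with a placeholder codecs string, looks roughly like this:

var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function() {
  // The mime type/codecs string and segment URL are placeholders.
  var sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/segments/video-720p-0001.mp4');
  xhr.responseType = 'arraybuffer';
  xhr.onload = function() {
    // Feed the downloaded bytes straight into the video element's pipeline.
    sourceBuffer.appendBuffer(xhr.response);
  };
  xhr.send();
});

A real DASH player repeats that append step for every segment, choosing each segment’s quality from the bandwidth it has measured so far; that adaptation loop is what the Auto setting does behind the scenes.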


It has taken a surprising amount of work to make this automatic. My team has been working on adding MSE to Firefox for a couple of years now as well as adding MP4 support on a number of platforms. We're finally getting to the point where it is working really well on Windows Vista and later in Firefox 37 beta. I know people will ask, so MSE is coming soon for Mac, and it is coming later for Linux and Windows XP.

Making significant changes isn't without its pain but it is great to finally see the light at the end of the tunnel. Firefox beta, developer edition and nightly users have put up with a number of teething problems. Most of them have been sorted out. I'd like to thank everyone who has submitted a crash report, written feedback or filed bugs. It has all helped us to find problems and make the video experience better.

Robustness goes further than simply fixing bugs. To make something robust it is necessary to keep simplifying the design and to create re-usable abstractions. We've switched to using a thread pool for decoding, which keeps the number of threads down. Threads use a lot of address space, which is at a premium in a 32 bit application.

We've used a promise-like abstraction to make many things non-blocking. They make chaining asynchronous operations much simpler. They're like Javascript promises, except being C++ they also guarantee you get called back on the right thread.

We're working towards getting all the complex logic on a single thread, with all the computation done in a thread pool. Putting the video playback machinery on a single thread makes it much clearer which operations are synchronous and which ones are asynchronous. It doesn't hurt performance as long as the state machine thread never blocks. In fact you get a performance win because you avoid locking and cache contention.

We're white listing MSE for YouTube at first but we are intending to roll it out to other sites soon. There are a couple of spec compliance issues that we need to resolve before we can remove the white list. Meanwhile, YouTube is looking really good in Firefox 37.



Daniel StenbergSumming up the birthday festivities

I blogged about curl’s 17th birthday on March 20th 2015. I’ve done similar posts in the past and they normally pass by mostly undetected and hardly discussed. This time, something else happened.

Primarily, the blog post quickly became the single most viewed blog entry I’ve ever written – and I’ve been doing it for many many years. Already in the first day it was up, I counted more than 65,000 views.

The blog post got more comments than any other blog post I’ve ever done. Right now they have probably stopped, but there are 60 of them, almost every one of them saying congratulations and/or thanks.

The posting also got discussed on both hacker news and reddit, totaling more than 260 comments, most of them in a positive spirit.

The initial tweet I made about my blog post is the most retweeted and starred tweet I’ve ever posted. At least 87 retweets and 49 favorites (it might even grow a bit more over time). Others subsequently also tweeted the link hundreds of times. I got numerous replies and friendly call-outs on twitter saying “congrats” and “thanks” in many variations.

Spontaneously (i.e. not initiated or requested by me, but most probably because of a comment on hacker news), I also suddenly started to get donations via the curl web site’s donation page (to paypal). Within 24 hours of my post, I had received 35 donations from friendly fans who donated a total of 445 USD. A quick count revealed that the total number of donations over curl’s entire lifetime before this day was 43. In one day we had basically gotten as many as we had in the first 17 years.

Interesting data from this donation “race”: I got donations ranging from 1 USD (yes, one dollar) to 50 USD, and the average donation was 12.7 USD.

Let me end this summary by thanking everyone who in various ways made the curl birthday extra fun by being nice and friendly and some even donating some of their hard earned money. I am honestly touched by the attention and all the warmth and positiveness. Thank you for proving internet comments can be this good!

John O'Duinn“The Race for Space” by Public Service Broadcasting

I was happily surprised by this as a gift recently.

For me, the intermixing of old original broadcasts with original composition music worked well as an idea. Choosing which broadcasts to include was just as important as composing the right music.

I liked how the composers framed the album around 9 pivotal moments, from 1957 (the launch of Sputnik) to 1972 (Apollo 17, the last Apollo departing the moon). Obviously, there were a lot of broadcasts to choose from, and I liked their choices – some of which I’d never heard before. Kennedy’s “We choose to go to the moon” speech, a homage to Valentina Tereshkova (the first woman in space), Apollo 8’s “see you on the flip side” (the earthrise photo taken by Apollo 8 is still one of my favourites), and the tense interactions of the ground and flight teams in the final seconds of Apollo 11’s descent to landing (including handling the 1202 and 1201 errors!).

All heady stuff and well worth a listen.

Chris PearceReplacing Lenovo optical drive with second hard drive: The Lenovo adapter is disappointing

I recently ordered a Lenovo Serial ATA Hard Drive Bay Adapter III for my Lenovo T510 laptop. This can hold a hard drive, and replaces the DVD/CD-ROM drive in your laptop. This enables your laptop to run a second hard drive.

I've used my optical drive two, maybe three times since getting the laptop, so swapping it for another hard drive seems like a good trade for me.

The Lenovo drive bay itself works fine, but I'm still disappointed in Lenovo's product.

When installed, the drive bay looks like this:
Lenovo Serial ATA Hard Drive Bay Adapter installed in a Lenovo T510
The problem here is that there's a gap of approximately 3mm (~0.12 inches) between the top of the drive bay and the ceiling of the optical disk cavity. This means the drive bay can wobble vertically, so much so that I feel the need to tape it in place to stop it flopping around. This looks ridiculous.

Secondly, in order to install your hard drive inside the Lenovo drive bay, you need a hard drive cover. This is the metal cover that encases the hard drives that ship in Lenovo laptops. The covers normally have rubber bumpers/rails to stop the drive moving around. You need to take the bumpers off to install your drive into the hard drive bay.

The hard drive cover looks like this:

Lenovo hard drive cover

And with a hard drive in it, the hard drive cover looks like this:

Lenovo hard drive cover encasing a hard drive.

Note the screws. The drive bay has notches which the screws snap into, holding the drive securely inside the drive bay. The screws may be the only important bit here; you probably don't actually need the drive cover itself to install the drive into the bay, just the screws, since they're what hold the drive in place.

The frustrating thing is that nothing on the Lenovo web site tells you that you need a drive cover to install a drive into the drive bay. My drive bay arrived and I had to loot my old laptop's drive cover in order to install a new drive into my current laptop.

And I also couldn't find the drive covers listed on Lenovo's web site. Presumably if you buy a laptop hard drive from Lenovo they come with this cover, and presumably Lenovo use this as a way to force you to buy all your laptop hard drives directly from Lenovo.

That's the sort of behaviour I'd expect from Apple, not Lenovo.

Thankfully Ann at IT was able to figure out how to order the drive covers separately. Thanks Ann!

Overall, the product was easy to install (once I had a drive cover) and works fine (apart from the wobble), but I'm still disappointed. Next time, I'll try one of newmodeus' Lenovo drive caddies.

Update 22 March 2015: This blog post now has a Russian translation on softdroid.net: "Замена оптического дисковода вторым жестким диском на Lenovo" ("Replacing the optical drive with a second hard drive on a Lenovo").

Pascal FinetteLink Pack (March 22nd)

What I was reading this week:

Tantek ÇelikDublin Core Application Profiles — A Brief Dialogue

IndieWebCamp Cambridge 2015 is over. Having finished their ice cream and sorbet while sitting on a couch at Toscanini’s watching it snow, the participants turn to the topics of sameAs, reuse, and general semantics, which lead to a mention of Dublin Core Application Profiles.

  1. A:
    Dublin Core Application Profiles could be useful for a conceptual basis for metadata interoperation.
  2. T:
    (Yahoos for dublin core application profiles, clicks first result)
  3. T:
    Dublin Core Application Profile Guidelines (SUPERSEDED, SEE Guidelines for Dublin Core Application Profiles)
  4. T:
    Kind of like how The Judean People’s Front was superseded by The People’s Front of Judea?
  5. A:
    (nervous laugh)
  6. T:
    Guidelines for Dublin Core Application Profiles
  7. T:
    Replaces: http://dublincore.org/documents/2008/11/03/profile-guidelines/
  8. T:
    Hmm. (clicks back)
  9. T:
    Dublin Core Application Profile Guidelines
  10. T:
    Is Replaced By: Not applicable, wait, isn’t that supposed to be an inverse relationship?
  11. A:
    I’m used to this shit.
  12. T:
    (nods, clicks forward, starts scrolling, reading)
  13. T:
    We decide that the Library of Congress Subject Headings (LCSH) meet our needs. - I’m not sure the rest of the world would agree.
  14. A:
    No surprises there.
  15. T:
    The person has a name, but we want to record the forename and family name separately rather than as a single string. DCMI Metadata Terms has no such properties, so we will take the properties foaf:firstName and foaf:family_name
  16. T:
    Wait what? Not "given-name" and "family-name"? Nor "first-name" and "last-name" but "firstName" and "family_name"?!?
  17. A:
    Clearly it wasn’t proofread.
  18. T:
    But it’s in the following table too. foaf:firstName / foaf:family_name
  19. A:
    At least it’s internally consistent.
  20. A:
    Oh, this is really depressing.
  21. A:
    Did they even read the FOAF spec or did they just hear a rumour?
  22. T:
    (opens text editor)

Air MozillaWebdev Beer and Tell: March 2015

Webdev Beer and Tell: March 2015 Web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

Kim MoirScaling Yosemite

We migrated most of our Mac OS X 10.8 (Mountain Lion) test machines to 10.10.2 (Yosemite) this quarter.

This project had two major constraints:
1) Use the existing hardware pool (~100 r5 mac minis)
2) Keep wait times sane1.  (The machines are constantly running tests most of the day due to the distributed nature of the Mozilla community and this had to continue during the migration.)

So basically upgrade all the machines without letting people notice what you're doing!

Yosemite Valley - Tunnel View Sunrise by ©jeffkrause, Creative Commons by-nc-sa 2.0

Why didn't we just buy more minis and add them to the existing pool of test machines?
  1. We run performance tests and thus need to have all the machines running the same hardware within a pool so performance comparisons are valid.  If we buy new hardware, we need to replace the entire pool at once.  Machines with different hardware specifications = useless performance test comparisons.
  2. We tried to purchase some used machines with the same hardware specs as our existing machines.  However, we couldn't find a source for them.  As Apple stops production of old mini hardware each time they announce a new one, they are difficult and expensive to source.
Apple Pi by ©apionid, Creative Commons by-nc-sa 2.0

Given that Yosemite was released last October, why are we only upgrading our test pool now?  We wait until the population of users running a new platform2 surpasses that of the old one before switching.

Mountain Lion -> Yosemite is an easy upgrade on your laptop.  It's not as simple when you're updating production machines that run tests at scale.

The first step was to pull a few machines out of production and verify that the Puppet configuration was working.  In Puppet, you can specify commands to run only on certain operating system versions, so we implemented several commands to accommodate changes for Yosemite: changing the default scrollbar behaviour, disabling new services that interfere with test runs, configuring the new Apple security permissions required by debug tests, and so on.
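
To make that concrete, here is a rough sketch of the kind of OS-level tweaks those Puppet rules wrap, written as plain shell for illustration only: the scrollbar command is a standard OS X default, the launch agent path is a hypothetical stand-in, and in the real manifests these are Puppet resources gated on the reported OS version rather than ad-hoc commands.

# Check which OS X version the machine reports (10.10.x = Yosemite)
sw_vers -productVersion
# Pin the default scrollbar behaviour so rendering is consistent between test runs
defaults write NSGlobalDomain AppleShowScrollBars -string "Always"
# Disable a launch agent that interferes with test runs (hypothetical plist path)
launchctl unload -w /System/Library/LaunchAgents/com.example.interfering.plist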

Once the Puppet configuration was stable, I updated our configs so that people could run tests on Try and allocated a few machines to this pool. We opened bugs for tests that failed on Yosemite but passed on other platforms.  This was a very iterative process: run tests on try, look at failures, file bugs, fix test manifests. Once we had the opt (functional) tests in a green state on try, we could start the migration.

Migration strategy
  • Disable selected Mountain Lion machines from the production pool
  • Reimage as Yosemite, update DNS and let them puppetize
  • Land patches to disable Mountain Lion tests and enable corresponding Yosemite tests on selected branches
  • Enable Yosemite machines to take production jobs
  • Reconfig so the buildbot masters enable the new Yosemite builders and schedule jobs appropriately
  • Repeat this process in batches
    • Enable Yosemite opt and performance tests on trunk (gecko >= 39) (50 machines)
    • Enable Yosemite debug (25 more machines)
    • Enable Yosemite on mozilla-aurora (15 more machines)
We currently have 14 machines left on Mountain Lion for mozilla-beta and mozilla-release branches.

As I mentioned earlier, the two constraints with this project were to use the existing hardware pool that constantly runs tests in production and to keep the existing wait times sane.  We encountered two major problems that impeded that goal: frequent Talos failures on 10.10 (bug 1144206) and debug tests that initially took 1.5-2x longer to complete on Yosemite (bug 1125998).

It's a compliment when people say things like "I didn't realize that you updated a platform" because it means the upgrade did not cause large scale fires for all to see.  So it was a nice to hear that from one of my colleagues this week.

Thanks to philor, RyanVM and jmaher for opening bugs with respect to failing tests and greening them up.  Thanks to coop for many code reviews. Thanks dividehex for reimaging all the machines in batches and to arr for her valiant attempts to source new-to-us minis!

References
1Wait times represent the time from when a job is added to the scheduler database until it actually starts running. We usually try to keep this under 15 minutes, but it really varies depending on how many machines we have in the pool.
2We run tests for our products on a matrix of operating systems and operating system versions. The terminology for operating system x version in many release engineering shops is a platform.  To add to this, the list of platforms we support varies across branches.  For instance, if we're going to deprecate a platform, we'll let this change ride the trains to release.

Further reading
Bug 1121175: [Tracking] Fix failing tests on Mac OSX 10.10 
Bug 1121199: Green up 10.10 tests currently failing on try 
Bug 1126493: rollout 10.10 tests in a way that doesn't impact wait times
Bug 1144206: investigate what is causing frequent talos failures on 10.10
Bug 1125998: Debug tests initially took 1.5-2x longer to complete on Yosemite


Why don't you just run these tests in the cloud?
  1. The Apple EULA severely restricts virtualization on Mac hardware. 
  2. I don't know of any major cloud vendors that offer the Mac as a platform.  Those that claim they do are actually renting racks of Macs on a dedicated per host basis.  This does not have the inherent scaling and associated cost saving of cloud computing.  In addition, the APIs to manage the machines at scale aren't there.
  3. We manage ~350 Mac minis.  We have more experience scaling Apple hardware than many vendors. Not many places run CI at Mozilla scale :-) Hopefully this will change and we'll be able to scale testing on Mac products like we do for Android and Linux in a cloud.

Emma IrwinP2PU Course in a Box & Mozilla Community Education

Last year I created my first course on the P2PU platform, titled ‘Hacking Open Source Participation’, and through that fantastic experience stumbled across a newer P2PU project called Course in a Box. Built on the Jekyll blogging software, Course in a Box makes it easy to create online educational content powered by GitHub Pages.

As awesome as this project is, there were a number of challenges I needed to solve before adopting it for Mozilla’s Community Education Platform:

 Hierarchy

Jekyll is a blog-aware, static site generator. It uses template and layout files + markdown + CSS to display posts. Course in a Box comes with a top-level category for content called modules, and within those modules is the content – which works beautifully for a single-course purpose.

The challenge is that we need to write education and training materials on a regular basis, and creating multiple Course in a Box(es) would be a maintenance nightmare. What I really needed was a way to build multiple courses under one or more topics versus the ‘one course’ model. To do that, we needed to build out a hierarchy of content.

What I did

Visualized the menu moving from a list of course modules

[Screenshot: the original menu, a list of course modules]

To a list of course topics.

[Screenshot: the revised menu, a list of course topics]

So Marketpulse, DevRel (for example) are course topics.  Topics are followed by courses, which then contain modules.

On the technical side, I added a new variable called submodules to the courses.yml data file.

[Screenshot: the submodules variable added to the courses.yml data file]

Submodules are prefixed with the topic they belong ‘under’, for example: reps_mentor_training is a module in the topic reps.  This is also how module folders are named:

[Screenshot: module folders named with their topic prefix]

Using this method of prefixing modules with topics, it was super-simple to create a dropdown menu.

[Screenshot: the resulting dropdown menu]

As far as Jekyll is concerned, these are all still ‘modules’, which means that even top level topics can have content associated.  This works great for a ‘landing page’ type of introduction to a topic.

Curriculum Modularity

As mentioned, Jekyll is a blogging platform, so there’s no depth or usability designed into the content architecture, and this is a problem for our goal of writing modular curriculum. I wanted to make it possible to reuse curriculum not only across our instance of Course in a Box, but across other instances at Mozilla as well.

What I did

I created a separate repository for community curriculum and made this a git submodule  in the _includes folder of Course in a Box.
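
The gist of that setup, sketched as shell commands (the repository URL below is a placeholder, not necessarily the real curriculum repo):

# From the root of the Course in a Box checkout, add the shared curriculum
# repository as a submodule inside _includes (the URL here is a placeholder)
git submodule add https://github.com/example/community-curriculum.git _includes/community_curriculum
# After cloning the site elsewhere, pull down the submodule contents
git submodule update --init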

[Screenshot: the community curriculum repository included as a git submodule in the _includes folder]

With this submodule & Jekyll’s include() function, I was able to easily reference our modular content from a post:

{% include community_curriculum/market_pulse/FFOS/en/introduction.md %}

The only drawback is that Jekyll expects all content referenced with include() to be in a specific folder – and so having content in with design files is – gah!  But I can live with it.

And of course we can do this for multiple repositories if we need to. By using a submodule we can stick to certain versions/releases of curriculum if needed. Additionally, this makes it easier for contributors to focus on ‘just the content’ (and not get lost in Jekyll code) when they are forking and helping improve curriculum.
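
For example, pinning an instance to a particular curriculum release might look roughly like this (the tag name is made up for illustration):

# Point the submodule at a specific curriculum release tag
cd _includes/community_curriculum
git fetch --tags
git checkout v1.0
cd ../..
# Record the pinned submodule commit in the site repository
git add _includes/community_curriculum
git commit -m "Pin community curriculum to v1.0"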

Finally

I’m thinking about the bigger picture of curriculum-sharing, in large part thanks to conversations with the amazing Laura Hilliger about how we can both share and remix curriculum across more than one instance of Course in a Box. The challenge is with remixed curriculum, which is essentially a new version – and whether it should ‘live’ in a different place than the original repository fork.

My current thinking is that each Course in a Box instance should have its own curriculum repository, included as a git submodule, alongside any other submodules it needs that are not unique to the platform. This repo will contain all curriculum unique to that instance, including remixed versions of content from other repositories. (IMHO) Remixed content should not live in the original fork, as you risk becoming increasingly out of sync with the original.

So that’s where I am right now, welcoming feedback & suggestions on our Mozilla Community Education platform (with gratitude to P2PU for making it possible).

Doug BelshawWeb Literacy Map v1.5

I’m delighted to announce that, as a result of a process that started back in late August 2014, the Mozilla community has defined the skills and competencies that make up v1.5 of the Web Literacy Map.

[GIF: Leonardo DiCaprio raising a glass – cheers!]

Visual design work will be forthcoming with the launch of teach.webmaker.org, but I wanted to share the list of skills and competencies as soon as possible:


EXPLORING

Reading the Web

Navigation

Using software tools to browse the web

  • Accessing the web using the common features of a browser
  • Using hyperlinks to access a range of resources on the web
  • Reading, evaluating, and manipulating URLs
  • Recognizing the common visual cues in web services
  • Exploring browser add-ons and extensions to provide additional functionality

Web Mechanics

Understanding the web ecosystem and Internet stack

  • Using and understanding the differences between URLs, IP addresses and search terms
  • Identifying where data is in the network of devices that makes up the Internet
  • Exporting, moving, and backing up data from web services
  • Explaining the role algorithms play in creating and managing content on the web
  • Creating or modifying an algorithm to serve content from around the web

Search

Locating information, people and resources via the web

  • Developing questions to aid a search
  • Using and revising keywords to make web searches more efficient
  • Evaluating search results to determine if the information is relevant
  • Finding real-time or time-sensitive information using a range of search techniques
  • Discovering information and resources by asking people within social networks

Credibility

Critically evaluating information found on the web

  • Comparing and contrasting information from a number of sources
  • Making judgments based on technical and design characteristics
  • Discriminating between ‘original’ and derivative web content
  • Identifying and investigating the author or publisher of web resources
  • Evaluating how purpose and perspectives shape web resources

Security

Keeping systems, identities, and content safe

  • Recommending how to avoid online scams and 'phishing’
  • Managing and maintaining account security
  • Encrypting data and communications using software and add-ons
  • Changing the default behavior of websites, add-ons and extensions to make web browsing more secure

BUILDING

Writing the web

Composing for the web

Creating and curating content for the web

  • Inserting hyperlinks into a web page
  • Identifying and using HTML tags
  • Embedding multimedia content into a web page
  • Creating web resources in ways appropriate to the medium/genre
  • Setting up and controlling a space to publish on the Web

Remixing

Modifying existing web resources to create something new

  • Identifying remixable content
  • Combining multimedia resources to create something new on the web
  • Shifting context and meaning by creating derivative content
  • Citing and referencing original content

Designing for the web

Enhancing visual aesthetics and user experiences

  • Using CSS properties to change the style and layout of a Web page
  • Demonstrating the difference between inline, embedded and external CSS
  • Improving user experiences through feedback and iteration
  • Creating device-agnostic web resources

Coding / Scripting

Creating interactive experiences on the web

  • Reading and explaining the structure of code
  • Identifying and applying common coding patterns and concepts
  • Adding comments to code for clarification and attribution
  • Applying a script framework
  • Querying a web service using an API

Accessibility

Communicating in a universally-recognisable way

  • Using empathy and awareness to inform the design of web content that is accessible to all users
  • Designing for different cultures which may have different interpretations of design elements
  • Comparing and exploring how different interfaces impact diverse users
  • Improving the accessibility of a web page through the design of its color scheme, structure/hierarchy and markup
  • Comparing and contrasting how different interfaces impact diverse web users

CONNECTING

Participating on the web

Sharing

Providing access to web resources

  • Creating and using a system to distribute web resources to others
  • Contributing and finding content for the benefit of others
  • Creating, curating, and circulating web resources to elicit peer feedback
  • Understanding the needs of audiences in order to make relevant contributions to a community
  • Identifying when it is safe to contribute content in a variety of situations on the web

Collaborating

Creating web resources with others

  • Choosing a Web tool to use for a particular contribution/ collaboration
  • Co-creating Web resources
  • Configuring notifications to keep up-to-date with community spaces and interactions
  • Working towards a shared goal using synchronous and asynchronous tools
  • Developing and communicating a set of shared expectations and outcomes

Community Participation

Getting involved in web communities and understanding their practices

  • Engaging in web communities at varying levels of activity
  • Respecting community norms when expressing opinions in web discussions
  • Making sense of different terminology used within online communities
  • Participating in both synchronous and asynchronous discussions

Privacy

Examining the consequences of sharing data online

  • Debating privacy as a value and right in a networked world
  • Explaining ways in which unsolicited third parties can track users across the web
  • Controlling (meta)data shared with online services
  • Identifying rights retained and removed through user agreements
  • Managing and shaping online identities

Open Practices

Helping to keep the web democratic and universally accessible

  • Distinguishing between open and closed licensing
  • Making web resources available under an open license
  • Contributing to an Open Source project
  • Advocating for an open web

Thanks goes to the dedicated Mozilla contributors who steadfastly worked on this over the last few months. They’re listed here. We salute you!

Any glaring errors? Typos? Let us know! You can file an issue on GitHub.


Questions? Comments? Try and put them in the GitHub repo, but you can also grab me on Twitter (@dajbelshaw) or by email (doug@mozillafoundation.org).

Michael KaplyCCK2 2.0.21 released

I've released a new version of the CCK2. New features include:

  • Setting a lightweight theme
  • Clearing preferences
  • Setting user preference values (versus default or locking)
  • More control over the CA trust string
  • Security devices are loaded at startup and fail gracefully (so multiple platforms can be specified)
  • Redesign of security devices dialog
  • Distribution info on about dialog is no longer bolded
  • Proxy information can be set in the preference page (if you want user values, not default/locked)
  • Better migration of bookmarks between versions
  • Better errors for cert download failures

Bugs fixed include:

  • International characters not working properly
  • CA trust string not being used
  • Unable to set the plugin.disable_full_page_plugin_for_types
  • Bookmarks not deleted when migrating from CCK Wizard

If you find bugs, please report them at cck2.freshdesk.com.

Priority support is given to folks with support subscriptions. If the CCK2 is important to your company, please consider purchasing one.

Mozilla Reps CommunityReps Weekly Call – March 19th 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

  • FOSSASIA 2015 Updates
  • Maker Party Jaipur
  • Update on Council + Peers meetup
  • Education

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Gregory SzorcNew High Scores for hg.mozilla.org

It's been a rough week.

The very short summary of events this week is that both the Firefox and Firefox OS release automation has been performing a denial of service attack against hg.mozilla.org.

On the face of it, this is nothing new. The release automation is by far the top consumer of hg.mozilla.org data, requesting several terabytes per day via several million HTTP requests from thousands of machines in multiple data centers. The very nature of their existence makes them a significant denial of service threat.

Lots of things went wrong this week. While a post mortem will shed light on them, many fall under the umbrella of release automation was making more requests than it should have and was doing so in a way that both increased the chances of an outage occurring and increased the chances of a prolonged outage. This resulted in the hg.mozilla.org servers working harder than they ever have. As a result, we have some new high scores to share.

  • On UTC day March 19, hg.mozilla.org transferred 7.4 TB of data. This is a significant increase from the ~4 TB we expect on a typical weekday. (Even more significant when you consider that most load is generated during peak hours.)

  • During the 1300 UTC hour of March 17, the cluster received 1,363,628 HTTP requests. No HTTP 503 Service Not Available errors were encountered in that window! 300,000 to 400,000 requests per hour is typical.

  • During the 0800 UTC hour of March 19, the cluster transferred 776 GB of repository data. That comes out to at least 1.725 Gbps on average (I didn't calculate TCP and other overhead). Anything greater than 250 GB per hour is not very common. No HTTP 503 errors were served from the origin servers during this hour!

We encountered many periods where hg.mozilla.org was operating at more than twice its normal and expected operating capacity and it was able to handle the load just fine. As a server operator, I'm proud of this. The servers were provisioned beyond what is normally needed of them and it took a truly exceptional event (or two) to bring the service down. This is generally a good way to do hosted services (you rarely want to be barely provisioned because you fall over at the slightest change and you don't want to be grossly over-provisioned because you are wasting money on idle resources).

Unfortunately, the hg.mozilla.org service did fall over. Multiple times, in fact. There is room to improve. As proud as I am that the service operated well beyond its expected limits, I can't help but feel ashamed that it did eventually cave in under such extreme load and that people are probably making under-informed general assumptions like Mercurial can't scale. The simple fact of the matter is that clients cumulatively generated an exceptional amount of traffic to hg.mozilla.org this week. All servers have capacity limits. And this week we encountered the limit for the current configuration of hg.mozilla.org. Cause and effect.

Daniel Stenbergcurl, 17 years old today

Today we celebrate the fact that it is exactly 17 years since the first public release of curl. I have always been the lead developer and maintainer of the project.

[Image: birthday cake]

When I released that first version in the spring of 1998, we had only a handful of users and a handful of contributors. curl was just a little tool and we were still a few years out before libcurl would become a thing of its own.

The tool we had been working on for a while was still called urlget in the beginning of 1998, but as we had just recently added FTP upload capabilities that name no longer fit, and I decided cURL would be more suitable. I picked ‘cURL’ because the word contains URL and already then the tool worked primarily with URLs, and I thought it was fun to partly make it a real English word, “curl”, but also that you could pronounce it “see URL” as the tool would display the contents of a URL.

Much later, someone (I forget who) came up with the “backronym” Curl URL Request Library which of course is totally awesome.

17 years are 6209 days. During this time we’ve done more than 150 public releases containing more than 2600 bug fixes!

We started out GPL licensed, switched to MPL and then landed in MIT. We started out using RCS for version control, switched to CVS and then git. But it has stayed written in good old C the entire time.

The term “Open Source” was coined in 1998 when the Open Source Initiative was started just the month before curl was born, which in turn came just a few days after the announcement from Netscape that they would free their browser code and make an open browser.

We’ve hosted parts of our project on servers run by the various companies I’ve worked for and we’ve been on and off various free services. Things come and go. Virtually nothing stays the same so we better just move with the rest of the world. These days we’re on github a lot. Who knows how long that will last…

We have grown to support a ridiculous amount of protocols and curl can be built to run on virtually every modern operating system and CPU architecture.

The list of helpful souls who have contributed to make curl into what it is now have grown at a steady pace all through the years and it now holds more than 1200 names.

Employments

In 1998, I was employed by a company named Frontec Tekniksystem. I would later leave that company and today there’s nothing left in Sweden using that name as it was sold and most employees later fled away to other places. After Frontec I joined Contactor for many years until I started working for my own company, Haxx (which we started on the side many years before that), during 2009. Today, I am employed by my fourth company during curl’s lifetime: Mozilla. All through this project’s lifetime, I’ve kept my work situation separate and I believe I haven’t allowed it to disturb our project too much. Mozilla is however the first one that actually allows me to spend a part of my time on curl and still get paid for it!

The browser code from that Netscape announcement, made 2 months before curl was born, later became Mozilla and the Firefox browser. Where I work now…

Future

I’m not one of those who spend time gazing toward the horizon dreaming of future grandness and making up plans on how to go there. I work on stuff right now to work tomorrow. I have no idea what we’ll do and work on a year from now. I know a bunch of things I want to work on next, but I’m not sure I’ll ever get to them or whether they will actually ship or if they perhaps will be replaced by other things in that list before I get to them.

The world, the Internet and transfers are all constantly changing and we’re adapting. No long-term dreams other than sticking to the very simple and single plan: we do file-oriented internet transfers using application layer protocols.

Rough estimates say we may have a billion users already. Chances are, if things don’t change too drastically without us being able to keep up, that we will have even more in the future.

1000 million users

It has to feel good, right?

I will of course point out that I did not take curl to this point on my own, but that aside the ego-boost this level of success brings is beyond imagination. Thinking about that my code has ended up in so many places, and is driving so many little pieces of modern network technology is truly mind-boggling. When I specifically sit down or get a reason to think about it at least.

Most of the days however, I tear my hair when fixing bugs, or I try to rephrase my emails to not sound old and bitter (even though I can very well be that) when I once again try to explain things to users who can be extremely unfriendly and whining. I spend late evenings on curl when my wife and kids are asleep. I escape my family and rob them of my company to improve curl even on weekends and vacations. Alone in the dark (mostly) with my text editor and debugger.

There’s no glory and there’s no eternal bright light shining down on me. I have not climbed up onto a level where I have a special status. I’m still the same old me, hacking away on code for the project I like and that I want to be as good as possible. Obviously I love working on curl so much I’ve been doing it for over seventeen years already and I don’t plan on stopping.

Celebrations!

Yeps. I’ll get myself an extra drink tonight and I hope you’ll join me. But only one, we’ll get back to work again afterward. There are bugs to fix, tests to write and features to add. Join in the fun! My backlog is only growing…

Ian BickingA Product Journal: The Evolutionary Prototype

I’m blogging about the development of a new product in Mozilla, look here for my other posts in this series

I came upon a new (for me) term recently: evolutionary prototyping. This is in contrast to the rapid or throwaway prototype.

Another term for the rapid prototype: the “close-ended prototype.” The prototype with a sunset, unlike the evolutionary prototype which is expected to become the final product, even if every individual piece of work will only end up as disposable scaffolding for the final product.

The main goal when using Evolutionary Prototyping is to build a very robust prototype in a structured manner and constantly refine it.

The first version of the product, written primarily late at night, was definitely a throwaway prototype. All imperative jQuery UI and lots of copy-and-paste code. It served its purpose. I was able to extend that code reasonably well – and I played with many ideas during that initial stage – but it was unreasonable to ask anyone else to touch it, and even I hated the code when I had stepped away from it for a couple weeks. So most of the code is being rewritten for the next phase.

To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest.

Thinking about this, it’s a lot like the Minimal Viable Product approach. Of which I am skeptical. And maybe I’m skeptical because I see MVP as reductive, encouraging the aggressive stripping down of a product, and in the process encouraging design based on conventional wisdom instead of critical engagement. When people push me in that direction I get cagey and defensive (not a great response on my part, just acknowledging it). The framing of the evolutionary prototype feels more humble to me. I don’t want to focus on the question “how can we most quickly get this into users’ hands?” but instead “what do we know we should build, so we can collect a fuller list of questions we want to answer?”

Niko MatsakisThe danger of negative thinking

One of the aspects of language design that I find the most interesting is trying to take time into account. That is, when designing a type system in particular, we tend to think of the program as a fixed, immutable artifact. But of course real programs evolve over time, and when designing a language it’s important to consider what impact the type rules will have on the ability of people to change their programs. Naturally as we approach the 1.0 release of Rust this is very much on my mind, since we’ll be making firmer commitments to compatibility than we ever have before.

Anyway, with that introduction, I recently realized that our current trait system contains a forward compatibility hazard concerned with negative reasoning. Negative reasoning is basically the ability to decide if a trait is not implemented for a given type. The most obvious example of negative reasoning are negative trait bounds, which have been proposed in a rather nicely written RFC. However, what’s perhaps less widely recognized is that the trait system as currently implemented already has some amount of negative reasoning, in the form of the coherence system.

This blog post covers why negative reasoning can be problematic, with a focus on the pitfalls in the current coherence system. This post only covers the problem. I’ve been working on prototyping possible solutions and I’ll be covering those in the next few blog posts.

A goal

Let me start out with an implicit premise of this post. I think it’s important that we be able to add impls of existing traits to existing types without breaking downstream code (that is, causing it to stop compiling, or causing it to do radically different things). Let me give you a concrete example. libstd defines the Range<T> type. Right now, this type is not Copy for various good reasons. However, we might like to make it Copy in the future. It feels like that should be legal. However, as I’ll show you below, this could in fact cause existing code not to compile. I think this is a problem.

(In the next few posts when I start covering solutions, we’ll see that it may be that one cannot always add impls of any kind for all traits to all types. If so, I can live with it, but I think we should try to make it possible to add as many kinds of impls as possible.)

Negative reasoning in coherence today, the simple case

“Coherence” refers to a set of rules that Rust uses to enforce the idea that there is at most one impl of any trait for any given set of input types. Let me introduce an example crate hierarchy that I’m going to be coming back to throughout the post:

libstd
  |
  +-> lib1 --+
  |          |
  +-> lib2 --+
             |
             v
            app

This diagram shows four crates: libstd, two libraries (creatively titled lib1 and lib2), and an application app. app uses both of the libraries (and, transitively, libstd). The libraries are otherwise defined independently from one another. We say that libstd is a parent of the other crates, and that lib[12] are cousins.

OK, so, imagine that lib1 defines a type Carton but doesn’t implement any traits for it. This is a kind of smart pointer, like Box.

// In lib1
struct Carton<T> { }

Now imagine that the app crate defines a type AppType that uses the Debug trait.

// In app
struct AppType { }
impl Debug for AppType { }

At some point, app has a Carton<AppType> that it is passing around, and it tries to use the Debug trait on that:

// In app
fn foo(c: Carton<AppType>) {
    println!("foo({:?})", c); // Error
    ...
}

Uh oh, now we encounter a problem because there is no impl of Debug for Carton<AppType>. But app can solve this by adding such an impl:

// In app
impl Debug for Carton<AppType> { ... }

You might expect this to be illegal per the orphan rules, but in fact it is not, and this is no accident. We want people to be able to define impls on references and boxes to their types. That is, since Carton is a smart pointer, we want impls like the one above to work, just like you should be able to do an impl on &AppType or Box<AppType>.

OK, so, what’s the problem? The problem is that now maybe lib1 notices that Carton should define Debug, and it adds a blanket impl for all types:

// In lib1
impl<T:Debug> Debug for Carton<T> { }

This seems like a harmless change, but now if app tries to recompile, it will encounter a coherence violation.

What went wrong? Well, if you think about it, even a simple impl like

impl Debug for Carton<AppType> { }

contains an implicit negative assertion that no ancestor crate defines an impl that could apply to Carton<AppType>. This is fine at any given moment in time, but as the ancestor crates evolve, they may add impls that violate this negative assertion.

Negative reasoning in coherence today, the more complex case

The previous example was relatively simple in that it only involved a single trait (Debug). But the current coherence rules also allow us to concoct examples that employ multiple traits. For example, suppose that app decided to work around the absence of Debug by defining its own debug protocol. This uses Debug when available, but allows app to add new impls if needed.

// In lib1 (note: no `Debug` impl yet)
struct Carton<T> { }

// In app, before `lib1` added an impl of `Debug` for `Carton`
trait AppDebug { }
impl<T:Debug> AppDebug for T { } // Impl A

struct AppType { }
impl Debug for AppType { }
impl AppDebug for Carton<AppType> { } // Impl B

This is all perfectly legal. In particular, implementing AppDebug for Carton<AppType> is legal because there is no impl of Debug for Carton, and hence impls A and B are not in conflict. But now if lib1 should add the impl of Debug for Carton<T> that it added before, we get a conflict again:

// Added to lib1
impl<T:Debug> Debug for Carton<T> { }

In this case though the conflict isn’t that there are two impls of Debug. Instead, adding an impl of Debug caused there to be two impls of AppDebug that are applicable to Carton<AppType>, whereas before there was only one.

Negative reasoning from OIBIT and RFC 586

The conflicts I showed before have one thing in common: the problem is that when we add an impl in the supercrate, they cause there to be too many impls in downstream crates. This is an important observation, because it can potentially be solved by specialization or some other form of conflict resolution – basically a way to decide between those duplicate impls (see below for details).

I don’t believe it is possible today to have the problem where adding an impl in one crate causes there to be too few impls in downstream crates, at least not without enabling some feature-gates. However, you can achieve this easily with OIBIT and RFC 586. This suggests to me that we want to tweak the design of OIBIT – which has been accepted, but is still feature-gated – and we do not want to accept RFC 586.

I’ll start by showing what I mean using RFC 586, because it’s more obvious. Consider this example of a trait Release that is implemented for all types that do not implement Debug:

// In app
trait Release { }
impl<T:!Debug> Release for T { }

Clearly, if lib1 adds an impl of Debug for Carton, we have a problem in app, because whereas before Carton<i32> implemented Release, it now does not.

Unfortunately, we can create this same scenario using OIBIT:

trait Release for .. { }
impl<T:Debug> !Release for T { }

In practice, these sorts of impls are both feature-gated and buggy (e.g. #23072), and there’s a good reason for that. When I looked into fixing the bugs, I realized that this would entail implementing essentially the full version of negative bounds, which made me nervous. It turns out we don’t need conditional negative impls for most of the uses of OIBIT that we have in mind, and I think that we should forbid them before we remove the feature-gate.

Orphan rules for negative reasoning

One thing I tried in researching this post is to apply a sort of orphan condition to negative reasoning. To see what I tried, let me walk you through how the overlap check works today. Consider the following impls:

trait AppDebug { ... }
impl<T:Debug> AppDebug for T { }
impl AppDebug for Carton<AppType> { }

(Assume that there is no impl of Debug for Carton.) The overlap checker would check these impls as follows. First, it would create fresh type variables for T and unify, so that T=Carton<AppType>. Because T:Debug must hold for the first impl to be applicable, and T=Carton<AppType>, that implies that if both impls are to be applicable, then Carton<AppType>: Debug must hold. But by searching the impls in scope, we can see that it does not hold – and thanks to the coherence orphan rules, we know that nobody else can make it hold either. So we conclude that the impls do not overlap.

It’s true that Carton<AppType>: Debug doesn’t hold now – but this reasoning doesn’t take into account time. Because Carton is defined in the lib1 crate, and not the app crate, it’s not under “local control”. It’s plausible that lib1 can add an impl of Debug for Carton<T> for all T or something like that. This is the central hazard I’ve been talking about.

To avoid this hazard, I modified the checker so that it could only rely on negative bounds if either the trait is local or else the type is a struct/enum defined locally. The idea being that the current crate is in full control of the set of impls for either of those two cases. This turns out to work somewhat OK, but it breaks a few patterns we use in the standard library. The most notable is IntoIterator:

// libcore
trait IntoIterator { }
impl<T:Iterator> IntoIterator for T { }

// libcollections
impl<'a,T> IntoIterator for &'a Vec<T> { }

In particular, the final impl there is illegal, because it relies on the fact that &Vec<T>: Iterator, and the type &Vec is not a struct defined in the local crate (it’s a reference to a struct). In particular, the coherence checker here is pointing out that in principle we could add an impl like impl<T:Something> Iterator for &T, which would (maybe) conflict. This pattern is one we definitely want to support, so we’d have to find some way to allow this. (See below for some further thoughts.)

Limiting OIBIT

As an aside, I mentioned that OIBIT as specified today is equivalent to negative bounds. To fix this, we should add the constraint that negative OIBIT impls cannot add additional where-clauses beyond those implied by the types involved. (There isn’t much urgency on this because negative impls are feature-gated.) Therefore, one cannot write an impl like this one, because it would be adding a constraint T:Debug:

trait Release for .. { }
impl<T:Debug> !Release for T { }

However, this would be legal:

struct Foo<T:Debug> { }
trait Release for .. { }
impl<T:Debug> !Release for Foo<T> { }

The reason that this is ok is because the type Foo<T> isn’t even valid if T:Debug doesn’t hold. We could also just skip such “well-formedness” checking in negative impls and then say that there should be no where-clauses at all.

Either way, the important point is that when checking a negative impl, the only thing we have to do is try and unify the types. We could even go farther, and have negative impls use a distinct syntax of some kind.

Still to come.

OK, so this post laid out the problem. I have another post or two in the works exploring possible solutions that I see. I am currently doing a bit of prototyping that should inform the next post. Stay tuned.

Avi HalachmiFirefox e10s Performance on Talos

TL;DR Talos runs performance tests on Firefox e10s on mozilla-central, not yet on try-server. OS X still doesn’t work. e10s results are similar overall, with a notable scroll performance improvement on Windows and Linux, and a notable WebGL regression on Windows.

Electrolysis, or e10s, is a Firefox project whose goal is to spread the work of browsing the web over multiple processes. The main initial goal is to separate the UI from web content and reduce negative effects one could have over the other.

e10s is already enabled by default on Firefox Nightly builds, and tabs which run on a different process than the UI are marked with an underline at the tab’s title.

While currently the e10s team’s main focus is correctness more than performance (one bug list and another), we can start collecting performance data and understand roughly where we stand.

jmaher, wlach and myself worked to make Talos run well in e10s Firefox and provide meaningful results. The Talos harness and tests now run well on Windows and Linux, while OS X should be handled shortly (bug 1124728). Session restore tests are still not working with e10s (bug 1098357).

Talos e10s tests run by default on m-c pushes, though Treeherder still hides the e10s results (they can be unhidden from the top right corner of the Treeherder job page).

To compare e10s Talos results with non-e10s we use compare.py, a script which is available in the Talos repository. We’ve improved it recently to make such comparisons more useful. It’s also possible to use the compare-talos web tool.

Here are some numbers on Windows 7 and Ubuntu 32 comparing e10s to non-e10s Talos results of a recent build using compare.py (the output below has been made more readable but the numbers have not been modified).

At the beginning of each line:

  • A plus + means that e10s is better.
  • A minus - means that e10s is worse.

The change % value simply compares the numbers on both sides. For most tests the raw numbers are lower-is-better and therefore a negative percentage means that e10s is better. Tests where higher-is-better are marked with an asterisk * near the percentage value (and for these values a positive percentage means that e10s is better).

Descriptions of all Talos tests and what their numbers mean.

$ python compare.py --compare-e10s --rev 42afc7ef5ccb --pgo --verbose --branch Firefox --platform Win7 --master-revision 42afc7ef5ccb

Windows 7          [ non-e10s ]             [  e10s   ]
                   [ results  ]   change %  [ results ]

-   tresize               15.1   [  +1.7%]      15.4
-   kraken              1529.3   [  +3.9%]    1589.3
+   v8_7               17798.4   [  +1.6%]*  18080.1
+   dromaeo_css         5815.2   [  +3.7%]*   6033.2
-   dromaeo_dom         1310.6   [  -0.5%]*   1304.5
+   a11yr                178.7   [  -0.2%]     178.5
++  ts_paint             797.7   [ -47.8%]     416.3
+   tpaint               155.3   [  -4.2%]     148.8
++  tsvgr_opacity        228.2   [ -56.5%]      99.2
-   tp5o                 225.4   [  +5.3%]     237.3
+   tart                   8.6   [  -1.0%]       8.5
+   tcanvasmark         5696.9   [  +0.6%]*   5732.0
++  tsvgx                199.1   [ -24.7%]     149.8
+   tscrollx               3.0   [  -0.2%]       3.0
--- glterrain              5.1   [+268.9%]      18.9
+   cart                  53.5   [  -1.2%]      52.8
++  tp5o_scroll            3.4   [ -13.0%]       3.0


$ python compare.py --compare-e10s --rev 42afc7ef5ccb --pgo --verbose --branch Firefox --platform Linux --master-revision 42afc7ef5ccb

Ubuntu 32          [ non-e10s ]             [  e10s   ]
                   [ results  ]    change   [ results ]

++  tresize               17.2   [ -25.1%]      12.9
-   kraken              1571.8   [  +2.2%]    1606.6
+   v8_7               19309.3   [  +0.5%]*  19399.8
+   dromaeo_css         5646.3   [  +3.9%]*   5866.8
+   dromaeo_dom         1129.1   [  +3.9%]*   1173.0
-   a11yr                241.5   [  +5.0%]     253.5
++  ts_paint             876.3   [ -50.6%]     432.6
-   tpaint               197.4   [  +5.2%]     207.6
++  tsvgr_opacity        218.3   [ -60.6%]      86.0
--  tp5o                 269.2   [ +21.8%]     328.0
--  tart                   6.2   [ +13.9%]       7.1
--  tcanvasmark         8153.4   [ -15.6%]*   6877.7
--  tsvgx                580.8   [ +10.2%]     639.7
++  tscrollx               9.1   [ -16.5%]       7.6
+   glterrain             22.6   [  -1.4%]      22.3
-   cart                  42.0   [  +6.5%]      44.7
++  tp5o_scroll            8.8   [ -12.4%]       7.7

For the most part, the Talos scores are comparable with a few improvements and a few regressions - most of them relatively small. Windows e10s results fare a bit better than Linux results.

Overall, that’s a great starting point for e10s!

A noticeable improvement on both platforms is tp5o-scroll. This test scrolls the top-50 Alexa pages and measures how fast it can iterate with vsync disabled (ASAP mode).

A noticeable regression on Windows is WebGL (glterrain) - Firefox with e10s performs roughly 3x slower than non-e10s Firefox - bug 1028859 (bug 1144906 should also help for Windows).

A supposedly notable improvement is the tsvg-opacity test; however, this test is sometimes too sensitive to underlying platform changes (regardless of e10s), and we should probably keep an eye on it (yet again, e.g. bug 1027481).

We don’t have bugs filed yet for most Talos e10s regressions since we don’t have systems in place to alert us of them, and it’s still not trivial for developers to obtain e10s test results (e10s doesn’t run on try-server yet, and on m-c it also doesn’t run on every batch of pushes). See bug 1144120.

Snappiness is something that both the performance team and the e10s team care deeply about, and so we’ll be working closely together when it comes time to focus on making multi-process Firefox zippy.

Thanks to vladan and mconley for their valuable comments.

Mike ConleyThe Joy of Coding (Episode 6): Plugins!

In this episode, I took the feedback of my audience, and did a bit of code review, but also a little bit of work on a bug. Specifically, I was figuring out the relationship between NPAPI plugins and Gecko Media Plugins, and how to crash the latter type (which is necessary for me in order to work on the crash report submission UI).

A minor goof – for the first few minutes, I forgot to switch my camera to my desktop, so you get prolonged exposure to my mug as I figure out how I’m going to review a patch. I eventually figured it out though. Phew!

Episode Agenda

References:
Bug 1134222 – [e10s] “Save Link As…”/”Bookmark This Link” in remote browser causes unsafe CPOW usage warning

Bug 1110887 – With e10s, plugin crash submit UI is broken

Notes

Mozilla Science LabBullying & Imposter Phenomenon: the Fraught Process of Learning to Code in the Lab

I’ve been speaking and writing for some time now on the importance of communication and collaboration in the scientific community. We have to stop reinventing wheels, and we can’t expect to learn the skills in coding and data management we need from attending one workshop alone; we need to establish customs and venues for the free exchange of ideas, and for practicing the new skills we are trying to become fluent in. Normally, these comments are founded on values of efficiency and sound learning strategies. But lately, I find myself reaching the same conclusions from a different starting point: the vital need for a reality check on how we treat the towering challenge of learning to code.

Everywhere I go, I meet students who tell me the same story: they are terrified of asking for help, or of admitting to the other members of their research group that they don’t know something – and the more computationally intensive the field, the more intense this aversion becomes around coding. Fears include: “What if my supervisor is disappointed that I asked such a ‘trivial’ question?”; “What if the other students lose respect for me if I admit I don’t know something?”; and, perhaps most disheartening of all, “What if this means I am not cut out for my field?”

If this is what our students are thinking – what have we done, and where has this come from?

There can be, at times, a toxic machismo that creeps into any technical field: a vicious cycle begins when, fearing that admitting ‘ignorance’ will lead to disrepute (perhaps even as lost grants and lost promotions), we dismiss the challenges faced by others, and instill the same fear in our colleagues of admitting they don’t know everything. The arrogant colleague that treats as trivial every problem they don’t have to solve themselves has risen to the level of departmental trope, and it is beginning to cost us in new blood. I remember working on a piece of code once for weeks, for which I received neither feedback nor advice, but only the admonition ‘you should have been able to write that in five minutes‘. Should I have? By what divine inspiration, genetic memory, or deal with the devil would I, savant like, channel a complex and novel algorithm in three hundred seconds, as a new graduate student with absolutely no training in programming?

That rebuke was absurd – and for those less pathologically insensitive than I am, such rebukes are devastating as they accrue, year after year.

Even in the absence of such bullying, we have made things doubly bleak for the new coder. The computing demands in almost every field of research are skyrocketing, while the extent to which we train our students in computing continues to stagnate. Think of the signal this sends: computing skills are, apparently, beneath contempt, not even worth speaking of, and so trivial as to be not worth training for. And yet they are so central to a growing number of fields as to be indispensable. Is it any wonder, then, that so many students and early-career researchers feel alienated and isolated in their fields, and doubt themselves when they are hobbled in their work by ‘failing’ to miraculously intuit the skills their establishment has signaled should be obvious?

A couple of years ago, Angelina Fabbro, my friend and mentor as well as noted web developer, wrote a brilliant article on Imposter Phenomenon (aka Imposter Syndrome), which they define as ‘the experience of feeling like a fraud (or impostor) while participating in communities of highly skilled participants even when you are of a level of competence to match those around you.‘ I strongly recommend reading this article, because even though it was written with the tech world in mind, it is one hundred percent applicable to the experience of legions of academics making careers in the current age of adolescence in research coding. The behavior and effects I describe above have contributed to an epidemic of imposter phenomenon in the sciences, particularly surrounding coding and digital acumen and particularly in students. That fear is keeping us in our little silos, making us terrified to come out, share our work, and move forward together; I consider that fear to be one of the biggest obstacles to open science. Also from Fabbro’s article:

‘In the end I wasn’t shocked that the successful people I admired had experienced impostor phenomenon and talked to me about it — I was shocked that I somehow thought the people I see as heroes were somehow exempt from having it… We’re all just doing the best we know how to when it comes to programming, it’s just that some people have more practice coming across as confident than others do. Never mistake confidence for competence, though.’ – Angelina Fabbro

So What Are We Going To Do About It?

The cultural and curricular change around coding for research that ultimately needs to happen to cut to the root of these problems will be, like all institutional change, slow. But what we can do, right now, is start making spaces at our local universities and labs where students and researchers can get together, struggle with problems, ask each other questions, and work together on code in casual, no-bullying, no-stakes safe spaces that welcome beginners and where no question is too basic. These are the Study Groups, Users’ Groups and Hacky Hours I’ve been talking about, and addressing the problems I’ve described is the other reason, beyond simple technical skill-building, that they are so important. In my travels, I’ve stumbled across a few; here’s a map:

Study Groups & Hacky Hours

Please, if you’re running a meetup group or something similar for researchers writing code, let me know (bill@mozillafoundation.org) – I’d love to add you to the map and invite you to tell your story here on this blog (see Kathi Unglert’s guest post for a great example). Also, if you’re curious about the idea of small, locally driven study groups, my colleague Noam Ross has assembled a panel for an Ask Me Anything event on the Mozilla Science Forum, kicking off at 6 PM EDT on Tuesday, March 24. Panelists from several different meetup groups will be answering your questions on this thread from 6-8 PM EDT; more details are on the blog. If you’d like some material to work through at a study group meetup, don’t forget to check out our growing collection of lessons and links to curriculum ideas.

There are tons of ways to do a good meetup – but to start, see if you can get a couple of people you know and trust to hang out once or twice a month, work on some code, and acknowledge that you’re all still learning together. If you can create a space like that, a whole lot of the anxiety and isolation around learning to code for research will fall away, and more people will soon want to join; I’d love to hear your stories, and I hope you’ll join us for the AMA on the 24th.

Monica ChewHow do I turn on Tracking Protection? Let me count the ways.


I get this question a lot from various people, so it deserves its own post. Here's how to turn on Tracking Protection in Firefox to avoid connecting to known tracking domains from Disconnect's blocklist:
  1. Visit about:config and turn on privacy.trackingprotection.enabled. Because this works in Firefox 35 or later, this is my favorite method. In Firefox 37 and later, it also works on Fennec. (If you'd rather set prefs in a file, see the user.js sketch after this list.)
  2. On Fennec Nightly, visit Settings > Privacy and select the checkbox "Tracking Protection".
  3. Install Lightbeam and toggle the "Tracking Protection" button in the top-right corner. Check out the difference in visiting only 2 sites with Tracking Protection on and off!
  4. On Firefox Nightly, visit about:config and turn on browser.polaris.enabled. This will enable privacy.trackingprotection.enabled and also show the checkbox for it in about:preferences#privacy, similar to the Fennec screenshot above. Because this only works in Nightly and also requires visiting about:config, it's my least favorite option.
  5. Do any of the above and sign into Firefox Sync. Tracking Protection will be enabled on all of your desktop profiles!
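
If you'd rather keep prefs in a file than flip them by hand, the prefs from options 1 and 4 can also go in a user.js in your Firefox profile directory. Here's a minimal sketch (remember that browser.polaris.enabled only does anything on Nightly):

  // user.js: place this file in your Firefox profile directory; it is read at startup.
  // Option 1: block connections to domains on Disconnect's blocklist.
  user_pref("privacy.trackingprotection.enabled", true);
  // Option 4 (Nightly only): also show the Tracking Protection checkbox
  // in about:preferences#privacy.
  user_pref("browser.polaris.enabled", true);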