## March 11, 2014

### Armen Zambrano Gasparnian — Debian packaging and deployment in Mozilla's Release Engineering setup

I've been working on creating my second Debian package for Mozilla's Release Engineering infrastructure and, like the first one, it's been a pain.

To be honest, it's been hard to figure out the correct flow and to understand what I was doing.
In order to help other people in the future, I decided to document the process and workflow.
This is not to replace the documentation but to help understand it.

If you're using a Mac or a Windows machine, notice that we have a VM available on EC2 that has the tools you need: ubuntu64packager1.srv.releng.use1.mozilla.com. The documentation can be found in "How to build DEBs". You can use this blog post to help you get up to speed.

During our coming work week we will look at a completely different approach to make changes like this easier for developers to make without Release Engineering intervention. It would not necessarily be a self-serve system for Debian deployments.

### Goal

We want to upgrade a library or a binary on our infrastructure.
For Linux, we use Puppet to deploy packages, which we serve through a Debian repository.
Before we deploy the package through Puppet, we have to add the package to our internal Debian repository. This blog post will guide you to:

1. Create the .deb files
2. Add them to our internal Debian repository
3. Test the deployment of the package with Puppet

### Debian packaging

For a newbie, it can be a very complicated system that has many many parts.

In short, I've learned that three different files are involved in recreating the .deb files. The file extensions are: .dsc, .orig.tar.gz and .diff.gz. If you find the source package page for your desired package, you will notice that these 3 files are available to download. We can use the .dsc file to generate all the .deb files.
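To make the relationship between the three files concrete, here is a toy illustration: the .dsc is a small plain-text control file whose Files section names the other two archives. The contents below are invented and abbreviated for demonstration, not a real mesa .dsc:

```shell
# A .dsc is plain text; its Files section lists the companion archives.
# This is a fabricated, abbreviated example for illustration only.
cat > mesa_8.0.2-0ubuntu3.dsc <<'EOF'
Format: 1.0
Source: mesa
Version: 8.0.2-0ubuntu3
Files:
 d41d8cd98f00b204e9800998ecf8427e 100 mesa_8.0.2.orig.tar.gz
 d41d8cd98f00b204e9800998ecf8427e 50 mesa_8.0.2-0ubuntu3.diff.gz
EOF
grep -c '\.gz$' mesa_8.0.2-0ubuntu3.dsc   # → 2: the orig tarball and the Debian diff
```

Tools like dget and dpkg-source read this Files section to locate and verify the tarball and diff.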

For full info you can read the Debian Packaging documentation and/or look at the building tutorial to apply changes to an existing package.

### Ubuntu version naming

If I understand correctly (IIUC), "precise" is an identifier for a Ubuntu release. In our case it refers to Ubuntu 12.04 LTS.

### Versions of a package

IIUC, a package can have 3 different versions or channels:
• release. The version that came out with a specific release
  • Ubuntu 12.04 came out with mesa 8.0.2-0ubuntu3
• security. The latest security release
  • e.g. mesa 8.0.4-0ubuntu0.6
  • e.g. mesa 8.0.4-0ubuntu0.7
• updates. The latest non-security updates (this is the channel behind "precise-updates")
If you load the "mesa" source package page, you will find a section called "Versions published" and you will see all three versions listed there.

In our specific releng setup, we always use "precise" as the distribution and not "precise-updates".
I don't know why.
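Since these long Debian version strings are easy to misread, GNU coreutils' `sort -V` is a quick way to sanity-check the ordering of the versions mentioned above (on an actual Debian/Ubuntu host, `dpkg --compare-versions` would be the authoritative check):

```shell
# Sort the three mesa versions from oldest to newest; the last line is the newest.
printf '%s\n' 8.0.2-0ubuntu3 8.0.4-0ubuntu0.7 8.0.4-0ubuntu0.6 | sort -V
# Just the newest:
printf '%s\n' 8.0.2-0ubuntu3 8.0.4-0ubuntu0.7 8.0.4-0ubuntu0.6 | sort -V | tail -n 1
# → 8.0.4-0ubuntu0.7
```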

### Repackage the current version or the latest one?

If you're patching a current package, do not try to jump to the latest available version unless necessary. Choose the version closest to our current package to reduce the number of new dependencies.

In my case I was trying to go for mesa 8.0.4-0ubuntu0.7 instead of mesa 8.0.2-0ubuntu3.
Due to that, I had all sorts of difficulties and it had lots of new dependencies.
Even then, I realized later that I had to go for mesa 8.0.4-0ubuntu0.6 as a minimum.

### Puppetagain-build-deb OR pbuilder?

From Mozilla's Release Engineering's perspective, we're only considering two ways of creating our .deb files: 1) puppetagain-build-deb and 2) pbuilder.

FYI puppetagain-build-deb was written to make it very simple to create the required .deb files.
Unfortunately, in my case, puppetagain-build-deb could only handle the dependencies of 8.0.2 and not the ones of 8.0.4.

I describe how to use pbuilder in the section "Create the debian/ directory".
Below is the "puppetagain-build-deb" approach, which is also documented here.

#### Puppetagain-build-deb

At this point we have the "package_name-debian" directory under modules/packages/manifests in Puppet. Besides that, we need to download the ".orig.tar.gz" file.

To create the .deb files we need 1) the debian directory + 2) the original tar ball.

In most cases, we should be able to use ubuntu64packager1 and puppetagain-build-deb to build the deb files. If not, use the pbuilder approach described in the following sections.

NOTE: The .orig.tar.gz file does not need to be committed.

```
cd puppet
hg up -r d6aac1ea887f  # It has the 8.0.2 version checked in
cd modules/packages/manifests
wget https://launchpad.net/ubuntu/+archive/primary/+files/mesa_8.0.2.orig.tar.gz
# The .deb files will appear under /tmp/mesa-precise-amd64
puppetagain-build-deb precise amd64 mesa-debian
# The .deb files will appear under /tmp/mesa-precise-i386
puppetagain-build-deb precise i386 mesa-debian
```

### Create the debian/ directory

In Puppet we have "debian" directories checked in (e.g. mesa-debian/) for any debian package we deploy to our systems through it. The debian directory is produced with the standard Debian packaging instructions.

If you have access to a Linux machine you can follow the steps that rail gave me to generate the deb files. You can also log in to ubuntu64packager1 (you have to start it up first).

To make it work locally, I had to install pbuilder with "sudo apt-get install pbuilder".
I also needed to create my own pbuilder images.

In short, to recreate .deb files without modifying them you can follow these steps:
1. use dget to download all three required files (.dsc, .orig.tar.gz and .diff.gz)
2. use pbuilder --build to generate the .deb files
Since we want to patch the libraries rather than use them as-is, we also have to run these steps in between step 1 & step 2:
1. dpkg-source -x
• it extracts the source files
2. create your patch and place it under debian/patches
3. append a line to debian/patches/series
• the line indicates the filename of your patch under debian/patches
4. set DEBFULLNAME
• used to bump the version when repackaging the source
5. dpkg-source -b
• rebuild the source package
You can read rail's explanation for full details.
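Put together, the cycle above looks roughly like this. This is a sketch only: the .dsc URL, the patch filename and the bumped version number are illustrative, and `dch` (from the devscripts package) is one common way to do the changelog/version bump:

```shell
# fetch the .dsc plus the .orig.tar.gz and .diff.gz it references (URL illustrative)
dget https://launchpad.net/ubuntu/+archive/primary/+files/mesa_8.0.4-0ubuntu0.6.dsc
# step 1: extract the source tree
dpkg-source -x mesa_8.0.4-0ubuntu0.6.dsc
cd mesa-8.0.4
# step 2: add your patch (hypothetical filename)
cp ~/my-fix.patch debian/patches/
# step 3: register it with the patch series
echo "my-fix.patch" >> debian/patches/series
# step 4: identify yourself, then bump the changelog version
export DEBFULLNAME="Your Name"
dch -i "Apply my-fix.patch"
# step 5: rebuild the source package, then generate the .deb files
cd ..
dpkg-source -b mesa-8.0.4
sudo pbuilder --build mesa_8.0.4-0ubuntu0.6.1.dsc
```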

### Keep track of the debian/ directory in Puppet

The previous section should have generated your desired "debian" directory.
We now need to check it in to our puppet repository to keep track of it.
```
cp -r mesa-8.0.4/debian ~/puppet/modules/packages/manifests/mesa-debian
cd ~/puppet
hg addremove
hg diff
```

### Having Debian packaging issues?

rail and dustin have experience in this area, however, if we have further Debian packaging issues we can reach sylvestre and glandium.

### Determine involved libraries

To create our Puppet patch, we have to determine which packages are involved.
For instance, the mesa bug required updating five different libraries.
rail explains on comment 26 how to discover which libraries are involved.
You can list the package names you compiled with something like this:
```
ls *deb | awk -F_ '{print $1}' | xargs
# copy the list of names and run the following on the target machine:
dpkg -l 2>/dev/null | grep ^ii | awk '{print $2}'
```

### Create a no-op puppet change (pinning the version)

If the package already exists on our infra but is not managed by Puppet (e.g. the library came by default with the OS), then it is better to first write a puppet change that pins the current versions.

To write the puppet change you will have to answer these questions:
• Do we want this change for the in-house and ec2 machines? Or a subset?
• Do we want the change for both 64-bit and 32-bit machines?
• What are the versions currently running on the machines that would be affected?
• Check on each pool you're planning to deploy it since we could have inconsistencies between them
Answering these questions will determine which files to modify in puppet.
Remember that you will have to test that your puppet change runs without issues.
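For the last question — what is currently running — a loop like this on a host from each affected pool prints the installed versions (the package names below are illustrative; substitute the ones from your change):

```shell
# Print the installed version of each package of interest, one line per package.
for pkg in libgl1-mesa-dri libgl1-mesa-glx; do
    dpkg-query -W -f='${Package} ${Version}\n' "$pkg" 2>/dev/null || echo "$pkg not installed"
done
```

Running it on one machine from each pool quickly surfaces the inconsistencies mentioned above.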

### Integrating your .deb files into the releng Debian repository and sync to the puppet masters

The documentation is here. And here's what I did for it.

1 - Sync locally the Debian packages repository
We need to sync locally from the "distinguished master" the "releng", "conf" and "db" directories:
```
sudo su
rsync -av releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/releng/ /data/repos/apt/releng/
rsync -av releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/conf/ /data/repos/apt/conf/
rsync -av releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/db/ /data/repos/apt/db/
```

2 - Import your .deb files into the Debian repo

```
cd /data/repos/apt
cp ~armenzg/tmp/mesa_8.0.4.orig.tar.gz releng/pool/main/m/mesa
reprepro -V --basedir . include precise ~armenzg/tmp/out64/*.changes
reprepro -V --basedir . includedeb precise ~armenzg/tmp/out32/*.deb
```

If the package is new you will also have to place the .orig.tar.gz file under /data/repos/apt/releng. reprepro will let you know, as it will fail until you do.

3 - Rsync the repo and db back to the distinguished master
Push your file back to the official repository:
```
rsync -av /data/repos/apt/releng/ releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/releng/
rsync -av /data/repos/apt/db/ releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/db/
```

Your files should show up in here:
http://puppetagain.pub.build.mozilla.org/data/repos/apt/releng/pool/main

NOTE: Pushing the .deb files to the repo does not update the machines.

4 - Fix the permissions at the distinguished master
```
ssh root@releng-puppet2.srv.releng.scl3.mozilla.com
puppetmaster-fixperms
```

### Test that you can update

Before you can sync up a host with puppet you need to let the puppet servers sync up with the distinguished master.

For instance, my puppet runs were failing because the packages were missing at:
http://puppetagain-apt.pvt.build.mozilla.org/repos/apt/releng/pool/main/m/mesa

To test my changes, I created two EC2 instances. For other pools you will have to pull a machine from production.

1 - Prepare your user environment
```
ssh armenzg@releng-puppet2.srv.releng.scl3.mozilla.com
cd /etc/puppet/environments/armenzg/env
hg pull -u && hg st
```

2 - Run a no-op test sync from your loaned machines
```
puppet agent --test --environment=armenzg --server=releng-puppet2.srv.releng.scl3.mozilla.com
```

3 - Under your user environment on the puppet master, bump the versions and the repoflag

4 - Run puppet syncs again on the test instances and watch for the changes on the Puppet output

```
puppet agent --test --environment=armenzg --server=releng-puppet2.srv.releng.scl3.mozilla.com
```

5 - Verify that the package versions are the right ones

6 - Test a rollback scenario

You will have to remove the bumping of the versions from step #3 and bump the repoflag again.
Run steps 4 and 5 to see that we downgrade properly.

7 - Clean up ubuntu64packager1 and shut it off

8 - Deploy your change like any other Puppet change

### Read all the steps at once

https://bugzilla.mozilla.org/show_bug.cgi?id=975034#c37

### Andrea Marchesini — Audio volume and mute per window object

I have finally found some time to finish a set of patches for a nice feature that will allow addon/firefox developers to control audio volumes per window object.

Through this new feature any window object has two new attributes: audioMuted and audioVolume (accessible from chrome code only, using nsIDOMWindowUtils). The aim is to change the volume of any HTML5 media element and any WebAudio destination node (soon WebRTC and the FMRadio API too). The control of the volumes works “on cascade” - if a window has an iframe, the iframe’s elements will be affected by the parent window’s audio attributes.

The code has just landed on m-c and it will be available on Nightly in a few hours.

Also, in order to test this feature I wrote an addon. As you can see, the UI is not the best… I know, but it was just a proof of concept; I’m sure somebody else will do a better job! Download the addon.

This feature is currently disabled by default, but it’s easy to enable it by changing or creating a preference in about:config. Some instructions to do it: open ‘about:config’ in a new tab and add a new boolean preference called ‘media.useAudioChannelService’ and set it to true. This property will enable the AudioChannelService for any HTMLMediaElement and any WebAudio destination node.

AudioChannelService is the audio policy controller of Firefox OS. You will know that the AudioChannelService is enabled when, while switching tabs, media elements in invisible tabs are muted. From now on, you can use the addon.

The addon UI can be open from Tools -> Web Developers -> Audio Test.

Here a screenshot:

From a code point of view, you can play with this audio feature from the nsIDOMWindowUtils interface. For instance:

```js
var currentBrowser = tabbrowser.getBrowserAtIndex(0 /* an index */);
var utils = currentBrowser.contentWindow
                          .QueryInterface(Ci.nsIInterfaceRequestor)
                          .getInterface(Ci.nsIDOMWindowUtils);
dump("The default audioVolume should be 1.0: " + utils.audioVolume + "\n");
utils.audioVolume = 0.8;
dump("By default the window is not muted: " + utils.audioMuted + "\n");
utils.audioMuted = true;
```


There is also a notification that is dispatched when a window starts and stops playing audio: media-playback. The value of this notification can be active or inactive. If you are interested about how to use them, there are a couple of tests on bug 923247.

What can we do with this new feature? Here some ideas:

• we could change or animate the tab icon to let the users know which tab is playing.
• addons to mix audio between tabs
• advanced settings for disable audio by default - either to enable audio just for the visible tab or to specific origins, etc.
• other options, such as “disable audio for all the tabs but this one” in some context menu.

Please, use this feature and let me know if you have problems, new ideas or needs. Good hack!

### Lawrence Mandel — Stepping Down as Chair of the Engineering Meeting

As I previously shared, I have accepted a new role at Mozilla. As my responsibilities have changed, I am stepping down as the chair of the Engineering Meeting.

Looking back over the last year or so of running this meeting, I am pleased by the positive reaction to the meeting reboot in June 2013, where we refocused on the needs of engineering, and by the successful follow on changes, such as including additional engineering teams and broadcasting and archiving the meeting on Air Mozilla.

I would like to thank everyone who took the time to provide feedback about the meeting. The changes to the meeting were a direct result of our conversations. I would also like to thank Richard Milewski and the Air Mozilla team for working out how to broadcast the meeting to our global audience each week.

I chaired my last meeting on Mar 5, 2014. You can watch my swan song on Air Mozilla.

Chris Peterson takes over as the chair of the Engineering Meeting effective this week.

Tagged: engineering, meeting, mozilla

### Sean McArthur — Persona is dead, long live Persona

The transition period was really tough for me. It felt like we were killing Persona. But more like tying a rope around it and dragging it behind us as we road tripped to Firefox OS Land. I first argued against this. Then, eventually I said let’s at least be humane, and take off the rope, and put a slug in its head. Like an Angel of Death. That didn’t happen either. The end result is one where Persona fights on.

Persona is free open source software, and has built up a community who agree that decentralized authentication is needed on the Internet. I still think Persona is the best answer in that field, and the closest to becoming the answer. And it’s not going away. We’re asking that the Internet help us make the Internet better.

### Firefox Accounts

In the meantime I’ll be working on our Firefox Accounts system, which understandably could not rely entirely on Persona1. We need to keep Firefox competitive, since it’s what pays for us to do all the other awesomizing we do. Plus, as the Internet becomes more mobile and more multi-device, we need to make sure there is an alternative that puts users first. A goal of Firefox Accounts is to be pluggable, and to integrate with other services on the Web. Why should your OS demand you use their siloed services? If you want to use Box instead of iCloud, we want you to use it.

How does this affect Persona? We’re actually using browserid assertions within our account system, since it’s a solved problem that works well. We’ll need to work on a way to get all sorts of services working with your FxAccount, and it might include proliferating browserid assertions everywhere2. As we learn, and grow the service so that millions of Firefox users have accounts, we can explore easing them into easily and automatically being Persona users. This solves part of the chicken-egg problem of Persona, by having millions of users ready to go.

I’d definitely rather this have ended up differently, but I can also think of far worse endings. The upside is, Persona still exists, and could take off more so with the help of Firefox. Persona is dead, long live Persona!

1. Sync needs a “secret” to encrypt your data before it’s sent to our servers. The easiest solution for users is to provide us a password, and we’ll stretch that and make a secret out of it (so, we don’t actually know your password). Persona doesn’t give us passwords, so we can’t use it.

2. Where “browserid” assertions are accepted, Persona support can also be found.

### Planet Mozilla Interns — Michael Sullivan: Inline threading for TraceMonkey slides

Here are the slides from my end of summer brown-bag presentation: http://www.msully.net/~sully/files/inline_slides.pdf.

### Digital Learning with Young People in the United Kingdom

[Adapted from a panel talk I gave at DML 2014 on March 7, 2014 in Boston, MA.]

There’s a lot of exciting digital making happening in the United Kingdom, so I want to share the story of Make Things Do Stuff, a network and website of maker-focused organizations -- including Mozilla -- that promote digital learning among young people in the UK.

First, let’s set the scene.
In January 2011, Nesta -- an Innovation Foundation in the UK -- published the “Next Gen.” report, co-authored by Sir Ian Livingstone and Alex Hope. Livingstone was part of the team that supported and distributed games such as Dungeons & Dragons, and most famously Lara Croft: Tomb Raider. The authors found at that time that the UK video games sector brought in over £2 billion in sales, and was larger than both the film and music industries. Between 2006 and 2008 the visual effects sector -- encompassing both film and video games -- grew at 16.8% with most of its talent being local. Just after 2008, however, the industry quickly began losing its local talent to overseas competition, and was forced to source overseas talent of its own to keep sales high.

The over-arching conclusion of the report was that the education system failed to fill the skills gap in the industry. Next Gen. looked at how this problem could be tackled, and gave two major recommendations:
• Put Computer Science on the national curriculum.
• Have a Computer Science GCSE (General Certificate of Secondary Education) in all schools.
=> Quick Explanation: Unlike in many countries where high school graduates receive one certificate for satisfactory completion of course work -- a diploma or GED -- in the UK students between 14 and 16 years of age take GCSE exams in each subject -- some compulsory, and some elective.

Stemming from Next Gen. came 2 years of consultation between education specialists, technology experts, and government policy advisers to build a computer science curriculum. The view taken is that coding/programming is an essential skill for joining the workforce. Thus, starting in Key Stage 1, students aged 5-7 will be introduced to algorithms and logical reasoning. With each successive Key Stage, students will build up their knowledge and skills base, and by the end of Key Stage 4 (ages 14-16) they will be able to code in at least 2 languages, and have the creative and technical abilities for more sophisticated study in CS or a professional career. The curriculum goes into full effect this September.

Learning through making
At the same time, yearly spending on digital education in schools reached into the hundreds of millions, and yet real transformation in learning and teaching remained elusive. Could it be that interactive whiteboards and one-to-one tablet schemes aren’t the final solution?  So Nesta teamed up with London Knowledge Lab and Learning Sciences Research Institute to see how teachers and learners could be more engaged in the design and use of learning technologies. After researching 8 types of learning with technology, they largely concluded that learning through making is one of the more effective strategies. (Decoding Learning: The Proof, Promise, and Potential of Digital Education.)

With solid research in hand, Nesta, Nominet Trust (a funder for socially-minded tech solutions), and Mozilla (a socially-minded engineering organization) banded together to create Make Things Do Stuff. This relationship works not only because we have robust research, funding and tools, but also because we all recognize the importance of bringing together other organizations in the making space, and know that the collaborative effort is greater than the sum of its parts.

With over 40 organizations in our network, we have great depth and representation across a lot of disciplines. Our partners include everyone from small after-school coding clubs and DIY digital making haberdasheries to large tech event planners and government supporters like the Cabinet Office.

But the best part of Make Things Do Stuff is the Youth Editorial team, a group of 25 super talented young makers with interests ranging from programming apps that tackle social issues to musicians with great YouTube followings.  Some speak at youth conferences as evangelists, putting a relatable face on the movement, while others run hackathons in local communities. This stellar team creates content for the website – by young people for young people – and invites others like them to make things and share their stories.

Challenges
Of course it isn’t all smooth sailing. With so many stakeholders pulling in the same direction, it’s tough to make sure everyone feels visible and that their values are prioritized. As the educational Events Manager helping wrangle everyone, my three main pain-points are:

Audience: We work with organizations, not schools, so a lot of young people we see are already highly motivated to learn through making. While it’s great that we’re reaching them through their passions and building on them, I wonder about the young people we’re not reaching. To mitigate this, we try to attend a variety of events – everything from the nationwide Big Bang Fair with 65,000 young people getting their digital hands dirty over 4 days, to small-scale workshops where 25 school children made robots out of plastic cups, remixed our Keep Calm And…Thimble make, and created circuits out of play-dough at an event hosted by the new Children’s Museum at MozLDN. (Some fun remixes: Live long and Prosper, Freak Out and Throw Stuff, Eat Sleep Rave Repeat.)

“Brand Soup”: Although all of the organizations are under the banner of Make Things Do Stuff, we also have responsibilities to our individual organizations to increase visibility and brand recognition. When we’re at events sometimes all you see is a bunch of logos on a sign, and I wonder, what are we really promoting?  To make sure we don’t get lost in the politics of brand soup, we bring it all back to our shared mission and message: we’re here to help everyone move beyond digital consumption to digital creation. We focus on the young people and remember that we’re here for them, not the other way around. And suddenly, it’s clear skies ahead.

Gathering Data: Again, because we’re not partnered with schools it can be difficult to measure the effect our efforts have on the overall learning environment. Moreover, it’s near impossible to come up with a universal definition of effect; are we measuring national test outcomes? Are we looking at job-readiness skills? This one continues to be a challenge, but as the maker landscape changes, I look forward to seeing solutions surface.

Despite our difficulties with data, I’m happy to share that we reached 100,000+ young people over 3 months last summer – our first summer – thanks to our collaborative efforts. Make Things Do Stuff will also continue to change and grow as new technologies enter the field, and as young people find new ways to use old technologies. It’s an exciting time to be in this space, and I hope you’ll become a part of our ever-evolving story.

### Gervase Markham — The Necessity of Management

Getting people to agree on what a project needs, and to work together to achieve it, requires more than just a genial atmosphere and a lack of obvious dysfunction. It requires someone, or several someones, consciously managing all the people involved. Managing volunteers may not be a technical craft in the same sense as computer programming, but it is a craft in the sense that it can be improved through study and practice.

– Karl Fogel, Producing Open Source Software

### William Duyck — Open Education and the Open Web – Day 2

Today is day 2 of Open Education Week, and an interesting question has been asked.

Questions: What do you see as the link between Open Education and the Open Web? Does the former depend on the latter?

I took the time to answer this over on my Year In Industry blog, so go and take a nose… OR join the discussion over on Google +.

The tl;dr for me is that an Open Education does not require the use of the Open Web. But it helps.

### Doug Belshaw — On the link between Open Education and the Open Web

I’m currently moderating a discussion as part of Open Education Week on behalf of Mozilla. In today’s discussion prompt I asked:

What do you see as the link between Open Education and the Open Web? Does the former depend on the latter?

It’s a question that depends on several things, not least your definition of the two terms under consideration. Yesterday, in answer to the first discussion prompt, I used Mozilla Thimble to make this:

The above would be my current (brief) definition of Open Education. But what about the Open Web? Here I’m going to lean on Mark Surman’s definition from 2010:

Open web = freedom, participation, decentralization and generativity.

That last word, ‘generativity’ is an interesting one. Here’s part of the definition from Wikipedia:

Generativity in essence describes a self-contained system from which its user draws an independent ability to create, generate, or produce new content unique to that system without additional help or input from the system’s original creators.

As an educator, I believe that the role of teachers is to make themselves progressively redundant. That is to say, the learner should take on more and more responsibility for their own learning. Both teachers and learners can work together within an Open Educational Ecosystem (OEE) that is more than the sum of its parts.

The more I think about it, this is how the Open Web is similar to Open Education. Both are trying to participate in a generative ecosystem benefitting humankind. It’s about busting silos. It’s about collaborating and sharing.

Does Open Education depend upon the Open Web? No, I wouldn’t say it that strongly. Open Education can happen without technology; you can share ideas and resources without the web. However, the Open Web significantly accelerates the kind of sharing and collaboration that can happen within an OEE. In other words, the Open Web serves as a significant catalyst for Open Education.

What do you think? What’s the relationship between Open Education and the Open Web?

Join the discussion!

### Marco Zehe — Easy ARIA Tip #7: Use “listbox” and “option” roles when constructing AutoComplete lists

One question that comes up quite frequently is which roles to use for an auto-complete widget, or more precisely, for the container and the individual auto-complete items. Here’s my take on it: Let’s assume the following rough scenario (note that the auto-complete you have developed may not work in exactly the same way, but probably in a similar one):

Say your auto-complete consists of a textbox or textarea that, when typing, has some auto-complete logic in it. When auto-complete results appear, the following happens:

1. The results are being collected and added to a list.
2. The container gets all the items and is then popped into existence.
3. The user can now either continue typing or press DownArrow to go into the list of items.
4. Enter or Tab select the current item, and focus is returned to the text field.

Note: If your widget does not support keyboard navigation yet, go back to it and add that. Without that, you’re leaving a considerable amount of users out on the advantages you want to provide. This does not only apply to screen reader users.

The question now is: Which roles should the container and individual items get from WAI-ARIA? Some think it’s a list, others think it’s a menu with menu items. There may be more cases, but those are probably the two most common ones.

The short version: Menus on Windows are a hell of a mess, and that’s historically rooted in the chaos that is the Win32 API. Take my word for it and stay out of that mess and the debugging hell that may come with it.

The long version: Windows has always known a so-called menu mode. That mode is in effect once a menu bar, a drop-down menu, or a context menu become active. This has been the case for as long as Windows 3.1/3.11 days, possibly even longer. To communicate the menu mode state to screen readers, Windows, or more precisely, Microsoft Active Accessibility, uses four events:

1. SystemMenuStart: A menu bar or context menu was just activated; menu mode begins.
2. SystemMenuPopupStart: A popup (a drop-down, context or sub menu) just opened.
3. SystemMenuPopupEnd: The popup just closed. Menu mode returns to either the previous Popup in the stack (closing of a sub menu), the menu bar, or falls out of menu mode completely.
4. SystemMenuEnd: The menu bar or context menu was closed; menu mode ends.

These events have to arrive in this exact order. Screen readers like JAWS or Window-Eyes rely heavily on the event order being correct, and they ignore everything that happens outside the menus once menu mode is active. Even NVDA, although its menu mode is not as strict as that of other “older” screen readers, relies on the SystemMenuStart and SystemMenuPopupStart events to recognize when a menu gained focus, because opening a menu does not automatically focus any item by default. An exception is JAWS, which auto-selects the first item it can once it detects a context or start menu opening.

You can possibly imagine what happens if the events get out of order, or are not all fired in a complete cycle. Those screen readers that rely on the order get confused, stay in a menu mode state even when the menus have all closed etc.

So, when a web developer uses one of the menu roles, they set this whole mechanism in motion, too. Because it is assumed a menu system like a Windows desktop app is being implemented, browsers that implement WAI-ARIA have to also send these events to communicate the state of a menu, drop-down or context or sub menu.

So, what happens in the case of our auto-complete example if you were to use the role menu on the container, and menuitem on the individual items? Let’s go back to our sequence from the beginning of the post:

1. The user is focused in the text field and types something.
2. Your widget detects that it has something to auto-complete, populates the list of items, applies role menuitem to each, and role menu to the container, and pops it up.
3. This causes a SystemMenuPopupStart event to be fired.

The consequences of this event are rather devastating to the user. Because you just popped up the list of items, you didn’t even set focus to one of its items yet. So technically and visually, focus is still in your text field, the cursor is blinking away merrily.

But for a screen reader user, the context just changed completely. Because of the SystemMenuPopupStart event that got fired, screen readers now have to assume that focus went to a menu, and that just no item is selected yet. Worse, in the case of JAWS, the first item may even get selected automatically, producing potentially undesired side effects!

Moreover, the user may continue typing, even use the left and right arrow keys to check their spelling, but the screen reader will no longer read this to them, because their screen reader thinks it’s in menu mode and ignores all happenings outside the “menu”. And one last thing: Because you technically didn’t set focus to your list of auto-complete items, there is no easy way to dismiss that menu any more.

On the other hand, if you use listbox and option roles as I suggested, none of these problems occur. The list will be displayed, but because it doesn’t get focus yet, it doesn’t disturb the interaction with the text field. When focus gets into the list of items, by means of DownArrow, the transition will be clearly communicated, and when it is transitioning back to the text field, even when the list remains open, that will be recognized properly, too.

So even if, as sighted web developers, you think that this is visually similar to a context menu or a popup menu or whatever you may want to call it, from a user interaction point of view it is much more like a list than a menu. The menu roles should really be confined to an actual menu system, like the one you see in Google Docs. The side effects of the menu-related roles on Windows are just too severe for scenarios like auto-completes. And the reason for that lies in over 20 years of Windows legacy.

Some final notes: You can spice up your widget by letting the user know that auto-complete results are available via a text that gets automatically spoken if you add it in a text element that is moved outside the viewport, but apply an attribute aria-live="polite" to it. In addition, you can use aria-expanded="true" if you just popped up the list, and aria-expanded="false" if it is not there, both applied to your input or textarea element. And the showing and hiding of the auto-complete list should be done via display:none; or visibility:hidden; and their counterparts, or it will appear somewhere in the user’s virtual buffer and cause confusion.
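Putting the recommendations above together, the widget's two states can be captured as plain attribute maps. This is only a sketch: the helper name and shape are hypothetical, but the roles and attributes are the ones discussed in this post.

```javascript
// Hypothetical helper (not from the post): the ARIA attributes an
// auto-complete widget should carry in its open vs. closed states, using
// the listbox/option roles recommended above instead of menu/menuitem.
function autocompleteAria(isOpen) {
  return {
    input: {
      role: "combobox",                           // the text field itself
      "aria-expanded": isOpen ? "true" : "false", // popup showing or not
    },
    popup: {
      role: "listbox",  // not "menu": avoids the menu-mode side effects
      hidden: !isOpen,  // hide via display:none or visibility:hidden
    },
    item: { role: "option" },          // never "menuitem"
    status: { "aria-live": "polite" }, // off-screen text announcing results
  };
}
```

With these roles, opening the popup only flips aria-expanded and the popup's visibility; no menu events fire, so from the screen reader's point of view focus stays in the text field until the user arrows into the list.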

A great example of all of this can be seen in the Tweet composition ContentEditable on twitter.com.

I also sent a proposal for an addition to the Protocols and Formats Working Group at the W3C, because the example in the WAI-ARIA authoring practices for an auto-complete doesn’t cover the most advanced scenarios, like the one on Twitter and others I’ve come across over time. I hope the powers that be follow my reasoning and make explicit recommendations regarding the roles that should and shouldn’t be used for auto-completes!

### Frédéric Wang — TeXZilla 0.9.4 Released

update 2014/03/11: TeXZilla is now available as an npm module.

### Introduction

For the past two months, the Mozilla MathML team has been working on TeXZilla, yet another LaTeX-to-MathML converter. The idea was to rely on itex2MML (which dates back to the beginning of the Mozilla MathML project) to create a LaTeX parser such that:

• It is compatible with the itex2MML syntax and is similarly generated from a LALR(1) grammar (the goal is only to support a restricted set of core LaTeX commands for mathematics, for a more complete converter of LaTeX documents see LaTeXML).
• It is available as a standalone Javascript module usable in all the Mozilla Web applications and add-ons (of course, it will work in non-Mozilla products too).
• It accepts any Unicode characters and supports right-to-left mathematical notation (these are important for the world-wide aspect of the Mozilla community).

The parser is generated with the help of Jison and relies on a grammar based on the one of itex2MML and on the unicode.xml file of the XML Entity Definitions for Characters specification. As suggested by the version number, this is still in development. However, we have made enough progress to present interesting features here and get more users and developers involved.

### Quick Examples

\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1


∑_{n=1}^{+∞} \frac{1}{n^2} = \frac{π^2}{6}


س = \frac{-ب\pm\sqrt{ب^٢-٤اج}}{٢ا}


### Live Demo / FirefoxOS Web app

A live demo is available to let you test the LaTeX-to-MathML converter with various options and examples. For people willing to use the converter on their mobiles, a FirefoxOS Web app is also available.

### Using TeXZilla in a CommonJS program or Web page

TeXZilla is made of a single TeXZilla.js file with a public API to convert LaTeX to MathML or extract the TeX source from a MathML element. The converter accepts some options like inline/display mode or RTL/LTR direction of mathematics.

You can load it the standard way in any Javascript program and obtain a TeXZilla object that exposes the public API. For example, in a CommonJS program, to convert a TeX source into a MathML source:

  var TeXZilla = require("./TeXZilla");
  console.log(TeXZilla.toMathMLString("\\sqrt{\\frac{x}{2}+y}"));


or in a Web Page, to convert a TeX source into a MathML DOM element:

  <script type="text/javascript" src="http://www.maths-informatique-jeux.com/blog/frederic/?post/2014/02/25/TeXZilla.js"></script>
  ...
  var MathMLElement = TeXZilla.toMathML("\\sqrt{\\frac{x}{2}+y}");


### Using TeXZilla in Mozilla Add-ons

One of the goals of TeXZilla is to be integrated into Mozilla add-ons, allowing people to write cool math applications (in particular, we would like to have an add-on for Thunderbird). A simple Firefox add-on has been written and has passed the AMO review, which means that you can safely include the TeXZilla.js script in your own add-ons.

TeXZilla can be used as an addon-sdk module. However, if you intend to use features requiring a DOMParser instance (for example toMathML), you need to initialize the DOM explicitly:

  var {Cc, Ci} = require("chrome");
  TeXZilla.setDOMParser(Cc["@mozilla.org/xmlextras/domparser;1"].
    createInstance(Ci.nsIDOMParser));


In a traditional extension running with chrome privileges, the equivalent is:

  TeXZilla.setDOMParser(Components.
    classes["@mozilla.org/xmlextras/domparser;1"].
    createInstance(Components.interfaces.nsIDOMParser));


### Using TeXZilla from the command line

TeXZilla has a basic command line interface. However, since CommonJS is still being standardized, this may work inconsistently between CommonJS interpreters. We have tested it with slimerjs (which uses Gecko), phantomjs and nodejs. For example, you can do

  $ slimerjs TeXZilla.js parser "a^2+b^2=c^2" true
  <math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><...

or launch a Web service (see next section). We plan to implement a stream filter too, so that it can behave the same as itex2MML: finding the LaTeX fragments in a text document and converting them into MathML.

### Using TeXZilla as a Web Server

TeXZilla can be used as a Web Server that receives POST and GET HTTP requests with the LaTeX input and sends JSON replies with the MathML output. The typical use case is for people willing to perform some server-side LaTeX-to-MathML conversion. For instance, to start the TeXZilla Web server on port 7777:

  $ nodejs TeXZilla.js webserver 7777
  Web server started on http://localhost:7777


Then you can send a POST request:

  $ curl -H "Content-Type: application/json" -X POST -d '{"tex":"x+y","display":"true"}' http://localhost:7777
  {"tex":"x+y","mathml":"<math xmlns=\"http://www.w3.org/1998/Math/MathML\"...

or a GET request:

  $ curl "http://localhost:7777/?tex=x+y&rtl=true"
  {"tex":"x+y","mathml":"<math xmlns=\"http://www.w3.org/1998/Math/MathML\"...
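If you call the web service from a script instead of curl, the request body is just JSON with the same fields as in the curl examples. A minimal sketch (the helper name is mine, not part of TeXZilla):

```javascript
// Sketch: build the JSON body for a POST to a TeXZilla web server started
// with `nodejs TeXZilla.js webserver 7777`. The "tex" and "display" fields
// match the curl example above; the helper name itself is illustrative.
function buildTexRequest(tex, display) {
  return JSON.stringify({ tex: tex, display: display ? "true" : "false" });
}

// Assuming the server is running, POSTing buildTexRequest("x+y", true) to
// http://localhost:7777 with Content-Type: application/json returns a JSON
// reply whose "mathml" field holds the converted markup.
```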


Note that client-side conversion is trivial using the public API, but see the next section.

### Web Components Custom Element <x-tex>

We used the X-Tag library to implement a simple Web Components Custom Element <x-tex>. The idea is to have a container for LaTeX expressions like

  <x-tex dir="rtl">س = \frac{-ب\pm\sqrt{ب^٢-٤اج}}{٢ا}</x-tex>

that will be converted into MathML by TeXZilla and displayed in your browser. You can set the display/dir attributes on that <x-tex> element and they will be applied to the <math> element. Instances of <x-tex> elements also have a source property that you can use to retrieve or set the LaTeX source. Of course, the MathML output will automatically be updated when dynamic changes occur. You can try this online demo.

### CKEditor Plugins / Integration in MDN

Finally, we created a first version of a TeXZilla CKEditor plugin. An online demo is available here. We already sent a pull request to Kuma and we hope it will soon enable users to put mathematical formulas in MDN articles without having to paste the MathML into the source code view. It could be enhanced later with a more advanced UI.

## March 10, 2014

### Pete Moore — Accomplishments & status: Bug 875599 – Delete dead code in tools repo

Accomplishments & status:

Blocked/Waiting on:

xxx

To look at over the next week:

xxx

Areas to develop:

To be discussed.

Quarterly goal tracking:

Actions:

• <to complete during 1x1>

### Justin Scott — Adventure is out there!

After 8 incredible years at Mozilla, the time has come for me to discover new ideas, meet new people, and explore more of what’s out there. There’s no other organization or community quite like Mozilla, and I feel honored and lucky to have called it home for so long. I can’t overstate how much Mozillians have taught and shaped me personally and professionally, and I’m so grateful for our time together.

### What’s next?

I’m ready for something new and ambitious, and will be spending some extended time building, learning, and reflecting. In other words, I’m taking a sabbatical to focus on:

1. Building cool stuff. It’s been too long since I’ve spent time building and shipping a personal project, and I’ve got some new ideas I’m itching to play with. I’m excited to stretch some creative muscles and see where they take me.
2. Learning new skills. In recent years, my passion for travel has led to a strong interest in learning new languages and becoming a better photographer. I’m looking forward to spending time learning Spanish, photography, and other skills — maybe I’ll even become a Wilderness First Responder!
3. Reflection. Tomorrow I turn 27. It’s a good time to stop and think about the bigger picture – what matters to me, what I truly enjoy doing, and plans for the future.

I’m pretty darn excited.

### Kim Moir — Built to Scale talk at EclipseCon

I'm honoured to be giving a talk at EclipseCon next week entitled Built to Scale: The Mozilla Release Engineering Toolbox.  To give you some context, here are some numbers about the scale of build and test jobs we run.

We run about 6000 build jobs and 50,000 test jobs every week day.  Each test job has many actual test suites within it of course.  We have 1800+ devices to build on, plus 3900+ for tests.  Some devices reside in our data centres, some reside in AWS.  When a developer lands (commits) a change, our goal is to have the relevant job start within 15 minutes of being added to the scheduler database.

My talk will discuss how we manage this scale of continuous integration in terms of hardware and software.  I'll also touch on how we manage this from a human perspective, because that isn't easy either.  I'll discuss some of the lessons learned along the way as we have moved much of our infrastructure to AWS, and describe how we manage the 1000+ mobile devices that we run tests on as part of our CI farm.

Image ©ardonik, http://www.flickr.com/photos/ardonik/3954691105/sizes/l/ under Creative Commons by-nc-sa 2.0

Release engineering at this scale has lots of pieces to fit together.

In preparing this talk, I have been thinking a lot about the audience.  The audience will be people in the Eclipse community, who don't have a lot of context about how we do things at Mozilla. I recently read the book Resonate by Nancy Duarte which describes how to create great visual presentations and story arcs as a speaker.  One of the ideas in the book is that the most important thing that you can do as a speaker is think about your audience, what they know, and how to engage them.

I use the Presentation Zen approach when preparing a talk which means that I write out all the topics on index cards, arrange them, rearrange them, and discard non-essential content.  Before touching a computer.  When I was initially preparing the talk, I had an entire index card of Mozilla specific words that I would have to explain.  It was ridiculous.  Nobody would ever remember the context of those terms from one slide to the next. I put that card in the shredder.

Last week, I thought of a new approach to present my talk, and I think it will really work.  I want to make the talk as interesting and relevant to the Eclipse community as it would be if I gave it to a room full of Mozillians who have more context.

So this is what I know about the audience for my talk:

• You are Eclipse community members.
• Like all developers, you have known the pain of slow builds and test results.
• You'd like to know how to scale large amounts of hardware and software, and how things can get better.
• You want to work on optimizing your product, and not be frustrated by your build and release process.

If you have specific questions you'd like me to address in the talk, please let me know in the comments or via twitter (@kmoir).  Looking forward to seeing you all at EclipseCon!

Notes:
1 I also recently read Why Don't Students Like School: A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for the Classroom which is an excellent book and extremely applicable to people teaching programming languages and other abstract concepts.  One of the topics that stuck with me from reading this book is that our brains need to have a lot of simple concepts memorized to understand more complex concepts. For instance, if you don't have your multiplication tables memorized, simple algebra will be difficult because you will have to stop and think what the value of 3x is when x is 7 instead of just pulling that from memory.  So this is why when people complain that schools teach a lot of memorization and not more abstract thinking, it's not really a valid argument.  You need a lot of concepts memorized before you can do more abstract thinking.  Highly recommended book.

John O'Duinn gave a talk at the Releng 2013 workshop last year and later as a Google Tech Talk that gives a great overview of why release engineering is a high priority at Mozilla.   Well worth watching.

### Benjamin Smedberg — Use -debugexe to debug apps in Visual Studio

Many people don’t know about how awesome the windows debuggers are. I recently got a question from a volunteer mentee: he was experiencing a startup crash in Firefox and he wanted to know how to get the debugger attached to Firefox before the crash.

On other systems, I’d say to use mach debug, but that currently doesn’t do anything useful on Windows. It’s still pretty simple, though. You have two options:

### Debug Using Visual Studio

Both Visual Studio and Visual C++ Express have a command-line option for launching the IDE ready for debugging.

devenv.exe -debugexe obj-ff-debug/dist/bin/firefox.exe -profile /c/builds/test-profile -no-remote

The -debugexe flag tells the IDE to load your Firefox build with the command-line arguments you specify. Firefox will launch when you use the “Go” command (F5).

For Visual C++ express edition, run WDExpress.exe instead of devenv.exe.

### Debug Using Windbg

windbg is the Windows command-line debugger. As with any command-line debugger it has an arcane debugging syntax, but it is very powerful.

Launching Firefox with windbg doesn’t require any flags at all:

windbg.exe obj-ff-debug/dist/bin/firefox.exe -profile /c/builds/test-profile -no-remote

### Debugging Firefox Release Builds

You can also debug Firefox release builds on Windows! Mozilla runs a symbol server that allows you to automatically download the debugging symbols for recent prerelease builds (I think we keep 30 days of nightly/aurora symbols) and all release builds. See the Mozilla Developer Network article for detailed instructions.

Debugging official builds can be a bit confusing due to inlining, reordering, and other compiler optimizations. I often find myself looking at the disassembly view of a function rather than the source view in order to understand what exactly is going on. Also note that if you are planning on debugging a release build, you probably want to disable automatic crash reporting by setting MOZ_CRASHREPORTER_DISABLE=1 in your environment.

### Yunier José Sosa Vázquez — Firefox OS introduced in Chile

And it has finally arrived! Almost a year after its worldwide debut, Firefox OS was introduced in Chile. At a press event held at Movistar's Open Future center, to which all the national media were invited, the partners in this venture launched the first device with Mozilla's operating system to be sold in Chile.

Andreas Gal, Vice President of Mobile at Mozilla, said: “We are continually looking for ways to enrich people's lives through the Web. By launching Firefox OS with Alcatel OneTouch and Movistar, we will be able to offer a quality experience that is accessible to the Chileans who are buying their first smartphone.” Gal also highlighted the role of Mozilla's volunteer community in Chile, who have supported the launch with training sessions and demos, and he encouraged them to keep supporting the development of local applications for the device as well.

For Osvaldo Avendaño, Regional Director of Alcatel OneTouch Southern Cone, “the OneTouch Fire reflects our ongoing commitment to democratizing smartphone use, delivering connectivity through functional, easy-to-use devices at affordable prices.”

For his part, Javier Valenzuela, Head of Devices at Movistar, stated that “Firefox OS raises the performance of devices above the capabilities of their own hardware. With this, it will no longer be necessary to have a phone with the highest specifications, since this operating system can deliver similar performance with much less hardware, which ultimately promotes the mass adoption of smartphones and gives users more freedom of choice.”

### Technical specifications

The Alcatel OneTouch Fire has a 1GHz CPU, 512MB of internal storage, 256MB of RAM, and MicroSD support up to 32GB. It supports GSM bands as well as UMTS 850/1900/2100 and UMTS 900/2100. As for connectivity, it offers HSPA, Wi-Fi, Bluetooth 3.0, and A-GPS.

Firefox OS running on an Alcatel One Touch Fire

It also comes with a 3.2MP camera, highly optimized graphics on a 3.5” screen, and a colorful, compact design. Built on HTML5 and open web technologies, the device offers fluid browsing that lets users discover different applications and navigate the web.

Yet another country where Firefox OS is available! That makes 15.

Original post by Javier Martínez on Mozilla Chile

Source: Mozilla Hispano

### Doug Belshaw — Open Education and the Open Web (#openeducationwk)

This week is Open Education Week 2014:

Open Education Week is a series of events to increase awareness of the open education movement. The third annual Open Education Week takes place from March 10-15, both online and offline around the world. Through these events and resources, we hope to reach out to more people to demonstrate the opportunities open education has created and what we have to look forward to.

Mozilla is playing a role, through a week-long online discussion entitled Open Education and the Open Web. There’ll be a new question to prompt conversation each day in our Google+ Webmaker community.

What does it mean to participate on the open web? How can we encourage others to take agency over the opportunities the open web provides? This discussion led by Mozilla’s Doug Belshaw will explore the participatory culture of the web, why it matters, and what we can do to protect and cultivate it.

Today’s prompt is simple. We’re just asking people to introduce themselves and respond as to what ‘open education’ looks like in their context.

You should join us. It’s totally fine to dip in and dip out. Take the first step:

Image CC BY mozillaeu

### Benoit Girard — C/C++ Eclipse Project Generation

With the landing of bug 973770 (and a few touch-ups landing this week), the Mozilla build system can now generate a fully working, out-of-the-box, no-hassle Eclipse workspace for C++ Gecko development. This has been tested for Mac, Fennec and B2G. For Windows, please use the Visual Studio project generator. If you find any bugs, please file them and link off bug 973770.

Didn’t we already support Eclipse projects? Yes, we did, but the setup was quite tedious to perform and maintain. My approach automates some of those instructions and uses the information now available from within moz.build to generate the project. This also means we can remove the build scanning phase, which tends to make Eclipse unresponsive.

To use this:

• Run |./mach build-backend -b CppEclipse|. For B2G, cd into objdir-gecko and use |../gecko/mach build-backend -b CppEclipse|.
• Start Eclipse and use $OBJDIR/eclipse_workspace as the workspace.
• File->Import->Existing Project with path $OBJDIR/eclipse_workspace/gecko.
• Let the indexer run on the first start. This can take about 20 mins (long but worth it, see below). It will index your exact mozconfig setup (for example -DDEBUG, B2G system libs).

Here are some benefits:

• Code Completion
• Call Hierarchy
• Build, Flash (b2g), Launch
• Type Hierarchy
• Mozilla Formatter

If you find yourself tweaking the project let me know so that I can integrate changes into the project generator instead where everyone can benefit.

### Nick Cameron — Subtyping and coercion in Rust

Subtyping and coercion are two related concepts for enabling polymorphic re-use in programming languages. I want to lay out exactly what they are and how they exist in Rust, and in particular how they relate to variance which exists today, is being implemented, or will be implemented as part of the work on dynamically sized types (DST). My terminology will not exactly fit that used by PL people or Rust people, sorry.

In the general case (i.e., not just Rust) subtyping is a relation on types which says that T is a subtype of U if T is in some sense a more specific type than U. More precisely we might like to say that T and U denote sets of values and the set of values denoted by T is a subset of the set of values denoted by U. That gets complicated when thinking about existential types and so forth if we don't have explicit values for the introduction of such types, and if we do, then I think things get a little circular and not very helpful for thinking about real programming languages.

Thinking of expressions in a language, if an expression has type T, it can be used anywhere we would expect an expression of type U. This is inclusion polymorphism (aka the Liskov substitution principle or strong behavioral subtyping; there are subtle differences between these three terms, but I could only describe them in the most hand-wavy way, so I won't).

Coercion is an operation on values (or expressions) where a value of type T can be changed in some way to a value of type U. An example is using an integer as a float - this is allowed transparently in many languages, but the compiler must insert code which does the low-level conversion from integer to float. We usually assume (hope?) that conversions due to coercion are cheap.

When actually writing code, subtyping and coercion often look the same. Many programmers do not realise there is even a difference. For example, 'subtyping' between pointer types in C++ with multiple inheritance is technically a coercion because of the pointer adjustment required by the implementation using multiple vtables.

Rust has basically no subtyping. The only subtyping it has is due to contravariance with respect to lifetime variables. I (Niko has done the hard work already) am currently extending this to variance (where safe, along the usual lines) with respect to subtyping. Rust has coercions between structs and traits given by impls (well, between references to such things). It should, soon hopefully, have coercions between sub-traits and eventually, maybe sub-structs. Plus coercions between numeric types, etc., which are less interesting for now.

I would like to further classify subtyping and coercions along some axes. The key difference is that subtyping does not change the underlying value and coercion does (as discussed above). Both subtyping and coercion may be explicit or implicit. I sometimes refer to implicit subtyping as subsumption, but I'm not sure if that is common or correct. In Rust, subtyping is always implicit, and coercion is sometimes implicit (trait objects) and sometimes explicit (numeric conversions, i.e., they require as; coercion used to always be explicit until recently).

In Rust, only subtyping is used in the type inference algorithm.

The last axis I have thought of is a bit more hazy and a bit more of an implementation detail than a fundamental one. It is that coercion restricts access to the coerced value. If coercion changes a value, it would be unsafe to continue to access the value via the old type. Either coercion must copy a value (i.e., only the new, copied value has the new type), or we must ensure that the old value cannot be accessed whilst the coerced value exists. Rust's linearity rules ensure this.

With DST, coercion becomes much more complicated. In particular we will add covariant coercions which must coerce fields of a struct. That is, they change things deep inside the struct, not just at the surface of a value. In contrast to subtyping, covariant coercion is always safe because we can no longer access the coerced value via its old type. (Thinking about proving safety gave me the insight that coercion actually _changes_ the type of a value, whereas subtyping gives multiple types to a single value (which is not to say that coercion always implies monomorphic types)).

We have talked a little (I think Niko has thought more) about perhaps changing the relationship between subtyping and type inference. Perhaps not all subtyping should be taken into account by the type inference algorithm. And/or perhaps some coercions should be taken account of. I don't really know - I have a hard time visualising how type checking would be affected by these changes.

I feel like I should have a point to make now, but I don't. I just did a lot of thinking to clarify all this in my head, so I thought I should write it down. And who knows? It might be helpful or interesting to someone else too.

### Robert O'Callahan — Introducing Chaos Mode

Some test failures are hard to reproduce. This is often because code (either tests or implementation code) makes unwarranted assumptions about the environment, assumptions that are violated nondeterministically. For example, a lot of tests have used setTimeout to schedule test code and assumed certain events will have happened before the timeout, which may not be true depending on effects such as network speeds and system load.

One way to make such bugs easier to reproduce is to intentionally exercise nondeterminism up to the limits of API contracts. For example, we can intentionally vary the actual time at which timers fire, to simulate the skew between CPU execution time and real time. To simulate different permitted thread schedules, we can assign random priorities to threads. Since hashtable iteration is not defined to have any particular order, we can make a hashtable iterator always start at a randomly chosen item.

I tried applying this to Gecko. I have patches that define a global "chaos mode" switch, and in several different places, if we're in chaos mode, we choose randomly between different valid behaviors of the code. Here's what the patches currently do:

• Sometimes yield just before dispatching an XPCOM event. This gives another thread a chance to win an event-dispatch race.
• On Linux, give threads a random priority and pin some threads to CPU 0 so they contend for CPU.
• Insert sockets in random positions in the list of polled sockets, to effectively randomize the priority of sockets in poll results.
• Similarly, when putting HTTP transactions into the HTTP transaction queue, randomly order them among other transactions with the same specified priority.
• Start hashtable iteration at a random entry.
• Scale timer firing times by random amounts (but don't vary the order in which timers fire, since that would violate the API contract).
• Shuffle mochitests and reftests so they run in random order.
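The last item (shuffling test order) can be sketched with a seeded shuffle, so that an interesting random ordering can be reproduced later by reusing the seed. This is a generic illustration, not Gecko's actual implementation; mulberry32 is a small, well-known PRNG:

```javascript
// Small seeded PRNG (mulberry32): the same seed yields the same sequence,
// so a failing random test order can be replayed exactly.
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), seed | 1);
    t = (t + Math.imul(t ^ (t >>> 7), t | 61)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Fisher-Yates shuffle of a test list, driven by the seeded PRNG.
function shuffleTests(tests, seed) {
  const rand = mulberry32(seed);
  const out = tests.slice(); // leave the input list untouched
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}
```

Logging the seed alongside a chaos-mode run is what makes a "random" failure something you can attempt to reproduce.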

Note that it can be valuable to make a single random choice consistently (for the same object, thread, etc) rather than making lots of fine-grained random decisions. For example, giving a thread a fixed low priority will starve it of CPU which will likely cause more extreme behavior --- hopefully more buggy behavior --- than choosing a random thread to run in each time quantum.

One important source of nondeterminism in Gecko is XPCOM event (i.e. HTML5 task) dispatch. A lot of intermittent bugs are due to event timing and ordering. It would be nice to exploit this in chaos mode, e.g. by choosing the next event to fire randomly from the set of pending events instead of processing them in dispatch order. Unfortunately we can't do that because a lot of code depends on the API contract that firing order follows dispatch order. In general it's hard to determine what the valid alternative firing orders are; the first item on my list above is my best approximation at the moment.

### Important Questions

Does this find bugs? Yes.

Which chaos features are the most helpful for producing test failures? I don't know. It would be a very interesting experiment to do try pushes with different patches enabled to figure out which ones are the most important.

Does it help reproduce known intermittent bugs? Sometimes. In bug 975931 there was an intermittent reftest failure I could not reproduce locally without chaos mode, but I could reproduce with chaos mode. On the other hand chaos mode did not help reproduce bug 791480. Extending chaos mode can improve this situation.

Isn't this just fault injection? It's similar to fault injection (e.g. random out-of-memory triggering) but different. With fault injection typically we expect most tests to fail because faults like OOM are not fully recoverable. Chaos mode should not affect any correctness properties of the program.

Wasn't this already done by <insert project name>? Probably. I don't claim this is a new idea.

When is this going to land and how do I turn it on? It has already landed. To turn it on, change isActive() to return true in mfbt/ChaosMode.h. Shuffling of reftests and mochitests has to be done separately.

OK, so this can trigger interesting bugs, but how do we debug them? Indeed, chaos mode makes normal debugging workflows worse by introducing more nondeterminism. We could try to modify chaos mode to reproduce the random number stream between runs but that's inadequate because other sources of nondeterminism would interfere with the order in which the random number stream is sampled. But we are working on a much better solution to debugging nondeterministic programs; I'll be saying more about that very soon!

## March 09, 2014

I tried to delete my Linkedin account a while back, but I still get a lot of "invitation to connect on Linkedin" emails. I plan to never connect to anyone on Linkedin ever again, so whoever wants to connect, please don't be offended when it doesn't happen --- it's not about you.

PS, I'm visiting Taiwan at the moment and wondering why Google uses that as a cue to switch its Web interface to Chinese even when I'm logged into my regular Google account. Dear Google, surely it is not very likely that my change of location to Taiwan indicates I have suddenly learned Chinese and forgotten English.

### William Lachance — Eideticker for FirefoxOS: Becoming more useful

[ For more information on the Eideticker software I'm referring to, see this entry ]

Time for a long overdue eideticker-for-firefoxos update. Last time we were here (almost 5 months ago! man time flies), I was discussing methodologies for measuring startup performance. Since then, Dave Hunt and I have been doing lots of work to make Eideticker more robust and useful. Notably, we now have a setup in London running a suite of Eideticker tests on the latest version of FirefoxOS on the Inari on a daily basis, reporting to http://eideticker.mozilla.org/b2g.

There were more than a few false starts, and some of the earlier data is not to be entirely trusted… but it now seems to be chugging along nicely, hopefully providing startup numbers that are a useful counterpoint to the datazilla startup numbers we’ve already been collecting for some time. There still seem to be some minor problems, but in general I am becoming more and more confident in it as time goes on.

One feature that I am particularly proud of is the detail view, which enables you to see frame-by-frame what’s going on. Click on any datapoint on the graph, then open up the view that gives an account of what eideticker is measuring. Hover over the graph and you can see what the video looks like at any point in the capture. This not only lets you know that something regressed, but how. For example, in the messages app, you can scan through this view to see exactly when the first message shows up, and what exact state the application is in when Eideticker says it’s “done loading”.

(apologies for the low quality of the video — should be fixed with this bug next week)

As it turns out, this view has also proven to be particularly useful when working with the new entropy measurements in Eideticker which I’ve been using to measure checkerboarding (redraw delay) on FirefoxOS. More on that next week.

### Daniel Stenberg — http2 in curl

While the first traces of http2 support in curl were added back in September 2013, it wasn’t until recently that the support actually became useful. There’s been a lot of http2 related activity in the curl team recently, and in late January 2014 we could run our first command line inter-op tests against public http2 (draft-09) servers on the Internet.

There’s a lot to be said about http2 for those not into its nitty gritty details, but I’ll focus on the curl side of this universe in this blog post. I’ll do separate posts and presentations on http2 “internals” later.

### A quick http2 overview

http2 (without the minor version, as per what the IETF working group has decided) is a binary protocol that allows many logical streams to be multiplexed over the same physical TCP connection. It features compressed headers in both directions, stream priorities and more. It is being designed to maintain the user concepts and paradigms from HTTP 1.1, so web sites don’t have to change contents and web authors won’t need to relearn a lot. The web will not break because of http2, it will just magically work a little better, a little smoother and a little faster.

In libcurl we build http2 support with the help of the excellent library called nghttp2, which takes care of all the binary protocol details for us. You’ll also have to build it with a new enough version of the SSL library of your choice, as http2 over TLS will require use of some fairly recent TLS extensions that not many older releases have and several TLS libraries still completely lack!

The need for an extension arises because when speaking TLS over port 443, which HTTPS implies, the current and former web infrastructure assumes that we will speak HTTP 1.1 over it, while we now want to be able to say we want to talk http2 instead. When Google introduced SPDY, they pushed for a new extension called NPN to do this; taken through standardization in the IETF, it has been forked, changed and renamed to ALPN, with roughly the same characteristics (I don’t know the specific internals so I’ll stick to how they appear from the outside).

So, NPN and especially ALPN are fairly recent TLS extensions, which means you need a modern enough SSL library to get that support. OpenSSL and NSS both support NPN and ALPN in recent enough versions, while GnuTLS only supports ALPN. You can build libcurl to use any of these three libraries to get it to talk http2 over TLS.

### http2 using libcurl

(This still describes what’s in curl’s git repository, the first release to have this level of http2 support is the upcoming 7.36.0 release.)

Users of libcurl who want to enable http2 support will only have to set CURLOPT_HTTP_VERSION to CURL_HTTP_VERSION_2_0 and that’s it. It will make libcurl try to use http2 for the HTTP requests you do with that handle.
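
The same switch can be sketched in Python through pycurl, libcurl's Python binding (a sketch, assuming a pycurl/libcurl combination new enough to expose the constant and a libcurl built against nghttp2):

```python
import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, "https://nghttp2.org/")
# Ask libcurl to attempt http2; it falls back to HTTP 1.1 if the
# server (or the TLS library's NPN/ALPN support) can't negotiate it.
c.setopt(pycurl.HTTP_VERSION, pycurl.CURL_HTTP_VERSION_2_0)
# c.perform() would now issue the request over http2 when possible.
c.close()
```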

For HTTP URLs, this will make libcurl send a normal HTTP 1.1 request with an offer to the server to upgrade the connection to version 2 instead. If the server accepts, libcurl continues using http2 in the clear on the connection; if it doesn’t, libcurl continues using HTTP 1.1 on it. This cleartext upgrade mode is one that Firefox and Chrome will not support.

For HTTPS URLs, libcurl will use NPN and ALPN as explained above and offer to speak http2, and if the server supports it, there will be http2 sweetness from that point onwards. Or the server selects HTTP 1.1 and then that’s what will be used. The latter is also what will be picked if the server doesn’t support ALPN or NPN.

Alt-Svc and ALTSVC are new things planned to show up in time for http2 draft-11 so we haven’t really thought through how to best support them and provide their features in the libcurl API. Suggestions (and patches!) are of course welcome!

### http2 with curl

Hardly surprisingly, the curl command line tool also has this power. You use the --http2 command line option to switch on the libcurl behavior described above.

### Translated into old-style

To reduce transition pains and problems, and to work with the rest of the world to the highest possible degree, libcurl will (decompress and) translate received http2 headers into HTTP 1.1 style headers, so that applications and users get a stream of headers that look very much the way you’re used to, and it will produce an initial response line that says HTTP 2.0 blabla.
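
That translation can be pictured roughly like this (a sketch with a hypothetical helper function; the real work happens inside libcurl's http2 layer):

```python
def h1_style_response(status, headers):
    """Render http2 response metadata as the familiar HTTP/1.x-style
    header text libcurl hands back to applications: an initial
    response line followed by "name: value" header lines.
    Illustrative sketch only, not libcurl's internal code."""
    lines = ["HTTP/2.0 %s" % status]
    lines.extend("%s: %s" % (name, value) for name, value in headers)
    return "\r\n".join(lines) + "\r\n\r\n"

text = h1_style_response(200, [("server", "nghttpx"),
                               ("content-type", "text/html")])
```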

### Building (lib)curl to support http2

See the README.http2 file in the lib/ directory.

### This is still a draft version of http2!

I just want to make this perfectly clear: http2 is not out “for real” yet. We have tried our http2 support somewhat at the draft-09 level and Tatsuhiro has worked on the draft-10 support in nghttp2. I expect there to be at least one more draft, but perhaps even more, before http2 becomes an official RFC. We hope to be able to stay on the frontier of http2 and deliver support for the most recent draft going forward.

PS. If you try any of this and experience any sort of problems, please speak to us on the curl-library mailing list and help us smooth out whatever problem you hit!

### Tantek Çelik — Mockups For People Focused Mobile Communication

I've been iterating on mockups for people focused mobile communication for a while on the IndieWebCamp wiki for my own publishing tool Falcon, but the mockups deserve a blog post of their own.

Going back to the original people focused mobile communication experience, we've already figured out how to add a personal icon to your site so that visitors can choose "Add to Home Screen" (or similar menu option) to add icons of people (represented by their site) directly to their mobile home screens where they normally organize their apps.

The next step is to mockup what happens when they select an icon of a person and it launches their website.

I started with a mockup for how I could present communication options on my home page when viewed on an iOS7 mobile device, figuring if I can create a seamless experience there, adapting it to other mobile devices, desktop etc. would be fairly straightforward.

Thus when someone selects an icon of a person and it launches their website, they might see a home page view like this:

This is a hybrid approach, providing a look and feel familiar to the user from their "native" environment (smooth, seamless, confidence invoking), with very simply styled web content right below it so if that's all they want, they get it immediately.

#### Home with contact options

Continuing with the user flow, since they want to contact you, they select the "Contact" folder, which opens up accordingly. From there the user selects which "app" they want and it launches automatically into a new message/connection, skipping any distracting inboxes.

The various contact options are presented in preference order of the contactee.

Each of these can be optionally hidden based on presence status / availability, or time of day.

A subset of these could also be presented publicly, with others (e.g. perhaps Facetime and Skype) only shown when the visitor identifies themselves (e.g. with IndieAuth). The non-public options could either be hidden, or perhaps shown disabled, and selecting them would be discoverable way to request the visitor identify themselves.

This is enough of a mockup to get started with the other building blocks so I'm going to stop there.

I've started a wiki page on "communication" and will be iterating on the mockups there.

Got other thoughts? Upload your mockups to indiewebcamp.com and add them to the communication page as well. Let's build on each other's ideas in a spirit of open source design.

### K Lars Lohn — Redneck Broadband - fixed!

the beginning of the story

It was my fault! Monday's 28Mbps was not an anomaly. At one point in the installation, the WiFi hot spot crashed and I had to do a factory reset. Little did I know that a factory reset disables the 4G radio: our throughput dropped to an abysmal 200Kbps. Once re-activated, 4G speeds came back and remain consistent.

After crowing success, then lamenting failure, I'm back to shouting "success!"

## March 08, 2014

### Tantek Çelik — Building Blocks For People Focused Mobile Communication

I'm at IndieWebCampSF and today, day 2, is "hack" or "create" day so I'm working on prototyping people focused mobile communication on my own website.

A few months ago I wrote about my frustrations with distracting app-centric communication interfaces, and how a people-focused mobile communication experience could not only solve that problem, but provide numerous other advantages as well.

Yesterday I led a discussion & brainstorming session on the subject, hashtagged #indiecomms, and it became clear that there were several pieces we needed to figure out:

• Mockups for what it would look like
• URLs for each communication service/app
• Markup for the collections of links and labels
• CSS for presenting it like the mockups
• Logic for presence / availability for each service

So that's what I'm working on, and I will blog about each building block as I figure it out and create it.

### Daniel Stenberg — HTTPbis design team meeting London

I’m writing this just hours after the HTTPbis design team meeting in London 2014 has ended.

Around 30 people attended the meeting in Mozilla’s central London office. The fridge was filled up with drinks, the shelves were full of snacks and goodies. The day could begin. This is the Saturday after the IETF89 week so most people attending had already spent the whole or parts of the week before here in London doing other HTTP and network related work. The HTTPbis sessions at the IETF itself were productive and had already pushed us forward.

We started at 9:30 and we quickly got to work. Mark Nottingham guided us through the day with usual efficiency.

We all basically hung out in a huge room, some in chairs, some in sofas and a bunch of people on the floor or just standing up. We had mikes passed around and the http2 discussions were flowing back and forth depending on the topics and what people felt about them. Some of the issues that were nailed down this time and will end up detailed in the upcoming draft-11 are (strictly speaking, we only discussed the things and formed opinions, as by IETF guidelines we can’t decide things at an offline meeting like this):

• Priorities of streams will have a dependency graph or direction, making individual streams less or more important than others
• A client can send headers without compression and tell the proxy that the headers shouldn’t be compressed – used as a way to mitigate some of the compression security problems
• There will be no TLS renegotiation allowed mid-session. Basically a client will have to tear down the connection and negotiate again if suddenly a need to use a client certificate arises.
• Alt-Svc is the way forward, so ALTSVC will appear as a new frame in draft-11. This is the way to signal to an application that there is another “route” to the same content on the same server. This will allow for what is popularly known as “opportunistic encryption”, or at least one sort of that. In short, you can do “plain-text” HTTP over a TLS connection using this…
• We decided that a server should support gzip contents from clients
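
The stream priority dependency graph mentioned above can be pictured with a toy scheduler that only serves a stream after the stream it depends on (purely illustrative; the real draft-11 mechanics also carry weights and re-prioritization):

```python
def dependency_order(streams, parent):
    """Yield stream ids so that every stream comes after the one it
    depends on. `parent` maps a stream id to the id it depends on
    (None for a root stream). Toy sketch of http2 priority
    dependencies, not the actual draft-11 algorithm."""
    ordered, done = [], set()

    def emit(sid):
        if sid in done:
            return
        p = parent.get(sid)
        if p is not None:
            emit(p)  # serve the dependency first
        done.add(sid)
        ordered.append(sid)

    for sid in streams:
        emit(sid)
    return ordered

# Stream 5 depends on 3, which depends on 1.
order = dependency_order([5, 3, 1], {5: 3, 3: 1, 1: None})
```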

There were some other things handled too, but I believe those are the main changes. When the afternoon started to turn long, beers and other beverages were brought out and we enjoyed a relaxing social finale of the day before we split up into smaller groups and headed out into the busy London night to get dinner…

Thanks everyone for a great day. I also appreciated meeting several people in real-life I never met before, only discussed with and read emails from online and of course some old friends I hadn’t seen in a long time!

Oh, there’s also a new rough time frame for http2 going forward. Nearest in time would be the draft-11 at the end of March and another interim in the beginning of June (Boston?).

As a reminder, here’s what happened for draft-10, and here is http2 draft-10.

Out of all people present today, I believe Mozilla was the company with the largest team (8 attendees) – funnily enough none of us Mozillians there actually work in this office or even in this country.

### Konstantinos Antonakoglou — A Creative Commons music video made out of other CC videos

Hello! Let’s go straight to the point. Here is the video:

…and here are the videos that were used having the Creative Commons Attribution licence: http://wonkydollandtheecho.com/thanks.html. They are downloadable via Vimeo, of course.

Videos available from NASA and the ALMA observatory were also used.

The video (not audio) is under the Creative Commons BY-NC-SA licence, which I think is quite reasonable since every scene used from the source videos (ok, almost every scene) has lyrics/graphics embedded on it.

I hope you like it! I didn’t have a lot of time to make this video but I like the result. The tools I used are unfortunately not open source, because the learning curve for the open source alternatives is quite steep. I will definitely try them in the future. Actually, I really haven’t come across any alternative to Adobe After Effects. You might say Blender… but is it really an alternative? Any thoughts?

PS. More news soon for the Sopler project (a web application for making to-do lists) and other things I’ve been working on lately (like MQTT-SN).

### Brendan Eich — MWC 2014, Firefox OS Success, and Yet More Web API Evolution

Just over a week ago, I left Barcelona and Mobile World Congress 2014, where Mozilla had a huge third year with Firefox OS.

We announced the $25 Firefox OS smartphone with Spreadtrum Communications, targeting retail channels in emerging markets, and attracting operator interest to boot. This is an upgrade for those channels at about the same price as the feature phones selling there today. (Yes, $25 is the target end-user price.)

We showed the Firefox OS smartphone portfolio growing upward too, with more and higher-end devices from existing and new OEM partners. Peter Bright’s piece for Ars Technica is excellent and has nice pictures of all the new devices.

We also were pleased to relay the good news about official PhoneGap/Cordova support for Firefox OS.

We were above the fold for the third year in a row in Monday’s MWC daily.

(Check out the whole MWC 2014 photo set on MozillaEU’s Flickr.)

As I’ve noted before, our success in attracting partners is due in part to our ability to innovate and standardize the heretofore-missing APIs needed to build fully-capable smartphones and other devices purely from web standards. To uphold tradition, here is another update to my progress reports from last year and from 2012.

First, and not yet a historical curiosity: the still-open tracking bug asking for “New” Web APIs, filed at the dawn of B2G by Andreas Gal.

Next, links for “Really-New” APIs, most making progress in standards bodies:

Yet more APIs, some new enough that they are not ready for standardization:

Finally, the lists of new APIs in Firefox OS 1.1, 1.2, and 1.3:

This is how the web evolves: by implementors championing and testing extensions, with emerging consensus if at all possible, else in a pref-enabled or certified-app sandbox if there’s no better way. We thank colleagues at W3C and elsewhere who are collaborating with us to uplift the Web to include APIs for all the modern mobile device sensors and features. We invite all parties working on similar systems not yet aligned with the emerging standards to join us.

/be

### John O'Duinn — RelEngCon 2014 registration is now open!

In case you missed the announcements, RelEngConf 2014 is now officially open for registrations. This follows the inaugural and wildly successful Release Engineering conference, held in San Francisco on 20may2013 as part of ICSE 2013. More background here.

Last year’s event was great. The mixture of attendees and speakers, from academia and battle-hardened industry, made for some riveting topics. So I already had high expectations for this year… no pressure on the organizers! Then I heard this year’s will be held at Google HQ in Mountain View, and feature opening keynotes from Chuck Rossi (RelEng, Facebook, click for linkedin profile) and Dinah McNutt (RelEng, Google, click for linkedin profile). Looks like RelEngConf 2014 is already lining up to be special too.

If you build software delivery pipelines for your company, or if you work in a software company that has software delivery needs, I recommend you follow @relengcon, block off April 11th, 2014 on your calendar and book now. It will be well worth your time.

See you there!
John.

### James Long — Open-Sourcing My Gambit Scheme iOS Game from 2010

Back in 2009-2010, I got Gambit Scheme running on iOS and decided to build a game with it. The result was Farmageddon, a stupid game where you blow up farm animals to avoid being hit by them.

I blogged about my progress working with Scheme on iOS back then and evidently a lot of people were inspired by it. This was the main blog post, in addition to a bunch of videos. Recently another iOS game was featured on Hacker News that was written in Gambit Scheme, and it inspired me to dredge up the source of my game and completely open source it and talk about it.

### Background

I used to work with Lang Martin and Ben Weaver at a small webdev shop right out of college. They were a little older than me and far more technically grounded than I was at the time. Occasionally I would hear "lisp" and "scheme" murmured around the office while trying to focus on my C++ game engine side project, and I thought they were just trying to sound cool.

Boy was my mind about to be blown. Eventually we all decided to play around with Scheme and see if we could use it internally. I knew nothing about it, but I tried to keep up with the conversation and more often than not ended up saying foolish things. Tired of feeling out of my depth, I committed to studying Scheme and it still influences me to this day. This is why it's so important to surround yourself with people smarter than you. I got lucky.

Fast-forward a few years later, I was feeling burned out at my job and decided to quit and try freelancing. I set aside the first few months to try and make an iOS game (this was right around the time iOS was exploding). Having fallen in love with Scheme, I endeavoured to make a game with Scheme and prove that it can be practical and performant, as well as making you more productive.

### Show Me the Source!

Enough talking, here's the source. You're looking at a completely unfiltered, raw project. Everything I was thinking of is in there somewhere. You're also looking at the messiest project with the worst code, ever.

I was so naïve back then. Set aside a couple months to build a game from scratch, including porting a whole language to a completely new platform? Are you kidding me?

I ported Gambit Scheme to iOS, which basically just means cross-compiling with the right options and writing the necessary FFIs. The actual port wasn't too much work, which was exciting but dangerous because it blinded me to the fact that I would have to build everything myself. Not only was I lacking an OpenGL rendering library, I didn't even have access to the OpenGL API. I had to write an FFI for that. (Actually, I wrote a Scheme program that parsed C++ header files and auto-extracted it.)

Additionally, I created sounds, 3d models, game mechanics, user interfaces, and a basic 3d engine. See all the resources here. I did hire a local designer to make some really cool gritty nuclear farm graphics for the game, but everything else I did myself. Which is why the game is terrible.

Regardless of how badly Farmageddon failed commercially, it was one of the most transformative experiences of my life. I learned tons about project scope, marketing, games, and a lot of other stuff. But even more, I got to experience working in a minimal but powerful language that I could shape to my needs, with a REPL/debugger always there to incrementally play with things.

It wasn't just continuations, green threads, macros, records, and tail-call optimizations that made me a better programmer. It was the idea of incremental development, where you could always redefine a function at run-time to try something new, or inspect and change any data structure. We've come close to that with browser devtools, but the experience still isn't quite what it should be.

So if you haven't already, you really should learn a Lisp. Personally I like Gambit, but Chicken and Racket are really good too. Clojure is great too, just a different flavor because it's not a minimal Scheme. It doesn't matter. Learn one of them.

### Development Videos

These are some videos I made showing off the real-time REPL and debugger. The first two were the most popular.

There are a few other ones as well.

### Source Commentary

The code is incredibly messy, but I feel warm and nostalgic looking at it. There are a few interesting things to point out about it.

1. Most of the Obj-C code is in src/app. The entry point is in main.m which initializes and configures the Gambit virtual machine. EAGLView.mm is where most of the code lies to interact with the iOS UI.

2. The main entry point for Scheme is in src/init.scm. At the bottom of the file are two FFI functions: c-init and c-render. Those are exposed as init and render at the C level and the Obj-C code calls into them.

3. All of the FFIs are in src/ffi. I think I wrote most of them by hand, and auto-generated a few of them. What's neat about Gambit is that you can embed any kind of C/C++/Obj-C code. For example, here is the FFI for invoking methods in the iOS view for changing the UI. The Scheme functions embed Obj-C code straight into them. You can see more of this in the iOS FFI which lets me allocate native iOS data structures. Lastly, you can see my attempts at optimizations by converting Scheme vectors into native C arrays.

4. The main game loop is in farmageddon.scm. Most of the work is in the various screens, like level.scm which renders and updates the main game.

5. The main component of the game engine is in src/lib/scene.scm. I used Gambit's native record types and wrote a macro to generate fields that dynamically dispatched on the type for making game entities.

6. All of my tests were simply top-level Scheme code that I live evaluated when the game was running. No automation for me!

7. Gambit has a powerful cooperative threading system, and I used it extensively. The game and sound system each had a thread and would send messages to the main thread for changing the game state. Each level had a thread running to fire off events at random intervals, and I could simply call thread-sleep! to wait for a certain period. Note that these aren't real threads, just cooperative ones, so it was all safe.
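
The cooperative, message-driven setup described in point 7 can be pictured with a Python analogue using generators as green threads (illustrative only; Gambit's threading system and `thread-sleep!` are far richer):

```python
from collections import deque

def run(threads):
    """Round-robin a set of generator-based 'threads' the way a
    cooperative scheduler would: each yield is a voluntary handoff
    back to the scheduler. Toy analogue of Gambit's green threads,
    not its implementation."""
    queue = deque(threads)
    log = []
    while queue:
        t = queue.popleft()
        try:
            log.append(next(t))  # run the thread until its next yield
            queue.append(t)      # re-queue it for another turn
        except StopIteration:
            pass                 # thread finished; drop it
    return log

def game():
    yield "game: update state"
    yield "game: render"

def sound():
    yield "sound: play effect"

events = run([game(), sound()])
```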

8. The remote debugger is in the emacs directory and my Emacs integration was called grime. Since I had a live REPL to my game in Emacs, I even wrote helper functions in Emacs to change game state and bound them to keys so I could quickly invoke them.

There's a lot more in there, and like I said it's very messy. But there's a lot of gems in there too. I hope it continues to inspire others.

## March 07, 2014

### Selena Deckelmann — Weekly Feminist Work in Tech by Mozillians roundup – Week of March 3, 2014

We have a ton of individual work done by MoFo and MoCo employees related to feminism, feminist activism and the larger technology community. So much is happening, I can barely keep track!

I’ve reached out to a few people I work with to get some highlights and spread the word about interesting projects we’re all working on. If you are a Mozillian and occasionally or regularly work on feminist issues in the tech community, please let me know! My plan is to ping people every Friday morning and post a blog post about what’s happened in the last week.

Dispatch from me, Selena Deckelmann:

• I’m presenting at SF Github HQ on Thurs March 13, 7pm as part of the Passion Projects series (Julie Horvath’s project). I’ll be talking about teaching beginners how to code and contribute to open source, specifically through my work with PyLadies. I’m giving a similar talk this afternoon at Portland State University to their chapter of the ACM.
• Just wrapped up a Git workshop for PyLadiesPDX and am gearing up for a test-run of a “make a Flask blog in 80-lines of code” workshop! Course materials are available here for “intro to git” workshops.
• Lukas, Liz, me and others (I’m not sure who all else!!) are coordinating a Geekfeminism and feminist hackerspace meetup at PyCon 2014. The details aren’t published yet, so stay tuned!
• PyLadies PyCon 2014 lunch is happening again!
• PyLadies will also be holding a Mani-Pedi party just like in 2013. Stay tuned for details!
• Brownbags for the most recent GNOME Outreach Program for Women contributors are scheduled for next Friday March 14, 10am and 2pm. (thanks Larissa!!) Tune in at http://air.mozilla.com. One of the GNOME Outreach Program for Women contributors is Jennie Rose Halperin, and another is Sabina Brown.

Dispatch from Liz Henry:

• I’m doing a lot of work to support Double Union feminist hackerspace, a nonprofit in San Francisco. We are hosting tech and arts workshops, and establishing connections with other hackerspaces in the US and around the world. Lukas is also involved with this effort! We have over 100 members now using the space.
• For PyCon I would like to host fairly informal sessions in our Feminist Hacker Lounge, on QA, bug triaging, and running/writing WebQA automated tests with pytest and selenium.
• I’m hoping to have funding for an OPW intern for this upcoming round to work on the back end of a QA community facilitating tool, using Python and various APIs for Mozilla tools like Bugzilla, Moztrap, and the Mozillians profiles.

Dispatch from Lukas Blakk:

• Just held the Lesbians Who Tech hackathon at the Mozilla SF space and it was an amazing weekend of networking, recruiting for Mozilla, doing a stump speech on the radical/political possibilities of open source, and also just a lot of social fun.
• I’m nearing the point of Project Kick Off for The Ascend Project, which will be a 6 week training course for persons underrepresented in the current tech mainstream (and underemployed/unpaid) who will learn how to write automatable tests for MozMill. This first one will take place at the Portland office in Sept/Oct 2014 (starts on Sept 8th). There’s so much more here, but this is just a sound bite.
• I’m trying to determine what budget I can get agreement on to put towards women in tech outreach this year.
• PyCon – yes! Such Feminist, So Hackerspace, Much gathering of geek feminists!

Dispatch from Larissa Shapiro:

• OPW wrapup and next session – we’re wrapping up the current round, scheduling brownbags for two of the current interns, etc. Funding is nearly secured for the next round and we have like 6 willing mentors. w00t.
• I’m also providing space for/speaking at an upcoming event in the Mountain View office: last year’s African Techwomen emerging leaders were part of a documentary and the Diaspora African Women’s Network is holding a screening and a planning session for how to support next year’s ELs and other African and African-American bay area women in tech both through this and other projects, March 29. Open to Mozilla folks, let me know if you’re interested.

Anything else that’s come up in the last week, or that you’d like Mozillians to know about? Let me know in the comments!

### Ludovic Hirlimann — Thunderbird 28.0b1 is out and why you should care

We’ve just released another beta of Thunderbird. We are now in the middle of the release cycle leading up to the next major version being released to our millions of daily users (we’ve fixed 200+ bugs since the last major release, version 24). Currently fewer than 1% of our users are on the beta, and that’s not enough to catch regressions: because Thunderbird offers mail, newsgroups and RSS feeds, we can’t cover all the ways our user base uses it. Also, many companies out there sell extensions for spam filtering, virus protection and so forth. The QA community just doesn’t have the time to try all of these and run them with Thunderbird betas to find issues.

And that’s where you, dear reader, can help. How, you might ask? Well, here is a list of examples of how you can help:

• You are a Rep? Make this post available in your language…
• You help on a support forum? Link to the beta download page, explaining why it’s important to have more users on the beta
• You work for a vendor that sells a product that integrates with Thunderbird? Qualify your product with the beta so that when we ship the final release, neither of us gets surprises.
• Your company uses Thunderbird? Set up a small group of beta users, gather the bugs and issues, and let us know.

If you find issues, let us know either through Bugzilla or through the support forums, so we can try to address them.

PS: the current download page says English only because of a bug in our build infrastructure for Windows. Linux and Mac builds are available localized.

### Al Billings — TrustyCon Videos Available

TrustyCon 2014 (maybe the only one ever) happened the other week as a competitor to the RSA convention because of perceived RSA collaboration with the NSA and all of the kerfuffle around the NSA and surveillance this last year. As they say on their site, “We welcome all security researchers, practitioners and citizens who are interested in discussing the technical, legal and ethical underpinnings of a stronger social contract between users and technology.”

The event sold out quickly so I was unable to attend. Helpfully, it was livestreamed, making it available to everyone, and the resulting video was put up on YouTube. Unfortunately, this video is one ginormous, seven-hour video. I don’t know about you, but I like my viewing in smaller chunks. I also tend to listen to talks and presentations, especially when there is no strong visual component, by saving the audio portion to my huffduffer account and listening to the resulting feed as a podcast.

I took it on myself to do a quick and dirty slice and dice on the seven-plus-hour video. It isn’t perfect (I’m a program manager, not a video editor!) but it works. I’ve uploaded the resulting videos to my YouTube channel in order to not destroy any servers I own. You can find the playlist of them all here, but I’ve also included the videos embedded below.

Additionally, I extracted the audio from each of these files and put an audio collection up on the Internet Archive, for people like me who just want to listen to them.

TrustyCon 2014 - Opening Remarks

TrustyCon 2014 - The Talk I Was Going to Give at RSA

TrustyCon 2014 - The Laws and Ethics of Trustworthy Technology

TrustyCon 2014 - Joseph Menn Interviews Bruce Schneier

TrustyCon 2014 - Securing SecureDrop

TrustyCon 2014 - New Frontiers in Cryptography

TrustyCon 2014 - Trusted Computing Tech and Government Implants

TrustyCon 2014 - Community Immunity

TrustyCon 2014 - Redesigning NSA Programs to Protect Privacy

TrustyCon 2014 - Thank You and Goodbye

## March 06, 2014

### Jess Klein — Designing BadgeKit

After several months of hard work by the Open Badges team, we are announcing that BadgeKit is available in private beta. This means that BadgeKit now comes in two forms: a hosted version of Mozilla BadgeKit, available in private beta for select partner organizations that meet specific technical requirements; and the code on GitHub, which anyone can download and run on their own servers.

BadgeKit is a set of open, foundational tools to make the badging process easy. It includes tools to support the entire process, including badge design, creation, assessment and issuing, remixable badge templates, milestone badges to support leveling up, and much more. The tools are open source and have common interfaces to make it easy to build additional tools or customizations on top of the standard core, or to plug in other tools or systems.

From a design perspective, this milestone represents refinements in user research and testing, user experience, user interface and branding.

We did user testing with members of the Hive in Brooklyn.
In preparation for this release, we conducted extensive user research to define the needs and goals of badge issuers. This work, led by Emily Goligoski, helped define requirements for the BadgeKit offering as well as inform the user experience. The research used a variety of methodologies; it is worth noting that all of this work was done in the open. Emily organized distributed user testing in key markets such as New York, Chicago and Toronto, doing everything from needs analysis to accessibility and functionality testing. The Open Badges weekly community calls were leveraged to pull in input from the highly motivated research and practitioner cohorts. Much of the work is documented both on her blog and in GitHub. We paired every implementation milestone with some form of user testing and iteration. While this may sound obvious, it was a new way of working for our team, and I can unequivocally say that the product is better because of this practice. User research and testing did not happen in a bubble; rather, it became completely integrated with our design and implementation cycle. As a result, developers and designers became comfortable making informed iterations on the offering, as developers, designers and team researchers all participated in some form of user testing over the past three months.

As a direct result of the extensive research and testing, the user experience for the entire BadgeKit offering was deeply refined. This work, led by Matthew Willse, introduced some new features, such as badge “templates,” which give any badge issuer the ability to clone a badge template and remix it. This gives us the unique ability to offer template packages based on common badge requests from the community, as well as eventually to empower the large Open Badges ecosystem to develop badge templates of their own (and perhaps explicitly state how they are comfortable with their content being shared and remixed). One component of this work that evolved as a direct result of testing was the increased attention to copy. Sue Smith led this work, which entailed everything from tooltip development and a glossary to API documentation. Considering that BadgeKit takes an issuer from badge definition and visual design to assessment and issuing, designing the user experience was no small effort, and the attention to detail, combined with designing in the open, proved to be a solid approach for the team.

Perhaps the most obvious design component of this release is the user interface design and brand definition. Adil Kim kicked off this work with an exploration of the brand identity. BadgeKit is under the parent brand of OpenBadges, which sits under the even larger parent brand of Mozilla - which gave us the constraints of designing within the brand guidelines. After exploring options to represent the visual metaphor for this modular system, here is the new logo:

The logo is meant to evoke the imagery of both a badge and a tool in one glance. For the untrained craftsperson (ahem): gaze into the mark and you will see a bolt. This connotes that BadgeKit is a tool, something that allows you to dive into the details and construct a badge, and a system, for your community. The logo incorporates the palette from Mozilla Open Badges in a playful mobius, at once implying that while this is a handcrafted experience, it is also a seamless one. This logo nicely fits into the larger brand family while reading on its own, as if to say, “hey, BadgeKit is the offering for badge MAKERS, dive in and get your hands dirty!”

The brand is in turn extended to user interface design. The overall art direction here was that this needs to be clean, yet approachable. We know that many organizations will not be using all of the components in the interface directly on badgekit.org, however, the design needs to take into account that everything needs to be accessible and read as remixable. Some details to note here are the simplified navigation, the palette and subtle details like the ability to zoom on hover over thumbnails.

It’s worth noting that while Emily, Matthew, Sue and Adil, as well as Carla, Meg, Erin, Jade, Sabrina Ng, Chloe and Sunny, were invested in much of this design work, there was an intentional yet organic partnership with the developers (Zahra, Erik, Andrew, Chris, Mavis Ou, Mike and Brian, plus many, many community contributors) who were doing the implementation. We had weekly critiques of the work and often engaged in conversation about design as well as implementation on GitHub.

Another component of this work is looking ahead toward future features. Chloe Varelidi led work here, thinking through the potential for badge and skill discovery. Under a grant from The Bill & Melinda Gates Foundation, Chloe and her team are thinking through ways to represent earner pathways. This will eventually be leveled up into the core BadgeKit offering, but you can start to dip your toes into those features by checking out the work here.

And the good news is that design never ends! Design isn’t just a destination, it’s an invitation to a conversation. Check it out, let us know what’s working and importantly, what’s not.

### Jim Chen — Fennec App Not Responding (ANR) Dashboard

Over the last few months, I've been working on an improved App Not Responding (ANR) dashboard for Fennec, which is now hosted at telemetry.mozilla.org/hang/anr. With the help of many people, I'm glad to say that the dashboard is now mature enough to be a useful tool for Fennec developers.

### ANR Reporting

The idea of ANR/hang reporting is similar to crash reporting: every time the Fennec UI becomes unresponsive for more than five seconds, Android shows an “App Not Responding” dialog; the ANR Reporter detects this condition and collects the following information about the hang:

• Stacks for Java threads in Fennec
• Stacks for Gecko threads (C++ stacks and profiler pseudo-stacks)
• System information listed in about:telemetry
• Fennec logs to help debug the hang

The ANR Reporter is enabled on Nightly and Aurora builds only, and if the user has not opted out of telemetry, the collected information is sent back to Mozilla, where the data are aggregated and presented through the ANR Dashboard. Because the debug logs may contain private information, they are not processed and are only available internally, within Mozilla.
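The trigger condition above (the UI thread failing to respond for more than five seconds) is easy to sketch. The following is an illustrative Python toy, not the reporter's actual code (the real reporter is Android/Java and hooks into the system ANR dialog, and `HangWatchdog` is a made-up name): the UI loop posts heartbeats, and a watchdog compares the last heartbeat against the five-second threshold.

```python
import time

ANR_THRESHOLD = 5.0  # seconds: Android shows the ANR dialog after ~5s of UI unresponsiveness

class HangWatchdog:
    """Toy hang detector: the UI loop calls heartbeat() regularly;
    a background thread calls check() to see if the UI has stalled."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.hang_detected = False

    def heartbeat(self):
        # Called from the responsive main/UI loop.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Called periodically from a watchdog thread.
        if time.monotonic() - self.last_heartbeat > ANR_THRESHOLD:
            self.hang_detected = True
        return self.hang_detected
```

In Fennec's reporter, the analogous detection point is where the Java and Gecko stacks listed above get collected and bundled into the telemetry submission.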

### ANR Dashboard

The ANR Dashboard presents weekly aggregated data collected through the ANR reporter. Use the drop-down list at the top of the page to choose a week to display.

Data for each week are then grouped by certain parameters from ANR reports. The default grouping is “appName”, and because ANR reports are specific to Fennec, you only see one column in the top hangs chart labeled “Fennec”. However, if you choose to group by, for example, “memsize”, you will see many columns in the chart, with each column representing a different device memory size seen from ANR reports.
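The grouping behind these charts amounts to a two-level count: reports are bucketed by the chosen parameter, then by hang signature. A minimal sketch (Python, with made-up report fields; this is not the dashboard's actual code):

```python
from collections import Counter, defaultdict

# Hypothetical, simplified ANR reports: one dict per report, carrying
# the grouping parameters and a hang-stack signature.
reports = [
    {"appName": "Fennec", "memsize": "768M", "signature": "hangA"},
    {"appName": "Fennec", "memsize": "768M", "signature": "hangA"},
    {"appName": "Fennec", "memsize": "1G",   "signature": "hangB"},
]

def group_hangs(reports, key):
    """Count hang signatures per value of the chosen grouping key."""
    groups = defaultdict(Counter)
    for r in reports:
        groups[r[key]][r["signature"]] += 1
    return groups

by_memsize = group_hangs(reports, "memsize")   # one chart column per memsize
by_app = group_hangs(reports, "appName")       # a single "Fennec" column
```

Grouping by "appName" collapses everything into one column, while grouping by "memsize" yields one column per device memory size, exactly as described above.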

Each column in the top hangs chart shows the number of hangs, and each column is further divided into blocks, each representing a different hang. Hover over the blocks to see the hang stack and the number of hangs. This example shows 8 hangs with that signature occurring on devices with 768MB of memory over the past week.

Colors are preserved across columns, so the same colored blocks all represent the same hang. The blue blocks at the bottom represent all hangs outside of the top 10 list.

To the right of the top hangs chart is the distributions chart. It shows how different parameters are distributed for all hangs. Hover over each block to see details. This example shows 36% of all hangs occurred on devices running Android API level 15 (corresponding to Android 4.0.3-4.0.4 Ice Cream Sandwich) over the past week.

The distributions chart can also be narrowed down to specific groups. This would let us find out, for example, on devices having 1GB of memory, what is the percentage of hangs occurring on the Nightly update channel.

Clicking on a block in the top hangs chart brings up a Hang Report. The hang report is specific to the column that you clicked on. For example, if you are grouping by “memsize”, clicking on a hang in the “1G” column will give you one hang report, and clicking on the same hang in the “2G” column will give you a different hang report. Switch grouping to “appName” if you want to ignore groups; in that case there is only one column, “Fennec”.

The hang report also contains a distributions chart specific to the hang. The example above shows that 14% of this hang occurred on Nexus 7 devices.

In addition, the hang report contains a builds chart that shows the frequency of occurrence for different builds. This example shows there was one hang from build 20140224030203 on the 30.0a1 branch over the past week. The chart can be very useful when verifying that a hang has been fixed in newer builds.

Last but not least, the hang report contains stacks from the hang. The stacks in the hang report are more detailed than the stack shown on the main page. You can also look at stacks from other threads — useful for finding deadlocks!

### Normalization

When comparing the volume of hangs, a higher number can mean two things: the side with the higher number is more likely to hang, or the side with the higher number has more usage. For example, if we are comparing hangs between devices A and B, and A has a higher number of hangs, it is possible that A is more prone to hanging; however, it is also possible that A simply has more users and therefore more chances for hangs to occur.

To provide better comparisons, the ANR Dashboard has a normalization feature that tries to account for usage. Once “Normalize” is enabled at the top of the dashboard, all hang numbers in the dashboard will be divided by usage as measured by reported uptime. Instead of displaying the raw number of hangs, the top hangs chart will display the number of hangs per one thousand user-hours. For example, 10 hangs per 1k user-hour means, on average, 1000 users each using Fennec for one hour will experience 10 hangs combined; or equivalently, one user using Fennec for 1000 hours will experience 10 hangs total. The distributions chart is also updated to reflect usage.
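The normalization itself is a simple rate calculation. As a sketch (not the dashboard's actual code):

```python
def hangs_per_1k_user_hours(hang_count, total_uptime_hours):
    """Normalize a raw hang count by usage, as the dashboard's
    'Normalize' option does: hangs per 1000 user-hours of uptime."""
    return hang_count * 1000.0 / total_uptime_hours

# 10 hangs observed across 1000 hours of combined usage
# (1000 users x 1 hour, or 1 user x 1000 hours) is the same rate:
rate = hangs_per_1k_user_hours(10, 1000)  # 10 hangs per 1k user-hours
```

Dividing by uptime is what lets columns with very different user populations (say, 768MB vs 2GB devices) be compared fairly.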

As a demonstration, the image below shows un-normalized hangs grouped by device memory size. There is no clear trend among the different values.

The image below shows normalized hangs based on the same data. In this case, it is clear that, once usage is accounted for, higher device memory size generally corresponds to lower number of hangs. Note that the “unknown” column became hidden because there is not enough usage data for devices with “unknown” memory size.

At the moment, I think uptime is the best available measurement for usage. Hopefully there will be a better metric in the future to provide more accurate results. Or let me know if it already exists!

### Peter Bengtsson — Github Pull Request Triage tool

Last week I built a little tool called github-pr-triage. It's a single page app that sits on top of the wonderful GitHub API v3.

Its goal is to try to get an overview of what needs to happen next to open pull requests. Or rather, what needs to happen next to get it closed. Or rather, who needs to act next to get it closed.

It's very common, at least in my team, that someone puts up a pull request, asks someone to review it and then walks away from it. She then doesn't notice that perhaps the integrated test runner fails on it, and the reviewer is thinking to herself "I'll review the code once the tests don't fail", and all of a sudden the ball is not in anybody's court. Or someone makes a comment on a pull request that the author misses in her firehose of email notifications. Now she doesn't know that the comment means the ball is back in her court.

Ultimately, the responsibility lies with the author of the pull request to pester and nag till it gets landed or closed, but oftentimes the ball is in someone else's court, and hopefully this tool makes that clearer.

Here's an example instance: https://prs.paas.allizom.org/mozilla/socorro

Currently you can use prs.paas.allizom.org for any public GitHub repo, but if too many projects eat up all the API rate limits we have, I might need to narrow it down to Mozilla repos. Or you can simply host your own; it's just a simple Flask server.

I'm getting more and more productive with Angular but I still consider myself a beginner. Saying that also buys me insurance when you laugh at my code.

So it's a single page app that uses HTML5 pushState and an Angular routeProvider to make different URLs. The server simply acts as a proxy for making queries to api.github.com and bugzilla.mozilla.org/rest, and the reason for that is caching. Every API request you make through this proxy gets cached for 10 minutes. But here's the clever part: every time it fetches actual remote data it stores it in two caches, one for 10 minutes and one for 24 hours. And when it stores it for 24 hours it also stores its last ETag so that I can make conditional requests. The advantage of that is that you quickly know if the data hasn't changed, and more importantly, a conditional request doesn't count against you in the rate limiter.

### Pierros Papadeas — Contribution Activity Metrics – Early attempts and fails

As we examined in the intro post, the need for contribution activity metrics across Mozilla's different contribution areas has been high. It was only logical that many attempts were made to address this issue, mainly at the area level (and not at a Mozilla-wide level). Almost all of them had zero interaction with each other, and there was a general lack of vision for a holistic approach to the problem.

After one of our initial gatherings as the (then meta-) Community Building Team, a couple of people brainstormed a possible solution to our problem. Together with Josh Matthews, Giorgos Logiotatidis, Ricky Rosario and Liz Henry, a new approach was born. Enter project Blackhole!

Project Blackhole was a collaborative effort to develop and maintain an infrastructure for gathering and serving raw contribution data within Mozilla. We created a data architecture and flow, together with a data schema and specification to describe contribution activities for the first time in Mozilla. The project went far enough (thanks to Josh) to create a working prototype for the back-end and front-end.

What went right:

Having a single project to drive multiple metrics efforts forward got people engaged. Everyone saw the value of de-duplicating efforts and tapping into that as a resource. Also, during the process of designing and testing it, we were able to self-identify as a group of people that share interest and commitment towards a common goal. Most of those people went on to become active members of the Systems and Data Working Group. Finally, we ended up with a common language and descriptions around contribution activities, a really valuable asset to have for the future of cross-project tracking.

What went wrong:

Building *anything* from scratch can be hard. Really hard. First, everyone (rightfully) questions the need to build something instead of re-using what is out there. Once you get everyone on board, development and deployment resources are hard to find, especially on short notice. On top of that, Blackhole's architecture *seemed* logical enough in theory, but it was never tested at scale, so nobody involved was 100% sure that it would survive stress tests and the scale of Mozilla's contribution ecosystem.

PRO TIP: Changing the project name does not help. We went from "Blackhole" to "Wormhole" (and back to "Blackhole"?) to better reflect the proposed data flow (data would not disappear forever!) and people got confused. Really confused. Which is obviously not helpful during conversations. Pick a name, and stick to it!

Finally, the lack of a team dedicated to the project, and our inability to get it listed as a personal goal of people (or teams), halted any progress, leading us to a fearsome dead end.

What we learned:

As with most failures, this one was also really valuable. We learned that:

• we need to be a top-line goal for people and teams
• we need to examine really well what is out there (internally or externally to Mozilla) and investigate the possibility of re-using it
• we need a clear and common language to make communications as effective as possible
• we need to be inclusive in all our procedures as a working group, with volunteers as well as all paid staff
• and in true Mozilla fashion: we need to start small, test and iterate, with a focus on modularity

A way forward?

With those lessons learned from the process, we sat down last December as a group and re-aligned. We addressed all five issues and now we are ready to move forward. And the name of it? Baloo. Stay tuned for more info in our next detailed post.

### Andrew Halberstadt — Add more mach to your B2G

#### Getting Started

tl;dr - It is possible to add more mach to your B2G repo!

To get started, install pip:

    wget https://raw.github.com/pypa/pip/master/contrib/get-pip.py -O - | python

Install b2g-commands:

    $ pip install b2g-commands

To play around with it, cd to your B2G repo and run:

    $ git pull                 # make sure repo is up to date
    $ ./mach help              # see all available commands
    $ ./mach help <command>    # see additional info about a command


#### Details

Most people who spend the majority of their time working within mozilla-central have probably been acquainted with mach. In case you aren't acquainted, mach is a generic command dispatching tool. It is possible to write scripts called 'mach targets' which get registered with mach core and transformed into commands. Mach targets in mozilla-central have access to all sorts of powerful hooks into the build and test infrastructure which allow them to do some really cool things, such as bootstrapping your environment, running builds and tests, and generating diagnostics.
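The registration-and-dispatch model can be sketched in a few lines of Python (purely illustrative; this is not mach's real API, and the command names are made up): commands register themselves with a central registry, and the dispatcher looks them up by name and runs them.

```python
# Illustrative sketch of command dispatching in the style of mach;
# this is not mach's actual API.
COMMANDS = {}

def command(name, description=""):
    """Decorator that registers a function as a named command."""
    def register(func):
        COMMANDS[name] = (func, description)
        return func
    return register

@command("help", "see all available commands")
def help_cmd():
    return sorted(COMMANDS)

@command("clobber", "remove the objdir/out directories")
def clobber_cmd():
    return "clobbered"

def dispatch(name, *args):
    """Look up a registered command by name and run it."""
    func, _ = COMMANDS[name]
    return func(*args)
```

Because registration is decoupled from the command implementations, 'mach targets' can live anywhere (including outside the main repo, as b2g-commands does below) and still show up in `mach help`.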

A contributor (kyr0) and I have been working on a side project called b2g-commands to start bringing some of that awesomeness to B2G. At the moment b2g-commands wraps most of the major B2G shell scripts, and provides some brand new ones as well. Here is a summary of its current features:

• Bootstrap your environment - sets up system packages needed to build (includes setting up gcc-4.6)
• Easy to discover arguments - no need to memorize or look up random environment variables
• Helpful error messages where possible - clear explanations of what went wrong and how to fix it
• Fully compatible with existing build system including .userconfig
• List Android vendor ids for udev rules
• Clobber objdir/out directories

I feel it's important to reiterate that this is *not* a replacement for the current build system. You can have b2g-commands installed and still keep your existing workflows if desired. Also important to note is that there's a good chance you'll find bugs (especially related to the bootstrap command on varying platforms), or arguments missing from your favourite commands. In this case please don't hesitate to contact me or file an issue. Or, even better, submit a pull request!

If the feature set feels a bit underwhelming, that's because this is just a first iteration. I think there is a lot of potential here to add some really useful things. Unfortunately, this is just a side project I've been working on and I don't have as much time to devote to it as I would like. So I encourage you to submit pull requests (or at least submit an issue) for any additional functionality you would like to see. In general I'll be very open to adding new features.

#### Future Plans

In the end, because this module lives outside the build system, it will only ever be able to wrap existing commands or create new ones from scratch. This means it will be somewhat limited in what it is capable of providing. The targets in this module don't have the same low-level hooks into the B2G and gaia repos like the targets for desktop do into gecko. My hope is that if a certain feature in this module turns out to be especially useful and/or widely used it'll get merged into the B2G repo and be available by default.

Eventually my hope is that we implement some deeper mach integration into the various B2G repos (especially gaia) which would allow us to create even more powerful commands. I guess time will tell.

### Christian Heilmann — Translating marketing texts for speaking – an experiment

As part of the workweek I am currently at, I set a goal to give a brownbag on “writing for speaking”. The reason is that some of the training materials I recorded for the Mobile World Congress were great marketing/press materials but quite a pain to speak into a camera, reading them from a teleprompter.

For the record: the original text is a good press release or marketing article. It is succinct, it is full of great soundbites and it brings the message across. It is just not easy to deliver. To show the issues and explain how that kind of wording can come across, I took the script apart. I explained paragraph by paragraph what the problems are and proposed a replacement that is more developer-communication friendly. You can see the result on GitHub:

http://codepo8.github.io/marketing-translation/

The result is an easier to deliver text with less confusion. Here’s a recording of it to compare.

I will follow this up with some more materials on simpler communication for speaking soon.

### Lawrence Mandel — Lawrence Mandel Joins Mozilla Release Management

I’m excited to share that I am stepping into a new role with Mozilla as manager of the Release Management team. Below is an e-mail that my friend and manager Sheila Mooney sent to Mozilla employees last week announcing this change.

Date: Fri, 28 Feb 2014 11:19:07 -0800 (PST)
From: Sheila Mooney
To: team Mozilla
Subject: Changes in Release Management

All,

I am happy to share some changes I am making to my team. Effective immediately, Lawrence Mandel will be moving into the role of Manager of the Release Management team. With the Release Managers in tight collaboration with the Project/Program Managers, we can think beyond just keeping the trains running on time and tighten our focus on quality, metrics and process to ensure we are shipping the best possible products to our users. Lawrence's experience inside and outside Mozilla aligns closely with these goals and I am very excited to see what he does with this role!

Lawrence will be transitioning many of his current project management responsibilities to others in my team in order to focus fully on this new challenge. The Web Compatibility Engineers will continue to report to him and Chris Peterson will report to me directly.

Please join me in congratulating Lawrence on his new opportunity!

Cheers,
Sheila

Tagged: mozilla, release management

### Byron Jones — happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

• [977505] Make the dashboard white-on-red counter easier to click
• [975195] “Your Outstanding Requests” emails don’t include superreview requests
• [956229] develop a system to track the lifetime of review/feedback/needinfo requests
• [926962] all tracking flags are visible on the ‘change many bugs at once’ page
• [977761] Create product and affiliations for Intellego project
• [978941] grammar issue
• [916633] join_activity_entries doesn’t reconstitute text with commas correctly.
• [966180] enable USE_MEMCACHE on most objects
• [956230] improve instrumentation of bugzilla’s internals
• [979092] changing timezone breaks MyDashboard
• [979727] increase the mod_perl sizelimit to 700_000 on production
• [979411] Fix content-type for woff files
• [977137] Comment and Preview tabs need accessibility markup
• [977523] Comment textarea has padding:0
• [980056] ReferenceError: REVIEW is not defined page.cgi javascipt error when viewing a patch in Splinter
• [979431] Please rename Talkilla product to Loop and update User Stories extension

discuss these changes on mozilla.tools.bmo.

Filed under: bmo, mozilla

### Amazon Pricing - Expensive or Cheap?

Amazon ondemand nodes are fantastic for rapid iteration, but using them in production is expensive naivety. It is expensive for Amazon to maintain spare capacity so that customers can launch any of the wide variety of node types they offer ondemand. Forecasting demand at Amazon scale can’t be easy. As a result, Amazon recommends that customers buy reserves with an upfront payment and then pay a discounted rate after. This is brilliant, as it shifts the capacity-planning burden to each customer. This would net us a 60% discount if we could forecast our AWS usage perfectly.

Fortunately, Amazon also has a spot-pricing model. Spot prices can be 70-90% lower than ondemand (we’ve also seen them 50% higher). The downside is that Amazon can kill these nodes at any point, and node availability is limited compared to ondemand. Given that Amazon’s competition can’t match spot prices, Amazon might be selling their unused ondemand capacity at cost. I doubt that anyone smaller than Amazon can maintain their own hardware with salaried ops for less than Amazon’s spot prices.
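To make the trade-off concrete, here is a back-of-envelope comparison. The discount percentages come from the figures above; the hourly rate is hypothetical, not an actual AWS price.

```python
# Hypothetical numbers to illustrate the pricing-model trade-off.
ONDEMAND_HOURLY = 0.10      # $/hour, made up for illustration
RESERVED_DISCOUNT = 0.60    # ~60% off, assuming perfect usage forecasting
SPOT_DISCOUNT = 0.80        # spot is often 70-90% below ondemand

hours = 24 * 30             # one node-month

ondemand = hours * ONDEMAND_HOURLY
reserved = hours * ONDEMAND_HOURLY * (1 - RESERVED_DISCOUNT)
spot = hours * ONDEMAND_HOURLY * (1 - SPOT_DISCOUNT)
# spot is the cheapest, but nodes can be killed at any point;
# reserved needs accurate forecasting; ondemand needs neither.
```

The catch, of course, is that the spot savings only materialize if your workload can survive nodes disappearing, which is what the retrofitting described below was for.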

### Spot Everything

We have spent 2014 retrofitting our continuous integration architecture to cope with failure so we can run more of our workload on spot nodes.

On our January AWS bill we were 30% more cost-efficient. This was accomplished late in the month; we managed to keep the bill from going up while coping with a higher-than-ever load. For February we were aiming to drop the bill to under $80K. The following is a summary of where we are.

### Provisioning

• We now run the majority of our workload on Amazon spot nodes. The spot:ondemand ratio is between 2:1 and 7:1. Note we still pay more for the ondemand portion of our bill because ondemand is a lot more expensive.
• At $74,389.03, our February bill is 36% lower than January’s.
• Our current AWS spending per job is approximately half of what we paid in December
• We now bid on a range of AWS node types to maximize node availability and minimize price. This results in a >=50% lower spot bill. We now run a portion of our workload on 2x-faster VMs when cheaper spot machine types are not available.

### Scheduling

• Our AWS scheduler ramps up slower now to avoid temporary overprovisioning. Note the improvement on the right side of the graph (tall & narrow spikes are bad)

### Monitoring

• We are evaluating hostedgraphite.com for monitoring our efficiency. It’s nice to have someone offer a well-supported, open-source-compatible solution that can cope with the 30K+ metrics our 1000s of VMs generate.

### Mozilla Data Center plans for March

Amazon S3 is cheap, fast and robust. EC2 is incredibly flexible. Both are great for quickly iterating on cool ideas. Unfortunately most of our infrastructure runs on physical machines. We need to improve our non-elastic in-house capacity with what we learned in the cloud:

• Use a shared object cache for Windows/Mac builds. This should more than double Windows build speed. The plan is to use Ceph for S3-compatible shared object storage.
• Get OpenStack bare metal virtualization working so we could move as fast there as we do in EC2

### Cloud Plans for March

• Eliminate EBS usage for faster builds, 10% lower EC2 bill. Amazon EBS is the antithesis of cost-effectiveness.
• Deploy more jacuzzis for faster builds and fewer EC2 instances
• Run more things on spot, switch to cheaper ondemand nodes, maybe buy some reserves
• Bid on an even wider variety of spot nodes
• Probably won’t hit another 30% reduction; focusing on technical debt, better metrics, etc.
• Containerization of Linux builds

### Conclusion

Cloud APIs make cost-oriented architectures fun. The batch nature of continuous integration is a great match for spot.

In general, spot is a brilliant design pattern, I intend to implement spot workloads on our own infra. It’s too bad other cloud vendors do not offer anything comparable.

## March 05, 2014

### Dave Townsend — Developer Tools meet-up in Portland

Two weeks ago the developer tools teams and a few others met in the Portland office for a very successful week of discussions and hacking. The first day was about setting the stage for the week and working out what everyone was going to work on. Dave Camp kicked us off with a review of the last six months in developer tools and talked about what is going to be important for us to focus on in 2014. We then had a little more in-depth information from each of the teams. After lunch a set of lightning talks went over some projects and ideas that people had been working on recently.

After that everyone got started prototyping new ideas, hacking on features and fixing bugs. The amount of work that happens at these meet-ups is always mind-blowing and this week was no exception, even one of our contributors got in on the action. Here is a list of the things that the team demoed on Friday:

This only covers the work demoed on Friday; a whole lot more went on during the week, as a big reason for doing these meet-ups is so that groups can split off to have important discussions. We had Darrin Henein on hand to help out with UX designs for some of the tools, and Kyle Huey joined us for a couple of days to help work out the final kinks in the plan for debugging workers. Lots of work went on to iron out some of the kinks in the new add-on SDK widgets for Australis, there were discussions about memory and performance tools, as well as some talk about how to simplify child processes for Firefox OS and electrolysis.

Of course there was also ample time in the evenings for the teams to socialise. One of the downsides of being a globally distributed team is that getting to know one another and building close working relationships can be difficult over electronic forms of communication so we find that it’s very important to all come together in one place to meet face to face. We’re all looking forward to doing it again in about six months time.

### Brian R. Bondy — Investing in our community through Code Firefox

Mozilla 2014 Goal

As Deb Cohen (VP, Chief of People) said:
Our goal this year is to find new ways for people to become part of us. This year we have a very aggressive goal, for all of us to look inside our teams, our work, our organization, and create some of these pathways.

What is Code Firefox?

The Code Firefox site (@codefirefox) is a new video and exercise framework which is highly inspired by Khan Academy. Its purpose is to on-board contributors and sharpen the skills of existing community members.

People learn differently. Providing different ways to consume information is crucial to growing Mozilla. For many people, Code Firefox provides an easier, more consumable approach to learning how to contribute to Mozilla.

Would you like to start contributing patches to Mozilla? Start here

Testimonial

One prime example of Code Firefox's usefulness is a new contributor who joined last month and actively uses the site. He has already submitted 18 patches.

In a personal email Anuj Agarwal (Student at Vellore Institute of Technology) writes:
Just for fun, here is the patch I got reviewed today! And it was made possible only because of your tutorials. Thank you!

Code Firefox statistics

The site has not been promoted yet, but it already has:

• ~46,000 unique visits since it was released on October 30, 2013
• ~10,000 full video views across the 53 videos
• Most functionality is available without an account, but there are already over 110 Code Firefox accounts and 35 YouTube subscribers. Most of these account holders are new Bugzilla users as well, and several have submitted patches.

What has Code Firefox accomplished?

• 51 videos and 3 exercises, focused mostly on on-boarding new Firefox developers
• Videos are captioned
• Ability to track watched videos and completed exercises when logged in
• Exercise framework that parses code and gives immediate feedback and experience
• Videos and exercises are accessible and searchable through tags

Where is the Code Firefox site going? (Code Firefox Goals)

• Opening up the ability to post videos to other people
• Ability to create different "outline tracks", for example a QA track
• More video and exercise content being added, initially for Firefox OS and Desktop
• Working more closely with Student Ambassadors
• Implementing a point system
• Interoperability with Mozillians in creative ways
• Implementing hints for exercises

Highlights of benefits gained by Mozilla from the site

• Brings in new contributors
• Retains new and existing contributors
• Sharpens skills of existing members of our community
• Code Firefox uses and promotes Persona
• Code Firefox will use and promote Open Badges

Media coverage of Code Firefox so far

Some people have taken the time to write about Code Firefox:

What our community is saying

• Robert O'Callahan (Distinguished Engineer):
In all my years with Mozilla I've never seen such an exciting effort to engage new Firefox contributors. Well done!

• Josh Matthews (Community Engagement Lead):
To me, Code Firefox's value lies in the clear, forthright instruction provided, along with the bite-sized nature of the lessons, which provides a counterpoint to the existing long-winded and slightly overwhelming textual documentation. I have interacted with countless potential volunteers over the past several years, and there is a non-trivial segment of the population that immediately looks for instructional videos to aid in understanding. In addition, perhaps counter-intuitively, video explanations are often preferred by non-native English speakers as a visual reference to aid in comprehension, and I'm often asked at events whether these exist for Firefox development.

• Chris Peterson (Engineering Program Manager):
Facebook has its infamous bootcamp for new employees. Code Firefox should be required watching, end-to-end, for all new Mozilla engineers. Even non-engineers can benefit from the introductory videos. Given the positive community response and patches to prove it, Brian has shown that this e-learning format works. More than just lessons, Brian has built Code Firefox as a platform. I would love to see other Mozilla teams contribute videos for their feature areas. Even within Mozilla we have some technology silos that could learn from each other.

• Lukas Blakk (Senior Release Manager):
Getting into Open Source contribution is not always a clear path. In my experience, it took me 4 different attempts (MDN, Sumo, QA, AMO) before I found my path for a deeper dive and got 'hooked' on contributing to Mozilla. These videos and the technical mentorship they provide is already proving to be a valuable resource for engaging new coders and that is an area that can be hardest for a newcomer to break into alone since we are now such a legacy project. Fortunately Code Firefox is also a budding online community and that ensures no one is ever truly alone as they move up in their learning. Having someone (or several people) getting to focus more time on this particular pathway will ensure more robust community surrounds our software products and makes what we give back a product that is a reflection of that wider community. Win-win.

• Jared Wein (Firefox Engineer):
Code Firefox is an amazing site that is full of detailed, finely-crafted videos that guide interested people from being Firefox users to Firefox contributors. It is an invaluable tool and one that I wished was around when I was just getting started contributing to Firefox. Projects the size of Firefox can be very daunting and scary to newcomers. The tutorials on the site bring a level of comfort not found in many other places, and walk people through steps at a pace that they can handle. Before they know it they are knee deep in a great community of people who are working together to make Firefox and the Internet a better place for everyone.

• David Boswell (Community Engagement Lead):
Code Firefox and other efforts to help support coding contributors are crucial to growing and empowering new members of the Mozilla community.

• Kamil Jozwiak (QA Engineer):
Code Firefox has helped me out tremendously while I was contributing to Mozilla. It helped me complete my first patch and it's an amazing resource to sharpen your skills. I'm a Code Firefox lifer and visit the site on a daily basis to see if anything new has been added. I really believe in this project and I know from first hand experience how powerful this resource can be in helping people interested in the Mozilla mission.

• Rodrigo Silveira (Firefox Engineer):
Getting started with contributing to any code base can be daunting. Code Firefox videos are presented in such a friendly way that makes contributing to Firefox a lot more approachable. Being able to learn through video at your own pace is convenient too, you can get from zero to landing patches much faster. I liked it so much I had to help translate it so more people could benefit.

• Marco Mucci (Engineering Program Manager):
Part of wanting to make a difference is learning how to make one. Code Firefox is the tool of choice for learning how to contribute to Mozilla.

• Samuel Foster
@codefirefox has grown into a great resource for lifting you over the learning curve involved in making firefox awesomer. Nice work.

• David Bolter:
If you want to watch video tutorials on how to hack on Firefox, follow @codefirefox.

• Mark Finkle:
The Module Owner and Becoming A Reviewer videos on @codefirefox are awesome: http://codefirefox.com/#reviewing-code

• Till Schneidereit:
This really is fantastic - I wish I'd had something like these videos when I started contributing.

• Margaret Leibovic:
I know I'm late to the party, but @codefirefox is seriously awesome. Great work by @brianbondy!

• Lucas Rocha:
If you've been considering contributing to Firefox, @codefirefox is definitely a good start. Big thanks @brianbondy! http://buff.ly/18LSSZZ

• Asa Dotzler:
Mozillian @brianbondy has started publishing video tutorials on Firefox hacking. http://codefirefox.com/videos Follow @codefirefox for updates.

• Christian Heilmann:
Superb framework to create code tests by @brianbondy https://github.com/bbondy

• Mozilla Pakistan:
@codefirefox.com now supports accounts via @MozillaPersona You can track which videos you’ve watched too - pic.twitter.com/I4EizTAzbk

• Mozilla India:
Becoming a Mozilla contributor is now even more easy. http://codefirefox.com/ site aim to outline the process... http://fb.me/1ChAz9qPE

• Ian Elliot:
Other open source projects could also benefit from the same sort of idea - are you listening Linus?

• Alessio Placitelli:
Thank you @brianbondy for both http://codefirefox.com and http://goo.gl/bybZKK ! Very helpful for new #Firefox contributors!

• Jennie Rose Halperin:
http://codefirefox.com/ wanna get deep with your code? totally digging this @mozilla

codefirefox.com/ these videos are awesome. setting up firefox dev setup is pretty easy

• Rodrigo Ludgero:
Do the right thing. Some lessons to be learned. http://codefirefox.com/ I thought it was a great initiative. #codefirefox @codefirefox

• serge2k:
This is awesome, and I wish more projects would do it.

• iamleppert:
+1 for whoever's idea this was. We need more guides like this that ask that answer the basic questions on just dev process and work flows. Let's make it easier for people to contribute!

@FremyCompany I agree. I think the http://codefirefox.com/ project is our best bet. Not sure if Chrome could have that (let alone Opera, IE…)

• Michael Mahemoff:
Encouraging open source contributions, done right by Moz at http://CodeFirefox.com

• суперсолэ:
(now a serious tweet) If you're interested in contributing to Firefox, this is a fantastic site: http://codefirefox.com

• Ashley Myers:
OMG. This is so awesome. Videos on how the heck to actually contribute to Firefox: http://codefirefox.com cc/ @codefirefox

• Mike Rogers:
Decided over my Xmas break I'm going to try and contribute to Firefox somehow http://codefirefox.com

• Dongie Agnir:
@codefirefox Just wanted to let you know your website is an excellent resource! Please keep up the great work!

• Brian Graham:
This is awesome. I wish this existed when I wrote my first #FireFox patch.

• Swarnava Sengupta:
Thank you @codefirefox for such a great collection! Hope it will produce more developer all around the world! :)

• Mike Evans:
@angelinamagnum I like the style of @codefirefox, allows someone to get up to speed in an easy to digest manner. so more of that please :)

• Michael Kohler:
That's how I did my first patches! RT @codefirefox: [...]feature from start to finish, without knowing the code

• Hundreds of other tweets about Code Firefox

Would you like to create Code Firefox videos?

### Joel Maher — quick tip – when all else fails – “reseat”

While chatting with dminor the other day he mentioned his camera stopped working, and after a reboot there was no mention of the camera hardware in the logs or via dmesg. His conclusion: the camera was not working. Since I have the same hardware and run Ubuntu 13.10, as he does, he wanted a sanity check. My only suggestion was to turn off the computer, unplug it and take the battery out, wait 30 seconds, then reassemble and power on.

Hey, my suggestion worked, and now dminor has a working camera again.

This general concept of reseating hardware is something that is easily forgotten, yet is so effective.

### Dave Huseby — Hacking B2G: Building with Distcc

How to set up a ramdisk and distcc for faster B2G builds.

### Have correct values in the manifest

The app type has to be at least privileged. No, web (hosted) apps won’t work with different domains if CORS is not enabled there.

Add the systemXHR permission too, or your XMLHttpRequests to domains without CORS enabled will instafail.

Example of the syntax for these two things in the manifest:

{
  "type": "privileged",
  "permissions": {
    "systemXHR": {
      "description": "Allows loading remote content"
    }
  }
}

### Add the “magic word” when creating XMLHttpRequest objects

The magic word being the mozSystem property:

request = new XMLHttpRequest({ mozSystem: true });

You should now be able to use the request in the usual way. Remember to add an error handler in case it fails!
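A minimal sketch of such a request with both handlers wired up (the `fetchCrossOrigin` helper and its injectable constructor argument are hypothetical, added here only so the sketch isn't tied to a Firefox OS runtime; in a real privileged app you would pass the global XMLHttpRequest):

```javascript
// Hedged sketch: fetch a cross-origin URL with a systemXHR-enabled request.
// XHRCtor is injected so the helper can be exercised outside Firefox OS.
function fetchCrossOrigin(XHRCtor, url, onSuccess, onError) {
  // mozSystem requests bypass same-origin restrictions (privileged apps
  // with the systemXHR permission only)
  var request = new XHRCtor({ mozSystem: true });
  request.open('GET', url, true);
  request.onload = function () { onSuccess(request.responseText); };
  // Always wire an error handler: the request can still fail (network
  // down, permission missing in the manifest, bad URL, server error)
  request.onerror = function () { onError(new Error('XHR failed: ' + url)); };
  request.send();
  return request;
}
```

Usage would be `fetchCrossOrigin(XMLHttpRequest, 'http://example.org/defs', render, showError)`, with `render`/`showError` supplied by the app.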

We are using these superfailproof requests in our privileged app template to load definitions from MDN. Have a look!

Today two very exciting things happened:

1. Mozilla Brick 1.0 was finally released! That means that we can now… start working on 1.1! YAY, WE NEVER STOP!
2. Angelina’s article on Brick + X-Tag and web components in general was published: Custom Elements for Custom Applications – Web Components with Mozilla’s Brick and X-Tag

I have worked on both things (more on Brick than on the article, which I just helped edit), so it’s naturally obvious that I’m excited they’re finally published. And also, since the article is Chapter 1 (as mentioned in my post), that means that next week you get Chapter 2, i.e. my article.

Stay put, friends. It’s going to rock. Almost literally.

### David Ascher — Product Thinking

I have a new job!  Still with Mozilla, still doing a lot of what I’ve done in the past, just hopefully more/better/faster.  The group I’m joining has a great culture of active blogging, so I’m hoping the peer pressure there will help me blog more often.

What’s the gig you ask? My new focus is to help the Mozilla Foundation make our products as adoptable as possible.

MoFo (as we affectionately call that part of the Mozilla organization) has a few main ways in which we’re hoping to change the world — some of those are programs, like Open News and the Science Lab, some are products. In a program, the change we’re hoping to effect happens by connecting brains together, either through fellowship programs, events, conferences, things like that. That work is the stuff of movement-building, and it’s fascinating to watch my very skilled colleagues at work — there is a distinctive talent required to attract autonomous humans to a project, get them excited about both what you’re doing and what they could do, and empowering them to help themselves and others.

Alongside these programmatic approaches, MoFo has for a while been building software whose use is itself impactful.  Just like getting people to use Firefox was critical to opening up the web, we believe that using products like the Webmaker tools or BadgeKit will have direct impact and help create the internet the world needs.

And that’s where I come in!  Over the last few years, various smart people have kept labeling me a “product person”, and I’ve only recently started to understand what they meant, and that indeed, they are right — “product” (although the word is loaded with problematic connotations) is central for me.

I’ll write a lot more about that over the coming months, but the short version is that I am particularly fascinated by the process that converts an idea or a pile of code into something that intelligent humans choose to use and love to use.  That translation to me is attractive because it requires a variety of types of thinking: business modeling, design, consumer psychology, and creative application of technology.  It is also compelling to me in three other aspects: it is subversive, it is humane, and it is required for impact.

It is subversive because I think if we do things right, we use the insights from billions of dollars worth of work by “greedy, evil, capitalist corporations” who have figured out how to get “eyeballs” to drive profit and repurpose those techniques for public benefit — to make it easy for people to learn what they want to learn, to allow people to connect with each other, to amplify the positive that emerges when people create.  It is humane because I have never seen a great product emerge from teams that treat people as hyper-specialized workers, without recognizing the power of complex brains who are allowed to work creatively together.  And it is required for impact because software in a repo or an idea in a notebook can be beautiful, but is inert.  To get code or an idea to change the world, we need multitudes to use it; and the best way I know to get people to use software is to apply product thinking, and make something people love.

I am thrilled to say that I have as much to learn as I have to teach, and I hope to do much of both in public.  I know I’ll learn a lot from my colleagues, but I’m hoping I’ll also get to learn on this blog.

I’m looking forward to this new phase, it fits my brain.

## March 04, 2014

### Rizky Ariestiyansyah — OpenX Quiz : Test your knowledge about Mozilla and Open Web

OpenX Quiz Version 1.0.2 Updated. This game is a fun way to learn about the Internet, the Open Web and Mozilla. Currently only 3 levels are available; more will be added in the future. OpenX Quiz uses jQuery, Bootstrap, Firefox...

### Can you safely deliver entropy to a remote system?

Running a system without enough entropy is like tolerating a toothache: something you’d really like to fix, but not quite bothersome enough to deal with.

I recently bought a Simtec EntropyKey to fix this locally: it’s a little USB dongle with avalanche-noise generation hardware and some firmware to test/whiten/deliver the resulting stream to the host. The dongle-to-host protocol is encrypted to protect against even USB man-in-the-middle attacks, which is pretty hardcore. I like it a lot. There’s a simple Debian package that continuously fills /dev/random with the results, giving you something more like this (which would look even better if Munin didn’t use entropy-consuming TCP connections just before each measurement):

But that’s on local hardware. What about virtual servers? I’ve got several remote VPS boxes, little Xen/KVM/VirtualBox slices running inside real computers, rented by the hour or the month. Like many “little” computers (including routers, printers, embedded systems), these systems are usually starved for entropy. They lack the sources that “big” computers usually have: spinning disk drives and keyboards/mice, both of which provide mechanical- or human- variable event timing. The EntropyKey is designed to bring good entropy to “little” machines. But I can’t plug a USB device into my remote virtual servers. So it’s pretty common to want to deliver the entropy from my (real) home computer to the (virtual) remote boxes. Can this be done safely?

### Nope!

Well, mostly nope: it depends upon how you define the threat model. First, let’s go over some background.

### Guessing Internal State

Remember that entropy is how you measure uncertainty, and it’s always relative to an observer who knows some things but not others. If I roll an 8-sided die on my desk right now, the entropy from your point of view is 3 bits. From my point of view it’s 0 bits: *I* know I just rolled a five. And now that *you* know that I rolled a five, it’s 0 bits from your POV too.
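The die example can be checked with a few lines of JavaScript (a hedged illustration: `entropyBits` is just the standard Shannon formula applied to each observer's view, not code from any real RNG):

```javascript
// Shannon entropy, in bits, of a discrete probability distribution.
function entropyBits(probs) {
  return probs.reduce(function (h, p) {
    // terms with p = 0 contribute nothing
    return p > 0 ? h - p * Math.log2(p) : h;
  }, 0);
}

// Your view of a fair 8-sided die: eight equally likely outcomes.
var yourView = [1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8];
// My view after rolling it: I know it came up five.
var myView = [0, 0, 0, 0, 1, 0, 0, 0];

console.log(entropyBits(yourView)); // 3 bits
console.log(entropyBits(myView));   // 0 bits
```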

Computers use entropy to pick random numbers for cryptographic purposes: generating long-term SSH/GPG/TLS keys, creating ephemeral keys for Diffie-Hellman negotiation, unique nonces for DSA signatures, IVs, and TCP sequence numbers. Most of these uses are externally visible: the machine is constantly shedding clues as to its internal state. If the number of possible states is limited, and an eavesdropper can observe all (or most) of these clues, then they can deduce what that internal state is, and then predict what it will be next. The amount of computation Eve needs to do this depends upon how uncertain she is, and on the nature of the clues.

The most conservative model assumes that Eve sees every packet going into and out of the system, with perfect timing data, and that she knows the complete state of the system before the game begins (imagine that Eve creates a VM from the same EC2 AMI as you do). If she is truly omniscient, and the system is deterministic, then she will know the internal state of the system forever: all she has to do is feed her own clone the same input as your box receives, at the same time, and watch how its internal state evolves. She doesn’t even need to watch what your box outputs: it will always emit the same things as her clone.

If she misses a few bits (maybe she can’t measure the arrival time of a packet perfectly), or if there are hidden (nondeterministic) influences, then she needs to guess. For each guess, she needs to compare her subsequent observations against the predicted consequences of that guess, to determine which guess was correct. It’s as if she creates a new set of nearly-identical VMs for each bit of uncertainty, and then throws out most of them as new measurements rule them out.

There might be a lot of potential states, and it might take her a lot of CPU time to test each one. She might also not get a lot of observations, giving her fewer opportunities to discard the unknowns. Our goal is to make sure she can’t keep up: at any important moment (like when we create a GPG key), the number of possibilities must be so large that all keys are equally likely.

(In fact, our goal is to make sure she can’t retroactively catch up either. If we create a key, and then immediately reveal all the internal state, without going through some one-way function first, she can figure out what the state was *earlier*, and then figure out the key too. So the system also needs forward-security.)

### Bootstrapping Towards Entropy Is Technically Impossible …

To get out of this compromised Eve-knows-everything state, you have to feed the system enough entropy (bits that Eve doesn’t see) to exceed her ability to create and test guesses. But she’s watching the network. So you must feed entropy in locally (via the keyboard, locally-attached hardware, or non-deterministic execution).

Could you deliver entropy remotely if you encrypted it first? Sure, but you have to make sure Eve doesn’t know the key, otherwise she can see the data too, and then it isn’t entropy anymore. Encrypting it symmetrically (e.g. AES) means your remote random-number generator machine shares a secret key with the VM, but we already assumed that Eve knows the VM’s entire state, so it has no pre-existing secrets from her. To encrypt it asymmetrically (via a GPG public key) means the VM has a corresponding private key: again, Eve’s insider knowledge lets her decrypt it too.

Can you use authenticated Diffie-Hellman to build a secure connection *from* the VM to the remote entropy source? This would put a public key on the VM, not a private one, so Eve doesn’t learn anything from the key. But DH requires the creation of a random ephemeral key (the “x” in “g^x”), and Eve can still predict what the VM will do, so she can guess the ephemeral key (using the published g^x to test her guesses), determine the shared DH key, and decrypt the data.
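A toy sketch of that attack, with deliberately tiny illustrative numbers (nothing here is a real DH group or real key size), shows why a guessable ephemeral key hands Eve the shared secret:

```javascript
// Square-and-multiply modular exponentiation over BigInts.
function modpow(base, exp, mod) {
  var r = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) r = (r * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return r;
}

var p = 2147483647n, g = 7n;        // public parameters (toy-sized)
var x = 2026n;                      // the VM's "random" ephemeral key
var gx = modpow(g, x, p);           // published on the wire
var y = 911n, gy = modpow(g, y, p); // the far end's half of the exchange
var shared = modpow(gy, x, p);      // the key the VM derives

// Eve never sees x directly, but the VM's state space is small enough
// to enumerate; she tests each guess against the published g^x:
function crackX(gx, g, p, maxGuess) {
  for (var guess = 1n; guess <= maxGuess; guess++) {
    if (modpow(g, guess, p) === gx) return guess;
  }
  return null;
}
var recovered = crackX(gx, g, p, 5000n);
var evesKey = modpow(gy, recovered, p);
console.log(evesKey === shared); // true: Eve reads the "secure" channel
```

The defense in real DH is that x is drawn from a space far too large to enumerate; that is exactly what a state-compromised VM cannot provide.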

So, in the most conservative model, there’s no way to get out of this compromised state using externally-supplied data. You *must* hide something from Eve, by delivering it over a channel that she can’t see.

### But It Might Be Possible In Practice

The real world isn’t quite this bad, for a few reasons:

• watching every input is actually pretty hard. The packet sniffer must be running 24×7, never drop a packet, and must capture high-resolution timestamps very close to the VM’s inputs
• busy computers have an awful lot of state, making Eve’s worst-case modeling job pretty expensive. It’s still deterministic, but depends on a lot of race conditions. The ideal kernel RNG would hash all of memory all the time, to make it maximally sensitive to system state. Unfortunately, that’s expensive and intrusive (“hey! the kernel is reading my private user data and publishing some derivative of it to the world!”), and good engineering practice (modularity) prefers small sub-systems with *reduced* sensitivity to unrelated inputs, so we may not get as much benefit from this as we’d like.
• kernel RNGs are designed to be forward-secure: it’s not as if /dev/urandom just returns the raw entropy pool. Every read and write causes the pool to be stirred. So observations don’t reveal state directly, and Eve has to do (significant) computation to check her guesses.
• RNGs also batch inputs into larger chunks to prevent small incremental attacks. If we added one bit of entropy at a time (say, one per second), then let Eve make some observations, she could probably deduce that one bit in time to repeat the process for the next bit. But if we hide it in memory (i.e. not allow it to influence anything remotely observable) for a few minutes, and then dump 128 bits in all at once, Eve has 128 seconds to test 2^128 possibilities, and won’t be able to keep up.
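The arithmetic in that last point can be written out as a toy cost model (the functions are illustrative, not real kernel behavior):

```javascript
// Toy model of Eve's workload.
// One bit at a time, observable after each bit: at most 2 candidate
// states to test per bit before the next observation confirms which
// one was right, so the cost grows linearly.
function incrementalGuesses(bits) { return 2 * bits; }

// k bits hidden in memory and released all at once: 2^k candidate
// states to test before the next observation.
function batchedGuesses(bits) { return Math.pow(2, bits); }

console.log(incrementalGuesses(128)); // 256 guesses: trivial for Eve
console.log(batchedGuesses(128));     // ~3.4e38 guesses: hopeless
```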

### How To Do It

So in practice, once the kernel pool gets perhaps 128 or 256 bits of real
entropy, Eve’s job becomes impossible. This needs to happen before any
significant secrets are generated. How can we get to this point?

• the best tool is a local hardware RNG that can feed entropy to the kernel without traversing the network. This might be a special CPU instruction (e.g. Intel’s RdRand) that can be used by the guest VM. Or the guest VM should be able to ask the host OS (dom0) for entropy, which can get it from an on-chip HWRNG (VIA Padlock) or USB-attached EntropyKey. This source should be used very early during startup, before first-boot SSH host keys are generated. It can be periodically refreshed afterwards, but it’s the initial seed that really matters.
• next best is for the guest VM creation process to include a unique initial seed. Linux systems typically save a few kB of entropy to disk at shutdown, and write it back into the kernel at boot: if the contents of disk remain secret, rebooting a box doesn’t cause it to lose entropy. The control system that creates VMs could pre-populate this entropy file from a real RNG, with fresh data for each one. I don’t know if EC2 AMIs work this way: I suspect the disk image is identical each time an AMI is instantiated, but the startup process might do something better.
• failing that, the VM should make network requests for entropy. These requests should go to a box that already has good entropy (perhaps relayed from box to box, ultimately supplied by some kind of HWRNG). And the requests should be as local as possible, so Eve would have to get her packet sniffer into the datacenter network to see it. Pulling entropy from multiple directions might help (maybe she can watch one router but not all of them). Pulling large amounts of data might help (maybe she won’t be able to keep up with the data), as might pulling it frequently over a long period of time (maybe the sniffer breaks down every once in a while: if you can get 256 bits through while it’s offline, you win). Try to include high-resolution timing data too (sample the TSC when you receive each packet and write the contents into the kernel pool along with the data).

You’d probably think you ought to encrypt these network requests, but as described above it’s not really clear what this buys you. The best hope is that it increases the cost of Eve’s guess-testing. You might not bother with authenticating this link: if the RNG is well-designed, then it can’t hurt to add more data, even attacker-controlled data (but note that entropy counters could be incorrectly incremented, which means it can hurt to *rely* on attacker-controlled data).

Continuing this analysis, you might not even bother decrypting the data before adding it to the pool, since that doesn’t increase the entropy by more than the size of the decryption key, so you can get the same effect by just writing the key into the pool too. (But it might be more expensive for Eve if her guess-testing function must include the decryption work).

And if you don’t bother decrypting it, then clearly there’s no point to encrypting it in the first place (since encrypted random data is indistinguishable from unencrypted random data). Which suggests that really you’re just piping /dev/urandom from one box into netcat, plus maybe some timestamps, and just have to hope that Eve misses a packet or two.

### Entropy Counters

What about entropy counters, and the difference between /dev/random and /dev/urandom? They’re trying to provide two different things. The first is to protect you against using the RNG before it’s really ready, which makes a lot of sense (see Mining Your Ps and Qs for evidence of failures here). The second is to protect you against attackers who have infinite computational resources, by attempting to distinguish between computational “randomness” and information-theoretic randomness. This latter distinction is kind of silly, in my mind. Like other folks, I think there should be one kernel source of entropy, it should start in the “off” mode (return errors) until someone tells it that it is ready, and switch to the “on” mode forevermore (never return errors or block).

But I’ll have to cover that in another post. The upshot is that it isn’t safe to make this startup-time off-to-on mode switch unless you have some confidence that the data you’ve added to the kernel’s entropy pool is actually entropy, so attacker-supplied data shouldn’t count. But after you’ve reached the initial threshold, when (in my opinion) you don’t bother counting entropy any more, then it doesn’t hurt to throw anything and everything into the pool.

(cross-posted to my personal blog)

### Armen Zambrano Gasparnian — Planet Release Engineering

If you're interested in reading about Mozilla's Release Engineering, you can subscribe to "Planet Release Engineering".

This is a central location that collects the blog posts of each of Mozilla's Release Engineering team members.

### Patrick McManus — On the Application of STRINT to HTTP/2

I participated for two days last week in the joint W3C/IETF (IAB) workshop on Strengthening the Internet against Pervasive Monitoring (aka STRINT). Now that the IETF has declared pervasive monitoring of the Internet to be a technical attack, the goal of the workshop was to establish the next steps to take in reaction to the problem. There were ~100 members of the Internet engineering and policy communities participating - HTTP/2 standardization is an important test case to see if we're serious about following through.

I'm pleased that we were able to come to some rough conclusions and actions. First a word of caution: there is no official report yet, I'm certainly not the workshop secretary, this post only reflects transport security which was a subset of the areas discussed, but I still promise I'm being faithful in reporting the events as I experienced them.

Internet protocols need to make better use of communications security and more encryption - even imperfect unauthenticated crypto is better than trivially snoopable cleartext. It isn't perfect, but it raises the bar for the attacker. New protocol designs should use strongly authenticated mechanisms, falling back to weaker measures only as absolutely necessary, and updates to older protocols should be expected to add encryption, potentially with disabling switches if compatibility strictly requires it. A logical outcome of that discussion is the addition of these properties (probably by reference, not directly through replacement) to BCP 72 - which provides guidance for writing RFC security considerations.

At a bare minimum, I am acutely concerned with making sure HTTP/2 brings more encryption to the Web. There are certainly many exposures beyond the transport (data storage, data aggregation, federated services, etc..) but in 2014 transport level encryption is a well understood and easily achievable technique that should be as ubiquitously available as clean water and public infrastructure. In the face of known attacks it is a best engineering practice and we shouldn't accept less while still demanding stronger privacy protections too. When you step back from the details and ask yourself if it is really reasonable that a human's interaction with the Web is observable to many silent and undetectable observers the current situation really seems absurd.

The immediate offered solution space is complicated and incomplete. Potential mitigations are fraught with tradeoffs and unintended consequences. The focus here is on what happens to http:// schemed traffic; https is comparably well taken care of. The common solution offered in this space carries http:// over an unauthenticated TLS channel for HTTP/2. The result is a very simple plug-and-play TLS capable HTTP server that is not dependent on the PKI. This provides protection against passive eavesdroppers, but not against active attacks. The cost of attacking is raised in terms of CPU, monetary cost, political implications, and risk of being discovered. In my opinion, that's a win. Encryption simply becomes the new equivalent of cleartext - it doesn't promote http:// to https://, it does not produce a lock icon, and it does not grant you any new guarantees that cleartext http:// would not have. I support that approach.

The IETF HTTPbis working group will test this commitment to encryption on Wednesday at the London #IETF89 meeting, when http:// schemed URIs over TLS are on the agenda (again). In the past, it has not been able to garner consensus. If the group is unable to form consensus around a stronger privacy approach than HTTP/1.1's use of cleartext, I would hope the IESG would block the proposed RFC during last call for having insufficiently addressed the security implications of HTTP/2 on the Internet as we now know it.

#ietf89 #strint

### Jen Fong-Adwent — A Thought Experiment

In this chatroom, each unique participant has an identifier that you can reference.

### Joel Maher — Where did all the good first bugs go?

As we are in the short window for Google Summer of Code applications, I have seen a lot of requests for mochitest related bugs to work on.  Normally, we look for new bugs with the bugs ahoy! tool.  Most of those have been picked through, so I spent some time going through a bunch of mochitest/automation related bugs.  Many of the bugs I found were outdated, duplicates of other things, or didn’t apply to the tools of today.

Here is my short list of bugs to get more familiar with automation while fixing bugs which solve real problems for us:

• Bug 958897 – ssltunnel lives if mochitest killed
• Bug 841808 – mozfile.rmtree should handle windows directory in use better
• Bug 892283 – consider using shutil.rmtree and/or distutils remove_tree for mozfile
• Bug 908945 – Fix automation.py’s exit code handling
• Bug 912243 – Mochitest shouldnt chdir in __init__
• Bug 939755 – With httpd.js we sometimes don’t get the most recent version of the file

I have added the appropriate tags to those bugs to make them good first bugs.  Please take time to look over the bug and ask questions in the bug to get a full understanding of what needs to be done and how to test it.

Happy hacking!

### My Firefox OS homescreen

I'm having fun building my own homescreen for Firefox OS. I call it "riverscreen". Code's on github. Totally WIP. Barely functional.

### Mike Hommey — Linux and Android try builds, now up to twice as fast

(Taras told me to use sensationalist titles to draw more attention, so here we are)

Last week, I brought up the observable build time improvements on Linux try builds with the use of shared cache. I want to revisit those results now that there have been more builds, and to look at the first results of the switch for Android try builds, which are now also using the shared cache.

Here is a comparison between the distribution of build times from last time (about ten days of try pushes, starting from the moment shared cache was enabled) and build times for the past ten days (which, almost, start at the point the previous data set stopped):

As expected, the build times are still improving overall thanks to the cache being fuller. The slowest build times are now slightly lower than the slowest build times we were getting without the shared cache. There is a small “regression” in the number of builds taking between 15 and 20 minutes, but that’s likely related to changes in the tree creating more cache misses. To summarize the before/after:

Unified:

|         | shared after 10 days | shared initially | ccache |
|---------|----------------------|------------------|--------|
| Average | 17:11                | 17:11            | 29:19  |
| Median  | 13:03                | 13:30            | 30:10  |

Non-unified:

|         | shared after 10 days | shared initially | ccache |
|---------|----------------------|------------------|--------|
| Average | 31:00                | 30:58            | 57:08  |
| Median  | 22:07                | 22:27            | 60:57  |

[Note I'm not providing graphs for non-unified builds; they are boringly similar, just with different values, of which the average and median figures above should give a grasp]

Android try builds also got faster with shared cache. The situation looks pretty similar to what we observed after the first ten days of Linux try shared cache builds:

[Note I removed two builds without shared cache from those stats, both of which were taking more than an hour for some reason I haven't investigated]

The fastest shared cache builds are, like for Linux builds, slower than the fastest ccache builds, and so are the slowest builds; but as we can see above, those slowest builds get faster as the cache fills up. And as I wrote last week, work is under way to make the fastest builds faster.

This is what the average and median look like for Android try builds:

Unified:

|         | shared | ccache |
|---------|--------|--------|
| Average | 17:14  | 24:08  |
| Median  | 13:52  | 24:57  |

Non-unified:

|         | shared | ccache |
|---------|--------|--------|
| Average | 27:49  | 43:00  |
| Median  | 20:35  | 47:17  |

### Nick Fitzgerald — Memory Tooling In Firefox Developer Tools In 2014

A big push for the Firefox Developer Tools team this year is performance tools. Jim Blandy and I are collaborating on the memory half of performance tooling. What follows is a broad overview of our plans.

### Definitions

• ubi::Node: An abstract base class that provides a generic nodes-and-edges view of any sort of heap object. Not just the JavaScript world, but also XPCOM and the DOM!

• Category: A key/value pair with which we can tag an individual ubi::Node. Some categories are simple booleans, such as whether a DOM node is orphaned from its document. Others may have a value; for example, an object may be categorized by its prototype and constructor. ubi::Nodes can have many categories!

• Census: A semi-lightweight traversal of the heap that provides accurate category counts without saving the full heap state. It gives us totals, but not the specifics of individuals.

• Snapshot: A heavyweight traversal of the heap. It saves the full heap state for later inspection by creating a core dump.

• Core dump: A binary blob containing the full serialized heap state at a past instant in time.
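To make the census idea concrete, here's a toy sketch (the graph shape and function names are mine, not the real ubi::Node API): a single traversal over a nodes-and-edges heap graph that tallies category counts without keeping any per-node state around afterwards.

```javascript
// Toy heap graph: each node has a list of category tags and outgoing edges.
const heap = {
  root: { categories: ["global"], edges: ["a", "b"] },
  a:    { categories: ["dom-node", "orphaned"], edges: ["c"] },
  b:    { categories: ["function"], edges: ["c"] },
  c:    { categories: ["dom-node"], edges: [] },
};

// A census: walk the graph once, tallying category totals,
// but keep no record of which individual nodes were seen.
function census(heap, rootId) {
  const counts = new Map();
  const seen = new Set();
  const stack = [rootId];
  while (stack.length) {
    const id = stack.pop();
    if (seen.has(id)) continue;
    seen.add(id);
    for (const cat of heap[id].categories) {
      counts.set(cat, (counts.get(cat) || 0) + 1);
    }
    stack.push(...heap[id].edges);
  }
  return counts;
}

const counts = census(heap, "root");
console.log(counts.get("dom-node")); // 2
```

The totals are exact, but once the traversal finishes there is no way to ask *which* two DOM nodes were counted - that's the census/snapshot tradeoff.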

### A Recurring Theme

As we build the infrastructure and lay the foundation for the memory panel, we will expose utility and testing functions developers can use now. Generally, the console object will expose these functions.

The benefit of this approach is two-fold. First, it enables developers to cash in on our work quickly. Second, it gives us a larger testing population, helping us catch and fix bugs as soon as possible.

### Graph Algorithms on the Heap

#### Depth First Search and Dominator Trees

If x dominates y, then any path from the global window to y must pass through x. We can use this information in two practical ways:

1. If you nullify all references to x, every y such that x dominates y will also become unreachable and will eventually be garbage collected.

2. We can calculate the retained size of x. That is, the amount of memory that will be reclaimed if x (and therefore also every y such that x dominates y) were to be garbage collected.

We can expose this information to developers with console.retainedSize(obj).
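A naive way to see what retained size means (real engines would compute this from a dominator tree rather than with two traversals; the toy heap and function names here are mine): the retained size of x is everything reachable from the root, minus everything still reachable once x is gone.

```javascript
// Toy heap: node -> { size in bytes, outgoing edges }.
// y is only reachable through x, so x dominates y.
const heap = {
  root: { size: 0,  edges: ["x", "z"] },
  x:    { size: 16, edges: ["y"] },
  y:    { size: 32, edges: [] },
  z:    { size: 8,  edges: [] },
};

// Total bytes reachable from rootId, optionally pretending one node is gone.
function reachableSize(heap, rootId, excluded) {
  const seen = new Set();
  const stack = [rootId];
  let total = 0;
  while (stack.length) {
    const id = stack.pop();
    if (seen.has(id) || id === excluded) continue;
    seen.add(id);
    total += heap[id].size;
    stack.push(...heap[id].edges);
  }
  return total;
}

// Retained size of id: bytes that become unreachable if id goes away.
function retainedSize(heap, rootId, id) {
  return reachableSize(heap, rootId, null) - reachableSize(heap, rootId, id);
}

console.log(retainedSize(heap, "root", "x")); // 48 = size(x) + size(y)
```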

By doing a BFS in the heap graph from the global window to an object, we find the shortest retaining path for that object. We can use this path to construct a developer-friendly label for that object. Often the label we provide will be a snippet of JavaScript that can be evaluated in the console. For example: "window.MyApp.WidgetView.element". Other times, we will be forced to display labels that cannot be evaluated in the console: "window.[[requestAnimationFrame renderLoop]].[[ closure environment ]].player.sprite".
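The labeling idea can be sketched as a BFS over a toy heap whose edges carry property names (again, a hypothetical model for illustration, not the real implementation):

```javascript
// Toy heap where each edge is a named property pointing at another node.
const heap = {
  window: { edges: { MyApp: "app" } },
  app:    { edges: { WidgetView: "view" } },
  view:   { edges: { element: "el" } },
  el:     { edges: {} },
};

// BFS from the root; the first time we reach the target,
// the path we accumulated is a shortest retaining path.
function retainingLabel(heap, rootId, targetId) {
  const queue = [[rootId, rootId]];
  const seen = new Set([rootId]);
  while (queue.length) {
    const [id, label] = queue.shift();
    if (id === targetId) return label;
    for (const [name, next] of Object.entries(heap[id].edges)) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push([next, label + "." + name]);
      }
    }
  }
  return null; // unreachable from the root
}

console.log(retainingLabel(heap, "window", "el"));
// "window.MyApp.WidgetView.element"
```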

This can be exposed to developers as a useful little pair of methods on console. If you expect an object to be reclaimed by GC, you will be able to tag it with console.expectGarbageCollected(obj). Next, you would perform whatever actions are supposed to trigger the clean up of that object. Finally, you could call console.logRetained() to log the retaining path of any objects that you tagged via console.expectGarbageCollected that have not been garbage collected. I realize these aren't the greatest method names; please tweet me your suggestions!

### Tracking Allocation Sites

We will track the allocation site of every object in the heap. Allocation sites come into play in a few ways.

First, if you interact with one component of your app, and notice that an unrelated component is allocating or retaining objects, you most likely have an opportunity to reduce memory consumption. Perhaps that unrelated component can lazily delay any allocations it needs, thereby lowering your app's memory usage when that component isn't active.

Second, once developers know which objects are using their precious memory, the next info they need is where the objects were allocated. That leads to why they were allocated, and finally how to reduce those allocations. We can hack this workflow and group objects by allocation site then sort them for developers to effectively make the first step (which objects) redundant.

I'm not sure of the best way to expose this information to developers before the complete memory panel is ready. Tracking allocations isn't lightweight; we can't do it all the time, so you have to turn the mode on. We could expose console.startTrackingAllocationSites() and console.stopTrackingAllocationSites(), and then allow calls to console.allocationSite(obj) if obj was allocated while we were tracking allocation sites. Or, we could expose console.startLoggingAllocationSites() and console.stopLoggingAllocationSites(), which could just dump every allocation site to the console as it occurs. Tweet at me if you have an opinion about the best API from which to expose this data.
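The "group by allocation site, then sort" hack described above can be sketched in a few lines (the record shape and the site strings are invented for illustration):

```javascript
// Hypothetical per-object records, as an allocation tracker might collect
// them: each live object tagged with the script location that allocated it.
const liveObjects = [
  { bytes: 64,  site: "app.js:10" },
  { bytes: 128, site: "render.js:42" },
  { bytes: 64,  site: "app.js:10" },
  { bytes: 32,  site: "render.js:42" },
];

// Group by allocation site and sort heaviest-first, so "which objects"
// falls straight out of "where were they allocated".
function bySite(objects) {
  const totals = new Map();
  for (const { bytes, site } of objects) {
    totals.set(site, (totals.get(site) || 0) + bytes);
  }
  return [...totals.entries()].sort((a, b) => b[1] - a[1]);
}

console.log(bySite(liveObjects));
// [["render.js:42", 160], ["app.js:10", 128]]
```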

### Putting it all together

The memory panel will feature a live-updating graph. To construct this graph we will frequently poll the recent categorized allocations, and the total, non-granular heap size. This gives us a fuzzy, slightly inaccurate picture of the heap over time, but it should be efficient enough for us to do at a high frequency. At a less frequent interval, we will take a census. This will be a reality check of sorts that gives us precise numbers for each category of objects in the heap.

You will be able to click on the graph to get a shallow view into the heap at that past moment in time. Alternatively, you will be able to select a region of the graph to view the difference in memory consumption between the start and end points of your selection.

If you need to deep dive into the full heap state, you'll be able to take snapshots, which are too heavy for us to automatically collect on an interval. These can be compared with other snapshots down to each individual object, so you will be able to see exactly what has been allocated and reclaimed in the time between when each snapshot was taken. They will also be exportable and importable as core dumps, so you could attach them to bug tickets, send to other developers, etc.
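Comparing two snapshots down to individual objects is, at its core, a set difference; a toy sketch (object ids and sizes invented):

```javascript
// Two toy "snapshots": maps of object id -> size in bytes at two instants.
const before = new Map([["a", 16], ["b", 32], ["c", 8]]);
const after  = new Map([["b", 32], ["c", 8], ["d", 64]]);

// Objects present only in the later snapshot were allocated in between;
// objects present only in the earlier one were reclaimed.
function diffSnapshots(before, after) {
  const allocated = [...after.keys()].filter(id => !before.has(id));
  const reclaimed = [...before.keys()].filter(id => !after.has(id));
  return { allocated, reclaimed };
}

console.log(diffSnapshots(before, after));
// { allocated: ["d"], reclaimed: ["a"] }
```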

Darrin Henein has created a beautiful mockup of the memory panel. Caveat: despite said beauty, the mockup is still very much a work in progress, it is far from complete, and what we ship might look very different!

You can follow along with our work by watching the bugs in this bugzilla dependency graph.

2014 will be an exciting year for memory tooling in Firefox Developer Tools!

### Yunier José Sosa Vázquez — Disable image loading the old-fashioned way with Disable Load Images

If you were one of those annoyed when Firefox 23 arrived and you saw that the option to load images automatically (or not) had been removed, you no longer have to feel that way, because you can get this functionality back with Disable Load Images. We already discussed this topic here once before, but this time it's about a different alternative.

Without trying to justify it, it's worth everyone knowing why this option was removed from Firefox. It turns out that on Bugzilla (Mozilla's bug-reporting platform) there are many requests related to this topic, and the main complaint is high resource consumption on some sites when image loading is disabled. The developers investigated the matter and came up with a fix, and Firefox health reports showed that this option was rarely disabled by users.

But anyway, let's get to the point and talk about the add-on.

Disable Load Images is being developed by Yaicel Torres and myself, with guidance from Jorge Villalobos (Mozilla). It is currently at version 0.3, and with it you can enable or disable automatic image loading. It includes an interface where you can:

• Allow or block images from a given site.
• Get the address of the site you are currently visiting.
• Remove one, several, or all sites.
• Quickly change the state of one or several sites.

It also comes with support for Spanish and English.

Below is a video showing how to use this extension:

If you would like to contribute or look at the code, you can visit the project's page on the communities' GitLab. In upcoming versions the design of the dialog is expected to adapt to the operating system.

I hope you like Disable Load Images!

## March 03, 2014

### Margaret Leibovic — Dare 2B Digital 2014: Remix the Web!

Dare 2B Digital is an annual South Bay conference that brings together 300 young women ages 12-16 to encourage them to consider STEM fields in college, through a full day of inspiring talks and workshops showcasing women’s work and relevance in technology. For the past four conferences, Lukas has signed Mozilla up as a sponsor and created a workshop that is run 3 times that day and reaches about 80-100 attendees. Last year, Lukas and I created a workshop to teach these girls how to create mobile HTML5 games. This year, Lukas couldn’t make it to the conference because she was busy organizing another great outreach event, so I volunteered to organize our workshop, and I recruited Katie and Heather to help me.

I really love Webmaker, a Mozilla project dedicated to helping people learn how to create amazing things on the web, so I decided to create a workshop to teach the girls how to use some of the Webmaker tools. My goal was to teach the girls how to use these tools to understand the basic building blocks of the web, as well as show them how easy it is to make their own web content. To prepare for the conference, I gathered 20 loaner laptops, ordered some swag, and remixed an existing Webmaker teaching kit to create an outline for the workshop.

We started off each workshop with a brief overview of how the web works (Katie used her teacher skills to create some nice whiteboard diagrams), and then we jumped right into using the Webmaker tools. We started by walking the girls through using the X-Ray Goggles to inspect webpages, and taught them that they can modify what they see in their browsers. We definitely heard lots of “This is so cool!!!” when we showed them how to change text and images on any webpage.

Once they felt comfortable with X-Ray Goggles, we pointed them to a gallery of existing Webmaker projects, and showed them how to use Thimble to remix those projects into their own. At this point, we let them loose to start working on their own remixes, and there was lots of enthusiasm and laughter as they started working on their own projects. We also felt thoroughly out of touch with youth culture when they started filling their projects with images of young male celebrities we had never even heard of.

Unfortunately, we only had an hour and 15 minutes for each workshop, which flew by, but we encouraged the girls to make their own Webmaker accounts to continue working on their remixes at home. They were definitely really excited to learn that all of these resources are available to them at home, and that they only need a web browser to get started!

### Ian Bicking — Towards a Next Level of Collaboration

With TogetherJS we’ve been trying to make a usable tool for the web we have, and the browsers we have, and the web apps we have. But we’re also accepting a lot of limitations.

For a particular scope the limitations in TogetherJS are reasonable, but my own goals have been more far-reaching. I am interested in collaboration with as broad a scope as the web itself. (But no broader than the web, because I’m kind of biased.) “Collaboration” isn’t quite the right term — it implies a kind of active engagement in creation, but there are more ways to work together than collaboration. TogetherJS was previously called TowTruck, but we wanted to rename it to something more meaningful. While brainstorming we kept coming back to names that included some form of “collaboration”, but I strongly resisted it because it’s such a mush-mouthed term with too much baggage and too many preconceptions.

When we came up with “together” it immediately seemed right. Admittedly the word feels a little cheesy (it’s a web built out of hugs and holding hands!) but it covers the broad set of activities we want to enable.

With the experience from TogetherJS in mind I want to spend some time thinking about what a less limited tool would look like. Much of this has become manifest in Hotdish, and the notes below have informed its design.

### Degrees of collaboration/interaction

Intense collaboration is cool, but it’s not comprehensive. I don’t want to always be watching over your shoulder. What will first come to mind is privacy, but that’s not interesting to me. I would rather address privacy by helping you scope your actions, let you interact with your peers or not and act appropriately with that in mind. I don’t want to engage with my collaborators all the time because it’s boring and unproductive and my eyes glaze over. I want to engage with other people appropriately: with all the intensity called for given the circumstances, but also all the passivity that is also sometimes called for.

I’ve started to think in terms of categories of collaboration:

#### 1. Asynchronous message-based collaboration

This includes email of course, but also issue trackers, planning tools, any notification system. If you search for “collaboration software” this is most of what you find, and much of the innovation is in representing and organizing the messages.

I don’t think I have any particularly new ideas in this well-explored area. That’s not to say there aren’t lots of important ideas, but the work I want to do is in complementing these tools rather than competing with them. But I do want to note that they exist on this continuum.

#### 2. Ambient awareness

This is the awareness of a person’s presence and activity. We have a degree of this with Instant Messaging and chat rooms (IRC, Campfire, etc). But they don’t show what we are actively doing, just our presence or absence, and in the case of group discussions some of what we’re discussing with other people.

Many tools that indicate presence also include status messages which would purport to summarize a person’s current state and work. I’ve never worked with people who keep those status messages updated. It’s a very explicit approach. At best it devolves into a record of what you had been doing.

A more interesting tool for making people’s presence more present is Sqwiggle, a kind of always-on video conference. It’s not exactly always-on: there is low-fidelity video with no audio until you start a conversation with someone, at which point it goes to full video and audio. This way you know not only whether someone is actually sitting at the computer, but also whether they are eating lunch, whether they have the furrowed brows of careful concentration, or are frustrated or distracted. Unfortunately most people’s faces only show that they are looking at a screen, with the slightly studious but mostly passive facial expressions that we have when looking at screens.

Instant messaging has grown to include an additional presence indicator: I am currently typing a response. A better fidelity version of this would indicate if I am typing right now, or if I forgot I started typing and switched tabs but left text in the input box, or if I am trying hard to compose my thoughts (typing and deleting), or if I’m pasting something, or if I am about to deliver a soliloquy in the form of a giant message. (Imagine a typing indicator that gives a sense of the number of words you have typed but not sent.)

I like that instant messaging detects your state automatically, using something that you are already engaged with (the text input box). Sqwiggle has a problem here: because you aren’t trying to project any emotions to your computer screen, Sqwiggle catches expressions that don’t mean anything. We can engage with our computers in different ways, there’s something there to express, it’s just not revealed on our faces.

I’d like to add to the activity indicators we have. Like the pages (and web apps) you are looking at (or some privacy-aware subset). I’d like to show how you are interacting with those pages. Are you flopping between tabs? Are you skimming? Scrolling through in a way that shows you are studying the page? Typing? Clicking controls?

I want to show something like the body language of how you are interacting with the computer. First I wondered if we could interpret your actions and show them as things like “reading”, “composing”, “being pissed off with your computer”, etc. But then I thought more about body language. When I am angry there’s no “angry” note that shows up above my head. A furrowed brow isn’t a message, or at least mostly not a message. Body language is what we read from cues that aren’t explicit. And so we might be able to show what a person is doing, and let the person watching figure out why.

#### 3. Working in close parallel

This is where both people (or more than 2 people) are actively working on the same thing, same project, same goal, but aren’t directly supporting each other at every moment.

When you’ve entered into this level of collaboration you’ve both agreed that you are working together — you’re probably actively talking through tasks, and may regularly be relying on each other (“does what I wrote sound right?” or “did you realize this test is failing” etc). A good working meeting will be like this. A bad meeting would probably have been better if you could have stuck to ambient awareness and promoted it to a more intense level of collaboration only as needed.

#### 4. Working directly

This is where you are both locked on a single task. When I write something and say “does what I wrote sound right?” we have to enter this mode: you have to look at exactly what I’m talking about. In some sense “close parallel” may mean “prepared to work directly”.

I have found that video calls are better than audio-only calls, more than I would have expected. It’s not because the video content is interesting. But the video makes you work directly, while being slightly uncomfortable so you are encouraged to acknowledge when you should end the call. In a way you want your senses filled. Or maybe that’s my propensity to distraction.

There’s a lot more to video calls than this (like the previously mentioned body language). But in each feature I suspect there are parallels in collaborative work. Working directly together should show some of the things that video shows when we are focused on a conversation, but can’t show when we are focusing on work.

#### 5. Demonstrating to another person

This is common for instruction and teaching, but that shouldn’t be the only case we consider. In Hotdish we have often called it “presenting” and “viewing”. In this mode someone is the driver/presenter, and someone is the passenger/viewer. When the presenter focuses on something, you want the viewer to be aware of that and follow along. The presenter also wants to be confident that the viewer is following along. Maybe we want something like how you might say “uh huh” when someone is talking to you — if a listener says nothing it will throw off the talker, and these meaningless indications of active listening are important.

Demonstration could just be a combination of direct work and social convention. Does it need to be specially mediated by tools? I’m not sure. Do we need a talking stick? Can I take the talking stick? Are these interactions like a conversation, where sometimes one person enters into a kind of monologue, but the rhythm of the conversation will shift? If we focus on the demonstration tools we could miss the social interactions we are trying to support.

#### Switching modes

Between each of these styles of interaction I think there must be some kind of positive action. A natural promotion or demotion of your interaction with someone should be mutual. (A counter example would be the dangling IM conversation, where you are never sure it’s over.)

At the same time, the movement between modes also builds your shared context and your relationship with the other person. You might be proofing an article with another person, and you say: “clearly this paragraph isn’t making sense, let me just rewrite it, one minute” — now you know you are leaving active collaboration, but you also both know you’ll be reentering it soon. You shouldn’t have to record that expectation with the tool.

I’m reluctant to put boundaries up between these modes, I’d rather tools simply inform people that modes are changing and not ask if they can change. This is part of the principles behind Defaulting To Together.

### Ownership

At least in the context of computers we often have strong notions of ownership. Maybe we don’t have to — maybe it’s because we have to hand off work explicitly, and maybe we have to hand off work explicitly because we lack fluid ways to interact, cooperate, delegate.

With good tools in hand I see “ownership” being exchanged more regularly:

• I find some documentation, then show it to you, and now it’s yours to make use of.

• I am working through a process, get stuck, and need your skills to finish it up. Now it’s yours. But you might hand it back when you unstick me.

• You are working through something, but are not permitted to complete the operation, you have to hand it over to me for me to complete the last step.

Layered on this we have the normal notions of ownership and control — the login accounts and permissions of the applications we are using. Whether these are in opposition to cooperation or maybe complementary I have not decided.

### Screensharing vs. Peer-to-Peer

Perhaps a technical aside, but when dealing with real-time collaboration (not asynchronous) there are two distinct approaches.

Screensharing means one person (and one computer) is “running” the session — that one person is logged in, their page or app is “live”, everyone else sees what they see.

Screensharing doesn’t mean other people can’t interact with the screen, but any interaction has to go through the owner’s computer. In the case of a web page we can share the DOM (the current visual state of the page) with another person, but we can’t share the Javascript handlers and state, cookies, etc., so most interactions have to go back through the original browser. Any side effects have to make a round trip. Latency is a problem.

It’s hard to figure out exactly what interactivity to implement in a screensharing situation. Doing a view-only interaction is not too hard. There are a few things you can add after that — maybe you let someone touch a form control, suggest that you follow a link, send clicks across the wire — but there’s no clear line to stop at. Worse, there’s no clear line to express. You can implement certain mechanisms (like a click), but these don’t always map to what the user thinks they are doing — something like a drag might involve a mousedown/mousemove/mouseup event, or it might be implemented directly as dragging. Implementing one of those interactions is a lot easier than the other, but the distinction means nothing to the user.

When you implement incomplete interactions you are setting up a situation where a person can do something in the original application that viewers can’t do, even though it looks like the real live application. An uncanny valley of collaboration.

I’ve experimented with DOM-based screen sharing in Browser Mirror, and you can see this approach in a tool like Surfly. As I write this a minimal version of this is available in Hotdish.

In peer-to-peer collaboration both people are viewing their own version of the live page. Everything works exactly like in the non-collaborative environment. Both people are logged in as themselves. This is the model TogetherJS uses, and is also present as a separate mode in Hotdish.

This has a lot of obvious advantages over the problems identified above for screensharing. The big disadvantage is that hardly anything is collaborative by default in this model.

In the context of the web the building blocks we do have are:

• URLs. Insofar as a URL defines the exact interface you look at, then putting both people at the same URL gives a consistent experience. This works great for applications that use lots of server-side logic. Amazon is pretty great, for example, or Wikipedia. It falls down when content is substantially customized for each person, like the Facebook frontpage or a flight search result.

• Event echoing: events aren’t based on any internal logic of the program, they are something initiated by the user. So if the user can do something, a remote user can do something. Form fields are the best example of this, as there’s a clear protocol for doing form changes (change the value, fire a change event).

But we don’t have:

• Consistent event results: events aren’t state changes, and transferring events about doesn’t necessarily lead to a consistent experience. Consider the modest toggle control, where a click on the toggler element shows or hides some other element. If our hidden states are out of sync (e.g., my toggleable element is hidden, yours is shown), sending the click event between the clients keeps them consistently and perfectly out of sync.

• Consistent underlying object models. In a single-page app of some sort, or whatever fancy Javascript-driven webapp, a lot of what we see is based on Javascript state and models that are not necessarily consistent across peers. This is in contrast to old-school server-side apps, where there’s a good chance the URL contains enough information to keep everything consistent, and ultimately the “state” is held on a single server or database that both peers are connecting to. But we can’t sync the clients’ object models, as they are not built to support arbitrary modification from the outside. Apps that use a real-time database work well.
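The toggle problem described above is easy to demonstrate with a toy sketch: echoing the click *event* rather than syncing the hidden *state* keeps two out-of-sync clients perfectly out of sync.

```javascript
// A minimal toggle "client": a click event flips its hidden state.
function makeClient(hidden) {
  return {
    hidden,
    click() { this.hidden = !this.hidden; },
  };
}

// Out of sync to start: my toggleable element is hidden, yours is shown.
const mine = makeClient(true);
const yours = makeClient(false);

// Echo the click event to both peers...
mine.click();
yours.click();

// ...and they remain consistently, perfectly out of sync.
console.log(mine.hidden, yours.hidden); // false true
```

Syncing the resulting state (`hidden`) instead of the triggering event would fix this, but that requires the application's cooperation, which is exactly the point.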

To make this work the application usually has to support peer-to-peer collaboration to some degree. A messy approach can help, but can never be enough, not complete enough, not robust enough.

So peer-to-peer collaboration offers potentially more powerful and flexible kinds of collaboration, but only with work on the part of each application. We can try to make it as easy as possible, and maybe integrate with tools or libraries that support the kinds of higher-level synchronization we would want, but it’s never reliably easy.

### Synchronized vs. Coordinated Experiences

Another question: what kind of experiences do we want to create?

The most obvious real-time experience is: everything sees the same thing. Everything is fully synchronized. In the screensharing model this is what you always get and what you have to get.

The obvious experience is probably a good starting point, but shouldn’t be the end of our thinking.

The trivial example here is the cursor point. We can both be editing content and viewing each other’s edits (close to full sync), but we don’t have to be at exactly the same place. (This is something traditional screensharing has a hard time with, as you are sharing a screen of pixels instead of a DOM.)

But other more subtle examples exist. Maybe only one person has the permission to save a change. A collaboration-aware application might allow both people to edit, while still only allowing one person to save. (Currently editors will usually be denied to people who don’t have permission to save.)

I think there’s fruit in playing with the timing of actions. We don’t have to replay remote actions exactly how they occurred. For example, in a Demonstration context we might detect that when the driver clicks a link the page will change. To the person doing the click the order of events is: find the link, focus attention on the link, move cursor to the link, click. To the viewer the order of events is: cursor moves, maybe a short click indicator, and boom you are at a new page. There’s much less context given to the viewer. But we don’t have to display those events with the original timing for instance we could let the mouse hover over its target for a more extended amount of time on the viewer.

High-level (application-specific) representation of actions could be available. Instead of trying to express what the other person is doing through every click and scroll and twiddling of a form, you might just say “Bob created a new calendar event”.

In the context of something like a bug tracker, you might not want to synchronize the comment field. Instead you might want to show individual fields for all participants on a page/bug. Then I can see the other person’s in-progress comment, even add to it, but I can also compose my own comment as myself.

This is where the peer-to-peer model has advantages, as it will (by necessity) keep the application in the loop. It does not demand that collaboration take one form, but it gives the application an environment in which to build a domain-specific form of collaboration.

We can imagine moving from screenshare to peer-to-peer through a series of enhancements. The first might be: let applications opt-in to peer-to-peer collaboration, or implement a kind of transparent-to-the-application screensharing, and from there tweak. Maybe you indicate some scripts should run on the viewer’s side, and some compound UI components can be manipulated. I can imagine with a component system like Brick where you could identify safe ways to run rich components, avoiding latency.

### How do you package all this?

Given tools and interactions, what is the actual context for collaboration?

TogetherJS has a model of a persistent session, and you invite people to that session. Only for technical reasons the session is bound to a specific domain, but not a specific page.

In Hotdish we’ve used a group approach: you join a group, and your work clearly happens in the group context or not.

One of the interesting things I’ve noticed when getting feedback about TogetherJS is that people are most interested in controlling and adding to how the sessions are setup. While, as an implementor, I find myself drawn to the tooling and specific experiences of collaboration, there’s just as much value in allowing new and interesting groupings of people. Ways to introduce people, ways to start and end collaboration, ways to connect to people by role instead of identity, and so on.

Should this collaboration be a conversation or an environment? When it is a conversation you lead off with the introduction, the “hello” the “so why did you call?” and finish with “talk to you later” — when it is an environment you enter the environment and any coparticipants are just there, you don’t preestablish any specific reason to collaborate.

### And in conclusion…

I’m still developing these ideas. And for each idea the real test is if we can create a useful experience. For instance, I’m pretty sure there’s some ambient information we want to show, but I haven’t figured out what.

Experience has shown that simple history (as in an activity stream) seems too noisy. And is history shown by group or person?

In the past I unintentionally exposed all tab focus and unfocus in TogetherJS, and it felt weird to both expose my own distracted state and my collaborator’s distraction. But part of why it was weird was that in some cases it was simply distraction, but in other cases it was useful multitasking (like researching a question in another tab). Was tab focus too much information or too little?

I am still in the process of figuring out how and where I can explore these questions, build the next thing, and the next thing after that — the tooling I envision doesn’t feel impossibly far away, but still more than one iteration of work yet to be done, maybe many more than one but I can only see to the next peak.

Who else is thinking about these things? And thinking about how to build these things? If you are, or you know someone who is, please get in contact — I’m eager to talk specifics with people who have been thinking about it too, but I’m not sure how to find these people.

### Ian Bicking — Towards a Next Level of Collaboration

With TogetherJS we’ve been trying to make a usable tool for the web we have, and the browsers we have, and the web apps we have. But we’re also accepting a lot of limitations.

For a particular scope the limitations in TogetherJS are reasonable, but my own goals have been more far-reaching. I am interested in collaboration with as broad a scope as the web itself. (But no broader than the web, because I’m kind of biased.) “Collaboration” isn’t quite the right term — it implies a kind of active engagement in creation, but there are more ways to work together than collaboration. TogetherJS was previously called TowTruck, but we wanted to rename it to something more meaningful. While brainstorming we kept coming back to names that included some form of “collaboration”, but I strongly resisted it because it’s such a mush-mouthed term with too much baggage and too many preconceptions.

When we came up with “together” it immediately seemed right. Admittedly the word feels a little cheesy (it’s a web built out of hugs and holding hands!) but it covers the broad set of activities we want to enable.

With the experience from TogetherJS in mind I want to spend some time thinking about what a less limited tool would look like. Much of this has become manifest in Hotdish, and the notes below have informed its design.

### Degrees of collaboration/interaction

Intense collaboration is cool, but it’s not comprehensive. I don’t want to always be watching over your shoulder. What will first come to mind is privacy, but that’s not interesting to me. I would rather address privacy by helping you scope your actions, let you interact with your peers or not and act appropriately with that in mind. I don’t want to engage with my collaborators all the time because it’s boring and unproductive and my eyes glaze over. I want to engage with other people appropriately: with all the intensity called for given the circumstances, but also all the passivity that is also sometimes called for.

I’ve started to think in terms of categories of collaboration:

#### 1. Asynchronous message-based collaboration

This includes email of course, but also issue trackers, planning tools, any notification system. If you search for “collaboration software” this is most of what you find, and much of the innovation is in representing and organizing the messages.

I don’t think I have any particularly new ideas in this well-explored area. That’s not to say there aren’t lots of important ideas, but the work I want to do is in complementing these tools rather than competing with them. But I do want to note that they exist on this continuum.

#### 2. Ambient awareness

This is the awareness of a person’s presence and activity. We have a degree of this with Instant Messaging and chat rooms (IRC, Campfire, etc). But they don’t show what we are actively doing, just our presence or absence, and in the case of group discussions some of what we’re discussing with other people.

Many tools that indicate presence also include status messages that purport to summarize a person’s current state and work. I’ve never worked with people who keep those status messages updated. It’s a very explicit approach; at best it devolves into a record of what you had been doing.

A more interesting tool to make people’s presence more present is Sqwiggle, a kind of always-on video conference. It’s not exactly always-on, there is a low-fidelity video with no audio until you start a conversation with someone and it goes to full video and audio. This way you know not only if someone is actually sitting at the computer, but also if they are eating lunch, if they have the furrowed brows of careful concentration, or are frustrated or distracted. Unfortunately most people’s faces only show that they are looking at a screen, with the slightly studious but mostly passive facial expressions that we have when looking at screens.

Instant messaging has grown to include an additional presence indicator: I am currently typing a response. A better-fidelity version of this would indicate if I am typing right now, or if I forgot I started typing and switched tabs but left text in the input box, or if I am trying hard to compose my thoughts (typing and deleting), or if I’m pasting something, or if I am about to deliver a soliloquy in the form of a giant message. (Imagine a typing indicator that gives a sense of the number of words you have typed but not sent.)

I like that instant messaging detects your state automatically, using something that you are already engaged with (the text input box). Sqwiggle has a problem here: because you aren’t trying to project any emotions to your computer screen, Sqwiggle catches expressions that don’t mean anything. We can engage with our computers in different ways, there’s something there to express, it’s just not revealed on our faces.

I’d like to add to the activity indicators we have. Like the pages (and web apps) you are looking at (or some privacy-aware subset). I’d like to show how you are interacting with those pages. Are you flopping between tabs? Are you skimming? Scrolling through in a way that shows you are studying the page? Typing? Clicking controls?

I want to show something like the body language of how you are interacting with the computer. First I wondered if we could interpret your actions and show them as things like “reading”, “composing”, “being pissed off with your computer”, etc. But then I thought more about body language. When I am angry there’s no “angry” note that shows up above my head. A furrowed brow isn’t a message, or at least mostly not a message. Body language is what we read from cues that aren’t explicit. And so we might be able to show what a person is doing, and let the person watching figure out why.

#### 3. Working in close parallel

This is where both people (or more than 2 people) are actively working on the same thing, same project, same goal, but aren’t directly supporting each other at every moment.

When you’ve entered into this level of collaboration you’ve both agreed that you are working together — you’re probably actively talking through tasks, and may regularly be relying on each other (“does what I wrote sound right?” or “did you realize this test is failing” etc). A good working meeting will be like this. A bad meeting would probably have been better if you could have stuck to ambient awareness and promoted it to a more intense level of collaboration only as needed.

#### 4. Working directly

This is where you are both locked on a single task. When I write something and say “does what I wrote sound right?” we have to enter this mode: you have to look at exactly what I’m talking about. In some sense “close parallel” may mean “prepared to work directly”.

I have found that video calls are better than audio-only calls, more than I would have expected. It’s not because the video content is interesting; rather, the video makes you work directly, and it is just uncomfortable enough that you are encouraged to acknowledge when the call should end. In a way you want your senses filled. Or maybe that’s my propensity to distraction.

There’s a lot more to video calls than this (like the previously mentioned body language). But in each feature I suspect there are parallels in collaborative work. Working directly together should show some of the things that video shows when we are focused on a conversation, but can’t show when we are focusing on work.

#### 5. Demonstrating to another person

This is common for instruction and teaching, but that shouldn’t be the only case we consider. In Hotdish we have often called it “presenting” and “viewing”. In this mode someone is the driver/presenter, and someone is the passenger/viewer. When the presenter focuses on something, you want the viewer to be aware of that and follow along. The presenter also wants to be confident that the viewer is following along. Maybe we want something like how you might say “uh huh” when someone is talking to you — if a listener says nothing it will throw off the talker, and these meaningless indications of active listening are important.

Demonstration could just be a combination of direct work and social convention. Does it need to be specially mediated by tools? I’m not sure. Do we need a talking stick? Can I take the talking stick? Are these interactions like a conversation, where sometimes one person enters into a kind of monologue, but the rhythm of the conversation will shift? If we focus on the demonstration tools we could miss the social interactions we are trying to support.

#### Switching modes

Between each of these styles of interaction I think there must be some kind of positive action. A natural promotion or demotion of your interaction with someone should be mutual. (A counterexample would be the dangling IM conversation, where you are never sure it’s over.)

At the same time, the movement between modes also builds your shared context and your relationship with the other person. You might be proofing an article with another person, and you say: “clearly this paragraph isn’t making sense, let me just rewrite it, one minute” — now you know you are leaving active collaboration, but you also both know you’ll be reentering it soon. You shouldn’t have to record that expectation with the tool.

I’m reluctant to put boundaries up between these modes, I’d rather tools simply inform people that modes are changing and not ask if they can change. This is part of the principles behind Defaulting To Together.

### Ownership

At least in the context of computers we often have strong notions of ownership. Maybe we don’t have to — maybe it’s because we have to hand off work explicitly, and maybe we have to hand off work explicitly because we lack fluid ways to interact, cooperate, delegate.

With good tools in hand I see “ownership” being exchanged more regularly:

• I find some documentation, then show it to you, and now it’s yours to make use of.

• I am working through a process, get stuck, and need your skills to finish it up. Now it’s yours. But you might hand it back when you unstick me.

• You are working through something, but are not permitted to complete the operation, you have to hand it over to me for me to complete the last step.

Layered on this we have the normal notions of ownership and control — the login accounts and permissions of the applications we are using. Whether these are in opposition to cooperation or maybe complementary I have not decided.

### Screensharing vs. Peer-to-Peer

Perhaps a technical aside, but when dealing with real-time collaboration (not asynchronous) there are two distinct approaches.

Screensharing means one person (and one computer) is “running” the session — that one person is logged in, their page or app is “live”, everyone else sees what they see.

Screensharing doesn’t mean other people can’t interact with the screen, but any interaction has to go through the owner’s computer. In the case of a web page we can share the DOM (the current visual state of the page) with another person, but we can’t share the Javascript handlers and state, cookies, etc., so most interactions have to go back through the original browser. Any side effects have to make a round trip. Latency is a problem.

It’s hard to figure out exactly what interactivity to implement in a screensharing situation. Doing a view-only interaction is not too hard. There are a few things you can add after that — maybe you let someone touch a form control, suggest that you follow a link, send clicks across the wire — but there’s no clear line to stop at. Worse, there’s no clear line to express. You can implement certain mechanisms (like a click), but these don’t always map to what the user thinks they are doing — something like a drag might involve mousedown/mousemove/mouseup events, or it might be implemented directly as dragging. Implementing one of those interactions is a lot easier than the other, but the distinction means nothing to the user.

When you implement incomplete interactions you are setting up a situation where a person can do something in the original application that viewers can’t do, even though it looks like the real live application. An uncanny valley of collaboration.

I’ve experimented with DOM-based screen sharing in Browser Mirror, and you can see this approach in a tool like Surfly. As I write this a minimal version of this is available in Hotdish.

In peer-to-peer collaboration both people are viewing their own version of the live page. Everything works exactly like in the non-collaborative environment. Both people are logged in as themselves. This is the model TogetherJS uses, and is also present as a separate mode in Hotdish.

This has a lot of obvious advantages over the problems identified above for screensharing. The big disadvantage is that hardly anything is collaborative by default in this model.

In the context of the web the building blocks we do have are:

• URLs. Insofar as a URL defines the exact interface you look at, then putting both people at the same URL gives a consistent experience. This works great for applications that use lots of server-side logic. Amazon is pretty great, for example, or Wikipedia. It falls down when content is substantially customized for each person, like the Facebook frontpage or a flight search result.

• Event echoing: events aren’t based on any internal logic of the program, they are something initiated by the user. So if the user can do something, a remote user can do something. Form fields are the best example of this, as there’s a clear protocol for doing form changes (change the value, fire a change event).
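
As a rough sketch of that protocol (in Python standing in for browser JavaScript, with hypothetical shapes for the event message and the handler registry), echoing a remote form change means doing exactly what a local edit would do, in the same two steps:

```python
def apply_remote_change(form_state, handlers, event):
    """Apply a form change received from a remote peer: change the
    value, then fire the field's change handlers, just as a local
    edit would. (Hypothetical data shapes, for illustration only.)"""
    field = event["field"]
    form_state[field] = event["value"]       # 1. change the value
    for handler in handlers.get(field, []):  # 2. fire the change event
        handler(event["value"])
```

In a real page the equivalent is setting the element’s value and dispatching a synthetic change event, so the application’s own listeners run as if the user had typed.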

But we don’t have:

• Consistent event results: events aren’t state changes, and transferring events about doesn’t necessarily lead to a consistent experience. Consider the modest toggle control, where a click on the toggler element shows or hides some other element. If our hidden states are out of sync (e.g., my toggleable element is hidden, yours is shown), sending the click event between the clients keeps them consistently and perfectly out of sync.

• Consistent underlying object models. In a single-page app, or whatever fancy JavaScript-driven webapp, a lot of what we see is based on JavaScript state and models that are not necessarily consistent across peers. This is in contrast to old-school server-side apps, where there’s a good chance the URL contains enough information to keep everything consistent, and ultimately the “state” is held on a single server or database that both peers are connecting to. But we can’t sync the clients’ object models, as they are not built to support arbitrary modification from the outside. (Apps that use a real-time database are a notable exception and tend to work well.)
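
The toggle problem can be made concrete with a tiny sketch (plain Python objects standing in for DOM elements): if the two peers start out of sync, faithfully echoing every click preserves the desynchronization rather than fixing it.

```python
class Toggle:
    """A minimal toggle control: each click flips the hidden state."""
    def __init__(self, hidden):
        self.hidden = hidden

    def click(self):
        self.hidden = not self.hidden

mine = Toggle(hidden=True)    # my toggleable element is hidden
yours = Toggle(hidden=False)  # yours is shown: already out of sync
for peer in (mine, yours):    # echo the same click to both peers
    peer.click()
# Both flipped, so they remain perfectly out of sync:
# mine is now shown, yours is now hidden.
```

Syncing the *state* (hidden or not) instead of the *event* (the click) would converge, but that requires the application to expose its state, which is exactly the work described below.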

To make this work the application usually has to support peer-to-peer collaboration to some degree. A messy approach can help, but can never be enough, not complete enough, not robust enough.

So peer-to-peer collaboration offers potentially more powerful and flexible kinds of collaboration, but only with work on the part of each application. We can try to make it as easy as possible, and maybe integrate with tools or libraries that support the kinds of higher-level synchronization we would want, but it’s never reliably easy.

### Synchronized vs. Coordinated Experiences

Another question: what kind of experiences do we want to create?

The most obvious real-time experience is: everyone sees the same thing. Everything is fully synchronized. In the screensharing model this is what you always get and what you have to get.

The obvious experience is probably a good starting point, but shouldn’t be the end of our thinking.

The trivial example here is the cursor point. We can both be editing content and viewing each other’s edits (close to full sync), but we don’t have to be at exactly the same place. (This is something traditional screensharing has a hard time with, as you are sharing a screen of pixels instead of a DOM.)

But other more subtle examples exist. Maybe only one person has the permission to save a change. A collaboration-aware application might allow both people to edit, while still only allowing one person to save. (Currently editors will usually be denied to people who don’t have permission to save.)

I think there’s fruit in playing with the timing of actions. We don’t have to replay remote actions exactly how they occurred. For example, in a Demonstration context we might detect that when the driver clicks a link the page will change. To the person doing the click the order of events is: find the link, focus attention on the link, move cursor to the link, click. To the viewer the order of events is: cursor moves, maybe a short click indicator, and boom, you are at a new page. There’s much less context given to the viewer. But we don’t have to display those events with the original timing; for instance, we could let the mouse hover over its target for a more extended amount of time on the viewer’s side.

High-level (application-specific) representation of actions could be available. Instead of trying to express what the other person is doing through every click and scroll and twiddling of a form, you might just say “Bob created a new calendar event”.

In the context of something like a bug tracker, you might not want to synchronize the comment field. Instead you might want to show individual fields for all participants on a page/bug. Then I can see the other person’s in-progress comment, even add to it, but I can also compose my own comment as myself.

This is where the peer-to-peer model has advantages, as it will (by necessity) keep the application in the loop. It does not demand that collaboration take one form, but it gives the application an environment in which to build a domain-specific form of collaboration.

We can imagine moving from screenshare to peer-to-peer through a series of enhancements. The first might be: let applications opt-in to peer-to-peer collaboration, or implement a kind of transparent-to-the-application screensharing, and from there tweak. Maybe you indicate some scripts should run on the viewer’s side, and some compound UI components can be manipulated. I can imagine with a component system like Brick where you could identify safe ways to run rich components, avoiding latency.

### How do you package all this?

Given tools and interactions, what is the actual context for collaboration?

TogetherJS has a model of a persistent session, and you invite people to that session. For purely technical reasons the session is bound to a specific domain, though not to a specific page.

In Hotdish we’ve used a group approach: you join a group, and your work clearly happens in the group context or not.

One of the interesting things I’ve noticed when getting feedback about TogetherJS is that people are most interested in controlling and adding to how the sessions are set up. While, as an implementor, I find myself drawn to the tooling and specific experiences of collaboration, there’s just as much value in allowing new and interesting groupings of people. Ways to introduce people, ways to start and end collaboration, ways to connect to people by role instead of identity, and so on.

Should this collaboration be a conversation or an environment? When it is a conversation you lead off with the introduction, the “hello”, the “so why did you call?”, and finish with “talk to you later” — when it is an environment you enter the environment and any co-participants are just there; you don’t pre-establish any specific reason to collaborate.

### And in conclusion…

I’m still developing these ideas. And for each idea the real test is if we can create a useful experience. For instance, I’m pretty sure there’s some ambient information we want to show, but I haven’t figured out what.

Experience has shown that simple history (as in an activity stream) seems too noisy. And is history shown by group or person?

In the past I unintentionally exposed all tab focus and unfocus in TogetherJS, and it felt weird to both expose my own distracted state and my collaborator’s distraction. But part of why it was weird was that in some cases it was simply distraction, but in other cases it was useful multitasking (like researching a question in another tab). Was tab focus too much information or too little?

I am still in the process of figuring out how and where I can explore these questions, build the next thing, and the next thing after that. The tooling I envision doesn’t feel impossibly far away, but there is still more than one iteration of work to be done, maybe many more than one; I can only see to the next peak.

Who else is thinking about these things? And thinking about how to build these things? If you are, or you know someone who is, please get in contact — I’m eager to talk specifics with people who have been thinking about it too, but I’m not sure how to find these people.

### Lukas Blakk — I’m looking at you, Gift Horse

I’m going to say something that might be controversial, or hard to understand for some folks, but it’s getting to the point where I’m starting to stay away from the office more than I’d like, so here goes:

The snacks. The never-ending supply that I would *never* eat otherwise. That I would not go to a corner store and purchase. I really wish they were gone. I wish that we, people who all make salaries above that needed for living decently, were accountable for buying and bringing in our own snacks as we chose. Keep them at your desk, share with nearby co-workers, I would love to see this. It would be so much better for me if the only things we had in the kitchen were fruit and veg. Milk for coffee, sure.

When I first started working for Mozilla, as a working class grew up broke kid, I was floored by all the free stuff & free food. I lived off it as an intern to save money. I appreciated it. It made me feel cared for. Now it’s like a trap. A constant test of my ability to make “good” decisions for myself 250 times a day. Often I fail. Failure makes me stay away from the office as an attempt to cope. Staying away from the office causes loss of connection with you all.

I suspect there might be feelings of being ‘punished’ if the snacks were less abundant (or even gone) because we’re used to all these ‘perks’ in our tech offices. It’s not something most offices (outside of the tech industry) have, and I would encourage a perspective shift towards accountability, recognizing the privileges we *already* have even without free all-day snacks, and thinking about what it means if some people have to choose to stay away. Consider that these snacks originated in a startup mentality where workers were expected to pull really long hours without getting up, going out, or going home. Is that really what we want to promote and call a perk?

### Andrew Halberstadt — A Workflow for using Mach with multiple Object Directories

Mach is an amazing tool which facilitates a large number of common user stories in the mozilla source tree. You can perform initial setup, execute a build, run tests, examine diagnostics, even search Google. Many of these things require an object directory. This can potentially lead to some confusion if you typically have more than one object directory at any given time. How does mach know which object directory to operate on?

It turns out that mach is pretty smart. It takes a very good guess at which object directory you want. Here is a simplification of the steps in order:

1. If cwd is an objdir or a subdirectory of an objdir, use that
2. If a mozconfig is detected and MOZ_OBJDIR is in it, use that
3. Attempt to guess the objdir with build/autoconf/config.guess
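
The resolution order above can be sketched roughly like this (a simplified, hypothetical rendering; the real logic lives in mach's bootstrap code and handles many more cases):

```python
import os

def guess_objdir(cwd, mozconfig_text=None):
    """Simplified sketch of mach's objdir resolution order."""
    # 1. If cwd is an objdir or a subdirectory of one, use it
    #    (here a config.status file stands in as the objdir marker).
    d = cwd
    while d != os.path.dirname(d):
        if os.path.exists(os.path.join(d, "config.status")):
            return d
        d = os.path.dirname(d)
    # 2. If a mozconfig is detected and it sets MOZ_OBJDIR, use that.
    if mozconfig_text:
        for line in mozconfig_text.splitlines():
            if "MOZ_OBJDIR" in line and "=" in line:
                return line.split("=", 1)[1].strip()
    # 3. Otherwise mach falls back to build/autoconf/config.guess.
    return None
```

The key point for the workflow below is step 2: anything that changes which mozconfig is in effect changes which objdir mach picks.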

The cool thing about this is that there are tons of different workflows that fit nicely into this model. For example, many people put the mach binary on their $PATH and then always make sure to 'cd' into their objdirs before invoking related mach commands. It turns out that mach works really well with a tool I had written a while back called mozconfigwrapper. I won't go into details about mozconfigwrapper here; for more info, see my previous post on it. Now for the sake of example, let's say we have a regular and a debug build called 'regular' and 'debug' respectively, and I want to run the 'mochitest-plain' test suite on each build, one after the other. My workflow would be (from any directory other than an objdir):

```
$ buildwith regular
$ mach mochitest-plain
$ buildwith debug
$ mach mochitest-plain
```

How does this work? Very simply, mozconfigwrapper exports the $MOZCONFIG environment variable under the hood anytime you call 'buildwith'. Mach will then pick up on this due to the second step listed above.

Your second question might be why bother installing mozconfigwrapper when you can just export MOZCONFIG directly? This is a matter of personal preference, but one big reason for me is the buildwith command has full tab completion, so it is easy to see which mozconfigs you have available to choose from. Also, since they are hidden away in your home directory, you don't need to memorize any paths. There are other advantages as well which you can see in the mozconfigwrapper readme.
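
Under the hood there is not much magic to a wrapper like buildwith: it just has to control the environment that mach will see. A minimal sketch (hypothetical storage layout; the real mozconfigwrapper does more, including the tab completion mentioned above):

```python
import os

def mozconfig_env(name, base="~/.mozconfigs"):
    """Return a copy of the environment with MOZCONFIG pointing at a
    named mozconfig (hypothetical layout under the home directory)."""
    env = dict(os.environ)
    env["MOZCONFIG"] = os.path.join(os.path.expanduser(base), name)
    return env

# A subsequent mach invocation run with this environment picks up the
# mozconfig via the second step of the resolution order listed above:
# subprocess.run(["mach", "mochitest-plain"], env=mozconfig_env("debug"))
```
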

I've especially found this workflow useful when building several platforms at once (e.g. Firefox and B2G desktop) and switching back and forth between them with high frequency. In the end, to each their own, and this is just one possible workflow out of many. If you have a different workflow please feel free to share it in the comments.

### Michael Kaply — New Features for CCK2

I realize it's been quite a while since I've posted any updates. I've been heads down on some projects, as well as dealing with some personal issues.

I have been working on the CCK2, though, and I have a new version to share.

This version adds some requested features, including custom JavaScript in the AutoConfig file as well as better backup of your configurations. It also migrates configurations from the original CCK Wizard.

I've also changed the versioning to make it clear that this version is newer than the CCK Wizard.

My plan is to deprecate and remove the old CCK Wizard in the next few weeks, so please take some time to make sure the new CCK2 works for you.

### Doug Belshaw — What’s new with Open Badges?

Those keeping track will know that last year I moved teams within the Mozilla Foundation. I moved away from the Open Badges team to focus on (what is now) the Web Literacy Map. Despite this, I still have close ties to the Open Badges team. In fact, I’m currently helping design Webmaker and Web Literacy badges.

The big news at the start of 2014 on the Open Badges front is that there’s a new Badge Alliance to grow and develop the wider ecosystem. The Badge Alliance is a non-profit organisation to be led by Erin Knight, co-founder of the Open Badges Infrastructure (OBI). Over the next few months she’ll be joined at the Badge Alliance with a few members of the current Open Badges team. There’s more detail in Erin’s blog post.

Happily, Mozilla will continue to develop and nurture the open source technical stack behind the OBI. The next milestone is the release of BadgeKit in the next few months. This should remove any remaining friction from issuing Open Badges. For more on BadgeKit be sure to follow the blogs of Sunny Lee and Chris McAvoy. And, as ever, you should also follow Carla Casilli’s posts on badge system design.

If you want to keep up with what’s going on with Open Badges in general, the easiest thing to do is to keep tabs on the Open Badges blog. The weekly ‘Badger Beats’ in particular is a useful round-up of news from the world of badges. There’s also a good deal of conversation within the Open Badges discussion group. This is a friendly forum for those planning to dip their toes into the water for the first time.

Having joined Mozilla in 2012 to work both on the Open Badges project and (what’s grown into) the Web Literacy Map, I’m delighted that the former has been incubated with such success. I’m also pleased that the latter is to underpin both the next iteration of Webmaker and Mozilla’s aims to create a more web literate planet.

If you’d like to get involved with Mozilla’s work to create a better web then we’d love to have you onboard! The easiest way to get involved with the two projects I’ve mentioned is to join their respective weekly calls. The Open Badges community call is every Wednesday, and you can join us for the new #TeachTheWeb community call every Thursday.

Questions? I’ll do my best to respond to them in the comments below.

Image CC BY-NC-SA RSC Scotland

## March 02, 2014

### Tobias Markus — Atypical Confessions of an atypical contributor

I, your favorite hypocrite, traitor or whatever people might call me, thought about writing this for a long time. Just to give a different perspective on things. I have been a contributor for a rather long time (5 years to be exact). I confess that I am not a regular contributor. These things are mostly thoughts that I have, and I am going to be blunt about them. If people can’t deal with it, so be it. And this is not gonna be about Directory Tiles, even though I am eager to write my thoughts about them, but, no, not this time.

#1 – I hate the word ‘Mozillians’.

The word “Mozillians” suggests that people contributing to the Mozilla project are all the same. All one kind. It takes the focus away from potential differences in personality or culture. In my opinion, these differences need to be valued. For we are all different. And the fact that we contribute doesn’t take that difference away.

There’s another reason why I hate that word: “Mozillian” seems like “Human”. A status that, once acquired, lasts forever and cannot be taken away. However, what if I don’t want to be part of that group anymore? I think the term “contributor” is much more suited for that. It still gives some means of escape from the whole thing.

#2 – Products mean nothing to me.

Given there’s so much focus on products nowadays: I don’t like that. Products mean nothing to me. It’s just a name for what? Bunch of code? Initiatives? Philosophical thoughts? Honestly, do we need a name for everything? If we think about it, our product is not so different from other products. The browser is written in some programming language and evolved over years. Same for other products. That doesn’t make it ‘awesome’. That just means it’s evolved. Product names are just for identification, and that’s marketing. I don’t have anything to do with marketing reasons. I have my own mind.

Are we humans awesome? You’d assume it depends on the point of view. And YES: IT DOES!

#3 – I don’t like direction.

I am someone who values individuality. I don’t like direction. Giving direction means potentially saying “I know more than you do” or “I am better than you”, therefore I tell you what to do, and I will never accept that. Not when it comes to such a diverse group of people as the contributor base.

#4 – Good things take time

I don’t like things being rushed. That’s not how I work. There have been so many things in the past years that were rushed and then improved and improved, or abandoned for various reasons. Why not improve them and release them when ready? Unfinished projects do not make us more popular with the userbase.

#5 – Controversial discussions are good discussions

On a related note: in order for discussions to be healthy, it’s important for them to be vivid. Vivid discussions mean that any standpoint is accepted. It might get a little heated from time to time, but that way we can ensure that a lot of thoughts are covered in the discussion. I hate it when the community doesn’t even have a say in what gets decided.

#6 – I hate the “1 million mozillians” initiative.

It’s hard to manage 1 million people. And frankly, I doubt Mozilla can do this. Let’s think about this: People are people, they have their own mind, their own ideals, their own hobbies, their own opinion. You can find more potential reasons in my comment here: http://hoosteeno.com/2013/12/17/a-new-mozillians-org-signup-process/#comment-159

What do they need 1 million contributors for? Hint: It’s not to benefit the people themselves.

### Jeff Griffiths — How I (quickly) write Firefox Extensions

Over the last 2 or so years I've written a lot of add-ons and in particular a lot of very very simple add-ons that are either quick examples to show someone how to do something, or to try to reproduce a bug in Firefox or the SDK. And because I'm lazy, I've automated a lot of the tedious parts of this. Here's how I do it.

### Disclaimer

I use OS X and a lot of this assumes having a reasonable UNIX shell environment. If you're on Windows and don't have bash, some parts of this will not work for you. Thankfully, node.js 'just works' everywhere.

• you need to install the Add-on SDK, the instructions are over here. In particular you need to make sure that the bin/cfx script is on your $PATH.
• in Firefox, install Wladimir Palant's excellent Extension Auto-installer extension. This extension allows you to reload newer builds of your extension into Firefox without having to re-start.

### Creating a skeletal add-on

The cfx tool does this already, but it creates a skeletal add-on with example code and I almost never want that. So I use this quick shell script called jpinit:

```bash
#!/bin/bash
_PWD=$(pwd)
mkdir -p "$_PWD/$1" && cd "$_PWD/$1" && cfx init
echo "" > lib/main.js &&
echo "" > doc/main.md &&
echo "" > test/test-main.js &&
echo "" > ./README.md
```

Put this on your path too; I tend to put scripts like this in $HOME/usr/local/bin or similar.
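If you don't have the SDK installed yet but want to see the shape jpinit leaves behind, the non-cfx part of it can be sketched like this (cfx init would additionally write package.json and stub content into the files; "my-addon" is just an example name):

```shell
# Sketch of the skeleton jpinit produces, minus the cfx call,
# so it is safe to run anywhere.
dir="$(mktemp -d)/my-addon"
mkdir -p "$dir/lib" "$dir/doc" "$dir/test"
echo "" > "$dir/lib/main.js"
echo "" > "$dir/doc/main.md"
echo "" > "$dir/test/test-main.js"
echo "" > "$dir/README.md"
ls "$dir"   # doc  lib  README.md  test
```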

Once I've got a blank project going, I get to the fun part of actually implementing features. Whenever I want to test my code, all I need to do is run this script and it will build and install my add-on in Firefox:

https://gist.github.com/canuckistani/9301061

Under the hood it's just using node's child_process module to first run cfx xpi, then wget to upload the xpi file to Firefox.

How does this even work? That's the magic of the 'Extension Auto-installer' extension: it runs a local web server inside Firefox that listens on a port for xpi file uploads. If it gets an xpi file, it tries to install or reload it.
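Stripped of the node wrapper, the whole loop is essentially two commands. Port 8888 is the Extension Auto-installer's default (configurable in its options), and the xpi filename here is illustrative:

```shell
# The build-and-reload loop by hand; the two commands are commented out
# so this sketch is safe to paste -- they need cfx and a running Firefox.
PORT=8888                 # Extension Auto-installer's default port
XPI="my-addon.xpi"        # whatever 'cfx xpi' names your package
# cfx xpi
# wget --post-file="$XPI" "http://localhost:$PORT/" -O -
```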

### Editor Support

I primarily use Sublime Text. The slightly tricky thing about using ST for add-on development is that internal Firefox and add-on code is able to use more advanced JS features. To convince ST that things like the let and yield keywords are okay, you need to do the following:

• install Brandon Benvie's JavaScriptNext package, and use it with JS files by going to View / Syntax / 'Open all with current extension as...' and selecting 'JavaScriptNext - ES6 Syntax'.
• install a linter, I use SublimeLinter. I use it in jshint mode, with jshint installed and this .jshintrc file
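I won't reproduce the linked .jshintrc here, but a plausible minimal one for SDK code might look like this (the `moz` flag is jshint's switch for Mozilla-specific extensions such as let and yield; `predef` covers the CommonJS globals the SDK injects):

```json
{
  "moz": true,
  "esnext": true,
  "undef": true,
  "unused": true,
  "predef": ["require", "exports", "module", "console"]
}
```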

Here's a quick screencast to show you how quickly this can get you hacking:

When you consider how time-consuming this has been in the past, I'm pretty happy with it, in particular if I just need to hack on something to answer a question on Stack Overflow or from IRC. The time-consuming part becomes writing code, and that's the way it should be.

I do expect this all to change soon; the Jetpack team has been working on some new features that will provide everyone with an even more streamlined process without the need for extra extensions or shell scripts, as well as an advanced debugger specifically for add-ons. For a glimpse of just how awesome this will be, you really need to check out Erik's post from last week.

### Christian Heilmann — Too easy – didn’t learn – my keynote at jQuery Europe 2014

I am right now on the plane back to England after my quick stint at Vienna giving the keynote at jQuery Europe 2014. True to my ongoing task to challenge myself as a speaker (and as announced here before) I made a bit of a bet by giving a talk that is not in itself technical, but analytical of what we do as developers. The talk was filmed and if you can’t wait, I made the slides available and recorded a screencast (with low sound, sorry).

There is also an audio recording on SoundCloud and on archive.org.

### Quick keynote recap

In the keynote, I tried to analyse the massive discrepancy between what we as web developers get and how happy we seem to be.

We are an elite group in the job market: we are paid well, our work environment is high-tech and our perks make other people jealous. We even get the proverbial free lunches.

And yet our image is that of unsatisfied, hard-to-work-with people who need to be kept happy and are socially awkward. I was confused that a group with all the necessary creature comforts is not an example of how easy working together could be. Instead, we even seem to need codes of conduct for our events to remind people not to behave badly towards people of another sex or cultural background. Are we spoiled? Are we just broken? Or is there more?

I’ve found a few reasons why we can come across as unsatisfied and hard to handle and the biggest to me was that whilst we are getting pampered, we lack real recognition for what we do.

When you get a lot, but you yourself feel you are not really doing much, you are stuck between feeling superior to others who struggle with things you consider easy, and feeling like a fraud. Instead of communicating what we do, how much work it involves and why we do things in a certain way, we seem to flee into a world of blaming our tools and trying to impress one another.

### Initial Feedback

I am very happy to report that the feedback I got at the event was very good. I had some criticism, which is great as it gives me something to think about. And I had some heartfelt remarks from people who said I’ve opened their eyes a bit as to why they behaved in a certain way and now know how to fix some issues and clashes they had.

### Want more?

I don’t want to repeat it all here again – if there is interest, I could write a larger article on the subject to be published somewhere with more eyeballs. Simply listen to the recording or wait for the video to be released.

### Material

I couldn’t have done this without watching some other talks and reading some other posts, so here are links to the materials used:

### Thanks

I want to thank the audience of jQuery Europe for listening and being open to something different. I also want to thank the organisers for taking the chance (and setting the speakers up in the most epic hotel I ever stayed in). I also want to point out that another talk at jQuery Europe 2014 – “A Web Beyond Touch” by Petro Salema was one of the most amazing first stage performances by a speaker I have seen. So keep your eyes open for this video.

Photo by Douglas Neiner

### Andy McKay — Foundations

As it turns out, software foundations can be pretty useful. There's a tipping point in open source software when all of a sudden money becomes involved and it starts to get serious. Companies are formed, trademarks are violated and all of a sudden, stuff becomes real.

About 10 years ago I was around when Plone went from a small project to a "starting to get serious project". Alexander Limi and Alan Runyan had the foresight right there and then to realise that they needed a foundation, someone to hold the IP and trademarks of Plone. This got it away from being controlled by individuals or companies and instead placed in the community.

The Plone Foundation was formed in 2004, and one of the issues was that a company, Plone Solutions, had been formed in Norway to provide consulting. Having a company with the project's name in its title was a trademark violation. It also led to confusion within the project and implied a relationship between the company and the project that didn't exist. The foundation and the company talked, and the company amicably changed its name to Jarn instead. Although Jarn is no longer around, Plone is still going strong and so is the Plone Foundation.

As another example, the Python Software Foundation recently had to fight for the Python trademark in Europe when a hosting company tried to trademark Python. The company backed down when the Python community stepped up to help the foundation fight the claim and proceed with its own trademark registration.

The Mozilla foundation, not the Mozilla corporation, holds the trademark and intellectual property for Mozilla projects. To commit to a Mozilla project, you have to agree to the committers agreement. Like other foundations it fights against people abusing trademarks. For example, taking Firefox, bundling it up with malware and then distributing it under the Firefox trademark.

In all these cases it's not a company holding the trademarks and any intellectual property (depending on the foundation and the software license). Instead, a foundation is created by the developers and maintained by the development community. It is the foundation's job to act on behalf of the project's software community to maintain and improve it.

And I think Node.js is finding out why it needs one.

Note: I was on the Plone Foundation board, I'm currently the secretary of the Django Software Foundation and I work at Mozilla Corporation (not the Mozilla Foundation)

### Planet Mozilla Interns — Willie Cheong: Taste of Good Old Times

Today I met Erasmus again after almost 4 long years. Back in Singapore, I remember we would hang out with the usuals every other day for lunch/dinner/12am meals. How time has gone by since then. I heard about how everyone was doing well and we had a great time catching up.

Ever since moving to Canada I have often wondered whether old friends will meet again and, despite the years, still have a great time hanging out. I occasionally have dreams about meeting my old friends and feeling awkward in the moment because we’ve all changed and been disconnected for so long. The image of these friends stays frozen in my mind exactly as they were when I last saw them years ago.

Yes, indeed we have changed. The friends we once thought were the best people ever have grown, matured, started new careers, gotten new experiences and built new lives. But friends will always be friends. Meeting with Erasmus today felt good, and sort of heartwarming. It’s like watching Toy Story 3 after many years since Toy Story 2, but way better because this is real life.

## March 01, 2014

### Priyanka Nag — GNUnify 2014

This was my third GNUnify in a row as a participant....and the second one as an organizer. Being a student of SICSR and a member of PLUG, GNUnify is somehow way more than just another conference....it's MY conference. Everything about GNUnify is a bit too special for me.

The GNUnify fever gets to all the organizers right from December, but this time, being away from college (because of my job)....I caught the fever a bit late ;)
Right from the 7th of Feb, a week before the commencement of the event, I went to college almost every evening to jump into the excitement of GNUnify preparation. It's very difficult to explain to others how we get this passion for the so-called EXTRA work after office, but we who live it...just LOVE it!
GNUnify is of course way more to me than just the Mozilla Dev rooms. This year, the Mozilla Dev rooms were a tough job though...due to the lack of budget.
The last MozCafe meeting that we had before GNUnify was a tricky one. Planning an event like GNUnify with "0" budget is not easy (rather, it's not possible)....somehow we did manage it well with a little spending from our own pockets :P

Day one was planned to be a bit different this year. Instead of the normal tracks, we tried out 'the booth format': project tables (kind of booths) for as many Mozilla projects as we could manage, with project experts at each booth. It wasn't much of a success though. There were several reasons behind the failure of this format on Day one:
• Lack of a proper space - The space provided wasn't exactly a suitable structure for booths.
• Improper crowd management - When in a technical conference, we never like to wait. Initially our booths were so crowded that people had to wait outside the room, which eventually ended up with us losing them.
• Improper publicity - Somehow, the entire booth format wasn't well advertised to our audience. The people who visited the Mozilla rooms expected the traditional tracks and were disappointed not to find them.

 A glimpse of the MDN booth on Day I

Day two went much better. On this day, we had several different tracks: Webmaker, MDN, Localization, Privacy & Security, Rust, Firefox OS, Reps and FSA. Things went better than expected. For sessions like Firefox OS, we even had to send some of our audience away due to the lack of seating space in the Dev rooms.

A few very important learnings from GNUnify 2014:
• For any Open Source conference, using Windows OS while making a presentation should be strictly avoided. The Mozillians had to face some criticism as some of our speakers forgot this rule of thumb.
• A little more coordination is required within the team...mainly in situation where we have limited resources and a huge responsibility, everyone needs to understand their role well.
GNUnify 2014 was a good show...though there is obviously immense room for improvement. Hopefully 2015 will be even better :)