Mozilla Release Management Team: Firefox 32 beta7 to beta8

  • 20 changesets
  • 52 files changed
  • 363 insertions
  • 162 deletions



List of changesets:

Ryan VanderMeulen: Bug 1023472 - Disable test_bug935876.html on Android for perma-failing when pushed to a different chunk; a=bustage - 1764a68fe1ae
Ryan VanderMeulen: Bug 1054087 - Disable test_dom_input_event_on_htmleditor.html on Android 2.3 for perma-failing since the number of Android mochitest chunks was increased; a=bustage - ef94af3dd0ad
Jon Coppeard: Bug 999158 - Keep a spare chunk around to mitigate GGC OOM crashes on tenuring. r=terrence, a=lmandel - 97fd0156fdc2
Ryan VanderMeulen: Bug 1026805 - Disable frequently-hanging mozapps tests on OSX. a=test-only - 76f7c4f771f5
Matthew Noorenberghe: Bug 1054411 - Cancel the HTTP requests in browser_keywordSearch.js to avoid making network contact. r=adw, a=test-only - 6dec02f8d0ea
Florian Quèze: Bug 1048375 - browser_aboutHome.js intermittently causes external requests to r=gavin, a=test-only - 8e09aad61a79
Randell Jesup: Bug 1054166: Mirror Add/RemoveListener in Add/RemoveDirectListener r=roc a=abillings - 6a2810252cf8
Simon Montagu: Bug 1037641 - Split SetDirectionFromChangedTextNode into TextNodeWillChangeDirection and TextNodeChangedDirection. r=ehsan, a=abillings - 9e94aa2f0ae7
Brian Hackett: Bug 1053683 - Add overrecursion checks to FillInBMInfo. r=jandem, a=abillings - c6e134b4ed52
Ed Lee: Bug 1039881 - Use an empty directory tiles data source pref before uplift [r=adw r=bholley a=lmandel] - 6790f9333fec
Wes Johnston: Bug 910893 - Don't disable the try again button. r=margaret, r=benb, a=lmandel - 7bb962c117df
Valentin Gosu: Bug 1045886 - Remove Cache directory from Android profiles. r=michal, a=lmandel - 07eb5ce30325
Valentin Gosu: Bug 1045886 - Increase assertion count in test_bug437844.xul. a=test-only - c444cb84a78b
Jan de Mooij: Bug 1054359 - Add is-object check to IonBuilder::makeCallHelper. r=efaust, a=lmandel - f5bfa8f3434c
Jared Wein: Bug 1016434 - Backout Bug 759252 from Firefox 32 and Firefox 33 for causing blurry throbbers. a=lmandel - 3741e9a5c6ca
Jean-Yves Avenard: Bug 1045591 - Fix media element's autoplay for audio-only stream. r=cpearce, a=lmandel - f595bdcdbd1e
Alessio Placitelli: Bug 1037214 - Throw OOM to the script instead of aborting in FragmentOrElement::GetTextContentInternal. r=bz, a=lmandel - 353ade05d903
Ed Morley: Bug 1026987 - Give the MOZ_DISABLE_NONLOCAL_CONNECTIONS error a TBPL-parsable prefix. r=froydnj, a=NPOTB - 92aead6bd5fb
Andrew McCreight: Bug 1039633 - Always try to set the ASan symbolizer in gtest runs. r=ted, a=test-only - e0e150f31ffe
Tooru Fujisawa: Bug 1053692 - Do not use optimized stub for spread call with many arguments. r=jandem, a=lmandel - 45953c4613d2

Andrew Overholt: “Bootcamp” talks on Air Mozilla

Thanks to Jonathan Lin and Spencer Hui some of the talks that were presented at the recent “bootcamp” are appearing on Air Mozilla and more will do so as we get them ready. They’re all in Air Mozilla’s engineering channel:

Gregory Szorc: Submit Feedback about Mercurial

Are you a Mozillian who uses Mercurial? Do you have a complaint, suggestion, observation, or any other type of feedback you'd like to give to the maintainers of Mercurial? Now's your chance.

There is a large gathering of Mercurial contributors next weekend in Munich. The topics list is already impressive. But Mozilla's delegation (Mike Hommey, Ben Kero, and myself) would love to advance Mozilla's concerns to the wider community.

To leave or vote for feedback, please visit before August 29 so your voice may be heard.

I encourage you to leave feedback about anything, big or small, Mozilla-specific or not. Comparisons to Git, GitHub and other version control tools and services are also welcome.

If you have feedback that can't be captured in that moderator tool, please email me.

Michael Kaply: Webconverger

One of the projects I've been working on is Webconverger. Webconverger is an open source Linux-based kiosk that uses a customized version of Firefox as the user interface.

Webconverger is a great choice if you are setting up a kiosk or digital signage. It can be quickly and easily deployed on any type of machine. It works especially well on legacy hardware because of its low resource requirements. It can even be installed onto a USB stick and simply plugged in to an existing machine.

The configuration for the kiosk is downloaded from a server allowing you to customize your kiosk remotely and it will pick up your latest changes. It has a full featured API that allows you to do things like customize the browser chrome or whitelist certain sites. Plus it even stays updated automatically if you choose by downloading the latest version in the background.

If you're looking for a kiosk or digital sign solution, I would definitely recommend checking it out. Go to for more information or email

Will Kahn-Greene: Input status: August 19th, 2014


High-level summary:

It's been a slower two weeks than normal, but we still accomplished some interesting things:

  • L Guruprasad finished cleaning up the Getting Started guide--that work helps all future contributors. He did a really great job with it. Thank you!
  • Landed a minor rewrite to rate-limiting/throttling.
  • Redid the Elasticsearch indexing admin page.
  • Fixed some Heartbeat-related things.

Landed and deployed:

  • cf2e0e2 [bug 948954] Redo index admin
  • f917d41 Update Getting Started guide to remove submodule init (L. Guruprasad)
  • 5eb6d6d Merge pull request #329 from lgp171188/peepify_submodule_not_required_docs
  • c168a5b Update peep from v1.2 to v1.3
  • adf7361 [bug 1045623] Overhaul rate limiting and update limits
  • 7647053 Fix response view
  • f867a2d Fix rulename
  • 8f0c36e [bug 1051214] Clean up DRF rate limiting code
  • 0f0b738 [bug 987209] Add django-waffle (v0.10)
  • b52362a Make peep script executable
  • 461c503 Improvie Heartbeat API docs
  • 8f0ccd3 [bug 1052460] Add heartbeat view
  • d1604f0 [bug 1052460] Add missing template

Landed, but not deployed:

  • ed2923f [bug 1015788] Cosmetic: flake8 fixes (analytics)
  • afdfc6a [bug 1015788] Cosmetic: flake8 fixes (base)
  • 05e0a33 [bug 1015788] Cosmetic: flake8 fixes (feedback)
  • 2d9bc26 [bug 1015788] Cosmetic: flake8 fixes (heartbeat)
  • dc6e990 Add anonymize script

Current head: dc6e990

Rough plan for the next two weeks

  1. Working on Dashboards-for-everyone bits. Documenting the GET API. Making it a bit more functional. Writing up some more examples. (
  2. Update Input to ElasticUtils v0.10 (bug 1055520)
  3. Land all the data retention policy work (bug 946456)
  4. Gradients (
  5. Product administration views (bug 965796)

Most of that is in some state of half-done, so we're going to spend the next couple of weeks focusing on finishing things.

What I need help with

  1. (django) Update to django-rest-framework 2.3.14 (bug 934979) -- I think this is straight-forward. We'll know if it isn't if the tests fail.
  2. (django, cookies, debugging) API response shouldn't create anoncsrf cookie (bug 910691) -- I have no idea what's going on here because I haven't looked into it much.
  3. (html) Fixing the date picker in Chrome (bug 1012965) -- The issue is identified. Someone just needs to do the fixing.

For details, see our GetInvolved page:

If you're interested in helping, let me know! We hang out on #input on and there's the input-dev mailing list.

Additional thoughts

We're in the process of doing a Personally Identifiable Information audit on Input, the systems it's running on and the processes that touch and move data around. This covers things like "what data are we storing?", "where is the data stored?", "who/what has access to that data?", "does that data get copied/moved anywhere?", "who/what has access to where the data gets copied/moved to?", etc.

I think we're doing pretty well. However, during the course of the audit, we identified a few things we should be doing better. Some of them already have bugs, one of them is being worked on already and the others need to be written up.

Some time this week, I'll turn that into a project and write up missing bugs.

That's about it!

Adam Lofting: Trendlines and Stacking Logs


  • Our MoFo dashboards now have trendlines based on known activity to date
  • The recent uptick in activity is partly new contributors, and partly new recognition of existing contributors (all of which is good, but some of which is misleading for the trendline in the short term)
  • Below is a rambling analogy for thinking about our contributor goals and how we answer the question ‘are we on track for 2014?’
  • + if you haven’t seen it, OpenMatt has crisply summarized a tonne of the data and insights that we’ve unpicked during Maker Party

Stacking Logs

I was stacking logs over the weekend, and wondering if I had enough for winter, when it struck me that this might be a useful analogy for a post I was planning to write. So bear with me, I hope this works…

To be clear, this is an analogy about predicting and planning, not a metaphor for contributors* :D

So the trendline looks good, but…

Screen Shot 2014-08-19 at 11.47.27

Trendlines can be misleading.

What if our task was gathering and splitting logs?

Vedstapel, Johannes Jansson (1)

We’re halfway through the year, and the log store is half full. The important question is, ‘will it be full when the snow starts falling?’

Well, it depends.

It depends how quickly we add new logs to the store, and it depends how many get used.

So let’s push this analogy a bit.

Firewood in the snow

Before this year, we had scattered stacks of logs here and there, in teams and projects. Some we knew about, some we didn’t. Some we thought were big stacks of logs but were actually stacked on top of something else.

Vedstapel, Johannes Jansson

Setting a target was like building a log store and deciding to fill it. We built ours to hold 10,000 logs. There was a bit of guesswork in that.

It took a while to gather up our existing logs (build our databases and counting tools). But the good news is, we had more logs than we thought.

Now we need to start finding and splitting more logs*.

Switching from analogy to reality for a minute…

This week we added trendlines to our dashboard. These are two linear regression lines: one based on all activity for the year to date, and one based on the most recent 4 weeks. They give quick feedback on whether recent actions are helping us towards our targets and whether we’re improving over the year to date.
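For anyone curious what those two trendlines amount to, here is a minimal sketch using plain least squares. The weekly numbers are invented for illustration, not our real dashboard data:

```python
# Two least-squares trendlines: one over all weekly contributor counts,
# one over only the most recent 4 weeks. Numbers below are made up.

def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

weeks = list(range(1, 9))                               # week numbers, year to date
counts = [200, 400, 550, 700, 1100, 1600, 2100, 2600]   # cumulative contributors

slope_all, icpt_all = linear_fit(weeks, counts)
slope_recent, icpt_recent = linear_fit(weeks[-4:], counts[-4:])

# Project both lines out to week 52 to ask "are we on track?"
print(round(slope_all * 52 + icpt_all))
print(round(slope_recent * 52 + icpt_recent))
```

With invented data like this, the 4-week line is steeper than the year-to-date line, which is exactly the "recent uptick" effect described above.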

These are interesting, but can be misleading given our current working practices. The trendline implies some form of destiny. You do a load of work recruiting new contributors, see the trendline is on target, and relax. But relaxing isn’t an option because of the way we’re currently recruiting contributors.

Switching back to the analogy…

We’re mostly splitting logs by hand.

Špalek na štípání.jpg

Things happen because we go out and make them happen.

Hard work is the reason we have 1,800 Maker Party events on the map this year and we’re only half-way through the campaign.

There’s a lot to be said for this way of making things happen, and I think there’s enough time left in the year to fill the log store this way.

But this is not mathematical or automated, which makes trendlines based on this activity a bit misleading.

In this mode of working, the answer to ‘Are we on track for 2014?’ is: ‘the log store will be filled… if we fill it’.


Holzspalter 2

As we move forward, and think about scale… say a hundred-thousand logs (or even better, a Million Mozillians). We need to think about log splitting machines (or ‘systems’).

Systems can be tested, tuned, modified and multiplied. In a world of ‘systems’ we can apply trendlines to our graphs that are much better predictors of future growth.

We should be experimenting with systems now (and we are a little bit). But we don’t yet know what the contributor growth system looks like that works as well as the analogous log splitting machines of the forestry industry. These are things to be invented, tested and iterated on, but I wouldn’t bet on them as the solution for 2014 as this could take a while to solve.

I should also state explicitly that systems are not necessarily software (or hardware). Technology is a relatively small part of the systems of movement building. For an interesting but time consuming distraction, this talk on Social Machines from last week’s Wikimania conference is worth a ponder:

Predicting 2014 today?

Even if you’re splitting logs by hand, you can schedule time to do it. Plan each month, check in on targets and spend more or less time as required to stay on track for the year.

This boils down to a planning exercise, with a little bit of guess work to get started.

In simple terms, you list all the things you plan to do this year that could recruit contributors, and how many contributors you think each will recruit. As you complete some of these activities you reflect on your predictions, and modify the plans and update estimates for the rest of the year.
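That listing-and-estimating exercise boils down to arithmetic like the following sketch. The activity names and per-activity estimates are invented; the starting count is the Aug 15 figure reported elsewhere in this digest:

```python
# Toy version of the planning exercise: list planned activities with
# estimated recruits, then compare the total (plus contributors
# recruited so far) against the year-end target. All activity names
# and estimates here are invented for illustration.

target = 10_000
recruited_so_far = 5_529   # the Aug 15 count reported in this digest

planned = {
    "Maker Party second wave": 2_000,
    "Back-to-school events": 1_200,
    "Train-the-trainer sessions": 600,
}

forecast = recruited_so_far + sum(planned.values())
print(forecast, forecast >= target)
```

As activities complete, you replace estimates with actuals and re-run the comparison, which is what feeding planned activity into the dashboard as a trendline amounts to.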

Geoffrey has put together a training workshop for this, along with a spreadsheet structure to make this simple for teams to implement. It’s not scary, and it helps you get a grip on the future.

From there, we can start to feed our planned activity and forecast recruitment numbers into our dashboard as a trendline rather than relying solely on past activity.

The manual nature of the splitting-wood-like-activity means what we plan to do is a much more important predictor of the future than extrapolating what we have done in the past, and that changing the future is something you can go out and do.

*Contributors are not logs. Do not swing axes at them, and do not under any circumstances put them in your fireplace or wood burning stove.

Laurent Jouanneau: Release of SlimerJS 0.9.2

A few days ago, I released a minor version of SlimerJS, my scriptable browser based on XulRunner: SlimerJS 0.9.2.

If you are discovering my project: this is a browser which is controlled by a script, not by a human, so it has no user interface. In fact, it is a browser like PhantomJS, offering the same API as PhantomJS, but it is based on Gecko, not on WebKit. See my previous post about the start of the project.

This new version fixes some bugs and is now compatible with Gecko/Firefox/Xulrunner 31.

Next big work on SlimerJS:

  • fix last issues that prevent GhostDriver to work well with SlimerJS
  • support Marionette(
  • try to implement remote debugging, to allow to debug your script from Firefox Dev Tools
  • try to have a true headless browser (so to have a browser without visible windows)

Help is welcome. See you on GitHub ;-)

Christian – drop images into the browser, get a zip of thumbnails

About 2½ years ago I wrote a demo for Mozilla Hacks showing how to use Canvas to create thumbnails. Now I felt the itch to update it a bit and add more useful functionality. The result is:

It is very easy to use: Drop images onto the square and the browser creates thumbnails for them and sends them to you as a zip.


Thumbnail settings page

You can set the size of the thumbnails, choose whether you want them centered on a coloured background of your choice or cropped to their real size, and set the quality. All of this has a live preview.
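The sizing arithmetic behind a tool like this can be sketched as follows. The real app does this in the browser with Canvas `drawImage`; the function name and numbers here are purely illustrative:

```python
# Scale a source image to fit inside a square thumbnail box, keeping
# aspect ratio, and compute the offsets needed to centre it on a
# background. (The "crop to real size" option would instead shrink the
# canvas to new_w x new_h and skip the offsets.)

def fit_and_centre(src_w, src_h, thumb):
    """Return (new_w, new_h, offset_x, offset_y) for a thumb x thumb box."""
    scale = min(thumb / src_w, thumb / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    return new_w, new_h, (thumb - new_w) // 2, (thumb - new_h) // 2

print(fit_and_centre(1600, 900, 200))
```

A 16:9 image in a 200px box, for example, scales to 200x112 and gets pushed down 44px so it sits centred on the background.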

If you resize the browser to a very small size (or click the pin icon on the site and open a popup) you can use it as neat extra functionality for Finder:

resize to simple mode

All of your settings are stored locally, which means everything will be ready for you when you return.

As there is no server involved, you can also download the app and use it offline.

The source, of course, is available on GitHub.

To see it in action, you can also watch a quick walkthrough of Makethumbnails on YouTube.

Happy thumbing!


Rizky Ariestiyansyah: Webmaker with SMK ITACO

On August 18, 2014 we carried out the Webmaker event we’d scheduled previously. The event was held at SMK ITACO Bekasi, a vocational school for children from less privileged economic backgrounds. We only...

The post Webmaker with SMK ITACO appeared first on oonlab.

Doug Belshaw: Facebook and Twitter: beyond the like/favorite binary?

There’s been a couple of developments with the social networks Facebook and Twitter that fit together quite nicely this week. The first is the news that Facebook likes make a huge difference in terms of what you see while browsing your news feed:

Wired writer Mat Honan found out what happens when you like every single thing that shows up in your Facebook feed. The results were dramatic: Instead of his friends’ updates, he saw more and more updates from brands and publishers. And, based on what he had liked most recently, Facebook’s algorithm made striking judgements about his political leanings, giving him huge numbers of extremely right-wing or extremely left-wing posts. What’s more, all that liking made Honan’s own posts show up far more in his friends’ feeds — distorting their view of the world, too.

But Medium writer Elan Morgan tried the opposite experiment: Not liking anything on Facebook. Instead of pressing like, she wrote a few thoughtful words whenever she felt the need to express appreciation: “What a gorgeous shock of hair” or “Remember how we hid from your grandmother in the gazebo and smoked cigarettes?” The result, as you might guess, is just the opposite of Honan’s experience: Brand messages dwindled away and Facebook became a more relaxed, conversational place for Morgan.

The second piece of news is that Twitter is experimenting with changes to the way that ‘Favorites’ work:

Favorites have also been pseudo-private; while you can view a list of favorited tweets from an account’s profile page or on a tweet’s detail page, typically only the “favoriter” and the “favoritee” ever know about it. If Twitter starts surfacing favorited tweets in timelines, they’ve suddenly become far more public. The change — and the backlash — is somewhat similar to Facebook’s attempts to share just about everything “friends” did with Open Graph.


For those who have used Twitter for years, the change is so shocking it can seem like the company is completely ignorant to how its customers use the service. But even seasoned Twitter veterans should admit that the service’s core functionality is fairly arcane — it’s far from accessible to new users, and that’s a problem for Twitter.

What I find interesting is that most sites allow you to ‘love’, ‘like’, ‘favourite’, ‘+1’ or otherwise show your appreciation towards content. You can do this with Mozilla Webmaker too, when browsing the gallery. The trouble is that this is extremely limiting when it comes to data mining. If it’s used in conjunction with an algorithm to serve up content (not currently the case with Webmaker) then it’s a fairly blunt instrument.

There are some sites that have attempted to go beyond this. I’m thinking specifically of for Feelings, which allows you to share content that you don’t agree with. But there’s not a lot of great examples.

The trouble, I guess, is that human emotions are complex, changeable and lie along a three-dimensional analogue spectrum. Digital technologies, on the other hand - and particularly like/favorite buttons - are binary.

Update: after posting this I found that Yahoo! are planning to scan photos you publish on Tumblr to gauge brand sentiment. I’m not sure if that’s better or worse, to be honest!

Questions? Comments? I’m @dajbelshaw on Twitter, or you can email me at

Nigel Babu: Arrrgh! Tracebacks and Exceptions

My colleague asked me to take a look at a logging issue on a server last week. He noticed that the error logs had way too little information about exceptions. In this particular instance, we had switched to Nginx + gunicorn instead of our usual Nginx + Apache + mod_wsgi (yeah, we’re weird). I took a quick look this morning and everything looked exactly like it should. I’ve read more gunicorn docs today than I’ve ever done, I think.

Eventually, I asked my colleague Tryggvi for help. I needed a third person to tell me if I was making an obvious mistake. He asked me if I had tried running gunicorn without supervisor, which I hadn’t. I tried that locally first, and it worked! I was all set to blame supervisor for my woes and tried it on production. Nope. No luck. As any good sysadmin would do, I checked if the versions matched, and they did. CKAN itself has its dependencies frozen, which led to more confusion in my brain. It didn’t make sense.

I started looking at the exception in more detail; there was a note about email not working, and then the actual traceback. Well, since I didn’t actually have a mail server on my local machine, I commented those configs out, and then I had just the right traceback. A few minutes later, it dawned on me. It’s a Pylons “feature”. The full traceback is printed to stdout if and only if there’s no email handling. Our default configs have an email configured, our servers have postfix installed on them, and all the errors go to an email alias that’s way too noisy to be useful (Sentry. Soon). I went and commented out the relevant bits of configuration and voilà, it works!
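For the curious, the relevant knobs live in the Pylons/CKAN ini file. A hedged sketch (the setting names are standard Pylons error-handler options; the values are invented): with these lines active, Pylons mails tracebacks instead of printing them, so commenting them out restores full tracebacks in the gunicorn logs.

```ini
# Pylons error-email settings (values invented for illustration).
# Comment these out to get full tracebacks on stdout instead of email.
#email_to = errors@example.com
#error_email_from = ckan@example.com
#smtp_server = localhost
```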

Palm Face

Image source: Unknown, but provided by Tryggvi :)

J. Ryan Stinnett: WebIDE enabled in Nightly

I am excited to announce that WebIDE is now enabled by default in Nightly (Firefox 34)! Everyone on the App Tools team has been working hard to polish this new tool that we originally announced back in June.


While the previous App Manager tool was great, that tool's UX held us back when trying to support more complex workflows. With the redesign into WebIDE, we've already been able to add:

  • Project Editing
    • Great for getting started without worrying about an external editor
  • Project Templates
    • Easy to focus on content from the start by using a template
  • Improved DevTools Toolbox integration
    • Many UX issues arose from the non-standard way that App Manager used the DevTools
  • Monitor
    • Live memory graphs help diagnose performance issues


All projects you may have created previously in the App Manager are also available in WebIDE.

While the App Manager is now hidden, it's accessible for now at about:app-manager. We do intend to remove it entirely in the future, so it's best to start using WebIDE today. If you find any issues, please file bugs!

What's Next

Looking ahead, we have many more exciting things planned for WebIDE, such as:

  • Command line integration
  • Improved support for app frameworks like Cordova
  • Validation that matches the Firefox Marketplace

If there are features you'd like to see added, file bugs or contact the team via various channels.

Gregory Szorc: Mercurial hooks move and testing Mercurial

Mozilla has a number of source repositories under that cumulatively define how version control works at Mozilla.

Back in February, I launched an effort to establish a unified Mercurial repository for all this code. That repository is version-control-tools and it has slowly grown.

The latest addition to this repository is the import of the hghooks repository. This now-defunct repository contained all the server-side Mercurial hooks that Mozilla has deployed on

Soon after that repository was imported into version-control-tools, we started executing the hooks tests as part of the existing test suite in version-control-tools. This means we get continuous integration, code coverage, and the ability to run tests against multiple versions of Mercurial (2.5.4 through 3.1) in one go.

This is new for Mozilla and is a big deal. For the first time, we have a somewhat robust testing environment for Mercurial that is testing things we run in production.

But we still have a long way to go. The ultimate goal is to get everything rolled into the version-control-tools repository and to write tests for everything people rely on. We also want the test environment to look as much like our production environment as possible. Once that's in place, most of the fear and uncertainty around upgrading or changing the server goes away. This will allow Mozilla to move faster and issues like our recent server problems can be diagnosed more quickly (Mercurial has added better logging in newer versions).

If you want to contribute to this effort, please write tests for behavior you rely on. We're now relying on Mercurial's test harness and test types rather than low-level unit tests. This means our tests are now running a Mercurial server and running actual Mercurial commands. The tests thus explicitly verify that client-seen behavior is exactly as you intend. For an example, see the WebIDL hook test.
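For readers who haven't seen Mercurial's `.t` test format: each test is a recorded shell session where `$` lines are commands and the lines beneath them are the expected output the harness diffs against. A minimal, hypothetical sketch (the hook name and module path are invented, not the actual WebIDL hook):

```
  $ hg init server
  $ cat >> server/.hg/hgrc << EOF
  > [hooks]
  > pretxnchangegroup.mylint = python:myhooks.lint.hook
  > EOF
  $ hg clone server client
  updating to branch default
  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
```

Because the test drives real `hg` commands against a real repository, it verifies exactly what a pushing client would see.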

So what are you waiting for? Find some gaps in code coverage and write some tests today!

Matt Thompson: Webmaker: what is the latest data telling us?

What are we learning? This post highlights new metrics and some early analysis from Adam, Amira, Geoff, Hannah and many others. The goal: turn our various sources of raw data into some high-level narrative headlines we can learn from.

Getting to 10K

Current contributor count: 5,529 (Aug 15)

  • Are we on track to hit 10K? No, not yet. The statistical increase we’re seeing is based on good work to record past contribution. But our current growth-rate isn’t enough.
  • Why is the 4-week trend-line up? Because of Maker Party + bulk capturing historical activity (especially Hive + MVP contribution badges).
  • What can we do to grow faster? Short term, we can focus on (amongst other things):

    • 1) Maker Party partners. Convert more partner commitments into action, through a streamlined on-boarding process.
    • 2) Webmaker users. Try to convert more users into contributors. Ask them to do something more directly.
    • 3) Training. Net Neutrality teach-ins, train the trainer events, MozCamps, etc.
      • + …what else?

Webmaker users


  • We now have about 120K Webmaker users. We’re seeing big recent increases, mostly thanks to the snippet.
  • About 2% of those users are currently contributors.
  • ~50% of users have published something.
    • Most of that publishing happens on the user’s first day. (Users who don’t make something on their first day tend not to make anything at all.)
    • There’s very little overlap between tools. Users tend to make with a single tool. (e.g., of the ~46K people who have made something, only 2K have made something with both Thimble and Popcorn.)
    • About 20% have opted in to receive email updates from us. (e.g., via BSD)

Owned media

  • Snippet
    • Our top snippet performer: “The Web is your playground! See what you can build with Mozilla Webmaker and our global Maker Party.” (+ animated pug icon)
      • CTR = 0.58%. (Other MP variations: 0.15% – 0.49%)
      • The icon and animation have a big influence on CTR. Fun icons and playfulness are the hook.
      • “Teach and learn” language generally performs as well as more playful language.

  • Landing pages
    • A “survey-based approach” is our top performer. Asking people *why* they’re interested in Webmaker. (vs straight email sign-up ask) (+4.7% conversion rate)
    • 80 / 20 split for learning vs. teaching. About 78% of survey respondents express interest in making / learning, with 22% wanting to teach / mentor.
  • Language focused on teaching, learning and education performs well.
    • e.g., “Welcome to Webmaker, Mozilla’s open source education project, where you can teach and learn the web through making.” (+17%)
    • vs. “We believe anyone can be a tinkerer, creator, builder of the Web. Including you.”

  • referral traffic
    • “Webmaker” out-performs “Maker Party.” Our conversion rate dropped to half when we shifted from “Learn the web” to “Join our Maker Party.”

“The further away we get from the Mozilla brand, the more work there is to get someone on board.” — Adam

Maker Party

  • 1,796 events currently entered (Aug 15)
    • That means we’ve already surpassed last year’s total! 1,694 total Maker Party events last year, vs. the same number in just our first month this year.
    • But: we’ll still need a big event push in second half to hit our contributor target.
  • Key takeaways:
    • Tracking partner activity. Automated tracking has been hard — we’re relying instead on one-to-one calls.
    • We’re gathering great data from those calls. e.g.,
      • Unreported success. Partners are participating in ways that aren’t showing up in our system. Manual badging is filling that gap.
      • Occasional confusion about the ask. Some think “Maker Party” is a “MozFest-level” commitment. They don’t realize the ask is simpler than that.
      • They need easier ways to get started. More simplification and hand-holding. Working on a simplified “Event Wizard” experience now.
      • Some partners see more value in Maker Party than others. Orgs with offerings similar to our own may perceive less value than those in adjacent spaces.
    • We haven’t cracked the earned media nut. Not much coverage. And little evidence of impact from the coverage we got.
    • We don’t have a good way for measuring participation from active Mozillians.
    • Second half. We should gear up for a second “back to school” wave to maximize contributors.

“There’s the ‘summer wave’ and ‘back to school’ waves. We need to have strategies and actions towards both.” –Hannah

Next steps

Short-term focus:

  • 1) Partner conversion. This is probably our best immediate strategy for boosting contribution. Ship a simplified on-ramp for Maker Party partners. A new “Event Wizard,” simple start-up events, and user success support.
  • 2) Convert Webmaker users to contributors. We’ve seen a *big* increase in user numbers. This opens an opportunity to focus on converting those users. Ask them to do something more directly. Try new low-bar CTAs, email optimization, re-activating dormant users, etc.
  • 3) Training. Train the trainer events, MozCamps, MozFest, etc.

Longer-term questions

  • Year-long engagement. How do we more evenly distribute event creation throughout the entire year?
  • Match-making. How do we identify the teachers? How do we connect those who want to learn with those who want to teach? What are the pathways for teachers / learners?
  • Impact. How many people are learning? How much are they learning? Should we make “number of people learning” Webmaker’s KPI in 2015?

Jess Klein: Remix + Hack the Firefox Home page. No really, we want you to!

If you are a Firefox desktop user, you may have seen the Firefox default home page. This page contains a default engine web search and quick links to downloads, bookmarks, history, add-ons, sync and settings. Additionally, if you happen to have had tabs open the last time you used the browser,  you can restore them from the home page.  We often share important news and updates underneath the search bar.

Screen Shot 2014-08-15 at 10.18.04 AM.png

This is what I currently see at the Firefox default home page. Animated gifs FTW.

A few months back, Hive Labs (a new project within the Hive Learning Networks designed to explore the question “how do we use design to transform edupunk ethics into great products?”) was approached by the Mozilla Foundation Engagement team to brainstorm how the space could be used in an innovative way to educate Firefox users about Maker Party. Maker Party is Mozilla's global campaign to teach the web, uniting educators, organizations and enthusiastic web users with hands-on learning and making. While I have to admit I have never really created something in the realm of owned media, I saw this as an interesting opportunity for Mozilla to show (vs. tell) what Maker Party is all about.


The team (which included creative individuals from many different projects across the Mozilla Foundation and the Corporation) immediately identified the opportunity space and came up with a few project requirements:
  • use the space in an interactive way to introduce the website visitor to web literacy skills
  • acknowledge that the visitor may not have ever seen code before, and understand that we do not know what web literacy skills they are coming to this space with
  • create something playful


While we tossed around a few different ideas, the solution we settled on was to create a Webmaker Goggles-like experience that lets the visitor see under the hood of the webpage.

Screen Shot 2014-08-15 at 10.35.04 AM.png

After doing some initial sketches, we realized that we needed to define our learning objectives for the project. Normally this is fairly easy to do: you say that the learner will come away with the ability to remix a paragraph written in HTML and understand what p tags are, or something very basic. Here, the challenge was two-fold: 1. the webpage visitor did not identify as a learner, and 2. as I mentioned before, they might have no knowledge of the fact that code is written in order to create a webpage. So, after several false starts, we came up with the goal of having the website visitor walk away understanding that if you look under the hood of a webpage, you will see it is made from code.

Initial sketches for the snippet included replacing the Firefox logo with an image

After the learning objective was defined, we had to interpret what that meant in terms of interaction design. I believe that the most effective way to empower users is to put the tools in their hands and let them grapple directly with the thing they might learn, tinkering with it themselves. We tried out a few different iterations of this. Above is a sketch where the visitor might be instructed to remix the page from a video. The idea was to have a person in the video describe what to do, and then the learner would use the goggles to swap out the video for an image or video of their choosing. This idea was fun and had a lot of potential community localization opportunities. However, there was a risk that the user would simply not click on the video, and miss out on all the fun.

Ultimately, we ended up utilising what Atul Varma calls “cruise control”: we model the behavior in order to encourage the site visitor to try it out themselves. It looks like someone is typing out all of the words on the screen. We decided to focus on revealing a little CSS, because you can use real words to represent colors, and seeing those colors change immediately can have a visceral impact on the site visitor. Here is a screencast of the interaction:
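As a rough sketch of how such a modeled-typing effect could work (this is my guess at the mechanics, not the actual snippet code), the page can step through every prefix of a CSS rule on a timer:

```javascript
// "Frames" of the animation: every prefix of the rule being typed.
function prefixes(text) {
  return Array.from(text, (_, i) => text.slice(0, i + 1));
}

// Play the frames one character at a time; onFrame would typically do
// something like el.textContent = frame in the real page.
function typeOut(text, onFrame, delayMs) {
  const frames = prefixes(text);
  let i = 0;
  (function tick() {
    if (i >= frames.length) return;
    onFrame(frames[i++]);
    setTimeout(tick, delayMs);
  })();
}

// Color keywords make the effect visceral: the change is visible the
// moment the rule is complete.
typeOut("body { background: tomato; }", frame => { /* render frame */ }, 80);
```

The appeal of color keywords here is that the payoff is immediate and needs no explanation: the instant the typed rule is complete, the page visibly changes.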

We’re really excited about this project, as it represents one of the first interactive uses (if not THE first) of the space of the Firefox home page. We hope that as site visitors dip their toes into understanding the craft of the Web, they’ll be inspired to learn more through Webmaker and Maker Party.  Our ultimate goal is for people to feel empowered to become creators, not just consumers, of the Web.

Jordan LundThis week in Releng - Aug 11th 2014

Completed work (resolution is 'FIXED'):

In progress work (unresolved and not assigned to nobody):

Alex GibsonAnimating the Firefox desktop pages using CSS and SVG

I recently co-authored a post over on the Mozilla Web Development blog! It's a technical run-through of how we did some of the CSS and SVG animations on the new Firefox desktop web pages. If that's your sort of thing, you can read the full article here.

Nigel BabuThe story of hgstats

tl;dr: I built a thing to see public graphs of Mozilla's Mercurial (hg) infrastructure, called hgstats.

Lately, we’ve had problems with Mercurial at Mozilla. The Developer Services team added a bunch of instrumentation to the hg webheads to help us track what is going wrong and when, giving us somewhat of an early indicator of when things get shot to hell. All of these are on the Mozilla Graphite instance, which is behind employee-only LDAP. However, an interesting quirk is that the image rendering is actually available without authentication. As a community Sheriff, I’ve been keeping a close watch on hg throughout my shift with images that releng folks or hwine gave me. This gave an indicator of when to close trees so that we don’t end up having everything turn red. On Thursday evening, I was watching the conversation in the #vcs channel when bkero mentioned he’d made a dashboard in Graphite. It suddenly dawned on me that I could just embed those images onto a page and quickly have a public dashboard!

Armed with a bunch of images from Ben, I created a github pages repo with a lovely theme that’s available by default. I embedded the images onto a static HTML page and suddenly, we had a minimal dashboard. It wouldn’t auto-refresh or let you alter the duration of the graph, but hey, now we had one place for things! This first step took about 15 minutes.

There were two features I had in my mind as must-haves: a) the page must let me change the hours of the graphs (i.e. last 2 hours, last 4 hours, last 8 hours, etc), and b) it should auto-refresh. I’ve looked at backbone several times in the past and I figured this was as good a time as any to get cracking on building a backbone.js app.

I started slowly; the first step was to get everything I have right now rendered with backbone. I spent a lot of frustrating hours trying to get it to work, but couldn’t because of silly mistakes. I haven’t been coding in JS much and it shows :) I think I stayed up until 2 am trying to diagnose it, but I couldn’t. When I woke up in the morning, I spotted the trouble immediately: a tiny typo. Instead of <%=, I had typed <%$. After that first step, I got the router bit working and I had an app that could dynamically change the range of hours in the graph! I’d met my first goal!
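That failure mode is easy to reproduce. Underscore's `_.template` (the conventional templating choice in Backbone apps) only interpolates text inside the exact `<%= … %>` delimiters; this toy interpolator (a stand-in for illustration, not Underscore itself) shows why one stray character silently breaks rendering:

```javascript
// Minimal stand-in for Underscore-style template interpolation:
// only text wrapped in the exact delimiters <%= ... %> is replaced.
function render(template, data) {
  return template.replace(/<%=\s*(\w+)\s*%>/g, (_, key) =>
    key in data ? String(data[key]) : "");
}

const good = render("<h1><%= title %></h1>", { title: "hgstats" });
// → "<h1>hgstats</h1>"

// A typo like "<%$" never matches the delimiter pattern, so the raw
// placeholder text leaks into the rendered page instead of the value.
const bad = render("<h1><%$ title %></h1>", { title: "hgstats" });
// → "<h1><%$ title %></h1>" (unchanged, and no error is thrown)
```

Because nothing throws, the page just renders wrong, which is exactly why this kind of typo can eat an evening.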

I talked to mdoglio who took a quick look at the code and thought models might be a good idea if I’m dealing with data. I refactored the code again to use models, which cleaned it up quite well! Overnight, I had a pull request from hwine to add another graph as well, which I also made more dynamic.

The hardest bit was getting auto-refresh working. I couldn’t figure out an easy way to solve the problem. Eventually, I ended up with setTimeout, but the full credit for the right incantation goes to bwinton.
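The two must-have features can be sketched together. Assuming a Graphite `/render` endpoint (the host and metric name below are invented), a URL builder handles the configurable time range and a self-rearming setTimeout handles the refresh:

```javascript
// Build a Graphite render URL for the last N hours. Graphite's render
// API accepts relative ranges like from=-4hours; Date.now() is appended
// purely as a cache-buster so each refresh fetches a fresh image.
function graphUrl(target, hours) {
  return "https://graphite.example.com/render" +
         "?target=" + encodeURIComponent(target) +
         "&from=-" + hours + "hours" +
         "&_=" + Date.now();
}

// Re-arm setTimeout on each tick rather than using setInterval, so a
// slow image load can't pile up overlapping requests.
function autoRefresh(img, target, hours, everyMs) {
  img.src = graphUrl(target, hours);
  setTimeout(() => autoRefresh(img, target, hours, everyMs), everyMs);
}
```

In a Backbone app the `hours` value would come from the router (e.g. a `#/hours/4` route), and each graph view would re-render its `<img>` with the new URL.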

High Five!

Working with backbone has been great, but I wish the documentation did more than just tell me what each function does. Python’s documentation often gives you more than a function’s description; it tells you how you would use it practically. Of course, there are quite a few resources that already fill this gap. I found one site pretty useful and got most of the basic idea of backbone from it.

I also submitted it to Webdev Beer and Tell (my first submission!). Mike kindly presented it for me (no, he’s not the real nigelb!) and you can watch the video on Air Mozilla if you have some free time :) I would totally recommend watching the whole video, but if you don’t have a lot of time, skip to 6:37.

This is the first time I’ve built a single-page app, so I’d love feedback (extra points if you can do a code review). The code is on GitHub.

Hannah KaneMaker Party Engagement Week 5

Week 5!

tl;dr highlights of the week:

  • Though we saw significant jumps in Wm accounts and events, our Contributor numbers did not increase accordingly
  • We’re identifying many opportunities from the partner calls
  • Hack the Snippet is coming soon, along with the next iteration of the snippet funnel
  • The TweetChat created a temporary increase in Twitter engagement, but took attention away from press

Overall stats:

  • Contributors: 5552 (2% increase from last week’s 5441)
  • Webmaker accounts: 124K (17% increase from last week’s 106.3K)
  • Events: 1799 (crazy 50% jump from last week’s 1199)
  • Hosts: 493 (10% increase from last week’s 450)
  • Expected attendees: 76,200  (23% increase from 61,910)
  • Cities: 362 (40% increase from 260 – what caused this jump?)
  • Traffic: here’s the last three weeks. We continue to see the major boost from the snippet.

  • And the Webmaker user account conversion rate increased a bit further:


Engagement Strategy #1: PARTNER OUTREACH

We are learning a lot from the partner calls. Here are some of the most salient takeaways (borrowing from Amira and Melissa’s notes during Friday’s call):

Partner trends
  • Partners see value in badging their event mentors, speakers and volunteers as a form of appreciation. But there is a potential for those who receive the badges to have no idea who is badging them or what it means (lack of connection to MP). Opportunity: We need to better explain to people why they’ve received a badge and why they might want to create a Webmaker account.
  • Partners are doing things but we just haven’t captured them.  Opportunity: We need to offer real value to users in order to increase the amount of sharing/broadcasting/badging that happens through the site. 
  • Some people need way more training — Opportunity: this is where the event wizard might play a role; there also might be an opportunity to run TTT within certain orgs and spaces.
  • We need to clarify our value statement for partners. It may not be in  adding numbers to their events or traction to their programs/site, or getting press for non-Hive partners. Instead it may be in providing resources and curriculum. We can better segment partners into affinity groups (e.g. afterschool programs) and provide content, trainings, resources, CTAs specifically for them.  We can also localize those offerings to reduce hand-holding.
  • People don’t understand how broad our definition of Maker Party is: everyday events, small events, stands/booths/tables within other events — have to push them to realize that and include all of these on the events platform (note from HK: I would argue we have to offer them a reason to)
  • Opportunity: There’s the summer wave and back to school waves. We need to have strategies and actions towards both.
  • Challenges:
    • Age and time continue to be a blocker for new Wm accounts.
    • Mass emails to order swag, upload events, share information just didn’t work. They need 1-to-1.
    • We lost the interest of a lot of people along the way. There’s a good 20-30% we will not be able to bring back in.
    • Parties sound like fun kid-like things (making toys etc.)
    • Getting the Maker Party logo/brand included in event promotion in a meaningful way is not happening, and the meaning behind the brand seems to cause confusion in some cases.

PROMOTIONAL PARTNERS: We continue to see only a tiny amount of referrals from promotional partner urls with RIDs.


Engagement Strategy #2: ACTIVE MOZILLIANS

Haven’t heard anything this week, but Amira and I are meeting with the FSA Community Manager on Monday of this week.


Engagement Strategy #3: OWNED MEDIA

Snippet Funnel:

The snippet funnel continues to perform well in terms of driving traffic. We’re aiming to beat a baseline 1.8% conversion rate.

We were a bit blocked by technical issues this week and weren’t able to release the new tailored account signup pages, but we continue to work on that.

The “hack the snippet” test was delayed, but will be live soon. We have a comms strategy around it (for after it’s been tested).


Engagement Strategy #4: EARNED MEDIA

Press this week:

Aside from a cross-post of last week’s Washington Post Magazine story, we didn’t see press this week. We were focused on our Net Neutrality tweetchat instead.

SOCIAL (not one of our key strategies):

As expected, the Tweetchat temporarily increased our Twitter engagement for a two-day period—we saw double the usual amount of favorites, retweets, and replies. You can view the Storify here:

The #MakerParty trendline for this week is back up to where it had been two weeks ago: 

See #MakerParty tweets here:

Nigel BabuOKFestival Fringe Events

The writeup of the OKFestival is very incomplete, because I haven’t mentioned the fringe events! I attended two fringe events, and they were both very good.

First, I attended CKANCon right before OKFestival. It was informal and co-located with CSVConf. My best takeaway has been talking to people from the wider community around CKAN. I often feel blind-sided because we don’t have a good view of CKAN. I want to know how a user of a portal built on CKAN feels about the UX. After all, the actual users of open data portals are citizens who get data that they can do awesome things with. I had a good conversation with folks from DKAN about their work and I’ve been thinking about how we can make that better.

I finally met Max! (And I was disappointed he didn’t have a meatspace sticker :P)

The other event I attended was Write the Docs. Ali and Florian came to Berlin to attend the event. It was a total surprise running into them at the Mozilla Berlin office. The discussions at the event were spectacular. The talks by Paul Adams and Jessica Rose were great and a huge learning experience. I missed parts of oncletom’s talk, but the bit I did catch sounded very different from my normal view of documentation.

We had a few discussions around localization and QA of docs which were pretty eye-opening. At one of the sessions, Paul, Ali, Fabian and I discussed rules of documentation, which turned out pretty well! It was an exercise in patience narrowing them down!

I was nearly exhausted and unable to think clearly by the time Write the Docs started, but managed to push through it! Huge thanks to (among others) Mikey and Kristof for organizing the event!

Francesca CiceriAdventures in Mozillaland #4

Yet another update from my internship at Mozilla, as part of the OPW.

An online triage workshop

One of the most interesting things I've done during the last weeks has been holding an online Bug Triage Workshop in the #testday channel.
That was a first for me: I had been a moderator for a series of training sessions on IRC organized by Debian Women, but never a "speaker".
The experience turned out to be a good one: creating the material for the workshop had me summarize (not too much, I'm way too verbose!) all that I've learned in these past months about triaging at Mozilla, and speaking about it on IRC was a sort of challenge to my usual shyness.

And I was so very lucky that a participant was able to reproduce the bug I picked as example, thus confirming it! How cool is that? ;)

The workshop was about the very basics of triaging for Firefox, and we mostly focused on a simplified lifecycle of bugs, a guided tour of bugzilla (including the quicksearch and the advanced one, the list view, the individual bug view) and an explanation of the workflow of the triager. I still have my notes, and I plan to upload them to the wiki, sooner or later.

I'm pretty satisfied with the outcome: my only regret is that the promotion wasn't enough, so we had few participants.
I'll try to promote it better next time! :)


Another thing that kept me quite busy in the last weeks was learning more about crashes and stability in general.
If you are unfortunate enough to experience a crash with Firefox, you're probably familiar with the Mozilla Crash Reporter dialog box asking you to submit the crash report.

But how does it work?

On the client side, Mozilla uses Breakpad as its set of libraries for crash reporting. The Mozilla-specific implementation adds to that a crash-reporting UI, a server to collect and process crash report data (and particularly to convert raw dumps into readable stack traces) and a web interface, Socorro, to view and parse crash reports.

Curious about your crashes? The about:crashes page will show you a list of the submitted and unsubmitted crash reports. (And by the way, try to type about:about in the location bar, to find all the super-secret about pages!)

For the submitted ones, clicking on the CrashID will take you to the crash report on crash-stats, the website where the reports are stored and analyzed. The individual crash report page on crash-stats is awesome: it shows you the reported bug numbers if any bug summaries match the crash signature, as well as lots of other information. If crash-stats does not show a bug number, you really should file one!

The CrashKill team works on these reports, tracking the general stability of the various channels, triaging the top crashes, and ensuring that the crash bugs have enough information and are reproducible and actionable by the devs.
The crash-stats site is a mine of information: take a look at the Top Crashes for Firefox 34.0a1.
If you click on an individual crash, you will see lots of details about it: just on the first tab ("Signature Summary") you can find a breakdown of the crashes by OS, by graphics vendor or chip, or even by uptime range.
A very useful one is the number of crashes per install, which tells you how widespread the crashing is for that particular signature. You can also check the comments users have submitted with the crash report, on the "Comments" tab.

One and Done tasks review

Last week I helped the awesome group of One and Done developers, doing some reviewing of the tasks pages.

One and Done is a brilliant idea to help people contribute to Mozilla's QA teams.
It's a website proposing to the user a series of tasks of different difficulty and on different topics to contribute to Mozilla. Each task is self-contained and can last a few minutes or be a bit more challenging. The team has worked hard on developing it and they have definitely done an awesome job! :)

I'm not a coding person, so I just know that they're using Django for it, but if you are interested in all the dirty details take a look at the project repository. My job has been only to check all the existing tasks and verify that the descriptions and instructions are correct, that the task is properly tagged and so on. My impression is that this is an awesome tool, well written and well thought out, with a lot of potential for helping people in their first steps into Mozilla. Something that other projects should definitely imitate (cough Debian cough).

What's next?

Next week I'll be back working on bugs. I kind of love bugs, I have to admit it. And not squashing them: not being a coder makes me less of a violent person toward digital insects. Herding them is enough for me. I'm feeling extremely non-violent toward bugs.

I'll try to help Liz with the Test Plan for Firefox 34, on the triaging/verifying bugs part.
I'll also try to triage/reproduce some accessibility bugs (thanks Mario for the suggestion!).

Planet Mozilla InternsWillie Cheong: Shutdown: 4A study term

This term has been very unfruitful. I picked up League of Legends after an abstinence streak from DotA that lasted 4 good years. This kinda makes me sad. I’ve also lost a lot of motivation, especially with books and academia. It really isn’t the gaming that’s causing this. It is more just a lack of willpower to carry on doing something that seems so pointless. There’s a whole new post graduation world out there, with new and relevant things to learn.

I’ve really taken a liking to software development. It’s funny because in first year I remember believing that I could never picture myself sitting in front of a computer all day typing away. Yet here I am now, not knowing what else I would rather be doing.

I also remember having long-term plans for myself to run a self-grown start-up right after graduation. It’s not that I haven’t been trying. I have been working hard on these things over the past years but nothing seems to have gained any valuable traction at all. With only 8 months left to graduation, this once long-term goal and deadline is suddenly approaching and hitting the reality of being unattainable. Such a realization kills the motivation to carry on pushing.

Visions of life after university used to be so bright and optimistic. But as the moment slowly approaches I realize how clueless I really am and that’s OK. Engineers are trained problem solvers; we figure things out, eventually.

Raniere SilvaMathml August Meeting

Mathml August Meeting

This is a report about the Mozilla MathML August IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

In the last 4 weeks the MathML team closed 5 bugs, worked on 6 others and opened one new bug. These are only the ones tracked by Bugzilla.

The next meeting will be on September 11th at 8pm UTC. Please add topics to the PAD.

Leia mais...

Raniere SilvaGSoC: Pencil Down (August 11 - August 17)

GSoC: Pencil Down (August 11 - August 17)

This is the last report about my GSoC project and covers the thirteenth week of “Students coding”.

In this last week I worked on auto-capitalization and deployed a landing page for the project:

Below you will find more details about the past week and some thoughts about the project as a whole.

Leia mais...

Mozilla Release Management TeamFirefox 32 beta6 to beta7

  • 7 changesets
  • 19 files changed
  • 104 insertions
  • 22 deletions



List of changesets:

Randell JesupBug 1013007: re-enable STUN throttling in mid-beta and later r=bwc a=lmandel - b42bbb72b7a8
Benoit JacobBug 777574 - Skip all quickCheckAPI tests on linux/android/emulator slaves. r=kamidphish, a=test-only - 791e4db4574b
Alexander SeleznevBug 1038607 - Fix text color in search field on about:newtab page. r=dao, a=lmandel - 6fd1ba78d246
Margaret LeibovicBug 1048941 - Make all empty view images 90x90dp. r=lucasr, a=lmandel - 511ac00e4e6c
Timothy NikkelBug 1027741 - Run decode complete notification handler for image documents on a script runner because they trigger invalidation and decode complete notifications are often dispatched during painting. r=smaug, a=lmandel - d4e47ec57f06
Axel VialaBug 1044584 - Fix incorrect computation of mUploadTransferred. r=bz, a=lmandel - cfec8a16880a
Luke WagnerBug 992461 - Turn off the shell's asm.js cache by default. r=bbouvier, a=sledru - 51f60a862089

Eric ShepherdThe Sheppy Report: August 15, 2014

I’m quite satisfied with how well the past week has gone. It’s been incredibly productive despite a few distractions and a great many meetings. Here’s my report on what I’ve been doing, and what I will be doing in the near future.

What I’m up to

I’ve been busy optimizing my own work processes, as well as setting up information so others know what needs to be done as well. I’ve also done a lot of copy-editing and organizational work in content, and have been touching up stuff ranging from the MDN inbox to the Learning Area to doc plans. It’s been a wonderfully productive week, and it feels good to be getting back into the swing of things.

What’s up next

Next week, I intend to dive into WebRTC, and to start putting together sample code so I can begin work on writing guides to working with WebRTC. It’s going to be really exciting!

As usual, of course, I have a number of other, smaller, tasks I want or need to accomplish, too.

What I did this week

  • Moved the main page on DocShell from the top level of MDN to its proper home, and filed a bug on getting it fully documented.
  • Dealt with infrastructure failures at my office: the air conditioning completely failed (working in a swelteringly hot office is not fun), and I discovered standing water in the restroom. The A/C is now fixed; the water problem has not been figured out yet, although the water has evaporated for now.
  • Helped test the new GitHub login support on the MDN staging server, and filed a few bugs regarding quirks I noticed.
  • Reviewed and had nothing but nice things to say about the new welcome email sent out by MDN to new members.
  • Got involved in the discussion about disabling styled pasting in the MDN editor. I’m opposed to this; I would much rather we solve the problem from the user’s end — contributors should learn to be sure they don’t include crufty styles when they paste into MDN. But ideally we can come up with a solution that doesn’t break existing workflows, punishing people who aren’t making this mistake.
  • Moved the page Write a new entry in the Glossary to the right place; it had accidentally been given an obsolete URL due to a couple of MDN bugs. Reviewed and copy-edited the content.
  • Filed a bug for a feature suggested by biraj: content from one page on MDN that’s presented inside another page should be reflected in the displayed contributor list. I don’t know how likely this is to be addressed (it certainly won’t happen soon). It’s a big project and there are many unanswered questions.
  • Copy-edited the new Glossary entry for the term “i18n“.
  • Added the word “Glossary” to the list of tags that MDN offers auto-completion for.
  • Followed up on a bug asking me to write some copy for the GitHub login experience.
  • Did some tidying up of the MDN style guide, including moving Chris Mills’ excellent new section on our policies on gender-neutral terminology to be among the language and grammar topics rather than in the markup and wiki usage topics area.
  • Minor changes to the Learning Area page on CSS. This page needs a lot of work still but I saw low-hanging fruit.
  • Converted the Learning Area into a zone. Its landing page needs finishing, but this is a nice step.
  • Finished an extensive review and copy-edit of the Learning Area page Write an article to help learn about the web.
  • Removed a page that was actually just a set of Firefox problem reports, and emailed the author information about how to properly report issues.
  • Found an MDN “Linking Guide” lurking in a dead part of the site, and moved it into the MDN user guide, with major updates and copy-edits.
  • Updated the MDN user guide’s landing page to use the LandingPageListSubpages macro, so it looks a little better.
  • Adapted Luke’s screenshot/diagram about how to enable a page subscription on MDN into a new page in the MDN how-to guide.
  • Tweaks to the Inbox page in preparation for expanding its visibility.
  • Integrated the first round of feedback into the WebGL documentation plan.
  • Updated my Geeksphone Peak to Firefox OS 2.0 nightly for use in upcoming WebRTC sample code tests.
  • Filed a bug about saying “Android not supported” on Firefox OS 2.0’s browser.
  • Pinged developers about reviewing the WebGL documentation plan.
  • Created several new basic (that is, mostly empty) MDN development project plan pages.
  • Copy-edited the Learning Area’s How to contribute to the Learning Area article.
  • Filed a documentation request bug for documenting the NavigatorFeatures (hasFeature/getFeature) API. This API is low-priority privileged API, documentation-wise.
  • Added notes to a couple of pages in the MDN contributor guide about being careful when pasting, to avoid pasting unwanted styles and classes into MDN.
  • Created the DocPlanHelpUs macro, which inserts text inviting participation in a project and describing how to get started. Added it to the appropriate place in all extant doc plans.
  • Took some notes, sent some emails, and added links to the project planning page for the on-site messaging project.
  • Added a link to the MDN contributor guide to the footer of messages on the dev-mdc mailing list, and tweaked my email address on the moderator email list names.

Meetings attended this week


  • #mdndev bug triage
  • MDN development planning


  •  1:1 meeting with Jean-Yves


  • MDN Community meeting
  • 1:1 meeting with Ali


As you can see, it was an intensely busy week! I’ve started using OmniFocus to track what needs to be done and by when, and I think it’s going to help, but we’ll see how it plays out over time. I have a history of not doing well at keeping up with my organizational systems, as you may have noticed if you’ve read my past posts about my various attempts to get organized.

At any rate, it’s been a good week, and I can’t wait to get more done!


Hannah Kane1/3 of a year

Four Months (just remembered this blog is for (H)an(n)a(h)grams, so: Fonts Humor)

I’ve been here for four months. I think the famous Mozilla firehose is finally starting to slow down. A main difference between me now and me three months ago is that now, on most days, I actually know how to do the things on my ToDo list. SuperBonus: I can usually follow what’s happening in meetings now!

Significantly, I’m starting to add things to my ToDo list that are more than just the bare minimum of program maintenance. I’m starting to understand where I might be able to innovate and add value.

About a month after I started, I inherited the job of maintaining @Mozilla social channels, and about a month after that, I inherited the job of managing the relationship with our Maker Party PR company. Together these things took up a good chunk of my time over the past two months, largely because they’re outside my area of expertise (I helped launch a social media program at my last job, but that was back when Twitter was brand spankin’ new, and things have changed tremendously since then).

While I think both of these tasks ended up providing me with a great platform for learning about the organization (I have to know what’s going on so I can tweet about it!), I am looking forward to focusing more intently on the aspects of the program manager job I feel I’ve been neglecting.

I Feel Good (I Do Elf Ego)

Some of the things I feel good about from the past few months:

  • I think the Maker Party engagement updates and analyses (some of which I’ve posted on this blog) have been helpful in sparking some good conversation at our daily “Peace Room” meetings. Also, charts make me seem smart.
  • Our Salesforce for Partners implementation is a tiny bit behind schedule, but I feel good about where we are in the process. I was glad to be able to take this partially off of others’ plates and share the burden, because no one should have to face Salesforce alone.
  • Working with Dave, Erika, Mavis, and Sabrina on the Advocacy site has been a pleasure, and I think the end product is going to be great.
  • Yesterday’s Tweetchat was pretty fun.

Can Do Better (rent taco bed)

Some things I want to work on in the months ahead:

  • I want to operationalize what it means to be a Clockmaster, and refine the suite of tools we use to manage our work. Now that we have Sprinter, I feel a lot better about Bugzilla (which, I admit, I headdesked about for the first couple months I was here). But I don’t think it fully meets our needs, so we’ll need to supplement with other tools and processes.
  • I want to help reduce the pain in our grant reporting process. Gettin’ paid shouldn’t hurt so bad.
  • I want to crack the nut of social media. I was inspired by a recent conversation with Michaela Smiley, and I believe we can do a much better job of engaging and serving our community, while also better expressing the Mozilla brand and growing the Webmaker community. Hashtag win.
  • I want to make sure Maker Party 2015 is even more full of awesome by capturing and acting upon our learnings from this year. In general, I’d like to help create a culture of reflection and continuous improvement. Not to get too existential, but isn’t this what life is about? </insight into hannah’s worldview>
  • I want to improve our systems for distributing knowledge across the organization. I’ve seen really good examples of this (Andrea’s email-fu workshop, the Fundraising workshops that happened a few months ago, Geoffrey’s trendlines workshop from this morning, and probably many more). I don’t think Encyclopedia BestPractica is working as a tool for knowledge sharing, so I’d like to come up with something that meets people where they are (rather than making them come find it).
  • I want to keep improving our cross-team collaboration. Even in my short time here, I’ve already seen great strides in this, but there’s more to do. This project brief template is one of my first direct efforts toward that, in addition to just building relationships with many of my super rad co-workers.

Finally, I just want send a big ol’ shout out to said co-workers for making my first third of a year so enjoyable.


Christian HeilmannCreating a set of icons in various sizes in the browser

Hooray, I did do some coding again for a change! One of the issues I had with submitting apps to the Firefox Marketplace is that the manifest validator always complains about missing icon sizes. That’s why I thought it’d be sweet to have an in-browser tool to generate all of the icons one needs from an image. And here it is:

icon generator in action

You can see a demo of it working on YouTube:

That’s all there is to it – it uses Canvas and the FileReader API to convert the images and create the files. JSZip, a neato library for creating ZIPs, was also used.

For now your original image needs to be square and 512×512 pixels or the generator will just paste the first 512×512 pixels in. Images are automatically resized to 512 pixels and centered on a transparent background. A later version might allow you to move the image around. Let’s see when I get the time.

Soledad PenadesAudio for the masses

The video above is from LXJS – the Lisbon JavaScript conference, which happened more than a month ago. I gave this talk again last week at VanJS, so I decided it was time for that belated write-up.

If you want to follow along, or play with the examples, the slides are online and you can also check out the code for the slides.

As I’ve given this talk several times, I keep changing bits of the content each time depending on what the audience seems most interested in, and I also sometimes improvise things which I don’t remember when doing the final write-up. So if you were at any of the talks and something seems missing or different, now you know why! I’ve also added a section at the end with questions I’m frequently asked – hope that’s useful for you too.

I work at Mozilla

red panda

I work at Mozilla (the above is a red panda, which we love), but that’s not what I want to talk about today. I want to talk about music instead.

I ♥︎ music

ukulele and things

I’ve been interested in music since forever, but I have no formal training at all–it’s all self-taught. For example, last year I was walking in Greenwich Park (note for the Americans: Greenwich is where time begins) during one nice summer afternoon, and I got this idea that I should totally learn to play the ukulele. As soon as I got home I went to buy one online, and the store had an offer where you’d get free shipping if you spent at least £25… so I added more instruments to the order: the tambourine! the shaker! the harmonica! And all that even though I didn’t know how to play any of those, because I thought: I can learn with videos or tutorials!

But it wasn’t always this way…

Learning from old books

old books

At the beginning, my only source of musical information was old books. I would go to my grandma’s and find books from my mum or my father, and I’d try to read them even if I didn’t fully understand it all. Or I would maybe go to the local library and look at books on music, but it was really hard to learn how to play musical instruments this way because those were mostly books on the history of music, and besides that, I didn’t have any musical instrument to play with. So it was all really frustrating.

Learning playground

casio pt-100

Things got a little bit better when I got a CASIO keyboard as a gift. I was finally able to play sounds! I really enjoyed playing with it and getting lost in the melodies and the chords and the different atmospheres I could create. And when I say I enjoyed it, I mean it literally. I don’t think anyone from my family was really enjoying it as I didn’t have any sense of rhythm whatsoever and I would just hang on notes or chords that I particularly liked for as long as I felt like hanging, which was probably driving them all nuts.

At some point I was given a booklet with the scores of popular songs, but even then it was hard to play anything that resembled the original songs, because I didn’t know how to interpret note lengths–again I just stayed on interesting notes for as long as I wanted. If there had been Internet back then, I could have listened to what the song was supposed to sound like, but there wasn’t, so I didn’t have any reference that would let me understand where I was going wrong.

Computers + trackers

impulse tracker

Everything really started to accelerate when I got access to one of those “multimedia” computers and tracker software. For those who don’t know, trackers are a type of software that allows you to sequence music and store it together with the sampled audio data, so later on it is relatively easy to reproduce the song and make it sound the way the author intended–unlike MIDI files, which mostly contain just the score of the song, not the actual sounds used in it.

Despite there being no internet (or it being accessible only to a few people in academia and big cities in Spain), there were lots of distribution networks that got these files copied between enthusiasts. There were people who loved to trade these songs by snail mail (sending actual floppy disks in the post), others used BBSs, and finally there was a monthly tracking contest in a computer magazine that I used to buy–they would put all the participating songs into a directory on their CD of goodies, and this is how I got into the whole tracker scene.

A great thing about trackers was that you could see all the notes and effects used and also edit them, so effectively they were open source music, way before that term even existed. We all could learn from the ways of the masters, and that’s how I got a lot better.

The most hilarious part was how people ‘hacked’ the names of the samples in the trackers so that together they formed a bigger message, and that way a strange communication channel with other people in the contest was created, and everyone started “sending messages” to each other using the sample names. Of course, as in any sufficiently popular channel, there were flamewars! People would fiercely fight over issues such as which style of music or which tracker software was better. Looking back, it’s both hilarious and amazing that all this happened inside a directory on a CD.

Music communities


A while later, when I finally got access to proper Internet, I learnt about online music communities like Traxinspace. Suddenly there was this new universe of music and trackers from all over the world, and we could interact with each other–it wasn’t just speaking to people from Spain! Traxinspace had this feature where people could be artist of the day, or of the month, in addition to other rankings. Getting to the top of these or becoming a featured artist was nothing short of winning a Grammy or going on Oprah: it was a huge thing in these circles! The competition to be the best was fierce.


js1k minecraft

More or less at the same time I got into the demoscene as well. For those who don’t know, the demoscene is mostly about making computers do impressive stuff. For example, size-limited coding: making a tiny bit of code do a lot of things. A modern example of this is the JS1K competition, where authors create something that runs in the browser using less than 1024 characters and does something impressive. The screenshot above is one of the entries from the latest edition, “here be dragons”, rendering a Minecraft-like world in the browser using WebGL.

Size limited sound generation

supersole @ breakpoint

One of the areas that piqued my coding curiosity in the demoscene was sound synthesis, where you could write software that generated lots of audio using a small amount of code. At some point I started coding “sorollet“, my own synthesiser, in C and C++. One of the first incarnations was 410 KB of C++ code that, when compiled, generated a 13 KB executable file. Running it would generate 2:25 minutes of music–the equivalent of 25.5 MB of stereo WAV data at 44.1 kHz, i.e. CD-quality audio. That’s a good compression ratio: all that sound from only 13 KB of binary!
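A quick sanity check of those numbers–this is just back-of-the-envelope arithmetic, assuming 16-bit samples, which is what CD-quality WAV data uses:

```javascript
// Uncompressed size of PCM audio: duration × sample rate × channels × bytes per sample.
function wavBytes(seconds, sampleRate, channels, bytesPerSample) {
  return seconds * sampleRate * channels * bytesPerSample;
}

var seconds = 2 * 60 + 25;                    // 2:25 of music
var bytes = wavBytes(seconds, 44100, 2, 2);   // 16-bit stereo at 44.1 kHz
var megabytes = bytes / 1000 / 1000;          // ≈ 25.6 MB, from a 13 KB executable
```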

Web Audio

web audio modular routing

Even though I’d had great fun building my synthesiser in C++, that path wasn’t without issues. I was mostly a web developer, so coding in C meant dealing with lots of memory allocation and management instead of having fun with pure audio code. I jumped at Web Audio as soon as I could because I was quite experienced with JavaScript, and it seemed so easy in comparison with C!



The first thing I did was just porting my C++ synthesiser to JavaScript, using one of the provided Web Audio nodes that allowed me to generate audio on the fly with JavaScript (the ScriptProcessorNode, formerly JavaScriptNode).

I was really happy to have my code running in the browser, in realtime! But… I quickly realised that was not what the web is about. The web is not about telling someone to visit a page, wait until some code loads, and then spend their next minutes listening to some music Sole composed. The web is about interacting and connecting APIs together, and I was failing quite miserably at that.

Sorollet.js UI

sorollet ui

I started building a UI for my synthesiser, so people could change the parameters and experiment with the different output they would get, and play some notes using their keyboard or clicking on the keys on the screen. They would also get a visual representation of what was being played, thanks to an oscilloscope of sorts that I would draw using the Canvas API.

But the best of all was that each time they changed any parameter for the synthesiser, the URL hash would update automatically, and if they then copied and pasted that URL and sent it to a friend, their friend could get a synthesiser “copy” with those settings applied. Likewise, that person could make more changes in the settings and send the new URL to another friend, which is way more “web-like” than what I had built initially.

Web Audio === easy (?)


Now I was really happy about the output and super excited because Web Audio was so easy to use! After all it’s all about connecting modules together!

OK… easy? I know not everyone finds it easy, or has even a tiny bit of knowledge about it, so I built some examples that progressively introduce its features and explain how to combine them with other Web APIs.

I also built a web component to help me with these demonstrations–so I wouldn’t be purely livecoding, but would still be able to run things step by step instead of all in one go. If I were demonstrating this in front of you in a talk, I’d know how to operate it and you wouldn’t need to do anything, but that’s not the case here, so these are the instructions for using the demos:

  • Command + E executes either the selected piece of code, or if nothing is selected, the entire code in the editor
  • You can toggle showing the code or not
  • You can also run the whole thing pressing the run button
  • Some examples have the autostart attribute so you don’t need to press anything in order to get things going

For more details have a look at the source code of the component. I’m working on making it an independent component, but I still haven’t quite figured out how to do it in a way that doesn’t involve using Bower, so stay tuned for more news if you’re interested in Bower-free web components.

That said, let’s move on to the examples!



Oscillators are one of the basic units for generating sound in Web Audio. But before you get an oscillator instance, you have to create an Audio Context. If you’re familiar with Canvas 2D or WebGL contexts, audio contexts are very similar: once you have one, it gives you access to methods and constants to generate stuff within that context. It’s where everything happens, and it’s also akin to a painter’s toolbox, since it provides the tools you will need to deal with audio.

Here’s how you create the context:

var audioContext = new AudioContext();

and once you have it… well, there’s nothing happening yet! We create an oscillator with this:

var oscillator = audioContext.createOscillator();

nothing’s playing yet… and in fact the oscillator is not even connected anywhere, it’s just floating in the “web audio context nothingness”. Let’s connect it before we start using it:

oscillator.connect(audioContext.destination);
audioContext.destination represents the final output for the audio context, or in other words: your computer’s sound card, and ultimately, the speakers or headphones–whatever you use to listen to audio!

We are now ready to generate some sound using our newly created oscillator:

oscillator.start(0);
We can also change the frequency the oscillator is playing at. By default it starts at 440.0 Hz, which is the standard A-4 note. Let’s make it play A-3, i.e. 220 Hz:

oscillator.frequency.value = 220;

That change is immediate. But we could also schedule the change to happen in two seconds from now:

oscillator.frequency.setValueAtTime(440, audioContext.currentTime + 2);

or even smoothly ramp to that value for two seconds:

oscillator.frequency.linearRampToValueAtTime(220, audioContext.currentTime + 2);

And that’s how we can create basic sounds and manipulate them with quite accurate timing with relative ease.
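Under the hood, linearRampToValueAtTime interpolates linearly from the previous scheduled value to the target. Here’s a small helper of my own that mirrors that interpolation (it’s not part of the Web Audio API, just a sketch of what the browser computes for you):

```javascript
// Linear interpolation from value v0 (at time t0) to v1 (at time t1),
// clamped outside the ramp, mirroring linearRampToValueAtTime behaviour.
function rampValue(v0, v1, t0, t1, t) {
  if (t <= t0) return v0;
  if (t >= t1) return v1;
  return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
}

// Ramping 440 Hz down to 220 Hz over two seconds:
rampValue(440, 220, 0, 2, 1); // halfway through the ramp → 330 Hz
```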

Another great feature of Web Audio is, as mentioned, its modularity. You can connect the output of one oscillator to a parameter of another oscillator, making the value of that parameter oscillate, and so build more complex sounds. But what is a parameter? It is any value you can change in a node–for example, the frequency of an OscillatorNode (parameters are technically known as AudioParams).

Suppose we create one oscillator to play sounds with, as we did before. Now we create another oscillator but give it a very low frequency value, 10 Hz. That’s why we call it an LFO: Low Frequency Oscillator.

var lfo = audioContext.createOscillator();
lfo.frequency.value = 10;

Now we create a Gain Node, which is another of the nodes that Web Audio provides to us. The purpose of these nodes is basically to multiply their input by the value of their gain parameter, so you can use them, for example, to reduce loudness (with gain values below 1.0) or to amplify very quiet sounds (with gain values above 1.0):

var lfoGain = audioContext.createGain();
lfoGain.gain.value = 100;

So if we connect the output of the LFO oscillator (which changes from -1 to 1) to the input of the gain node (which is set to multiply everything by 100), we’ll get values from -100 to 100:

lfo.connect(lfoGain);
If we connect this to the frequency parameter of the initial oscillator, it will be added to its frequency value–if the frequency is 220, it will start oscillating between 120 and 320 (220 − 100, 220 + 100), creating a funny, spooky kind of sound:

lfoGain.connect(oscillator.frequency);
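Numerically, the modulation described above works like this–a sketch of my own, not API code: the LFO’s sine output (−1 to 1) is scaled by the gain (×100) and added to the carrier’s base frequency:

```javascript
// Instantaneous frequency of a carrier modulated by a sine LFO.
function modulatedFrequency(base, depth, lfoHz, t) {
  return base + depth * Math.sin(2 * Math.PI * lfoHz * t);
}

// With base 220 Hz, depth 100 and a 10 Hz LFO, the frequency sweeps
// between 120 and 320 Hz, ten times per second.
modulatedFrequency(220, 100, 10, 0.025); // peak of the sine → ≈ 320 Hz
```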
This is just a small sample of what the Web Audio API can do, but it’s still just Web Audio, and we agreed before that the greatness of the Web lies in connecting multiple APIs together. So let’s look at an example that does more things at the same time:

Drag and Play

drag and play

We want to load a sample to play in our example, and we want to be able to load it in two different ways:

  1. dragging and dropping it from our file explorer to the browser window–we’ll use the Drag And Drop API, or…
  2. selecting a file using a file input (which makes more sense on touch devices, where there is generally no way to drag items across running apps)–we’ll use the File API to read the contents of the file client-side, instead of sending it to a server for further processing

Once we get the sample data as an ArrayBuffer, we’ll create a node of type BufferSource, and set its buffer to be the data we just decoded:

bufferSource = audioContext.createBufferSource();
bufferSource.buffer = buffer;

We also want it to loop!

bufferSource.loop = true;

And then starting it is similar to the way we start oscillators:

bufferSource.start(0);
Another thing we want to do is to display a representation of the loaded wave. We have a canvas we’ll use for this, and the drawSample function that takes values from -1 to 1–exactly the same values we have in the buffer! So it’s just a matter of running the following:

drawSample(waveCanvas, buffer.getChannelData(0));

Note: getChannelData(0) returns the first channel’s data. For monophonic sounds the buffer will only have one channel, but stereo and 3D sounds will have more than one. I’m keeping it simple and using the first one, which for stereo corresponds to the left channel. It’s not totally accurate, as we might be discarding too much data (if the signal is very different between the channels), but for demonstration purposes it should be more than enough.
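If you did want to be more accurate, you could average the channels instead of discarding all but the first. This is a little helper of my own, not something the API provides:

```javascript
// Hypothetical mono downmix: average all channels sample by sample,
// so material that only exists in one channel still shows up (at
// reduced amplitude) instead of being dropped entirely.
function downmix(channels) {
  var length = channels[0].length;
  var mono = new Float32Array(length);
  for (var i = 0; i < length; i++) {
    var sum = 0;
    for (var c = 0; c < channels.length; c++) {
      sum += channels[c][i];
    }
    mono[i] = sum / channels.length;
  }
  return mono;
}
```

In the example above you would call `downmix([buffer.getChannelData(0), buffer.getChannelData(1)])` for a stereo buffer before drawing.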

We also want to draw the wave that is being played on a canvas. To “hook” into the BufferSource output and get some already preprocessed data that we can then use on the canvas, we’ll use an instance of AnalyserNode:

var analyser = audioContext.createAnalyser();

This analyser is connected between the output of the bufferSource and the audio context’s destination, so that it can “inspect” what is going through:

bufferSource.connect(analyser);
analyser.connect(audioContext.destination);
Note: due to the way Web Audio is architected, bufferSources are meant to be disposed of once you’re done playing them–i.e., once you call their stop method they’re done, and calling start again has no effect; you have to create another BufferSource and assign it the buffer and all the parameters and connections. And in this particular example, each time you load a sample you also need to create a new BufferSource.

But we do not want to reconnect the buffer source to the analyser every time, so instead we create a “finalGain” node that we permanently connect to the analyser; we connect each bufferSource to the finalGain node, and let Web Audio clean up the disposed nodes when it deems appropriate (via JavaScript’s garbage collection).
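The rewiring pattern can be sketched with plain objects standing in for real Web Audio nodes–this is a testable stand-in of my own; in the browser you’d use audioContext.createGain() and createBufferSource() instead:

```javascript
// Minimal stand-in for a Web Audio node: it just records its connections.
function fakeNode(name) {
  return {
    name: name,
    connections: [],
    connect: function (dest) { this.connections.push(dest); }
  };
}

var analyser = fakeNode('analyser');
var finalGain = fakeNode('finalGain');
finalGain.connect(analyser); // wired once, never touched again

function playBuffer(buffer) {
  // A throwaway source per play; only this side of the graph ever changes.
  var source = fakeNode('bufferSource');
  source.buffer = buffer;
  source.connect(finalGain);
  return source;
}
```

The point of the pattern: no matter how many sources come and go, the finalGain → analyser side of the graph stays stable.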

Back to the analyser node: we will create an array of unsigned integers to store the analysis data. We will also make sure it is big enough that it can hold all the values that the analyser will return:

analyser.fftSize = 2048;
analyserData = new Uint8Array(analyser.frequencyBinCount);

Each time we want to draw the wave, we’ll ask the analyser to have a look and return the results of its analysis into the analyserData array:

analyser.getByteTimeDomainData(analyserData);
These values are bytes–which means they go from 0 to 255. But, as we mentioned, our drawing function drawSample requires values from -1 to 1, so we just convert them and put them into a Float32Array we initialised earlier:

for(var i = 0; i < analyserData.length; i++) {
        osciData[i] = analyserData[i] / 128 - 1;
}

And we’re finally ready to draw the wave:

drawSample(osciCanvas, osciData);

Just in case you wondered, we’re using requestAnimationFrame to drive the animation.

So here’s a moderately complex example that does a bunch of things using different Web APIs… in less than two hundred lines of code. This shows that the web platform is really powerful! Building the same thing using native code would involve a lot more code, longer development time, and lots of debugging. Plus it would work on only one platform, whereas this small but powerful example works on all platforms, no porting required.

Can we go even wilder? Of course we can, let’s involve WebGL so we can smoothly draw lots of elements at the same time, and let’s also use live microphone input instead of a pre-recorded sample, and we’ll display a visualisation of what’s happening.

Realtime visualisations

realtime visualisations

We’ll use Three.js for dealing with the WebGL side of things–i.e. all the rendering with nice antialias, shadow, fog, etc.

We’ll use the getUserMedia part of WebRTC. This allows us to access both the webcam and microphone input, but for the purposes of this demo we just want to “hear” things, so we’ll request only audio.

We will create an analyser node again, but instead of connecting a bufferSource to it as we did in the previous example, we’ll connect a MediaStreamSource created from the stream we got from getUserMedia. MediaStreamSourceNodes let us take a MediaStream (the type that getUserMedia returns) and send it to other nodes, so we can integrate external sources of sound into our Web Audio graph–even sound from another peer if we’re using WebRTC for a call!

navigator.getUserMedia(
        { audio: true },
        function yay(stream) {
                source = audioContext.createMediaStreamSource(stream);
                source.connect(analyser);
        },
        function nope(err) {
                console.error("oh noes", err);
        }
);

Once access to the microphone is allowed, we’ll start getting interesting data out of the analyser, and the bars won’t be boringly static but will move in response to changes in the input level. Try clapping!

So we have a really smooth example that draws a lot of detail on the screen in response to live microphone input, and it’s not only multiplatform but, again, less than two hundred lines of code. Building this for native platforms would be really, really long and tedious.

Browser makers have already put a lot of work into unifying these kinds of multimedia interfaces (sample decoding, live input streams, accelerated graphics) so you can take advantage of them and build awesome stuff, instead of fighting with compilers and platform-specific frameworks to access these capabilities.

The web platform is really incredibly powerful nowadays, but…

We shouldn’t stop here

There are still over two billion people who don’t have access to the Internet.

That’s right: 2,000,000,000+ people. That’s about two and a half Europes or, in terms of Canada (where I gave this talk too), about 16 times the population of Canada.

At Mozilla we believe the Internet must be open and accessible, so we are working on fixing this too. We partnered with some manufacturers to make a phone that runs Firefox OS and is also affordable: “the $25 phone”.


This phone is in the same price range as feature phones, but it runs Firefox OS, which can be upgraded periodically and also has a far lower barrier to entry for businesses and creators than iOS or Android, since apps are written using JavaScript. These apps can also run on other devices and operating systems–not only Firefox OS.

We’re also working on new hardware APIs for accessing all these new sensors and features using purely JavaScript. The work that goes into this benefits all platforms as these APIs are standardised and more vendors implement them in their browsers—so we get closer and closer to the Write Once, Run Anywhere “dream”.

We have a lot of incredible powers on the web, and as Uncle Ben would say, with great power comes great responsibility. It’s great that we can do so much, but we also should be thinking about doing good things–it’s our responsibility!

So I sat down and tried to think of ways in which I could use my new powers for good. What about…

Simulating instruments

Suppose you’re a kid, just like I once was, who wants to learn music but has no instrument. But you do have a phone.

There is a huge list of instruments on Wikipedia. What if we built some sort of simulation software that could recreate different instruments using just a bunch of parameters and no sample data–just as I did with my sound synthesis experiments? Once you had the application code, getting “new instruments” would just be a matter of downloading parameter data, which requires very little bandwidth. That kid with no instruments but a phone could now have a bunch of different virtual instruments!
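To make the bandwidth argument concrete, here is a hypothetical “instrument patch”–the format and parameter names are invented purely for illustration; any real synth would define its own:

```javascript
// A hypothetical patch: just synthesis parameters, no sample data.
var fluteish = {
  wave: 'sine',
  attack: 0.08,    // seconds
  release: 0.3,    // seconds
  vibratoHz: 5,
  vibratoDepth: 4  // Hz
};

// Serialised, a patch like this is a few dozen bytes...
var patchBytes = JSON.stringify(fluteish).length;

// ...versus a single second of CD-quality audio, which is 176,400 bytes.
var oneSecondOfWav = 44100 * 2 * 2; // rate × channels × bytes per sample
```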

Also, since this would be running on phones with lots of sensors, we could make the most of them and, for example, use touch and pressure where available, so we could build an engaging, interactive simulation.

And what if, instead of keeping our sets of parameters to ourselves, we shared them by uploading them to a patch database where other people could download patches too? We would be building an amazing resource–especially if we let people remix existing patches. Another great outcome would be that, by exposing your creation to people from a different background, you’ll get unusual contributions, and that’s always great and enriching.

Unconventional composers

Once you’ve built an instrument simulator, what is stopping you from building some sort of composer so that you can write down your own songs? But we should be careful to avoid building a conventional composer, in the manner of staff-based or drum-machine-based composers.


Because these composers are not suited to non-Western music. For example some music from Eastern Europe has lots of tempo and key changes, and all these things are lost when “translating” to a staff based music transcription.

Instead, I’d propose we start by recording everything that happens while playing a simulated instrument, and make the data available to “data scientists”–preferably local data scientists–so they can experiment with the recordings and devise some sort of visualisation/manipulation interface that works well for that kind of music. Maybe they will come up with local notations that seem very strange to us, but that work really well in their settings.

And again, since we’re storing only the events, not the sample data, transmitting these songs would take far less bandwidth, time and money than sending an MP3 file over a 2.5G connection.

People might start composing their own songs using their own very local traditions and maybe share them afterwards, and what might happen is that we end up with a world wide library of local songs—a true treasure trove for mankind that anyone could access.

Too conventional

But even if they sound fun, these ideas are still quite conventional. I had to think of something that went further and was more original. What if I took the WebFM API (available in Firefox OS) and mixed it with the Web Audio API? What could happen?

+ Web Audio

I think we could maybe have “over-the-FM” data transmission. Granted, the bandwidth wouldn’t be especially amazing–only 0.65 Mb a day–but that is still about 4000 SMS messages. And because it is broadcast, it wouldn’t slow down if many users tried to get the data at the same time.

There are some existing precedents, mostly focused on providing updates on things such as national news and weather, or information complementing the programme currently being broadcast. But what if communities used this to deliver local updates? For example, the status of water tanks, or the area weather forecast—things that are really important to the people close to that FM station.

And although these ideas might sound clever and cool…

…these are just some examples that my ignorant Western mind came up with…

… but I’ve never set foot outside my bubble of privilege, and thus I can’t predict what is really needed in a place where an EDGE connection is the best you can get (if you can get anything at all). And while I humbly recognise that I might be severely wrong about this, I also believe that 3D games, or the usual reasons why “Web Audio is awesome” in the Western world, are not what solves problems for people in those places.

However, that doesn’t mean that we should just give up on Web Audio and feel sad and miserable that we can’t help people because we don’t even know about their issues. Remember that we have great powers… and a great responsibility—a responsibility to teach and make this all as accessible and easy to use as we can. And to keep experimenting, devising new ideas, and creating more code over which lots of Web Audio stuff can be built in the future.

Let’s build stuff and let’s share it. Let’s speak about this and make it all better, for everyone, so they can build their own solutions to their problems—which they understand better than we do!

And let’s do it together! :-)


Frequently asked questions

Each time I’ve given this talk I’ve been asked lots of interesting questions, so I figured they should accompany this post too, since some of them come up really often. Here we go:

Where do I start learning about Web Audio? Which library should I use?

You could start by having a look at the Web Audio API book by Boris Smus–maybe even buy it if you find it useful!

Once you’re done with the book, the Web Audio API specification is also quite readable, and it’s hosted on GitHub, so if you find that something is not clear you can file a new issue to get it clarified.

The Web Audio API is simple enough that you don’t need any library to get started. THANKFULLY.

How would you go about writing those instrument simulators?

There are many ways to simulate instruments. You can write new simulators in JavaScript, or compile the core of existing C/C++ libraries to asm.js using tools such as Emscripten—we don’t need to spend our time rewriting things that already work well.

Of course you also have to take into account what can actually run on a phone. It’s not a full-blown computer, so you have to be mindful of its restrictions and adjust your code so it degrades nicely on less powerful platforms.

Have you written any sort of online composer?

Yes, but not the sort I’m advocating for. I built a drum machine demo that is included in Sorollet.js – online here. It has many issues, especially timing issues! But it was an early attempt, so I don’t go too heavy on the self-torture here. Still, it has nice features such as storing the whole song in the URL so you can share it. Sadly the URL is a bit too long for some places, so you can’t actually share it, ha!

I started building something else later but it is not public (mostly because I broke something and it doesn’t work right now, but also because there’s nothing to see yet).

Can I actually connect Web FM with Web Audio today?

Turns out you can’t–so far the Web FM API speaks directly to the hardware and doesn’t go through JS, but there have been discussions hinting at being able to get a data URI for the media stream instead of just connecting to the speakers.

I’ve asked at the Web API list for clarifications. Let’s see what happens :-)

What about MIDI in the browser?

There is a Web MIDI API but it is not implemented in Firefox. I wrote about this a while ago, but in short, if you think you have what it takes, you’re more than welcome to help us implement it!

In the meantime you can “hack” temporary solutions, such as running Node on your computer to read the MIDI or OSC data and forward it to the browser using something like Socket.IO, which is what I did for my JSConf.EU 2013 project.

Can you do speech recognition with Firefox OS and Web Audio?

Not yet… but here’s a page detailing a roadmap of sorts, and here’s a tracking bug with all the pending subtasks.

When are Firefox OS phones going to be sold in (put your country here)?

I can’t answer that with certainty because it depends on the operators in each country (for operator-branded phones) and on selling restrictions (for developer phones sold directly–some stores won’t sell to some countries). Your best bet is either to search with your favourite search engine or to contact your local Mozilla community/reps, who may be more aware of the current status than I can possibly be.

Otherwise I will refer you to the Flame developer phone page.

Also, no, I don’t have phones to give you away, sorry.

What sort of features will those phones have? Conductive vs capacitive screens?

Again, this depends on which sort of markets the operators are targeting and which sort of phones they’re willing to work with. I’m sorry I can’t answer :-)


Mozilla WebDev CommunityAnimating the Firefox Desktop Pages

As you may have noticed, Firefox for desktop computers (Windows, Mac, and Linux) got a redesigned interface with the release of Firefox 29.0. This redesigned browser called for redesigned web pages to showcase the new interface (the tabs, icons, menus, etc., collectively called “browser chrome”) and new features (especially the new customization menu).

Naturally, the main audience for these pages is people using browsers that aren’t Firefox, so we wanted to illustrate the new Firefox design in a fun and compelling way that gives them a real sense of what it looks like, hopefully encouraging folks to download it and see it first hand. Another big target audience is Firefox users who haven’t yet updated, so we needed to give them an overview of what’s new.

This also gave us a chance to create some snazzy animations to show off some of the advances in CSS and SVG supported by the current generation of browsers, both on desktop computers and mobile devices. Here’s how we made it.

Browser Chrome Animations

In order to demonstrate features of Firefox, we needed to simulate the way interface elements respond to user actions; opening and closing tabs, rearranging icons, adding a bookmark, and so on. This called for fairly complicated animation sequences that had to be quick and buttery smooth. The complex interplay of multiple elements moving both sequentially and in tandem really drove home the need for a CSS animation editor (vote it up!).

Approach & structure

Each animation illustrating the browser chrome (One, Two, Three) is wrapped in a div element with a common class applied (animation-wrapper), along with an id we can use to target the specific element with JavaScript.

The first element inside the animation wrapper is a composite fallback image for browsers that don’t support CSS animations. This image has a common classname (fallback) for easy CSS targeting. We can conditionally hide this image by leveraging the cssanimations class Modernizr applies to the body. By first assuming that animation is not supported, we ensure a functional degradation for even the oldest and least capable browsers, and we can progressively enhance the page for more advanced browsers that support the more advanced features.

The next element inside the wrapper div is the stage for the entire animation – <div class="stage">, really just an invisible box in which other elements can move around. Using the same cssanimations class from Modernizr, we’ll display the stage for browsers that can handle the animation.

/* for legacy browsers */
.fallback {
    display: block;
}
.stage {
    display: none;
}

/* for modern browsers */
.cssanimations {
    .fallback {
        display: none;
    }
    .stage {
        display: block;
    }
}

(We use Less to preprocess our CSS, so those nested rules are converted into separate rules with descendant selectors.)

The final task is to trigger the animations only when they come into view, as there’s no sense running an animation while it’s off screen. We used jQuery Waypoints to monitor the page’s scroll position, adding an animate class to each wrapper div when it comes into view. The addition of that class sets off the CSS animation sequence.
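The trigger itself boils down to a comparison between an element’s position and the scroll offset. Here is a minimal, standalone sketch of that logic (the real page delegates this to jQuery Waypoints; the function name here is our own):

```javascript
// Returns true once the top of a wrapper has scrolled into the viewport,
// at which point the page would add the `animate` class to it.
function shouldAnimate(elementTop, scrollTop, viewportHeight) {
  return elementTop < scrollTop + viewportHeight;
}

// A wrapper 1200px down the page, in an 800px-tall viewport:
shouldAnimate(1200, 0, 800);   // false, still off screen
shouldAnimate(1200, 500, 800); // true, the wrapper has come into view
```

Waypoints also handles things like firing the callback only once, which this sketch leaves out.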

.animating-element {
    position: absolute;
    top: 10px;
    right: 40px;
}

/* animate class added via JavaScript/Waypoints based on scroll position */
.animate {
    .animating-element {
        animation: moveAround 0.7s ease 0s 1 normal forwards;
    }
}

This approach worked well and helped us keep each animation block self-contained and modular. It provided a common and easily customizable HTML & CSS structure for each animation, and less capable browsers still have access to all the content in a well styled page. Within that stage box we can add any other content or elements we need.

Timing is everything

The browser chrome animations have multiple elements with multiple animations applied, so getting the timing just right became rather tedious. Because separate animations are completely independent in CSS, there’s no simple way to tell a browser to “start animationY 2.1 seconds after animationX completes.” Instead, you need to do the calculations yourself and hard code them into each animation declared in the CSS, liberally using animation-duration and animation-delay to fire off each step of the scene in sequence. The mental gymnastics go something like this:

Step 1 has a 0.7 second delay and runs for 1.5 seconds. Then Step 2 should start 1.4 seconds after Step 1 completes, so it should have a delay of… 3.6 seconds. Step 2 runs for 2 seconds, and Step 3 needs to begin a quarter of a second before Step 2 completes, so Step 3 needs a delay of 5.35 seconds…

As you can imagine, the more elements you animate and the more steps you have in the sequence, the harder the math becomes. Adjusting the timing of one step in the chain can mean adjusting all the subsequent steps to compensate.
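Since each delay is just the previous step’s end time plus a gap, the bookkeeping can be scripted. This is a hypothetical helper (not part of the actual build, working in milliseconds to avoid floating-point noise; the third step’s duration is an invented value, as it doesn’t affect any delay):

```javascript
// Each step starts `gap` ms after the previous step ends (a negative
// gap starts it before the previous step finishes). Returns the
// absolute delay of each step, suitable for animation-delay.
function computeDelays(steps) {
  var delays = [];
  var prevEnd = 0;
  steps.forEach(function (step) {
    var delay = prevEnd + step.gap;
    delays.push(delay);
    prevEnd = delay + step.duration;
  });
  return delays;
}

// The sequence described above: a 0.7s delay and a 1.5s run, a second
// step 1.4s after the first ends running for 2s, and a third step
// starting 0.25s before the second completes.
computeDelays([
  { gap: 700, duration: 1500 },
  { gap: 1400, duration: 2000 },
  { gap: -250, duration: 1000 }
]); // → [700, 3600, 5350]
```

Adjusting one step’s timing then only means editing its gap, rather than recomputing every subsequent delay by hand.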

Designer Ty Flanagan created video mockups in Adobe After Effects to serve as a guide for the CSS animation, which was an enormous help. There was still a fair amount of fine tuning to be done by hand, constantly refreshing the page and tweaking a few milliseconds until it just “felt right,” but that process could have taken much longer without the videos for reference.

Another way to do all of this would have been controlling the chained animations in JavaScript, relying on the animationend event to fire off the next step in the sequence. However, a bunch of event listeners and setTimeout calls in a script probably wouldn’t have been a faster or better approach.

Animations in a Circle

Some of our favorite animations are the customize icons, mostly because the circular mask effect is so neat in its simplicity.

The key to achieving the circular mask is a bit of absolute positioning and the incredibly versatile border-radius. The markup isn’t too complex – a stage to contain everything, a div for the circular mask, and whatever elements need to be animated.

<div class="stage">
  <div class="circular-mask"></div>
  <div class="animated" id="animated-block1"></div>
  <div class="animated" id="animated-block2"></div>
</div>

If you’d like to see an example and play around with the code before reading about the methodology, here’s a little demo on CodePen.

The stage

The stage has a set height and width with a hidden overflow and relative positioning. The background color of the stage fills the circular mask.

.stage {
    position: relative;
    width: 300px;
    height: 180px;
    overflow: hidden;
    background: #fff;
}

The circular mask

The circular mask is absolutely positioned at the center of the stage, its offset calculated as (stage width - (mask width + 2 × mask border width)) / 2 (this equation could be simpler with box-sizing: border-box). The mask has a wide enough border to reach just past the furthest boundary of the stage. The border bumping up against the page background is what completes the illusion of the mask, so the mask’s border color matches the section’s background color (sadly, this means the technique only works with a solid colored background).

To make sure the mask covers the animated elements, it has a z-index at least one higher than the front-most animated element.

.circular-mask {
    position: absolute;
    width: 164px;
    height: 164px;
    border: 100px solid #ccc;
    border-radius: 50%;
    top: -100px;
    left: -32px;
    z-index: 2;
}

/* animated elements share absolute positioning and 
   a z-index lower than .circular-mask */
.animated {
    position: absolute;
    z-index: 1;
}
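As a sanity check on the centering arithmetic, a throwaway script (values taken from the CSS above) reproduces the -32px left offset:

```javascript
// Offset that centers a bordered box over the stage along one axis.
// The mask's rendered size is its content box plus the border on
// both sides, hence the 2 * borderWidth term.
function maskOffset(stageSize, maskSize, borderWidth) {
  return (stageSize - (maskSize + 2 * borderWidth)) / 2;
}

maskOffset(300, 164, 100); // → -32, matching `left: -32px`
```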

The animated elements

The only requirement for the animated elements is that they reside inside the stage and have a z-index lower than the mask. Otherwise, anything goes.

Though purely flair (as opposed to the feature demonstrations provided by the browser chrome animations), these circular animations were fun to build and we’re very pleased with the result.

Drawing Firefox in the browser

When we first watched a video mockup of the proposed intro animation for the new Firefox for Desktop landing page, we wondered if this was actually possible to pull off in a web browser. The animation involves a series of moving lines which fill in as the outlines fade onto the web page, creating an illustrated image of the Firefox browser. Definitely not a typical animation you see on the web every day!

The first step on the path of discovery was to choose an appropriate image format. SVG seemed like the most obvious choice, given that the images needed to scale. Nobody on the team had any prior experience with SVG animation, but it seemed like a fun challenge! Ty came up with a rough demo showing how we might use SVG path strokes for the moving lines, which seemed like a perfect starting point. We could have chosen to use an SVG animation library such as Raphaël or Snap.svg, but we wanted to keep our dependencies as light as possible (we had plenty already; no reason to add any more if we can avoid it). The timing and intricacies of the animation made a strong case for trying to use CSS keyframe animations, and this would also be a good opportunity to show off their potential. It was then we recalled this really clever technique that could pull off the same line-drawn effect using CSS.

Animating SVG line paths using CSS

The trick to the line drawing effect is to animate the stroke-dashoffset of an SVG image path. The stroke-dasharray property allows you to apply a dashed border effect to the outline of an SVG image. The clever part is that if you set the length of the dash equal to the total length of the image path, you can then animate stroke-dashoffset to make it appear as if the line is being drawn one segment at a time. Magic!

Here’s an example of an SVG path:

<path class="circle" stroke="#5d7489" stroke-width="2" stroke-opacity="1" 
fill="#5d7489" fill-opacity="0" 
d="…" />

And some CSS to animate it:

.circle {
    stroke-dasharray: 117;
    stroke-dashoffset: 117;
    animation: draw-circle 5s linear forwards;
}

@keyframes draw-circle {
    100% {
        stroke-dashoffset: 0;
    }
}

You can find the required length of a path pretty easily using a bit of JavaScript:

var circle = document.querySelector('.circle');
var length = circle.getTotalLength();

The animation on the finished page is quite a bit more complicated than this example, but hopefully you can get the idea. We also animated fill-opacity and stroke-opacity to color in the browser panels and fade out the lines at the end of the animation, leaving a scalable vector drawing of the new Firefox.
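For a linear animation like the example above, the offset at any moment is just a linear interpolation from the path length down to zero. A small illustrative helper (ours, not from the page) makes that explicit:

```javascript
// stroke-dashoffset for a path of total length `length`, `elapsed`
// seconds into a linear draw animation lasting `duration` seconds.
// It starts at `length` (nothing drawn) and ends at 0 (fully drawn).
function dashOffsetAt(length, elapsed, duration) {
  var progress = Math.min(elapsed / duration, 1);
  return length * (1 - progress);
}

dashOffsetAt(117, 0, 5);   // → 117 (line invisible)
dashOffsetAt(117, 2.5, 5); // → 58.5 (half drawn)
dashOffsetAt(117, 5, 5);   // → 0 (fully drawn)
```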

Scaling SVG using CSS transforms

As well as animating the line drawing, we also needed to scale the image as it zooms onto the page. From there, the icons also zoom into their appropriate places. This was all done using regular CSS transforms via translate and scale.

There are some notable cross-browser inconsistencies here when it comes to scaling SVG using this method. Both Chrome and Safari render a bitmap of an SVG prior to performing a CSS transform. This is presumably for performance reasons, but it does lead to blurry images when you blow them up. Firefox seems to weigh up performance and image quality a little differently, and renders sharper images when they scale. To get around the resizing issues, the best solution was to render icons at their largest size initially and then scale them down, as opposed to the other way around. It seems browsers still have some work to do in this area in order to improve SVG rendering under these circumstances.

Putting it all together

Combining all the separate CSS keyframe animations together was probably the most time consuming task. We can also look forward to the day when we no longer need vendor prefixes for CSS keyframe animations, as the duplication of code required is still a bit undesirable. Aside from this, getting the timing right was once again the trickiest part. For a less-than-5-second animation, having to reload and run through the whole sequence over and over made the process pretty time consuming (here’s another vote for that CSS animation editor).

The final result of all this work is a set of pages that beautifully show off what the new desktop Firefox looks like while also showing off what it can do with open web technologies. If you haven’t yet, please do check it out. It’s all responsive, mobile-friendly, progressively enhanced, retina-ready, and still pretty lightweight all things considered. And without a single byte of Flash.

The team

  • Jon Petto – Developer
  • Alex Gibson – Developer
  • Holly Habstritt Gaal – UX Designer
  • Ty Flanagan – Graphic Designer
  • Matej Novak – Copywriter
  • Jennifer Bertsch – Product Manager
  • Mike Alexis – Program Manager

This article was co-written by Jon Petto and Alex Gibson, with editorial assistance from Craig Cook.

Nicholas NethercoteThe story of a tricky bug

The Bug Report

A few weeks ago I skimmed through /r/firefox and saw a post by a user named DeeDee_Z complaining about high memory usage in Firefox. Somebody helpfully suggested that DeeDee_Z look at about:memory, which revealed thousands of blank windows like this:

  │    │  ├────0.15 MB (00.01%) ++ top(about:blank, id=1001)
  │    │  ├────0.15 MB (00.01%) ++ top(about:blank, id=1003)
  │    │  ├────0.15 MB (00.01%) ++ top(about:blank, id=1005)

I filed bug 1041808 and asked DeeDee_Z to sign up to Bugzilla so s/he could join the discussion. What followed was several weeks of back and forth, involving suggestions from no fewer than seven Mozilla employees. DeeDee_Z patiently tried numerous diagnostic steps, such as running in safe mode, pasting info from about:support, getting GC/CC logs, and doing a malware scan. (Though s/he did draw the line at running wireshark to detect if any unusual network activity was happening, which I think is fair enough!)

But still there was no progress. Nobody else was able to reproduce the problem, and even DeeDee_Z had trouble making it happen reliably.

And then on August 12, more than three weeks after the bug report was filed, Peter Van der Beken commented that he had seen similar behaviour on his machine, and by adding some logging to Firefox’s guts he had a strong suspicion that it was related to having the “keep until” setting for cookies set to “ask me every time”. DeeDee_Z had the same setting, and quickly confirmed that changing it fixed the problem. Hooray!

I don’t know how Peter found the bug report — maybe he went to file a new bug report about this problem and Bugzilla’s duplicate detection identified the existing bug report — but it’s great that he did. Two days later he landed a simple patch to fix the problem. In Peter’s words:

The patch makes the dialog for allowing/denying cookies actually show up when a cookie is set through the DOM API. Without the patch the dialog is created, but never shown and so it sticks around forever.

This fix is on track to ship in Firefox 34, which is due to be released in late November.

Takeaway lessons

There are a number of takeaway lessons from this story.

First, a determined bug reporter is enormously helpful. I often see vague complaints about Firefox on websites (or even in Bugzilla) with no responses to follow-up questions. In contrast, DeeDee_Z’s initial complaint was reasonably detailed. More importantly, s/he did all the follow-up steps that people asked her/him to do, both on Reddit and in Bugzilla. The about:memory data made it clear it was some kind of window leak, and although the follow-up diagnostic steps didn’t lead to the fix in this case, they did help rule out a number of possibilities. Also, DeeDee_Z was extremely quick to confirm that Peter’s suggestion about the cookie setting fixed the problem, which was very helpful.

Second, many (most?) problems don’t affect everyone. This was quite a nasty problem, but the “ask me every time” setting is not commonly used, because it causes lots of dialogs to pop up, which few users have the patience to deal with. It’s very common that people have a problem with Firefox (or any other piece of software), incorrectly assume that it affects everyone else equally, and conclude with “I can’t believe anybody uses this thing”. I call this “your experience is not universal”. This is particularly true for web browsers, which unfortunately are enormously complicated and have many combinations of settings that get little or no testing.

Third, and relatedly, it’s difficult to fix problems that you can’t reproduce. It’s only because Peter could reproduce the problem that he was able to do the logging that led him to the solution.

Fourth, it’s important to file bug reports in Bugzilla. Bugzilla is effectively the Mozilla project’s memory, and it’s monitored by many contributors. The visibility of a bug report in Bugzilla is vastly higher than a random complaint on some other website. If the bug report hadn’t been in Bugzilla, Peter wouldn’t have stumbled across it. So even if he had fixed it, DeeDee_Z wouldn’t have known and probably would have been stuck with the problem until Firefox 34 came out. That’s assuming s/he didn’t switch to a different browser in the meantime.

Fifth, Mozilla does care about memory usage, particularly cases where memory usage balloons unreasonably. We’ve had a project called MemShrink running for more than three years now. We’ve fixed hundreds of problems, big and small, and continue to do so. Please use about:memory to start the diagnosis, and add the “[MemShrink]” tag to any bug reports in Bugzilla that relate to memory usage, and we will triage them in our fortnightly MemShrink meetings.

Finally, luck plays a part. I don’t often look at /r/firefox, and I could have easily missed DeeDee_Z’s complaint. Also, it was lucky that Peter found the bug in Bugzilla. Many tricky bugs don’t get resolved this quickly.

William LachanceA new meditation app

I had some time on my hands two weekends ago and was feeling a bit of an itch to build something, so I decided to do a project I’ve had in the back of my head for a while: a meditation timer.

If you’ve been following this blog, you’ll know that meditation has been a pretty major interest of mine for the past year. The foundation of my practice is a daily round of seated meditation at home, where I attempt to follow the breath and generally try to connect with the world for a set period every day (usually varying between 10 and 30 minutes, depending on how much of a rush I’m in).

Clock watching is rather distracting while sitting, so having a tool to notify you when a certain amount of time has elapsed is quite useful. Writing a smartphone app to do this is an obvious idea, and indeed approximately a zillion of these things have been written for Android and iOS. Unfortunately, most are not very good. Really, I just want something that does this:

  1. Select a meditation length (somewhere between 10 and 40 minutes).
  2. Sound a bell after a short preparation to demarcate the beginning of meditation.
  3. While the meditation period is ongoing, do a countdown of the time remaining (not strictly required, but useful for peace of mind in case you’re wondering whether you’ve really only sat for 25 minutes).
  4. Sound a bell when the meditation ends.
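As a rough illustration of point 3, the countdown display boils down to formatting the remaining seconds; a hypothetical helper (names invented here, not taken from the app’s source) might look like:

```javascript
// Format remaining whole seconds as m:ss for an on-screen countdown.
function formatRemaining(totalSeconds) {
  var minutes = Math.floor(totalSeconds / 60);
  var seconds = totalSeconds % 60;
  return minutes + ':' + (seconds < 10 ? '0' : '') + seconds;
}

formatRemaining(25 * 60); // → "25:00"
formatRemaining(1475);    // → "24:35"
```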

Yes, meditation can get more complex than that. In Zen practice, for example, sometimes you have several periods of varying length, broken up with kinhin (walking meditation). However, that mostly happens in the context of a formal setting (e.g. a Zendo) where you leave your smartphone at the door. Trying to shoehorn all that into an app needlessly complicates what should be simple.

Even worse are the apps which “chart” your progress or have other gimmicks to connect you to a virtual “community” of meditators. I have to say I find that kind of stuff really turns me off. Meditation should be about connecting with reality in a more fundamental way, not charting gamified statistics or interacting online. We already have way too much of that going on elsewhere in our lives without adding even more to it.

So, you might ask why the alarm feature of most clock apps isn’t sufficient. Really, it is, most of the time. A specialized app can make selecting the interval slightly more convenient, and we can preselect an appropriate bell sound up front. It’s also nice to hear something to demarcate the start of a meditation session. But honestly, I didn’t have much of a reason to write this other than the fact that I could. Outside of work, I’ve been in a bit of a creative rut lately and felt like I needed to build something, anything, and put it out into the world (even if it’s tiny and only a very incremental improvement over what’s out there already). So here it is:


The app was written entirely in HTML5 so it should work fine on pretty much any reasonably modern device, desktop or mobile. I tested it on my Nexus 5 (Chrome, Firefox for Android)[1], FirefoxOS Flame, and on my laptop (Chrome, Firefox, Safari). It lives on a subdomain of this site or you can grab it from the Firefox Marketplace if you’re using some variant of Firefox (OS). The source, such as it is, can be found on GitHub.

I should acknowledge taking some design inspiration from the Mind application for iOS, which has a similarly minimalistic take on things. Check that out too if you have an iPhone or iPad!

Happy meditating!

[1] Note that there isn’t a way to inhibit the screen/device from going to sleep with these browsers, which means that you might miss the ending bell. On FirefoxOS, I used the requestWakeLock API to make sure that doesn’t happen. I filed a bug to get this implemented on Firefox for Android.

Zack WeinbergI Will File Bugs For You

This post prompted by Aaron Klotz’s “Diffusion of Responsibility” and Sumana Harihareswara’s “Inessential Weirdnesses in Open Source.”

One of the most common ways to start interacting with a free software project, as opposed to just using the software produced by that project, is when you trip over a bug or a missing feature and now you need to go tell the developers about it. Unfortunately, that process is often incredibly off-putting. If there’s a bug tracking system, it is probably optimized for people who spend all day every day working with it, and may appear to demand all kinds of information you have no idea how to supply. If there isn’t, you’re probably looking at signing up for some sort of mailing list (mailing list! how retro!). Either way, it may not be easy to find, and there’s a nonzero chance that some neckbeard with a bad attitude is going to yell at you. It shouldn’t be so, but it is.

So, I make this offer to you, the general public, as I have been doing for close friends for many years: if you don’t want to deal with that shit, I will file bugs for you. I’ve been on the Internet since not quite the elder days, and I’ve been hacking free software almost as long; I know how to find these people and I know how to talk to them. We’ll have a conversation and we’ll figure out exactly what’s wrong and then I’ll take it from there. I’m best at compilers and Web browsers, but I’ll give anything a shot.

THE FINE PRINT: If you want to take me up on this, please do so only via email; my address is on the Contact page. Please allow up to one week for an initial response, as this service is provided in my copious free time.

Offer valid only for free software (also known as “open source”) (as opposed to software that you are not allowed to modify or redistribute, e.g. Microsoft Word). Offer also only valid for problems which I can personally reproduce; it’s not going to go well for anyone involved if I have to play telephone with you and the developers. Offer specifically not valid for operating system kernels or device drivers of any kind, both because those people are even less pleasant to work with than the usual run of neckbeards, and because that class of bugs tends to be hardware-dependent and therefore difficult for me to personally reproduce on account of I don’t have the exact same computer as you.

The management cannot guarantee this service will cause bugs to actually get fixed in any kind of timely fashion, or, in fact, ever.

Sean MartellMozilla ID Project: Wordmark exploration

This is the first in a series of live streaming work sessions exploring a refresh of the Mozilla brand ID system.

Today’s session was exploring Fira Sans for our wordmark versus the current Meta Bold. Fira Sans is an open Web font whereas Meta is a closed commercial font. In the video I explore customizing certain characters to make a unique wordmark yet basing it on our Fira font. I also start to explore how we could show data visualization in a logo mark.

More to come! To follow along with this project, feel free to subscribe to the YouTube channel and follow me on Twitter.


Aaron KlotzDiffusion of Responsibility

Something that I’ve been noticing on numerous social media and discussion forum sites is that whenever Firefox comes up, inevitably there are comments in those threads about Firefox performance. Given my role at Mozilla, these comments are of particular interest to me.

The reaction to roc’s recent blog post has motivated me enough to respond to a specific subset of comments. These comments all exhibit a certain pattern: their authors are experiencing problems with Firefox, they are very dissatisfied, but they are not discussing them in a way that is actionable by Mozilla.

How Mozilla Finds Problems

Mozilla encourages our contributors to run prerelease versions of Firefox, especially Nightly builds. This allows us to do some good old-fashioned dogfooding during the development of a Firefox release.

We also have many tools that run as part of our continuous integration infrastructure. Valgrind, Address Sanitizer, Leak Sanitizer, reference count tracking, deadlock detection, assertions, Talos performance tests, and xperf are some of the various tools that we apply to our builds. I do not claim that this list is exhaustive! :-)

We use numerous technologies to discover problems that occur while running on our users’ computers. We have a crash reporter that (with the user’s consent) reports data about the crash. We have Firefox Health Report and Telemetry that, when consented to, send us useful information for discovering problems.

Our ability to analyze crash report/FHR/telemetry data is limited to those users who consent to share it with us. As much as I am proud of the fact that we respect the privacy of our users, this means that we only receive data from a fraction of them; many users who are experiencing problems are not included in this data.

Despite the fact that we have all of these wonderful tools to help us deliver quality releases, the fact is that they cannot exhaustively catch every possible bug that is encountered out in the wild. There are too many combinations of extensions and configurations out there to possibly allow us to catch everything before release.

That’s where you, our users, come in!

If You See Something, Report It!

Reddit, Hacker News, Slashdot and other similar sites are fantastic for ranting. I should know — I do it with the best of them! Having said that, they are also terrible for the purposes of bug reporting!

As users it’s easy for us to assume that somebody else will encounter our problems and report them. Unfortunately that is not always the case, especially with a browser that is as configurable as Firefox.

Reporting Bugs

If you are experiencing a bug, the best way to ensure that something can be done about your bug is to report it in Bugzilla. This might seem a little bit intimidating for somebody who is new to bug reporting, but I assure you, Mozillians are really nice! As long as you follow the etiquette guidelines, you’ll be fine! One suggestion though: try to follow our bug writing guidelines. Doing so will maximize the likelihood of a contributor being able to reproduce your problem. In addition to these suggestions for bug filing, I also suggest including certain types of data for specific types of problems:

Reporting a Bug for High Memory Usage

If you’re experiencing problems with Firefox’s memory use, open a tab, and point your browser to about:memory. This nifty feature provides a breakdown of Firefox memory consumption. Save that report and attach it to the bug that you’ve filed.

Reporting a Bug for Slowness

If you want to report a problem with Firefox being slow, the best way to help us is to include data that has been generated by the Gecko Profiler. Unfortunately, this tool requires a bit of technical savvy, but attaching the URL of an uploaded profile to your performance bug can be very helpful.

Reporting a Bug for a Persistent, Reproducible Crash

As you can see in our crash report data, crashes reported to Mozilla are ranked by frequency. As you might expect, this implies that it’s often the squeaky wheels that get the grease.

If you have an easily reproducible crash and you are sending your reports to Mozilla, you can help us by pointing Firefox to about:crashes. This page lists all of the crash reports that have been generated on your computer. If the crash that you are experiencing isn’t on our list of top crashers, you can still help us to fix it: filing a bug that includes multiple crash report URLs from your about:crashes screen will help tremendously.

In Conclusion

If there is one idea that you can take away from this post (a TL;DR, if you will), it is this: Mozilla cannot fix 100% of the bugs that we do not know about.

Taking an active role in the Mozilla community by reporting your issues through the proper channels is the best way to ensure that your problems can be fixed.

EDIT: To be clear: What I am suggesting is that users who are enthusiastic enough to post a comment to Hacker News (for example) should also be savvy enough to be able to file a proper bug report. Please do not misconstrue this post as a demand that novice users start filing bugs.

EDIT August 15, 2014: Nick Nethercote just blogged about a tricky memory bug that couldn’t have been diagnosed without the help of a Redditor whose complaint we steered to Bugzilla.

Matt ThompsonWhat would a web literate school look like?

As we think about what’s next for Webmaker, we’re conducting interviews to better understand our audience and develop user personas. What challenges do teachers in the classroom face, for example? How can we help them spread web literacy? Here’s what Phil Macoun, an educator from Nanaimo, B.C., had to tell us.

Phil: a tech-savvy educator trying to help his school

  • Phil Macoun is the Technology Coordinator at Aspengrove School in Nanaimo
  • He’s thinking about how to implement a complete digital literacy curriculum for the entire school, from grades 1 to 12
  • He recently started pursuing a Masters in Educational Leadership. “Because real change is going to happen at a higher level. Technology alone isn’t enough — the technology needs to support the pedagogy.”

Notes from Phil’s blog

What would K to 12 digital literacy look like?

Phil’s been thinking a lot about what “digital literacy” might look like from kindergarten all the way to grade 12. As his school’s Technology Coordinator, he has the opportunity to implement a school-wide curriculum, influencing an entire staff of teachers and several hundred students.

He’s been surveying the landscape. Phil has researched various digital literacy offerings and approaches, including:

He’s familiar with Webmaker tools like Thimble, and has been following Webmaker’s Web Literacy Map.

“The whole maker movement thing is a big part of what I’m thinking about right now. [Mozilla's] web literacy map outlines things kids need to do, but there also need to be attitudes and approaches tied up in the learning. How to design and be creative.”

The hard part is implementation

The biggest challenge for Phil is how to help busy, time-strapped teachers get started teaching this stuff in their own classrooms. “In terms of implementation, this is where I get stuck,” Phil says. “[Webmaker] has got good ideas — but I don’t know how to scale them up for my school.”

“I can’t possibly do all this myself — I need other teachers to be responsible for implementing it. I need a framework.”

His best solution so far?

What has worked to help him solve this problem so far? The Common Sense Media “Digital Citizenship” curriculum. By sending his fellow teachers that one link, along with a bit of context and guidance, he was able to offer time-strapped colleagues something close to a turn-key solution. They loved it.

“It lowers the barrier to entry. They can quickly see the grade level and learning outcomes, download a lesson plan, get worksheets. There’s everything they need to get started.”

Phil likes that Common Sense Media also just published an e-book manual for teachers, and says that many other independent schools in BC are now adopting the Common Sense curriculum.

Parents want these skills for their kids

“I mostly get parents coming and saying: thank you for teaching my kids this stuff!” Phil says. “They like that I’m telling their kids how to search the Internet properly. They know that their kids are immersed in this online world, and they’re looking for help to manage it properly.”

How Phil explains digital literacy to parents

From exploring and building to connecting

Mozilla’s Web Literacy Map is based around exploring, building and connecting. Phil says that parents and colleagues intuitively grasp the value of “Exploring” and “Building” — but less so with “Connecting,” the piece he actually thinks is the most valuable.

“Trying to get people to understand that piece is much harder,” he says. “‘Exploring’ is easy — people want kids to be able to search the internet better. The ‘building’ piece is easy as well — kids programming video games, printing stuff on a 3D printer. Parents love that stuff. It’s harder to explain the connecting piece.”

“You want to get from ‘Help me manage my kids’ online life’ to ‘Help me teach my kids to leverage this tool to its full potential.’”

How could Webmaker’s curriculum offering improve?

We recently shipped a new series of pages that we think of as a “textbook for web literacy.” I invited Phil to take a look at the “Privacy” page from a teacher’s perspective.

“As a busy teacher, what I’m looking for is: what’s the stuff that’s relevant to me?
If I was a teacher who didn’t know a lot about this topic, I’m looking for: ‘What am I teaching? What are my learning outcomes? How am I going to do it?’”

“I look at this page and go: I don’t have time to figure this out right now. I had to scroll right down to the very bottom of the page to know that there was stuff here for teachers.”

“If I had a teacher portal, like the Common Sense Media stuff, it could show me what the different elements of the Web Literacy thing might look like in primary school, vs middle school, vs high school, etc. When it’s all kinda jumbled up, I don’t have time to pick out the good stuff.”

Badges as a more fluid way to recognize learning

“I’d love to use badges as a formative assessment tool in my classroom. A more fluid way students could celebrate their learning. Maybe I could find a way to loop badges into what my kids are already doing with Google Docs, or Scratch, or TinkerPad. That would be really cool.”

Cloud-based collaboration

Google Apps recently became Aspengrove school’s go-to digital platform. They moved the whole school over to it. Every student from grade 8 and up now has a Google Apps email address.

“All our students are doing their writing in Google Docs now.”

In a way, Phil’s school is using Google Docs the same way Mozilla uses etherpads — for immediate web-based collaboration.

“The first thing teachers and students do is open up a Google Doc and start putting all their ideas in one document. In many cases, teachers have been writing alongside the kids, so that students can get comments from the teacher as they go. And teachers are doing most of their classroom presentations in Google Docs as well.”

Some early conclusions and analysis

I found this interview hugely insightful. I’m going to think some more about analysis, early conclusions and next steps. But in the meantime: what do you think? Please share your thoughts as comments on this post.

Sriram RamasubramanianMultiple Text Layout

The basic unit for developing UI in Android is a View. But if we look closely, a View is a UI widget that provides user interaction. It comprises Drawables and text Layouts. We see drawables everywhere — right from the background of a View. TextView has compound drawables too. However, TextView has only one text layout. Is it possible to have more than one text layout in a View/TextView?

Multiple Text Layout

Let’s take an example. We have a simple ListView with each row having an image, text and some sub-text. Since TextView shows only one text Layout by default, we would need a LinearLayout with 2 or 3 views (2 TextViews among them) to achieve this layout. What if TextView could hold one more text layout? It’s just a private variable that can be created and drawn on the canvas. But even if TextView can hold and draw it, how would we let TextView’s original layout account for this extra layout?

If we look at TextView’s onMeasure() closely, the available width for the layout accounts for the space occupied by the compound drawables. If we make TextView account for a larger compound drawable space on the right, the layout will constrain itself more. Now that the space is carved out, we can draw the layout in that space.

    private Layout mSubTextLayout;

    @Override
    public int getCompoundPaddingRight() {
        // Assumption: the layout has only one line.
        return super.getCompoundPaddingRight() + (int) mSubTextLayout.getLineWidth(0);
    }

Now we need to create a layout for the sub-text and draw it. Ideally, it’s not good to create new objects inside onMeasure(). But if we are careful about when and how we create the layouts, we don’t have to worry about this restriction. And what different kinds of Layouts can we create? TextView allows creating a BoringLayout, a StaticLayout or a DynamicLayout. BoringLayout can be used if the text is only a single line. StaticLayout is for multi-line layouts that cannot be changed after creation. DynamicLayout is for editable text, like in an EditText.

    @Override
    public void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        int width = MeasureSpec.getSize(widthMeasureSpec);

        // Create a layout for the sub-text.
        // (mSubText holds the sub-text string; mPaint is its TextPaint.)
        mSubTextLayout = new StaticLayout(mSubText, mPaint, width,
                Layout.Alignment.ALIGN_NORMAL, 1.0f, 0.0f, true);

        // TextView doesn't know about mSubTextLayout.
        // It calculates the space using compound drawables' sizes.
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
    }

The mPaint used here has all the attributes for the sub-text — like text color, shadow, text-size, etc. This is what determines the size used for a text layout.

    @Override
    public void onDraw(Canvas canvas) {
        // Do the default draw.
        super.onDraw(canvas);

        // Calculate the place to show the sub-text
        // using the padding, available width, height and
        // the sub-text width and height.
        // Note: The 'right' padding to use here is 'super.getCompoundPaddingRight()'
        // as we have faked the actual value.
        final int x = getWidth() - super.getCompoundPaddingRight()
                - (int) mSubTextLayout.getLineWidth(0);
        final int y = (getHeight() - mSubTextLayout.getHeight()) / 2;

        // Draw the sub-text.
        canvas.save();
        canvas.translate(x, y);
        mSubTextLayout.draw(canvas);
        canvas.restore();
    }
But hey, can’t we just use a Spannable text? Well… what if the name is really long and runs into multiple lines or needs to be ellipsized?

With this, we use the same TextView to draw two layouts — and that has helped us remove two Views! Happy hacking! ;)

P.S: The icons are from:

Hub FiguièreI was at Guadec

Guadec 2014 Volunteers

I was at Guadec in Strasbourg. Thanks to all the volunteers who helped make this event possible! For those who don't know, Guadec is the annual Gnome Users And Developers European Conference. I hadn't attended since 2008 (such is life), but I reconnected with people I hadn't seen in a while, and met awesome people who have joined the project since. Attending this year made me regain a lot of motivation about why a Free Software desktop, and why Gnome, are really necessary. It also reminded me why at Mozilla I use Fedora Linux rather than MacOS X like most of my team mates: at least on Fedora I don't have code signing breaking existing apps, I have a real full-screen browser to use for presentations based on web technologies, and I don't risk third-party browsers one day being barred like they are on iOS. It is important to keep the alternatives alive. And Matthew Garrett gave us, during his keynote, good arguments on the importance of a Free Software desktop designed with users in mind.

I'll definitely try to figure out how I can make it to Göteborg, Sweden next year; this year was facilitated by having a work week in Paris just before Guadec. Maybe I'll even present something, as I have resumed working on my projects.

James LongBlog Rebuild: Build Systems & Cross-Compiling

This is an entry in a series about rebuilding my custom blog with react, CSP, and other modern tech. Read more in the blog rebuild series.

A few years ago I remember being surprised at how popular grunt was getting. Not because it wasn't great software, but because I didn't understand what problem it solved. If I needed to process a few things like CSS before deploying to production, make seemed to work just fine.

Back then I thought things like build steps for JavaScript were an unnecessary complexity. I couldn't have been more wrong. A build system adds some complexity, yes, but a good one like gulp or broccoli is simple enough, and the returns are enormous. A complex Makefile for a JavaScript project would be a mistake, but these build tools are great.

tl;dr I chose gulp as my build system and webpack as my client-side module bundler. My final setup is on github, specifically gulpfile.js and webpack.config.js.

A Practical Approach

I'm going to be as practical as possible during this rebuild. I'm going to investigate newer things like ES6 modules, but if the tools are too immature I will fall back to something like CommonJS. I want something that works now with little effort.

What I need:

  1. A common module format for the client and server. Node uses CommonJS, and browsers currently have no native module system.
  2. For client-side code, a way to compile modules to run in the browser.
  3. An extensible pipeline for hooking in compilation stages for both server and client JS. This lets me hook in various JS transformations that I need.
  4. A watcher that automatically triggers the necessary compilations when files change (and only re-compiles the necessary files)
  5. The ability to define a few basic build tasks for moving files around and running the app

There are lots of things involved in the above requirements: compilation strategies, module bundling, and build task management. I don't know yet which combination of projects will work out, so let's investigate various solutions.

The main motivation for a compilation pipeline is to compile ES6 features down to ES5. I don't want to hook in something big like Traceur because there are projects that compile out specific features better. For example, I want to use regenerator to compile out generators and then defs to compile out let. I've always enjoyed this post about ClojureScript's compilation pipeline, and I'm reminded of it when I think of this strategy of incrementally compiling an AST. Ideally, we would pass an AST around, but we'll see if the tools are good enough for that yet.

Of course, I'm a big fan of sweet.js so that will be the first compilation phase. I may compile out some ES6 features with the es6-macros project, but the reality is that the JS community has written mature ES6 transformations in the form of compilers, so it might make sense just to use them. I will still use macros for user-land syntax extensions, which I'll talk more about in future posts.

The Core Problem

I think the core problem is that the client and server are very different beasts. Node requires CommonJS and modules separated out into individual files. Browsers don't have modules and it's desirable to bundle everything together into a single JS file to deploy. To make things harder, everything should be sourcemapped.

The first question to ask is how a build system can help. Since we want to work with modules, we need support for N:M files at each build step. That means that given N files, a build step can produce M files. For example, given 1 file, a module plugin will return 10 files (all the dependencies), and then the next step could bundle them all together into 1 file.

This is important for watching and incremental builds. If a dependency changes, even if it's not listed directly in the files to watch, the build system should recompile. Additionally, it should only recompile the necessary changes, so it should cache each dependency, even if it's not explicitly listed in the original sources.

The second question to ask is what tools are out there for working with modules. The build system is the backbone, but we need plugins for actually doing things with modules. How well the build system supports N:M files affects how much the module loaders need to do.

Lastly, there's one more desirable feature. There are several transformations I want to do to my code (sweet.js, regenerator, defs). It would be far better to pass an AST through this process rather than passing strings. This means we probably don't want to hook up this whole pipeline through whatever build system we choose, but wrap it up into a single plugin.
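As a sketch of why AST passing matters, here's the shape such a single plugin could take. parse, print, and the passes below are placeholders standing in for real tools like sweet.js and regenerator, not their actual APIs:

```javascript
// Compose compiler passes as AST -> AST functions: parse once up front,
// print once at the end, instead of re-parsing strings between passes.
function makeCompiler(parse, print, passes) {
  return function compile(source) {
    var ast = passes.reduce(function(tree, pass) {
      return pass(tree);
    }, parse(source));
    return print(ast);
  };
}
```

With string passing, every stage pays for a full parse and print; with this shape, only the two ends do.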

Gulp + Webpack

Gulp is a build system built around streams. One thing I like is that it's very simple to use and define new tasks. (Note: I'm going to skip over grunt because its config syntax is really bad and I just don't like it.)

Gulp supports the N:M file builds in the form of stream events. A plugin can take a single file from a stream and output multiple files. If you add a caching layer with gulp-cache, and use the more advanced gulp-watch, you could effectively pass in one JS file and have it watch and rebuild all of its dependencies.

I'm not sure many people realize you can do this. The following plugin emits two files for every file that comes down the stream:

function explode() {
  return es.through(function(file) {
    // Emit an extra generated file alongside each input file.
    this.emit('data', new gutil.File({
      base: file.base,
      cwd: file.cwd,
      path: path.join(file.base, 'foo.js'),
      contents: new Buffer('boo')
    }));

    // Pass the original file through as well.
    this.emit('data', file);
  });
}

gulp.task("explode", function() {
  return gulp.src('src/**/*.js')
    .pipe(explode())
    .pipe(gulp.dest('build'));
});
Not very many projects use this to help with module bundling, though. There is one project, amd-optimize, that does basic dependency tracing for AMD modules. Still, the more sophisticated gulp-watch is needed if you want to watch new files from the stream (you could apply it after explode()); it is not built in. Generally, there is very little mature code that integrates a module bundler into gulp; you have to work at it. So this doesn't really solve our problem of compiling modules for the client side. Everyone just uses browserify or webpack.

Additionally, you really only care about your local dependencies, not ones pulled from npm. You don't need to run your code transformations on npm dependencies. So it's easy to give the native watch all of your modules by doing gulp.src('src/**/*.js'). Because of this, and the fact that server-side code doesn't require module bundling, gulp works well for transforming server-side code. This code transforms each file from src and generates files in the build folder with sourcemaps:

function makeNodeStream(src, withoutSourcemaps) {
  // Transformation pipeline: sweet.js -> regenerator -> header append,
  // with gulp-sourcemaps tracking the maps through each step.
  var stream = src.pipe(cache('src'))
      .pipe(sourcemaps.init())
      .pipe(sweetjs({
        readableNames: true,
        modules: ['es6-macros']
      }))
      .pipe(regenerator())
      .pipe(jsheader('var wrapGenerator = require("regenerator/runtime/dev").wrapGenerator;'));

  if(!withoutSourcemaps) {
    stream = stream.pipe(sourcemaps.write('.'));
  }
  return stream;
}

gulp.task("src", function(cb) {
  es.merge(
    makeNodeStream(gulp.src('src/**/*.js')).pipe(gulp.dest('build')),
    gulp.src(['src/**/*', '!src/**/*.js']).pipe(gulp.dest('build'))
  ).on('end', function() {
    nodemon.restart();
    cb();
  });
});
An additional complexity is that I have a shared folder that also needs to be transformed and output to a different directory. As far as I could tell, I couldn't combine that into a single gulp.src and gulp.dest, so I created makeNodeStream to run the transformation on both. I also copy anything that's not a JS file from src to the build folder. Lastly, when it's finished it restarts the node process using nodemon.

My transformation pipeline here goes like this: sweet.js → regenerator → header append. I will likely add more steps in the future. This is passing around strings, which I talked about before, when we really should pass around ASTs. One thing I could do is use esnext instead and integrate sweet.js with it, and then do a single pipe to it. It would probably be much faster.

It takes about 2 seconds to compile my whole src directory, which is a bunch of code. But who cares? You don't need to recompile everything when just one file changes! Note that I use the cache('src') step first from gulp-cached; this will cache all files coming through the stream, and only re-emit files that have changed. That means we only transform new files, and it only takes a few hundred ms now.


What about client-side code? As mentioned before, even though gulp could be used as a module bundler, nobody does that since mature projects like browserify and webpack exist. I chose to use webpack since I like the API and documentation better (and it has more features).

This basically requires me to use CommonJS modules for the browser. This route is well-established in the JS community so I benefit from mature tools. Eventually I'd like to use ES6 modules, but the ecosystem isn't quite there yet. I'm being conservative here so that I don't spend too much time on my tools.

Now that I'm using webpack, all of my problems for client-side development are solved. It has everything, from code splitting to hot module replacement. Here is my webpack config:

var config = {
  cache: true,
  entry: './static/js/main.js',
  output: {
    filename: './static/js/bundle.js'
  },
  resolve: {
    extensions: ['', '.js', '.sjs'],
    fallback: __dirname
  },
  module: {
    loaders: [
      {test: /\.js$/,
       exclude: [/static\/js\/lib\/.*\.js$/],
       loader: 'regenerator!sweetjs?modules[]=es6-macros'},
      {test: /\.less$/, loader: "style!css!less"},
      {test: /\.css$/, loader: "style!css"}
    ]
  }
};
Webpack is explicitly a module bundler, so all it needs is just one file and it will walk the dependencies. Everything will be bundled together into a single file bundle.js. This happens by default, so you can see why this doesn't work for server-side code where we just need a 1:1 file mapping.
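That dependency walk is easy to picture with a toy version; the module map below stands in for parsing require() calls out of real source files:

```javascript
// Toy bundler walk: collect every module reachable from the entry
// exactly once, in the order a bundler might concatenate them.
function collectModules(graph, entry) {
  var ordered = [];
  (function walk(name) {
    if (ordered.indexOf(name) !== -1) return; // already visited
    ordered.push(name);
    (graph[name] || []).forEach(walk);
  })(entry);
  return ordered;
}
```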

This uses a loader on JS files to run them through sweet.js and regenerator. Again, I really should look into esnext so that I don't keep re-parsing the code.

It also uses some really cool loaders to deal with stylesheets. less-loader compiles Less down to CSS. css-loader is an awesome loader that converts all @import and url statements to require so that everything is resolved the same way, and it lets you apply loaders to the resources being loaded, allowing things like inlining the url content straight into the stylesheet. Having everything go through the same mechanism (and able to pull from npm dependencies) is extremely liberating.

To top it all off, style-loader automatically adds a style tag to the page when the CSS file is require'd. It also inlines all the CSS into your JavaScript bundle, but you can also make it reference an external CSS file. Either way, all you have to do is require('css/main.css') in your JavaScript and it just works.

There are a few other things I do with gulp and webpack, mostly to get integration with a few modules pulled down from npm (like React) working. I also have a run task that starts my app and uses nodemon to track it so it can be restarted whenever a change happens.

View my final setup on github.

Broccoli + ES6 modules

Broccoli is a rather new build tool that operates on tree structures, so it gets good incremental rebuilds and watches for free. See the announcement blog post for more details.

I'm not sure if broccoli competes more with gulp or webpack. It sits somewhere in the middle. It doesn't have any concept of tasks, so I can't make a run task that restarts my server on changes. But it's also not nearly as specific as webpack, and doesn't dictate anything specific about modules or how things are bundled.

I think broccoli makes it a lot easier to write something like webpack, and that's the idea. Basically, in broccoli plugins are always passing around whole trees of files, and a plugin can easily expand a tree into a much bigger tree if needed. This makes it easy to expand dependencies but still leverage the build system to handle them. So watching for changes in dependencies works great, and incremental builds are really fast because it can easily figure out what to do. Webpack has to figure all of this stuff out itself.

I like the idea of broccoli, and because working with modules is easy people are doing a lot of great work to get a workflow for compiling ES6 modules. This plugin integrates es6-module-transpiler with broccoli and does all the dependency stuff.

The thing broccoli could solve for me is not only using ES6 modules, but also to unify the JS transformation between server-side and client-side. Using gulp and webpack, I have two completely separate processes.

This was my first Brocfile.js to see how it would work out:

var pickFiles = require('broccoli-static-compiler');
var sweetjs = require('broccoli-sweetjs');
var transpileES6 = require('broccoli-es6-module-transpiler');

var src = pickFiles('src', {
  srcDir: '/',
  destDir: '/'
});

src = sweetjs(src, {
  modules: ['es6-macros']
});

src = transpileES6(src, { type: 'cjs' });
module.exports = src;

Unfortunately, I immediately ran into a bug and it wouldn't compile my code. Somehow I was using an older version that didn't work with nested yields (I guess a newer version needs to be pushed to npm). These kinds of bugs can easily be fixed.

I also ran into a bigger issue though: that project does not have a good story for integration with npm dependencies yet (more discussion here). With webpack, I could just require dependencies and it would look in node_modules, which worked awesomely. I don't know why we can't do something similar with ES6 modules.

There was also another big issue in general with broccoli: sourcemaps. The sourcemap story for broccoli is very vague (es6-module-transpiler supports them just fine, but I don't know how to expand with sweet.js, pass it the result and sourcemaps, and make it combine them). The standard project broccoli-filter, which is supposed to be used by plugins that simply map files 1:1, states right in the README that it does not support sourcemaps. That is insane to me, and I can't think about using broccoli until sourcemaps are deeply integrated through and through. Also see this discussion.

In gulp, it's really easy with the awesome gulp-sourcemaps project. You just hook into the stream with sourcemaps.init(), and later write the sourcemaps out to a directory with sourcemaps.write('.').


Plugins have a standard method of applying sourcemaps. The sourcemap is attached to the File instances that are passed through the stream, and combined using vinyl-sourcemaps-apply. It looks like this:

var applySourceMap = require('vinyl-sourcemaps-apply');
// ...
if(myGeneratedSourceMap) {
  applySourceMap(file, myGeneratedSourceMap);
}
That incrementally combines sourcemaps as they are applied through the streams. It has worked out really well for me.

Even without all these problems, the general story for browser-side module bundling with broccoli isn't nearly as strong as with browserify or webpack, which have tons of features specific to browser modules. So until we get a solid build system whose plugins implement most of the features of a module bundler, using gulp/broccoli + browserify/webpack works pretty darn well.

Most likely, I will switch my project to ES6 modules when I can find a good cross-compiler that works well with CommonJS and my current build system.

I could use broccoli and webpack, but at this point I'm just going to stick with gulp. It's easy to use and works really well with server-side transformation and sourcemaps. As for broccoli, I understand the design and I like it, but it does make plugin development very complicated and I'm not entirely sold on it, especially when you can do N:M compilations with gulp. Lastly, it uses temporary files so gulp is potentially faster with streams.

Stream of Thought EOF

There are several other build systems out there and a million ways to combine them. I can't possibly cover all of them, but I hope this gave some insight into my process for researching. I have something that works well, and the only thing I'll improve in the future is using ES6 modules instead of CJS.

View the full repo to see all the glorious code. Specifically, check out the full gulpfile.js and webpack.config.js. What's neat about this set up is I can run webpack from the CLI like normal, but it's also defined as a task so gulp webpack will work and it can be used as a task dependency (for tasks like gulp all). I can switch between the systems easily.

I'm sure I have made some errors in this post, as it was mostly stream of thought as I was doing my research. If something is completely off, let me know.

Daniel StenbergI’m with Firefox OS!


I have received a Firefox OS tablet as part of a development program. My plan is to use this device to try out stuff I work on and see how it behaves on Firefox OS “for real,” instead of just in emulators or on other systems. While Firefox OS is a product of my employer Mozilla, I personally don’t work much with Firefox OS specifically. I work on networking in general for Firefox, and large chunks of the networking stack are used both in the ordinary Firefox browser on desktops and in Firefox OS. I hope to polish and improve networking on Firefox OS too over time.

Firefox OS tablet


The primary development device for Firefox OS is right now apparently the Flame phone, and I have one of these too now in my possession. I took a few photos when I unpacked it and crammed them into the same image, click it for higher res:

Flame - Firefox OS phone

A brief explanation of Firefox OS

Firefox OS uses an Android kernel (including drivers etc.) and a bionic libc – simply the libc that Android uses. Linux-wise, and slightly simplified, it runs a single application full-screen: Firefox, which can then run individual Firefox apps that appear as apps on the phone. This means that the underlying fundamentals are shared with Android, while the layers above that are Firefox and then a world of HTML and JavaScript. Thus most of the network stack used for Firefox – the http, ftp, dns, cookies and so forth that I work with – is shared between Firefox for desktop, Firefox for Android and Firefox OS.

Firefox OS is made with a small footprint to allow cheaper smartphones than Android itself can run on. Hence it is targeted at developing nations and continents.

Both my devices came with Firefox OS version 1.3 pre-installed.

The phone

The specs: a Qualcomm Snapdragon 1.2GHz dual-core processor, a 4.5-inch 854×480 pixel screen, a five-megapixel rear camera with auto-focus and flash, a two-megapixel front-facing camera, dual-SIM 3G, 8GB of onboard memory with a microSD slot, and a 1800mAh battery.

The Flame phone should be snappy enough, although at times it seems to take a moment too long to populate a newly shown screen with icons etc. The screen surface is somehow not as smooth as on my Nexus devices (we have the Nexus 4, 5, 7 and 10 in the house), leaving me with a constant feeling that the screen isn’t clean.

Its dual-SIM support seems ideal for traveling: being able to use my home SIM for incoming calls but a local SIM for data and outgoing calls. I’ve never had a phone featuring that before. I’ve purchased a prepaid SIM card to use with this phone as my secondary device.

Some Good

I like the feel of the tablet. It feels like a solid and sturdy 10″ tablet, just like it should. I think the design language of Firefox OS for a newbie such as myself is pleasing and good-looking. The quad-core 1GHz thing is certainly fast enough CPU-wise to eat most of what you can throw at it.

These are really good devices to do web browsing on as the browser is a highly capable and fast browser.

Mapping: while of course there’s Google maps app, using the openstreetmap map is great on the device and Google maps in the browser is also a perfectly decent way to view maps. Using openstreetmap also of course has the added bonus that it feels great to see your own edits in your own neck of the woods!

I really appreciate that Mozilla pushes for new, more and better standardized APIs to enable all of this to get done in web applications. To me, this is one of the major benefits with Firefox OS. It benefits all of us who use the web.

Some Bad

Firefox OS feels highly US-centric (which greatly surprised me, seeing the primary markets for Firefox OS are certainly not in the US). As a Swede, I of course want my calendar to show Monday as the first day of the week. No can do. I want my digital clock to show me the time using 24 hour format (the am/pm scheme only confuses me). No can do. Tiny teeny details in the grand scheme of things, yes, but annoying. Possibly I’m just stupid and didn’t find how to switch these settings, but I did look for them on both my devices.

The actual Firefox OS system feels like a scaled-down Android where all apps are simpler and less fancy than Android. There’s a Facebook “app” for it that shows Facebook looking much crappier than it usually does in a browser or in the Android app – although on the phone it looked much better than on the tablet for some reason that I don’t understand.

I managed to get the device to sync my contacts from Google (even with my Google 2-factor auth activated), but trying to sync my Facebook contacts just gave me a very strange error window in spite of repeated attempts. Then again, that worked on my phone!

I really miss a proper back button! Without it, we end up in this handicapped iPhone-like world where each app has to provide a back button in its own UI, or I have to hit the home button – which doesn’t just go back one step.

The tablet supports a gesture (pull up from the bottom of the screen) to get to the home screen, while the phone instead has a dedicated home button which, if pressed for a long time, brings up cards with all currently running apps. I’m not even sure how to do that latter operation on the tablet, as it doesn’t have a home button.

The gmail web interface and experience is not very good on either of the devices.

Building Firefox OS

I’ve only just started this venture and dipped my toes in the water. All the code is there in the open and you build it all with open tools. I might get back to this topic later if I get the urge to ventilate something from it… :-) I didn’t find any proper device-specific setup for the tablet, but maybe I just don’t know its code word; I’ve only given it a quick glance so far. I’ll do my first builds and installs for the phone. Any day now!


My seven year old son immediately found at least one game on my dev phone (he actually found the market and downloaded it all by himself the first time he tried the device) that he really likes and now he wants to borrow this from time to time to play that game – in competition with the android phones and tablets we have here already. A pretty good sign I’d say.

Firefox OS is already a complete and competent phone operating system and app ecosystem. If you’re not coming from Android or iPhone, it is a step up from everything else. If you do come from Android or iPhone, I think you have to accept that this is meant for the lower-end spectrum of smartphones.

I think the smart-phone world can use more competition and Firefox OS brings exactly that.


Matt BrubeckLet's build a browser engine! Part 3: CSS

This is the third in a series of articles on building a toy browser rendering engine. Want to build your own? Start at the beginning to learn more:

This article introduces code for reading Cascading Style Sheets (CSS). As usual, I won’t try to cover everything in the spec. Instead, I tried to implement just enough to illustrate some concepts and produce input for later stages in the rendering pipeline.

Anatomy of a Stylesheet

Here’s an example of CSS source code:

h1, h2, h3 { margin: auto; color: #cc0000; }
div.note { margin-bottom: 20px; padding: 10px; }
#answer { display: none; }

Next I’ll walk through the css module from my toy browser engine, robinson. The code is written in Rust, though the concepts should translate pretty easily into other programming languages. Reading the previous articles first might help you understand some of the code below.

A CSS stylesheet is a series of rules. (In the example stylesheet above, each line contains one rule.)

struct Stylesheet {
    rules: Vec<Rule>,
}

A rule includes one or more selectors separated by commas, followed by a series of declarations enclosed in braces.

struct Rule {
    selectors: Vec<Selector>,
    declarations: Vec<Declaration>,
}

A selector can be a simple selector, or it can be a chain of selectors joined by combinators. Robinson supports only simple selectors for now.

Note: Confusingly, the newer Selectors Level 3 standard uses the same terms to mean slightly different things. In this article I’ll mostly refer to CSS2.1. Although outdated, it’s a useful starting point because it’s smaller and more self-contained than CSS3 (which is split into myriad specs that reference both each other and CSS2.1).

In robinson, a simple selector can include a tag name, an ID prefixed by '#', any number of class names prefixed by '.', or some combination of the above. If the tag name is empty or '*' then it is a “universal selector” that can match any tag.

There are many other types of selector (especially in CSS3), but this will do for now.

enum Selector {
    Simple(SimpleSelector),
}

struct SimpleSelector {
    tag_name: Option<String>,
    id: Option<String>,
    class: Vec<String>,
}

A declaration is just a name/value pair, separated by a colon and ending with a semicolon. For example, "margin: auto;" is a declaration.

struct Declaration {
    name: String,
    value: Value,
}

My toy engine supports only a handful of CSS’s many value types.

enum Value {
    Color(u8, u8, u8, u8), // RGBA
    Length(f32, Unit),
    // insert more values here
}

enum Unit { Px, /* insert more units here */ }

All other CSS syntax is unsupported, including @-rules, comments, and any selectors/values/units not mentioned above.
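To make one of those value types concrete, here is a small sketch of how a hex color literal like `#cc0000` from the example stylesheet maps onto the RGBA tuple that `Color` carries. The helper name `parse_hex_color` is my own illustration, not robinson’s API:

```rust
// Illustrative helper (not part of robinson): turn a `#rrggbb` literal
// into the (r, g, b, a) components a Color value would carry.
fn parse_hex_color(s: &str) -> Option<(u8, u8, u8, u8)> {
    let hex = s.strip_prefix('#')?;
    if hex.len() != 6 {
        return None;
    }
    // Parse one two-digit hex byte starting at offset `i`.
    let byte = |i: usize| u8::from_str_radix(&hex[i..i + 2], 16).ok();
    Some((byte(0)?, byte(2)?, byte(4)?, 255)) // fully opaque alpha
}

fn main() {
    // `#cc0000` from the example stylesheet above:
    assert_eq!(parse_hex_color("#cc0000"), Some((204, 0, 0, 255)));
    assert_eq!(parse_hex_color("oops"), None);
}
```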


Parsing

CSS has a regular grammar, making it easier to parse correctly than its quirky cousin HTML. When a standards-compliant CSS parser encounters a parse error, it discards the unrecognized part of the stylesheet but still processes the remaining portions. This is useful because it allows stylesheets to include new syntax but still produce well-defined output in older browsers.
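As a self-contained sketch of that “discard and continue” recovery (my own illustration, not robinson’s code): on a parse error inside a rule, a recovering parser can skip ahead past the rule’s closing brace and resume with the next rule.

```rust
// Skip from `pos` to just past the closing brace of the current rule,
// so parsing can resume with whatever follows. Returns the new offset.
fn skip_bad_rule(input: &str, pos: usize) -> usize {
    let bytes = input.as_bytes();
    let mut depth = 0usize;
    let mut i = pos;
    while i < bytes.len() {
        match bytes[i] {
            b'{' => depth += 1,
            b'}' => {
                if depth <= 1 {
                    return i + 1; // consume the closing brace too
                }
                depth -= 1;
            }
            _ => {}
        }
        i += 1;
    }
    bytes.len() // hit end of input while recovering
}

fn main() {
    let css = "h1 { color: } p { margin: auto; }";
    // Pretend the first rule failed to parse at offset 0:
    let resume = skip_bad_rule(css, 0);
    assert_eq!(css[resume..].trim_start(), "p { margin: auto; }");
}
```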

Robinson uses a very simplistic (and totally not standards-compliant) parser, built the same way as the HTML parser from Part 2. Rather than go through the whole thing line-by-line again, I’ll just paste in a few snippets. For example, here is the code for parsing a single selector:

    /// Parse one simple selector, e.g.: `type#id.class1.class2.class3`
    fn parse_simple_selector(&mut self) -> SimpleSelector {
        let mut result = SimpleSelector { tag_name: None, id: None, class: Vec::new() };
        while !self.eof() {
            match self.next_char() {
                '#' => {
                    self.consume_char();
           = Some(self.parse_identifier());
                }
                '.' => {
                    self.consume_char();
                    result.class.push(self.parse_identifier());
                }
                '*' => {
                    // universal selector
                    self.consume_char();
                }
                c if valid_identifier_char(c) => {
                    result.tag_name = Some(self.parse_identifier());
                }
                _ => break
            }
        }
        return result;
    }

Note the lack of error checking. Some malformed input like ### or *foo* will parse successfully and produce weird results. A real CSS parser would discard these invalid selectors.


Specificity

Specificity is one of the ways a rendering engine decides which style overrides the other in a conflict. If a stylesheet contains two rules that match an element, the rule with the matching selector of higher specificity can override values from the one with lower specificity.

The specificity of a selector is based on its components. An ID selector is more specific than a class selector, which is more specific than a tag selector. Within each of these “levels,” more selectors beats fewer.

pub type Specificity = (uint, uint, uint);

impl Selector {
    pub fn specificity(&self) -> Specificity {
        let Simple(ref simple) = *self;
        let a =;
        let b = simple.class.len();
        let c = simple.tag_name.iter().len();
        (a, b, c)
    }
}

(If we supported chained selectors, we could calculate the specificity of a chain just by adding up the specificities of its parts.)
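A hypothetical sketch of that chained calculation (robinson doesn’t implement selector chains; I use usize where the article’s 2014-era Rust writes uint):

```rust
// If chained selectors were supported, the chain's specificity would be
// the component-wise sum of its parts' (a, b, c) triples: id count,
// class count, and tag-name count.
type Specificity = (usize, usize, usize);

fn chain_specificity(parts: &[Specificity]) -> Specificity {
    parts.iter().fold((0, 0, 0), |(a, b, c), &(pa, pb, pc)| {
        (a + pa, b + pb, c + pc)
    })
}

fn main() {
    // e.g. `div.note` (one class, one tag) chained with `#answer` (one id):
    assert_eq!(chain_specificity(&[(0, 1, 1), (1, 0, 0)]), (1, 1, 1));
}
```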

The selectors for each rule are stored in a sorted vector, most-specific first. This will be important in matching, which I’ll cover in the next article.

    /// Parse a rule set: `<selectors> { <declarations> }`.
    fn parse_rule(&mut self) -> Rule {
        Rule {
            selectors: self.parse_selectors(),
            declarations: self.parse_declarations(),
        }
    }

    /// Parse a comma-separated list of selectors.
    fn parse_selectors(&mut self) -> Vec<Selector> {
        let mut selectors = Vec::new();
        loop {
            selectors.push(Simple(self.parse_simple_selector()));
            self.consume_whitespace();
            match self.next_char() {
                ',' => { self.consume_char(); self.consume_whitespace(); }
                '{' => break, // start of declarations
                c   => fail!("Unexpected character {} in selector list", c)
            }
        }
        // Return selectors with highest specificity first, for use in matching.
        selectors.sort_by(|a,b| b.specificity().cmp(&a.specificity()));
        return selectors;
    }

The rest of the CSS parser is fairly straightforward. You can read the whole thing on GitHub. And if you didn’t already do it for Part 2, this would be a great time to try out a parser generator. My hand-rolled parser gets the job done for simple example files, but it has a lot of hacky bits and will fail badly if you violate its assumptions. Eventually I hope to replace it with one built on rust-peg or similar.


Exercises

As before, you should decide which of these exercises you want to do, and skip the rest:

  1. Implement your own simplified CSS parser and specificity calculation.

  2. Extend robinson’s CSS parser to support more values, or one or more selector combinators.

  3. Extend the CSS parser to discard any declaration that contains a parse error, and follow the error handling rules to resume parsing after the end of the declaration.

  4. Make the HTML parser pass the contents of any <style> nodes to the CSS parser, and return a Document object that includes a list of Stylesheets in addition to the DOM tree.


Just like in Part 2, you can skip parsing by hard-coding CSS data structures directly into your program, or by writing them in an alternate format like JSON that you already have a parser for.

To be continued…

The next article will introduce the style module. This is where everything starts to come together, with selector matching to apply CSS styles to DOM nodes.

The pace of this series might slow down soon, since I’ll be busy later this month and I haven’t even written the code for some of the upcoming articles. I’ll keep them coming as fast as I can!

Mozilla Release Management TeamFirefox 32 beta5 to beta6

  • 26 changesets
  • 33 files changed
  • 503 insertions
  • 244 deletions



List of changesets:

Matthew GreganBug 992238 - Disable test_init_destroy_multiple_contexts_and_streams on Windows 7. r=padenot, a=test-only - 980f2d449ccb
Martijn WargersBug 928678 - Hopeful fix for intermittent failure. r=jmaher, a=test-only - 04f540c59457
Chris PearceBug 1046003 - Ensure duplicate frames after a seek have an Image. r=kinetik, a=sledru - 91f078c385f8
Mike HommeyBug 1036286 - Delay registration of the faulty.lib signal handler until when it's necessary. r=nfroyd, a=sledru - ec230387fad2
Mike HommeyBug 1036286 - Ensure faulty.lib diverts libc's sigaction instead of a LD_PRELOADed one. r=nfroyd, a=sledru - 1f96d584763a
Bobby HolleyBug 1042436 - Always enter the wrapper's compartment before invoking SilentFailure. r=gabor, a=sledru - 6037db66624b
Bobby HolleyBug 1042436 - Lowercase the "reason" param to SilentFailure. a=sledru - 4cb373d9ce33
Bobby HolleyBug 1042436 - Warn once to the console when XrayWrappers deny access to an object. r=gabor, sr=smaug, a=sledru - 1cf7b5810eb5
Ryan VanderMeulenBacked out changesets 1cf7b5810eb5, 4cb373d9ce33, and 6037db66624b (Bug 1042436) for bustage. - 53c7aceaf1a8
Bobby HolleyBug 1042436 - Always enter the wrapper's compartment before invoking SilentFailure. r=gabor - 08c5b02a125e
Bobby HolleyBug 1042436 - Lowercase the "reason" param to SilentFailure. r=me - 84bf42618416
Bobby HolleyBug 1042436 - Warn once to the console when XrayWrappers deny access to an object. r=gabor,sr=smaug a=sylvestre - c405f720f587
Georg FritzscheBug 1048793 - Skip CertUtils checks as we changed providers and PublicKeyPinningService already provides them. r=mmc, a=sledru - 7aa9781a3758
Marco BonardoBug 997970 - Add search suggest to Amazon for en-US. r=ttaubert, a=sledru - 3f7ae08da07f
Paul AdenotBug 1047831. r=ehsan, a=sledru - 0c488a1d2142
Richard NewmanBug 1050690. r=wesj, a=sledru - 6b9281a057b7
Tim TaubertBug 1048133 - Check key algorithms before using them for encryption/signatures r=rbarnes a=abillings - 6a7e6708ded1
Cosmin MalutanBug 1032255 - TPS has to exit with code != 0 in case of failures. r=aeftimie, dhunt a=testonly DONTBUILD - 1b581df81c9b
Mike HommeyBug 1048064 - Fix libstagefright build on GNU/kFreeBSD. r=ajones, a=NPOTB - 2e0eeb786e7d
Karl TomlinsonBug 995075 - Include update_filter() changes from upstream speexdsp. r=padenot, a=sledru - 92c3567e5a0c
Patrick McManusBug 1045640 - disable tls proxying Bug 378637 on gecko 32 r=backout a=lmandel r=hurley - c9dec07b4148
Ryan VanderMeulenBacked out changeset 551f71d3138f (Bug 1038243) - b8d426a326f5
Richard NewmanBug 936851 - Part 1: Remove VideoPlayer. r=mfinkle, a=lmandel - e9fc6b3efaf3
Richard NewmanBug 936851 - Part 2: Delegate to Fennec itself for YouTube video instead of VideoPlayer. r=mfinkle, a=lmandel - 086068fe1b98
Wes JohnstonBug 1046500 - Fix mediagroup parsing in feed parser. r=mak, a=sledru - f76498a1bcbd
Lawrence MandelPost Beta 4: disable EARLY_BETA_OR_EARLIER a=me - 8bf5700cb82e

Robert NymanNeurodiversity

A couple of months ago I got invited by Maja Brisvall, who works at Stockholm Resilience Centre and Shift, for two sessions with Kathryn Myronuk and Brad Templeton from Singularity University.

They were very interesting, and delivered by two highly intelligent people with great insight. One thing that stood out to me was something that Kathryn said when discussing group dynamics, team work and learning (and I’m probably paraphrasing like hell now):

You got to respect the neurodiversity of the group. Some people will be loud and seem to be the most ambitious, where others can be quiet for 10 minutes and then ask just one question, but wow, what a question!

And this really struck me: like most people, I have been in so many different contexts for learning – in school, at work, at conferences and workshops, and more – and one of the most obvious faults, or causes of failure, has been forcing people to have an opinion or conclusion right away. If you don’t ask a question immediately, you’re not interested or have nothing to contribute.

Or, in the name of inclusivity, doing a round of the entire room where everyone immediately has to have and express an opinion, take or query. That’s not how it works, and not how it has to work. Humans are so diverse in how they analyze facts and impressions, and in how they gain perspective and clarity.

And I think this applies not only to learning but to online conversations as well – Twitter, Facebook, YouTube comments (*shivers*) and more. One of the few things we know for sure is that people are very different, and we need to keep that in mind in our communication. Make sure that everyone gets a chance to learn, grasp and take it all in. To express themselves.

So next time you believe someone is a slow learner, doesn’t get it, is just provoking or similar, take one step back. Give them a chance, engage in constructive conversation and give them the room they need.

Respect the neurodiversity.

Henrik SkupinFirefox Automation report – week 27/28 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 27 and 28.


Henrik continued his work on our QA-specific PuppetAgain setup. One of the blockers for us was bug 997721, full proxy support on OS X and Linux. By week 27 we were finally able to get this finished. Henrik also got the manifest for installing Java done.

On TPS we also made progress. Cosmin got the Pulse listener script implemented for the Coversheet CI, which triggers TPS tests whenever new nightly builds of Firefox become available. A couple of fixes for mozrunner were also necessary, given that the 6.0 release caused a couple of regressions for TPS. As a result we agreed to pin our Python package dependencies to specific versions of mozrunner and related packages.

One big thing for our team is also assisting people in deciding whether automated tests are possible for certain Firefox features. The question mainly comes up for tests that cannot be implemented in any of our developer-driven test frameworks due to limitations on buildbot (no network access allowed, no restarts of the application, and others…). To be more successful in the future, Henrik started a discussion on the dev-quality mailing list. We hope to get the proposed process established for bug verification.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 27 and week 28.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 27 and week 28.

Kat BraybrookeCall for Artists, Technologists + Curators: Mozfest 2014, London

Exciting news today — we’ve just launched a Public Call to bring artists, curators and creatives together in London for the first-ever “Art & Culture of the Web” track at the Mozilla Festival this October — and you are invited!

Each year, MozFest is built around a set of core tracks featuring topics that have the greatest potential for innovation and empowerment with the Web, bringing in 1,000+ innovators to the shores of the Thames to build and make together. And this year, my co-conspirator Paula le Dieu and I will be bringing together a brand new topic, “Art & Culture of the Web”, to explore the ways millions of users around the world are making the transition from consumers to creators, merging art, technology and networks to build new forms with unprecedented results.

We are especially interested in finding out what the combination of these experiments in theory, code and creativity — a practice we refer to as “networked art” — might mean for cultural heritage organizations, artists, technologists and curators. And how might creative works inform our understandings of the open web’s key challenges, from privacy to ownership, and from identity to governance?

From the launch of OPEN STUDIO, a groundbreaking site where participatory artworks will be created live, to a youth-led gallery curated by Hive Learning Networks, to a global GLAM (Gallery, Library, Archive and Museum) skill-share, this track is already shaping up to be the most creative experiment Mozfest has ever seen. And as you may have seen from my experience wrangling the Build and Teach the Web track at last year’s Mozfest, it will also be a massive — and very inspiring — party.

Come and make with us by submitting an idea to the Call for Artists, Technologists + Curators by August 22, 2014. We welcome your craziest and most audacious dreams and brainstorms. In return, we’ll leave you with hands covered in electronic paint. Already excited for the creative moments yet to come.

Matej NovakAngry and sad…

Depression has taken another person from the world. That he was funny and talented doesn’t make it any more or less upsetting. It just means more people know about it — but it happens to others every day.

I’ve never been suicidal, but I have struggled with depression. I know the hopelessness. I know how trapped and alone it can make you feel. I hope it never takes me to darker places than it already has.

All I can do is think about my son. How do I protect him? How do I keep him from feeling the way I have at my lowest? How do I keep him from feeling the way I feel today?

The deeply unsatisfying answer is, of course, that I don’t. Not for sure, anyway. I can only do my best — love him, teach him and pick him up when he falls down. And hope that it’s enough.

Ben HearsumUpcoming changes to Mac package layout, signing

Apple recently announced changes to how OS X applications must be packaged and signed in order for them to function correctly on OS X 10.9.5 and 10.10. The tl;dr version of this is “only mach-O binaries may live in .app/Contents/MacOS, and signing must be done on 10.9 or later”. Without any changes, future versions of Firefox will cease to function out-of-the-box on OS X 10.9.5 and 10.10. We do not have a release date for either of these OS X versions yet.

Changes required:
* Move all non-mach-O files out of .app/Contents/MacOS. Most of these will move to .app/Contents/Resources, but files that could legitimately change at runtime (e.g. everything in defaults/) will move to .app/MozResources (which can be modified without breaking the signature). This work is in progress, but no patches are ready yet.
* Add new features to the client-side update code to allow partner repacks to continue to work.
* Create and use 10.9 signing servers for these new-style apps. We still need to use our existing 10.6 signing servers for any builds without these changes.
* Update signing server code to support new v2 signatures.

We are intending to ship the required changes with Gecko 34, which ships on November 25th, 2014. The changes required are very invasive, and we don’t feel that they can be safely backported to any earlier version quickly enough without major risk of regressions. We are still looking at whether or not we’ll backport to ESR 31. To this end, we’ve asked that Apple whitelist Firefox and Thunderbird versions that will not have the necessary changes in them. We’re still working with them to confirm whether or not this can happen.

This has been cross posted a few places – please send all follow-ups to the newsgroup.

Gervase MarkhamAbsence

I will be away and without email from Thu 14th August to Friday 22nd August, and then mostly away from email for the following week as well (until Friday 29th August).

Benjamin KerensaUbuConLA: Firefox OS on show in Cartagena

If you are attending UbuConLA I would strongly encourage you to check out the talks on Firefox OS and Webmaker. In addition to the talks, there will also be a Firefox OS workshop where attendees can go more hands on.

When the organizers of UbuConLA reached out to me several months ago, I knew we really had to have a Mozilla presence at this event so that Ubuntu Users who are already using Firefox as their browser of choice could learn about other initiatives like Firefox OS and Webmaker.

People in Latin America have always had a very strong ethos in terms of their support and use of Free Software, and we have an amazingly vibrant community there in Colombia.

So if you will be anywhere near Universidad Tecnológica De Bolívar in Cartagena, Colombia, please go see the talks and learn why Firefox OS is the mobile platform that makes the open web a first-class citizen.

Learn how you can build apps and test them in Firefox on Ubuntu! A big thanks to Guillermo Movia for helping us get some speakers lined up here! I really look forward to seeing some awesome Firefox OS apps getting published as a result of our presence at UbuConLA as I am sure the developers will love what Firefox OS has to offer.


Feliz Conferencia!

Fredy RougeOpenBadges at Duolingo test center

Duolingo is starting a new certification program:

I think it would be a good idea if anyone at MoFo (paid staff) wrote to or called these people to propose integrating OpenBadges into their certification program.

I don’t really have contacts at MoFo/OpenBadges, so if you think this is a good idea and you know anyone at OpenBadges, please forward this idea to them.

Filed under: Status Tagged: duolingo, english, Mozilla, OpenBadgets

Byron Joneshappy bmo push day!

the following changes have been pushed to

  • [1049929] Product Support / Corp Support to Business Support
  • [1033897] Firefox OS MCTS Waiver Request Submission Form
  • [1041964] Indicate that a comment is required when selecting a value which has an auto-comment configured
  • [498890] Bugzilla::User::Setting doesn’t need to sort DB results
  • [993926] Bugzilla::User::Setting::get_all_settings() should use memcached
  • [1048053] convert bug 651803 dupes to INVALID bugs in “Invalid Bugs” product

discuss these changes on

Filed under: bmo, mozilla

Hannah KaneMaker Party Engagement Week 4

We’re almost at the halfway point!

Here’s some fodder for this week’s Peace Room meetings.

tl;dr potential topics of discussion:

  • big increase in user accounts this week caused by change to snippet strategy
    • From Adam: We’re directing all snippet traffic straight to while we develop a tailored landing page experience with built-in account creation. This page is really converting well for an audience as broad and cold as the snippet, and I believe we can increase this rate further with bespoke pages and optimization.

      Fun fact: this approach is generating a typical month’s worth of new webmaker users every three days.

  • what do we want from promotional partners?
  • what are we doing to engage active Mozillians?


Overall stats:

  • Contributors: 5441 (we’ve passed the halfway point!)
  • Webmaker accounts: 106.3K (really big jump this week—11.6K new accounts this week as compared to 2.6K last week) (At one point we thought that 150K Webmaker accounts would be the magic number for hitting 10K Contributors. Should we revisit that assumption?)
  • Events: 1199 (up 10% from last week; this is down from the previous week which saw a 26% jump)
  • Hosts: 450 (up 14% from last week, same as the prior week)
  • Expected attendees: 61,910 (up 13% from last week, down a little bit from last week’s 16% increase)
  • Cities: 260 (up 8% from 241 last week)
  • Traffic: here’s the last three weeks. You can see we’re maintaining the higher levels that started with last week’s increase to our snippet allotment.


  • The Webmaker user account conversion rate also went up this week:


  • Do we know what caused the improved conversion rate?


Engagement Strategy #1: PARTNER OUTREACH

EVENT PARTNERS: This week we started implementing our phone-based “hand holding” strategy. We’re tracking findings from partner calls on a spreadsheet and capturing learnings on an etherpad.


  • as I understand it, we need to populate the Potential Contributors column with numbers (not words) to inform the expected Contributors trend line
  • same for the Potential Accounts column
  • are we using the Potential Events column to inform a trend line on any dashboard?
  • oh, and let’s agree on a format convention for the date field, so that we can sort by date

PROMOTIONAL PARTNERS: It still looks like we’re only getting handfuls of referrals through the specific partner URLs. I’d like to clarify what exactly our goals are for promotional partners, so that we can figure out whether to focus more attention on tracking results.


Engagement Strategy #2: ACTIVE MOZILLIANS

I haven’t heard anything about engaging Reps or FSAs this week. Have we done anything on this front?


Engagement Strategy #3: OWNED MEDIA


The snippet continues to perform well in terms of driving traffic. Last week we sent the first of the drip campaign emails and saw the following results after the first two days:

  • Sent to 75,964
  • Unique opens 13187
  • Open rate 17%
  • Unique clicks 4004
  • Open to click rate 30%
  • New accounts 554
  • Email to account conversion 0.73%
  • Click to conversion 13.84%
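The rates above follow directly from the raw counts; as a worked check of the funnel arithmetic (sent → opens → clicks → accounts), sketched in Rust to keep one language across this digest’s code samples:

```rust
// Percentage of `part` in `whole`, rounded to two decimal places.
fn pct(part: f64, whole: f64) -> f64 {
    (part / whole * 10_000.0).round() / 100.0
}

fn main() {
    let (sent, opens, clicks, accounts) = (75_964.0, 13_187.0, 4_004.0, 554.0);
    assert_eq!(pct(opens, sent).round(), 17.0);   // "Open rate 17%"
    assert_eq!(pct(clicks, opens).round(), 30.0); // "Open to click rate 30%"
    assert_eq!(pct(accounts, sent), 0.73);        // "Email to account conversion 0.73%"
    assert_eq!(pct(accounts, clicks), 13.84);     // "Click to conversion 13.84%"
}
```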

The snippet working group met and agreed to build the following two iterations:

  • Survey without email > 2 x tailored account signup pages > ongoing journey
  • Immediate account signup page > ongoing journey


Engagement Strategy #4: EARNED MEDIA

Press this week:

We revised our strategy with Turner this week. See previous email on that topic.

Brand awareness

Here’s this week’s traffic coming from searches for “webmaker” and “maker party” (blue line) vs. the week before (orange line). There’s been a 28% increase (though the overall numbers are quite small).

SOCIAL (not one of our key strategies): #MakerParty trendline: Back down a bit this week.

See #MakerParty tweets here:

Sean MartellWhat is a Living Brand?

Today, we’re starting the Mozilla ID project, which will be an exploration into how to make the Mozilla identity system as bold and dynamic as the Mozilla project itself. The project will look into tackling three of our brand elements – typography, color, and the logo. Of these three, the biggest challenge will be creating a new logo, since we currently don’t have an official mark. Mozilla’s previous logo was the ever-amazing dino head that we all love, which has now been set as a key branding element for our community-facing properties. Its replacement should embody everything that Mozilla is, and our goal is to bake as much of our nature into the visual as we can while keeping it clean and modern. In order to do this, we’re embracing the idea of creating a living brand.

A living brand you say? Tell me more.


Image from DesignBoom

I’m pleased to announce you already know what a living brand is, you just may not know it under that term. If you’ve ever seen the MTV logo – designed in 1981 by Manhattan Design – you’ve witnessed a living brand. The iconic M and TV shapes are the base elements for their brand and building on that with style, color, illustrations and animations creates the dynamic identity system that brings it alive. Their system allows designers to explore unlimited variants of the logo, while maintaining brand consistency with the underlying recognizable shapes. As you can tell through this example, a living brand can unlock so much potential for a logo, opening up so many possibilities for change and customization. It’s because of this that we feel a living brand is perfect for Mozilla – we’ll be able to represent who we are through an open visual system of customization and creative expression.

You may be wondering how this is so open if Mozilla Creative will be doing all of the variants for this new brand. Here’s the exciting part. We’re going to be helping define the visual system, yes, but we’re exploring dynamic creation of the visual itself through code and data visualization. We’re also going to be creating the visual output using HTML5 and Web technologies, baking the building blocks of the Web we love and protect into our core brand logo.

OMG exciting, right? Wait, there’s still more!

In order to have this “organized infinity” allow a strong level of brand recognition, we plan to have a constant mark as part of the logo, similar to how MTV did it with the base shapes. Here’s the fun part and one of several ways you can get involved – we’ll be live streaming the process with a newly minted YouTube channel where you can follow along as we explore everything from wordmark choices to building out those base logo shapes and data viz styles. Yay! Open design process!

So there it is. Our new fun project. Stay tuned to various channels coming out of Creative – this blog, my Twitter account, the Mozilla Creative blog and Twitter account – and we’ll update you shortly on how you’ll be able to take part in the process. For now, feel free to jump in to #mologo on IRC to say hi and discuss all things Mozilla brand!

It’s a magical time for design, Mozilla. Let’s go exploring!

Gervase MarkhamAccessing Vidyo Meetings Using Free Software: Help Needed

For a long time now, Mozilla has been a heavy user of the Vidyo video-conferencing system. Like Skype, it’s a “pretty much just works” solution where, sadly, the free software and open standards solutions don’t yet cut it in terms of usability. We hope WebRTC might change this. Anyway, in the mean time, we use it, which means that Mozilla staff have had to use a proprietary client, and those without a Vidyo login of their own have had to use a Flash applet. Ick. (I use a dedicated Android tablet for Vidyo, so I don’t have to install either.)

However, this sad situation may now have changed. In this bug, it seems that SIP and H.263/H.264 gateways have been enabled on our Vidyo setup, which should enable people to call in using standards-compliant free software clients. However, I can’t get video to work properly, using Linphone. Is there anyone out there in the Mozilla world who can read the bug and figure out how to do it?

Gervase MarkhamIt’s Not All About Efficiency

Delegation is not merely a way to spread the workload around; it is also a political and social tool. Consider all the effects when you ask someone to do something. The most obvious effect is that, if he accepts, he does the task and you don’t. But another effect is that he is made aware that you trusted him to handle the task. Furthermore, if you made the request in a public forum, then he knows that others in the group have been made aware of that trust too. He may also feel some pressure to accept, which means you must ask in a way that allows him to decline gracefully if he doesn’t really want the job. If the task requires coordination with others in the project, you are effectively proposing that he become more involved, form bonds that might not otherwise have been formed, and perhaps become a source of authority in some subdomain of the project. The added involvement may be daunting, or it may lead him to become engaged in other ways as well, from an increased feeling of overall commitment.

Because of all these effects, it often makes sense to ask someone else to do something even when you know you could do it faster or better yourself.

– Karl Fogel, Producing Open Source Software

Just Browsing“Because We Can” is Not a Good Reason

The two business books that have most influenced me are Geoffrey Moore’s Crossing the Chasm and Andy Grove’s Only the Paranoid Survive. Grove’s book explains that, for long-term success, established businesses must periodically navigate “strategic inflection points”, moments when a paradigm shift forces them to adopt a new strategy or fade into irrelevance. Moore’s book could be seen as a prequel, outlining strategies for nascent companies to break through and become established themselves.

The key idea of Crossing the Chasm is that technology startups must focus ruthlessly in order to make the jump from early adopters (who will use new products just because they are cool and different) into the mainstream. Moore presents a detailed strategy for marketing discontinuous hi-tech products, but to my mind his broad message is relevant to any company founder. You have a better chance of succeeding if you restrict the scope of your ambitions to the absolute minimum, create a viable business and then grow it from there.

This seems obvious: to compete with companies who have far more resources, a newcomer needs to target a niche where it can fight on an even footing with the big boys (and defeat them with its snazzy new technology). Taking on too much means that financial investment, engineering talent and, worst of all, management attention are diluted by spreading them across multiple projects.

So why do founders consistently jeopardize their prospects by trying to do too much? Let me count the ways.

In my experience the most common issue is an inability to pass up a promising opportunity. The same kind of person who starts their own company tends to be a go-getter with a bias towards action, so they never want to waste a good idea. In the very early stages this is great. Creativity is all about trying as many ideas as possible and seeing what sticks. But once you’ve committed to something that you believe in, taking more bets doesn’t increase your chances of success, it radically decreases them.

Another mistake is not recognizing quickly enough that a project has failed. Failure is rarely total. Every product will have a core group of passionate users or a flashy demo or some unique technology that should be worth something, dammit! The temptation is to let the project drag on even as you move on. Better to take a deep breath and kill it off so you can concentrate on your new challenges, rather than letting it weigh you down for months or years… until you inevitably cancel it anyway.

Sometimes lines of business need to be abandoned even if they are successful. Let’s say you start a small but prosperous company selling specialized accounting software to Lithuanian nail salons. You add a cash-flow forecasting feature and realize after a while that it is better than anything else on the market. Now you have a product that you can sell to any business in any country. But you might as well keep selling your highly specialized accounting package in the meantime, right? After all, it’s still contributing to your top-line revenue. Wrong! You’ve found a much bigger opportunity now and you should dump your older business as soon as financially possible.

Last, but certainly not least, there is the common temptation to try to pack too much into a new product. I’ve talked to many enthusiastic entrepreneurs who are convinced that their product will be complete garbage unless it includes every minute detail of their vast strategic vision. What they don’t realize is that it is going to take much longer than they think to develop something far simpler than what they have in mind. This is where all the hype about the Lean Startup and Minimum Viable Products is spot on. They force you to make tough choices about what you really need before going to market. In the early stages you should be hacking away big chunks of your product spec with a metaphorical machete, not agonizing over every “essential” feature you have to let go.

The common thread is that ambitious, hard-charging individuals, the kind who start companies, have a tough time seeing the downside of plugging away endlessly at old projects, milking every last drop out of old lines of business and taking on every interesting new challenge that comes their way. But if you don’t have a coherent, disciplined strategic view of what you are trying to achieve, if you aren’t willing to strip away every activity that doesn’t contribute to this vision, then you probably aren’t working on the right things.

Roberto A. VitilloDashboard generator for custom Telemetry jobs

tldr: Next time you are in need of a dashboard similar to the one used to monitor main-thread IO, please consider using my dashboard generator which takes care of displaying periodically generated data.

So you wrote your custom analysis for Telemetry, your map-reduce job is finally giving you the desired data, and you want to set it up to run periodically. You will need some sort of dashboard to monitor the weekly runs, but since you don’t really care how that’s done, what do you do? You copy-paste the code of one of our current dashboards, tweak a little here and there, and off you go.

That basically describes all of the recent dashboards, like the one for main-thread IO (mea culpa). Writing dashboards is painful when the only thing you care about is the data. Once you finally have what you were looking for, the way you present it is often an afterthought at best. And maintaining N dashboards quickly becomes unpleasant.

But what makes writing and maintaining dashboards so painful, exactly? Simply that the more controls you have, the more kinds of events you have to handle, and the more easily things get out of hand. You start with something small and beautiful that just displays some CSV and, presto, you end up with what should have been properly described as a state machine but is instead a mess of intertwined event handlers.

What I was looking for was something along the lines of Shiny for R, but in Javascript and with the option of a client-only interface. It turns out that React does more or less what I want. It’s not necessarily meant for data analysis, so there aren’t any plotting facilities, but everything is there to roll your own. What makes Shiny and React so useful is that they embrace reactive programming. Once you define a state and a set of dependencies, i.e. a data flow graph in practical terms, changes that affect the state are automatically propagated to the right components. Even though this can seem like overkill for small dashboards, it makes it extremely easy to extend them when the set of possible states expands, which is almost always what happens.
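To make the reactive idea concrete, here is a toy sketch (in Rust rather than Javascript, and emphatically not the Shiny or React API): a piece of state with subscribers, where every change is pushed automatically to the components that depend on it. All names are illustrative.

```rust
use std::cell::RefCell;
use std::rc::Rc;

/// A toy reactive value: components subscribe to it, and every state
/// change is pushed to all subscribers automatically.
struct Reactive<T> {
    value: T,
    subscribers: Vec<Box<dyn Fn(&T)>>,
}

impl<T> Reactive<T> {
    fn new(value: T) -> Self {
        Reactive { value, subscribers: Vec::new() }
    }

    /// Register a dependent component; run it once with the current state.
    fn subscribe(&mut self, f: Box<dyn Fn(&T)>) {
        f(&self.value);
        self.subscribers.push(f);
    }

    /// Update the state and propagate the change to every subscriber.
    fn set(&mut self, value: T) {
        self.value = value;
        for f in &self.subscribers {
            f(&self.value);
        }
    }
}

fn main() {
    // Pretend this is a chart that re-renders whenever the selected week changes.
    let rendered = Rc::new(RefCell::new(Vec::new()));
    let sink = rendered.clone();
    let mut week = Reactive::new(1);
    week.subscribe(Box::new(move |w: &i32| sink.borrow_mut().push(*w)));
    week.set(2); // no explicit event handler: propagation is automatic
    assert_eq!(*rendered.borrow(), vec![1, 2]);
    println!("rendered weeks: {:?}", rendered.borrow());
}
```

The point is the shape, not the code: the dashboard declares *what* depends on the state, and propagation is handled in one place instead of in a tangle of event handlers.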

To make things easier for developers I wrote a dashboard generator, iacumus, for use-cases similar to the ones we currently have. It can be used in simple scenarios when:

  • the data is collected in csv files on a weekly basis, usually using build-ids;
  • the dashboard should compare the current week against the previous one and mark differences in rankings;
  • it should be possible to go back and forward in time;
  • the dashboard should provide some filtering and sorting criteria.

Iacumus is customizable through a configuration file that is specified via a GET parameter. Since it’s hosted on GitHub, you just have to provide the data and don’t even have to spend time deploying the dashboard somewhere, assuming the machine serving the configuration file supports CORS. Here is how the end result looks using the data for the Add-on startup correlation dashboard. Note that currently Chrome doesn’t handle our gzipped datasets properly and is unable to display anything, in case you wonder…

My next immediate goal is to simplify writing map-reduce jobs for the above-mentioned use cases, or at the very least to write down some guidelines. For instance, some of our dashboards are based on Firefox’s version numbers and not on build-ids, which are really what you want when making weekly comparisons of Nightly.

Another interesting thought would be to automatically detect differences in the dashboards and send alerts. That might not be as easy with the current data, since a quick look at the dashboards makes it clear that the rankings fluctuate quite a bit. We would have to collect daily reports and account for the variance of the rankings, as just a few weekly data points are not reliable enough to account for the deviation.
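As a sketch of that alerting idea (hypothetical names, not part of iacumus or Telemetry): given an item’s historical daily ranks, flag a new rank only when it falls more than k standard deviations from the historical mean, so that normal fluctuation doesn’t trigger alerts.

```rust
/// Return true if `new_rank` deviates from the historical ranks by more
/// than `k` standard deviations (a crude anomaly test; illustrative only).
fn rank_is_anomalous(history: &[f64], new_rank: f64, k: f64) -> bool {
    let n = history.len() as f64;
    let mean = history.iter().sum::<f64>() / n;
    let variance = history.iter().map(|r| (r - mean).powi(2)).sum::<f64>() / n;
    let std_dev = variance.sqrt().max(1e-9); // guard against zero variance
    ((new_rank - mean) / std_dev).abs() > k
}

fn main() {
    // A ranking that normally wobbles between 3 and 5.
    let history = [3.0, 4.0, 3.0, 5.0, 4.0];
    assert!(!rank_is_anomalous(&history, 4.0, 3.0)); // normal wobble: no alert
    assert!(rank_is_anomalous(&history, 20.0, 3.0)); // big jump: alert
    println!("ok");
}
```

With daily data the history would be long enough for the mean and deviation to be meaningful, which is exactly the argument made above against relying on a few weekly data points.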

Matt BrubeckLet's build a browser engine! Part 2: Parsing HTML

This is the second in a series of articles on building a toy browser rendering engine:

This article is about parsing HTML source code to produce a tree of DOM nodes. Parsing is a fascinating topic, but I don’t have the time or expertise to give it the introduction it deserves. You can get a detailed introduction to parsing from any good course or book on compilers. Or get a hands-on start by going through the documentation for a parser generator that works with your chosen programming language.

HTML has its own unique parsing algorithm. Unlike parsers for most programming languages and file formats, the HTML parsing algorithm does not reject invalid input. Instead it includes specific error-handling instructions, so web browsers can agree on how to display every web page, even ones that don’t conform to the syntax rules. Web browsers have to do this to be usable: Since non-conforming HTML has been supported since the early days of the web, it is now used in a huge portion of existing web pages.

A Simple HTML Dialect

I didn’t even try to implement the standard HTML parsing algorithm. Instead I wrote a basic parser for a tiny subset of HTML syntax. My parser can handle simple pages like this:

        <div id="main" class="test">
            <p>Hello <em>world</em>!</p>
        </div>

The following syntax is allowed:

  • Balanced tags: <p>...</p>
  • Attributes with quoted values: id="main"
  • Text nodes: <em>world</em>

Everything else is unsupported, including:

  • Comments
  • Doctype declarations
  • Escaped characters (like &amp;) and CDATA sections
  • Self-closing tags: <br/> or <br> with no closing tag
  • Error handling (e.g. unbalanced or improperly nested tags)
  • Namespaces and other XHTML syntax: <html:body>
  • Character encoding detection

At each stage of this project I’m writing more or less the minimum code needed to support the later stages. But if you want to learn more about parsing theory and tools, you can be much more ambitious in your own project!

Example Code

Next, let’s walk through my toy HTML parser, keeping in mind that this is just one way to do it (and probably not the best way). Its structure is based loosely on the tokenizer module from Servo’s cssparser library. It has no real error handling; in most cases, it just aborts when faced with unexpected syntax. The code is in Rust, but I hope it’s fairly readable to anyone who’s used similar-looking languages like Java, C++, or C#. It makes use of the DOM data structures from part 1.

The parser stores its input string and a current position within the string. The position is the index of the next character we haven’t processed yet.

struct Parser {
    pos: uint,
    input: String,
}

We can use this to implement some simple methods for peeking at the next characters in the input:

impl Parser {
    /// Read the next character without consuming it.
    fn next_char(&self) -> char {
        self.input.as_slice().char_at(self.pos)
    }

    /// Do the next characters start with the given string?
    fn starts_with(&self, s: &str) -> bool {
        self.input.as_slice().slice_from(self.pos).starts_with(s)
    }

    /// Return true if all input is consumed.
    fn eof(&self) -> bool {
        self.pos >= self.input.len()
    }

    // ...

Rust strings are stored as UTF-8 byte arrays. To go to the next character, we can’t just advance by one byte. Instead we use char_range_at which correctly handles multi-byte characters. (If our string used fixed-width characters, we could just increment pos.)

    /// Return the current character, and advance self.pos to the next character.
    fn consume_char(&mut self) -> char {
        let range = self.input.as_slice().char_range_at(self.pos);
        self.pos = range.next;
        return range.ch;
    }
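(A hedged aside: `char_range_at` was removed from later versions of Rust. In modern Rust, the same multi-byte-safe advance is usually written with `chars()` and `len_utf8`; the mini `Parser` below is just an illustration in today’s syntax, not part of robinson.)

```rust
struct Parser {
    pos: usize, // `uint` became `usize` in later Rust
    input: String,
}

impl Parser {
    /// Return the current character, and advance `pos` past it,
    /// respecting UTF-8 multi-byte encoding.
    fn consume_char(&mut self) -> char {
        let cur_char = self.input[self.pos..].chars().next().unwrap();
        self.pos += cur_char.len_utf8(); // 1-4 bytes per character
        cur_char
    }
}

fn main() {
    let mut p = Parser { pos: 0, input: "a€b".to_string() };
    assert_eq!(p.consume_char(), 'a');
    assert_eq!(p.consume_char(), '€'); // three bytes in UTF-8
    assert_eq!(p.pos, 4);
    println!("ok");
}
```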

Often we will want to consume a string of consecutive characters. The consume_while method consumes characters that meet a given condition, and returns them as a string:

    /// Consume characters until `test` returns false.
    fn consume_while(&mut self, test: |char| -> bool) -> String {
        let mut result = String::new();
        while !self.eof() && test(self.next_char()) {
            result.push_char(self.consume_char());
        }
        return result;
    }

We can use this to ignore a sequence of space characters, or to consume a string of alphanumeric characters:

    /// Consume and discard zero or more whitespace characters.
    fn consume_whitespace(&mut self) {
        self.consume_while(|c| c.is_whitespace());
    }

    /// Parse a tag or attribute name.
    fn parse_tag_name(&mut self) -> String {
        self.consume_while(|c| match c {
            'a'..'z' | 'A'..'Z' | '0'..'9' => true,
            _ => false
        })
    }

Now we’re ready to start parsing HTML. To parse a single node, we look at its first character to see if it is an element or a text node. In our simplified version of HTML, a text node can contain any character except <.

    /// Parse a single node.
    fn parse_node(&mut self) -> dom::Node {
        match self.next_char() {
            '<' => self.parse_element(),
            _   => self.parse_text()
        }
    }

    /// Parse a text node.
    fn parse_text(&mut self) -> dom::Node {
        dom::text(self.consume_while(|c| c != '<'))
    }

An element is more complicated. It includes opening and closing tags, and between them any number of child nodes:

    /// Parse a single element, including its open tag, contents, and closing tag.
    fn parse_element(&mut self) -> dom::Node {
        // Opening tag.
        assert!(self.consume_char() == '<');
        let tag_name = self.parse_tag_name();
        let attrs = self.parse_attributes();
        assert!(self.consume_char() == '>');

        // Contents.
        let children = self.parse_nodes();

        // Closing tag.
        assert!(self.consume_char() == '<');
        assert!(self.consume_char() == '/');
        assert!(self.parse_tag_name() == tag_name);
        assert!(self.consume_char() == '>');

        dom::elem(tag_name, attrs, children)
    }

Parsing attributes is pretty easy in our simplified syntax. Until we reach the end of the opening tag (>) we repeatedly look for a name followed by = and then a string enclosed in quotes.

    /// Parse a single name="value" pair.
    fn parse_attr(&mut self) -> (String, String) {
        let name = self.parse_tag_name();
        assert!(self.consume_char() == '=');
        let value = self.parse_attr_value();
        (name, value)
    }

    /// Parse a quoted value.
    fn parse_attr_value(&mut self) -> String {
        let open_quote = self.consume_char();
        assert!(open_quote == '"' || open_quote == '\'');
        let value = self.consume_while(|c| c != open_quote);
        assert!(self.consume_char() == open_quote);
        value
    }

    /// Parse a list of name="value" pairs, separated by whitespace.
    fn parse_attributes(&mut self) -> dom::AttrMap {
        let mut attributes = HashMap::new();
        loop {
            self.consume_whitespace();
            if self.next_char() == '>' {
                break;
            }
            let (name, value) = self.parse_attr();
            attributes.insert(name, value);
        }
        attributes
    }

To parse the child nodes, we recursively call parse_node in a loop until we reach the closing tag:

    /// Parse a sequence of sibling nodes.
    fn parse_nodes(&mut self) -> Vec<dom::Node> {
        let mut nodes = vec!();
        loop {
            self.consume_whitespace();
            if self.eof() || self.starts_with("</") {
                break;
            }
            nodes.push(self.parse_node());
        }
        nodes
    }

Finally, we can put this all together to parse an entire HTML document into a DOM tree. This function will create a root node for the document if it doesn’t include one explicitly; this is similar to what a real HTML parser does.

/// Parse an HTML document and return the root element.
pub fn parse(source: String) -> dom::Node {
    let mut nodes = Parser { pos: 0u, input: source }.parse_nodes();

    // If the document contains a root element, just return it. Otherwise, create one.
    if nodes.len() == 1 {
        nodes.swap_remove(0).unwrap()
    } else {
        dom::elem("html".to_string(), HashMap::new(), nodes)
    }
}

That’s it! The entire code for the robinson HTML parser. The whole thing weighs in at just over 100 lines of code (not counting blank lines and comments). If you use a good library or parser generator, you can probably build a similar toy parser in even less space.


Here are a few alternate ways to try this out yourself. As before, you can choose one or more of them and ignore the others.

  1. Build a parser (either “by hand” or with a library or parser generator) that takes a subset of HTML as input and produces a tree of DOM nodes.

  2. Modify robinson’s HTML parser to add some missing features, like comments. Or replace it with a better parser, perhaps built with a library or generator.

  3. Create an invalid HTML file that causes your parser (or mine) to fail. Modify the parser to recover from the error and produce a DOM tree for your test file.


If you want to skip parsing completely, you can build a DOM tree programmatically instead, by adding some code like this to your program (in pseudo-code; adjust it to match the DOM code you wrote in Part 1):

// <html><body>Hello, world!</body></html>
let root = element("html");
let body = element("body");
body.children.push(text("Hello, world!"));
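For instance, here is a self-contained Rust sketch of that pseudo-code, using a minimal hypothetical Node type in place of the Part 1 DOM module (your own types will differ):

```rust
// A minimal, hypothetical stand-in for the Part 1 DOM types, just enough
// to show the programmatic approach; your own Node/element/text will differ.
#[derive(Debug)]
enum NodeType {
    Element(String),
    Text(String),
}

#[derive(Debug)]
struct Node {
    node_type: NodeType,
    children: Vec<Node>,
}

fn element(tag: &str) -> Node {
    Node { node_type: NodeType::Element(tag.to_string()), children: vec![] }
}

fn text(data: &str) -> Node {
    Node { node_type: NodeType::Text(data.to_string()), children: vec![] }
}

fn main() {
    // <html><body>Hello, world!</body></html>
    let mut body = element("body");
    body.children.push(text("Hello, world!"));
    let mut root = element("html");
    root.children.push(body);
    assert_eq!(root.children.len(), 1);
    println!("{:?}", root);
}
```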

Or you can find an existing HTML parser and incorporate it into your program.

The next article in this series will cover CSS data structures and parsing.

Christian HeilmannPresenter tip: animated GIFs are not as cool as we think

Disclaimer: I have no right to tell you what to do and how to present – how dare I? You can do whatever you want. I am not “hating” on anything – and I don’t like that term. I am also guilty of the things I will talk about here, and will be again in the future. So bear with me: as someone who currently spends most of his life presenting, being at conferences and coaching people to become presenters, I think it is time for an intervention.

The hardest part of putting together a talk for developers is finding the funny gifs that accurately represent your topic.

The Tweet that started this and its thread

If you are a technical presenter and you consider adding lots of animated GIFs to your slides, stop, and reconsider. Consider other ways to spend your time instead. For example:

  • Writing a really clean code example and keeping it in a documented code repository for people to use
  • Researching how very successful people use the thing you want the audience to care about
  • Finding a real life example where a certain way of working made a real difference and how it could be applied to an abstract coding idea
  • Researching real numbers to back up your argument or disprove common “truths”

Don’t fall for the “oh, but it is cool and everybody else does it” trap. Why? Because when everybody does it, there is nothing cool or new about it.

Animated GIFs are ubiquitous on the web right now and we all love them. They are short videos that work in any environment, they are funny and – being very pixelated – have a “punk” feel to them.

This, to me, was the reason presenters used them in technical presentations in the first place. They were a disruption, they were fresh, they were different.

We all got bored to tears by corporate presentations that had more bullets than the showdown in a Western movie. We all got fed up with amazingly brushed-up presentations by visual aficionados that had just one inspiring butterfly or beautiful sunset too many.

added text to sunrise

We wanted something gritty, something closer to the metal – just as we are. Let’s be different, let’s disrupt, let’s show a seemingly unconnected animation full of pixels.

This is great and still there are many good reasons to use an animated GIF in our presentations:

  • They are an eye-catcher – animated things are what we look at as humans. The subconscious check whether something that moves is a saber-toothed tiger trying to eat us is deeply ingrained in us. This can make an animated GIF a good first slide in a new section of your talk: you seemingly do something unexpected, but what you want to achieve is to get the audience to reset and focus on the next topic you’d like to cover.
  • They can be a good emphasis of what you are saying. When Soledad Penades shows a lady drinking under the table (6:05) when talking about her insecurities as someone people look up to, it makes a point. When Jake Archibald explains that navigator.onLine will be true even if the network cable is plugged into some soil (26:00), it is a funny, exciting and simple thing to do and adds to the point he makes.
  • It is an in-crowd thing to do – the irreverence of an animated, meme-ish GIF tells the audience that you are one of them, not a professional, slick and tamed corporate speaker.

But is it? Isn’t a trick that everybody uses way past being disruptive? Are we all unique and different when we all use the same content? How many more times do we have to endure the “this escalated quickly” GIF taken from a 10-year-old movie? Let’s not even talk about the issue that we expect the audience to get the reference and why it would be funny.

We’re running the danger here of becoming predictable and boring. Especially when you see speakers use an animated GIF they know wasn’t needed and then try to shoe-horn it somehow into their narration. It is not a rite of passage. You should use the right presentation technique to achieve a certain response. A GIF that is in your slides just to be there is like an unused global variable in your code – distracting, bad practice and in general causing confusion.

The reasons why we use animated GIFs (or videos for that matter) in slides are also their main problem:

  • They do distract the audience – as a “whoa, something’s happening” reminder to the audience, that is good. When you have to compete with the blinking thing behind you, it is bad. This is especially true when you choose a very “out there” GIF and spend too much time talking over it. A fast animation or a very short loop can get annoying for the audience, and instead of seeing you as a cool presenter they get headaches and think “please move on to the next slide” without listening to you. I made that mistake with my rainbow-vomiting dwarf at HTML5Devconf in 2013 and was called out on Twitter.
  • They are too easy to add – many times we are tempted to go for the funny cat pounding a strawberry just because it is cool and makes us look different and surprising as presenters.

Well, it isn’t surprising any longer and it can be seen as a cheap way out for us as creators of a presentation. Filler material is filler material, no matter how quirky.

You don’t make a boring topic more interesting by adding animated images. You also don’t make a boring lecture more interesting by sitting on a fart cushion. Sure, it will wake people up and maybe get a giggle but it doesn’t give you a more focused audience. We stopped using 3D transforms in between slides and fiery text as they are seen as a sign of a bad presenter trying to make up for a lack of stage presence or lack of content with shiny things. Don’t be that person.

When it comes to technical presentations there is one important thing to remember: your slides do not matter and are not your presentation. You are.

Your slides are either:

  • wallpaper for your talking parts
  • emphasis of what you are currently covering or
  • a code example.

If a slide doesn’t cover any of these cases – remove it. Wallpaper doesn’t blink. It is there to be in the background and make the person in front of it stand out more. You already have to compete with a lot of other speakers, audience fatigue, technical problems, sound issues, the state of your body and bad lighting. Don’t add to the distractions you have to overcome by adding shiny trinkets of your own making.

You don’t make boring content more interesting by wrapping it in a shiny box. Instead, don’t talk about the boring parts. Make them interesting by approaching them differently: show a URL and a screenshot of the boring resources and tell people what they mean in the context of your topic. If you’re bored by something, you can bet the audience is, too. How you come across is how the audience will react. And insincerity is the worst thing you can project. Being afraid or being shy or just being informative is totally fine. Don’t try too hard to please a current fashion – be yourself, be excited about what you present, and the rest falls into place.

So, by all means, use animated GIFs when they fit – give humorous and irreverent presentations. But only do it when this really is you and the rest of your stage persona fits. There are masterful people out there doing this right – Jenn Schiffer comes to mind. If you go for this – go all in. Don’t let the fun parts of your talk steal your thunder. As a presenter, you are entertainer, educator and explainer. It is a mix, and as all mixes go, they only work when they feel rounded and in the right rhythm.

Nicholas NethercoteSome good reading on sexual harassment and being a decent person

Last week I attended a sexual harassment prevention training seminar. This was the first of several seminars that Mozilla is holding as part of its commendable Diversity and Inclusion Strategy. The content was basically “how to not get sued for sexual harassment in the workplace”. That’s a low bar, but also a reasonable place to start, and the speaker was both informative and entertaining. I’m looking forward to the next seminar on Unconscious Bias and Inclusion, which sounds like it will cover more subtle issues.

With the topic of sexual harassment in mind, I stumbled across a Metafilter discussion from last year about an essay by Genevieve Valentine in which she describes and discusses a number of incidents of sexual harassment that she has experienced throughout her life. I found the essay interesting, but the Metafilter discussion thread even more so. It’s a long thread (594 comments) but mostly high quality. It focuses initially on one kind of harassment that some men perform on public transport, but quickly broadens to be about (a) the full gamut of harassing behaviours that many women face regularly, (b) the responses women make towards these behaviours, and (c) the reactions, both helpful and unhelpful, that people can and do have towards those responses. Examples abound, ranging from the disconcerting to the horrifying.

There are, of course, many other resources on the web where one can learn about such topics. Nonetheless, the many stories that viscerally punctuate this particular thread (and the responses to those stories) helped my understanding of this topic — in particular, how bystanders can intervene when a woman is being harassed — more so than some drier, more theoretical presentations have. It was well worth my time.

Jonas Finnemann JensenUsing Aggregates from Telemetry Dashboard in Node.js

When I was working on the aggregation code for telemetry histograms as displayed on the telemetry dashboard, I also wrote a Javascript library (telemetry.js) to access the aggregated histograms presented in the dashboard. The idea was to separate concerns and simplify access to the aggregated histogram data, but also to allow others to write custom dashboards presenting this data in different ways. Since then, two custom dashboards have appeared:

Both of these dashboards run a cronjob that downloads the aggregated histogram data using telemetry.js and then aggregates or analyses it in an interesting way before publishing the results on the custom dashboard. However, telemetry.js was written to be included from its hosted location, so that we could update the storage format, use a different data service, move to a bucket in another region, etc. I still want to maintain the ability to modify telemetry.js without breaking all the deployments, so I decided to write a node.js module called telemetry-js-node that loads telemetry.js from that hosted location. As evident from the example below, this module is straightforward to use, and exhibits full compatibility with telemetry.js, for better and worse.

// Include telemetry.js
var Telemetry = require('telemetry-js-node');

// Initialize telemetry.js just as the documentation says to
Telemetry.init(function() {
  // Get all versions
  var versions = Telemetry.versions();

  // Pick a version
  var version = versions[0];

  // Load measures for version
  Telemetry.measures(version, function(measures) {

    // Print measures available
    console.log("Measures available for " + version);

    // List measures
    Object.keys(measures).forEach(function(measure) {
      console.log("  " + measure);
    });
  });
});
Whilst there certainly are some valid concerns (and risks) with loading Javascript code over http, this hack allows us to offer a stable API and minimize maintenance for people consuming the telemetry histogram aggregates. And as we’re reusing the existing code, the extensive documentation for telemetry is still applicable. See the following links for further details.

Disclaimer: I know it’s not smart to load Javascript code into node.js over http. It’s mostly a security issue, as you can’t use telemetry.js without internet access anyway. But considering that most people will run this as an isolated cron job (using docker, lxc, heroku or an isolated EC2 instance), this seems like an acceptable solution.

By the way, if you make a custom telemetry dashboard, whether it’s using telemetry.js in the browser or Node.js, please file a pull request against telemetry-dashboard on github to have a link to your dashboard included.

Jordan LundThis Week In Releng - Aug 4th, 2014

Major Highlights:

  • Kim enabled c3.xlarge slaves for selected b2g tests - bug 1031083
  • Catlee added pvtbuilds to list of things that pass through proxxy
  • Coop implemented the ability to enable/disable a slave directly from slave health

Completed work (resolution is 'FIXED'):

In progress work (unresolved and not assigned to nobody):

Mike ConleyDocShell in a Nutshell – Part 2: The Wonder Years (1999 – 2004)

When I first announced that I was going to be looking at the roots of DocShell, and how it has changed over the years, I thought I was going to be leafing through a lot of old CVS commits in order to see what went on before the switch to Mercurial.

I thought it’d be so, and indeed it was so. And it was painful. Having worked with DVCS’s like Mercurial and Git so much over the past couple of years, my brain was just not prepared to deal with CVS.

My solution? Take the old CVS tree, and attempt to make a Git repo out of it. Git I can handle.

And so I spent a number of hours trying to use cvs2git to convert my rsync’d mirror of the Mozilla CVS repository into something that I could browse with GitX.

“But why isn’t the CVS history in the Mercurial tree?” I hear you ask. And that’s a good question. It might have to do with the fact that converting the CVS history over is bloody hard – or at least that was my experience. cvs2git has the unfortunate habit of analyzing the entire repository / history and spitting out any errors or corruptions it found at the end.1 This is fine for small repositories, but the Mozilla CVS repo (even back in 1999) was quite substantial, and had quite a bit of history.

So my experience became: run cvs2git, wait 25 minutes, glare at an error message about corruption, scour the web for solutions to the issue, make random stabs at a solution, and repeat.

Not the greatest situation. I did what most people in my position would do, and cast my frustration into the cold, unfeeling void that is Twitter.

But, lo and behold, somebody on the other side of the planet was listening. Unfocused informed me that whoever created the gecko-dev Github mirror somehow managed to type in the black-magic incantations that would import all of the old CVS history into the Git mirror. I simply had to clone gecko-dev, and I was golden.

Thanks Unfocused. :)

So I had my tree. I cracked open GitX, put some tea on, and started poring over the commits from the initial creation of the docshell folder (October 15, 1999) to the last change in that folder just before the switch over to 2005 (December 15, 2004)2.

The following are my notes as I peered through those commits.

Artist’s rendering of me reading some old commit messages. I’m not claiming to have magic powers.

“First landing”

That’s the message for the first commit when the docshell/ folder was first created by Travis Bogard.

Without even looking at the code, that’s a pretty strange commit just by the message alone. No bug number, no reviewer, no approval, nothing even approximating a vague description of what was going on.

Leafing through these early commits, I was surprised to find that quite common. In fact, I learned that it was about a year after this work started that code review suddenly became mandatory for commits.

So, for these first bits of code, nobody seems to be reviewing it – or at least, nobody is signing off on it in commit messages.

Like I mentioned, the date for this commit is October 15, 1999. If the timeline in this Wikipedia article about the history of the Mozilla Suite is reliable, that puts us somewhere between Milestone 10 and Milestone 11 of the first 1.0 Mozilla Suite release.3

That means that at the time that this docshell/ folder was created, the Mozilla source code had been publicly available for over a year4, but nothing had been released from it yet.

Travis Bogard

Before we continue, who is this intrepid Travis Bogard who is spearheading this embedding initiative and the DocShell / WebShell rewrite?

At the time, according to his LinkedIn page, he worked for America Online (which at this point in time owned Netscape.5) He’d been working for AOL since 1996, working his way up the ranks from lowly intern all the way to Principal Software Engineer.

Travis was the one who originally wrote the wiki page about how painful it was embedding the web engine, and how unwieldy nsWebShell was.6 It was Travis who led the charge to strip away all of the complexity and mess inside of WebShell, and create smaller, more specialized interfaces for the behind-the-scenes DocShell class, which would carry out most of the work that WebShell had been doing up until that point.7

So, for these first few months, it was Travis who would be doing most of the work on DocShell.

Parallel development

These first few months, Travis puts the pedal to the metal moving things out of WebShell and into DocShell. Remember – the idea was to have a thin, simple nsWebBrowser that embedders could touch, and a fat, complex DocShell that was privately used within Gecko that was accessible via many small, specialized interfaces.

Wholesale replacing or refactoring a major part of the engine is no easy task, however – and since WebShell was core to the very function of the browser (and the mail/news client, and a bunch of other things), there were two copies of WebShell made.

The original WebShell existed in webshell/ under the root directory. The second WebShell, the one that would eventually replace it, existed under docshell/base. The one under docshell/base is the one that Travis was stripping down, but nobody was using it until it was stable. They’d continue using the one under webshell/, until they were satisfied with their implementation by both manual and automated testing.

When they were satisfied, they’d branch off of the main development line, and start removing occurrences of WebShell where they didn’t need to be, and replace them with nsWebBrowser or DocShell where appropriate. When that was done, they’d merge back into the main line, and celebrate!

At least, that was the plan.

That plan is spelled out here in the Plan of Attack for the redesign. That plan sketches out a rough schedule as well, and targets November 30th, 1999 as the completion point of the project.

This parallel development means that any bugs discovered in WebShell during the redesign need to be fixed in two places – both under webshell/ and docshell/base.

Breaking up is so hard to do

So what was actually changing in the code? In Travis’ first commit, he adds the following interfaces:

along with some build files. Something interesting here is this nsIHTMLDocShell – where it looked like at this point, the plan was to have different DocShell interfaces depending on the type of document being displayed. Later on, we see this idea dissipate.

If DocShell was a person, these are its baby pictures. At this point, nsIDocShell has just two methods – LoadDocument and LoadDocumentVia – and a single nsIDOMDocument attribute for the document.

And here’s the interface for WebShell, which Travis was basing these new interfaces off of. Note that there’s no LoadDocument, or LoadDocumentVia, or an attribute for an nsIDOMDocument. So it seems this wasn’t just a straightforward breakup into smaller interfaces – this was a rewrite, with new interfaces to replace the functionality of the old one.8

This is consistent with the remarks in this wikipage where it was mentioned that the new DocShell interfaces should have an API for the caller to supply a document, instead of a URI – thus taking the responsibility of calling into the networking library away from DocShell and putting it on the caller.

nsIDocShellEdit seems to be a replacement for some of the functions of the old nsIClipboardCommands methods that WebShell relied upon. Specifically, this interface was concerned with cutting, copying and pasting selections within the document. There is also a method for searching. These methods are all just stubbed out, and don’t do much at this point.

nsIDocShellFile seems to be the interface used for printing and saving documents.

nsIGenericWindow (which I believe is the ancestor of nsIBaseWindow), seems to be an interface that some embedding window must implement in order for the new nsWebBrowser / DocShell to be embedded in it. I think. I’m not too clear on this. At the very least, I think it’s supposed to be a generic interface for windows supplied by the underlying operating system.

nsIGlobalHistory is an interface for, well, browsing history. This was before tabs, so we had just a single, linear global history to maintain, and I guess that’s what this interface was for.

nsIScrollable is an interface for manipulating the scroll position of a document.

So these magnificent seven new interfaces were the first steps in breaking up WebShell… what was next?

Enter the Container

nsIDocShellContainer was created so that the DocShells could be formed into a tree and enumerated, and so that child DocShells could be named. It was introduced in this commit.


In this commit, only five days after the first landing, Travis appears to reverse the decision to pass the responsibility of loading the document onto the caller of DocShell. LoadDocument and LoadDocumentVia are replaced by LoadURI and LoadURIVia. Steve Clark (aka “buster”) is also added to the authors list of the nsIDocShell interface. It’s not clear to me why this decision was reversed, but if I had to guess, I’d say it proved to be too much of a burden on the callers to load all of the documents themselves. Perhaps they punted on that goal, and decided to tackle it again later (though I will point out that today’s nsIDocShell still has LoadURI defined in it).

First implementor

The first implementation of nsIDocShell showed up on October 25, 1999. It was nsHTMLDocShell, and with the exception of nsIGlobalHistory, it implemented all of the other interfaces that I listed in Travis’ first landing.

The base implementation

On October 25th, the stubs of a DocShell base implementation showed up in the repository. The idea, I would imagine, is that for each of the document types that Gecko can display, we’d have a DocShell implementation, and each of these DocShell implementations would inherit from this DocShell base class, and only override the things that they need specialized for their particular document type.

Later on, when the idea of having specialized DocShell implementations evaporates, this base class will end up being nsDocShell.cpp.

That same day, most of the methods were removed from the nsHTMLDocShell implementation, and nsHTMLDocShell was made to inherit from nsDocShellBase.

“Does not compile yet”

The message for this commit on October 27th, 1999 is pretty interesting. It reads:

added a bunch of initial implementation. does not compile yet, but that’s ok because docshell isn’t part of the build yet.

So not only are none of these patches being reviewed (as far as I can tell) or mapped to any bugs in the bug tracker, but the patches themselves just straight-up do not build. They are not building on tinderbox.

This is in pretty stark contrast to today’s code conventions. While it’s true that we might land code that is not enabled for most Nightly users, we usually hide such code behind an about:config pref so that developers can flip it on to test it. And I think it’s pretty rare (if it ever occurs) for us to land code in mozilla-central that’s not immediately put into the build system.

Perhaps the WebShell tests that were part of the Plan of Attack were being written in parallel and just hadn’t landed, but I suspect that they hadn’t been written at all at this point. I suspect that the team was trying to stand something up and make it partially work, and then write tests for WebShell and try to make them pass for both old WebShell and DocShell. Or maybe just the latter.

These days, I think that’s probably how we’d go about such a major re-architecture / redesign / refactor; we’d write tests for the old component, land code that builds but is only enabled via an about:config pref, and then work on porting the tests over to the new component. Once the tests pass for both, flip the pref for our Nightly users and let people test the new stuff. Once it feels stable, take it up the trains. And once it ships and it doesn’t look like anything is critically wrong with the new component, begin the process of removing the old component / tests and getting rid of the about:config pref.

Note that I’m not at all bashing Travis or the other developers who were working on this stuff back then – I’m simply remarking on how far we’ve come in terms of development practices.

Remember AOL keywords?

Tangentially, I’m seeing some patches go by that have to do with hooking up some kind of “Keyword” support to WebShell.

Remember those keywords? This was the pre-Google era where there were only a few simplistic search engines around, and people were still trying to solve discoverability of things on the web. Keywords was, I believe, AOL’s attempt at a solution.

You can read up on AOL Keywords here. I just thought it was interesting to find some Keywords support being written in here.

One DocShell to rule them all

Now that we have decided that there is only one docshell for all content types, we needed to get rid of the base class/ content type implementation. This checkin takes and moves the nsDocShellBase to be nsDocShell. It now holds the nsIHTMLDocShell stuff. This will be going away. nsCDocShell was created to replace the previous nsCHTMLDocShell.

This commit lands on November 12th (almost a month from the first landing), and is the point where the DocShell-implementation-per-document-type plan breaks down. nsDocShellBase gets renamed to nsDocShell, and the nsIHTMLDocShell interface gets moved into nsIDocShell.idl, where a comment above it indicates that the interface will soon go away.

We have nsCDocShell.idl, but this interface will eventually disappear as well.


So, this commit message on November 13th caught my eye:

pork jockey paint fixes. bug=18140, r=kmcclusk,pavlov

What the hell is a “pork jockey”? A quick search around, and I see yet another reference to it in Bugzilla on bug 14928. It seems to be some kind of project… or code name…

I eventually found this ancient wiki page that documents some of the language used in bugs on Bugzilla, and it has an entry for “pork jockey”: a pork jockey bug is a “bug for work needed on infrastructure/architecture”.

I mentioned this in #developers, and dmose (who was hacking on Mozilla code at the time), explained:

16:52 (dmose) mconley: so, porkjockeys
16:52 (mconley) let’s hear it
16:52 (dmose) mconley: at some point long ago, there was some infrastrcture work that needed to happen
16:52 mconley flips on tape recorder
16:52 (dmose) and when people we’re talking about it, it seemed very hard to carry off
16:52 (dmose) somebody said that that would happen on the same day he saw pigs fly
16:53 (mconley) ah hah
16:53 (dmose) so ultimately the group of people in charge of trying to make that happen were…
16:53 (dmose) the porkjockeys
16:53 (dmose) which was the name of the mailing list too

Here’s the e-mail that Brendan Eich sent out to get the Porkjockeys flying.

Development play-by-play

On November 17th, the nsIGenericWindow interface was removed because it was being implemented in widget/base/nsIBaseWindow.idl.

On November 27th, nsWebShell started to implement nsIBaseWindow, which helped pull a bunch of methods out of the WebShell implementations.

On November 29th, nsWebShell now implements nsIDocShell – so this seems to be the first point where the DocShell work gets brought into code paths that might be hit. This is particularly interesting, because he makes this change to both the WebShell implementation that he has under the docshell/ folder, as well as the WebShell implementation under the webshell/ folder. This means that some of the DocShell code is now actually being used.

On November 30th, Travis lands a patch to remove some old commented out code. The commit message mentions that the nsIDocShellEdit and nsIDocShellFile interfaces introduced in the first landing are now defunct. It doesn’t look like anything is diving in to replace these interfaces straight away, so it looks like he’s just not worrying about it just yet. The defunct interfaces are removed in this commit one day later.

On December 1st, WebShell (both the fork and the “live” version) is made to implement nsIDocShellContainer.

One day later, nsIDocShellTreeNode interface is added to replace nsIDocShellContainer. The interface is almost identical to nsIDocShellContainer, except that it allows the caller to access child DocShells at particular indices as opposed to just returning an enumerator.

December 3rd was a very big day! Highlights include:

Noticing something? A lot of these changes are getting dumped straight into the live version of WebShell (under the webshell/ directory). That’s not really what the Plan of Attack had spelled out, but that’s what appears to be happening. Perhaps all of this stuff was trivial enough that it didn’t warrant waiting for the WebShell fork to switch over.

On December 12th, nsIDocShellTreeOwner is introduced.

On December 15th, buster re-lands the nsIDocShellEdit and nsIDocShellFile interfaces that were removed on November 30th, but they’re called nsIContentViewerEdit and nsIContentViewerFile, respectively. Otherwise, they’re identical.

On December 17th, WebShell becomes a subclass of DocShell. This means that a bunch of things can get removed from WebShell, since they’re being taken care of by the parent DocShell class. This is a pretty significant move in the whole “replacing WebShell” strategy.

Similar work occurs on December 20th, where even more methods inside WebShell start to forward to the base DocShell class.

That’s the last bit of notable work during 1999. These next bits show up in the new year, and provide further proof that we didn’t all blow up during Y2K.

On February 2nd, 2000, a new interface called nsIWebNavigation shows up. This interface is still used to this day to navigate a browser, and to get information about whether it can go “forwards” or “backwards”.

On February 8th, a patch lands that makes nsGlobalWindow deal entirely in DocShells instead of WebShells. nsIScriptGlobalObject also now deals entirely with DocShells. This is a pretty big move, and the patch is sizeable.

On February 11th, more methods are removed from WebShell, since the refactorings and rearchitecture have made them obsolete.

On February 14th, for Valentine’s day, Travis lands a patch to have DocShell implement the nsIWebNavigation interface. Later on, he lands a patch that relinquishes further control from WebShell, and puts the DocShell in control of providing the script environment and providing the nsIScriptGlobalObjectOwner interface. Not much later, he lands a patch that implements the Stop method from the nsIWebNavigation interface for DocShell. It’s not being used yet, but it won’t be long now. Valentine’s day was busy!

On February 24th, more stuff (like the old Stop implementation) gets gutted from WebShell. Some, if not all, of those methods get forwarded to the underlying DocShell, unsurprisingly.

Similar story on February 29th, where a bunch of the scroll methods are gutted from WebShell, and redirected to the underlying DocShell. This one actually has a bug and some reviewers!9 Travis also landed a patch that gets DocShell set up to be able to create its own content viewers for various documents.

March 10th saw Travis gut plenty of methods from WebShell and redirect to DocShell instead. These include Init, SetDocLoaderObserver, GetDocLoaderObserver, SetParent, GetParent, GetChildCount, AddChild, RemoveChild, ChildAt, GetName, SetName, FindChildWithName, SetChromeEventHandler, GetContentViewer, IsBusy, SetDocument, StopBeforeRequestingURL, StopAfterURLAvailable, GetMarginWidth, SetMarginWidth, GetMarginHeight, SetMarginHeight, SetZoom, GetZoom. A few follow-up patches did something similar. That must have been super satisfying.

March 11th, Travis removes the Back, Forward, CanBack and CanForward methods from WebShell. Consumers of those can use the nsIWebNavigation interface on the DocShell instead.

March 30th sees the nsIDocShellLoadInfo interface show up. This interface is for “specifying information used in a nsIDocShell::loadURI call”. I guess this is certainly better than adding a huge amount of arguments to ::loadURI.

During all of this, I’m seeing references to a “new session history” being worked on. I’m not really exploring session history (at this point), so I’m not highlighting those commits, but I do want to point out that a replacement for the old Netscape session history stuff was underway during all of this DocShell business, and the work intersected quite a bit.

On April 16th, Travis lands a commit that takes yet another big chunk out of WebShell in terms of loading documents and navigation. The new session history is now being used instead of the old.

The last 10% is the hardest part

We’re approaching what appears to be the end of the DocShell work. According to his LinkedIn profile, Travis left AOL in May 2000. His last commit to the repository before he left was on April 24th. Big props to Travis for all of the work he put in on this project – by the time he left, WebShell was quite a bit simpler than when he started. I somehow don’t think he reached the end state that he had envisioned when he’d written the original redesign document – the work doesn’t appear to be done. WebShell is still around (in fact, parts of it were around until only recently!10 ). Still, it was a hell of a chunk of work he put in.

And if I’m interpreting the rest of the commits after this correctly, there is a slow but steady drop off in large architectural changes, and a more concerted effort to stabilize DocShell, nsWebBrowser and nsWebShell. I suspect this is because everybody was buckling down trying to ship the first version of the Mozilla Suite (which finally occurred June 5th, 2002 – still more than 2 years down the road).

There are still some notable commits though. I’ll keep listing them off.

On June 22nd, a developer called “rpotts” lands a patch to remove the SetDocument method from DocShell, and to give the implementation / responsibility of setting the document on implementations of nsIContentViewer.

July 5th sees rpotts move the new session history methods from nsIWebNavigation to a new interface called nsIDocShellHistory. It’s starting to feel like the new session history is really heating up.

On July 18th, a developer named Judson Valeski lands a large patch with the commit message “webshell-docshell consolodation changes”. Paraphrasing from the bug, the point of this patch is to move WebShell into the DocShell lib to reduce the memory footprint. This also appears to be a lot of cleanup of the DocShell code. Declarations are moved into header files. The nsDocShellModule is greatly simplified with some macros. It looks like some dead code is removed as well.

On November 9th, a developer named “disttsc” moves the nsIContentViewer interface from the webshell folder to the docshell folder, and converts it from a manually created .h to an .idl. The commit message states that this work is necessary to fix bug 46200, which was filed to remove nsIBrowserInstance (according to that bug, nsIBrowserInstance must die).

That’s probably the last big, notable change to DocShell during 2000.

2001: A DocShell Odyssey

On March 8th, a developer named “Dan M” moves the GetPersistence and SetPersistence methods from nsIWebBrowserChrome to nsIDocShellTreeOwner. He sounds like he didn’t really want to do it, or didn’t want to be pegged with the responsibility of the decision – the commit message states “embedding API review meeting made me do it.” This work was tracked in bug 69918.

On April 16th, Judson Valeski makes it so that the mimetypes that a DocShell can handle are not hardcoded into the implementation. Instead, handlers can be registered via the CategoryManager. This work was tracked in bug 40772.

On April 26th, a developer named Simon Fraser adds an implementation of nsISimpleEnumerator for DocShells. This implementation is called, unsurprisingly, nsDocShellEnumerator. This was for bug 76758. A method for retrieving an enumerator is added one day later in a patch that fixes a number of bugs, all related to the page find feature.

April 27th saw the first of the NSPR logging for DocShell get added to the code by a developer named Chris Waterson. Work for that was tracked in bug 76898.

On May 16th, for bug 79608, Brian Stell landed a getter and setter for the character set for a particular DocShell.

There’s a big gap here, where the majority of the landings are relatively minor bug fixes, cleanup, or only slightly related to DocShell’s purpose, and not worth mentioning.11

And beyond the infinite…

On January 8th, 2002, for bug 113970, Stephen Walker lands a patch that takes yet another big chunk out of WebShell, and adds this to the nsIWebShell.h header:


I’m actually surprised it took so long for something like this to get added to the nsIWebShell interface – though perhaps there was a shared understanding that nsIWebShell was shrinking, and such a notice wasn’t really needed.

On January 9th, 2003 (yes, a whole year later – I didn’t find much worth mentioning in the intervening time), I see the first reference to “deCOMtamination”, which is an effort to reduce the amount of XPCOM-style code being used. You can read up more on deCOMtamination here.

On January 13th, 2003, Nisheeth Ranjan lands a patch to use “machine learning” in order to order results in the urlbar autocomplete list. I guess this was the precursor to the frecency algorithm that the AwesomeBar uses today? Interestingly, this code was backed out again on February 21st, 2003 for reasons that aren’t immediately clear – but it looks like, according to this comment, the code was meant to be temporary in order to gather “weights” from participants in the Mozilla 1.3 beta, which could then be hard-coded into the product. The machine-learning bug got closed on June 25th, 2009 due to AwesomeBar and frecency.

On February 11th, 2004, the onbeforeunload event is introduced.

On April 17, 2004, gerv lands the first of several patches to switch the licensing of the code over to the MPL/LGPL/GPL tri-license. That’d be MPL 1.1.

On July 7th, 2004, timeless lands a patch that makes it so that embedders can disable all plugins in a document.

On November 23rd, 2004, the parent and treeOwner attributes for nsIDocShellTreeItem are made scriptable. They are read-only for script.

On December 8th, 2004, bz lands a patch that makes DocShell inherit from DocLoader, and starts a move to eliminate nsIWebShell, nsIWebShellContainer, and nsIDocumentLoader.

And that’s probably the last notable patch related to DocShell in 2004.

Lord of the rings

Reading through all of these commits in the docshell/ and webshell/ folders is a bit like taking a core sample of a very mature tree, and reading its rings. I can see some really important events occurring as I flip through these commits – from the very birth of Mozilla, to the birth of XPCOM and XUL, to porkjockeys, and the start of the embedding efforts… all the way to the splitting out of Thunderbird, deCOMtamination and introduction of the MPL. I really got a sense of the history of the Mozilla project reading through these commits.

I feel like I’m getting a better and better sense of where DocShell came from, and why it does what it does. I hope if you’re reading this, you’re getting that same sense.

Stay tuned for the next bit, where I look at years 2005 to 2010.

  1. Messages like: ERROR: A CVS repository cannot contain both cvsroot/mozilla/browser/base/content/metadata.js,v and cvsroot/mozilla/browser/base/content/Attic/metadata.js,v, for example 

  2. Those commits have hashes 575e116287f99fbe26f54ca3f3dbda377002c5e7 and  60567bb185eb8eea80b9ab08d8695bb27ba74e15 if you want to follow along at home. 

  3. Mozilla Suite 1.0 eventually ships on June 5th, 2002 

  4. According to Mozilla’s timeline, the source code was posted publicly on March 31st, 1998 

  5. According to Wikipedia, Netscape was purchased by AOL on March 17, 1999 

  6. That wiki page was first posted on October 8th, 1999 – exactly a week before the first docshell work had started. 

  7. I should point out that there is no mention of this embedding work in the first roadmap that was published for the Mozilla project. It was only in the second roadmap, published almost a year after the DocShell work began, that embedding was mentioned, albeit briefly. 

  8. More clues as to what WebShell was originally intended for are also in that interface file:

     * The web shell is a container for implementations of nsIContentViewer.
     * It is a content-viewer-container and also knows how to delegate certain
     * behavior to an nsIWebShellContainer.
     * Web shells can be arranged in a tree.
     * Web shells are also nsIWebShellContainer's because they can contain
     * other web shells.

    So that helps clear things up a bit. I think. 

  9. travis reviewed it, and the patch was approved by rickg. Not sure what approval meant back then… was that like a super review? 

  10. This is the last commit to the webshell folder before it got removed on September 30th, 2010. 

  11. Hopefully I didn’t miss anything interesting in there! 

Peter Bengtsson: Gzip rules the world of optimization, often

So I have a massive chunk of JSON that a Django view is sending to a piece of Angular that displays it nicely on the page. It's big. 674Kb actually. And it's likely going to be bigger in the near future. It's basically a list of dicts. It looks something like this:

>>> pprint(d['events'][0])
{u'archive_time': None,
 u'archive_url': u'/manage/events/archive/1113/',
 u'channels': [u'Main'],
 u'duplicate_url': u'/manage/events/duplicate/1113/',
 u'id': 1113,
 u'is_upcoming': True,
 u'location': u'Cyberspace - Pacific Time',
 u'modified': u'2014-08-06T22:04:11.727733+00:00',
 u'privacy': u'public',
 u'privacy_display': u'Public',
 u'slug': u'bugzilla-development-meeting-20141115',
 u'start_time': u'15 Nov 2014 02:00PM',
 u'start_time_iso': u'2014-11-15T14:00:00-08:00',
 u'status': u'scheduled',
 u'status_display': u'Scheduled',
 u'thumbnail': {u'height': 32,
                u'url': u'/media/cache/e7/1a/e71a58099a0b4cf1621ef3a9fe5ba121.png',
                u'width': 32},
 u'title': u'Bugzilla Development Meeting'}

So I thought one hackish simplification would be to convert each of these dicts into a list with a known sort order. Something like this:

>>> event = d['events'][0]
>>> pprint([event[k] for k in sorted(event)])
[...
 u'Cyberspace - Pacific Time',
 u'15 Nov 2014 02:00PM',
 {u'height': 32,
  u'url': u'/media/cache/e7/1a/e71a58099a0b4cf1621ef3a9fe5ba121.png',
  u'width': 32},
 u'Bugzilla Development Meeting']

So I converted my sample events.json file like that:

$ l -h events*
-rw-r--r--  1 peterbe  wheel   674K Aug  8 14:08 events.json
-rw-r--r--  1 peterbe  wheel   423K Aug  8 15:06 events.optimized.json

Excitingly the file is now 250Kb smaller because it no longer contains all those keys.

Now, I'd also send the order of the keys so I could do something like this in the AngularJS code:

 .success(function(response) {
   var events = response.events.map(function(event) {
     var new_event = {};
     response.keys.forEach(function(key, i) {
       new_event[key] = event[i];
     });
     return new_event;
   });
 })

Yuck! Nested loops! It was just getting more and more complicated.
Also, if there are keys that are not present in every element, it means I'd have to replace them with None.
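For what it’s worth, that missing-key normalization is easy on the Python side before shipping the list-of-lists. Here’s a minimal sketch (using hypothetical data, not the actual events payload), relying on dict.get to fill in None:

```python
# Hypothetical events where "title" is missing from the second dict.
events = [{"id": 1, "title": "a"}, {"id": 2}]

# Take the union of all keys, in a known sort order, shared by every row.
keys = sorted({k for e in events for k in e})

# dict.get returns None for absent keys, so every row has the same shape.
rows = [[e.get(k) for k in keys] for e in events]
print(rows)  # [[1, 'a'], [2, None]]
```

But of course, that just adds yet more machinery on top of an already hackish scheme.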

At this point I stopped and I could smell the hackish stink of sulfur of the hole I was digging myself into.
Then it occurred to me: gzip is really good at compressing repeated things, and repetition is exactly what a document-store-type data structure like a list of dicts is full of.

So I packed them manually to see what we could get:

$ apack events.json.gz events.json
$ apack events.optimized.json.gz events.optimized.json

And without further ado...

$ l -h events*
-rw-r--r--  1 peterbe  wheel   674K Aug  8 14:08 events.json
-rw-r--r--  1 peterbe  wheel    90K Aug  8 14:20 events.json.gz
-rw-r--r--  1 peterbe  wheel   423K Aug  8 15:06 events.optimized.json
-rw-r--r--  1 peterbe  wheel    81K Aug  8 15:07 events.optimized.json.gz
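The effect is easy to reproduce in a few lines of Python with the standard library’s gzip module (a sketch using synthetic stand-in data, not the author’s actual events.json):

```python
import gzip
import json

# Synthetic stand-in for the events payload: many dicts sharing the same keys.
events = [
    {"id": i, "privacy": "public", "status": "scheduled",
     "location": "Cyberspace - Pacific Time"}
    for i in range(5000)
]
full = json.dumps({"events": events}).encode("utf-8")

# "Optimized" variant: drop the keys, keep values in a known sort order.
keys = sorted(events[0])
optimized = json.dumps([[e[k] for k in keys] for e in events]).encode("utf-8")

print("raw:     %d vs %d bytes" % (len(full), len(optimized)))
print("gzipped: %d vs %d bytes" % (len(gzip.compress(full)),
                                   len(gzip.compress(optimized))))
```

The raw sizes differ a lot, but the gzipped sizes barely do: gzip’s LZ77 back-references soak up the repeated key strings almost for free, which is exactly what the apack numbers above show.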

Basically, all that complicated and slow hoopla for saving 10Kb. No thank you.

Thank you gzip for existing!

Mozilla Release Management Team: Firefox 32 beta4 to beta5

  • 27 changesets
  • 61 files changed
  • 1186 insertions
  • 161 deletions



List of changesets:

Brian Hackett: Bug 1037666. r=billm, a=abillings - 784c7fb4c431
Byron Campen [:bwc]: Bug 963524 - Avoid setting SO_REUSEADDR when binding a UDP socket to port 0, since the linux kernel might select an already open port. r=mcmanus, a=sledru - 92364eef664a
Mike Hommey: Bug 1047791 - Use integers for audio when on Android, or when using ARM on other OSes, and disable webm encoding. r=padenot, a=NPOTB - 333e7f930c63
Boris Zbarsky: Bug 1045096 - Make sure initial about:blank in iframes have a nsDOMNavigationTiming object, so we don't end up with window.performance throwing when accessed on their window. r=smaug, a=sledru - 70277dbb9071
Bobby Holley: Bug 1048330 - Null-check the XBL scope in more places. r=smaug, a=sledru - f5df74fab22f
Brian Nicholson: Bug 987864 - Move API 13 styles to values-v13 folder. r=wesj, a=sledru - 68181edc64c1
Blair McBride: Bug 1024073 - Whitelist plugin: Roblox Plugin launcher. r=gfritzsche, a=sledru - cda6534d9780
Blair McBride: Bug 1029654 - Whitelist plugin: Box Edit. r=gfritzsche, a=sledru - 4d812a850eb1
Blair McBride: Bug 1024965 - Whitelist plugin: Nexus Personal. r=gfritzsche, a=sledru - c2fc1e357ca0
Martin Thomson: Bug 1048261 - Safe dispatch from DTLS connect to PeerConnectionImpl. r=jesup, r=bwc, a=abillings - adb28e85421f
Kai Engert: Bug 1048876 - Update Mozilla 32 to use NSS 3.16.4, a=lmandel - 22589028e561
Cameron McCormack: Bug 1018524 - Make copies of more SVGLength objects when manipulating SVGLengthLists. r=longsonr, a=lmandel - 3315c53c4bb7
Cameron McCormack: Bug 1040575 - Make a copy of SVGSVGElement.currentTranslate if it is inserted into an SVGPointList. r=longsonr, a=lmandel - 4da65dc7d057
Valentin Gosu: Bug 1035007 - Make sure hostname is inserted in the right place. r=mcmanus, a=sledru - e6cee3b7907e
Ben Kelly: Bug 995688 - Always show scrollbars in test to avoid fade effect evicting document from BF cache. r=smaug, a=test-only - 8f34703f5065
Richard Newman: Bug 1049217 - Fix CharacterValidator for API levels below 12. r=margaret, a=sledru - fc7ce6481ea9
Randell Jesup: Bug 1037000 - Include "default communications device" in the audio input enumeration for gUM. r=padenot, a=sledru - ee74d30a8968
Gregory Szorc: Bug 1045421 - Remove date time bomb from test_crash_manager.js, add logging. r=gfritzsche, a=test-only - ac0afa7b1b25
Jason Orendorff: Bug 1043690 - Part 1: Provide helper function for HTMLDocument and HTMLFormElement proxies to use from [[Set]]. r=efaust, a=lmandel - 8375886783f2
Boris Zbarsky: Bug 1043690 - Part 2: Change the codegen for DOM proxies to ignore named props when looking up property descriptors on [[Set]]. r=efaust, a=lmandel - 06542873b0dc
Jan de Mooij: Bug 1041148 - Move HoldJSObjects call to XMLHttpRequest constructor to simplify the code. r=bz, a=lmandel - 7b29fabbf26a
Ryan VanderMeulen: Bug 1043690 - Remove extraneous const usage. r=jorendorff, a=bustage - c755d28a5266
Michael Comella: Bug 1047264 - Dynamically retrieve Views for Display Mode on tablet. r=lucasr, a=sledru - e8fbf14de311
Richard Newman: Bug 1047549 - Copy libraries out of the APK when they're missing. r=blassey, a=lmandel - 9d8f79b400bf
Andrew McCreight: Bug 1038243 - Avoid static casting from nsIDOMAttr to Attr. r=smaug, a=lmandel - 551f71d3138f
Michael Wu: Bug 1044193 - Always cleanup on heap allocation path. r=glandium, a=lmandel - e51295fe2c54
Jonas Jenwald: Bug 1028735 - Revert one aspect of CIDFontType2 font parsing to the state prior to CMAP landing. r=yury, a=lmandel - 599c7756380c

Sean BoltonCommunity Lessons from LEGO (and other thoughts on community)

Communities form when you find a group of people you feel you can be yourself around. In this environment, you don’t have to self-edit as much to fit in. You get the feeling that these people share a similar view of the world in some way. Communities form when the answer to “are they like me?” is “yes.”

It’s not very surprising then that communities can form around brands as brands often represent a certain view of the world. Smart companies notice this and they engage these people in a win-win partnership to give their community value and to create value for the organization. That is no easy task. Here is a bit about how LEGO does it…

LEGO cares deeply about their community. They know that these people drive their brand, provide the best form of marketing, and can help fuel new product development. And the community members get a powerful sense of fulfillment in meeting similar people and getting to be more of a creator than a consumer – they get to make LEGO their own. When working with its community, LEGO follows these principles (a non-exhaustive list):

  1.  Every engagement must be win-win
    • This matters because only in win-win partnerships do both sides have a good experience and can the community continue to grow. It means saying no sometimes, but saying no now is what allows more yeses to happen later.
  2. LEGO does not interfere with community
    • This is powerful because it lets people all over the world organize their groups in a way that makes the most sense for them. LEGO does not have to waste resources and can be more culturally sensitive by letting others manage their groups in the ways that best suit them.
  3. Members are respected and expected to respect each other
    • Every community should have a code of conduct to ensure that people respect each other. This is common practice so that members don’t get abused.
  4. Empower people to do what they want to do
    • You can’t force a community to do something it has no passion for – that’s just not how it works. The fact that LEGO is explicit about this is a big deal, and it makes the community respect them for it. That’s powerful.

LEGO has high quality standards and the way their community team works is no exception. They have a communication structure that empowers people to help and learn from each other so that not every person needs to communicate with LEGO directly. There are designated ‘ambassadors’ that communicate directly with LEGO – they help distill and clarify communication, taking a sort of leadership role in the community. This helps LEGO better focus resources and helps build a stronger sense of community among members (win-win).

There is a massive community surrounding LEGO – over 300K members. For comparison, we currently have about 8K at Mozilla. While our communities do different things, we are all fundamentally driven to be part of a group that makes us feel like we can be more of ourselves. It gives people a sense of belonging and fulfillment – something we all want in life. That is why community management matters. Without it, these large groups of people can’t exist and that sense of belonging can be easily lost.

[Post featured on Mozilla's Community blog.]

Kim MoirMozilla pushes - July 2014

Here's the July 2014 monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a JSON file.
Like every month for the past while, we had a new record number of pushes. In reality, given that July is one day longer than June, the numbers are quite similar.


  • 12,755 pushes
    • new record
  • 411 pushes/day (average)
  • Highest number of pushes/day: 625 pushes on July 3, 2014
  • Highest pushes/hour (average): 23.51
    • new record

General remarks
Try continues to account for around 38% of all the pushes. Gaia-Try is in second place with around 31% of pushes. The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 22% of all the pushes.

July 2014 was the month with the most pushes (12,755 pushes)
June 2014 has the highest pushes/day average with 662 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
June 4th, 2014 had the highest number of pushes in one day with 662
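A quick arithmetic check of the daily average quoted above (July has 31 days); this is just a sketch, not part of the original analysis:

```rust
// July 2014: 12,755 pushes over 31 days.
fn main() {
    let pushes_per_day = 12_755.0_f64 / 31.0;
    println!("{:.0} pushes/day on average", pushes_per_day); // prints "411 pushes/day on average"
}
```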

Eric ShepherdThe Sheppy Report: August 8, 2014

It’s been another good week of Making Things Happen. I’m pleased with the productivity this week: not just mine, but the entire writing community’s.

What I’m up to

As usual, what I did this week doesn’t entirely line up with what I’d expected, but it was still productive, which is the important thing, right?

Work continued on finishing up the doc project planning page migration, and on better integration with other pages on MDN and elsewhere. I also put together the first draft of the WebGL doc plan.

I’m working on reshuffling my many regularly recurring personal appointments so that they interfere with meetings less often, but unfortunately there’s only so much I can do.

What I did this week

  • Replaced the main documentation plan page on wikimo with a link to the new page on MDN, with an explanation of why it moved.
  • Finished work on moving the Learning area doc project plan to MDN.
  • Migrated the “Writing chrome code” doc plan to MDN, then emailed Will Bamberg to let him know his planning docs had moved.
  • Wrote first draft of the WebGL doc plan and emailed dev team to request feedback.
    • Got quick feedback with WebGL 2.0 ship date and added that information to the doc plan.
  • Added the dev-doc-needed keyword to many WebGL 2.0 related bugs.
  • Filed a bug about a problem with the link editor not suggesting zone pages that have moved out of the /docs/ hierarchy.
  • Added a link to the WebRTC doc project plan to the WebRTC documentation status page.
  • Posted to the dev-media list asking for suggestions on topics to cover in the WebRTC docs.
  • Updated the MDN page about KumaScript macros to link to the new article on troubleshooting them that Stephanie Hobson wrote.
  • Did a quick copy-edit pass on the troubleshooting article, and added some information about how to use search keywords to get to macro pages quickly (to read the built-in documentation most of them have).
  • Emailed out various meeting reminders.
  • Updated the team priority list spreadsheet with updated URLs and new information.
  • Wrote agenda for writers’ staff meeting and Web API docs meeting.
  • Wrote a nifty new MDN macro, ContentFromWikimo, which imports the content of a specified block (by ID) from a page on wikimo and inserts it into the MDN page.
  • Used the ContentFromWikimo macro to embed module owner information about WebRTC, WebGL, Web Workers, and XPCOM into their doc plans.
  • Filed a number of meta/tracking bugs for the various doc plans.
  • Created meta/tracking bugs for all current documentation plans. See my standup for today for links; I’m not going to copy and paste them all here. :)

Meetings attended this week


  • #mdndev bug/triage/planning meetings.
  • Messaging discussion for MDN feature planning.
  • 1:1 meeting with Ali.
  • MDN writers’ staff meeting.
  • MDN development bug swat meeting.
  • Web APIs documentation meeting.

So, whew! Lots done! I’m particularly proud of the ContentFromWikimo macro work. It was also a lot of fun to do. I think it’ll be useful, too, at least sometimes.

I have a good feeling about next week. I think it’ll be even more productive!

Mozilla Open Policy & Advocacy BlogThank you, Mr. President.

Today, Mozilla joined with dozens of advocacy organizations and companies to urge President Obama to take action on net neutrality, in response to his recent vocal support for fair and meaningful net neutrality rules. Expressing views echoed by millions of Americans, the groups urged the FCC to stand against fast lanes for those who can afford them and slow lanes for the rest of us. The full text of the letter is below.


Mr. President:

Earlier this week, you made a strong statement in support of net neutrality by saying:

“One of the issues around net neutrality is whether you are creating different rates or charges for different content providers. That’s the big controversy here. So you have big, wealthy media companies who might be willing to pay more and also charge more for spectrum, more bandwidth on the Internet so they can stream movies faster.

I personally, the position of my administration, as well as a lot of the companies here, is that you don’t want to start getting a differentiation in how accessible the Internet is to different users. You want to leave it open so the next Google and the next Facebook can succeed.”

We want to thank you for making your support for net neutrality clear and we are counting on you to take action to ensure equality on the Internet. A level playing field has been vital for innovation, opportunity and freedom of expression, and we agree that the next big thing will not succeed without it. We need to avoid a future with Internet slow lanes for everybody except for a few large corporations who can pay for faster service.

Like you, we believe in preserving an open Internet, where Internet service providers treat data equally, regardless of who is creating it and who is receiving it. Your vision of net neutrality is fundamentally incompatible with the FCC’s plan, which would explicitly allow for paid prioritization. The only way for the FCC to truly protect an open Internet is by using its clear Title II authority. Over the next few months, we need your continued and vocal support for fair and meaningful net neutrality rules. Our organizations will continue to pressure the FCC to put forth solidly based rules, and will continue to encourage you and other elected officials to join us in doing so.

Thank you again for standing up for the open Internet so that small businesses and people everywhere have a fair shot.


ACLU, 18 Million Rising, Center for Media Justice, Center for Rural Strategies, ColorOfChange, Common Cause, Consumers Union, CREDO, Daily Kos, Demand Progress, Democracy for America, EFF, Engine, Enjambre Digital, Etsy, EveryLibrary, Fandor, Fight for the Future, Free Press, Future of Music Coalition, Greenpeace, Kickstarter, Louder, Media Action Grassroots Network, Media Alliance, Media Literacy Project, Media Mobilizing Project, Mozilla, Museums and the Web, National Alliance for Media Arts and Culture, National Hispanic Media Coalition, Open Technology Institute, OpenMedia International, Progressive Change Campaign Committee, Progressives United, Public Knowledge, Reddit, Rural Broadband Policy Group, SumOfUs, The Student Net Alliance, ThoughtWorks, United Church of Christ, OC Inc., Women’s Institute for Freedom of the Press, Women’s Media Center, Y Combinator

Matt BrubeckLet's build a browser engine! Part 1: Getting started

I’m building a toy HTML rendering engine, and I think you should too. This is the first in a series of articles describing my project and how you can make your own. But first, let me explain why.

You’re building a what?

Let’s talk terminology. A browser engine is the portion of a web browser that works “under the hood” to fetch a web page from the internet, and translate its contents into forms you can read, watch, hear, etc. Blink, Gecko, WebKit, and Trident are browser engines. In contrast, the browser’s own UI—tabs, toolbar, menu and such—is called the chrome. Firefox and SeaMonkey are two browsers with different chrome but the same Gecko engine.

A browser engine includes many sub-components: an HTTP client, an HTML parser, a CSS parser, a JavaScript engine (itself composed of parsers, interpreters, and compilers), and much more. The many components involved in parsing web formats like HTML and CSS and translating them into what you see on-screen are sometimes called the layout engine or rendering engine.

Why a “toy” rendering engine?

A full-featured browser engine is hugely complex. Blink, Gecko, WebKit—these are millions of lines of code each. Even younger, simpler rendering engines like Servo and WeasyPrint are each tens of thousands of lines. Not the easiest thing for a newcomer to comprehend!

Speaking of hugely complex software: If you take a class on compilers or operating systems, at some point you will probably create or modify a “toy” compiler or kernel. This is a simple model designed for learning; it may never be run by anyone besides the person who wrote it. But making a toy system is a useful tool for learning how the real thing works. Even if you never build a real-world compiler or kernel, understanding how they work can help you make better use of them when writing your own programs.

So, if you want to become a browser developer, or just to understand what happens inside a browser engine, why not build a toy one? Like a toy compiler that implements a subset of a “real” programming language, a toy rendering engine could implement a small subset of HTML and CSS. It won’t replace the engine in your everyday browser, but should nonetheless illustrate the basic steps needed for rendering a simple HTML document.

Try this at home.

I hope I’ve convinced you to give it a try. This series will be easiest to follow if you already have some solid programming experience and know some high-level HTML and CSS concepts. However, if you’re just getting started with this stuff, or run into things you don’t understand, feel free to ask questions and I’ll try to make it clearer.

Before you start, a few remarks on some choices you can make:

On Programming Languages

You can build a toy layout engine in any programming language. Really! Go ahead and use a language you know and love. Or use this as an excuse to learn a new language if that sounds like fun.

If you want to start contributing to major browser engines like Gecko or WebKit, you might want to work in C++ because it’s the main language used in those engines, and using it will make it easier to compare your code to theirs. My own toy project, robinson, is written in Rust. I’m part of the Servo team at Mozilla, so I’ve become very fond of Rust programming. Plus, one of my goals with this project is to understand more of Servo’s implementation. (I’ve written a lot of browser chrome code, and a few small patches for Gecko, but before joining the Servo project I knew nothing about many areas of the browser engine.) Robinson sometimes uses simplified versions of Servo’s data structures and code. If you too want to start contributing to Servo, try some of the exercises in Rust!

On Libraries and Shortcuts

In a learning exercise like this, you have to decide whether it’s “cheating” to use someone else’s code instead of writing your own from scratch. My advice is to write your own code for the parts that you really want to understand, but don’t be shy about using libraries for everything else. Learning how to use a particular library can be a worthwhile exercise in itself.

I’m writing robinson not just for myself, but also to serve as example code for these articles and exercises. For this and other reasons, I want it to be as tiny and self-contained as possible. So far I’ve used no external code except for the Rust standard library. (This also side-steps the minor hassle of getting multiple dependencies to build with the same version of Rust while the language is still in development.) This rule isn’t set in stone, though. For example, I may decide later to use a graphics library rather than write my own low-level drawing code.

Another way to avoid writing code is to just leave things out. For example, robinson has no networking code yet; it can only read local files. In a toy program, it’s fine to just skip things if you feel like it. I’ll point out potential shortcuts like this as I go along, so you can bypass steps that don’t interest you and jump straight to the good stuff. You can always fill in the gaps later if you change your mind.

First Step: The DOM

Are you ready to write some code? We’ll start with something small: data structures for the DOM. Let’s look at robinson’s dom module.

The DOM is a tree of nodes. A node has zero or more children. (It also has various other attributes and methods, but we can ignore most of those for now.)

struct Node {
    // data common to all nodes:
    children: Vec<Node>,

    // data specific to each node type:
    node_type: NodeType,
}

There are several node types, but for now we will ignore most of them and say that a node is either an Element or a Text node. In a language with inheritance these would be subtypes of Node. In Rust they can be an enum (Rust’s keyword for a “tagged union” or “sum type”):

enum NodeType {
    Text(String),
    Element(ElementData),
}

An element includes a tag name and any number of attributes, which can be stored as a map from names to values. Robinson doesn’t support namespaces, so it just stores tag and attribute names as simple strings.

struct ElementData {
    tag_name: String,
    attributes: AttrMap,
}

type AttrMap = HashMap<String, String>;

Finally, some constructor functions to make it easy to create new nodes:

fn text(data: String) -> Node {
    Node { children: vec![], node_type: Text(data) }
}

fn elem(name: String, attrs: AttrMap, children: Vec<Node>) -> Node {
    Node {
        children: children,
        node_type: Element(ElementData {
            tag_name: name,
            attributes: attrs,
        }),
    }
}
And that’s it! A full-blown DOM implementation would include a lot more data and dozens of methods, but this is all we need to get started. In the next article, we’ll add a parser that turns HTML source code into a tree of these DOM nodes.


These are just a few suggested ways to follow along at home. Do the exercises that interest you and skip any that don’t.

  1. Start a new program in the language of your choice, and write code to represent a tree of DOM text nodes and elements.

  2. Install the latest version of Rust, then download and build robinson. Open up the dom module and extend NodeType to include additional types like comment nodes.

  3. Write code to pretty-print a tree of DOM nodes.
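
For exercise 3, here is one possible sketch, using simplified stand-ins for the article’s types (robinson’s real types live in its dom module); the output format is my own arbitrary choice:

```rust
use std::collections::HashMap;

// Simplified stand-ins for the article's DOM types (assumed to have the
// same shape as robinson's dom module, minus the constructor helpers).
type AttrMap = HashMap<String, String>;

enum NodeType {
    Text(String),
    Element(ElementData),
}

struct ElementData {
    tag_name: String,
    attributes: AttrMap,
}

struct Node {
    children: Vec<Node>,
    node_type: NodeType,
}

// Exercise 3: render the tree with two spaces of indentation per level.
fn pretty_print(node: &Node, depth: usize) -> String {
    let indent = "  ".repeat(depth);
    match &node.node_type {
        NodeType::Text(data) => format!("{}\"{}\"\n", indent, data),
        NodeType::Element(element) => {
            let mut out = format!("{}<{}>\n", indent, element.tag_name);
            for child in &node.children {
                out.push_str(&pretty_print(child, depth + 1));
            }
            out
        }
    }
}

fn main() {
    // A tiny tree: an <html> element with one text child.
    let tree = Node {
        node_type: NodeType::Element(ElementData {
            tag_name: "html".to_string(),
            attributes: AttrMap::new(),
        }),
        children: vec![Node {
            node_type: NodeType::Text("Hello, world!".to_string()),
            children: vec![],
        }],
    };
    print!("{}", pretty_print(&tree, 0));
}
```

Returning a String rather than printing directly keeps the function easy to test; a real implementation might also print attributes and escape the text.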


For much more detailed information about browser engine internals, see Tali Garsiel’s wonderful How Browsers Work and its links to further resources.

For example code, here’s a short list of “small” open source web rendering engines. Most of them are many times bigger than robinson, but still way smaller than Gecko or WebKit. WebWhirr, at 2000 lines of code, is the only other one I would call a “toy” engine.

You may find these useful for inspiration or reference. If you know of any other similar projects—or if you start your own—please let me know!

To be continued

Irving ReidTelemetry Results for Add-on Compatibility Check

Earlier this year (in Firefox 32), we landed a fix for bug 760356 to reduce how often we delay starting up the browser in order to check whether all your add-ons are compatible. We landed the related bug 1010449 in Firefox 31 to gather telemetry about the compatibility check, so that we could do before/after analysis.


When you upgrade to a new version of Firefox, changes to the core browser can break add-ons. For this reason, every add-on comes with metadata that says which versions of Firefox it works with. There are a couple of straightforward cases, and quite a few tricky corners…

  • The add-on is compatible with the new Firefox, and everything works just fine.
  • The add-on is incompatible and must be disabled.
    • But maybe there’s an updated version of the add-on available, so we should upgrade it.
    • Or maybe the add-on was installed in a system directory by a third party (e.g. an antivirus toolbar) and Firefox can’t upgrade it.
  • The add-on says it’s compatible, but it’s not – this could break your browser!
    • The add-on author could discover this in advance and publish updated metadata to mark the add-on incompatible.
    • Mozilla could discover the incompatibility and publish a metadata override at to protect our users.
  • The add-on says it’s not compatible, but it actually is.
    • Again, either the add-on author or Mozilla can publish a compatibility override.

We want to keep as many add-ons as possible enabled, because our users love (most of) their add-ons, while protecting users from incompatible add-ons that break Firefox. To do this, we implemented a very conservative check every time you update to a new version. On the first run with a new Firefox version, before we load any add-ons we ask *and* each add-on’s update server whether there is a metadata update available, and whether there is a newer version of the add-on compatible with the new Firefox version. We then enable/disable based on that updated metadata, and offer the user the chance to upgrade those add-ons that have new versions available. Once this is done, we can load up the add-ons and finish starting up the browser.

This check involves multiple network requests, so it can be rather slow. Not surprisingly, our users would rather not have to wait for these checks, so in bug 760356 we implemented a less conservative approach:

  • Keep track of when we last did a background add-on update check, so we know how out of date our metadata is.
  • On the first run of a new Firefox version, only interrupt startup if the metadata is too out of date (two days, in the current implementation) *or* if some add-ons were disabled by this Firefox upgrade but are allowed to be upgraded by the user.
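
The new behavior boils down to a single predicate. A minimal sketch of that decision (illustrative names, not the actual Firefox code; I'm assuming the two-day threshold is inclusive):

```rust
// Two days, per the current implementation described above.
const MAX_METADATA_AGE_DAYS: u64 = 2;

// Interrupt first-run startup only if our add-on metadata is stale, or if
// this upgrade disabled an add-on the user is allowed to upgrade.
fn should_interrupt_startup(metadata_age_days: u64, newly_disabled_upgradeable: u32) -> bool {
    metadata_age_days >= MAX_METADATA_AGE_DAYS || newly_disabled_upgradeable > 0
}

fn main() {
    assert!(!should_interrupt_startup(1, 0)); // fresh metadata, nothing newly disabled
    assert!(should_interrupt_startup(3, 0));  // stale metadata: do the blocking check
    assert!(should_interrupt_startup(0, 1));  // an upgradeable add-on was just disabled
    println!("ok");
}
```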

Did it work?

Yes! On the Aurora channel, we went from interrupting 92.7% of the time on the 30 -> 31 upgrade (378091 out of 407710 first runs reported to telemetry), to 74.8% of the time (84930 out of 113488) on the 31 -> 32 upgrade, to only interrupting 16.4% (10158 out of 61946) so far on the 32 -> 33 upgrade.
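
Those percentages check out against the raw session counts; a quick sanity check (the helper name here is mine, not Firefox code):

```rust
// Interrupt rate as a percentage, from the session counts quoted above.
fn interrupt_rate(interrupted: u64, total: u64) -> f64 {
    100.0 * interrupted as f64 / total as f64
}

fn main() {
    println!("30 -> 31: {:.1}%", interrupt_rate(378091, 407710)); // 92.7%
    println!("31 -> 32: {:.1}%", interrupt_rate(84930, 113488));  // 74.8%
    println!("32 -> 33: {:.1}%", interrupt_rate(10158, 61946));   // 16.4%
}
```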

The change took effect over two release cycles; the new implementation was in 32, so the change from “interrupt if there are *any* add-ons the user could possibly update” to “interrupt if there is a *newly disabled* add-on the user could update” is in effect for the 31 -> 32 upgrade. However, since we didn’t start tracking the metadata update time until 32, the “don’t interrupt if the metadata is fresh” change wasn’t effective until the 32 -> 33 upgrade. I wish I had thought of that at the time; I would have added the code to remember the update time into the telemetry patch that landed in 31.

Cool, what else did we learn?

On Aurora 33, the distribution of metadata age was:

Age (days) Sessions
< 1 37207
1 9656
2 2538
3 997
4 535
5 319
6 – 10 565
11 – 15 163
16 – 20 94
21 – 25 69
26 – 30 82
31 – 35 50
36 – 40 48
41 – 45 53
46 – 50 6

So about 88% of profiles had fresh metadata when they upgraded. The tail is longer than I expected, though it’s not too thick. We could improve this by forcing a metadata ping (or a full add-on background update) when we download a new Firefox version, but we may need to be careful to do it in a way that doesn’t affect usage statistics on the AMO side.
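
Summing the first two buckets of the table above (assuming “fresh” means under the two-day threshold) gives a figure in the same ballpark:

```rust
// Session counts per metadata-age bucket from the Aurora 33 table above,
// in order: <1, 1, 2, 3, 4, 5, 6-10, 11-15, 16-20, ..., 46-50 days.
const SESSIONS: [u64; 15] = [
    37207, 9656, 2538, 997, 535, 319, 565, 163, 94, 69, 82, 50, 48, 53, 6,
];

// Fraction of upgrading profiles whose metadata was under two days old
// (the first two buckets).
fn fresh_fraction() -> f64 {
    let total: u64 = SESSIONS.iter().sum();
    let fresh: u64 = SESSIONS[..2].iter().sum();
    fresh as f64 / total as f64
}

fn main() {
    println!("fresh metadata: {:.1}% of upgrading profiles", 100.0 * fresh_fraction());
}
```

This works out to roughly 89%; the exact number depends on which buckets you count as fresh.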

What about add-on upgrades?

We also started gathering detailed information about how many add-ons are enabled or disabled during various parts of the upgrade process. The measures are all shown as histograms in the telemetry dashboard:

  • The number of add-ons (both user-upgradeable and non-upgradeable) disabled during the upgrade because they are not compatible with the new version.
  • The number of user-upgradeable add-ons disabled during the upgrade.
  • The number of add-ons that changed from disabled to enabled because of metadata updates during the compatibility check.
  • The number of add-ons that changed from enabled to disabled because of metadata updates during the compatibility check.
  • The number of add-ons upgraded to a new compatible version during the add-on compatibility check.
  • The number of add-ons that had upgrades available during the compatibility check, but the user chose not to upgrade.
  • The number of add-ons that appeared to have upgrades available, but the attempt to install the upgrade failed.

For these values, we got good telemetry data from the Beta 32 upgrade. The counts represent the number of Firefox sessions that reported that number of affected add-ons (e.g. 3170 Telemetry session reports said that 2 add-ons were XPIDB_DISABLED by the upgrade):

0 2.6M 2.6M  2.6M 2.6M  2.6M 2.6M  2.6M
1 36230 7360  59240 14780  824  121  98
2 3170 1570  2 703  5  1  0
3 648 35  0 43  1  0  0
4 1070 14  1 6  0  0  0
5 53 20  0 0  0  0  0
6 157 194  0 0  0  0  0
7+ 55 9  0 1  0  0  0

The things I find interesting here are:

  • The difference between XPIDB disabled and APPUPDATE disabled is (roughly) the add-ons installed in system directories by third party installers. This implies that 80%-ish of add-ons made incompatible by the upgrade are 3rd party installs.
  • upgraded + declined + failed is (roughly) the add-ons a user *could* update during the browser upgrade, which works out to fewer than one in 2000 browser upgrades having a useful add-on update available. I suspect this is because most add-on updates have already been performed by our regular background update. In any case, to me this implies that further work on updating add-ons during browser upgrade won’t improve our user experience much.

Mozilla WebDev CommunityWebdev Extravaganza – August 2014

Once a month, web developers from across Mozilla gather to summon cyber phantoms and techno-ghouls in order to learn their secrets. It’s also a good opportunity for us to talk about what we’ve shipped, share what libraries we’re working on, meet newcomers, and just chill. It’s the Webdev Extravaganza! Despite the danger of being possessed, the meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, check out the Air Mozilla recording, or amuse yourself with the wild ramblings that constitute the meeting notes. Or, even better, read on for a more PG-13 account of the meeting.

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Peep 1.3 is out!

There’s a new release of ErikRose‘s peep out! Peep is essentially pip, which installs Python packages, but with the ability to check downloaded packages against cryptographic hashes to ensure you’re receiving the same code each time you install. The latest version now passes through most arguments for pip install, supports Python 3.4, and installs a secondary script tied to the active Python version.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

contribute.json


pmac and peterbe, with feedback from the rest of Mozilla Webdev, have created contribute.json, a JSON schema for open-source project contribution data. The idea is to make contribute.json available at the root of every Mozilla site to make it easier for potential contributors and for third-party services to find details on how to contribute to that site. The schema is still a proposal, and feedback or suggestions are very welcome!

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor.

Name IRC Nick Project
John Whitlock jwhitlock Web Platform Compatibility API
Mark Lavin mlavin Mobile Partners

Roundtable


The Roundtable is the home for discussions that don’t fit anywhere else.

How do you feel about Playdoh?

peterbe brought up the question of what to do about Playdoh, Mozilla’s Django-based project template for new sites. Many sites that used to be based on Playdoh are removing the components that tie them to the semi-out-of-date library, such as depending on playdoh-lib for library updates. The general conclusion was that many people want Playdoh to be rewritten or updated to address long-standing issues, such as:

  • Libraries are currently either installed in the repo or included via git submodules. A requirements.txt-based approach would be easier for users.
  • Many libraries included with Playdoh were made to implement features that Django has since included, making them redundant.
  • Django now supports project templates, making the current install method of using funfactory to clone Playdoh obsolete.

pmac has taken responsibility as a peer on the Playdoh module to spend some time extracting improvements from Bedrock into Playdoh.

Helping contributors via Cloud9

jgmize shared his experience making Bedrock run on the Cloud9 platform. The goal is to make it easy for contributors to spin up an instance of Bedrock using a free Cloud9 account, allowing them to edit and submit pull requests without having to go through the complex setup instructions for the site. jgmize has been dogfooding using Cloud9 as his main development environment for a few weeks and has had positive results using it.

If you’re interested in this approach, check out Cloud9 or ask jgmize for more information.

Unfortunately, we were unable to learn any mystic secrets from the ghosts that we were able to summon, but hey: there’s always next month!

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Frederic PlourdeGecko on Wayland

At Collabora, we’re always on the lookout for cool opportunities involving Wayland, and we noticed recently that Mozilla had started to show some interest in porting Firefox to Wayland. In short, the Wayland display server is becoming very popular for being lightweight, versatile yet powerful, and is designed to be a replacement for X11. Chrome and WebKit already have Wayland ports, and we think that Firefox should have one too.

Some months ago, we wrote a simple proof of concept, basically starting from Gecko’s existing GTK3 paths and stripping all the MOZ_X11 ifdefs out of the way. We did a bunch of quick hacks to fix broken stuff, and rather easily and quickly (a couple of days) we got Firefox running on Weston (Wayland’s official reference compositor). OK, because of hard X11 dependencies, keyboard input was broken and decorations suffered a little, but that’s a very good start! Take a look at the screenshot below :)


Christian HeilmannMicrosoft’s first web page and what it can teach us

Today Microsoft released a re-creation of their first web site from 20 years ago, complete with a readme.html explaining how it was done and why some things are the way they are.

microsoft's first web site

I found this very interesting. First of all because it took me back to my beginnings – I built my first page in 1996 or so. Secondly, it is an interesting reminder of how creating things for the web has changed over time whilst our mistakes and misconceptions have stayed the same.

There are a few details worth mentioning in this page:

  • Notice that whilst it uses an image for the whole navigation, the texts in the image are underlined. Back then the concept of “underlined text = clickable and taking you somewhere” was not quite ingrained in people. We needed to remind people of this new concept, which meant consistency was king – even in images.
  • The site uses the ISMAP attribute and a server-side CGI program to turn the x and y coordinates of the click into a redirect. I remember writing these in Perl, and it is still a quite cool feature if you think about it. You get the same mouse tracking for free if you use input type=image, as that tells you where the image was clicked as form submission parameters.
  • Client-side image maps came later and were a pain to create. I remember first using CoffeeCup’s Image Mapper (and being super excited to meet Jay Cornelius, its creator, later at the Webmaster Jam Session when I was speaking there) and afterwards Adobe’s ImageReady (which turned each layer into an AREA element).
  • Table layouts came afterwards, and boy, this kind of layout would have been one hell of a complex table to create with spacer GIFs, colspan and rowspan.
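To make the difference between the two image-map techniques concrete, here is a hedged sketch – the file names, coordinates and link targets are hypothetical, not taken from the actual 1994 page:

```html
<!-- Server-side image map: ISMAP makes the browser append the click
     coordinates as "?x,y" to the link, and a CGI script on the server
     turns them into a redirect -->
<a href="/cgi-bin/imagemap/menu.map"><img src="menu.gif" ismap alt="Menu"></a>

<!-- Client-side image map (came later): the browser itself resolves
     the clicked region via MAP/AREA, no server round-trip needed -->
<img src="menu.gif" usemap="#menu" alt="Menu">
<map name="menu">
  <area shape="rect" coords="0,0,120,40" href="products.htm" alt="Products">
  <area shape="rect" coords="120,0,240,40" href="support.htm" alt="Support">
</map>
```

The server-side variant needs a running CGI program but degrades to a plain link; the client-side variant works offline and lets each AREA carry its own alt text.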

And this, to me, is the most interesting part here: one of the first web sites created by a large corporation makes the most basic mistake in web design – starting with a fixed design created in a graphical tool and trying to create the HTML to make it work. In other words: putting print on the web.

The web was meant to be consumed on any device capable of HTTP and text display (or voice, or whatever you want to turn the incoming text into). Text browsers like Lynx were not uncommon back then. And here is Microsoft creating a web page that is one big image with no text alternative. Also interesting is that the image is 767 × 513 pixels. Back then I had a computer capable of a 640 × 480 resolution, and browsers didn’t scale pictures automatically. This means I would have had quite horrible scrollbars.

If you had a text browser, of course there is something for you:

If your browser doesn’t support images, we have a text menu as well.

This means that this page is also the first example of graceful degradation – years before JavaScript, Flash or DHTML. It also means that the navigation menu of the page had to be maintained in two places (or generated by a CGI script on the server). Granted, the concept of progressive enhancement wasn’t even spoken of then, and it was almost impossible with the technology of the day (could you detect whether images are supported and then load the image version and hide the text menu? Probably with a beacon…).
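In markup terms, that dual navigation probably looked something like this hypothetical sketch (the file names are invented, not from the real page):

```html
<!-- Primary navigation: one big server-side image map -->
<a href="/cgi-bin/imagemap/home.map"><img src="home.gif" ismap alt=""></a>

<!-- Hand-maintained fallback for text browsers such as Lynx;
     every menu change had to be duplicated here as well -->
<p>If your browser doesn't support images, we have a
<a href="textmenu.htm">text menu</a> as well.</p>
```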

And this haunts us to this day: the first demos of web technology already tried to be pretty and shiny instead of embracing the unknown that is the web. Fixed layouts were a problem then and still are. Trying to make them work meant a lot of effort and maintainability debt. This gets worse the more technologies you rely on and the more steps you put in between what you code and what the browser is meant to display for you.

It is the right of the user to resize a font. It is completely impossible to make assumptions about the ability, screen size, connection speed or technical setup of the people we serve our content to. As Brad Frost put it, we have to Embrace the Squishiness of the web and leave breathing room in our designs.

One thing, however, is very cool: this page is 20 years old, the technology it is recreated in is the same age. Yet I can consume the page right now on the newest technology, in browsers Microsoft never dreamed of existing (not that they didn’t try to prevent that, mind you) and even on my shiny mobile device or TV set.

Let’s see if we can do the same with apps created right now for iOS or Android.

This is the power of the web: what you build now in a sensible, thought-out and progressively enhanced manner will always stay consumable. Things you force into a more defined and controlled format will not. Something to think about. Nobody stops you from building an amazing app for one platform only. But don’t pretend what you did there is better or even comparable to a product based on open web technologies. They are different beasts with different goals. And they can exist together.

Robert O'Callahan