Webmaker: MozFest 2014 begins today


Welcome to MozFest!

Today marks the beginning of the fifth annual Mozilla Festival, one of the world’s biggest celebrations of the open web.

More than 1,600 participants from countries around the globe will gather at Ravensbourne in East London for a weekend of collaborating, building prototypes, designing innovative web literacy curricula and discussing how the ethos of the open web can contribute to the fields of science, journalism, advocacy and more.



Envisioning the future of the open web

In the next decade, billions more people will be coming online for the first time, largely thanks to the increased accessibility and affordability of mobile devices. There is a growing concern that the web of the future will have little to offer us except closed social networks and media consumption using apps, services and platforms created by a few big players. Additionally, troubling questions are emerging about how our online activity is monitored by governments and corporations. In the face of these threats, it’s crucial that we maintain our freedom, independence and agency as creators of the web, not just consumers.


Ambitious goals for MozFest 2014

MozFest brings together a passionate, global cohort to establish the open values that will govern the web of the future. Our aim this year is to develop tools and practices to keep the democratic principles of the Internet alive. We’ll be strategizing how to use both distributed organizing and skill-sharing to engage the global open web community. Web literacy – the critical skills necessary to read, write and participate on the Internet – is central to this mission. We’ll address the challenges facing the Internet and explore how to spread web literacy on a global scale through hands-on, interactive sessions organized into 11 themed tracks.


Inspiring keynote speakers

While the motto of MozFest is Less Yack, More Hack, participants will be treated to some engaging keynote speakers including Baroness Beeban Kidron, Mary Moloney from CoderDojo, Mark Surman, Executive Director of the Mozilla Foundation, and Mitchell Baker, Executive Chairwoman of Mozilla.

Dive in

MozFest is our biggest party of the year. If you’re celebrating with us in London, we invite you to dive in, meet some kindred spirits and start hacking. If you’re interested in joining the festivities from afar, check out these great options for remote participation.




hacks.mozilla.org: SVG & colors in OpenType fonts

Sample of a colorfont


Until recently, having more than one color in a glyph of a vector font was technically not possible. Getting a polychrome letter required multiplying the content for every color. As with many other techniques before, it took some time for digital type to overcome the constraints of the old technology. When printing with wood or lead type, the limitation of one color per glyph is inherent (if you don’t count random gradients). More than one color per letter required separate fonts for the differently colored parts and a new print run for every color. This has been done beautifully, and pictures of some magnificent examples are available online. Using overprinting, the impression of three colors can be achieved with just two.

Overprinting colors
Simulation of two overprinting colors resulting in a third.

Digital font formats kept the limitation of one ‘surface’ per glyph. There can be several outlines in a glyph, but when the font is used to set type, the assigned color applies to all of them. Analogous to letterpress, the content needs to be doubled and superimposed to get more than one color per glyph. Multiplying content is not an elegant solution, and it is a constant source of errors.

It took emoji for the demand for multi-colored fonts to become big enough to develop additional tables that store this information within OpenType fonts. As of this writing there are several different ways to implement this. Adam Twardoch compares all proposed solutions in great detail on the FontLab blog.

To me the Adobe/Mozilla way looks the most intriguing.

Upon its proposal it was discussed by a W3C community group and published as a stable document. The basic idea is to store the colored glyphs as SVGs in the OpenType font. This depends on the complexity of your typeface, of course, but SVGs should usually result in a smaller file size than PNGs. With the spread of high-resolution screens, vectors also seem to be a better solution than pixels. The possibility to animate the SVGs is an interesting addition and will surely be used in interesting (and very annoying) ways. BLING BLING.


I am not a font technician or a web developer, just very curious about these new developments. There might be other ways, but this is how I managed to build colorful OpenType fonts.

In order to make your own you will need a font editor. There are several options, like RoboFont and Glyphs (both Mac only), FontLab and the free FontForge. RoboFont is the editor of my choice, since it is highly customizable and you can build your own extensions with Python. In a new font I added as many new layers as the number of colors I wanted to have in the final font. Either draw in the separate layers right away, or just copy the outlines into the respective layer after you’ve drawn them in the foreground layer. With the very handy Layer Preview extension you can preview all layers overlapping. You can also just increase the size of the thumbnails in the font window; at some point they will show all layers. Adjust the colors to your liking in the Inspector, since they are used for the preview.

RoboFont Inspector
Define the colors you want to see in the Layer Preview
A separated letter
Layer preview
The outlines of the separate layers and their combination

When you are done drawing your outlines, you will need to save a UFO for every layer/color. I used a little Python script to save them in the same place as the main file:

f = CurrentFont()
path = f.path
for layer in f.layerOrder:
    newFont = RFont()
    for g in f:
        orig = g.getLayer(layer)
        newFont.newGlyph(g.name)
        newFont[g.name].appendGlyph(orig)
        newFont[g.name].width = orig.width
        newFont[g.name].update()
    newFont.info.styleName = layer
    newFont.save(path[:-4] + "_%s" % layer + ".ufo")
print "Done Splitting"

Once I had all my separate UFOs I loaded them into TransType from FontLab. Just drop your UFOs in the main window and select the ones you want to combine. In the Effect menu click ‘Overlay Fonts …’. You get a preview window where you can assign an RGBA value for each UFO; then hit OK. Select the newly added font in the collection and export it as OpenType (TTF). You will get a folder with all colorfont versions.

The preview of your colorfont in TransType.


In case you don’t want to use TransType, you might have a look at the very powerful RoboFont extension by Jens Kutílek called RoboChrome. You will need a separate version of your base glyph for every color, which can also be done with a script if you have all of your outlines in layers.

f = CurrentFont()
selection = f.selection
for l, layer in enumerate(f.layerOrder):
    for g in selection:
        name = g + ".layer%d" % l
        f.newGlyph(name)
        l_glyph = f[g].getLayer(layer)
        f[name].appendGlyph(l_glyph)
        f[name].width = f[g].width
        f[name].mark = (.2, .2, .2, .2)
print "Done with the Division"

For RoboChrome you will need to split your glyph into several glyphs.


You can also modify the SVG table of a compiled font, or insert your own if it does not have one yet. To do so I used the very helpful fontTools by Just van Rossum. Just generate an OTF or TTF with the font editor of your choice. Open the Terminal and type ttx (assuming you are on Mac OS and have fontTools installed), drop the font file in the Terminal window and hit return. fontTools will convert your font into an XML file (YourFontName.ttx) in the same folder. This file can then be opened, modified and recompiled into an OTF or TTF.

This can be quite helpful to streamline the SVG compiled by a program and thereby reduce the file size. I rewrote the SVG of a 1.6 MB font to get it down to 980 KB. When used as a webfont, that makes quite a difference. If you want to add your own SVG table to a font that does not have one yet, you might read a bit about the required header information. The endGlyphID and startGlyphID for the glyph you want to supply with SVG data can be found in the <GlyphOrder> table.

<svgDoc endGlyphID="18" startGlyphID="18">
    <!-- here goes your svg -->
</svgDoc>
<svgDoc endGlyphID="19" startGlyphID="19">...</svgDoc>
<svgDoc endGlyphID="20" startGlyphID="20">...</svgDoc>
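Streamlining an SVG table entry by hand is tedious, so a small script can do a first pass. This is only an illustrative sketch (the `compact_svg` helper and the sample document are made up for this example, not part of the fontTools workflow):

```python
import re

def compact_svg(svg_text):
    # collapse whitespace between tags and squeeze runs of spaces;
    # a crude first pass at shrinking an SVG table entry
    svg_text = re.sub(r">\s+<", "><", svg_text.strip())
    return re.sub(r"\s{2,}", " ", svg_text)

doc = """<svg>
    <circle cx="10" cy="10" r="5"/>
</svg>"""
print(compact_svg(doc))  # -> <svg><circle cx="10" cy="10" r="5"/></svg>
```

Real minification tools go much further (shortening path data, dropping default attributes), but even whitespace removal adds up across hundreds of glyphs.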

One thing to keep in mind is the two different coordinate systems. Contrary to a digital font, SVG has a y-down axis. So you either have to draw in the negative space, or you draw reversed and then mirror everything with a transform such as scale(1, -1).

Y-axis comparison
While typefaces usually have a y-up axis SVG uses y-down.
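If you prefer to flip the outline coordinates themselves rather than relying on a transform, the conversion is just a sign change on y. A quick sketch (the `font_to_svg` helper and the sample points are hypothetical):

```python
def font_to_svg(points):
    # fonts use a y-up coordinate system, SVG a y-down one:
    # mirroring is a sign flip on the y coordinate
    return [(x, -y) for (x, y) in points]

print(font_to_svg([(0, 0), (100, 700)]))  # -> [(0, 0), (100, -700)]
```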


Now, if you really want to pimp your fonts, you should add some unnecessary animation to annoy everybody. Just insert it between the opening and closing tags of whatever you want to modify. Here is an example of a circle changing its fill-opacity from zero to 100% over a duration of 500ms in a loop.

<animate attributeName="fill-opacity"
         values="0;1"
         dur="500ms"
         repeatCount="indefinite"/>


Technically these fonts should work in any application that handles OTFs or TTFs, but as of this writing only Firefox shows the SVG. If the rendering is not supported, the application will just use the regular glyph outlines as a fallback. So if you have your font(s) ready, it’s time to write some CSS and HTML to test and display them on a website.

The @font-face

@font-face {
font-family: "Colors-Yes"; /* reference name */
src: url('./fonts/Name_of_your_font.ttf');
font-weight: 400; /* or whatever applies */
font-style: normal; /* or whatever applies */
text-rendering: optimizeLegibility; /* maybe */
}

The basic css

.color_font { font-family: "Colors-Yes"; }


<p class="color_font">Shiny polychromatic text</p>


As of this writing (October 2014) the format is supported only by Firefox (26+). Since it was initiated by Adobe and Mozilla, there might be broader support in the future.

While using SVG has the advantages of reasonably small files and content that does not have to be multiplied, it brings one major drawback: since the colors are ‘hard-coded’ into the font, there is no way to access them with CSS. Hopefully this will change with the implementation of a <COLR/CPAL> table.

There is a bug that keeps animations from being played in Firefox 32. While animations are rendered in the current version (33), this might change for obvious reasons.

Depending on how you establish your SVG table, it might blow up and result in fairly big files. Be aware of that in case you use these fonts to render the most crucial content of your websites.


Links, Credits & Thanks

Thanks Erik, Frederik, Just and Tal for making great tools!

Software Carpentry: A New Lesson Template, Version 2

Update: this post now includes feedback from participants in the instructor training session run at TGAC on Oct 22-23, 2014. Please see the bottom of this page for their comments.

Thanks to everyone for their feedback on the first draft of our new template for lessons. The major suggestions were:

  1. We need to explain how this template supports student experience, quality lesson planning, etc. It's not clear now how compliance with these Markdown formatting rules will help improve teaching and learning.
  2. The template needs to be much simpler. As Andromeda Yelton said, "It just looks to me like a minefield of ways to get things wrong—things that have nothing to do with pedagogy..."
  3. There needs to be a validator that authors can run locally before submitting changes or new lessons. (The proposal did mention that make check would run bin/, but this point needs to be more prominent.)
  4. Every topic should have at least one challenge, and challenges should be explicitly connected to specific learning objectives.
  5. We need a clearer explanation of the difference between the reference guide (which is meant to be a cheat sheet for learners to take away) and the instructor's guide (which is meant to be tips and tricks for teaching). We should also suggest formatting rules for both.
  6. The instructor's guide should explicitly present each lesson's "legend", i.e., the story that ties it together which instructors gradually reveal to learners.
  7. We need to decide whether the instructor's guide is a separate document, or whether there are call-out sections in each topic for instructors. The former puts the whole story in one place, and helps updaters to see the whole thing when making changes; the latter puts it in context, and helps updaters check that the instructor's material is consistent with the lesson material.
  8. Every topic should explicitly list the time required to teach it. (We should do this for topics, rather than for whole lessons, because people often don't get through all of the latter, which makes timing reports difficult to interpret.)
  9. We need to make it clear that lessons must be CC-BY licensed to permit remixing.

With all that in mind, here's another cut at the template—as before, we'd be grateful for comments. Note that this post mingles description of "what" with explanation of "why"; the final guide for people building lessons will disentangle them to make the former easier to follow.

Note also that Trevor Bekolay has drafted an implementation for people who'd like to see what the template would look like in practice. There's still work to do (see below), but it's a great start. Thanks to Trevor, the other Trevor, Erik with a 'k', Molly, Emily, Karin, Rémi, and Andromeda for their feedback.

To Do

  • Some people suggested getting rid of the web/ folder and having lessons load CSS, JavaScript, and standard images from the web. This would reduce the size of the repository and help ensure consistency, but (a) a lot of people write when they're offline (I'm doing it right now), and (b) people may not want their lessons' appearance to change outwith their control.
  • We need to figure out how example programs will reference data files (i.e., what paths they will use). See the notes under "Software, Data, and Images" below for full discussion.
  • Trevor Bekolay writes:
    I took a stab at implementing minimal motivation slides. Unfortunately this isn't very clean right now; I just included the <section> and <script> tags in the Markdown, which I know we want to avoid. I initially had the slides in a separate Markdown file, which is possible with reveal.js. There are a few weird things with this though, which we may or may not be able to fix, since we're limited in what we can do with Jekyll. Briefly, we can have the slides.html layout do something like this:
    <div class="slides"><section data-markdown="blog/2014/10/new-lesson-template-v2.html" data-separator="^\n\n\n" data-vertical="^\n\n"></section></div>
    The only wart with this is that the Markdown file (i.e., page.path) doesn't get copied to _site. I couldn't figure out a way to do it using vanilla Jekyll, but it might be possible. Even if it does get copied, however, we might have to strip out the YAML header.


Terminology

  • A lesson is a complete story about some subject, typically taught in 2-4 hours.
  • A topic is a single scene in that story, typically 5-15 minutes long.
  • A slug is a short identifier for something, such as filesys (for "file system").

Design Choices

  • We define everything in terms of Markdown. If lesson authors want to use something else for their lessons (e.g., IPython Notebooks), it's up to them to generate and commit Markdown formatted according to the rules below.
  • We avoid putting HTML inside Markdown: it's ugly to read and write, and error-prone to process. Instead, we put things that ought to be in <div> blocks, like the learning objectives and challenge exercises, in blocks indented with >, and do a bit of post-processing to attach the right CSS classes to these blocks.
  • Whatever Markdown-to-HTML converter we use must support {.attribute} syntax for specifying anchors and classes rather than the clunky HTML-in-Markdown syntax our current notes have to use to be compatible with Jekyll.
  • Any "extra" metadata (e.g., the human language of the lesson) will go into the YAML header of the lesson's index page rather than into a separate configuration file.
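The post-processing step mentioned above — attaching CSS classes to the blockquote-indented blocks — could look roughly like this sketch (the `attach_class` helper and the HTML shape it expects are assumptions for illustration, not the project's actual tooling):

```python
import re

def attach_class(html):
    # move a "{.challenge}"-style attribute from a blockquote's
    # heading onto the blockquote element itself
    return re.sub(
        r'<blockquote>\s*<h2>(.*?)\s*\{\.(\w+)\}</h2>',
        r'<blockquote class="\2"><h2>\1</h2>',
        html,
    )

html = "<blockquote><h2>Learning Objectives {.objectives}</h2>"
print(attach_class(html))
# -> <blockquote class="objectives"><h2>Learning Objectives</h2>
```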

Justification and Tutorial

The main Software Carpentry website will contain a one-page tutorial explaining (a) how to create and update lessons and (b) how the various parts of the template support better teaching. A sketch of the second of these is:

  • A standard layout so that:
    1. Lessons have the same look and feel, and can be navigated in predictable ways, even when they are written by different (and multiple) people.
    2. Contributors know where to put things when they are extending or modifying lessons.
    3. Content can more easily be checked. For example, we want to make sure that every learning objective is matched by a challenge, and that every challenge corresponds to one or more learning objectives.
    In the longer term, a standard format will help us build tools, but the format must be justifiable in terms of short-term gains for instructors and learners.
  • One short page per topic: to show each learning sprint explicitly, and to create small chunks for recording timings. The cycle we expect is:
    1. Explain the topic's objectives.
    2. Teach it.
    3. Do one or more challenges (depending on time).
  • Introductory slides: to give learners a sense of where the next couple or three hours are going to take them.
  • Reference guide: because everybody wants a cheat sheet. This includes a glossary of terms to help lesson authors think through what they expect learners to be unfamiliar with, and to make searching through lessons easier.
  • Instructor's guide: our collected wisdom, and solutions to the challenge exercises. Once lessons have been reformatted, we will ask everyone who teaches for us to review and update the instructor's guide for each lesson they taught after each workshop. Note that the instructor's guide (including challenge solutions) will be on the web, both because we believe in openness, and because it's going to be publicly readable anyway.
  • Tools: because machines should check formatting rules, not people.

Overall Layout

Each lesson is stored in a directory laid out as described below. That directory is a self-contained Git repository (i.e., there are no submodules or clever tricks with symbolic links).

  1. index.md: the home page for the lesson. (See "Home Page" below.)
  2. dd-slug.md: the topics in the lesson. dd is a sequence number such as 01, 02, etc., and slug is an abbreviated single-word mnemonic for the topic. Thus, 03-filesys.md is the third topic in this lesson, and is about the filesystem. (Note that we use hyphens rather than underscores in filenames.) See "Topics" below.
  3. intro.md: slides for a short introductory presentation (three minutes or less) explaining what the lesson is about and why people would want to learn it. See "Introductory Slides" below.
  4. reference.md: a cheat sheet summarizing key terms and commands, syntax, etc., that can be printed and given to learners. See "Reference Guide" below.
  5. guide.md: the instructor's guide for the lesson. See "Instructor's Guide" below.
  6. code/: a sub-directory containing all code samples. See "Software, Data, and Images" below.
  7. data/: a sub-directory containing all data files for this lesson. See "Software, Data, and Images" below.
  8. img/: images (including plots) used in the lesson. See "Software, Data, and Images" below.
  9. tools/: tools for managing lessons. See "Tools" below.
  10. _layouts/: page layout templates. See "Layout" below.
  11. _includes/: page inclusions. See "Layout" below.
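The filename rule for topics (two digits, a hyphen rather than an underscore, a one-word slug) is exactly the kind of thing the validator should enforce. A sketch of such a check (this regex is an assumption, not the project's actual validator):

```python
import re

# dd-slug.md: two digits, a hyphen (not an underscore), a one-word slug
TOPIC_RE = re.compile(r"^\d{2}-[a-z]+\.md$")

print(bool(TOPIC_RE.match("03-filesys.md")))  # True
print(bool(TOPIC_RE.match("03_filesys.md")))  # False: underscore not allowed
```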

Home Page

index.md must be structured as follows:

---
layout: lesson
title: Lesson Title
keywords: ["some", "key terms", "in a list"]
---

Paragraph of introductory material.

> ## Prerequisites
> A short paragraph describing what learners need to know
> before tackling this lesson.

## Topics

* [Topic Title 1](01-slug.html)
* [Topic Title 2](02-slug.html)

## Other Resources

* [Introduction](intro.html)
* [Reference Guide](reference.html)
* [Instructor's Guide](guide.html)


  • The description of prerequisites is prose for human consumption, not a machine-comprehensible list of dependencies. We may supplement the former with the latter once we have more experience with this lesson format and know what we actually want to do. The block must be titled "Prerequisites" so we can detect it and style it properly.
  • Software installation and configuration instructions aren't in the lesson, since they may be shared with other lessons. They will be stored centrally on the Software Carpentry web site and linked from the lessons that need them.


Topics

Each topic must be structured as follows:

---
layout: topic
title: Topic Title
minutes: MM
---
> ## Learning Objectives {.objectives}
> * Learning objective 1
> * Learning objective 2

Paragraphs of text mixed with:

~~~ {.python}
some code:
    to be displayed
~~~
~~~ {.output}
output from the program
~~~
~~~ {.error}
error reports from program (if any)
~~~
and possibly including:

> ## Callout Box {.callout}
> An aside of some kind.

> ## Challenge Title {.challenge}
> Description of a single challenge.
> There may be several challenges.


  1. The "expected time" heading is called minutes to encourage people to create topics that are short (10-15 minutes at most).
  2. There are no sub-headings inside a topic other than the ones shown: if a topic needs sub-headings, it should be broken into two or more topics.
  3. We need to figure out how to connect challenges back to learning objectives. Markdown doesn't appear to allow us to add id attributes to list elements, or to create anchors that challenges can refer back to.

Introductory Slides

Every lesson must include a short slide deck suitable for a short presentation (3 minutes or less) that the instructor can use to explain to learners how knowing the subject will help them. Slides are written in Markdown, and compiled into HTML for use with reveal.js.


  1. We should provide an example.

Reference Guide

The reference guide is a cheat sheet for learners to print, doodle on, and take away. The format of the actual guide is deliberately unconstrained for now, since we'll need to see a few before we can decide how they ought to be laid out (or whether they need to be laid out the same way at all).

The last thing in it must be a Level-2 heading called "Glossary", followed by definitions of key terms. Each definition must be formatted as a separate blockquote indented with > signs:

---
layout: reference
---

...commands and examples...

## Glossary

> **Key Word 1**: the definition
> relevant to the lesson.

> **Key Word 2**: the definition
> relevant to the lesson.

Again, we use blockquotes because standard [sic] Markdown doesn't have a graceful syntax for <div> blocks. If definition lists become part of CommonMark, or if we standardize on Pandoc as our translation engine, we can use definition lists here instead of hacking around with blockquotes.

Instructor's Guide

Many learners will go through the lessons outside of class, so it seems best to keep material for instructors in a separate document, rather than interleaved in the lesson itself. Its structure is:

---
title: Instructor's Guide
---
## Overall

One or more paragraphs laying out the lesson's legend.

## General Points

* Point
* Point

## Topic 1

* Point
* Point

## Topic 2

* Point
* Point


  1. The topic headings must match the topic titles. (Yes, we could define these as variables in a configuration file and refer to those variables everywhere, but in this case, repetition will be a lot easier to read, and our validator can check that the titles line up.)
  2. The points can be anything: specific ways to introduce ideas, common mistakes learners make and how to get out of them, or anything else.
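A validator check for note 1 above might be sketched like this (the function name and expected heading order are assumptions, not the project's actual tooling):

```python
def check_guide_headings(topic_titles, guide_headings):
    # the instructor's guide must open with "Overall" and
    # "General Points", then repeat each topic title in order
    expected = ["Overall", "General Points"] + list(topic_titles)
    return list(guide_headings) == expected

print(check_guide_headings(
    ["Topic 1", "Topic 2"],
    ["Overall", "General Points", "Topic 1", "Topic 2"],
))  # -> True
```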

Software, Data, and Images

All of the software samples used in the lesson must go in a directory called code/. Stand-alone data files must go in a directory called data/. Groups of related data files must be put together in a sub-directory of data/ with a meaningful (short) name.

Images used in the lessons must go in an img/ directory. We strongly prefer SVG for line drawings, since they are smaller, scale better, and are easier to edit. Screenshots and other raster images must be PNG or JPEG format.


  1. This mirrors the layout a scientist would use for actual work (see Noble's "A Quick Guide to Organizing Computational Biology Projects" or Gentzkow and Shapiro's "Code and Data for the Social Sciences: A Practitioner's Guide").
  2. However, it may cause novice learners problems. If code/ includes a hard-wired path to a data file, that path must be either datafile.ext or data/datafile.ext. The first will only work if the program is run with the lesson's root directory as the current working directory, while the second will only work if the program is run from within the code/ directory. This is a learning opportunity for students working from the command line, but a confusing annoyance inside IDEs and the IPython Notebook (where the tool's current working directory is less obvious). And yes, the right answer is to pass filenames on the command line, but that requires learners to understand how to get command line arguments, which isn't something they'll be ready for in the first hour or two.
  3. We have removed the requirement for an index file in the code/ and data/ directories. It is tempting to require code fragments in topics to have an extra attribute src="code/filename.ext" so that we can prune files that are no longer used as lessons change, but that may be more effort than authors are willing to put in.


Tools

The tools/ directory contains tools to help create and maintain lessons:

  • tools/check: make sure that everything is formatted properly, and print error messages identifying problems if it's not.
  • tools/build: build the lesson website locally for previewing. This assumes tools/check has given the site a clean bill of health.
  • tools/update: run the right Git commands to update shared files (e.g., layout templates).
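To make tools/check concrete, here are two illustrative rules it might apply to a topic file. This is a sketch under assumptions (the `check_topic` helper and its rules are invented for this example; the real checker would do much more):

```python
import re

def check_topic(text):
    # two example rules: a topic must declare its expected time
    # and contain at least one challenge block
    errors = []
    if not re.search(r"^minutes:\s*\d+", text, re.M):
        errors.append("missing 'minutes:' header")
    if "{.challenge}" not in text:
        errors.append("no challenge block")
    return errors

topic = "layout: topic\ntitle: T\nminutes: 10\n\n> ## Try it {.challenge}\n"
print(check_topic(topic))  # -> [] (no errors)
```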


Layout

The template still contains _layouts/ and _includes/ directories for page layout templates and standard inclusions. These are needed to support lesson preview.

Major Changes

  • We no longer rely on Make. Instead, the two key tools are scripts in the tools/ directory.
  • There is no longer a separate glossary page. Instead, the glossary is part of the reference guide given to learners.
  • The index page no longer lists overall learning objectives, since learning objectives should all be paired with challenges.
  • Topic pages no longer have key points: anything that would have gone here properly belongs in the reference guide.

Feedback from TGAC Instructor Trainees

Participants in the instructor training session run at TGAC on Oct 22-23 gave us feedback on the content shown above. Their points are listed below; we'll try to factor them into the final template.


  • Details +2
  • Lots of technical detail
  • Enables flexibility - adding contents
  • Markdown
  • Helps to structure / think about content
  • Good outline of what you want to do
  • Good organisation
  • Enough detail for somebody who doesn't have much experience
  • Uncomplicated visually
  • Required variables section
  • Proper highlighting for the syntax part
  • Clearly listed variables


  • Assumed knowledge (keywords) +2
  • Not much introduction +2
  • Overwhelming
  • Some terms jargon unclear
  • Not live yet so you can't check if works
  • Mixed instructions (website + Jekyll info)
  • Text on the lesson template needs reordering (restructuring)
  • See Markdown rendered so that it's easier to review
  • Key info down at the bottom
  • More visual info
  • No "Get in touch" info
  • Customizing lessons badly explained
  • Which md and translators to use
  • Colours (background + foreground)
  • for email


  • Maybe two different overviews (depending on the audience) +2
  • Why these engineering choices were made? (if that was supposed to be simple)
  • Troubleshooting?
  • Shortcut to the "how to set it up and skip the whole info"?
  • How is feedback to lessons made available to others?
  • Metasection on each lesson - which audience it is particularly working well with?
  • Why should we create our own website?

The Mozilla Blog: Introducing the 2015 Knight-Mozilla Fellows

The Knight-Mozilla Fellowships bring together developers, technologists, civic hackers, and data crunchers to spend 10 months working on open source code with partner newsrooms around the world. The Fellowships are part of the Knight-Mozilla OpenNews project, supported by the John … Continue reading

hacks.mozilla.org: The Visibility Monitor supported by Gaia

With booming demand for ultra-low-price devices, we have to budget each resource of the device, such as CPU, RAM, and Flash, much more carefully. Here I want to introduce the Visibility Monitor, which has existed in Gaia for a long time.


The Visibility Monitor originated in Gaia's Gallery app and appeared for the first time in Bug 809782 (gallery crashes if too many images are available on sdcard). It solves the memory shortage caused by having too many images in the Gallery app. After a while, the Tag Visibility Monitor, the “brother” of the Visibility Monitor, was born. Their functionality is almost the same, except that the Tag Visibility Monitor uses pre-assigned tag names to filter the elements that need to be monitored. We are therefore going to use the Tag Visibility Monitor as the example in the following sections; of course, everything applies to the Visibility Monitor as well.

For your information, the Visibility Monitor was written by JavaScript master David Flanagan. He is also the author of JavaScript: The Definitive Guide and works at Mozilla.

Working Principle

Basically, the Visibility Monitor removes the images that are outside of the visible screen from the DOM tree, so Gecko has the chance to release the image memory which is temporarily used by the image loader/decoder.

You may ask: “This can be done in Gecko. Why do it in Gaia?” In fact, Gecko enables a visibility monitor by default; however, it only releases the decoded image buffers (the ones uncompressed by the image decoder), while the original compressed images, fetched by the image loader from the Internet or the local file system, are still kept in memory. The Visibility Monitor in Gaia, by contrast, completely removes images from the DOM tree, including the originals temporarily stored by the image loader. This feature is extremely important for Tarako, the codename of the Firefox OS low-end device project, which has only 128 MB of memory.

Taking the graphic above as an example, we can divide the whole page into:

  • display port
  • pre-rendered area
  • margin
  • all other area

When the display port moves up and down, the Visibility Monitor dynamically loads the pre-rendered area. Images outside of the pre-rendered area will not be loaded or decompressed. The Visibility Monitor treats the margin as a dynamically adjustable parameter.
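The core bookkeeping can be pictured with a small sketch (the real implementation is JavaScript inside Gaia; the `visible_range` helper and the numbers here are illustrative): given the heights of the monitored elements, the scroll position, the viewport height and the margin, decide which elements fall inside the pre-rendered area.

```python
def visible_range(item_heights, scroll_top, viewport, margin):
    # indices of items that intersect the pre-rendered area:
    # [scroll_top - margin, scroll_top + viewport + margin)
    top = scroll_top - margin
    bottom = scroll_top + viewport + margin
    shown, y = [], 0
    for i, h in enumerate(item_heights):
        if y + h > top and y < bottom:
            shown.append(i)
        y += h
    return shown

# five 100 px rows, a 150 px viewport scrolled to 100 px, 50 px margin
print(visible_range([100] * 5, 100, 150, 50))  # -> [0, 1, 2]
```

Everything outside the returned range can be detached from the DOM tree so Gecko may reclaim its image memory.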

  • The higher the margin value, the bigger the area Gecko has to pre-render, which leads to more memory usage and smoother scrolling (higher FPS).
  • Vice versa: the lower the margin, the smaller the area Gecko has to pre-render, which leads to less memory usage and less smooth scrolling (lower FPS).

Because of this working principle, we can adjust the parameters and image quality to match our demands.
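The principle above can be sketched in plain JavaScript. This is a minimal illustration under stated assumptions, not the actual Gaia implementation: it computes the pre-rendered area as the display port extended by the margin, and uses a one-pixel-overlap test to classify each monitored element as on or off screen.

```javascript
// Minimal sketch of the Visibility Monitor's core calculation
// (illustrative only; the real Gaia code also handles DOM mutations,
// resize events and the scrollDelta threshold).

// The pre-rendered area is the display port extended by `margin`
// pixels above and below.
function preRenderedArea(scrollTop, viewportHeight, margin) {
  return {
    top: Math.max(0, scrollTop - margin),
    bottom: scrollTop + viewportHeight + margin
  };
}

// An element counts as "on screen" as soon as a single pixel of it
// overlaps the pre-rendered area.
function isOnscreen(elementTop, elementHeight, area) {
  return elementTop < area.bottom && elementTop + elementHeight > area.top;
}

// Walk the monitored elements and fire the appropriate callback.
function updateVisibility(elements, area, onscreen, offscreen) {
  elements.forEach(function (el) {
    if (isOnscreen(el.top, el.height, area)) {
      onscreen(el);
    } else {
      offscreen(el);
    }
  });
}
```

For example, with a 480px-tall display port scrolled to 1000px and a 360px margin, the pre-rendered area spans 640px to 1840px, so an element at 600px with a height of 100px still counts as on screen.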


It's impossible to "have your cake and eat it too", and likewise it's impossible to use the Visibility Monitor without accepting its constraints. The prerequisites for using the Visibility Monitor are listed below:

The monitored HTML DOM Elements are arranged from top to bottom

The web's default layout flows from top to bottom, but CSS properties such as flex-flow can reverse it to bottom-to-top. Supporting such layouts would make the Visibility Monitor more complex and lower the FPS (a result we do not like), so they are not acceptable. If you use such a layout, the Visibility Monitor shows nothing in the areas where images should appear and reports errors instead.

The monitored HTML DOM Elements cannot be absolutely positioned

The Visibility Monitor calculates the height of each HTML DOM element to decide whether to display it. When an element is fixed at a certain location, this calculation becomes more complex, which is unacceptable. If you use absolute positioning, the Visibility Monitor shows nothing in the area where it should display images and reports an error.

The monitored HTML DOM Elements should not dynamically change their position through JavaScript

As with absolute positioning, dynamically changing an element's position through JavaScript makes the calculation more complex and is therefore unacceptable. If you do this, the Visibility Monitor shows nothing in the affected area.

The monitored HTML DOM Elements cannot be resized or be hidden, but they can have different sizes

The Visibility Monitor uses a MutationObserver to watch for HTML DOM elements being added and removed, but not for an element appearing, disappearing or being resized. If you resize or hide monitored elements, the Visibility Monitor again shows nothing.

The container which runs monitoring cannot use position: static

Because the Visibility Monitor uses offsetTop to calculate the location of the display port, the container cannot use position: static. We recommend using position: relative instead.

The container which runs monitoring can only be resized by the resizing window

The Visibility Monitor listens for the window.onresize event to decide whether to recalculate the pre-rendered area, so any change to the container's size must be accompanied by a resize event.

Tag Visibility Monitor API

The Visibility Monitor API is very simple and has only one function:

function monitorTagVisibility(container, tag, scrollMargin, scrollDelta, onscreenCallback, offscreenCallback)

The parameters it accepts are defined as follows:

  1. container: a real HTML DOM element that the user scrolls. It doesn't have to be the direct parent of the monitored elements, but it must be one of their ancestors
  2. tag: a string naming the element type to monitor
  3. scrollMargin: a number defining the size of the margin outside the display port
  4. scrollDelta: a number defining how many pixels must be scrolled before the pre-rendered area is recalculated
  5. onscreenCallback: a callback invoked after an HTML DOM element moves into the pre-rendered area
  6. offscreenCallback: a callback invoked after an HTML DOM element moves out of the pre-rendered area

Note: "move into" and "move out" above mean the following: as soon as even one pixel of an element is inside the pre-rendered area, it counts as having moved onto (or remaining on) the screen; once no pixel is inside the pre-rendered area, it counts as having moved off (or being absent from) the screen.

Example: Music App (1.3T branch)

One of my tasks was to add the Visibility Monitor to the 1.3T Music app. Because I wasn't familiar with the structure of the Music app, I asked a colleague to help me find where to add it, which turned out to be three locations:

  • TilesView
  • ListView
  • SearchView

Here we take only TilesView as an example to demonstrate how to add it. First, we use the App Manager to find the real HTML DOM element in TilesView that handles scrolling:

With the App Manager, we find that TilesView contains views-tiles, views-tiles-search, views-tiles-anchor, and (under all of them) li.tile elements. Testing shows that the scroll bar appears on views-tiles and that views-tiles-search is automatically scrolled out of sight, while each tile exists as an li.tile element. Therefore, we set the container to views-tiles and the tag to li. The following code calls the Visibility Monitor:

    visibilityMonitor = monitorTagVisibility(
      document.getElementById('views-tiles'),  // container
      'li',                                    // tag
      visibilityMargin,    // extra space top and bottom
      minimumScrollDelta,  // min scroll before we do work
      thumbnailOnscreen,   // set background image
      thumbnailOffscreen   // remove background image
    );

In the code above, visibilityMargin is set to 360, roughly 3/4 of the screen height. minimumScrollDelta is set to 1, meaning visibility is recalculated for every pixel scrolled. thumbnailOnscreen and thumbnailOffscreen set the thumbnail's background image or clean it up.
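For completeness, here is a hedged sketch of what the two callbacks might look like. The function names match the Music app's, but the bodies are illustrative assumptions rather than the actual app code, including the data-background-image attribute used to remember each tile's image URL: the idea is to lazily set a tile's background image when it enters the pre-rendered area and clear it when it leaves, so Gecko can release the memory.

```javascript
// Hypothetical callbacks for a monitorTagVisibility() call.
// Assumption: each tile carries its cover-image URL in a
// data-background-image attribute.
function thumbnailOnscreen(tile) {
  // Entering the pre-rendered area: attach the image so Gecko
  // loads and decodes it.
  if (tile.dataset.backgroundImage) {
    tile.style.backgroundImage = tile.dataset.backgroundImage;
  }
}

function thumbnailOffscreen(tile) {
  // Leaving the pre-rendered area: dropping the reference lets
  // Gecko free both the decoded buffer and the compressed source.
  tile.style.backgroundImage = 'none';
}
```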

The Effect

We performed practical tests on the Tarako device. We launched the Music app and had it load nearly 200 MP3 files with cover images, about 900MB in total. Without the Visibility Monitor, the memory usage of the Music app for images was as follows:

├──23.48 MB (41.04%) -- images
│ ├──23.48 MB (41.04%) -- content
│   │   ├──23.48 MB (41.04%) -- used
│   │   │ ├──17.27 MB (30.18%) ── uncompressed-nonheap
│   │   │ ├───6.10 MB (10.66%) ── raw
│   │   │ └───0.12 MB (00.20%) ── uncompressed-heap
│   │   └───0.00 MB (00.00%) ++ unused
│   └───0.00 MB (00.00%) ++ chrome

With the Visibility Monitor enabled, memory usage dropped to the following:

├───6.75 MB (16.60%) -- images
│   ├──6.75 MB (16.60%) -- content
│   │  ├──5.77 MB (14.19%) -- used
│   │  │  ├──3.77 MB (09.26%) ── uncompressed-nonheap
│   │  │  ├──1.87 MB (04.59%) ── raw
│   │  │  └──0.14 MB (00.34%) ── uncompressed-heap
│   │  └──0.98 MB (02.41%) ++ unused
│   └──0.00 MB (00.00%) ++ chrome

To compare both of them:

├──-16.73 MB (101.12%) -- images/content
│  ├──-17.71 MB (107.05%) -- used
│  │  ├──-13.50 MB (81.60%) ── uncompressed-nonheap
│  │  ├───-4.23 MB (25.58%) ── raw
│  │  └────0.02 MB (-0.13%) ── uncompressed-heap
│  └────0.98 MB (-5.93%) ── unused/raw

To make sure the Visibility Monitor keeps working properly, we added more MP3 files, reaching about 400 files in total. Memory usage stayed at around 7MB. That is great progress for a 128MB device.


Honestly, we wouldn't need the Visibility Monitor if there weren't so many images; because it always affects FPS, we can simply let Gecko handle those cases. But for apps that use lots of images, the Visibility Monitor lets us keep memory usage under control: even if we increase the number of images, memory usage stays stable.

The margin and delta parameters of the Visibility Monitor will affect the FPS and memory usage, which can be concluded as follows:

  • higher margin: more memory usage; FPS closer to Gecko's native scrolling
  • lower margin: less memory usage; lower FPS
  • higher delta: slightly more memory usage; higher FPS; higher chance of seeing not-yet-loaded images
  • lower delta: slightly less memory usage; lower FPS; lower chance of seeing not-yet-loaded images

WebmakerMozFest 2014: Spotlight on “Community Building”

This is the ninth post in a series featuring interviews with the 2014 Mozilla Festival “Space Wranglers,” the curators of the many exciting programmatic tracks slated for this year’s Festival.

For this edition, we chatted with Beatrice Martini and Bekka Kahn who are co-wrangling the Community Building track at MozFest—a track all about being members, builders and fuel of communities joining forces as part of the Open Web movement.

What excites you most about your track?

In the early days of the web, Mozilla pioneered community building efforts together with other open source projects. Today, the best practices have changed and there are many organisations to learn from. Our track aims to convene these practitioners and join forces to create a future action roadmap for the Open Web movement.

Building and mobilising community action requires expertise and understanding of both tools and crowd. The relationships between stakeholders need to be planned with inclusivity and sustainability in mind.

Our track has the ambitious aim to tell the story about this powerful and groundbreaking system. We hope to create the space where both newcomers and experienced community members can meet, share knowledge, learn from each other, get inspired and leave the festival feeling empowered and equipped with a plan for their next action.

The track will feature participatory sessions (there's no projector in sight!), an ongoing wall-space action and a handbook writing sprint. In addition, participants and passers-by will be encouraged to answer the question: "What's the next action, of any kind/size/location, you plan to take for the Open Web movement?"

Who are you working with to make this track happen?

We've been very excited to have the opportunity to collaborate with many great folks, old friends and new, to build such an exciting project. The track came together just a few weeks before the event, so it's very emergent—just the way we like it!

We believe that collaboration between communities is what can really fuel the future of the Open Web movement. We put this belief into practice through our curatorship structure, as well as in the planning of the track's programme, which combines great ideas sent through the festival's Call for Proposals with invitations we made to folks we knew would be able to blow people's minds with 60 minutes and a box of paper and markers at their disposal.

How can someone who isn’t able to attend MozFest learn more or get involved in this topic?

Anyone will be welcome to connect with us in (at least) three ways.

  1. We'll have a dedicated hashtag to keep all online/remote Community conversations going: follow and engage with #MozFestCB on your social media platform of choice, and we'll record a curated version of the feed on our Storify.
  2. We'll also collect all notes, resources and documentation of anything that happens in and around the track on our online home.
  3. The work to create a much-awaited Community Building Handbook will be kicked off at MozFest, and anyone who thinks they could enrich it with useful learnings is invited to join the writing effort, from anywhere in the world.


WebmakerMozFest 2014 Keynote Speakers

MozFest logo copy

We’re excited to welcome a slate of thought-provoking keynote speakers who will discuss the state of the web today, why an open web matters more than ever, and how you can get involved in building the web of the future.

Beeban Kidron
Film Director & Co-Founder, FILMCLUB


The Baroness Beeban Kidron has been directing films for more than 30 years and is a co-founder of FILMCLUB, an educational charity that allows children to watch and analyze internationally iconic films. Each week the charity reaches 220,000 children in more than 7,000 clubs.

Kidron is  best known for directing Bridget Jones: The Edge of Reason and  the Bafta-winning miniseries Oranges Are Not the Only Fruit. She also directed To Wong Foo Thanks for Everything, Julie Newmar, Antonia and Jane, as well as two documentaries on  prostitution: Hookers, Hustlers, Pimps and their Johns, and Sex, Death and the Gods, a film about “devadasi,” or Indian “sacred prostitutes.”

Her latest film, InRealLife, explores the first generation of British teenagers who are  growing up having never known a time before smartphones and social  media, whose childhoods are defined by status updates, emails and digitized friendships.

Mary Moloney
Global CEO, CoderDojo


Mary joined the CoderDojo Foundation team in June 2014 to take up the position of Global CEO. Prior to that, she was a partner in Accenture's strategy practice, leading engagements with international clients in the Media, High Tech, Telco and Financial Services sectors. During her 23 years with Accenture, Mary held a number of lead positions within the organization and with its clients, including Partner, Managing Director and multiple C-suite positions. She has also been involved at board level with a number of non-profit organizations and remains on the boards of the Dublin Fringe Festival and the Professional Women's Network. Her 9-year-old and 7-year-old sons are both active ninjas who participate at the Science Gallery and Sandymount Dojos near where she lives in Dublin.

Mark Surman
Executive Director, Mozilla Foundation


A community activist and technology executive of 20+ years, Mark  currently serves as the Executive Director of the Mozilla Foundation, makers of Firefox and one of the largest social enterprises in the  world. At Mozilla, he is focused on using the open technology and ethos of the web to transform fields such as education, journalism and filmmaking. Mark has overseen the development of Popcorn.js, which Wired  has called the future of online video; the Open Badges initiative,  launched by the US Secretary of Education; and the Knight Mozilla News  Technology partnership, which seeks to reinvent the future of digital  journalism.

Prior to joining Mozilla, Mark was awarded one of the first Shuttleworth  Foundation Fellowships, where he explored the application of open  principles to philanthropy. During his fellowship, he advised a Harvard  Berkman study on open source licensing in foundations, was the lead  author on the Cape Town Open Education Declaration, and organized the  first open education track at the iCommons Summit, which led to him  becoming a founding board member of Peer-to-peer University (P2PU). Mark holds a BA in the History of Community Media from the  University of Toronto.

Mitchell Baker
Executive Chairwoman, Mozilla


As the leader of the Mozilla Project, Mitchell Baker is responsible for organizing and motivating a massive, worldwide, collective of employees and volunteers who are breathing new life into the Internet with the Firefox Web browser, Firefox OS and other Mozilla products.

Mitchell was born and raised in Berkeley, California, receiving her BA in Asian Studies from UC Berkeley and her JD from the Boalt Hall School of Law. Mitchell has been the general manager of the Mozilla project since 1999. She served as CEO of Mozilla until January 2008, when the organization’s rapid growth encouraged her to split her responsibilities and add a CEO. Mitchell remains deeply engaged in developing product offerings that promote the mission of empowering individuals. She also guides the overall scope and direction of Mozilla’s mission.

Get Involved:


hacks.mozilla.orgNew on MDN: Sign in with Github!

MDN now gives users more options for signing in!

Sign in with GitHub

Signing in to MDN previously required a Mozilla Persona account. Getting a Persona account is free and easy, but MDN analytics showed a steep drop-off at the “Sign in with Persona” interface. For example, almost 90% of signed-out users who clicked “Edit” never signed in, which means they never got to edit. That’s a lot of missed opportunities!

It should be easy to join and edit MDN. If you click “Edit,” we should make it easy for you to edit. Our analysis demonstrated that most potential editors stumbled at the Persona sign in. So, we looked for ways to improve sign in for potential contributors.

Common sense suggests that many developers have a GitHub account, and analysis confirms it. Of the MDN users who list external accounts in their profiles, approximately 30% include a GitHub account. GitHub is the 2nd-most common external account listed, after Twitter.

That got us thinking: If we integrated GitHub accounts with MDN profiles, we could one day share interesting GitHub activity with each other on MDN. We could one day use some of GitHub’s tools to create even more value for MDN users. Most immediately, we could offer “sign in with GitHub” to at least 30% (but probably more) of MDN’s existing users.

And if we did that, we could also offer “sign in with GitHub” to over 3 million GitHub users.

The entire engineering team and MDN community helped make it happen.

Authentication Library

Adding the ability to authenticate using GitHub accounts required us to extend the way MDN handles authentication so that MDN users can start to add their GitHub accounts without effort. We reviewed the current code of kuma (the code base that runs MDN) and realized that it was deeply integrated with how Mozilla Persona works technically.

As we're constantly trying to remove technical debt, that meant revisiting some decisions made years ago when the authentication code was written. After a review process, we decided to replace our home-grown system, django-browserid, with a third-party library called django-allauth, a well-known system in the Django community that can use multiple authentication providers side by side (Mozilla Persona and GitHub, in our case).

One challenge was making sure that our existing user database could be ported over to the new system to reduce the negative impact on our users. To our surprise this was not a big problem and could be automated with a database migration–a special piece of code that would convert the data into the new format. We implemented the new authentication library and migrated accounts to it several months ago. MDN has been using django-allauth for Mozilla Persona authentication since then.

UX Challenges

We wanted our users to experience a fast and easy sign-up process with the goal of having them edit MDN content at the end. Some things we did in the interface to support this:

  • Remember why the user is signing up and return them to that task when sign up is complete.
  • Pre-fill the username and email address fields with data from GitHub (including pre-checking if they are available).
  • Trust GitHub as a source of confirmed email address so we do not have to confirm the email address before the user can complete signing up.
  • Standardise our language (this is harder than it sounds). Users on MDN “sign in” to their “MDN profile” by connecting “accounts” on other “services”. See the discussion.

One of our biggest UX challenges was allowing existing users to sign in with a new authentication provider. In this case, the user needs to “claim” an existing MDN profile after signing in with a new service, or needs to add a new sign-in service to their existing profile. We put a lot of work into making sure this was easy both from the user’s profile if they signed in with Persona first and from the sign-up flow if they signed in with GitHub first.

We started with an ideal plan for the UX but expected to make changes once we had a better understanding of what allauth and GitHub’s API are capable of. It was much easier to smooth the kinks out of the flow once we were able to click around and try it ourselves. This was facilitated by the way MDN uses feature toggles for testing.

Phased Testing & Release

This project could potentially corrupt profile or sign-in data, and changes one of our most essential interfaces – sign up and sign in. So, we made a careful release plan with several waves of functional testing.

We love to alpha- and beta-test changes on MDN with feature toggles. To toggle features we use the excellent django-waffle feature-flipper by James Socol – MDN Manager Emeritus.

We deployed the new code to our MDN development environment every day behind a feature toggle. During this time MDN engineers exercised the new features heavily, finding and filing bugs under our master tracking bug.

When the featureset was relatively complete, we created our beta test page, toggled the feature on our MDN staging environment for even more review. We did the end-to-end UX testing, invited internal Mozilla staff to help us beta test, filed a lot of UX bugs, and started to triage and prioritize launch blockers.

Next, we started an open beta by posting a site-wide banner on the live site, inviting anyone to test and file bugs. 365 beta testers participated in this round of QA. We also asked Mozilla WebQA to help deep-dive into the feature on our stage server. We only received a handful of bugs, which gave us great confidence about a final release.


It was a lot of work, but all the pieces finally came together and we launched. Thanks to our extensive testing and release plan, we had zero incidents with the launch: no downtime, no stack traces, no new bugs reported. We're very excited to release this feature, to give more options to our incredible MDN users and contributors, and to invite each and every GitHub user to join the Mozilla Developer Network. Together we can make the web even more awesome. Sign in now.


Now that we have worked out the infrastructure and UX challenges associated with multi-account authentication, we can look for other promising authentication services to integrate with. For example, Firefox Accounts (FxA) is the authentication service that powers Firefox Sync. FxA is integrated with Firefox and will soon be integrated with a variety of other Mozilla services. As more developers sign up for Firefox Accounts, we will look for opportunities to add it to our authentication options.

WebmakerGet involved with Web Literacy Map v2.0!

TL;DR: Mozilla is working with the community to update the Web Literacy Map to v2.0. You can read more about the project below, or jump straight in and take the survey or join the community calls.

Mozilla Festival


Mozilla defines web literacy as the skills and competencies needed for reading, writing and participating on the web. To chart these skills and competencies, we worked alongside a community of stakeholders in 2013 to create the Web Literacy Map. You can read more about why Mozilla cares about web literacy in this Webmaker Whitepaper.

The Web Literacy Map underpins the work we do with Webmaker and, in particular, the Webmaker resources section. As the web develops and evolves, we have committed to keeping the Web Literacy Map up-to-date. That’s why we’ve begun work on a version 2.0 of the Web Literacy Map.

To date, we’ve interviewed 38 stakeholders on what they believe the Web Literacy Map is doing well, and how it could be improved. We boiled down their feedback to 21 emerging themes for Web Literacy Map v2.0 and some ideas for how Webmaker could be improved.

Mozilla Festival London 2012

Community survey

From the 21 emerging themes mentioned above, we identified five proposals that would help shape further discussion about the Web Literacy Map. These are:

  1. I believe the Web Literacy Map should explicitly reference the Mozilla manifesto.
  2. I believe the three strands should be renamed ‘Reading’, ‘Writing’ and ‘Participating’.
  3. I believe the Web Literacy Map should look more like a ‘map’.
  4. I believe that concepts such as ‘Mobile’, ‘Identity’, and ‘Protecting’ should be represented as cross-cutting themes in the Web Literacy Map.
  5. I believe a ‘remix’ button should allow me to remix the Web Literacy Map for my community and context.

We’ve added these to a survey* which is available in the following languages:

The survey will close on November 1st. If you’d like to translate the survey into another language, please join one of the teams (or create your own!) on Transifex.

*Note: you can email your responses directly if you’d rather not sign into a Google account.


Community calls

Today, we’re kicking off a series of seven Web Literacy Map v2.0 community calls. These will be at 3pm UTC:

There is a calendar that you can subscribe to here.

If you can’t make the calls, please do leave notes for discussion on the agenda for an upcoming call using the links above. Alternatively, get involved in the Web Literacy Map discussion area of the #TeachTheWeb forum.

Mozilla Maemo Danish Weekend 2009


We're hoping to have the text of an updated Web Literacy Map finished by Q1 2015. The graphical elements, and any reorganization they may entail, will take longer. We'd be very interested in hearing how you plan to use it in your context.

You can keep up-to-date with everything to do with Web Literacy Map v2.0 by bookmarking this page on the Mozilla wiki.

Finally, there will be a few sessions at the Mozilla Festival next week about the Web Literacy Map. Look out for them, and get involved!

Images: mozillaeu, REV-, Paul Clarke, and William Quiviger

Software CarpentryPresenting the novice R materials and future plans for the SWC R community

Approximately seven months after our initial meeting, the SWC R community has developed the first set of R lessons for use both in workshops and for self-directed learning from the SWC website. These novice R lessons are a translation of the current novice Python lessons.

Translating these lessons was a big effort. Many thanks are due to both the major contributions made by Sarah Supp, Diego Barneche, and Gavin Simpson, as well as the contributions made by Carl Boettiger, Josh Ainsley, Daniel Chen, Bernhard Konrad, and Jeff Hollister (please let me know if I missed your contribution/review).

On language-agnostic lesson sets

The current set of novice R lessons fulfill the vision described in a summary of a meeting back in October 2012:

There is a general belief that SWC should be "language agnostic" and primarily teach the computing skills that transcend individual programming languages.

In general, the R-based workshops should reuse as much material as possible from the existing curriculum and contribute language-agnostic improvements and new lessons back to the "main" Python-based lesson set.

Dan Braithwaite and I recently taught these lessons at a workshop for novice biologists, and it went very well. Even though we weren't able to get through the entire lesson on command-line programs, it was very satisfying to see all the lightbulbs go off as they made the connection between the commands they were running in the shell the day before and the R code they wrote the second day (if you're interested, see this thread for more details on how the workshop went).

While focusing on language-agnostic principles enables us to cover lots of big ideas that are the core of Software Carpentry's mission like modular programming and automation, this means sacrificing the discussion of many important R-specific features. This has disappointed some R instructors. The dissenting view can be summarized by some recent posts to the r-discuss mailing list:

Dirk Eddelbuettel wrote (post):

Should we not play to R's strength, rather than to another languages's weaknesses?
And Gavin Simpson added (post):
We seem to be compromising R and an R-like approach just to maintain compatibility with the python lessons.

Thus we have two opposing philosophies: one wants to focus solely on the principles that transcend programming languages, while the other wants to teach best practices through a more idiomatic, R-specific approach.

A call for proposals

We set out about seven months ago to create a set of lessons that would be developed, maintained, and used by everyone in the Software Carpentry community who is teaching R. Having built these lessons, I now question whether that was the right goal.

First, while it was an accomplishment to finally have novice R lessons on the SWC website, the work of translating the materials ended up being done by only a few people, and only a few instructors have actually taught these materials in their workshops. Second, there is now another option for running R-based workshops, Data Carpentry. Thus, the common debate over how much we should focus on programming best practices versus data analysis skills has been somewhat resolved: a Software Carpentry workshop should focus on programming best practices and a Data Carpentry workshop should focus on data analysis skills.

Third, in the wider SWC community, we are currently in the process of overhauling just about everything. The plan is to split up the bc repo, which will result in a new template for workshop websites and a new template for lesson material. One of the motivations for this effort is that instructors want the flexibility to add domain-specific data and introduce topics in an order that makes the most sense to them.

So instead of trying to choose a single compromise solution for R-based workshops up front, let's try a more distributed approach. Any SWC R instructor can propose a new set of lessons and recruit other interested instructors to help create them. Once the lessons are finished, they can be submitted for official approval to be taught in Software Carpentry and/or Data Carpentry workshops (this approval process is also under development). With this approach, each set of R lessons will be maintained in proportion to the number of instructors interested in teaching them.

If you have an idea for a new set of R lessons, please send your proposal to the r-discuss mailing list. You should include a basic outline of your approach and what you intend to cover. For an example, please check out Scott Ritchie's blog post where he outlines his idea for a set of R lessons. In addition to describing your approach, it would be useful to include the answers to the following questions:

  • How much time will it take to teach the lessons?
  • Are the lessons intended for Software Carpentry, Data Carpentry, or both?
  • How can other instructors help? Do you need others to help create and/or review the lessons?
  • What learners are you targeting? Novice, intermediate, advanced? A specific discipline?

Hopefully this approach will lead to multiple sets of R lessons available for use in our workshops. I look forward to seeing the new proposals!

Mozilla IndiaFirefox OS Bus Day 2: Kochi here we are!

How it started?

After the awesome bus tour, which was just the beginning, we drove for more than 10 hours from Vellore to Kochi, our next stop for spreading the word and doing much more. In the morning we stopped at a restaurant to freshen up and rejuvenate ourselves so we could give our best.
Upon arriving, Abid and the Mozillians (Binoy and Anush) who had set up the whole plan helped the mobilizers get settled and dive right into the campaign. We headed straight for the Startup Village, AKA the Silicon Valley of Kerala.

The campaign

The Startup Village plays host to a large number of tech- and dev-centric startups. The mobilizer crew divided themselves into blocks of three people and talked to everyone working on every floor. The great coordination of the regional Mozillians played a big role in the perfect execution of the plans.

It was about 1300 hrs when we finished with the 10k startup building and went for some heavy snacks before traveling to Cochin University of Science and Technology (CUSAT), which also played host to Maker Party Kochi. By the time we reached CUSAT it had already started raining heavily, so we dressed up the mascot and got on with the activities. It turned out to be extremely successful: many people came to pose with the mascot, and many promising developers asked us how they could help by contributing to the Marketplace. The day ended with giving out swag, and loads of pictures with Foxy were posted on social media with the hashtag #FirefoxOSBus.

The Fun and Promise

After a short while we headed to a CUSAT music festival, where we had a nice evening, all thanks to the volunteers for getting us the passes. Dinner was at Majlish, an Arabian restaurant, and it was damn delicious. That's how another day came to an end, promising another morning where we would explore, learn and share the words "Firefox OS", with the firm belief that it will not only blaze the path but also bring a revolution to the lives of the next 2 billion people who are about to come online in the near future. Now we are resting for the night in a hotel, from where we will depart for Bangalore early in the morning via Mysore and a few more places.

Mozilla IndiaFirefox OS Bus Day 1: Fox On The Road

As scheduled, we all landed in Hyderabad. Well, to be precise, 'we' here refers to the eight awesome Mozillians from all over India who are part of the mobilizer crew on the Firefox OS bus.

Who were the awesome people?

Although we all had different flight timings, we managed to gather at 'Collab House', Jubilee Hills, Hyderabad. Sumantro and I arrived from Kolkata and were warmly welcomed by Abid, Mission Commander of the Firefox OS bus. Soon after our arrival, Dipesh from Udaipur, Mrinal from Indore and Akshay from Hyderabad turned up. We were eagerly waiting for Vineel, the head of our crew; after he arrived, Vikas, our logistics lead, joined us, followed by some enthusiastic Mozillians from Hyderabad.

Course of the day

We started off with introductions and then carried on with our work. There was a lot left to do before getting on board. We quickly grabbed dinner at around 8.30pm, and only after finishing did we get to see the amazing Firefox OS bus. The magnificent view of the bus made all of us really happy and excited. After some quick photo sessions, we finally started our journey.

Leaving for Chennai- first destination

Bidding goodbye to Hyderabad, we left for Chennai, Tamil Nadu. After traveling for almost eight hours, we stopped at Ongole for some refreshments, then continued our journey to Chennai. Our first destination was VIT, Chennai campus, where the energetic, excited and awesome regional coordinators Gauthamraj and Tejdeep were eagerly waiting for us. Although we were running late, the students of VIT gave us a warm welcome, and their participation made us feel at home. The entire session was nicely conducted by all the crew members, and the regional coordinators along with the volunteers (FSAs) deserve a special mention for coordinating the event so well. Along the way, Sai Kiran, another Mozillian from Warangal, joined us. Last but not least, without the amazing participants the Firefox OS program would not have been successful.

What attracted people?

The special attraction was the fox mascot: people flocked to sweet Foxy, and there was a huge queue for a selfie session with the mascot. The best part about the VIT Chennai campus was that soon after we left, we saw posts coming in on #FirefoxOSBus.

After the Chennai campus we started our journey towards the VIT Vellore campus, which took the Firefox OS bus almost three hours to reach. We were really late, but were happy to see the enthusiastic team waiting for us. With the help of the coordinators we successfully completed the Firefox OS campaign in Vellore. Thanks to Jaykumar and Kasish for helping us organize it.

Once again props to all the Mozillians in and around Tamil Nadu who made this #FirefoxOSBus a grand success.

We had a lip-smacking dinner at Olive Kitchen, Vellore, and from there started our journey towards Kochi, nearly 800 km away. A long night, but worth it. ;)

…and miles to go before we sleep.
Contributed By
Sreemegha Guha and Akshay Tiwari

Rumbling Edge - Thunderbird2014-10-17 Calendar builds

Common (excluding Website bugs)-specific: (2)

  • Fixed: 1061768 – BuildID in em:updateURL and UI is empty, seems that @GRE_BUILDID@ is not set during build
  • Fixed: 1076859 – fix compiler warnings in libical

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

Rumbling Edge - Thunderbird2014-10-17 Thunderbird comm-central builds

Thunderbird-specific: (21)

  • Fixed: 736002 – The editor for twitter should show inputtable character count
  • Fixed: 1016000 – Remove uses of arguments.callee in /mail (except /mail/test/*)
  • Fixed: 1025316 – Port |Bug 1016132 – fuelApplication.js – mutating the [[Prototype]] of an object will cause your code to run very slowly; instead create the object with the correct initial [[Prototype]] value using Object.create| to Thunderbird for steelApplication.js
  • Fixed: 1036592 – Thunderbird does not respect “Skip Integration”
  • Fixed: 1039963 – TEST-UNEXPECTED-FAIL | test-newmailaccount.js::test_show_tos_privacy_links_for_selected_providers.js
  • Fixed: 1059927 – Extend the inverted icon logic from bug 1046563 to AB, Composer and Lightning
  • Fixed: 1061648 – Mailing list display does not refresh correctly after addresses are deleted
  • Fixed: 1066551 – Add styling for .menulist-menupopup and .menulist-compact removed by bug 1030644
  • Fixed: 1067089 – Port bug 544672 and bug 621873 to Thunderbird – Pin icon on Win8 and don’t propose Quick Launch Bar on Win7+
  • Fixed: 1070614 – Fix some TypeErrors and SyntaxErrors seen in JS strict mode when running mozmill tests
  • Fixed: 1071069 – Thunderbird PFS removal – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/mozmill/content-tabs/test-plugin-unknown.js | test-plugin-unknown.js::test_unknown_plugin_notification_inline | test-plugin-unknown.js::test_unknown_plugin_notification_bar
  • Fixed: 1072652 – Update removed-files for the move from Contents/MacOS to Contents/Resources
  • Fixed: 1073951 – octal literals and octal escape sequences are deprecated: … mozmill/extension/resource/modules/utils.js
  • Fixed: 1073955 – octal literals and octal escape sequences are deprecated: …resource://mozmill/stdlib/httpd.js
  • Fixed: 1074002 – Modify file structure of to allow for OSX v2 signing
  • Fixed: 1074006 – Get Thunderbird to launch with the new .app bundle structure
  • Fixed: 1074011 – Thunderbird’s preprocessed channel-prefs.js file needs to be the same for each build
  • Fixed: 1074814 – Fix some strict JS warnings in mail/base/modules
  • Fixed: 1082722 – Remove mozilla-xremote-client from our packages.
  • Fixed: 1083153 – EarlyBird not correctly signed, and doesn’t start up at all
  • Fixed: 1083196 – IM: Lists are broken in Chat

MailNews Core-specific: (11)

  • Fixed: 998189 – Add a basic structured header interface
  • Fixed: 1047883 – Modify test_offlinePlayback.js to use promises.
  • Fixed: 1062235 – Port bug 1062221 (kill add_tier_dir) to comm-central
  • Fixed: 1067116 – compile failes nsEudoraFilters.cpp on case-sensitive HFS+ filesystem
  • Fixed: 1070261 – Improve appearance of Advanced settings of an IMAP account
  • Fixed: 1071497 – error: no matching function for call to ‘NS_NewStreamLoader(nsGetterAddRefs<nsIStreamLoader>, nsCOMPtr<nsIURI>&, nsAbContentHandler* const, nsIInterfaceRequestor*&)’
  • Fixed: 1074034 – Simplify the comm-central build-system post pseudo-rework
  • Fixed: 1074585 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/xpcshell/tests/mailnews/compose/test/unit/test_detectAttachmentCharset.js | “Shift_JIS” == “UTF-8″ – See following stack:
  • Fixed: 1078524 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/xpcshell/tests/mailnews/import/test/unit/test_shiftjis_csv.js
  • Fixed: 1080351 – Fix compiler errors caused by bug 1076698
  • Fixed: 1083487 – /usr/bin/m4:./aclocal.m4:7: cannot open `mozilla/build/autoconf/ccache.m4′: No such file or directory

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

WebmakerMozFest 2014 kicks off in one week!

MozFest logo copy

It’s Almost Time!

MozFest — Mozilla’s annual hands-on festival dedicated to forging the future of the open, global web — is about to begin.

This year’s festival, which takes place in London from October 24 – 26, will be packed with passionate technologists and creators eager to share their skills and hack on innovative digital solutions for the web’s most pressing issues.

The Web Is Vulnerable

It’s no secret that the web as a free and open public resource is under threat. Governments and corporations are vying for control, leaving web users across the globe struggling to protect not only their own personal online security, but the integrity of the Internet as a whole. As billions more people come online in the next decade thanks to affordable mobile technologies, is their web going to be open or closed? Decentralized or controlled? Will they be passive consumers or empowered creators? More and more people are realizing we need to step in and save the web, but that’s only going to happen if more of us are fighting for it.

Together We Are Strong

The good news is that hundreds of thousands of people, organizations and communities around the world are eager to help with this mission. MozFest is about imagining how we can work together. How can citizens of the web in communities around the world be empowered to take action? MozFest participants will tackle these challenges not just by talking about them, but by building new ways to teach and engage everyone in making the web together.

Hacking Practical Solutions

MozFest is where people who love the open web collaborate to envision how it can do more, and do better. The motto of the festival is “Less Yack, More Hack”, which translates into a focus on identifying current challenges and developing practical solutions. This year, MozFest will feature 11 themed tracks:

  • The Mobile Web
  • Policy & Advocacy
  • Community Building
  • Build and Teach the Web
  • Open Web With Things
  • Source Code for Journalism
  • Science and the Web
  • Art and Culture of the Web
  • Open Badges Lab
  • Hive Learning Networks
  • Musicians and Music Creators on the Open Web

Scores of individual sessions will be held as part of each track. Here’s just a taste of the sessions participants will be hacking:

  • How the next 1 billion internet users will bring their online ideas to life
  • Helping 10 million young people become digitally literate
  • Design your first mobile app
  • Hacking the gender gap
  • Using badges to support the delivery of the new computing curriculum
  • User privacy and security on the web
  • Let’s build an unbreakable internet
  • Making open web a part of the curriculum
  • I was born with the web – 25 under 25
  • How to get into the correct amount of trouble online

Our aim this year is to showcase and develop best practices for community leadership. Join us in discovering how distributed organizing and sharing skills through teaching and learning can build a web filled with opportunity for all!

Get Involved:


SUMO BlogWhat’s up with SUMO – 18 October

Hello, SUMO and the www (whole wide world). Here are the latest and greatest updates from SUMO headquarters, situated in cyberspace.

New arrivals to SUMO – we salute you!

Latest SUMO Community meeting video

Our latest meeting focused on SUMO presence in the online world. Watch the video to learn more.

If you want to comment on the video or ask questions regarding the discussion, please do so in the forum thread. Also, please remember that you’re always invited to join our Monday meetings, and we’re very happy when you do.

SUMO Day summary

It took place yesterday, as you may remember… We had a high number of questions and managed to answer 90% of them in 24 hours!

Here’s a list with the great people that made this SUMO Day awesome:


Can we make it to 100% next time?

KB updates

The KB dashboard is getting a makeover (thanks to Rehan, Kadir, and Ricky!). You can see the upcoming changes at

If you have feedback about it, please leave it here:

Firefox OS goes feature complete for 2.1 and requires localization for 2.0

While FxOS 2.1 reached feature completion (on the 13th of October), we are kicking off a more focused localization cycle for 2.0. Localizers, please subscribe to this thread, with an upcoming update preview available here. Got questions or comments? Add them to the update thread and we’ll get back to you.

Shout-out time: Bangladesh l10ns!

Just in case you forgot how amazing the SUMO community in Bangladesh is… They kicked off their “Mission 100% SUMO KB” initiative recently, and we’re eagerly awaiting updates from the land of Bengal. Hats off to the Bangladesh l10ns!

And that’s it for the most recent round of updates… See you on Monday and/or on Twitter: we’re there at

Software CarpentryNum Wrongs Plus Plus

I was teaching Git to a room of roughly 25 students on day 2 of a Software Carpentry workshop, and we ran into a problem that feels like a case study in why it's hard to move science to safer practices.

If you've taught Git before, you know that the first thing you do is have everyone open the shell, navigate to a new directory somewhere (remember how to do that from yesterday?) and type git init. A few hands will shoot up, and red sticky notes start to flower. If you are lucky, you have fewer people in need of help than you have helpers. We weren't so lucky.

Now what? Do you pause the lesson, let everyone get good and comfortable on Facebook while you jump in to help? Do you keep going and trust that the people whose Git isn't working will be able to catch up? I paused the lesson to help out. I've taught Git five or six times and I do not remember a single case of a student getting a late start on Git that was still with me by the first coffee break.

I think if we're honest, we accidentally assume that a lot of the problems with installing Git are the learner's fault. Maybe they didn't read the memo about installing it? It happens. Maybe they should have asked for help before I started my lesson? They've only known you could type into the terminal for 24 hours. Maybe we should provide better tools so they know they even have a problem? See the previous answer.

Maybe the reason Git fails to install on a few machines every workshop is because software installation is just so incredibly broken.

This is the output that our student saw:

user173-85:~ user-name$ git init
dyld: lazy symbol binding failed: Symbol not found: ___strlcpy_chk
  Referenced from: /usr/local/git/bin/git
  Expected in: /usr/lib/libSystem.B.dylib

dyld: Symbol not found: ___strlcpy_chk
  Referenced from: /usr/local/git/bin/git
  Expected in: /usr/lib/libSystem.B.dylib

Trace/BPT trap: 5

Remember that lofty goal about de-mystifying computers so that smart people can use them to do lofty work? It's in danger.

One of our goals is to teach how to get "unstuck." We search for "git dyld: lazy symbol binding failed: Symbol not found: ___strlcpy_chk". We explain, loud enough for a few more interested neighbors to overhear, that Stack Overflow is the place to go for this sort of information, and that they should start there.

Stack Overflow tells us to install Xcode. Right! Instructors don't think of things like that because instructors installed it ages ago. We gloss over what Xcode is and why the student needs it while they are navigating to the Mac App Store. We shrug when the person in the next seat didn't need to install Xcode and his setup seems to be working fine.

They find Xcode and start installing. I'm a little nervous about what our iffy wireless will think of a 4.4 Gbyte install and whether this will tank the GitHub module coming up in an hour.

Except that Xcode, which is in the store, will not install because this version of OSX is too old. (I believe it was version 10.8.) Instructors don't run into this because even those of us with creaky old machines were set up with programming tools ages ago.

Back to the internet, this time with less confidence. At that point we were looking through the Apple website, explaining why the "" ad on Google isn't a good idea, and growing more and more nervous about the amount of time this was taking. The student actually suggested the final fix: "Why don't I just run the installation instructions for Windows? I have it in Parallels." Genius! This install worked as advertised, and the student finished the entire lesson running a GitBash-enabled Windows cmd shell on her Mac. Fortunately, she was able to keep the file systems straight.

So that's the time that we fixed an OSX install issue by using the Windows shell instead. Or as they say in C++:

while (num_wrongs != right) {
    num_wrongs++;
}

Full disclosure: the author works at Microsoft. Software Carpentry takes an intentionally unopinionated view about the OS that our learners use. Our hope is that they learn to use it better.

Mozilla UXRe-imagine Firefox on tablet

Nowadays the mobile space is dominated by applications created for mobile phones. Designers often start by designing an application for phones first and then scale it up for tablets. However, people interact with tablets quite differently due to their unique context of use. A browser on a tablet might be used in the kitchen for recipes, on a couch for reading or shopping, or around the home for streaming music and videos. How can Firefox innovate and re-imagine the experience for tablet users?


Design process

Beginning in January 2014, mobile Firefox UX designers started envisioning solutions for an interesting challenge: a Firefox browser that is optimized for tablet-specific use cases and takes full advantage of the tablet form factor.

The team defined two main user experience goals as the first milestone of this project.

Questions.001 Questions.002

To quickly test our design hypotheses for these two goals, I came up with a 10-day sprint model (inspired by Google Ventures' 5-day sprint) for the mobile Firefox UX team. I prototyped a few HTML5 concepts (GIF version) using Hype and published them on to get initial feedback from Android users.

What we learned from the sprint testings:

  1. Desktop controls were familiar to participants and they adopted them quickly
  2. Visual affordance built expectations
  3. Preview of individual tabs was helpful for tab switching
  4. Tab groups met the needs of a small set of tablet users
  5. Onscreen controls required additional time to get familiar with

Based on what we learned from the design sprints [full report], I put together an interaction design proposal for this redesign [full presentation]. To help myself and the rest of the team understand the scope of the redesign, I divided the work into a few parts, from fundamental structure to detailed interactions. My teammate Anthony Lam has been working closely with me, focusing on the visual design of the new UI.

Design Solution

The new Firefox on tablet achieves a good balance between simplicity and power by offering a horizontal tab strip and a full-screen tab panel. Designed for both landscape and portrait use, the new interface takes full advantage of the screen space on tablet to deliver a delightful experience. Here are some of the highlights.

1. Your frequently used actions are one tap away

The new interface features a horizontal tab strip that surfaces your frequent browsing actions, such as switching tabs, opening a new tab and closing a tab.

Tablet Refresh Presentation.001

2. Big screen invites gestures and advanced features

A full-screen tab panel gives a better visual representation of your normal and private browsing sessions. Taking advantage of the big space, the panel can also be a foundation for more advanced options, such as tab groups and gestural actions for tabs.

Tablet Refresh Presentation.002

3. Make sense of the Web through enhanced search

The new tablet interface will offer a simple and convenient search experience. The enhanced search overlay is powered by search history, search suggestions, your browsing history and bookmarks. You will be able to add search engines of your choice and surface them on the search result overlay.

Tablet Refresh Presentation.003

4. You have control over privacy as always

Private browsing allows you to browse the Internet without saving any information about which sites and pages you’ve visited.

Tablet Refresh Presentation.004.png.001

Future concepts

Besides basic tab structure and interactions, I have also experimented with some gestural actions for tabs. You can view some animations of those experiments via this link. I also included a list below with links to Bugzilla. If there is a concept that sounds interesting to you, feel free to post your thoughts and help us make it happen!

  • Add a new tab by long-tapping on the empty space of horizontal tab strip [Bug 1015467]
  • Pin a tab on horizontal tab strip [Bug 1018481]
  • Visual previews for horizontal tabs [Bug 1018493]
  • Blur effect for private tab thumbnails [Bug 1018456]


The big picture

Many of the highlighted features above, such as enhanced search and gestural shortcuts, can also be adopted by Firefox for Android on phones. And you may have noticed that the new interface is heavily influenced by the simple and beautiful new look of Firefox on desktop.

Given its screen size, the tablet is a perfect platform for bridging the desktop and phone experiences. By focusing on the context of tablet use, Firefox for Android on tablet will establish itself as a standalone product in the Firefox family. We are excited to see a re-imagined tablet experience make Firefox feel more like one product — more Firefoxy — across all our platforms, desktop to tablet to phone.

Tablet Refresh Presentation.005


Currently the mobile Firefox team is busy bringing these ideas to life. You can check on our progress by downloading a Firefox Nightly build to your Android tablet and choosing “Enable new tablet UI” in the Settings. And stay tuned for more awesomeness about this project from Anthony Lam, Lucas Rocha, Martyn Haigh and Michael Comella!

WebmakerQ&A with Maker State

Every year we get the opportunity to connect with many great organizations that are spreading web literacy around the world. MakerState, which runs hands-on makerspaces in New York City, is a perfect example. We had a chance to sit down with the founder of MakerState, Stephen Gilman, to talk about what they’ve done in the past few months and the upcoming events they have planned for continuous making.

makerspace kids smaller

What is your organization and what do you do?

MakerState empowers kids ages 5-18 with science, technology, engineering, arts, and math (STEAM) passion and skill through makerspaces in robot engineering, fashion/wearable electronics, video game design, paper circuits, 3D prototyping and printing, comic book creation, and moviemaking. MakerState hosts makerspaces nationwide in schools and after-school programs as well as community workshops, pop up makerspaces, and summer camps.

What are the events you hosted or ran this year?

We hosted over 30 makerspaces this year in schools and community centers in New York, New Orleans, San Francisco, Boston, New Haven…and hopefully coming to your town soon!

Why did you choose to get involved with Maker Party?

We are a community of makers and educators who believe that all learning can happen through building, creating, hacking, inventing…through making. We are committed to bringing as many maker-learning experiences as possible to kids, and Maker Party is a perfect partner for us in that effort. Whether we’re doing pop up makerspaces with Maker Party or ongoing school-based makerspaces throughout the year, we’re excited to be Maker Party hosts.

What is the most exciting thing about running events?

Our favorite moment in the makerspace is when a young person, maybe five, six, seven years old, finds a maker project that they really love and becomes completely immersed in it. They are creating and building and learning science, engineering, design, or programming at the same time. But it’s the total immersion and joy that is so captivating to observe. Psychologist Mihaly Csikszentmihalyi has called that moment the “flow state”—we call it the maker state.

Why is it important for youth and adults to make things with technology?

We see technology as the tools and media humans use to create art, new products, and to interact with others. Tech is how people literally live their lives. Tech can also save lives and bring us joy and allow us to pursue common dreams. There is a darker side to tech too: polluting, disintegrating, even destroying life. We teach kids the power of tech and tool-making so that they understand how to create new technology and benefit from it. Ultimately, it’s about moving young people from passive consumption of tech to become the pro-active, socially responsible creators of it. We’re convinced that this generation of kids we’re working with will create safe forms of energy, life-saving medical treatments, and new forms of media that draw humanity together for peace and productivity. If we can engage kids at a young enough age and build skills, confidence and passions around tech, they will blow our minds with the new world they create.

What is the feedback you usually get from people who attend or teach at your events?

It’s so fun to observe parents as they watch their kids in the makerspace. I like to step back from the kids sometimes and stand beside their parents as they marvel at what their kids are building. The universal reaction: I can’t believe how much she loves this project. I’m so impressed with what my son has built. I wish their whole school experience could be like this. We agree!

Why is it important for people and organizations to get involved with Maker Party and teaching the web?

Maker Party gives kids and communities an opportunity to explore hands on creativity with technology, often for the first time. This experience is invaluable for young people—often it is life changing. It’s the moment a young girl realizes she can become an engineer and build her world. The moment an inner city student realizes the total joy of science and the rewarding life he can live in pursuit of new ideas and new solutions to human challenges. Maker Party offers these life-changing moments to young people and we are proud to be a part of the movement.

How can people get in touch with your organization?

To start a STEM-mastery makerspace in your school or host a summer camp, contact MakerState at

Air MozillaAscend Project Final Presentations - Portland Cohort

Ascend Project Final Presentations - Portland Cohort: 5 minute lightning talks by new contributors to Mozilla who just completed the first ever Ascend Project.

The Mozilla BlogMozilla and Telefónica Partner to Simplify Voice and Video Calls on the Web

Mozilla is extending its relationship with Telefonica by making it easier than ever to communicate on the Web. Telefónica has been an invaluable partner in helping Mozilla develop and bring Firefox OS to market with 12 devices now available in … Continue reading

Air MozillaReps weekly

Reps weekly: Weekly Mozilla Reps call.

SUMO BlogThursday, Oct 16th is SUMO Day!

It’s Thursday so it’s the perfect time to organize a new SUMO day! We are answering questions in the support forum and helping each other in #sumo on IRC from 9am to 5pm PST (UTC -8) today.

Join us: create an account and then take some time today to help with unanswered questions. Please check the etherpad for additional tips. We have been experiencing quite a high number of questions in the last few days. Our goal this Thursday is to respond to each and every one of them, so please try to answer as many questions as you can throughout the day.

Happy SUMO Day!

hacks.mozilla.orgCreating a mobile app from a simple HTML site

This article is a simple tutorial designed to teach you some fundamental skills for creating cross platform web applications. You will build a sample School Plan app, which will provide a dynamic “app-like” experience across many different platforms and work offline. It will use Apache Cordova and Mozilla’s Brick web components.

The story behind the app, written by Piotr

I’ve got two kids and I’m always forgetting their school plan, as are they. Certainly I could copy the HTML to JSFiddle and load the plan as a Firefox app. Unfortunately this would not load offline, and currently would not work on iOS. Instead I would like to create an app that could be used by everyone in our family, regardless of the device they choose to use.

We will build

A mobile application which will:

  1. Display school plan(s)
  2. Work offline
  3. Work on many platforms

Prerequisite knowledge

  • You should understand the basics of HTML, CSS and JavaScript before getting started.
  • Please also read the instructions on how to load any stage in this tutorial.
  • The Cordova documentation would also be a good thing to read, although we’ll explain the bits you need to know below.
  • You could also read up on Mozilla Brick components to find out what they do.


Before building up the sample app, you need to prepare your environment.

Installing Cordova

We’ve decided to use Apache Cordova for this project as it’s currently the best free tool for delivering HTML apps to many different platforms. You can build up your app using web technologies and then get Cordova to automatically port the app over to the different native platforms. Let’s get it installed first.

  1. First install NodeJS: Cordova is a NodeJS package.
  2. Next, install Cordova globally using the npm package manager:
    npm install -g cordova

Note: On Linux or OS X, you may need to have root access.

Installing the latest Firefox

If you haven’t updated Firefox for a while, you should install the latest version to make sure you have all the tools you need.

Installing Brick

Mozilla Brick is a tool built for app developers. It’s a set of ready-to-use web components that allow you to build up and use common UI components very quickly.

  1. To install Brick we will need to use the Bower package manager. Install this, again using npm:
    npm install -g bower
  2. You can install Brick for your current project using
    bower install mozbrick/brick

    but don’t do this right now — you need to put this inside your project, not just anywhere.

Getting some sample HTML

Now you should find some sample HTML to use in the project — copy your own children’s online school plans for this purpose, or use our sample if you don’t have any but still want to follow along. Save your markup in a safe place for now.

Stage 1: Setting up the basic HTML project

In this part of the tutorial we will set up the basic project, and display the school plans in plain HTML. See the stage 1 code on Github if you want to see what the code should look like at the end of this section.

  1. Start by setting up a plain Cordova project. On your command line, go to the directory in which you want to create your app project, and enter the following command:
    cordova create school-plan com.example.schoolplan SchoolPlan

    This will create a school-plan directory containing some files.

  2. Inside school-plan, open www/index.html in your text editor and remove everything from inside the <body> element.
  3. Copy the school plan HTML you saved earlier into separate elements. This can be structured however you want, but we’d recommend using HTML <table>s for holding each separate plan:
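As a rough sketch of what step 3 might produce (the day, times and subjects below are placeholder values, not from the original tutorial), each plan could be marked up as a table like this:

```html
<!-- One school plan; repeat a similar table for each plan -->
<table>
  <caption>Monday</caption>
  <thead>
    <tr><th>Time</th><th>Subject</th></tr>
  </thead>
  <tbody>
    <tr><td>8:00</td><td>Math</td></tr>
    <tr><td>9:00</td><td>English</td></tr>
    <tr><td>10:00</td><td>Science</td></tr>
  </tbody>
</table>
```

Keeping each plan in its own table also makes the zebra-striping CSS in the next step apply row by row within a plan.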
  4. Change the styling contained within www/css/index.css if you wish, to make the tables look how you want. We’ve chosen to use “zebra striping” for ease of reading.
    table {
      width: 100%;
      border-collapse: collapse;
      font-size: 10px;
    }
    th {
      font-size: 12px;
      font-weight: normal;
      color: #039;
      padding: 10px 8px;
    }
    td {
      color: #669;
      padding: 8px;
    }
    tbody tr:nth-child(odd) {
      background: #e8edff;
    }
  5. To test the app quickly and easily, add the firefoxos platform as a cordova target and prepare the application by entering the following two commands:
    cordova platform add firefoxos
    cordova prepare

    The last step is needed every time you want to check the changes.

  6. Open the App Manager in the Firefox browser. Press the [Add Packaged App] button and navigate to the prepared firefoxos app directory, which should be available in school-plan/platforms/firefoxos/www.

    Note: If you are running Firefox Aurora or Nightly, you can do these tasks using our new WebIDE tool, which has a similar but slightly different workflow to the App Manager.

  7. Press the [Start Simulator] button, then [Update], and you will see the app running in a Firefox OS simulator. You can inspect, debug and profile it using the App Manager — read Using the App Manager for more details.
    App Manager buttons
  8. Now let’s export the app as a native Android APK so we can see it working on that platform. Add the platform and get Cordova to build the apk file with the following two commands:
    cordova platform add android
    cordova build android
  9. The APK is built in school-plan/platforms/android/ant-build/SchoolPlan-debug.apk — read the Cordova Android Platform Guide for more details on how to test this.
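    For reference, the table markup described in step 3 could look like this minimal sketch (the plan name, lesson names and times are hypothetical placeholders, not data from the article):

    ```html
    <table>
      <caption>Angelica</caption>
      <thead>
        <tr><th></th><th>Monday</th><th>Tuesday</th></tr>
      </thead>
      <tbody>
        <tr><th>9:00</th><td>Maths</td><td>Biology</td></tr>
        <tr><th>10:00</th><td>English</td><td>Chemistry</td></tr>
      </tbody>
    </table>
    ```

    Using <thead> and <tbody> also makes the zebra-striping rule from step 4 apply only to the plan rows, not the header.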

Stage 1 Result Screenshot

Stage 2

In Stage 2 of our app implementation, we will look at using Brick to improve the user experience of our app. Instead of having to potentially scroll through a lot of lesson plans to find the one you want, we’ll implement a Brick custom element that allows us to display different plans in the same place.

You can see the finished Stage 2 code on GitHub.

  1. First, run the following command to install the entire Brick codebase into the app/bower_components directory.
    bower install mozbrick/brick
  2. We will be using the brick-deck component. This provides a “deck of cards” type interface that displays one brick-card while hiding the others. To make use of it, add the following code to the <head> of your index.html file, to import its HTML and JavaScript:
    <script src="app/bower_components/brick/dist/platform/platform.js"></script>
    <link rel="import" href="app/bower_components/brick-deck/dist/brick-deck.html">
  3. Next, all the plans need to be wrapped inside a <brick-deck> custom element, and every individual plan should be wrapped inside a <brick-card> custom element — the structure should end up similar to this:
    <brick-deck id="plan-group" selected-index="0">
      <brick-card selected>
          <!-- school plan 1 -->
      </brick-card>
      <brick-card>
          <!-- school plan 2 -->
      </brick-card>
    </brick-deck>
  4. The brick-deck component requires that you set the height of the <html> and <body> elements to 100%. Add the following to the css/index.css file:
    html, body { height: 100%; }
  5. When you run the application, the first card should be visible while the others remain hidden. To handle this we’ll now add some JavaScript to the mix. First, add some <script> elements to link the necessary JavaScript files to the HTML:
    <script type="text/javascript" src="cordova.js"></script>
    <script type="text/javascript" src="js/index.js"></script>
  6. cordova.js contains useful general Cordova-specific helper functions, while index.js will contain our app’s specific JavaScript. index.js already contains a definition of an app variable; the app starts running once app.initialize() is called. It’s a good idea to call this when the window is loaded, so add the following:
    window.onload = function() {
        app.initialize();
    };
  7. Cordova adds a few events; one of which — deviceready — is fired after all Cordova code is loaded and initiated. Let’s put the main app action code inside this event’s callback — app.onDeviceReady.
    onDeviceReady: function() {
        // starts when device is ready
    },
  8. Brick adds a few functions and attributes to all its elements. In this case loop and nextCard are added to the <brick-deck> element. As it includes an id="plan-group" attribute, the appropriate way to get this element from the DOM is document.getElementById. We want the cards to switch when the touchstart event is fired; at this point nextCard will be called from the callback app.nextPlan.
    onDeviceReady: function() {
        app.planGroup = document.getElementById('plan-group');
        app.planGroup.loop = true;
        app.planGroup.addEventListener('touchstart', app.nextPlan);
    },
    nextPlan: function() {
        app.planGroup.nextCard();
    }
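As a standalone illustration of what the loop attribute buys us, the card-advancing behavior can be sketched as a pure function (the names here are hypothetical, not Brick’s internals): with loop enabled, advancing past the last card wraps back to the first.

```javascript
// Sketch of brick-deck's next-card selection (hypothetical helper,
// not Brick's actual implementation).
function nextIndex(current, count, loop) {
    if (current + 1 < count) {
        return current + 1;      // advance to the next card
    }
    return loop ? 0 : current;   // wrap around, or stay on the last card
}
```

This is why setting app.planGroup.loop = true above lets the user keep tapping through plans indefinitely.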

Stage 2 Result Animation

Stage 3

In this section of the tutorial, we’ll add a menu bar with the name of the currently displayed plan, to provide an extra usability enhancement. See the finished Stage 3 code on GitHub.

  1. To implement the menu bar, we will use Brick’s brick-tabbar component. We first need to import the component. Add the following lines to the <head> of your HTML:
    <script src="app/bower_components/brick/dist/platform/platform.js"></script>
    <link rel="import" href="app/bower_components/brick-deck/dist/brick-deck.html">
    <link rel="import" href="app/bower_components/brick-tabbar/dist/brick-tabbar.html">
  2. Next, add an id to all the cards and include them as the values of target attributes on brick-tabbar-tab elements like so:
    <brick-tabbar id="plan-group-menu" selected-index="0">
        <brick-tabbar-tab target="angelica">Angelica</brick-tabbar-tab>
        <brick-tabbar-tab target="andrew">Andrew</brick-tabbar-tab>
    </brick-tabbar>
    <brick-deck id="plan-group" selected-index="0">
        <brick-card selected id="angelica">
            <!-- ... -->
        </brick-card>
        <!-- further cards -->
    </brick-deck>
  3. The deck’s nextCard method is called by Brick behind the scenes using the tab’s reveal event, so the cards will change when the tabbar is touched. The app got simpler, as we are now using built-in Brick functionality rather than our own custom code and Cordova functionality. If you wished to end the tutorial here, you could safely remove the <script> elements that link to index.js and cordova.js from the index.html file.

Stage 3 Result Animation

Stage 4

To further improve the user experience on touch devices, we’ll now add functionality to allow you to swipe left/right to navigate between cards. See the finished stage 4 code on GitHub.

  1. Switching cards is currently done using the tabbar component. To keep the selected tab in sync with the current card, you need to link them back. This is done by listening to the show event of each card. For each tab stored in app.planGroupMenu.tabs:
    tab.targetElement.addEventListener('show', function() {
        // select the tab
        this.tabElement.select();
    });
  2. Because of a race condition (app.planGroupMenu.tabs might not yet exist when the app is initialized), polling is used to wait until the right moment before trying to assign the events:
    function assignTabs() {
        if (!app.planGroupMenu.tabs) {
            return window.setTimeout(assignTabs, 100);
        }
        // proceed
    }

    The code for linking the tabs to their associated cards looks like so:

    onDeviceReady: function() {
        app.planGroupMenu = document.getElementById('plan-group-menu');
        function assignTabs() {
            if (!app.planGroupMenu.tabs) {
                return window.setTimeout(assignTabs, 100);
            }
            for (var i = 0; i < app.planGroupMenu.tabs.length; i++) {
                var tab = app.planGroupMenu.tabs[i];
                tab.targetElement.tabElement = tab;
                tab.targetElement.addEventListener('show', function() {
                    this.tabElement.select();
                });
            }
        }
        assignTabs();
        // continue below ...
  3. Detecting a one-finger swipe is pretty easy in a Firefox OS app. Two callbacks are needed to listen to the touchstart and touchend events and calculate the delta on the pageX parameter. Unfortunately Android and iOS do not fire the touchend event if the finger has moved. The obvious move would be to listen to the touchmove event, but that is fired only once, as it’s intercepted by scrolling. The way forward is to cancel the default action by calling preventDefault() in the touchmove callback. That way scrolling is switched off, and the functionality can work as expected:
    // ... continuation
    app.planGroup = document.getElementById('plan-group');
    var startX = null;
    var slideThreshold = 100;
    function touchStart(sX) {
        startX = sX;
    }
    function touchEnd(endX) {
        var deltaX;
        if (startX) {
            deltaX = endX - startX;
            if (Math.abs(deltaX) > slideThreshold) {
                startX = null;
                if (deltaX > 0) {
                    app.planGroup.previousCard();
                } else {
                    app.planGroup.nextCard();
                }
            }
        }
    }
    app.planGroup.addEventListener('touchstart', function(evt) {
        var touches = evt.changedTouches;
        if (touches.length === 1) {
            touchStart(touches[0].pageX);
        }
    });
    app.planGroup.addEventListener('touchend', function(evt) {
        var touches = evt.changedTouches;
        if (touches.length === 1) {
            touchEnd(touches[0].pageX);
        }
    });
    app.planGroup.addEventListener('touchmove', function(evt) {
        evt.preventDefault();
    });

You can add as many plans as you like — just make sure that their titles fit on the screen in the tabbar. Actions will be assigned automatically.
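Stripped of the DOM plumbing, the swipe decision above reduces to a pure function (a sketch with hypothetical names, not code from the app): given the starting and ending pageX of a touch, it decides whether to move to the previous card (swipe right), the next card (swipe left), or stay put.

```javascript
// Sketch of the Stage 4 swipe logic as a testable pure function.
function swipeDirection(startX, endX, threshold) {
    if (startX === null) {
        return null;                       // no touchstart was recorded
    }
    var deltaX = endX - startX;
    if (Math.abs(deltaX) <= threshold) {
        return null;                       // movement below the threshold
    }
    return deltaX > 0 ? 'previous' : 'next';
}
```

Factoring the decision out like this also makes the threshold easy to tune and unit-test without a touch device.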

Stage 4 Result Screenshot

To be continued …

We’re preparing the next part, in which this app will evolve into a marketplace app with downloadable plans. Stay tuned!

QMO: Firefox 34 Beta 3 Testday, October 24th

Greetings mozillians,

We are happy to announce that on Friday, October 24th, we’re going to hold the Firefox 34.0 Beta 3 Testday. We will be testing the latest Beta build, with focus on the most recent changes and fixes. Detailed instructions on how to get involved can be found in this etherpad.

No previous testing experience is required so feel free to join via #qa IRC channel and our moderators will offer you guidance and answer your questions as you go along.

Join us next Friday and let’s make Firefox better together!

When: October 24, 2014.

Software Carpentry: Welcome More New Instructors

We are very pleased to welcome another new batch of instructors to our team:

Pete Alonzi Balamurugan Desinghu Leonor Garcia-Gutierrez Jeff Hollister
Paulina Lach Jacob Levernier Mark Wilber Chandler Wilkerson

Air Mozilla: Product Coordination Meeting

Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.

about:community: Grow Mozilla discussion this Thursday

If you’re interested in helping new people get involved with Mozilla, join us Thursday for an open community building forum.

Open Policy & Advocacy: Spotlight on Amnesty International: A Ford-Mozilla Open Web Fellows Host

{This is the second installment in our series highlighting the 2015 Host Organizations for the Ford-Mozilla Open Web Fellows program. We are now accepting applications to be a 2015 fellow. Amnesty International is a great addition to the program, especially as new technologies have such a profound impact – both positive and negative – on human rights. With its tremendous grassroots advocacy network and decades of experience advocating for fundamental human rights, Amnesty International, its global community and its Ford-Mozilla Fellow are poised to continue having impact on shaping the digital world for good.}

Spotlight on Amnesty International: A Ford-Mozilla Open Web Fellow Host
By Tanya O’Carroll, Project Officer, Technology and Human Rights, Amnesty International

For more than fifty years Amnesty International has campaigned for human rights globally: exposing information that governments will go to extreme measures to hide; connecting individuals who are under attack with solidarity networks that span the globe; fighting for policy changes that often seem impossible at first.

We’ve developed many tools and tactics to help us achieve change.

But the world we operate in is also changing.

Momentous developments in information and communications networks have introduced new opportunities and threats to the very rights we defend.

The Internet has paved the way for unprecedented numbers of people to exercise their rights online, crucially freedom of expression and assembly.

The ability for individuals to publish information and content in real-time has created a new world of possibilities for human rights investigations globally. Today, we all have the potential to act as witnesses to human rights violations that once took place in the dark.

Yet large shadows loom over the free and open Web. Governments are innovating and seeking to exploit new tools to tighten their control, with daunting implications for human rights.

This new environment requires specialist skills to respond. When we challenge the laws and practices that allow governments to censor individuals online or unlawfully interfere with their privacy, it is vital that we understand the mechanics of the Internet itself–and integrate this understanding in our analysis of the problem and solutions.

That’s why we’re so excited to be an official host for the Ford-Mozilla Open Web Fellowship.

We are seeking someone with the expert skill set to help shape our global response to human rights threats in the digital age.

Amnesty International’s work in this area builds on our decades of experience campaigning for fundamental human rights.

Our focus is on the new tools of control – that is the technical and legislative tools that governments are using to clamp down on opposition, restrict lawful expression and the free flow of information and unlawfully spy on private communications on a massive scale.

In 2015 we will be actively campaigning for an end to unlawful digital surveillance and for the protection of freedom of expression online in countries across the world.

Amnesty International has had many successes in tackling entrenched human rights violations. We know that as a global movement of more than 3 million members, supporters and activists in more than 150 countries and territories we can also help to protect the ideal of a free and open web. Our success will depend on building the technical skills and capacities that will keep us ahead of government efforts to do just the opposite.

Demonstrating expert leadership, the fellow will contribute their technical skills and experience to high-quality research reports and other public documents, as well as international advocacy and public campaigns.

If you are passionate about stopping the Internet from becoming a weapon that is used for state control at the expense of freedom, apply now to become a Ford-Mozilla Open Web Fellow and join Amnesty International in the fight to take back control.

Apply to be a 2015 Ford-Mozilla Open Web Fellow. Visit

hacks.mozilla.org: Passwordless authentication: Secure, simple, and fast to deploy

Passwordless is an authentication middleware for Node.js that improves security for your users while being fast and easy to deploy.

The last months were very exciting for everyone interested in web security and privacy: Fantastic articles, discussions, and talks but also plenty of incidents that raised awareness.

Most websites are, however, still stuck with the same authentication mechanism as in the earliest days of the web: username and password.

While username and password have their place, we should be much more critical about whether they are the right solution for our projects. We know that most people use the same password on all the sites they visit. For projects without dedicated security experts, should we really open up our users to the risk that a breach of our site also compromises their Amazon account? Also, the classic mechanism has by default at least two attack vectors: the login page and the password recovery page. The latter especially is often implemented hurriedly and is hence inherently more risky.

We’ve seen quite a few great ideas recently, and I got particularly excited by one very straightforward and low-tech solution: one-time passwords. They are fast to implement, have a small attack surface, and require neither QR codes nor JavaScript. Whenever a user wants to log in or has her session invalidated, she receives a short-lived one-time link with a token via email or text message. If you want to give it a spin, feel free to test the demo on

Unfortunately—depending on your technology stack—there are few to no ready-made solutions out there. Passwordless changes this for Node.js.

Getting started with Node.js & Express

Getting started with Passwordless is straightforward, and you’ll be able to deploy a fully fledged and secure authentication solution for a small project within two hours:

$ npm install passwordless --save

gets you the basic framework. You’ll also want to install one of the existing storage interfaces, such as MongoStore, which stores the tokens securely:

$ npm install passwordless-mongostore --save

To deliver the tokens to the users, email would be the most common option (but text message is also feasible) and you’re free to pick any of the existing email frameworks such as:

$ npm install emailjs --save

Setting up the basics

Let’s require all of the above-mentioned modules in the same file that you use to initialise Express:

var passwordless = require('passwordless');
var MongoStore = require('passwordless-mongostore');
var email   = require("emailjs");

If you’ve chosen emailjs for delivery that would also be a great moment to connect it to your email account (e.g. a Gmail account):

var smtpServer  = email.server.connect({
   user:    yourEmail,
   password: yourPwd,
   host:    yourSmtp,
   ssl:     true
});

The final preliminary step would be to tell Passwordless which storage interface you’ve chosen above and to initialise it:

// Your MongoDB TokenStore
var pathToMongoDb = 'mongodb://localhost/passwordless-simple-mail';
passwordless.init(new MongoStore(pathToMongoDb));

Delivering a token

passwordless.addDelivery(deliver) adds a new delivery mechanism. deliver is called whenever a token has to be sent. By default, the mechanism you choose should provide the user with a link in the following format: http://{host}?token={TOKEN}&uid={UID}

deliver will be called with all the needed details. Hence, the delivery of the token (in this case with emailjs) can be as easy as:

passwordless.addDelivery(
    function(tokenToSend, uidToSend, recipient, callback) {
        var host = 'localhost:3000';
        smtpServer.send({
            text:    'Hello!\nAccess your account here: http://'
            + host + '?token=' + tokenToSend + '&uid='
            + encodeURIComponent(uidToSend),
            from:    yourEmail,
            to:      recipient,
            subject: 'Token for ' + host
        }, function(err, message) {
            if(err) {
                console.log(err);
            }
            callback(err);
        });
    });

Initialising the Express middleware

app.use(passwordless.sessionSupport());
app.use(passwordless.acceptToken({ successRedirect: '/'}));

sessionSupport() makes the login persistent, so the user will stay logged in while browsing your site. Please make sure that you’ve already prepared your session middleware (such as express-session) beforehand.

acceptToken() will intercept any incoming tokens, authenticate users, and redirect them to the correct page. While the option successRedirect is not strictly needed, it is strongly recommended to use it to avoid leaking valid tokens via the referrer header of outgoing HTTP links on your site.

Routing & Authenticating

The following assumes that you’ve already set up your router (var router = express.Router();) as explained in the Express docs.

You will need at least two URLs to:

  • Display a page asking for the user’s email
  • Accept the form details (via POST)
/* GET: login screen */
router.get('/login', function(req, res) {
    res.render('login');
});

/* POST: login details */
router.post('/sendtoken',
    function(req, res, next) {
        // TODO: Input validation
        next();
    },
    // Turn the email address into a user ID
    passwordless.requestToken(
        function(user, delivery, callback) {
            // 'email' here stands for the validated address from req.body.user
            // E.g. if you have a User model:
            User.findUser(email, function(error, user) {
                if(error) {
                    callback(error.toString());
                } else if(user) {
                    // return the user ID to Passwordless
                    callback(null, user.id);
                } else {
                    // If the user couldn’t be found: Create it!
                    // You can also implement a dedicated route
                    // to e.g. capture more user details
                    User.createUser(email, '', '',
                        function(error, user) {
                            if(error) {
                                callback(error.toString());
                            } else {
                                callback(null, user.id);
                            }
                        });
                }
            });
        }),
    function(req, res) {
        // Success! Tell your users that their token is on its way
        res.render('sent');
    });

What happens here? passwordless.requestToken(getUserId) has two tasks: Making sure the email address exists and transforming it into a unique user ID that can be sent out via email and can be used for identifying users later on. Usually, you’ll already have a model that is taking care of storing your user details and you can simply interact with it as shown in the example above.

In some cases (think of a blog edited by just a couple of users) you can also skip the user model entirely and just hardwire valid email addresses with their respective IDs:

var users = [
    { id: 1, email: 'alice@example.com' },   // example addresses
    { id: 2, email: 'bob@example.com' }
];

/* POST: login details */
router.post('/sendtoken',
    passwordless.requestToken(
        function(user, delivery, callback) {
            for (var i = users.length - 1; i >= 0; i--) {
                if(users[i].email === user.toLowerCase()) {
                    return callback(null, users[i].id);
                }
            }
            callback(null, null);
        }),
        // Same as above…

HTML pages

All it needs is a simple HTML form capturing the user’s email address. By default, Passwordless will look for an input field called user:

        <form action="/sendtoken" method="POST">
            <br><input name="user" type="text">
            <br><input type="submit" value="Login">
        </form>

Protecting your pages

Passwordless offers middleware to ensure only authenticated users get to see certain pages:

/* Protect a single page */
router.get('/restricted', passwordless.restricted(),
    function(req, res) {
        // render the secret page
});

/* Protect a path with all its children */
router.use('/admin', passwordless.restricted());

Who is logged in?

By default, Passwordless makes the user ID available through the request object: req.user. To display it, or to use the ID to pull further details from the database, you can do the following:

router.get('/admin', passwordless.restricted(),
    function(req, res) {
        res.render('admin', { user: req.user });
    });

Or, more generally, you can add another middleware that pulls the whole user record from your model and makes it available to any route on your site:

app.use(function(req, res, next) {
    if(req.user) {
        User.findById(req.user, function(error, user) {
            res.locals.user = user;
            next();
        });
    } else {
        next();
    }
});

That’s it!

That’s all it takes to let your users authenticate securely and easily. For more details you should check out the deep dive which explains all the options and the example that will show you how to integrate all of the things above into a working solution.


As mentioned earlier, all authentication systems have their tradeoffs and you should pick the right system for your needs. Token-based channels share one risk with the majority of other solutions, including the classic username/password scheme: if the user’s email account is compromised, and/or the channel between your SMTP server and the user’s, the user’s account on your site will be compromised as well. Two default options help mitigate (but not entirely eliminate!) this risk: short-lived tokens and automatic invalidation of the tokens after they’ve been used once.
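To illustrate how those two mitigations interact, here is a minimal in-memory sketch (hypothetical code, not the actual Passwordless TokenStore API): tokens expire after a fixed time-to-live, and every authentication attempt consumes the token whether or not it succeeds.

```javascript
// Hypothetical single-use, short-lived token store (illustrative only).
function TokenStore(ttlMs) {
    this.ttlMs = ttlMs;
    this.tokens = {};   // token -> { uid, expires }
}
TokenStore.prototype.store = function(token, uid, now) {
    this.tokens[token] = { uid: uid, expires: now + this.ttlMs };
};
TokenStore.prototype.authenticate = function(token, now) {
    var entry = this.tokens[token];
    delete this.tokens[token];               // single use: always invalidate
    if (!entry || now > entry.expires) {
        return null;                         // unknown, reused, or expired
    }
    return entry.uid;                        // valid: return the user ID
};
```

A real store would additionally hash the tokens at rest and persist them, but the expiry-plus-invalidation logic is the part that limits the damage of an intercepted email.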

For most sites token-based authentication represents a step up in security: users don’t have to think of new passwords (which are usually too simple) and there is no risk of users reusing passwords. For us as developers, Passwordless offers a solution that has only one (and simple!) path of authentication that is easier to understand and hence to protect. Also, we don’t have to touch any user passwords.

Another point is usability. We should consider both the first-time usage of your site and subsequent logins. For first-time users, token-based authentication couldn’t be more straightforward: they will still have to validate their email address, as they do with classic login mechanisms, but in the best-case scenario there will be no additional details required. No creativity needed to come up with a password that fulfils all restrictions, and nothing to memorise. When the user logs in again, the experience depends on the specific use case. Most websites have relatively long session timeouts and logins are relatively rare. Or, people’s visits to the website are so infrequent that they will have difficulty recalling whether they already had an account and, if so, what the password could have been. In those cases Passwordless presents a clear advantage in terms of usability. Also, there are few steps to take, and those can be explained very clearly along the way. Websites that users visit frequently and/or that have conditioned people to log in several times a week (think of Amazon) might, however, benefit from a classic (or even better: two-factor) approach, as people will likely be aware of their passwords and there might be more opportunity to convince users of the importance of good passwords.

While Passwordless is considered stable, I would love your comments and contributions on GitHub or your questions on Twitter: @thesumofall

Air Mozilla: FxOS Engineering Weekly "Late" Meeting

The weekly Firefox OS engineering meeting.

Software Carpentry: A Research Software Petition

"We must accept that software is fundamental to research, or we will lose our ability to make groundbreaking discoveries." If you agree—and I hope you do—then please take a moment to add your name to this petition posted by the Software Sustainability Institute, and then help to spread the word by blogging, tweeting, and telling your friends.

Mozilla Security: The POODLE Attack and the End of SSL 3.0


SSL version 3.0 is no longer secure. Browsers and websites need to turn off SSLv3 and use more modern security protocols as soon as possible, in order to avoid compromising users’ private information.

We have a plan to turn off SSLv3 in Firefox. This plan was developed with other browser vendors after a team at Google discovered a critical flaw in SSLv3, which can allow an attacker to extract secret information from inside of an encrypted transaction. SSLv3 is an old version of the security system that underlies secure Web transactions and is known as the “Secure Sockets Layer” (SSL) or “Transport Layer Security” (TLS).


In late September, a team at Google discovered a serious vulnerability in SSL 3.0 that can be exploited to steal certain confidential information, such as cookies. This vulnerability, known as “POODLE”, is similar to the BEAST attack. By exploiting this vulnerability, an attacker can gain access to things like passwords and cookies, enabling him to access a user’s private account data on a website.

Any website that supports SSLv3 is vulnerable to POODLE, even if it also supports more recent versions of TLS. In particular, these servers are subject to a downgrade attack, in which the attacker tricks the browser into connecting with SSLv3. This relies on a behavior of browsers called insecure fallback, where browsers attempt to negotiate lower versions of TLS or SSL when connections fail.
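The insecure fallback behavior can be modelled as a small sketch (a hypothetical function, purely illustrative, not how any TLS stack is implemented): the client walks down its version list, and an attacker who drops handshakes for anything newer than SSLv3 forces the connection down to the vulnerable protocol.

```javascript
// Toy model of protocol negotiation with insecure fallback.
// clientVersions is ordered newest-first, e.g. ['TLS1.2','TLS1.1','TLS1.0','SSL3'].
function negotiate(serverSupports, clientVersions, attackerBlocksAboveSSL3) {
    for (var i = 0; i < clientVersions.length; i++) {
        var v = clientVersions[i];
        // A man-in-the-middle can make modern handshakes "fail"
        // simply by dropping them, triggering the client's fallback.
        var blocked = attackerBlocksAboveSSL3 && v !== 'SSL3';
        if (!blocked && serverSupports.indexOf(v) !== -1) {
            return v;   // first mutually acceptable, unblocked version wins
        }
    }
    return null;        // no connection possible
}
```

This is also why the SCSV downgrade protection mentioned later helps: it lets the server detect that the client fell back from a higher version and abort the downgraded handshake.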

Today, Firefox uses SSLv3 for only about 0.3% of HTTPS connections. That’s a small percentage, but due to the size of the Web, it still amounts to millions of transactions per day.


The POODLE attack can be used against any browser or website that supports SSLv3. This affects all current browsers and most websites. As noted above, only 0.3% of transactions actually use SSLv3. Though almost all websites allow connections with SSLv3 to support old browsers, it is rarely used, since there are very few browsers that don’t support newer versions of TLS.

Sites that require SSLv3 will remain vulnerable until they upgrade to a more recent version of TLS. According to measurements conducted by Mozilla and the University of Michigan, approximately 0.42% of the Alexa top million domains have some reliance on SSLv3 (usually due to a subdomain requiring SSLv3).


SSLv3 will be disabled by default in Firefox 34, which will be released on Nov 25. The code to disable it is landing today in Nightly, and will be promoted to Aurora and Beta in the next few weeks. This timing is intended to allow website operators some time to upgrade any servers that still rely on SSLv3.

As an additional precaution, Firefox 35 will support a generic TLS downgrade protection mechanism known as SCSV. If this is supported by the server, it prevents attacks that rely on insecure fallback.

Additional Precautions

For Firefox users, the simplest way to stay safe is to ensure that Firefox is configured to automatically update. Look under Preferences / Advanced / Update and make sure that “Automatically install updates” is checked.

For users who don’t want to wait till November 25th (when SSLv3 is disabled by default in Firefox 34), we have created the SSL Version Control Firefox extension to disable SSLv3 immediately.

Website operators should evaluate their traffic now and disable SSLv3 as soon as compatibility with legacy clients is no longer required. (The only remaining browser that does not support TLSv1.0 is Internet Explorer 6.) We recommend following the intermediate configuration level from Mozilla’s Server Side TLS guidelines.

We realize that many sites still receive traffic from IE6 and cannot disable SSLv3 entirely. Those sites may have to maintain SSLv3 compatibility, and should actively encourage their users to migrate to a more secure browser as soon as possible.

Air Mozilla: bz-layout and styles

Four days of engineers teaching other engineers in Toronto in June.

QMO: Notice: Removing unused “Subscriber” accounts this Friday, October 17th, 2014, as part of cleanup

Dear Mozilla QA community,

As we prepare to upgrade and migrate the WordPress which powers this site (and give it a fresh new theme, in the process), we’re taking this opportunity to:

  • clean up (remove) unused plugins
  • clean up (remove) unused user accounts, particularly those in the “Subscriber” role

Previously, we used a few plugins from BuddyPress in an attempt to make our blog more social (which included logins to forums, etc.); as we stopped using those plugins, we no longer have a need for a general “Subscriber” role, as that was specifically tied to functionality — such as forums — which we’re no longer using.

As the first part of this cleanup, this Friday, October 17th, 2014, we’re going to purge the 5,003 “Subscriber” accounts (none of whom have any posts, so no content will be lost), in an effort to help reduce complexity and maintenance of the site.  Additionally, nothing will change for those who currently have the ability to create and/or edit posts on the site.


– Stephen

The Mozilla Blog: Play Awesome Indie Games Directly in Firefox Including the Award-Winning FTL

Today, we’re announcing a promotion with Humble Bundle, one of the real innovators in game distribution, that brings eight hugely popular Indie games including the award-winning FTL directly to Firefox users. This promotion only runs for two weeks, so jump … Continue reading

hacks.mozilla.org: Unity games in WebGL: Owlchemy Labs’ conversion of Aaaaa! to asm.js

You may have seen the big news today, but for those who’ve been living in an Internet-less cave, starting today through October 28 you can check out the brand spankin’ new Humble Mozilla Bundle. The crew here at Owlchemy Labs were given the unique opportunity to work closely with Unity, maker of the leading cross-platform game engine, and Humble to attempt to bring one of our games, Aaaaa! for the Awesome, a collaboration with Dejobaan Games, to the web via technologies like WebGL and asm.js.

I’ll attempt to enumerate some of the technical challenges we hit along the way as well as provide some tips for developers who might follow our path in the future.

Unity WebGL exporter

Working with pre-release alpha versions of the Unity WebGL exporter (now in beta) was a surprisingly smooth experience overall! Jonas Echterhoff, Ralph Hauwert and the rest of the team at Unity did an amazing job getting the core engine running with asm.js and playing Unity content in the browser at incredible speeds; it was pretty staggering. When you look at the scope of the problem and the technical magic needed to go all the way from C# scripting down to the final 1-million-plus-line .js file, the technology is mind boggling.

Thankfully, Unity has allowed us, as content creators and game developers, to stop worrying about getting our games to compile in this new build target, by taking care of the heavy lifting under the hood. So did we just hit the big WebGL export button and sit back while Unity cranked out the HTML and JS? Well, it’s a bit more involved than that, but it’s certainly better than some of the prior early-stage ports we’ve done.

For example, our experience with bringing a game through the now defunct Unity to Stage3D/Flash exporter during the Flash in a Flash contest in late 2011 was more like taking a machete to a jungle of code, hacking away core bits, working around inexplicably missing core functionality (no generic lists?!) and making a mess of our codebase. WebGL was a breeze comparatively!

The porting process

Our porting process began in early June of this year, when we gained alpha access to the WIP WebGL exporter to find out whether a complex game like Aaaaa! for the Awesome could be ported within a relatively short time frame on such an early framework. After two days of mucking about with the exporter, we knew it would be doable (and had content actually running in-browser!), but as with all tech endeavors like this, we were walking in blind as to the scope of the port ahead of us.

Would we hit one or two bugs? Hundreds? Could it be completed in the short timespan we were given? Thankfully we made it out alive and dozens of bug reports and fixes later, we have a working game! Devs jumping into this process now (October 2014 and onward) fortunately get all of these fixes built in from the start and can benefit from a much smoother pipeline from Unity to WebGL. The exporter has improved by a huge amount since June!

Initial issues

We came across some silly issues that were caused either by our project’s upgrade from Unity 4 to Unity 5 or simply by the exporter being in such “early days”. Fun little things, such as all mouse cursor coordinates being inexplicably inverted, caused some baffled faces but have of course been fixed at the time of writing. We also hit some physics-related bugs that turned out to have been caused by the Unity 4 to Unity 5 upgrade; this led to a hilarious bug where players wouldn’t smash through score plates and get points, but instead slammed into score plates as if they were made of concrete, instantly crushing the skydiving player. A fun new feature!

Additionally, we came across a very hard-to-track-down memory leak bug that only exhibited itself after playing the game for an extended session. With a hunch that the leak revolved around scene loading and unloading, we built a hands-off repro case that loaded and unloaded the same scene hundreds of times, causing the crash and helping the Unity team find and fix the leak! Huzzah!

Bandwidth considerations

The above examples are fun to talk about, but they have essentially been solved by this point. That leaves developers with two core issues to keep in mind when bringing games to the Web: bandwidth considerations, and form factor / user experience changes.

Aaaaa! is a great test case for a worst-case scenario when it comes to file size. We have a game with over 200 levels or zones, over 300 level assets that can be spawned at runtime in any level, 48 unique skyboxes (6 textures per sky!), and 38 full-length songs. Our standalone PC/Mac build weighs in at 388 MB uncompressed. Downloading almost 400 megabytes to get to the title screen of our game would be completely unacceptable!

In our case, we were able to rely on Unity’s build process to efficiently strip and pack the build into a much smaller size, and we also took advantage of Unity’s AudioClip streaming solution to stream in our music at runtime on demand! The file size savings from streaming music were huge, and the approach is highly recommended for all Unity games. To glean additional file size savings, Asset Bundles can be used for loading levels on demand, but they are best suited to simple games or to games built from the ground up with the web in mind.

In the end, our final *compressed* WebGL build size, which includes all of our loaded assets as well as the Unity engine itself, ended up weighing in at 68.8 MB, compared to a *compressed* standalone size of 192 MB: almost 3x smaller than our PC build!
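
The compression win is easy to sanity-check from the sizes quoted above (a quick awk calculation):

```shell
# Compare the compressed standalone build (192 MB) with the
# compressed WebGL build (68.8 MB) quoted above.
awk 'BEGIN { printf "%.1fx smaller\n", 192 / 68.8 }'   # prints: 2.8x smaller
```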

Form factor/user experience changes

User experience considerations are the other important factor to keep in mind when developing games for the Web, or when porting existing games into fun, playable Web experiences. One example of respecting the form factor of the Web is avoiding “sacred” key presses, such as Escape. Escape is used as the pause key in many games, but many browsers eat the Escape key and reserve it for exiting full-screen mode or releasing mouse lock. Mouse lock and full-screen are both important to creating fully-fledged gaming experiences on the web, so you’ll want to find a way to re-bind keys to avoid these special key presses that are off-limits in the browser.

Secondly, you’ll want to remember that you’re working within a sandboxed environment on the Web, so loading custom music from the user’s hard drive or saving large files locally can be problematic. It’s worth evaluating which features of your game should be modified to fit the Web experience vs. a desktop experience.

Players also notice the little things that mark a game as a rushed port. For example, if you have a quit button on the title screen of your PC game, you should definitely remove it in your web build, as quitting is not a paradigm used on the Web. At any point the user can simply navigate away from the page, so watch out for elements in your game that don’t fit the current web ecosystem.

Lastly, you’ll want to think about ways to allow your data to persist across multiple browsers on different machines. Gamers don’t always sit at the same machine to play their games, which is why many services offer cloud save functionality. The same goes for the Web: if you can build a system (like the one the wonderfully talented Edward Rudd created for the Humble Player), it will improve the overall web experience for the player.

Bringing games to the Web!

So with all of that being said, the Web seems like a very viable place to be bringing Unity content as the WebGL exporter solidifies. You can expect Owlchemy Labs to bring more of their games to the Web in the near future, so keep an eye out for those! ;) With our content running at almost the same speed as native desktop builds, we definitely have a revolution on our hands when it comes to portability of content, empowering game developers with another outlet for their creative content, which is always a good thing.

Thanks to Dejobaan Games, the team at Humble Bundle, and of course the team at Unity for making all of this possible!

Air MozillaEngineering Meeting

Engineering Meeting The weekly Mozilla engineering meeting.

Mozilla L10NFirefox L10n Report (cycles 34 & 35)

Hello localizers!

Thank you all for your great work with Firefox 33 and 34. Here’s an outline of what is currently in Aurora this cycle (35) and what we accomplished together last cycle:

This cycle (Fx35) — 13 Oct – 24 Nov

Key dates:
– Beta sign-offs for new locales must be complete by 3 Nov.
– Beta sign-offs must be completed before 10 Nov.
– Aurora sign-offs must be completed before 24 Nov.
– Firefox 34 releases 25 Nov.
– Approximately 160 new string changes were made to Aurora desktop, 57 for Aurora mobile exclusively (unshared).
– 54% of the desktop string changes are strings or files that need to be removed from your repos. 25% of all changes are related to Loop. 19% of all string changes are related to session restore and profiles. 13% of all string changes are in devtools.
– 23% of the mobile string changes are related to providing user feedback. The rest include screencasting and preferences.
Please remember that sign-offs are a critical piece of the cycle and mean that you approve and can vouch for the work you’re submitting for shipment.

Last cycle — 1 Sept. – 13 Oct.

Noteworthy accomplishments:
80% of all locales shipped their Firefox 33 desktop updates on time. Congratulations to everyone who signed off and shipped this last cycle! This is a 10% increase in locale coverage between Firefox 32 and Firefox 33! Thank you to everyone involved in making this possible; it’s the highest update percentage we’ve seen in months!
74% of all locales shipped Fennec 33 on time. Congratulations to everyone who signed off and shipped this last cycle! This is a 1% decrease in locale coverage between Fennec 32 and Fennec 33.
– The Azerbaijani (az) team launched their first localization of Firefox desktop! Please contact the team to congratulate them on this massive accomplishment, and feel free to tweet all about it!
– The Aragonese (an), Kazakh (kk), and Frisian (fy-NL) teams launched their first localizations of Fennec! Please contact the teams to congratulate them on this massive accomplishment, and feel free to tweet all about it!
– Both the BBC and The Economist reported about the incredible efforts of the Mozilla l10n community, and featured interviews from Ibrahima Sarr and myself.

Thank you to everyone for all of your dedication and hard work this last sprint. As always, if you note anything missing in these reports, please let me know.

The Mozilla BlogSend videos from Firefox for Android straight to your TV

We make Firefox for Android to give you greater flexibility and control of your online life.          We want you to be able to view your favorite Web content quickly and easily, no matter where you are. That’s why we’re giving … Continue reading

Air MozillaMozilla/TechWomen/Internet Society Mixer and Lightning Talks

Mozilla/TechWomen/Internet Society Mixer and Lightning Talks Join Mozilla community members, SF Bay Internet Society, and TechWomen from the Middle East and Africa for an evening of short talks on: -Why and...

Software CarpentryYet Another Template for Lessons

After the splitting the repository post, Gabriel Devenyi and Greg Wilson wrote some suggestions for what the new lesson repositories should look like (see Gabriel's post about metadata and Greg's post about overall file structure). From my experience at the Mozilla Science Lab sprint, I don't like Gabriel's preq metadata because I don't believe it helps very much. I also don't like Greg's proposal to duplicate some files in every Git repository, so here are some changes that I suggest.

Design Choices

In addition to Greg's design choices:

  • Avoid the duplication of files across Git repositories. In Greg's proposal, the Git repositories store the CSS and Javascript files needed to properly render the page. We can avoid that.
  • Only automate the actions that users and developers will need to do very often. We tried to automate the workshops' home pages, but we are going to revert that. For that reason, I think we should wait until people complain about the lack of some script before we write it.

Git Repositories

The lesson repositories must have two branches: master and gh-pages. The master branch will store the lessons in Markdown (or any other format the community wants that can be converted to HTML). The gh-pages branch will store the HTML version of the lesson so that students can view it online.

We had exactly this approach until a few weeks ago in the bc repo. Why go back? In bc, we only merged master into gh-pages a few times, and I would like to suggest that the topic maintainers do it before the in-service days proposed at last month's meeting.
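
The merge the topic maintainers would run might look like this (a sketch of the two-branch workflow; a throwaway repository stands in for a real lesson checkout):

```shell
# Sketch: the Markdown source lives on master, and the lesson is
# published by merging master into gh-pages. A throwaway repo
# stands in for a real lesson checkout.
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"   # stand-in identity
git config user.name "Demo"
git checkout -qb master
echo "# Lesson source" > index.md
git add index.md
git commit -qm "lesson source"

git branch gh-pages            # the branch GitHub Pages serves
git checkout -q gh-pages
git merge -q master            # bring in the latest lesson source
# ...regenerate the HTML from the Markdown here, commit it, then
# publish with: git push origin gh-pages (needs a real remote)
ls index.md                    # the merged source is now on gh-pages
```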

Also, this approach will avoid the problem of having Markdown and HTML side-by-side, since the Markdown extensions supported by Pandoc aren't supported by Jekyll.

Overall Layout for the Master Branch

Changes to Greg's layout:

  • Drop in favor of linking words to Wikipedia articles.
  • Drop web/ to avoid duplication of files across repositories. Web resources, such as CSS files, icons, and Javascript, can be provided by a "third-party" server.
  • Drop _layouts/ and _includes to avoid duplication of files across repositories. Makefile will download the needed files from a "third-party" server when needed.
  • Drop bin/ to avoid duplication of files across repositories and scripts that no one will use. In case we need some tool for managing lessons it should live in its own repository and we should ask contributors to install it.

Software and Data

I suggest dropping code/ and data/ to avoid the work of keeping them updated. Contributors can find the "description" of the files inside code/ and data/ using:

$ grep 'filename.ext' *.md
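
For example (a sketch; the file and sample names are stand-ins):

```shell
# Sketch: locate the description of a code sample by searching the
# lesson's Markdown directly, instead of maintaining an index file.
# The file and sample names are stand-ins.
cd "$(mktemp -d)"
cat > 01-intro.md <<'EOF'
We open `analysis.py`, which counts words in the input file.
EOF

grep 'analysis.py' *.md
```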

Overall Makefile

Changes to Greg's proposal:

  • Drop make topic dd-slug because it is easy to copy one of the previous topics and correct the filenames if needed.
  • make check should run swc-lesson-check (that needs to be installed).
  • make site should download the necessary files (e.g. _layouts and _includes) and after that build the lesson website locally for previewing.
  • Drop make summary.
  • make release should update gh-pages based on master. This should be only used by topic maintainers.

Software CarpentryA Self-Recorded Workshop

Among the many great lessons contained in Greg Wilson's recent post on building better teachers, perhaps one of the most important was that in order to improve our collective teaching standards, we really need to see each other in action:

The teachers described lessons they gave and things students said, but they did not see the practices. When it came to observing actual lessons—watching each other teach—they simply had no opportunity... They had, he realized, no jugyokenkyu.

With this in mind, I went ahead and recorded a Software Carpentry workshop that we hosted at the University of Melbourne last month. I wanted to try a recording method that could be used by anyone, anywhere (i.e. no elaborate technological dependencies like lecture capture technology or a fancy external microphone), so I simply downloaded a 30-day free trial of Camtasia and set it to record a screencast, video and audio while I was teaching. The recording quality was even better than I expected, so feel free to check it out (and post feedback, which is the whole point of the exercise!) at the Research Bazaar YouTube channel. In particular, there has been a lot of interest in my Git session, which took less than one hour.

Software CarpentryOf Templates and Metadata

As an appendix to the splitting the repository post, Greg recently posted a straw man template for how lessons might be structured after the repo split. He followed up afterwards with more details. There are a lot of good ideas there on how we can encourage good structure for lessons and help learners and instructors alike going forward.

First, to assist in the production of workshop websites and to better define the relationships between them, lesson repositories should contain some metadata. YAML is a widely-adopted and reasonably flexible format for storing metadata in files: we're already using it as part of our existing GitHub-Jekyll workshop and site hosting. The file is the sensible place to look for a lesson's metadata, as it's the first thing people write, so it will be populated early.

YAML headers on the top of the lessons would look like this:

title: "Beginner Shell"
authors: [Gabriel A. Devenyi, Greg Wilson]

Next is the question of what kind of metadata we want to include. The title of the lesson is essential since it's not explicitly the name of any of the files. The list of authors of the material could also live in a YAML header, although there has also been discussion of extracting such information directly from the Git history. (Relying on the Git history would also avoid the problem of figuring out how large a change qualifies someone to be listed as an author.)

There have recently been discussions about recording and reporting the time required to teach lessons. Including the average in the metadata would allow someone constructing a multi-lesson workshop to determine if they have time to present all the material.

With the breakup of the lessons repository into smaller chunks, and the proliferation of intermediate and alternative lessons it would also be useful to specify dependencies for a given lesson. The exact structure for this is tricky, since we have to strike a balance between what's useful and how much effort is required of authors. Options include:

  1. the URLs of lessons that this one depends on
  2. keywords identifying the concepts this lesson requires people to know beforehand
  3. a long-form human-readable description of what learners need to know beforehand.

The first probably won't work for us because we expect to have several lessons covering the same topic, e.g., an introduction to the shell for astronomers and physicists, another for life scientists, and a third for economists. These will probably vary primarily in the examples they present, rather than in the concepts they cover, so any of them could be used as a prerequisite for a shell-based lesson on version control. The second requires us to agree on terms in order to be truly useful; judging from the history of the Semantic Web, that's unlikely. And while the third is probably easiest, it's also the hardest for software tools to work with: we wouldn't be able to check that a particular sequence of lessons hangs together without some natural language processing, and even then it probably wouldn't be reliable.

So here's what the YAML template might look like for a lesson:

title: "Beginner Shell"
authors: [Gabriel A. Devenyi, Greg Wilson]
presentation-time: "2h"
preq: [,]

The files may also contain YAML metadata, perhaps with similar fields such as the title, the time estimate, or the authors. Having such data would allow further programmatic processing.
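
As a sketch of that kind of programmatic processing, a single field can be pulled from a header with the usual Unix tools (the file and field names here are stand-ins):

```shell
# Sketch: extract the "title" field from a lesson's YAML header.
# The file and field names are stand-ins for illustration.
cd "$(mktemp -d)"
cat > index.md <<'EOF'
title: "Beginner Shell"
authors: [Gabriel A. Devenyi, Greg Wilson]
presentation-time: "2h"
EOF

sed -n 's/^title: *"\(.*\)"/\1/p' index.md   # prints: Beginner Shell
```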

Tying this all together with the Makefile that Greg proposed, we can construct a workshop that includes lessons from a number of lesson repositories, check dependencies, and construct a nice site.

Finally, what about the and files mentioned in the template? The terms defined in the glossary could be used as a specification of what this lesson talks about in place of keywords in, but it's redundant to have both. The reference guide is similarly redundant—we can point people at any number of online references written by other people—but we do need something, since learners tell us after almost every workshop that they want a cheat sheet of some kind.

Software CarpentryA New Template for Lessons

We blogged two weeks ago about a new template for workshop websites. It's now time to start thinking about what lessons will look like: as we said at the last lab meeting, we're going to break the current lesson repository into smaller and more manageable pieces, but we need to decide what those pieces will look like first. The post below is our current thoughts; comments and/or follow-on posts about alternatives like those already written by Gabriel Devenyi and Raniere Silva would be very welcome.


A lesson is a complete story about some subject, typically taught in 2-4 hours.

A topic is a single scene in that story, typically 5-15 minutes long.

A slug is a short identifier for something, such as filesys (for "file system").

Design Choices

  • We define everything in terms of Markdown. If lesson authors want to use something else for their lessons (e.g., IPython Notebooks), it's up to them to generate and commit Markdown formatted according to the rules below.
  • We will use Pandoc for Markdown-to-HTML conversion, so we can use {.attribute} syntax for specifying anchors and classes rather than the clunky syntax our current notes use to be compatible with Jekyll.
  • We will avoid putting HTML inside Markdown since it's ugly to read and write, and error-prone to process. As a consequence, we put things like the learning objectives and each challenge exercise in a block indented with > to make scope easier for people and machines to see.
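
That last design choice pays off for tooling, too: because each special block is a > block quote with a classed heading, a few lines of standard shell can pull the blocks out of a topic file (a sketch; the file name and heading text are stand-ins):

```shell
# Sketch: pull every challenge block out of a topic file. Blocks
# are "> "-prefixed block quotes whose level-2 heading carries the
# {.challenge} class, as in the lesson template.
cd "$(mktemp -d)"
cat > 01-demo.md <<'EOF'
Some narrative text.

> ## Try It Yourself {.challenge}
> Description of a single challenge.

More narrative.
EOF

# Print each block quote that starts with a {.challenge} heading,
# up to the blank line that ends it.
sed -n '/^> ## .*{\.challenge}/,/^$/p' 01-demo.md
```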

Overall Layout

Each lesson is stored in a directory that is laid out as described below. That directory is a self-contained Git repository (i.e., there are no submodules or clever tricks with symbolic links).

  1. the home page for the lesson. (See "Home Page" below.)
  2. the topics in the lesson. dd is a sequence number such as 01, 02, etc., and slug is an abbreviated single-word mnemonic for the topic. Thus, is the third topic in this lesson, and is about the filesystem. (Note that we use hyphens rather than underscores in filenames.) See "Topics" below.
  3. slides for a short presentation (3 minutes or less) explaining what the lesson is about and why people would want to learn it. See "Introductory Slides" below.
  4. definitions of key terms. This is what the lesson exports that other lessons can use (just as an API is the functions a library exports for other code to use). See "Glossary" below.
  5. a reference guide to key terms and commands, syntax, etc., to be printed and given to learners. See "Reference Guide" below.
  6. the instructor's guide for the lesson. See "Instructor's Guide" below.
  7. code/: a sub-directory containing all code samples. See "Software and Data" below.
  8. data/: a sub-directory containing all data files for this lesson. See "Software and Data" below.
  9. img/: images (including plots) used in the lesson. See "Images" below.
  10. web/: web resources, such as CSS files, icons, and Javascript. See "Web Resources" below.
  11. _layouts/: page layout templates. See "Web Resources" below.
  12. _includes/: page inclusions. See "Web Resources" below.
  13. bin/: tools for managing lessons. See "Tools" below.
  14. Makefile contains commands to build and manage the lesson. (See "Tools" below.)

Home Page must be structured as follows:

layout: lesson
title: Lesson Title
keywords: ["some", "key terms", "in a list"]
Paragraph of introductory material.

> ## Prerequisites {.prereq}
> A short paragraph describing what learners need to know
> before tackling this lesson.

> ## Learning Objectives {.objectives}
> * Overall objective 1
> * Overall objective 2

## Topics

* [Topic Title 1](01-slug.html)
* [Topic Title 2](02-slug.html)

## Other Resources

* [Introduction](intro.html)
* [Glossary](glossary.html)
* [Reference Guide](reference.html)
* [Instructor's Guide](guide.html)
* [Setting Up](

Note: software installation and configuration instructions aren't in the lesson. They may be shared with other lessons, so they will be stored centrally on the Software Carpentry web site and linked from the lessons that need them.

Note: the description of prerequisites is prose for human consumption, not a machine-comprehensible list of dependencies. We may supplement the former with the latter once we have more experience with this lesson format and know what we actually want to do.


Each topic must be structured as follows:

layout: topic
title: Topic Title
> ## Learning Objectives {.objectives}
> * Learning objective 1
> * Learning objective 2

Paragraphs of text mixed with:

~~~ {.python}
some code:
    to be displayed
~~~
~~~ {.output}
output from the program
~~~
~~~ {.error}
error reports from program (if any)
~~~

and possibly including:

> ## Callout Box {.callout}
> An aside of some kind.

> ## Key Points {.keypoints}
> * Key point 1
> * Key point 2

> ## Challenge Title {.challenge}
> Description of a single challenge.
> There may be several challenges.
  1. There are no sub-headings inside a topic other than the ones shown, and only one block of challenges at the end. If a topic needs sub-headings, it probably wants to be broken into two or more files.
  2. Callout boxes are formatted as block quotes containing a level-2 heading having the callout class and some text, code, etc.
  3. Each challenge is formatted in the same way, i.e., as a block quote with a level-2 heading having the challenge class.

Introductory Slides

Every lesson must include a short slide deck suitable for a short presentation (3 minutes or less) that the instructor can use to explain to learners what the subject is, how knowing it will help learners, and what's going to be covered. Slides are written in Markdown, and compiled into HTML for use with reveal.js.

layout: slides
body of slides


Each term in the glossary is laid out as a separate block quote, with the term in a heading. Yes, this is odd, but as noted in the introduction, we want to avoid putting HTML in Markdown, and we can't add identifiers to paragraphs using {#whatever} notation: that only works on headers.

layout: glossary
> ## First Term {#first-anchor}
> The definition.
> See also: [another word](#another-anchor)

> ## Another Term {#another-anchor}
> The definition.
> See also: [first term](#first-anchor)

Reference Guide

The layout of the reference guide is up to the lesson's author. The only thing required is the YAML header:

layout: reference

Instructor's Guide

The instructor's guide is laid out as follows:

layout: guide

introductory text

## General Points

1.  first point

1.  second point (separated by a blank line,
    may span multiple lines,
    starts with `1.` to indicate a numbered list).

## Large Sub-Topic

1.  first point on a sub-topic large enough to need a section

1.  second point

Software and Data

All of the software samples used in the lesson must go in a directory called code/. Every sample must be listed in the file code/, which must be formatted as follows:

layout: index
* `filename.ext`: one-line description
* `filename.ext`: one-line description

Stand-alone data files must go in a directory called data/. Groups of related data files must be put together in a sub-directory of data/ with a meaningful (short) name. Every data file or data set must be listed in the file data/, which must be formatted as follows:

layout: index
* `filename.ext`: one-line description
* `sub-directory/`: one-line description

Note: This mirrors the layout a scientist would use for actual work (see Noble's "A Quick Guide to Organizing Computational Biology Projects" or Gentzkow and Shapiro's "Code and Data for the Social Sciences: A Practitioner's Guide"). However, it may cause novice learners problems. If code/ includes a hard-wired path to a data file, that path must be either datafile.ext or data/datafile.ext. The first will only work if the program is run with the lesson's root directory as the current working directory, while the second will only work if the program is run from within the code/ directory. This is a learning opportunity for students working from the command line, but a confusing annoyance inside IDEs and the IPython Notebook (where the tool's current working directory is less obvious). And yes, the right answer is to pass filenames on the command line, but that requires learners to understand how to get command line arguments, which isn't something they'll be ready for in the first hour or two.
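
The working-directory pitfall described in this note is easy to demonstrate from a shell (a sketch; the file names are stand-ins):

```shell
# Sketch: a hard-wired data path resolves differently depending on
# the current working directory. File names are stand-ins.
cd "$(mktemp -d)"
mkdir code data
echo "1,2,3" > data/datafile.ext

# Run "from the lesson root": the data/ prefix resolves fine.
cat data/datafile.ext                      # prints: 1,2,3

# Run "from inside code/": the same path no longer resolves.
cd code
cat data/datafile.ext 2>/dev/null || echo "not found from code/"
```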


Images used in the lessons must go in an img/ directory. We strongly prefer SVG for line drawings, since they are smaller, scale better, and are easier to edit. Screenshots and other raster images must be PNG or JPEG format. The img/ directory does not need to have an file.

Web Resources

Files used to generate the HTML version of a lesson are stored in the following directories:

  • web/css: CSS style files
  • web/js: Javascript files
  • web/img: images such as logos and buttons
  • _layouts: page layout templates
  • _includes: inclusions in web pages, such as the standard header and footer

These files will usually not be edited by lesson authors.


The bin/ directory contains a program called that checks that the contents of the lesson conform to the rules above.

The lesson's root directory contains a Makefile with commands to manage lesson content. Its targets are:

  • make: without a target, this will print help.
  • make commands: prints the same help.
  • make topic dd-slug: create a new topic file with the given sequence number and slug.
  • make check: run bin/ to make sure that everything is formatted properly, and print error messages identifying problems if it's not.
  • make site: build the lesson website locally for previewing. This assumes make check has given the site a clean bill of health, and requires Pandoc.
  • make summary: create a YAML-formatted summary of the lesson, including a list of the topics it includes, the terms it defines, the lesson's software requirements, etc.
  • make clean: tidy up (i.e., delete the locally-built website).

Note: The Makefile should also include targets to turn IPython Notebooks into Markdown and compare the result with the committed Markdown topic files, and do equivalent conversions for other formats.

Note: The Makefile should contain targets to re-run code and check the output, but there's no general way to do this (and we're not about to build our own literate programming environment).

Air MozillaMozilla Weekly Project Meeting - HLS Stream

Mozilla Weekly Project Meeting - HLS Stream The Monday Project Meeting

Air MozillaMozilla Weekly Project Meeting

Mozilla Weekly Project Meeting The Monday Project Meeting

Air MozillaWebmaker Demos October 10

Webmaker Demos October 10 Webmaker Demos October 10

SUMO BlogWhat’s up with SUMO – 10th October

Hello and thank you for coming to our blog on yet another sunny (at least here…) Friday! Let’s get on with the updates…

New arrivals to SUMO – welcome to the party :-)

Thunder, Thunder, Thunder…bird!

  • Roland and the Thunderbird community are busy working on the transition plan for 2015. Are you interested in shaping the future of Thunderbird’s support? Click that link!
  • For those in and around Toronto (or those who are headed there from afar), a quick reminder: the Thunderbird Summit is coming up, starting October 14 and going strong until October 18.

Support forum redesign – your feedback needed

Monday SUMO Community meeting update

  • As a reminder: we changed the structure and role of the Monday meetings (as an experiment).
  • Last Monday, we met and talked about Army of Awesome. If you want to join in the conversation, head over to the relevant forum thread.
  • Next Monday, we’re going to talk about SUMO presence online. More details and a space for your questions/comments can be found here.
  • As always, remember that it’s a community meeting, so everyone’s invited to join and contribute.

SUMO on Twitter

  • We used to be there. Now we are here :-). Please follow the new SUMO account for all community news and more!
  • Please note – we’re there for the community, so it’s not a support account. If you want immediate support regarding Firefox, Firefox for Android, Firefox OS or Thunderbird, you know where to find it. If you want to receive help on Twitter,

In other news… I don’t know whether I’ll need a Matchstick any time soon, but the logo is definitely one of the cutest I’ve seen in a while :-)

Oh, and don’t forget that Firefox 33 is around the corner!

Have a good weekend, everyone!

WoMozWoMoz @ AdaCamp Berlin

As mentioned in a previous post, AdaCamp Berlin will take place during the weekend of 11-12 October 2014 at the Wikimedia office in the beautiful capital of Germany. It will be the first time an AdaCamp is organized outside the US.

WoMoz will be represented by Ioana Chiorean, Ednah Kiome, Dian Ina Mahendra and Kristi Progri, who will keep us informed during the weekend.

AdaCamp is an unconference dedicated to increasing women’s participation in open technology and culture: open source software, Wikipedia and other wiki-related projects, open knowledge and education, open government and open data, open hardware and appropriate technology, library technology, creative fan culture, remix culture, translation/localization/internationalization, and more.

Follow #WoMoz and the accounts on Facebook, Twitter and Google+ to see what the AdaCampers are up to this time!

Mozilla Web DevelopmentWebdev Extravaganza – October 2014

Once a month, web developers from across Mozilla don our VR headsets and connect to our private Minecraft server to work together building giant idols of ourselves for the hordes of cows and pigs we raise to worship as gods. While we build, we talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, view a recording of the meeting in Air Mozilla, or attempt to decipher the aimless scrawls that are the meeting notes. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Phonebook now Launches Dialer App

lonnen shared the exciting news that the Mozilla internal phonebook now launches the dialer app on your phone when you click phone numbers on a mobile device. He also warned that anyone who has a change they want to make to the phonebook app should let him know before he forgets all that he had to learn to get this change out.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

django-browserid 0.11 is out

I (Osmose) chimed in to share the news that a new version of django-browserid is out. This version brings local assertion verification, support for offline development, support for Django 1.7, and other small fixes. The release is backwards-compatible with 0.10.1, and users on older versions can use the upgrade guide to get up-to-date. You can check out the release notes for more information.

mozUITour Helper Library for Triggering In-Chrome Tours

agibson shared a wrapper around the mozUITour API, which was used on the Australis marketing pages to trigger highlights for new features within the Firefox user interface from JavaScript running in the web page. More sites are being added to the whitelist, and more features are being added to the API to open up new opportunities for in-chrome tours.

Parsimonious 0.6 (and 0.6.1) is Out!

ErikRose let us know that a new version of Parsimonious is out. Parsimonious is a parsing library written in pure Python, based on formal Parsing Expression Grammars (PEGs). You write a specification for the language you want to parse in a notation similar to EBNF, and Parsimonious does the rest.

The latest version includes support for custom rules, which let you hook in custom Python code for handling cases that are awkward or impossible to describe using PEGs. It also includes a @rule decorator and some convenience methods on the NodeVisitor class that simplify the common case of single-visitor grammars.

contribute.json Wants More Prettyness

peterbe stopped by to show off the design changes on the contribute.json website. There’s more work to be done; if you’re interested in helping out with contribute.json, let him know!

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor.

Name | IRC Nick | Role | Project
Cory Price | ckprice | Web Production Engineer | Various


Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Leeroy was Broken for a Bit

lonnen wanted to let people know that Leeroy, a service that triggers Jenkins test runs for projects on Github pull requests, was broken for a bit due to accidental deletion of the VM that was running the app. But it’s fixed now! Probably.

Webdev Module Updates

lonnen also shared some updates that have happened to the Mozilla Websites modules in the Mozilla Module System:

Static Caching and the State of Persona

peterbe raised a question about the cache timeouts on static assets loaded from Persona by implementing sites. In response, I gave a quick overview of the current state of Persona:

  • Along with callahad, djc has been named as co-maintainer, and the two are currently focusing on simplifying the codebase in order to make contribution easier.
  • A commitment to run the servers for Persona for a minimum period of time is currently working its way through approval, in order to help ease fears that the Persona service will just disappear.
  • Mozilla still has a paid operations employee who manages the Persona service and makes sure it is up and available. Persona is still accepting pull requests and will review, merge, and deploy them when they come in. Don’t be shy, contribute!

The answer to peterbe’s original question was “make a pull request and they’ll merge and push!”.

Graphviz graphs in Sphinx

ErikRose shared sphinx.ext.graphviz, which allows you to write Graphviz code in your documentation and have visual graphs be generated from the code. DXR uses it to render flowcharts illustrating the structure of a DXR plugin.
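As a minimal sketch (assuming sphinx.ext.graphviz has been added to the extensions list in conf.py, and with a made-up graph), a diagram can be written inline in reStructuredText:

```rst
.. graphviz::

   digraph pipeline {
      "source file" -> "indexer" -> "plugin";
   }
```

Sphinx then runs Graphviz at build time and embeds the rendered image in the output.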

Turns out that building giant statues out of TNT was a bad idea. On the bright side, we won’t be running out of pork or beef any time soon.

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!


WebmakerSupport web literacy in action, donate to Hive projects

We recently kicked off the Hive Challenge on Crowdrise, a month-long fundraising campaign to support non-profit organizations that are working to prepare youth for success in a digital age.

Thirty non-profit organizations from Hive communities in NYC, Chicago, Pittsburgh, Chattanooga and Kansas City are participating in this first-ever, cross-Hive fundraising effort, and they’re raising money for a variety of causes, including:

  • expanding programs that empower youth to explore and develop new skills and interests
  • purchasing new or upgraded technologies that support hands-on making
  • providing professional development and training opportunities for mentors
  • buying materials–from 3D printers to vans–to enhance or grow programs that reach more youth and/or address the needs of specific under-represented communities


Hive Learning Networks champion digital skills and web literacy through connected learning. Non-profit organizations that are part of these city-based learning laboratories design and implement innovative programs and practices that advocate for creative learning and change. They are museums, libraries, coding clubs, makerspaces, community centers and afterschool programs, and they need your help to sustain their efforts and build impact.

Mozilla is showing its support by contributing $50,000 in prize money, available to the organizations that are participating in the Hive Challenge. In addition to the funds each organization raises, they’ll have the opportunity to win additional cash prizes–through grand prizes and weekly bonus challenges–ranging from $1,000 to $15,000.

The first weekly bonus challenge is already underway, and any organization that raises $250 this week will be entered to win an additional $1,000 from Mozilla. New bonus challenges launch every Tuesday until the Hive Challenge wraps up on Monday, November 4th at Noon ET.

Please consider donating to some of these exciting Hive programs that are doing great work to spread web literacy and equip young people with valuable skills, confidence and a true maker spirit.

Get Involved

  • Visit the Hive Challenge and donate!
  • Join a fundraising team. Does one program or cause resonate with you most? Sign up to join their team and help them raise even more money.
  • Help spread the word. Tweet a link to the Hive Challenge on Crowdrise and don’t forget to add #hivebuzz.

about:communityFirefox 33 New Contributors

With the upcoming release of Firefox 33, we are pleased to welcome the 75 developers who contributed their first code change to Firefox in this release, 64 of whom were brand new volunteers! Special thanks to Sezen Günes for compiling these statistics for this release. Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Open Policy & AdvocacySpotlight on the ACLU: A Ford-Mozilla Open Web Fellow Host

{The Ford-Mozilla Open Web Fellows applications are now open. To shed light on the fellowship, we will be featuring posts from the 2015 Host Organizations. Today’s post comes from Kade Crockford, the Director of the Technology for Liberty program at the ACLU of Massachusetts. We are so excited to have the ACLU as a host organization. It has a rich history of defending civil liberties, and has been at the forefront of defending Edward Snowden following his revelations of NSA surveillance activities. The Ford-Mozilla Open Web Fellow at the ACLU of Massachusetts will have a big impact in protecting Internet freedom.}

Spotlight on the ACLU: A Ford-Mozilla Open Web Fellow Host
By Kade Crockford, Director of Technology for Liberty, ACLU of Massachusetts

Intellectual freedom, the right to criticize the government, and freedom of association are fundamental characteristics of a democratic society. Dragnet surveillance threatens them all. Today, the technologies that provide us access to the world’s knowledge are mostly built to enable a kind of omnipotent tracking human history has never before seen. The law mostly works in favor of the spies and data-hoarders, instead of the people. We are at a critical moment as the digital age unfolds: Will we rebuild and protect an open and free Internet to ensure the possibility of democracy for future generations?

We need your help at the ACLU of Massachusetts to make sure we, as a society, answer that question in the affirmative.


The ACLU is the oldest civil rights and civil liberties organization in the U.S. It was founded in 1920 in the wake of the imprisonment of anti-World War I activists for distributing anti-war literature, and in the midst of widespread government censorship of materials deemed obscene, radical or insufficiently patriotic. In 1917, the U.S. Congress had passed the Espionage Act, making it a crime to interfere with military recruitment. A blatantly unconstitutional “Sedition Act” followed in 1918, making illegal the printing or utterance of anything “disloyal…scurrilous, or abusive” about the United States government. People like Rose Pastor Stokes were subsequently imprisoned for long terms for innocuous activity such as writing letters to the editor critical of US policy. In 1923, muckraking journalist Upton Sinclair was arrested simply for reading the text of the First Amendment at a union rally. Today, thanks to almost one hundred years of effective activism and impact litigation, people would be shocked if police arrested dissidents for writing antiwar letters to the editor.

But now we face an even greater threat: our primary means of communication, organization, and media—the Internet—is threatened by pervasive, dragnet surveillance. The Internet has opened up the world’s knowledge to anyone with a connection, but it has also put us under the microscope like never before. The stakes couldn’t be higher.

That’s why the ACLU—well versed in the Bill of Rights, constitutional precedent, community organizing, advocacy, and public education—needs your help. If we want to live in an open society, we must roll back corporate and government electronic tracking and monitoring, and pass on a free Internet to our children and theirs. We can’t do it without committed technologists who understand systems and code. Democracy requires participation and agitation; today, it also requires freedom fighters with computer science degrees.

Apply to become a Ford-Mozilla Open Web Fellow at the ACLU of Massachusetts if you want to put your technical skills to work on a nationally-networked team made up of the best lawyers, advocates, and educators. Join us as we work to build a free future. There’s much to be done, and we can’t wait for you to get involved.

After all, Internet freedom can’t protect itself.

Apply to be a 2015 Ford-Mozilla Open Web Fellow.

WebmakerMozFest 2014: Spotlight on “Build and Teach the Web”

This is the eighth post in a series featuring interviews with the 2014 Mozilla Festival “Space Wranglers,” the curators of the many exciting programmatic tracks slated for this year’s Festival.

For this edition, we chatted with Paul Oh, Christina Cantrill, Chad Sansing, Antero Garcia, and Jane Park, the Space Wranglers for the Build and Teach the Web track. Participants in this track will keep the web wild through hands-on making with innovative tools and teaching the web as a community.

What excites you most about your track?

We have a rich array of sessions planned that cover an incredible range of web building and teaching possibilities, from hack days with youth in a science center to game building. And all our sessions will radiate around a central hub of making, building and collaborating, focused on the idea of teaching what you build—in other words, helping others see what it is that you yourself make. Anyone passing by the hub is welcome to drop in, hang out, mess around, and geek out with us!

Who are you working with to make this track happen?

We have an amazing set of facilitators from organizations around the world. A couple of highlights: the folks involved with Inanimate Alice will be launching their next session as part of our track. CoderDojo will be engaging people at MozFest with their amazing work. As will Creative Commons. As will engineers and educators from Mozilla itself. We could go on and on–the list of incredible facilitators feels endless!

How can someone who isn’t able to attend MozFest learn more or get involved in this topic?

You can follow the hashtag #mozfest on Twitter, of course. And also #teachtheweb. We’re also planning for the possibility of a “Live from MozFest” through Educator Innovator, so stay tuned for more info!


Head on over to the MozFest site to register!

The Mozilla BlogFirefox OS Shows Continued Global Growth

Firefox OS is now available on three continents with 12 smartphones offered by 13 operators in 24 countries. As the only truly open mobile operating system, Firefox OS demonstrates the versatility of the Web as a platform, free of the … Continue reading

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.

Open Policy & AdvocacyLaunching the Ford-Mozilla Open Web Fellows Program, a Global Initiative to Recruit the Heroes of the Open Internet

{Re-posting from the Mozilla Blog on Sep 30, 2014}

By Mark Surman, Executive Director, Mozilla Foundation; and Darren Walker, President, Ford Foundation

We are at a critical point in the evolution of the Internet. Despite its emergence as an integral part of modern life, the Internet remains a contested space. Far too often, we see its core ethos – a medium where anyone can make anything and share it with anyone – undermined by forces that wish to make it less free and open. In a world in which the future health of the Internet is vital to democratic discourse and a free flow of ideas, we need a band of dedicated individuals standing ready to protect it.

That’s why we are joining together today to launch the Ford-Mozilla Open Web Fellows program, a landmark initiative to create a worldwide community of leaders who will advance and protect the free and open Web.

Working in the Open on Core Issues with the World’s Most Innovative Organizations

Ford-Mozilla Fellows will be immersed in projects that create a better understanding of Internet policy issues among civil society, policy makers, and the broader public. Fellows will be technologists, hackers, and makers who work on a range of Internet policy issues, from privacy and security to surveillance and net neutrality. They will create an affirmative agenda and improve coordination across the sector, boosting the overall number of people throughout society (in nonprofit, government, philanthropy, academic and corporate sectors) that protect the Internet. At present, a whole new architecture is emerging at NGOs and in government where a technology lens is vital to achieving results, just as a focus on law and communications were important in building previous capacity. Fellows will be encouraged to work in the open so that they can share their experiences and learnings with others. Around the world, civil society organizations are working under difficult situations to advance social justice and create a thriving digital society where all voices have an opportunity to be heard.

Fellows will serve as technology advisors, mentors and ambassadors to host organizations, helping to better inform the policy discussion. We are thrilled to name the first cohort organizations that will host a Fellow in the first year of the program. They include:

A Call for Fellowship Applicants

Today also marks the official opening of the application window. Beginning immediately, people can apply to be a Ford-Mozilla Open Web Fellow. The application deadline is December 31, 2014.

We are looking for emerging leaders who have a passion for influencing and informing the public policies that impact the Internet. Selected Fellows will have a track record of making and contributing to projects and an interest in working with Mozilla, the Ford Foundation, and our host organizations on specific initiatives to advance and protect the free and open Web.

Protecting the Internet

The Internet has the potential to be the greatest global resource in the history of the world, accessible to and shaped by all people. It has the promise to be the first medium in which anyone can make anything, and share it with anyone. In many ways, it already has helped bend the arc of history towards enlightenment and justice.

But continuing in that direction isn’t guaranteed without help. For all the good that can come from the Internet, in some areas it is already being used to weaken society and concentrate power in the hands of the few, and to shut down democratic discourse. The fight over preserving net neutrality in the U.S.; the debate over governments undermining the Internet to further surveillance efforts; the curtailing of speech and access to the Internet by authoritarian regimes — these are all threats to the Internet and to civil rights.

We need to take up the challenge to prevent this from happening. We must support the heroes – the developers, advocates and people who are fighting to protect and advance the free and open Internet. We must train the next generation of leaders in the promise and pitfalls of technology. We need to build alliances and infrastructure to bridge technology policy and social policy.

The Ford-Mozilla Open Web Fellows program is an effort to find and support the emerging leaders in the fight to protect the free and open Internet. Apply to become a Ford-Mozilla Fellow and tell us how you would stand up to protect and advance the Web to continue the effort to bend the arc toward justice.

Mozilla SecurityCSP for the web we have

Content Security Policy (CSP) is a good safety net against Cross Site Scripting (XSS). In fact, it’s the best one and I would recommend it to anyone building new sites.

For existing sites, implementing CSP can be a challenge because CSP introduces some restrictions by default and, if the code was written without these restrictions in mind, work will be required. Also, working around these issues can negate the benefits of applying a policy in the first place. In particular, inline scripts require thought; they’re commonly used and, if they’re allowed by your policy, the major benefit of CSP no longer applies. The only option available to make effective use of CSP, in the past, was to re-write the code to remove any existing inline scripts or styles.

Applying CSP to an existing site might seem overwhelming at first but, considering the security benefit, the effort is well worth it. Fortunately, doing this has become much easier with CSP 2.

Some CSP 2 features:
CSP 2 provides two features that can really help: hash-source and nonce-source. These both provide a way of using inline scripts and styles without giving attackers free rein to inject things.

So how do they work? We’ll look at nonce-source first.

A CSP with a nonce-source might look like this:

content-security-policy: default-src 'self'; script-src 'nonce-2726c7f26c'

And the corresponding document might contain a script element that looks like this:

<script nonce="2726c7f26c">

There are two things to note here: first, it’s important that the nonce changes for each response (I’ve seen an example where it doesn’t!) and, second, it’s important that the nonce is sufficiently hard to predict.

Now, because the nonce changes in a way that isn’t predictable, the attacker doesn’t know what to inject and so, by only allowing script (or style) elements with valid nonce attributes, we can be sure that injections will fail.
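On the server, that amounts to a few lines per response. A sketch in Python (the helper name is hypothetical; the policy string mirrors the example above):

```python
import secrets

def make_csp_header():
    # A fresh, unpredictable nonce for every response.
    nonce = secrets.token_urlsafe(16)
    header = "default-src 'self'; script-src 'nonce-%s'" % nonce
    return nonce, header

# The same nonce goes into both the header and the script tag:
nonce, header = make_csp_header()
script_tag = '<script nonce="%s">' % nonce
```

Using a cryptographically secure source (here, the secrets module) is what makes the nonce hard to predict; a timestamp or counter would not do.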

And what about hash-source? Well, this is similar in that, again, a source in the CSP is used to ensure that a script or style element in the body is supposed to be there but the mechanism used differs. Rather than relying on an attribute on the script element in the document, hash-source provides a hash, in the CSP, of the script elements that are to be allowed in the document.

So for a script element like this:


You’d have a CSP containing a hash-source a bit like this:

content-security-policy: script-src 'sha256-cLuU6nVzrYJlo7rUa6TMmz3nylPFrPQrEUpOHllb5ic='

Obviously, you’d need to add a hash-source for each script or style you wanted to include in your document.
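The hash value itself is just the base64-encoded SHA-256 digest of the script’s exact text, everything between the opening and closing tags, whitespace included. A minimal sketch in Python, using a hypothetical inline script:

```python
import base64
import hashlib

def hash_source(script_text):
    # Hash the exact bytes of the inline script, then
    # base64-encode the digest for use in the policy.
    digest = hashlib.sha256(script_text.encode("utf-8")).digest()
    return "'sha256-%s'" % base64.b64encode(digest).decode("ascii")

# e.g. for a hypothetical inline script:
source = hash_source("doSomething();")
header = "content-security-policy: script-src %s" % source
```

Note that even a one-character change to the script (including whitespace) produces a different hash, so the policy must be regenerated whenever the inline script changes.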

Which to use (and when):
So you may be wondering why there are two mechanisms (when both are designed to allow inline scripts and styles) and when you should use one rather than the other.
You should resist the temptation to use these mechanisms everywhere; these techniques are only intended for cases where removing inline scripts is not an option.

Nonce-source will be the more useful mechanism in most cases because it is simpler: you only need to include a single source in your policy to cover any number of inline elements. The downside is that, since a nonce must only be used once, you need to generate a new header (and a new document) for each page load. This makes nonce-source a good option for dynamically generated pages, but completely unsuitable for static content.

Hash-source is more complicated; you have to generate hashes for each and every element you want to allow. But, because it doesn’t rely on a value being unknown to an attacker, the CSP and the script element can remain the same. This makes hash-source a useful mechanism for protecting content that is served statically.

Words of warning:
Please be careful when using either of these mechanisms in dynamically generated content; if an attacker can inject content into something you’ve set a nonce attribute on (or something you generate a hash-source from) then you may have created a free bypass for an attacker.

The inline script restrictions imposed by CSP include script-valued attributes (commonly used for DOM Level 0 event handlers, e.g. onclick); hash-source and nonce-source cannot help you with these. Currently CSP does not provide mechanisms to apply directives to such script-valued attributes, but let’s see what the future brings!

Software CarpentryAnnouncing the Creation of the Software Carpentry Foundation

In order to foster Software Carpentry's continued growth, we are pleased to announce that we are creating an independent Software Carpentry Foundation (SCF). Like other non-profit open source foundations, it will decide Software Carpentry's overall scope and direction, manage its finances, and hold its intellectual property.

In order to work through the details, we have assembled an interim board drawn from a wide cross-section of our community:

  • Jenny Bryan (University of British Columbia)
  • Neil Chue Hong (University of Edinburgh / Software Sustainability Institute)
  • Carole Goble (University of Manchester / ELIXIR UK)
  • Josh Greenberg (Sloan Foundation (non-voting))
  • Katy Huff (University of California Berkeley)
  • Damien Irving (University of Melbourne / Research Platforms)
  • Adam Stone (Lawrence Berkeley National Laboratory)
  • Tracy Teal (Michigan State University / Data Carpentry)
  • Kaitlin Thaney (Mozilla Science Lab)
  • Greg Wilson (Software Carpentry)

This group's mandate is to draft the SCF's initial bylaws and get the foundation legal standing, then arrange the transition to the first permanent board some time early in 2015. Until then, we will continue to do what we have always done: teach scientists and engineers how to use computers to do more research in less time and with less pain.

Our thanks to the Alfred P. Sloan Foundation, the Mozilla Foundation, the Software Sustainability Institute, Lawrence Berkeley Laboratory, Research Bazaar, and all our other sponsors for their support, and to all of our volunteer instructors for making Software Carpentry possible.

Air MozillaEngineering Meeting

Engineering Meeting The weekly Mozilla engineering meeting.

Air MozillaWebdev Extravaganza: October 2014

Webdev Extravaganza: October 2014 Web developers across the Mozilla community get together (in person and virtually) to share what side projects or cool stuff we've been working on.

hacks.mozilla.orgBlend4Web: the Open Source Solution for Online 3D

Half a year ago, Blend4Web was first released publicly. In this article I’ll show what Blend4Web is, how it has evolved, and how it can be used for web development.

What Is Blend4Web?

In short, Blend4Web is an open source framework for creating 3D web applications. It uses Blender – the popular open source 3D modeling suite – as the primary authoring tool. The 3D graphics are rendered by means of WebGL, which is also an open standard technology. The two main keywords here – Blender and Web(GL) – explain the purpose of this engine perfectly.

The full source code of Blend4Web together with some usage examples is available under GPLv3 on GitHub (there is also a commercial licensing option).

The 3D Web

On June 2nd, Apple presented their new operating systems – OS X Yosemite and iOS 8 – both featuring WebGL support in the Safari browser. That marked the end of a five-year cycle during which the WebGL technology evolved, starting with the first unstable browser builds (does anybody remember Firefox 3.7 alpha?). Now, all the major browsers on all desktop and mobile systems support this open standard for rendering 3D graphics, everywhere, without any plugins.

That was a long and difficult road, along which Blend4Web development shadowed WebGL development: broken rendering, tab crashes, security “warnings” from some big guys, unavailability in public browser builds, all sorts of fear, uncertainty and doubt. None of it mattered, because we had the opportunity to do 3D graphics (and sound) in browsers!


The first Blender 2.5x builds appeared in the summer of 2010. At the time we, the programming geeks, were pushed to learn the basics of 3D modeling by the beautiful Sintel from the open source movie of the same name. By choosing Blender, we could be as independent as possible, with a fully open source pipeline organized on a Linux platform. Blender gave us the power to make our own 3D scenes, and later helped attract talented artists from its wonderful community to join us.

Blend4Web Evolution in Demos

Our demo scenes matured together with the development of Blend4Web. The first one was quite a low-poly and almost non-interactive demo called The Island. It was created in 2011 and polished a bit before the public release. In this demo we introduced our Blender-based pipeline, in which all the assets are stored in separate files and are linked into the main file for level design and further exporting (for this reason some Blend4Web users call it “free Unity Pro”).

In Fashion Show we developed cloth animation techniques. Some post-processing effects, dynamic reflection and particle systems were added later. After Blend4Web went public, we summarized these cloth-related tricks in one of our tutorials.

The Farm is a huge scene (by the standards of a browser): over 25 hectares of land, buildings, animated animals and foliage. We added some gamedev elements to it, including first-person walking, interacting with objects, and driving a vehicle. The demo features spatial audio (via Web Audio) and physics (via Bullet and Asm.js). The Freedesktop folks tried it as a benchmark while testing the Mesa drivers (and got “massive crashes” :).

We also tried some visualization and created Nature Morte. In this scene we used carefully crafted textures and materials, as well as post-processing effects to improve realism. However, the technology used for this demo was quite simple and old-school, as we had no support for visual shader editing yet.

Things changed when Blender’s node materials became available to our artists. They created over 40 different materials for the Sports Car model: chromed metal, painted metal, glass, rubber, leather, etc.

In our latest release we went even further by adding support for user-controlled animation. Now interactivity can be implemented without any coding. To demonstrate the new possibilities, we presented an interactive infographic of a light helicopter.

Among the other possible applications of this simple yet effective tool (called NLA Script) we can list the following: interactive 3D web design, product promotions, learning materials, cartoons with the ability to choose between different story lines, point-and-click games and any other applications previously created with Flash.

Using Blend4Web

It is very easy to start using Blend4Web – just download and install the Blender addon as shown in this video tutorial:

The most wonderful thing is that your Blender scene can be exported into a self-contained HTML file that can be emailed, uploaded to your own website or to a cloud – in short, shared however you like. This freedom is a fundamental difference from numerous 3D web publishing services, as we don’t lock our users into our technology by any means.

For those who want to create highly interactive 3D web apps we offer the SDK. Some notable examples of what is possible with the Blend4Web API are demonstrated in our programming tutorials, ranging from web design to games.

Programming 3D web apps with Blend4Web is not much harder than building average RIAs. Unlike some other WebGL frameworks in the wild we tried to offload all graphics, animation and audio tasks to respective professionals. The programmer just loads the scene…

var m_data = require("data");
m_data.load("example.json", load_cb);

…and then writes the logic which triggers the 3D scene changes that are “hard-coded” by the artists, e.g. plays the animation for the user-selected object:

var m_scenes = require("scenes");
var m_anim = require("animation");
var myobj = m_scenes.pick_object(event.clientX, event.clientY);
// ...and then starts the artist-defined animation on the picked
// object via the "animation" module (m_anim).

As you can see, the APIs are structured in a CommonJS way, which we believe is important for creating compact and fast web apps.

The Future

There are many possible directions in which the Internet and IT may go, but there is no doubt that the strong and steady development of the 3D Web is already underway. We expect that more and more users will change their expectations about how web content should look and feel. We’re going to help web developers meet these demands, with plans to improve usability and performance and to implement new, interesting graphics effects.

We also follow the development of WebGL 2.0 (thanks for your work, Mozilla) and expect to create even more nice things on top of it.

Stay Tuned

Read our blog, join us on Twitter, Google+, Facebook and Reddit, watch the demos and tutorials on our YouTube channel, fork Blend4Web at GitHub.

Air MozillaFxOS Engineering Weekly "Early" Meeting

FxOS Engineering Weekly: the weekly Firefox OS engineering meeting.

User AdvocacyUser Advocacy Q3 Roundup

Hey all! First, the good news from your local UA team: Q3 is done! And along with it, most of our Q3 goals and projects. Here is a quick summary:

  • Squeaky V2: Squeaky is on track. The AMO team is working hard on ways to make it more difficult for malware to attack Firefox. We are also working on improving Firefox Reset and a bunch of other projects. V2 Kickstart: DONE!
  • Firefox Updates v1.5: We not only successfully launched the initial version of the hotfix, updating over 12 million users, but we also have a second version, fixing bugs and improving data collection, almost ready to go. Giant DONE! (Learn more in this post!)
  • Updating and improving our backend: We have a lot of backend tools that we use to put together our reports and gather data for them. This quarter we committed a significant amount of time to improving these tools and growing their feature set, and we will continue to do so going forward. DONE!!
  • Pulse: We have created the first experiment of what will eventually turn into our Pulse (Now named Heartbeat) project. We had extremely encouraging results and now are pursuing resources to make this a full-fledged part of the project. DONE!!
If you have further questions, please feel free to follow along on our wiki. We will be updating you soon on the new awesomeness coming in Q4.


Software CarpentryARCHER Software Carpentry workshop at The University of Edinburgh

ARCHER, the UK's national supercomputing service, offers training in software development and high-performance computing to scientists and researchers across the UK. As part of our training service we are running a two-day Software Carpentry workshop at EPCC, The University of Edinburgh, UK, on 3-4 December.

Software Carpentry workshops help researchers become more productive by teaching software development skills that enable more to be done, in less time, and with less pain. We will cover skills including version control, task automation, good programming practice and automated testing. These are skills that, in an ideal world, researchers would master before tackling anything with "cloud" or "peta" or "HPC" in its name – skills that enable researchers to make the best use of their time and provide them with a secure basis for optimising and parallelising their code.

This workshop is being run by EPCC, as part of ARCHER. The workshop is in collaboration with EPCC's PRACE Advanced Training Centre (PATC), and Software Carpentry.

I'll be instructing alongside my three EPCC colleagues, Arno Proeme, Mario Antonioletti and Alistair Grant, who will have just completed the Software Carpentry: Instructor Training at TGAC this October.

For more information and to register please, visit the ARCHER training page.

Mozilla IndiaHop on the FirefoxOS Bus!

Yes, you heard it right: the FirefoxOS Bus is coming to India!

Why FirefoxOS Bus?
A Firefox OS-themed bus will tour target cities across India. The FirefoxOS Bus embodies the spirit of Mozilla and the awesomeness of Firefox, spreading the goodness of Firefox OS. Firefox OS aims to bring the next 2 billion people online by providing a first smartphone experience at an affordable price. With this trip we plan to reach our users and go where our community is, sharing the story of Mozilla Firefox and maximizing the reach and impact of the Open Web.

Who Will Be On The Bus?
Anyone who loves Firefox OS and is excited to spread the word can be part of the journey! There will be several ways to join this epic trip:

Mobilizer Crew:
Onboard: The onboard Mobilizer Crew are some of the *awesomest* Mozillians, who will be part of the entire trip. This team will be a small group with tons of energy and Mozilla expertise, engaging users and the community along the way. Sign-up to be a Mobilizer Crew.

Regional Coordinators: Regional coordinators will help plan activities in their respective cities. They will host the FirefoxOS Bus, mobilize the local community to join the fun, and suggest and help run public events. No need to be part of the bus travel. Sign-up to be a Regional Coordinator.

Hop-on & Hop-off: Interested community members can hop on the bus in their city and ride to the next destination!

Sounds exciting? There is more: public shows, swag, surprise gifts, an online campaign and a lot more fun packed together! And most importantly, our celebrity rockstar, ‘Firefox’, will be part of this trip!

How To Get Involved?
* Sign-up to be a Mobilizer Crew. (Involves Travel)
* Sign-up to be a Regional Coordinator. (No Travel Involved)
* Sign-up to Hop-on & Hop-off: (Watch this space later!)

Where will the Bus be traveling & how long is the trip?
This will be a ~10 day trip, +/- 2 days. Below are the tentative route & dates:

CITY/REGION  TRAVEL TIME (+/- 2 hours) – DISTANCE  DATE
Inaugural Event/ Crew Meetup TBD
Hyderabad -> Vizag 10 h – 591 km TBD
Vizag -> Bhubaneshwar 7 h – 443 km TBD
Bhubaneshwar -> Kolkata 6 h – 442 km TBD
Kolkata -> Lucknow(Varanasi) 14 h – 1000 km TBD
Lucknow/Varanasi -> (via Agra?) Delhi 7 h  – 473 km TBD
Delhi -> Jaipur 4 h – 264 km TBD
Jaipur -> Indore 10 h – 645 km TBD
Indore -> Nashik 7 h – 414 km TBD
Nashik -> Mumbai 4 h – 166 km TBD
Mumbai -> Pune 4 h – 148 km TBD
Pune -> Goa (?) 8 h – 457 km TBD
Goa -> Bangalore 10 h – 558 km TBD
Bangalore -> Hyderabad 9 h – 569 km TBD

NOTE: The above route and dates are tentative and rough. Stay tuned for final details.


While the FirefoxOS Bus is in the making, check out the MozBus from Japan!

Be part of the epic journey! Make awesome friends and create a positive impact. Mark your calendars; all you need is lots of energy, and the rest will be taken care of: awesome food, wi-fi, safety and everything you need for the trip :)

Have questions, ideas or feedback on the route, or are you just excited to talk to us? Please reach out to:
Abid Aboobaker, Mission Commander at FirefoxOSBus AT mozillaindia DOT org

The Bugzilla UpdateRelease of Bugzilla 4.4.6, 4.2.11, 4.0.15, and 4.5.6

Today we have several new releases for you!

All of today’s releases contain security fixes. We recommend that all Bugzilla administrators read the Security Advisory that was published along with these releases.

Bugzilla 4.4.6 is our latest stable release. It contains various useful bug fixes and security improvements.

Bugzilla 4.2.11 is a security update for the 4.2 branch.

Bugzilla 4.0.15 is a security update for the 4.0 branch.

Bugzilla 4.5.6 is an unstable development release. This release has not received QA testing from the Bugzilla Project, and should not be used in production environments. Development releases exist as previews of the features that the next major release of Bugzilla will contain. They also exist for testing purposes, to collect bug reports and feedback, so if you find a bug in this development release (or you don’t like how some feature works) please tell us.

WebmakerTake the Lights On Afterschool Webmaker Challenge

On Oct. 23, more than 1 million people around the United States will take part in the 15th annual Lights On Afterschool campaign. It’s an effort led by the Afterschool Alliance to celebrate afterschool programs and their important role in the lives of young people, their families and communities.

Mozilla is excited to be a partner in this initiative! We created the Lights On Afterschool Webmaker Challenge, a fun activity that gives practitioners and youth in afterschool programs a chance to unleash their creativity (and learn some coding) by making their own digital afterschool posters using Thimble, our educational code editor.

Celebrating web literacy in afterschool programs across the US

We want educators and young people to see the web as a platform for creativity. Digital media and technology are constantly changing the way young people learn and interact with the world around them, and it’s vital that we provide them with the skills and know-how required to read, write, and participate effectively on the web.

The Webmaker Challenge is a simple, fun activity to help afterschool programs develop students’ webmaking and digital literacy skills. Staff and students will design their own digital poster to share the things they love about their afterschool program, while also learning a bit of HTML, CSS and concepts including remix and collaboration.

We created a step-by-step teaching guide to help afterschool facilitators and practitioners through every step in the process. They’ll work with students to share and reflect on their afterschool experiences, then they’ll create poster sketches, choose media and other images, remix and publish their digital posters! You can do this project as a group with one computer, or in teams if you’re fortunate enough to have multiple computers. If you can’t access the Internet, you can always try an activity from the Lo-Fi No-Fi Kit instead to teach other web literacy skills.

Share new skills and earn money for your afterschool programs

As an added bonus, six programs that participate in the Lights On Afterschool Webmaker Challenge will be chosen at random to win $500! Select the box for the Webmaker Challenge when registering your event for Lights On Afterschool, or complete the activity and earn the Web Literacy Skill Sharer badge to win. See more here.

How to get involved

WebmakerOne Less Password

At Webmaker, we’re experimenting with a method that allows people to log in without a password by using a handshake over email or text message instead. Our goal is to reduce the frustrations that come with password management for our users. We also aim to reduce the security risks that come from weak and stolen passwords.

Webmaker will launch the new login experience soon. Check back here for updates, or join the discussion on Discourse.

Like many of you, we grew tired of passwords long ago. It’s a challenge to make them strong and a daily hassle to remember them. We often hear news of passwords stolen, even from tech-savvy companies with very sensitive information.

We wondered – why do we still use passwords? Aren’t there better ways to log in?

A quick search revealed that a growing number of people ask the same questions. Below, we discuss some of the existing password tools and alternatives, like Lastpass and social sign-on with Facebook. We share details on our ideas and solution to this problem. But first, here’s the short version of where we landed.

No Passwords to Forget. No Passwords to Steal.

For Webmaker’s platform, we designed a different login experience. New visitors can join simply by entering their email address and choosing a username. They can immediately explore Webmaker and use the tools, confirming their account later.


When people return to the site later, they log in with two steps. First, they identify themselves with their email address or username. Second, Webmaker reaches out to them with an email that includes a link to log in. If they check this email on a phone and want to log in on a desktop, they can copy a short key instead. No passwords to forget. No passwords to steal.

We added another nice feature to make Webmaker even easier to use. The log in email actually offers two links: “Sign in”, which is great for public computers, and “Sign in & remember me”, which lets you stay logged in on your home computer or other devices. Once you enter the site, there’s no need to check a box that says, “keep me logged in.”


Lost Password, Found Solution

We started this work like we start most projects, by asking obvious questions. Why do we log in to sites? What do passwords do for us? We found that people log in for two primary reasons: to identify themselves and to keep other people and spambots out of their accounts. A password is a portable way to uniquely and secretly say, “Hello website, this is really me.” Passwords can do this, but they are not the only way to identify ourselves and prevent other people from pretending to be us.

One day Kavita created an account for a new site, knowing she probably wouldn’t return for months. When asked for a password, she mashed her keyboard like a cat playing a piano. A friend next to her stared and stuttered, “But, how will you get back in?” She replied, “I’ll just reset the password like I do for every other site that I use only a few times a year.”

At Webmaker, we considered experiences like Kavita’s and wondered: what if we skipped the password and deliberately used the password recovery process instead? Could we turn it inside out, reduce the clicks, and make this annoying experience a positive one? If so, an answer to broken passwords might be hiding right at the center of the problem.

As we explored this idea, we quickly learned that other people have written about this as well. It seems that a few sites do something similar for mobile users. This solution recycles existing technology and experiences, and just requires some careful design to make it smooth. At Webmaker, we decided to push the idea further and make it our primary form of login.

You can read more about the design of the system and the user experience in a post by Matthew Willse.

Remix for Your Web Service

Is this secure? The system rearranges existing technology and experiences to help us avoid the weakest links in our security: weak passwords, vulnerable password storage, and passwords reused across many sites. It is more secure than the password-based logins most sites rely on today.

If designed and documented well, this solution could be useful for other sites as well, eliminating the burden on people who run web services to keep passwords secure. This approach reduces the need to maintain code for social sign-on services, and it decreases the vulnerability of stored passwords. The links and keys emailed to each user are temporary and expire after a short interval or after repeated attempts to use them. You can find a more technical discussion of this in a post by Chris DeCairos.

Common Alternatives

Many efforts to make the web more secure also make it less friendly to use, proving that technology provides only one part of good security; savvy design is often better than brute force. For example, some sites require longer and more complex passwords, which only increases security in theory; people will find shortcuts that make things easier for themselves but less secure. They might use familiar words and dates, or repeat passwords across sites. They might keep passwords on their desk, or worse, on their computer’s desktop.

Social sign-on using Facebook, Google or Twitter offers one alternative way to identify ourselves. But while social sign-on offers convenience, it puts our privacy in the hands of a few companies that arguably know too much about us and our life online. Social sign-on can also be inconvenient for people who use public computers at a library or share devices with their family. Nobody wants to log out in order to log in. For site developers, social sign-on can also be a challenge to maintain, as the implementation varies between services and periodically changes.

Services that store and remember your passwords, like Lastpass, are only a partial solution; they can’t help Webmaker and other sites keep your passwords safe. And like social sign-on, they can also be difficult for people who use public computers or shared devices.

Feedback & Next Steps

Right now we support email. We plan to also support phone numbers and SMS for an easier login experience on mobile phones. In addition, we made passwords optional, smoothly offering a different experience to users based on their preference. We are curious to see which option users prefer, and why.

We will continue to test this system across a range of scenarios and devices and iterate improvements. We welcome your feedback, ideas, and bug reports. Post your questions in Discourse or file bugs in Bugzilla.

QMOFirefox Aurora 34 Testday results

Hello everyone!

On Friday, October 3rd, we held a new Testday, for Firefox 34 Aurora. Thank you all for your effort and involvement in test execution, bug verification and bug triage. We’ve had multiple tests run in MozTrap, some bugs triaged and verified, and an enhancement confirmed in Bugzilla.

Detailed results about the work done can be found in this etherpad.

Special thanks go to: TeoVermesan, Cristina Madaras, Gabriela (gaby2300), kenkon (Aurore), Rosencrantz, and all our moderators! Your work is always greatly appreciated.

WebmakerMozFest 2014: Spotlight on “Policy & Advocacy”

This is the seventh post in a series featuring interviews with the 2014 Mozilla Festival “Space Wranglers,” the curators of the many exciting programmatic tracks slated for this year’s Festival.

For this edition, we chatted with Dave Steer, Alina Hua, and Stacy Martin, the Space Wranglers for the Policy & Advocacy track. Participants in this track will help build the web we want by protecting and advancing the free and open web for everyone.

What excites you most about your track?
This is a critical time for the Internet. On one hand, it has become an integral part in the lives of billions of people. On the other hand, it is a fragile resource that is being undermined by interests that want to make it less free and open. We are excited to bring together the heroes of the Internet — the policy and advocacy community of developers, activists, and everyone fighting for a free and open Internet — to work together, share ideas, and celebrate the web we want. We’re excited for everyone to be together in a physical space that is interactive and inspiring, and that enables us to learn from each other.

We envision a space that invites all attendees to share questions and ideas for the web they want, and that features sessions inviting people to work together to solve problems. Just imagine walls covered with thought-provoking, challenging questions and a mass of people working together to address some of the most vital issues of our times. That will be the Policy & Advocacy track.

We’ll also be running ‘fireside chats’ with leading thinkers about the current state of the web. Our track will aim to bring together the policy and advocacy community, discussing issues and topic areas that are important to the health of the Internet.

Who are you working with to make this track happen?
We’ve seen a ton of participation and collaboration among the community to make the Policy & Advocacy track happen. This participation has been widespread: from advocates in Europe to cryptographers and technologists in North America to Mozilla Reps in virtually every corner of the world. It’s wonderful to see the community come to life. In all, the community pulled together more than 60 submissions for MozFest sessions. It was great to see so many community members actively proposing sessions, and to see the participation of new members, such as the Web We Want campaign, a global effort celebrating the 25th birthday of the Web. As a result, we will have sessions that will teach everything from advocacy skills to anti-surveillance techniques. We’ll explore the Web We Want, and we’ll see sessions devoted to enabling it for the youngest of people online.

How can someone who isn’t able to attend MozFest learn more or get involved in this topic?
We’re cooking up ideas of how to enable the community to participate in MozFest, even if they can’t physically be there. We have a few exciting ideas planned — stay tuned.

In the meantime, there are lots of things you can do to be an advocate for the open web. Here are a few things to get you started:


Head on over to the MozFest site to register!

Software CarpentryIdeas to Improve Instructor Training

Have you ever learned something new and then had it appear in other areas of your life? After a summer at SWC thinking about how to train better instructors (and how to be a better teacher myself) I get to try discussion-based teaching this quarter at UC Davis.

This summer at SWC we've spent a lot of time discussing "Building A Better Teacher" by Elizabeth Green. This book focuses on training teachers to lead students on a journey of discovery rather than teaching them a series of rules. Green's premise is that teachers tend to tell their students the rules outright, rather than leading them through questions and class discussions that ultimately let the students figure out the rules for themselves.

I just started in the Physics graduate program at UC Davis. Unbeknownst to me, the Physics department has an education research group which has been thinking about how to teach physics with the same philosophy that Green discusses (see this paper describing their methodology and results, summarized in the next paragraph). My fellow first-years and I are lucky enough to teach this course and have spent the last three days training to be instructors.

Physics 7 consists of one hour a week of lecture and five hours of discussion lab. The discussion labs meet twice a week for two and a half hours and are a series of activities that allow the students to use, think through, and extend the equations and concepts they've learned in class, in order to better understand them. While there are quizzes in lecture, the discussion lab grades are based on participation. In class, most activities are discussed in small groups. Each group has a blackboard on which they write up their answers, and then the whole class discusses each group's conclusions. I'm really excited to try this method of teaching and would like to see if we can extend it to SWC.

I haven't actually instructed yet (classes start Thursday) so I'm not ready to reform our instruction (yet) but I can discuss the training. There are three pieces that I think we could easily implement. Note: in the following paragraphs I will refer to the people teaching the instructor training course as 'teachers' and the people being trained as 'instructors'.

  1. Classes were videotaped, and clips of the videos were shown and discussed with the class. Occasionally the video was paused so the teacher could point out a particular teaching method or an issue we might encounter. I could see this being used for anything from viewing what it looks like to give the class an exercise, to going over the solution, to walking around the room, to answering a question, to using the Etherpad, to switching between the shell and something else. Nothing compares to seeing a method in action.
  2. The teachers led an exercise with the instructors as the class. This allowed us both to get a sense of how the class flowed and to see how teachers handled different situations (such as calling on students, differing explanations from different groups, leading a group discussion, walking around the room, etc.). We were also given plenty of time to ask questions so the teacher could further explain why he was doing what he was doing.
  3. Finally, we split a lesson into small pieces which encapsulated both small group discussion and whole class discussion. Each instructor taught a section to the rest of the class. After each section the instructor was asked how he/she thought it went and then the class and teacher gave both positive and negative feedback. The most interesting tidbit I picked up today: If you move around the class so that you are always on the opposite side of the room from whoever is speaking, they will naturally speak to you and include the whole class.

I can't wait to see all this pedagogy in action.