Karl Dubost: IE is aliasing some WebKit properties

DOUBLE RAINBOW. Microsoft Internet Explorer is aliasing some of the WebKit properties. Read their blog post The Mobile Web should just work for everyone.

Two men around a fire

A little while ago I was working for Opera on the Open The Web (aka Web Compatibility) team when Opera decided to run an experiment for Opera on Mobile. A version of the browser was released with aliases for popular WebKit CSS prefixes which were often forgotten by Web developers and name-your-popular-framework-here. I can tell you that it didn't particularly please anyone at Opera, but as Bruce mentions in his blog post:

It’s difficult to argue for ideological purity when a simple aliasing makes the user experience so demonstrably better – and one thing I’ve learned at Opera is most users don’t care two hoots for ideology if their favourite site works better in another browser.

Any Web professional who is shocked by the IE announcement first has to go through a serious introspective assessment of his/her own Web practices. What browser are you using on desktop, on mobile? Do you even know what it is to use a Web browser which is not WebKit-compatible as your main browser, not just for testing? Go use Firefox for Android or IE on mobile for one year and then talk about the issues at stake. Mike on his blog, Hallvord on his, and myself here regularly talk about these issues. We started Webcompat.com specifically because of these issues. The IE team is participating in this. We hope to get wider participation from the Chrome and Opera teams too.

No, what IE has announced is not surprising or even shocking. What really surprises me is that they didn't in fact fully switch to Blink or implement a dual rendering engine. I'm not in the know, but I would not be surprised if it happened one day. That would leave Mozilla in a pretty nasty place on the Web and would probably force Mozilla to switch to Blink.

After wiping away all our tears, if this were announced it would raise an interesting set of questions about Web governance around some projects. For example, you could imagine that Blink becomes a real open source library not mainly controlled by Google. You may be too young to remember, but at the very beginning of the Web most Web clients were using a single common open source library: LibWWW.

The common code library is the basis for most browsers. It contains the network access code and format handling.

Note that I'm not wishing a Blink world, because in the current situation, we don't really have an equilibrium of powers around the Blink project. Libwww was not driven by one company with a few participants. The work for standard organizations like W3C and IETF around Web technology would be dramatically be shifted from specifications to managing issues and pull requests. It would be another social dynamics.


Jeff Walden: mfbt now has UniquePtr and MakeUnique for managing singly-owned resources

Managing dynamic memory allocations in C++

C++ supports dynamic allocation of objects using new. For new objects to not leak, they must be deleted. This is quite difficult to do correctly in complex code. Smart pointers are the canonical solution. Mozilla has historically used nsAutoPtr, and C++98 provided std::auto_ptr, to manage singly-owned new objects. But nsAutoPtr and std::auto_ptr have a bug: they can be “copied.”

The following code allocates an int. When is that int destroyed? Does destroying ptr1 or ptr2 handle the task? What does ptr1 contain after ptr2's gone out of scope?

typedef auto_ptr<int> auto_int;
{
  auto_int ptr1(new int(17));
  {
    auto_int ptr2 = ptr1;
  } // destroy ptr2
} // destroy ptr1

Copying or assigning an auto_ptr implicitly moves the new object, mutating the input. When ptr2 = ptr1 happens, ptr1 is set to nullptr and ptr2 has a pointer to the allocated int. When ptr2 goes out of scope, it destroys the allocated int. ptr1 is nullptr when it goes out of scope, so destroying it does nothing.

Fixing auto_ptr

Implicit-move semantics are safe but very unclear. And because these operations mutate their input, they can’t take a const reference. For example, auto_ptr has an auto_ptr::auto_ptr(auto_ptr&) constructor but not an auto_ptr::auto_ptr(const auto_ptr&) copy constructor. This breaks algorithms requiring copyability.

We can solve these problems with a smart pointer that prohibits copying/assignment unless the input is a temporary value. (C++11 calls these rvalue references, but I’ll use “temporary value” for readability.) If the input’s a temporary value, we can move the resource out of it without disrupting anyone else’s view of it: as a temporary it’ll die before anyone could observe it. (The rvalue reference concept is incredibly subtle. Read that article series a dozen times, and maybe you’ll understand half of it. I’ve spent multiple full days digesting it and still won’t claim full understanding.)
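As a sketch of those mechanics (a hand-rolled illustration of the idea, not the actual mozilla::UniquePtr implementation), a type can forbid copying from ordinary values outright while accepting moves from temporaries:

```cpp
#include <cassert>
#include <utility>

// A minimal move-only owner of a heap-allocated int, illustrating the
// "transfer only from temporaries" idea described above.
class OwnedInt
{
  int* ptr_;

public:
  explicit OwnedInt(int* p) : ptr_(p) {}

  // Copying from an lvalue is forbidden outright...
  OwnedInt(const OwnedInt&) = delete;
  OwnedInt& operator=(const OwnedInt&) = delete;

  // ...but a temporary (rvalue) can donate its resource, because
  // nothing else will observe the temporary afterward.
  OwnedInt(OwnedInt&& other) : ptr_(other.ptr_) { other.ptr_ = nullptr; }
  OwnedInt& operator=(OwnedInt&& other) {
    if (this != &other) {
      delete ptr_;
      ptr_ = other.ptr_;
      other.ptr_ = nullptr;
    }
    return *this;
  }

  ~OwnedInt() { delete ptr_; }

  int* get() const { return ptr_; }
};

// Returning by value produces a temporary, so the caller can take
// ownership without any copy ever being possible.
OwnedInt MakeOwned(int value) { return OwnedInt(new int(value)); }
```

Trying `OwnedInt b(a);` on a named `a` fails to compile, while `OwnedInt b(MakeOwned(5));` works: that is exactly the copy/move distinction the smart pointer needs.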

Presenting mozilla::UniquePtr

I’ve implemented mozilla::UniquePtr in #include "mozilla/UniquePtr.h" to fit the bill. It’s based on C++11's std::unique_ptr (not always available to us right now). UniquePtr provides auto_ptr's safety while providing movability but not copyability.

UniquePtr template parameters

Using UniquePtr requires specifying the type being owned and what will ultimately be done to delete it. The type is the first template argument; the deleter is the (optional) second. The default deleter performs delete for non-array types and delete[] for array types. (The latter improves upon auto_ptr and nsAutoPtr [and the derivative nsAutoArrayPtr], which fail horribly when used with new[].)

UniquePtr<int> i1(new int(8));
UniquePtr<int[]> arr1(new int[17]());

Deleters are callable values that are called whenever a UniquePtr's object should be destroyed. If a custom deleter is used, it's a really good idea for it to be empty (per mozilla::IsEmpty<D>) so that UniquePtr<T, D> is as space-efficient as a raw pointer.

struct FreePolicy
{
  void operator()(void* ptr) {
    free(ptr);
  }
};

{
  void* m = malloc(4096);
  UniquePtr<void, FreePolicy> mem(m);
  int* i = static_cast<int*>(malloc(sizeof(int)));
  UniquePtr<int, FreePolicy> integer(i);
} // integer.getDeleter()(i) is called
  // mem.getDeleter()(m) is called

Basic UniquePtr construction and assignment

As you’d expect, no-argument construction initializes to nullptr, a single pointer initializes to that pointer, and a pointer and a deleter initialize embedded pointer and deleter both.

UniquePtr<int> i1;
assert(i1 == nullptr);
UniquePtr<int> i2(new int(8));
assert(i2 != nullptr);
UniquePtr<int, FreePolicy> i3(nullptr, FreePolicy());

Move construction and assignment

All remaining constructors and assignment operators accept only nullptr or compatible, temporary UniquePtr values. These values have well-defined ownership, in marked contrast to raw pointers.

class B
{
  int i;

public:
  B(int i) : i(i) {}
  virtual ~B() {} // virtual required so delete (B*)(pointer to D) calls ~D()
};

class D : public B
{
public:
  D(int i) : B(i) {}
};

UniquePtr<B> MakeB(int i)
{
  typedef UniquePtr<B>::DeleterType BDeleter;

  // OK to convert UniquePtr<D, BDeleter> to UniquePtr<B>:
  // Note: For UniquePtr interconversion, both pointer and deleter
  //       types must be compatible!  Thus BDeleter here.
  return UniquePtr<D, BDeleter>(new D(i));
}

UniquePtr<B> b1(MakeB(66)); // OK: temporary value moved into b1

UniquePtr<B> b2(b1); // ERROR: b1 not a temporary, would confuse
                     // single ownership, forbidden

UniquePtr<B> b3;

b3 = b1;  // ERROR: b1 not a temporary, would confuse
          // single ownership, forbidden

b3 = MakeB(76); // OK: return value moved into b3
b3 = nullptr;   // OK: can't confuse ownership of nullptr

What if you really do want to move a resource from one UniquePtr to another? You can explicitly request a move using mozilla::Move() from #include "mozilla/Move.h".

int* i = new int(37);
UniquePtr<int> i1(i);

UniquePtr<int> i2(Move(i1));
assert(i1 == nullptr);
assert(i2.get() == i);

i1 = Move(i2);
assert(i1.get() == i);
assert(i2 == nullptr);

Move transforms the type of its argument into a temporary value type. Move doesn’t have any effects of its own. Rather, it’s the job of users such as UniquePtr to ascribe special semantics to operations accepting temporary values. (If no special semantics are provided, temporary values match only const reference types as in C++98.)
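That "no effects of its own" point is easy to see in code. Here is a hedged sketch using the std:: equivalents (std::unique_ptr and std::move), since mozilla/Move.h and mozilla::UniquePtr aren't available outside the Mozilla tree; the semantics described above are the same:

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Demonstrates that std::move (like mozilla::Move) is only a cast to
// a temporary-value type: by itself it moves nothing. Returns a
// non-null pointer on success, purely so the result can be checked.
int* MoveDemo() {
  std::unique_ptr<int> a(new int(37));
  int* raw = a.get();

  (void)std::move(a);     // just a cast; a still owns the int
  assert(a.get() == raw);

  // The move happens when a consumer -- here unique_ptr's move
  // constructor -- accepts the temporary-value type.
  std::unique_ptr<int> b(std::move(a));
  assert(a.get() == nullptr);
  assert(b.get() == raw);
  return raw; // the int itself is freed when b is destroyed
}
```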

Observing a UniquePtr's value

The dereferencing operators (-> and *) and conversion to bool behave as expected for any smart pointer. The raw pointer value can be accessed using get() if absolutely needed. (This should be uncommon, as the only pointer to the resource should live in the UniquePtr.) UniquePtr may also be compared against nullptr (but not against raw pointers).

int* i = new int(8);
UniquePtr<int> p(i);
if (p)
  *p = 42;
assert(p != nullptr);
assert(p.get() == i);
assert(*p == 42);

Changing a UniquePtr's value

Three mutation methods beyond assignment are available. A UniquePtr may be reset() to a raw pointer or to nullptr. The raw pointer may be extracted, and the UniquePtr cleared, using release(). Finally, UniquePtrs may be swapped.

int* i = new int(42);
int* i2;
UniquePtr<int> i3, i4;
{
  UniquePtr<int> integer(i);
  assert(i == integer.get());

  i2 = integer.release();
  assert(integer == nullptr);

  integer.reset(i2);
  assert(integer.get() == i2);

  integer.reset(new int(93)); // deletes i2

  i3 = Move(integer); // better than release()

  Swap(i3, i4); // mozilla::Swap, that is
}

When a UniquePtr loses ownership of its resource, the embedded deleter will dispose of the managed pointer, in accord with the single-ownership concept. release() is the sole exception: it clears the UniquePtr and returns the raw pointer previously in it, without calling the deleter. This is a somewhat dangerous idiom. (Mozilla’s smart pointers typically call this forget(), and WebKit’s WTF calls this leak(). UniquePtr uses release() only for consistency with unique_ptr.) It’s generally much better to make the user take a UniquePtr, then transfer ownership using Move().
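That recommended pattern, sketched with the std:: equivalents (std::unique_ptr and std::move) standing in for mozilla's types:

```cpp
#include <cassert>
#include <memory>
#include <utility>

// A consumer that takes ownership by accepting a unique pointer by
// value: callers must visibly move ownership in at the call site --
// unlike release(), which hands back a raw pointer that's easy to
// leak or double-delete.
int ConsumeAndRead(std::unique_ptr<int> p) {
  return *p; // the int is deleted when p goes out of scope here
}
```

A caller then writes `ConsumeAndRead(std::move(owner));`, and after the call `owner` is observably null: the transfer of ownership is explicit in the source.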

Array fillips

UniquePtr<T> and UniquePtr<T[]> share the same interface, with a few substantial differences. UniquePtr<T[]> defines an operator[] to permit indexing. As mentioned earlier, UniquePtr<T[]> by default will delete[] its resource, rather than delete it. As a corollary, UniquePtr<T[]> requires an exact type match when constructed or mutated using a pointer. (It’s an error to delete[] an array through a pointer to the wrong array element type, because delete[] has to know the element size to destruct each element. Not accepting other pointer types thus eliminates this class of errors.)

struct B {};
struct D : B {};
UniquePtr<B[]> bs;
// bs.reset(new D[17]()); // ERROR: requires B*, not D*
bs.reset(new B[5]());
bs[1] = B();

And a mozilla::MakeUnique helper function

Typing out new T every time a UniquePtr is created or initialized can get old. We’ve added a helper function, MakeUnique<T>, that combines new object (or array) creation with creation of a corresponding UniquePtr. The nice thing about MakeUnique is that it’s in some sense foolproof: if you only create new objects in UniquePtrs, you can’t leak or double-delete unless you leak the UniquePtr's owner, misuse a get(), or drop the result of release() on the floor. I recommend always using MakeUnique instead of new for single-ownership objects.

struct S { S(int i, double d) {} };

UniquePtr<S> s1 = MakeUnique<S>(17, 42.0);   // new S(17, 42.0)
UniquePtr<int> i1 = MakeUnique<int>(42);     // new int(42)
UniquePtr<int[]> i2 = MakeUnique<int[]>(17); // new int[17]()

// Given familiarity with UniquePtr, these work particularly
// well with C++11 auto: just recognize MakeUnique means new,
// T means single object, and T[] means array.
auto s2 = MakeUnique<S>(17, 42.0); // new S(17, 42.0)
auto i3 = MakeUnique<int>(42);     // new int(42)
auto i4 = MakeUnique<int[]>(17);   // new int[17]()

MakeUnique<T>(...args) computes new T(...args). MakeUnique of an array takes an array length and constructs the correspondingly-sized array.

In the long run we probably should expect everyone to recognize the MakeUnique idiom so that we can use auto here and cut down on redundant typing. In the short run, feel free to do whichever you prefer.


UniquePtr was a free-time hacking project last Christmas week that I mostly finished, but ran out of steam on when work resumed. Only recently have I found time to finish it up and land it, yet we already have a couple hundred uses of it and MakeUnique. Please add more uses, and make our existing new code safer!

A final note: please use UniquePtr instead of mozilla::Scoped. UniquePtr is more standard, better-tested, and better-documented (particularly on the vast expanses of the web, where most unique_ptr documentation also suffices for UniquePtr). Scoped is now deprecated — don’t use it in new code!

Frédéric Harper: HTML5 to the next level, the recording of my presentation at Montreal Python


Last month I spoke at the monthly Python Montréal meetup about, guess what, Firefox OS. I already uploaded the slides online, and now here is the recording of my talk.

Thanks again to my friend Christian Aubry, who, as always, did an amazing job with the recording. Thanks also to Python Montréal for having me and Google Montréal for sponsoring the event.

HTML5 to the next level, the recording of my presentation at Montreal Python is a post on Out of Comfort Zone from Frédéric Harper

Kent James: Thunderbird’s Future: the TL;DR Version

In the next few months I hope to do a series of blog posts about Mozilla’s Thunderbird email client and its future. Here’s the TL;DR version (though still pretty long). These are my personal views; I have no authority to speak for Mozilla or for the Thunderbird project.

Current Status

  • Thunderbird usage is growing, we have a strong core team, and expect to remain relevant to the internet for the foreseeable future. Thunderbird is mission critical to tens of millions of users.
  • The last two “community-developed” Thunderbird releases, 24 and 31, while successful as stability releases, had few new features. The enormous effort required to maintain that stability left little time for feature development.
  • Thunderbird is an important piece, under the Mozilla Manifesto, of maintaining an open internet. But it is not “The Web” and is outside the current Mozilla mission “to promote openness, innovation & opportunity on the Web.” Mozilla and the Thunderbird team need to better define the implications of that.
  • Mozilla’s strategic focus on a “Web” that excludes Thunderbird has indirectly resulted in dis-empowerment of the Thunderbird team in a variety of ways. This is becoming an existential threat to the product that needs addressing.

Where We Need to Go

  • Thunderbird should be a full-featured desktop personal information management system, incorporating messaging, calendar, and contacts. We need to incorporate the calendaring component (Lightning) by default, and drastically improve contact management.
  • We should be actively promoting open internet standards in messaging, calendaring, and contacts through product implementations as well as advocacy and standards development.
  • Our product should continually adapt to changing internet usage patterns and issues, including messaging security challenges and mobile interoperability.
  • We need to focus on the needs of our existing user base through increased reliability and performance, as well as adding long-requested features that are expected of a full-featured application.

How We Get There

  • Three full-time developers are needed to ensure a stable core base, and allow forward progress on the minimum feature set expected of us.
  • We cannot reasonably expect Firefox and MoCo to subsidize our operations, so we need to raise income independently, through donations directly from our users.
  • We are proudly Mozillians and expect to remain under the Mozilla umbrella, but the current governance structure, reporting through a Web-focused corporate management, is dis-empowering and needs conversion to a community-focused model that is focused on the needs of Thunderbird users.
  • We should ask MoFo to fund one person on the Thunderbird team to serve as an advocate for open messaging standards, contributing product code as well as participating publicly in standards development and discussions.

The Thunderbird team is currently planning to get together in Toronto in October 2014, and Mozilla staff are trying to plan an all-hands meeting sometime soon. Let’s discuss the future in conjunction with those events, to make sure that in 2015 we have a sustainable plan for the future.


Vladimir Vukićević: VR and CSS Integration in Firefox

It’s taken a little longer than expected, but the second Firefox build with VR support and preliminary CSS content rendering support is ready. It is extremely early, and the details of the interactions with CSS have not yet been worked out. I’m hoping that experimentation from the community will lead to some interesting results that can help us define what the interaction should be.

These builds support both the Oculus Rift DK1 and DK2 (see DK2 instructions towards the end of the post). No support for other devices is available yet.

API changes since last build

The API for accessing the VR devices has changed, and now uses Promises. Additionally, the “moz” prefix was removed from all methods. Querying VR devices now should look like:

navigator.getVRDevices().then(vrDeviceCallback);

function vrDeviceCallback(devices) {
  // devices is an array of VRDevice objects
}
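To see the shape of that flow outside a browser, here is a hedged stand-alone sketch: the stub below stands in for the real navigator, and the device objects are simplified (real devices in this build are HMDVRDevice and PositionSensorVRDevice instances with much richer interfaces):

```javascript
// Stub standing in for navigator in a VR-enabled Firefox build.
var navigatorStub = {
  getVRDevices: function () {
    return Promise.resolve([
      { deviceName: "Oculus Rift DK2", isHMD: true },
      { deviceName: "DK2 position sensor", isHMD: false }
    ]);
  }
};

// Pick out the head-mounted displays from the full device list.
function selectHMDs(devices) {
  return devices.filter(function (d) { return d.isHMD; });
}

// Promise-based query, as described above.
navigatorStub.getVRDevices().then(function (devices) {
  console.log(selectHMDs(devices).length + " HMD(s) found");
});
```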

CSS content integration

This build includes experimental integration of CSS 3D transforms and VR fullscreen mode. It also allows mixing WebGL content with browser-rendered CSS content. Within a block element that is made fullscreen with a VR HMD device, any children that have the transform-style: preserve-3d CSS property will cause the browser to render the content twice, from different viewpoints. Any elements that don’t have preserve-3d will be stretched across the full display of the output device. For example:

#container { }
#css-square {
  position: absolute;
  top: 0; left: 0;

  transform-style: preserve-3d;
  transform: translate3d(100px, 100px, 100px);
  width: 250px;
  height: 250px;
  background: blue;
}

<div id="container">
   <canvas id="webgl" width="1920" height="1080"></canvas>

   <div id="css-square">Hello World</div>
</div>

Will cause the css-square element to be rendered by the browser in the 3D space, but the WebGL canvas will be rendered only once, in this case underneath all content. No depth buffering is done between elements at this time.

The interaction between CSS transformed elements and the VR 3D space is poorly defined at the moment. CSS defines a coordinate space where 0,0 starts at the top left of the page, with Y increasing downwards, X increasing to the right, and Z increasing towards the viewer (i.e. out of the screen). For 3D content creation using CSS, “camera” and “modelview” elements could be used to provide transforms for getting a normal scene origin (based on the page’s dimensions) and camera position. It should also take care of applying orientation and position information from the HMD.

The browser itself will take care of applying the per-eye projection matrices and distortion shaders. Everything else is currently up to content. (I’ll go into detail exactly what’s done in a followup blog post.) So, a suggested structure could be:

<div id="container">
  <div id="camera">
    <div id="modelview">
      <div id="contentItem1">...</div>
      <div id="contentItem2">...</div>
    </div>
  </div>
</div>

One issue is that there currently is no way to specify the min and max depth of the scene. Normally, these would be specified as part of the projection, but that is inaccessible. For regular CSS 3D transforms, the perspective property provides some of this information, but it’s not appropriate for VR because the browser itself will define the projection transform. In this particular build, the depth will range from -(max(width,height) / 2) * 10 .. +(max(width,height) / 2) * 10. This is a complete hack, and will likely be replaced with an explicit CSS depth range property very soon. Alternatively, it might be replaced by an explicit setDepthRange call on the VR HMD device object (much like the FOV can be changed there).
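To make that interim rule concrete, here is a tiny sketch that just evaluates the formula above (none of this is API; the function name is mine):

```javascript
// Interim depth-range hack described above:
// +/- (max(width, height) / 2) * 10, in CSS pixels.
function depthRange(width, height) {
  var half = Math.max(width, height) / 2;
  return { min: -half * 10, max: half * 10 };
}

var r = depthRange(1920, 1080);
console.log(r.min, r.max); // -9600 9600
```

So a 1920x1080 fullscreen area currently gets a usable depth range of -9600 to +9600.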

I have not yet worked out the full best practices here. Some combination of transform-origin and transform will be needed to set up a useful coordinate space. A simple demo/test is available here.

As before, issues are welcome via GitHub issues on my gecko-dev repo. Additionally, discussion is welcome on the web-vr-discuss mailing list.

DK2 Support

These builds can support the DK2 when it is run in legacy/extended desktop mode. The following steps should enable rendering to the DK2, with orientation/position data:

  1. Install the DK2 runtime, and update the firmware as specified in the manual.
  2. Inside the OculusConfigTool, under Tools -> Rift Display Mode, switch to extended desktop mode
  3. The DK2 display will now appear as a normal display, in a tall-and-narrow configuration (1080 x 1920). You need to change this to a normal 1920x1080 landscape configuration. On Windows, in the NVIDIA Control Panel, you can select the “Portrait” orientation under “Rotate display” for the DK2; this will rotate it so that it becomes 1920x1080. Similar steps should be possible with other video cards.
  4. At this point, you should be able to fire up the Firefox build, get input from the DK2, and go fullscreen on the DK2.
  5. If you can’t, quit Firefox, and kill the “wscript.exe” process that’s running an Oculus script, as well as “ovservice_x64.exe”. This seems to be a service that mediates access to the Rift, and it is not compatible with the older SDK used by the current Firefox build.

Bogomil Shopov: Optimize your GitHub Issues and 4 tricks and facts you should know about GitHub

I wrote an article, which can be found here, about the new GitHub Issues, web development processes, using visual feedback, and some facts about GitHub: big data, GitHub stats, the xkcd comic ban, and more.

Actually I am using one old bug from the input.mozilla.org project (a.k.a. Fjord). It’s a good 4-minute read full of useful stuff and fun.

If you are interested in the tool I am using to optimize GitHub Issues processes, it is available for free for F(L)OSS projects from here, but you should read the article first.

Any thoughts?

Lucas Rocha: The new TwoWayView

What if writing custom view recycling layouts was a lot simpler? This question stuck in my mind since I started writing Android apps a few years ago.

The lack of proper extension hooks in the AbsListView API has been one of my biggest pain points on Android. The community has come up with different layout implementations that were largely based on AbsListView's code but none of them really solved the framework problem.

So a few months ago, I finally set to work on a new API for TwoWayView that would provide a framework for custom view recycling layouts. I had made some good progress but then Google announced RecyclerView at I/O and everything changed.

At first sight, RecyclerView seemed to be an exact overlap with the new TwoWayView API. After some digging though, it became clear that RecyclerView was a superset of what I was working on. So I decided to embrace RecyclerView and rebuild TwoWayView on top of it.

The new TwoWayView is functional enough now. Time to get some early feedback. This post covers the upcoming API and the general-purpose layout managers that will ship with it.

Creating your own layouts

RecyclerView itself doesn’t actually do much. It implements the fundamental state handling around child views, touch events and adapter changes, then delegates the actual behaviour to separate components—LayoutManager, ItemDecoration, ItemAnimator, etc. This means that you still have to write some non-trivial code to create your own layouts.

LayoutManager is a low-level API. It simply gives you extension points to handle scrolling and layout. For most layouts, the general structure of a LayoutManager implementation is going to be very similar—recycle views out of parent bounds, add new views as the user scrolls, layout scrap list items, etc.

Wouldn’t it be nice if you could implement LayoutManagers with a higher-level API that was more focused on the layout itself? Enter the new TwoWayView API.

TWAbsLayoutManager is a simple API on top of LayoutManager that does all the laborious work for you so that you can focus on how the child views are measured, placed, and detached from the RecyclerView.

To get a better idea of what the API looks like, have a look at these sample layouts: SimpleListLayout is a list layout and GridAndListLayout is a more complex example where the first N items are laid out as a grid and the remaining ones behave like a list. As you can see, you only need to override a couple of simple methods to create your own layouts.

Built-in layouts

The new API is pretty nice but I also wanted to create a space for collaboration around general-purpose layout managers. So far, Google has only provided LinearLayoutManager. They might end up releasing a few more layouts later this year but, for now, that is all we have.


The new TwoWayView ships with a collection of four built-in layouts: List, Grid, Staggered Grid, and Spannable Grid.

These layouts support all RecyclerView features: item animations, decorations, scroll to position, smooth scroll to position, view state saving, etc. They can all be scrolled vertically and horizontally—this is the TwoWayView project after all ;-)

You probably know how the List and Grid layouts work. Staggered Grid arranges items with variable heights or widths into different columns or rows according to its orientation.

Spannable Grid is a grid layout with fixed-size cells that allows items to span multiple columns and rows. You can define the column and row spans as attributes in the child views.



The new TwoWayView API will ship with a convenience view (TWView) that can take a layoutManager XML attribute that points to a layout manager class.
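A hypothetical layout resource might look like this (the attribute name and namespace here are my assumptions based on the post, not a final API):

```xml
<!-- Hypothetical snippet: TWView picks its layout manager up from
     XML, so resource qualifiers (screen size, orientation) can select
     different layouts. The exact attribute name may differ. -->
<org.lucasr.twowayview.TWView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/list"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layoutManager="ListLayoutManager" />
```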


This way you can leverage the resource system to set the layout manager depending on device features and configuration via styles.

You can also use TWItemClickListener to get ListView-style item (long) click listeners. You can easily plug in support for those in any RecyclerView (see sample).

I’m also planning to create pluggable item decorations for dividers, item spacing, list selectors, and more.

That’s all for now! The API is still in flux and will probably go through a few more iterations. The built-in layouts definitely need more testing.

You can help by filing (and fixing) bugs and giving feedback on the API. Maybe try using the built-in layouts in your apps and see what happens?

I hope TwoWayView becomes a productive collaboration space for RecyclerView extensions and layouts. Contributions are very welcome!

Daniel Stenberg: Me in numbers, today

Number of followers on twitter: 1,302

Number of commits during the last 365 days at github: 686

Number of publicly visible open source commits counted by openhub: 36,769

Number of questions I’ve answered on stackoverflow: 403

Number of connections on LinkedIn: 608

Number of days I’ve committed something in the curl project: 2,869

Number of commits by me, merged into Mozilla Firefox: 9

Number of blog posts on daniel.haxx.se, including this: 734

Number of friends on Facebook: 150

Number of open source projects I’ve contributed to, openhub again: 35

Number of followers on Google+: 557

Number of tweets: 5,491

Number of mails sent to curl mailing lists: 21,989

TOTAL life achievement: 71,602

Brian Warner: To Infinity And Beyond!

It’s been a great four and a half years at Mozilla, where I’ve had the privilege to work with the wonderful and brilliant people in Labs, Jetpack, Identity, and most recently Cloud Services. I’m grateful to you all.

Now it’s time for me to move on. This Friday will be my last day in the office (but certainly not as a Mozillian!), and this blog will probably be closed down or frozen at that time. You can reach me at warner@lothar.com, and my home blog lives at http://www.lothar.com/blog .

Mozilla is an amazing place, and will always be in my heart. Thank you all for everything!

Zbigniew Braniecki: Reducing MozL10n+Gaia technical debt in Firefox OS 2.1 cycle

Firefox OS is becoming a more mature platform, and one of the components that is maturing with it is the mozL10n library.

In 2.0 cycle, which is already feature complete, we introduced the codebase based on the L20n project.

The 2.1 cycle we’re currently in is a major API cleanup effort. We’re reviewing how Gaia apps use mozL10n, migrating them to the new API and minimizing the code complexity.

Simplifying the code responsible for localizability of Firefox OS is crucial for our ability to maintain and bring new features to the platform.

There are four major areas we’re working on:

  • mozL10n.translate – with introduction of Mutation Observer, we want to phase out manual DOM translation calls.
  • mozL10n.localize – this function is extremely hard to maintain and does enough “magic” to confuse devs.
  • mozL10n.get – manual l10n gets are the biggest cause of bugs and regressions in our code. They are synchronous, not retranslatable, and badly misused.
  • mock_l10n – many apps still use a custom MockL10n class in tests, and some even use real MozL10n code in tests. This makes it harder to maintain tests and to develop new l10n features.

We’re working on all four areas and would love to get your help.

No matter if you are a Gaia app owner or if you’ve never written a patch for Gaia: if you know JavaScript, you can help us!

All of those bugs have instructions on how to start fixing, and I will be happy to mentor you.

We have time until October 13th. Let’s get Gaia ready for the next generation features we want to introduce soon! :)

Justin Crawford: Vouched Improvements on Mozillians.org

Back in October I wrote a few blog posts describing a significant problem with the way we admit new members into the Mozillians.org community platform. Yesterday the Mozillians.org team fixed it!

Before yesterday, everyone with a “vouched” account on Mozillians.org was empowered to vouch for others. But we never explained what it meant to vouch for someone: what it implied, what it granted. As a result, the standard for being vouched was arbitrary, the social significance of being vouched was diluted, and the privileges granted to vouched users were distributed more widely than they ought to be.

Yesterday the Mozillians.org development team released a major refactor of the vouching system. For the first time we have a shared definition and understanding of vouching: A vouch signals participation and contribution in Mozilla’s community, and grants access to content and systems not available to the general public.

The new vouch system includes features that…

  • ask a “voucher” to explain to the community why they are vouching someone
  • grant the “vouching” privilege only to people who have themselves been vouched multiple times
  • remove “legacy vouches” from accounts that were vouched before we agreed what vouching meant and whose current contributor status can’t be easily verified

It is much clearer now who can access non-public information using Mozillians.org (people who have been vouched because they participate and contribute to Mozilla) and how that list of people can grow (through individual judgments by people who have themselves been vouched numerous times).

When we know the composition of a network and understand how it will grow, we can make better decisions about sharing things with the network. We can confidently choose to share some things because we understand whom we’re sharing with. And we can reasonably choose to withhold some things for the very same reason. Understanding a network simultaneously encourages more sharing and reduces inadvertent disclosure.

Thanks to the Mozillians.org development team for making these excellent improvements!

Wesley Johnston: Better tiles in Fennec

We recently reworked Firefox for Android's homescreen to look a little prettier on first run by shipping "tile" icons and colors for the default sites. In Firefox 33, we're allowing sites to designate their own tile images and colors by supporting the msapplication-TileImage and msapplication-TileColor meta tags in Fennec. So, for example, you might start seeing tiles that look like:

appear as you browse. Sites can add these with just a little markup in the page:

<meta name="msapplication-TileImage" content="images/myimage.png"/>
<meta name="msapplication-TileColor" content="#d83434"/>

As you can see above in the Boston Globe tile, sometimes we don’t have much to work with. Firefox for Android already supports the sizes attribute on favicon links, and our fabulous intern Chris Kitching improved things even more last year. In the absence of a tile, we’ll show a screenshot. If you’ve designated that Firefox shouldn’t cache the content of your site for security reasons, we’ll use the most appropriate favicon size we can find and pull colors out of it for the background. But if sites can provide us with this information directly, it’s 1) much faster and 2) gives much better results.
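For reference, the sizes attribute on favicon links looks like this (a minimal sketch; the file names are made up):

```html
<!-- Multiple favicons at different sizes; the browser picks
     the most appropriate one. File names are hypothetical. -->
<link rel="icon" href="favicon-64.png" sizes="64x64">
<link rel="icon" href="favicon-128.png" sizes="128x128">
```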

AFAIK, there is no standard spec for these types of meta tags, and none in the works either. It’s a bit of the wild wild west right now. For instance, Apple supports apple-mobile-web-app-status-bar-style for designating the color of the status bar in certain situations, as well as a host of images for use in different situations.

Opera at one point supported using a minimized view-mode media query to designate a stylesheet for thumbnails (sadly they’ve removed all of those docs, so instead you just get a GitHub link to an HTML file there). Gecko doesn’t have view-mode media query support currently, and not many sites have implemented it anyway, but it might in the future provide a standards-based alternative. That said, there are enough good reasons to know a “color” or a few different “logos” for an app or site that it might be worthwhile to come up with some standards-based ways to list these things in pages.

Bogomil Shopov: Capture screenshots and annotate web sites in your Firefox

Usersnap’s visual communication addon was just approved by the addon team and now we have a “green button”. Hooray!

This is a “must have” addon for every web developer who wants to solve problems with web sites faster and who wants to earn more money by shortening the communication time with the client or inside the team.

Collect ideas and share them with your team

  • Capture and annotate every website.
  • Share and discuss it with your team.
  • Communicate visually and save time.

Annotate screenshots and integrate like a pro

  • Sticky notes, a pen, the pixel ruler and more help you express yourself visually.
  • Integrate Usersnap in your existing workflow and connect it to one of our supported third party tools.

Capture screens in all environments

  • Works on every site, including localhost.
  • Works behind firewalls.
  • Works on your password-protected staging and QA servers.

Every developer will get access to the Usersnap Dashboard, where he/she can:

  • Discuss mockups and sketches, annotate them, and push them back to a bug tracking system.
  • See and browse advanced client-side JavaScript errors and XHR logs.
  • Access extended information about the user’s session: OS, browser version, screen and browser size, and installed plugins.


Click here to install it (no restart needed)

Pete Moore: Weekly review 2014-07-30

Highlights from this week

  • Migrated stuff away from aki’s account for vcs sync (both vcs sync machines and people account for mapfiles)
  • Reviewed roll out of l10n with hwine; I discovered a bunch of issues with current production l10n branches that Hal is reviewing and will communicate with partners about, before cut-over
  • Set up weekly meetings with hwine for discussing all vcs sync / repo related issues (etherpad meeting notes)
  • Created patch for esr31 branch vcs sync, awaiting review
  • Started work on unit tests for vcs sync
  • Now testing gecko-git locally - hope to provide patch to Hal this week
  • Rather a lot of interrupts this week:
    • Windows try issues due to ash activity (patch created)
    • Windows builder issues
    • Legacy mapper timeouts (patch created)
    • Naughty slaves (terminated)
    • Updating wiki docs, puppet code etc re: scl1 death
    • Code reviews for Callek, Ben

Goals for next week:

Bugs I created this week:

Other bugs I updated this week:

Karl Dubost: Reducing your bandwidth bill by customizing 404 responses

In this post, you will learn how to reduce the bandwidth cost for both the user and the server owner. But first we need to understand the issue a bit (in case you already know all about it, you can jump to the tips at the end of the blog post).

Well-Known Location Doom Empire

A long time ago, in 1994, robots.txt was introduced because of a badly behaved spidering program, and it was quickly adopted by WebCrawler, Lycos and the other search engines of the time. Web clients could then first inspect the robots.txt at the root of the Web site and avoid indexing the sections of the Web site which were declared "not welcome" to Web spiders. This file was put at the root of the Web site, http://example.org/robots.txt. It is called a "well-known location" resource: the HTTP client expects to find something at that address when doing an HTTP GET.

Unfortunately, many of these resources have been created since then. The issue is that they impose on server owners certain names they might have wanted to use for something else. Let's say, as a Web site owner, I decide to create a Web page /contact at the root of my Web site. One day, a powerful company decides that it would be cool if everyone had a /contact with a dedicated format. I am then forced to adjust my own URI space to avoid conflicting with this new de facto popular practice. We usually say that this clutters the Web site's namespace.

What are the other common resources which have been created since robots.txt?

  • 1994 /robots.txt
  • 1999 /favicon.ico
  • 2002 /w3c/p3p.xml
  • 2005 /sitemap.xml
  • 2008 /crossdomain.xml
  • 2008 /apple-touch-icon.png, /apple-touch-icon-precomposed.png
  • 2011 /humans.txt

Note that if in the future you would like to create a new well-known resource, RFC 5785 (Defining Well-Known Uniform Resource Identifiers (URIs)) was created specifically to address this issue.

Bandwidth Waste

In terms of bandwidth, why is this an issue? These files are most of the time requested by autonomous Web clients. When an HTTP client requests a resource which is not available, the HTTP server will send back a 404 response. These responses can be very simple light text or a full HTML page with a lot of code.

Google evaluated that the bandwidth wasted by missing apple-touch-icon requests on mobile was 3% to 4%. This means that the server is sending useless bits on the wire (a cost for the site owner), and the same goes for the client receiving them (a cost for the mobile owner).
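To get a feel for the numbers, here is a back-of-the-envelope sketch. The two response sizes come from the example responses later in this post; the request volume is an arbitrary assumption for illustration only:

```python
# Back-of-the-envelope savings from replacing a verbose HTML 404
# with a tiny plain-text one for automated well-known requests.

FANCY_404_BYTES = 1926      # Content-Length of the fancy HTML 404
PLAIN_404_BYTES = 9         # Content-Length of "NOT FOUND"
REQUESTS_PER_DAY = 100_000  # hypothetical number of bot requests

saved_per_request = FANCY_404_BYTES - PLAIN_404_BYTES
saved_per_day_mb = saved_per_request * REQUESTS_PER_DAY / 1_000_000

print(saved_per_request)   # bytes saved per request
print(saved_per_day_mb)    # megabytes saved per day
```

At that (made-up) volume, the tiny body saves close to 200 MB of useless traffic per day.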

Is there a way to fix that? Maybe.

Let's Hack Around It

So, instead of bearing the burden of putting every resource in place for every client, what if we sent a very light 404 answer targeted at the Web clients that are requesting the resources we do not have on our server?

Let's say, for the purpose of the demo, that only favicon.ico and robots.txt are available on your Web site. We then need to send a specialized light 404 for the rest of the possible resources.


With Apache, we can use the Location directive. This must be defined in the server configuration file httpd.conf or the virtual host configuration file; it cannot be defined in .htaccess.

<VirtualHost *:80>
    DocumentRoot "/somewhere/over/the/rainbow"
    ServerName example.org
    <Directory "/somewhere/over/the/rainbow">
        # Here some options
        # And your common 404 file
        ErrorDocument 404 /fancy-404.html
    </Directory>
    # your customized errors
    #<Location /robots.txt>
    #    ErrorDocument 404 /plain-404.txt
    #</Location>
    #<Location /favicon.ico>
    #    ErrorDocument 404 /plain-404.txt
    #</Location>
    <Location /humans.txt>
        ErrorDocument 404 /plain-404.txt
    </Location>
    <Location /crossdomain.xml>
        ErrorDocument 404 /plain-404.txt
    </Location>
    <Location /w3c/p3p.xml>
        ErrorDocument 404 /plain-404.txt
    </Location>
    <Location /apple-touch-icon.png>
        ErrorDocument 404 /plain-404.txt
    </Location>
    <Location /apple-touch-icon-precomposed.png>
        ErrorDocument 404 /plain-404.txt
    </Location>
</VirtualHost>
Here I have put the robots.txt and favicon.ico sections in comments, but you can adjust this to your own needs and decide which requests get a specialized error.

The plain-404.txt is a very simple text file with just NOT FOUND inside, and the fancy-404.html is an HTML file helping humans understand what is happening and inviting them to find their way on the site. The result is quite cool.

For a classic mistake, let's say requesting http://example.org/foba6365djh, we receive the HTML error.

GET /foba6365djh HTTP/1.1
Host: example.org

HTTP/1.1 404 Not Found
Content-Length: 1926
Content-Type: text/html; charset=utf-8
Date: Wed, 30 Jul 2014 05:30:33 GMT
ETag: "f7660-786-4e55273ef8a80;4ff4eb6306700"
Last-Modified: Sun, 01 Sep 2013 13:30:02 GMT

<!DOCTYPE html>

And then for a request to, let's say, http://example.org/crossdomain.xml, we get the plain light error message.

GET /crossdomain.xml HTTP/1.1
Host: example.org

HTTP/1.1 404 Not Found
Content-Length: 9
Content-Type: text/plain
Date: Wed, 30 Jul 2014 05:29:11 GMT

NOT FOUND

It is probably possible to do this with nginx too. Be my guest; I'll link to your post from here.
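For what it's worth, a minimal nginx sketch (untested, and assuming the same plain-404.txt and fancy-404.html files as above) might look like this:

```nginx
server {
    listen 80;
    server_name example.org;
    root /somewhere/over/the/rainbow;

    # the common fancy 404 for humans
    error_page 404 /fancy-404.html;

    # a tiny plain-text 404 for well-known resources we do not serve
    location ~ ^/(humans\.txt|crossdomain\.xml|w3c/p3p\.xml|apple-touch-icon(-precomposed)?\.png)$ {
        error_page 404 /plain-404.txt;
    }
}
```

If you have a tested version, do publish it.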


Zack Weinberg: 2014 Hugo Awards ballot

I’m not attending the Worldcon, but I most certainly am voting the Hugos this year, and moreover I am publishing my ballot with one-paragraph reviews of everything I voted on. If you care about this sort of thing you probably already know why. If you don’t, the short version is: Some of the works nominated this year allegedly only made the shortlist because of bloc voting by Larry Correia’s fans, he having published a slate of recommendations.

There’s nothing intrinsically wrong with publishing a slate of recommendations—don’t we all tell our friends to read the stuff we love? In this case, though, the slate came with a bunch of political bloviation attached, and one of the recommended works was written by “Vox Day,” who is such a horrible person that even your common or garden variety Internet asshole backs slowly away from him, but nonetheless he has a posse of devoted fanboys and sock puppets. A frank exchange of views ensued; be glad you missed it, and I hope the reviews are useful to you anyway. If you want more detail, Far Beyond Reality has a link roundup.

I value characterization, sociological plausibility, and big ideas, in that order. I often appreciate ambitious and/or experimental stylistic choices. I don’t mind an absence of plot or conflict; if everyone involved is having a good time exploring the vast enigmatic construction, nothing bad happens, and it’s all about the mystery, that’s just fine by me. However, if I find that I don’t care what happens to these people, no amount of plot or concept will compensate. In the context of the Hugos, I am also giving a lot of weight to novelty. There is a lot of stuff in this year’s ballot that has been done already, and the prior art was much better. For similar reasons, volume N of a series has to be really good to make up for being volume N.

With two exceptions (both movies I didn’t get around to) I at least tried to read/watch everything mentioned below, and where I didn’t finish something, that is clearly indicated. This is strictly my personal strategy for this sort of thing and I am not putting it forward as right or wrong for anyone else.

Links go to the full text of the nominated work where possible, a Goodreads or IMDB page otherwise.

Note to other Hugo voters: I’m making pretty heavy use of No Award. If you mean to do the same, make sure you understand the correct way to use it:

If you want to vote No Award over something, put No Award at the end of your ballot and DO NOT list the things you’re voting No Award over.

The ballot below (unlike the ballot I submitted) includes line items below No Award, so that I have a space to explain my reasons.

Best Novel

  1. Ancillary Justice, Ann Leckie

    Let’s get one thing out of the way first: this book features the Roman Empire (with aspects of Persia and China and probably a couple others I missed), IN SPACE! That was out of fashion for quite some time, and rightly so, on account of having been done to death. However, it has been long enough that a genuinely fresh take is acceptable, if done well, and I happen to think this was done very well. It does not glorify conquest, but neither does it paint the conquerors a uniform shade of Evil; it depicts a multitude of cultures (more than one on the same planet, even!); it has realistic-given-the-setting stakes and conflicts of interest, and believable characters both human and otherwise. I might not have chosen to tell the story out of temporal sequence; one does spend an awfully long time wondering why the protagonist’s goals are as they are. But I see why Leckie did it the way she did it.

    Nearly every review of this novel talks about the thing where the protagonist’s native language doesn’t make gender distinctions and so e is always picking the wrong casemarkers, pronouns, etc. when speaking languages that do. I kinda wish they wouldn’t focus so much on that, because it is just one aspect of a larger problem the protagonist has: e didn’t used to be a human being and is, over the course of the novel, having to learn how to be one. E doesn’t know how to handle eir ‘irrational’ attachment to Lieutenant Seivarden either, and eir problem-solving strategies start out much more appropriate to the entity e used to be and progressively become suited to eir current state. It is also neither the most unusual nor the most interesting thing about Radchaai culture. I don’t believe I have ever before seen Space Romans with a state religion, a system of clientage, or a plot driven by a political quandary that the real Romans (as far as I understand) actually had.

  2. Neptune’s Brood, Charles Stross

    This reminds me of The Witches of Karres, in a good way. Distant-future setting, check. Packed to the gills with ideas, check. Self-contained storyline, check. Extraordinarily high stakes, check. Hardly anyone has any idea what’s going on, but they don’t let that stop them, check. It’s technically a sequel, but it might as well be a standalone novel, which is fortunate, because I bounced off Saturn’s Children really quite hard—the protagonist here is more congenial company, and the society depicted, more plausible. The plot is fundamentally a MacGuffin hunt, but played in a novel and entertaining manner.

    Strictly in terms of the story, this is neck and neck with Ancillary Justice, but it falls short on writing technique. There was a great deal of infodumpage, paired with excessive showing of work—at one point the first-person narrator actually says “I am now going to bore you to death with $TOPIC.” I like geeking out about political economy, but not at the expense of the narrative. (Continuing with the comparison to The Witches of Karres, Schmitz was much better at giving the reader just barely enough detail.) The ending is almost Stephensonian in its abruptness, and a little too pat—it sounded good at the time, but half an hour later I found myself unconvinced that the MacGuffin would have its stated consequences. Finally, there’s an irritating and frequent stylistic quirk, in which multi-clause sentences are, incorrectly, punctuated with colons instead of semicolons.

  3. Parasite, Mira Grant

    You know how the movie studios sometimes do A/B testing to decide which of several different edits of a movie to release as the finished product? I want to do that with this book. Specifically, I want to find out whether or not it’d be a better book if there weren’t page-length quotations from in-universe documents at the beginning of each chapter. These documents make it much clearer what is actually going on, which is a good thing in that the reader might be completely lost without them, and a bad thing in that the reader can (as I did) figure out exactly where the plot is going by the end of chapter 2.

    It’s reasonably well written, modulo some clumsy infodumpage in the middle, and it is a credible attempt to write a stock type of thriller novel (exactly which type would be a spoiler) without making the science completely laughable. Bonus points for actually being familiar with the geography, traffic headaches, and cultural diversity of the San Francisco Bay Area. However, it is, in the end, a stock type of thriller novel, and may not appeal if you don’t already like that sort of thing. (If you get to the end of chapter 2 and find yourself thinking “ugh, this is clearly going to be about X” … yeah, you can go ahead and put the book down.)

  4. No Award

  5. The Wheel of Time, Robert Jordan and Brandon Sanderson

    The rules need to be changed so that completed gargantua-series are their own category, or perhaps some sort of special award, considering there might or might not be any completed gargantua-series in any given year. I’d be showing up at the business meeting with a revision proposal if I were attending the Worldcon.

    Perennial latecomer to cultural phenomena that I am, I didn’t even notice that these novels existed until there were about seven of them, at which point the consensus on USENET (yeah, it was that long ago) seemed to be that they were not terribly inventive and the plot had ceased to make forward progress. So I didn’t bother picking them up. The Hugo voters’ packet contains the entire thing in one giant e-book file. I am on the last leg of a long plane flight as I type this, and I have just finished the first 8% of that e-book, i.e. The Eye of the World. I enjoyed it well enough, as it happens, and will probably continue reading the series as further long plane flights, sick days, and similar present themselves. If I don’t get too fed up with the plot failing to make forward progress, I may even finish it someday.

    However, based on that volume plus the aforementioned USENET commentary and what I’ve heard from other people since, it is not Hugo quality, for three reasons. First and foremost, as I was told so long ago, it is not at all inventive. The setting is exactly what was being skewered by The Tough Guide to Fantasyland. Worse, the plot of the first novel is Campbell’s monomyth, verbatim. (Well, the first few stages of it.) It escapes “Extruded Fantasy Product” territory only by virtue of having a whole bunch of characters who, at this point anyway, are all three-dimensional people with plausible motivations and most of whom are entertaining to watch. Second, I don’t have a lot of patience for whiny teenagers who spend much of the book refusing the call to adventure, distrusting the adults who actually know what’s going on, or both simultaneously. Yes, they’ve spent all their lives hearing stories starring the Aes Sedai as villains, but c’mon, Moiraine repeatedly saves everyone’s life at great personal cost, it could maybe occur to you that there might’ve been a bit of a smear campaign going on? Third, Jordan’s tendency to pad the plot is already evident in this one volume. It did not need to be 950 pages long.

  6. Warbound, Book III of the Grimnoir Chronicles, Larry Correia

    Noir-flavored urban fantasy, set in an alternate 1930s USA where people started developing superpowers (of the comic book variety) in roughly 1880. I would love to read a good detective noir with superheroes and/or fairy tale magic. This, however, is yet another jingoistic retread of the Pacific theater of the Second World War, shifted into the middle of the Great Depression, with The Good Guys (USA! USA! with superheroes) versus The Bad Guys (Imperial Japan circa 1934—an excellent choice if you like your Bad Guys utterly evil, I’ll admit—with supervillains) and a secret society trying to emulate Charles Xavier and failing at it because they’re too busy arguing and scheming. I almost gave up fifty pages into volume I because no sympathetic protagonists had yet appeared. Fortunately, someone whose story I was interested in did appear shortly thereafter, but it was still pretty slim pickings all the way to the end. This is not a case of bad characterization; it’s that most of the characters are unpleasant, petty, self-absorbed, and incapable of empathizing with people who don’t share their circumstances. Additional demerits for setting the story in the Great Depression, and then making nearly everyone we’re supposed to like, wealthy.

    Ironically, one of the most sympathetic characters in the entire trilogy is the leader of Imperial Japan (known almost exclusively as The Chairman)—I think this was because Correia knew he needed a villain who wasn’t cut from purest cardboard, but it didn’t occur to him that he needed to put as much work into his heroes. And by the same token, it did not occur to him that he had failed to convincingly refute his villain’s philosophy: if your villain espouses the rule of the strongest, and is then defeated by superior technology, intellect, and strength of will, that in itself only demonstrates that force of arms is weaker than those things.

    Regarding Larry Correia’s recommendation slate, all I care to say is that his taste in writing by others reflects the flaws in his own writing.

Best Novella

  1. Wakulla Springs, Andy Duncan and Ellen Klages

    Apart from a few unexplained-and-might-not-even-have-happened phenomena near the very end, this could be historical fiction with no speculative elements. Wakulla Springs is a real place and they really did film Tarzan and The Creature from the Black Lagoon there, and they really did turn various animals loose in the Florida swamps when they were done. However, if you squint at it a different way, it’s a fairy tale moved to the twentieth century, not any specific fairy tale but the bones of them all, with movie stars standing in for kings and princes, and rubber-suit monsters standing in for, well, monsters. And the characters are all just fun to be around.

  2. Six-Gun Snow White, Catherynne M. Valente

    This is overtly a fairy tale, specifically Snow White, moved to the nineteenth-century Wild West and shook up in a blender with the style and the form of the stories of Coyote. The first half of it is compelling, and the third quarter works okay, but the conclusion is disappointing. The problem is that if you’re going to retell Snow White, either you have to stick with love conquering all in the end (and you have to set that up proper), or you have to jump the tracks before Snow White eats the poison apple. And if you’re going to set Snow White up as a mythic hero after the fashion of Coyote, maybe you should give her at least some of Coyote’s miraculous ability to get back out of trouble? Valente deliberately avoided doing any of those things and so painted herself into a corner.

    Having said that, I’m still giving this the #2 slot because I really like the concept, and it only fails by not executing successfully on its grand ambitions, which is a thing I am prepared to cut an author some slack for.

  3. Equoid, Charles Stross

    Marvelously creepy cryptozoological meditation on unicorns, their life cycle and role in the ecosystem, and why they must be exterminated. In the Laundry Files continuity, and does not stand alone terribly well. Also, stay away if you dislike body horror.

  4. No Award

  5. The Chaplain’s Legacy, Brad Torgersen

    Remember what I said above about things that have been done already? This is a retread of Enemy Mine, breaking no new ground itself. Characterization is flat and style pedestrian. Not so boring as to make me put it down in the middle, and thankfully didn’t go for the cheap moral that I thought it would, but on the whole, disappointing.

  6. The Butcher of Khardov, Dan Wells

    An extended character study of an antihero of the most boring, clichéd, and overdone type: mistreated due to his size and strength, doubly mistreated due to his uncanny abilities, learns from betrayal to take everything personally, believes the only thing he’s good at is killing people, and in his secret heart, just wants to be loved. Overflowing with manpain. Told out of chronological order for no apparent reason, causing the ending to make no sense. Vaguely folktale-Russia setting (with steampunk and magic) that a better writer could have done something interesting with; I am given to understand that this is in fact the WARMACHINE tabletop wargaming setting. I do not object to tie-in fiction, but neither will I cut it any slack on that account. For instance, the Butcher himself is an official WARMACHINE character; I don’t know if Wells invented him or just got tapped to flesh out his backstory; regardless, I do not consider that a valid excuse for any of the above.

Best Novelette

  1. The Waiting Stars, Aliette de Bodard

    This one is difficult to describe without spoiling it, so I’ll just say that it’s a clash-of-cultures story, set in the extremely far future, and I liked how the two cultures are both somewhat sympathetic despite valuing very different things and being legitimately horrified by the other’s practices. The ending may be laying it on a little too thick, but I don’t know that it can be toned down without spoiling the effect.

  2. The Lady Astronaut of Mars, Mary Robinette Kowal

    Elma, the titular Lady Astronaut, was on the first manned expedition to Mars; that was thirty-odd years ago, and she is now semi-retired, living on Mars, and torn between getting back into space and taking care of her husband, who is dying. Apart from the setting, this could be mainstream literary fiction, and a type of mainstream literary fiction that, as a rule, rubs me entirely the wrong way. This one, however, I liked. The characters all seem genuine, and the setting throws the central question of the plot into sharp relief, forcing us to take it more seriously than we might otherwise have.

  3. The Truth of Fact, the Truth of Feeling, Ted Chiang

    Philosophical musing on the nature of memory and how technological aids change that. This used to be a professional interest of mine, but I didn’t think Chiang did all that much with it here. Told in two intertwined narratives, of which the story of the Tiv is more compelling, or perhaps it is just that the first-person narrator of the other half is kind of a blowhard.

  4. No Award

  5. The Exchange Officers, Brad Torgersen

    Near future plausible geopolitical conflict in low Earth orbit, POV first person smartass grunt-on-the-front-line. Entertaining, but neither memorable nor innovative.

  6. Opera Vita Aeterna, Vox Day

    This isn’t a story; it’s an object lesson in why publishers reject 95–99% of the slush pile. The prose is uniformly, tediously purple, and nearly all of it is spent on description of rooms, traditions, and illuminated manuscripts. The characters haven’t even got one dimension. Nothing happens for the first two-thirds of the text, and then everyone in the monastery (it takes place in a monastery) is, without any warning, murdered, cut to epilogue. To the extent I can tell what the author thought the plot was, it could be summarized in a paragraph without losing anything important, and it would then need a bunch of new material added to make it into a story.

    I’ve seen several other people say that this is bad but not terrible, comparing it positively to The Eye of Argon, and I want to explicitly disagree with that. If I may quote Sarah Avery, The Eye of Argon has characters; in the course of the story, something happens; several somethings, even, with some detectable instances of cause and effect; and it has a beginning, a middle, and (in some versions of the text) an end. It’s clichéd, sure, and crammed full of basic grammar and vocabulary errors, and that’s what makes it bad in a hilarious and memorable way. Opera Vita Aeterna, by contrast, is bad in a boring and forgettable way, which is worse.

    There is no doubt in my mind that this is only on the ballot because it was included in Correia’s recommendations and then bloc-voted onto the shortlist by Day’s fanboys. To them I say: if you did not realize it was unworthy, you should be ashamed of yourself for being so undiscerning; if you knew it was unworthy and you nominated it anyway, you have committed a sin against Apollo, and may you live to regret it.

Best Short Story

These are all good enough that rank-ordering them is hard; I’d be happy to see any of them win. They are also all floating somewhere around the magical realism attractor, which is not what I would have expected.

  1. The Water That Falls on You from Nowhere, John Chu

    Tell a lie, even a white lie, or even fail to admit the truth, and water falls on you from nowhere; this just started happening one day—otherwise this is a story of ordinary people and their troubles and their connections to each other, and the magic is used to explore those things. Very elegant in its use of language; bonus points for making use of the ways in which Chinese (Mandarin, specifically, I think) describes family relationships differently than English does. Emotionally fraught, but satisfying.

  2. Selkie Stories Are for Losers, Sofia Samatar

    I always did wonder what happened to the children after the selkie went back to the ocean. Not so much the husband. The husband got what was coming to him, which is the point of the selkie story itself; but the daughter, who usually is the one to find the skin that the husband’s kept locked in the attic or wherever; she didn’t have it coming, did she?

    A kind storyteller might have it be that the daughter goes down to the ocean every Thursday afternoon, while the husband is out fishing, and her mother comes up from the waves and they have tea. Sofia Samatar is not a kind storyteller.

  3. The Ink Readers of Doi Saket, Thomas Olde Heuvelt

    Based on a real Thai festival, Loi Krathong; in the story, the paper boats that are floated down the river contain wishes for the new year. The villagers of Doi Saket consider it their duty to collect the wishes and send them onward to Buddha in paper lanterns … and some of them, somehow, come true. Is it the intervention of the river goddess? Is it all just coincidence? Is it a scam to line the pockets of the village chief? It’s hard to tell. You will reach the end of this story not being sure what happened, and you will reread it and still not be completely sure. But it’s a fun read anyway.

  4. If You Were a Dinosaur, My Love, Rachel Swirsky

    A very, very old theme, here, but a take on it that would have been impossible not so long ago. I’m not sure it’s a story, though. More of a love poem. Or a curse poem. Bit of both, really. Still, it’s going to haunt me.

Best Graphic Story

  1. Time, Randall Munroe

    Back in 2005 I doubt anyone would have guessed that the new nerd-joke webcomic on the block, xkcd, would still be around in 2013 (over a thousand strips later), let alone that it would run a 3101-panel epic starring two stick figure people who are building sandcastles on the beach … really elaborate sandcastles … meanwhile discussing why the ocean level seems to be rising up … and then setting off in search of the source of the river, since presumably that’s where the extra water is coming from … and it just keeps elaborating from there. It was presented in an inconvenient format (the link goes to a more accessible compilation), but it’s got everything one could want in an SFnal epic: engaging characters (it’s amazing how much characterization Munroe can pack into pithy dialogue among stick figures), a carefully thought-out setting, the joy of discovery, the thrill of the unknown, a suitably huge problem to be solved, and, of course, Science!

  2. Saga, Volume 2, written by Brian K. Vaughan, illustrated by Fiona Staples

    A love story against the backdrop of an endless galaxy-shattering war, sung in the key of gonzo, and influenced by the best bits of everything from Métal Hurlant to Tank Girl to Tenchi Muyo! It’s hard to tell where it’s going; what we have so far could be summarized as “Romeo and Juliet IN SPACE! Neither of them is a teenage idiot, they’re determined to survive, and their respective sides are coming after them with as much dakka as they can scrape together on short notice.” The A-plot may not even have appeared onstage at this point. One thing’s for sure, though: Vaughan and Staples mean to put on one hell of a show. For a more in-depth description I refer you to io9’s review.

    Strictly in terms of the content, I could just as easily have placed this in the #1 slot. “Time” gets the nod because Saga is not quite as novel, because the subplot with The Will and The Stalk seemed icky and gratuitous, and because volume 1 won this category last year.

  3. Girl Genius, Volume 13: Agatha Heterodyne & The Sleeping City, written by Phil and Kaja Foglio; art by Phil Foglio; colors by Cheyenne Wright

    I love Girl Genius, but it’s won this category three times already (in a row, yet!) and this volume, while continuing to be quality material, is nonetheless more of the same.

  4. No Award

  5. The Girl Who Loved Doctor Who, written by Paul Cornell, illustrated by Jimmy Broxton

    What if the Doctor fell through a crack in time and landed in this universe, where he is a fictional character? Not a new conceit, but one with legs, and I think you could build a fine Doctor Who episode around it; unfortunately, this is not that. It is too heavy on the self-referential and meta-referential, to the point where I think it only makes sense if you are familiar with the show and its fandom. The story is rushed so that they can pack in more in-jokes, and the coda takes a turn for the glurge.

  6. Meathouse Man, adapted from the story by George R.R. Martin and illustrated by Raya Golden

    I think this was meant to be a deconstruction of the notion that for everyone there is a perfect romantic partner out there somewhere, just one, and all you have to do is find them and your life will be perfect forever. Which is a notion that could use some deconstructing. Unfortunately, between the male gaze, the embrace of the equally-in-need-of-deconstruction notion that men cannot comprehend women, the relentlessly grim future backdrop, and the absence of plausible character motivations, what you get is not deconstruction but enforcement by inversion: The only thing that can fix a man’s shitty life is the perfect romantic partner, but he will never find her, so he should just give up and embrace the hollow inside. (Gendered words in previous sentence are intentional.) I regret having finished this.

Best Dramatic Presentation, Long Form

  1. Gravity, written by Alfonso Cuarón & Jonás Cuarón, directed by Alfonso Cuarón

    This is probably as close as you can get to the ideal golden age hard-SF Protagonist versus Pitilessly Inhospitable Environment story in movie format. (I have actually seen this abstract plot done with precise conformance to the laws of orbital mechanics: Lifeboat, by James White. But storytelling suffered for it.) There are places where they go for the cheap wrench at your heart, but then there are places where they don’t do the obvious and clichéd thing, and this movie isn’t really about the plot, anyway, it’s about the spectacle. Clearly groundbreaking in terms of cinematography, also; I look forward to future use of the technology they developed. For more, please go read my friend Leonard’s review, as he is better at critiquing movies than I am.

  2. Pacific Rim, screenplay by Travis Beacham & Guillermo del Toro, directed by Guillermo del Toro

    It’s a giant monster movie, but it’s a really well thought through and executed giant monster movie. (Except for the utterly nonsensical notion of building a wall around the entire Pacific Ocean, which let us pretend never happened.) And I like that the scientists save the day by doing actual experimental science. Bonus points for not going grimdark or ironic or anything like that. Yes, earnest movies in which there was never any real doubt that the good guys would win were worn out in the 80s and 90s. But bleak movies in which there aren’t any good guys to begin with, and nothing ever really changes, certainly not for the better, are worn out here in the 2010s. Further bonus points for a close personal relationship between a man and a woman which does not turn into a romance.

  3. Iron Man 3, screenplay by Drew Pearce & Shane Black, directed by Shane Black

    Marvel continues to crank out superhero movies which do interesting things with established characters. (I particularly liked, in this one, that Potts gets her own independent source of superpowers and does not require rescuing, and that Stark is forced to work out his overprotectiveness issues on his own time.) However, in the end, it is another superhero movie with established characters. I said to someone on a convention panel back in 2001 that I wished Marvel and DC would allow their superheroes to come to the end of their stories, and I still feel that way.

  4. (my official vote for this category ended at this point)

  5. Frozen, screenplay by Jennifer Lee, directed by Chris Buck & Jennifer Lee

    I didn’t get around to seeing this; I’m sure it’s another respectable installment in the field of Disneyified fairy tale, but I can’t really imagine its breaking new ground.

  6. The Hunger Games: Catching Fire, screenplay by Simon Beaufoy & Michael Arndt, directed by Francis Lawrence

    I didn’t get around to seeing this either, and I’m frankly more than a little burnt out on YA dystopia.

Best Dramatic Presentation, Short Form

I have to abstain from this category, because I watch TV shows ages after everyone else does; I imagine I’ll get to current Doctor Who episodes (for instance) sometime in 2024.

Best Related Work

I wanted to vote this category, but I have run out of time to read things, so I have to skip it as well.

Best Semiprozine, Best Fanzine, Best Fancast, Best Editor, Best Professional Artist, Best Fan Artist, Best Fan Writer

And these categories, I have no idea how to evaluate.

The John W. Campbell Award for Best New Writer

Several of the qualifying works in this category are head and shoulders above everything that was nominated in the regular categories! I will definitely be looking out for more writing by Samatar and Gladstone, and maybe also Sriduangkaew.

  1. Sofia Samatar (A Stranger in Olondria)

    A form I haven’t seen attempted in some time: the travelogue of a fantastic land. In this case, Olondria is a great city, perhaps the greatest in the world, filled with merchants, priests, and books, and the traveler/narrator is a humble farmer from the islands in the south, come to Olondria to sell his peppers, as his father did before him. Well, that’s what everyone back at home expects him to do, anyway. In truth he has fallen in love with the literature of Olondria and, through the books, the city itself, and never had all that much interest in the family business to begin with. And then the plot catches up with him: there are two sects of those priests, and both wish to use him to advance their own interests: for you see, he is haunted by the ghost of a woman of his own people, whom he barely knew, but whom he was kind to in her last illness…

    This has got everything one could possibly want in a work of SF and everything one could possibly want in a work of capital-L literature; the form is elegant and fitted precisely to the content; the characters are engaging, the narrative flows smoothly, one does not want to put it down. As I mentioned above, Sofia Samatar is not a kind storyteller; this book is painful to read in places. But, having completed it, you will not regret the journey.

  2. Max Gladstone (Three Parts Dead, Two Serpents Rise)

    Alternate-Earth (and you have to pay close attention to realize that it is Earth) fantasy. All gods are real, but many of them are dead; the wizards (excuse me, “Craftsmen”) made war on them and slew them, claiming their power in the name of humanity. That was some hundred years ago, and the world is still finding a new equilibrium. Each of these books shows a different piece of that, with very little overlap. Plotwise they are both mysteries, of the ‘investigation of a small incident leads to something bigger … much bigger …’ type, which I liked; it allows the stakes to be appropriately high while avoiding all of the overused quest fantasy tropes. And it works well for showing the readers lots of little details that illustrate how this is not the world we know. Gladstone is also excellent at characterization; even the NPCs who are only onstage in a scene or two feel fully realized.

    The only complaints I have are that the way the magic works kinda squicks me out a little (this may have been intentional) and that the ending of Two Serpents Rise didn’t quite work for me (in a way which would be too spoilery to explain here).

  3. Benjanun Sriduangkaew (“Fade to Gold”; “Silent Bridge, Pale Cascade”; “The Bees Her Heart, the Hive Her Belly”)

    These are short stories. “Fade to Gold” is a variation on a Southeast Asian folktale, starring two people trapped by their natures and the demands of society; creepy, sorrowful, tragic. The other two are far-future magical-realist meditations on the nature of family, loyalty, and history in a setting where everyone’s memory is remotely editable. All are good, but the far-future ones may not be to everyone’s taste: e.g. if you don’t care for magical realism, or for stories where it’s not clear exactly what happened even after you’ve read all of it.

  4. Ramez Naam (Nexus)

    Near-future technothriller in which an elixir of brain-augmenting nanomachines, street name Nexus, offers people the chance to become ‘posthuman’…or could be abused to enslave humanity. Three different organizations are struggling to control it, and the protagonists, who just want to be left in peace to experiment on their own minds, are caught in the middle. Generally a fun read; occasionally clunky prose (particularly in fight scenes); overspecific about gadgets in use (lots of technothrillers do this and I don’t understand why). I am a little tired of cheap drama achieved by poor communication between people who are nominally on the same side.

    Brain-augmenting nanomachinery, and the United States of America sliding into police state-hood, seem to be in the zeitgeist right now. I’ve seen both done with a lot more nuance: this gets obnoxiously preachy in places. (Recommendations in this category with more nuance: Smoking Mirror Blues, A Girl and her Fed.)

  5. No Award

  6. Wesley Chu (The Lives of Tao)

    I gave up on this after three chapters. The story concept could have been interesting—two factions of immortal, nonphysical aliens, battling in the shadows of Earth’s history, possessing humans and using them to carry out their plans—but it’s got major problems on the level of basic storytelling craft. Those first three chapters are one page after another of unnecessary detail, repetition of information the audience already has, grating shifts in tone, boring conversations about trivia, and worst of all, self-spoilers. Maybe two pages’ worth of actual story happened, and a good chunk of that probably shouldn’t have happened on stage (because it was a self-spoiler). And I had been given no reason to care what happened to the characters.

Kevin Ngo: Browserify and Gulp Workflow for React

The JS world moves quickly. New web tools are adopted faster than new Spidermans (men?) are produced. One second, people are talking about AngularJS, RequireJS, and Grunt. The next, it's React, Browserify, and Gulp. Who knows, by tomorrow we could have some new shiny things called McRib, Modulus, or Chug. But the new workflows that come along never fail to keep development interesting and never fail to make our lives easier. Kay, it's time to freshen up. Let us answer: what is so streets-ahead about these new web technologies?

To talk about Browserify and Gulp, we first need to take a look at their older siblings.

  • RequireJS is a JS module loader that manages dependencies. A very rough analogy: it's like having Python-style imports in JS.
  • Grunt is a JS task runner. A common use is automating builds of frontend codebases (i.e., JS minification, bundling, CSS pre-compilation). A very rough analogy: think of it as a JS Makefile.

Let's explore what the new kids on the block are kicking around.

Why React?

React is a JS library for building reusable components. These components do not require you to manually set up linking functions or any data binding. Just call render and it will refresh in the DOM. React even diffs updates against the DOM to minimize unnecessary re-rendering, making view rendering very fast.
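As a quick, hypothetical sketch (using the `React.createClass`/`React.renderComponent` API current at the time of writing; the component name and element id are made up), a component simply describes its output, and repeated render calls let React diff the DOM:

```javascript
/** @jsx React.DOM */
// Hypothetical component: render() describes what the view should
// look like; React diffs successive renders against the DOM.
var Counter = React.createClass({
  render: function() {
    return <span>Count: {this.props.count}</span>;
  }
});

function update(count) {
  React.renderComponent(<Counter count={count} />,
                        document.getElementById('counter'));
}

update(1); // initial render
update(2); // re-render: React patches only the changed text
```

Note that the JSX here would be precompiled to plain JS (which is exactly what reactify will do for us below).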

Note that React is not a full MVC framework. As such, it makes a nice wine pairing with other frameworks. I won't fully dive into React code today, but we can get a nice workflow set up specifically for it.

Why Browserify?

To pull in a third-party dependency with RequireJS, one must venture out onto the internet and curl/wget/download/whatever the file into their project. Only then can it be required, and any updates have to be refetched manually. Repeat this with multiple dependencies across multiple projects, and it becomes a nuisance. Having to optimize RequireJS projects in a separate step is a rotten cherry on top.

Browserify piggybacks on npm. Dependencies with Browserify support, such as jQuery, Underscore.js, React, or AngularJS, can be hosted on npm and listed all in package.json, and Browserify will handle bundling these dependencies with your source code into a single file for you! Browserify even builds a dependency tree to figure out which modules do and do not need to be included in the bundle. Smart lad.

Why Gulp?

Gulp consists more of code, whereas Grunt is structured more towards configuration. It can be a matter of preference, but Gulp's focus on chained pipelines and streams means that intermediary data or files are not needed when handling things such as minification and pre-compilation.

Grunt is well-fleshed-out, with thousands of plugins from the community. However, Gulp is getting there. In the short dip I've taken, Gulp had more than all the plugins I needed. Either way, you'll be well supported.

Here's my project's gulpfile. It uses reactify to precompile React JSX files into plain JS files, which are then piped to Browserify for bundling with dependencies. It compiles Stylus files to CSS. And everything's nicely set up to watch directories and rebuild when needed. I'm pretty giddy.

var gulp = require('gulp');

var browserify = require('browserify');
var del = require('del');
var reactify = require('reactify');
var source = require('vinyl-source-stream');
var stylus = require('gulp-stylus');

var paths = {
    css: ['src/css/**/*.styl'],
    index_js: ['./src/js/index.jsx'],
    js: ['src/js/*.js'],
};

gulp.task('clean', function(cb) {
    del(['build'], cb);
});

gulp.task('css', ['clean'], function() {
    // Compile Stylus to CSS.
    return gulp.src(paths.css)
        .pipe(stylus())
        .pipe(gulp.dest('build/css'));
});

gulp.task('js', ['clean'], function() {
    // Browserify/bundle the JS, compiling JSX via reactify.
    return browserify(paths.index_js)
        .transform(reactify)
        .bundle()
        .pipe(source('bundle.js'))
        .pipe(gulp.dest('build/js'));
});

// Rerun the task when a file changes
gulp.task('watch', function() {
    gulp.watch(paths.css, ['css']);
    gulp.watch(paths.js, ['js']);
});

// The default task (called when you run `gulp` from cli)
gulp.task('default', ['watch', 'css', 'js']);

It's finally nice to get outside. Away from the codebase of work. Into the virtual world. Smell the aromas of fresh technologies. I've grown two years younger, and with an extra kick in my step.

James Long: Blog Rebuild: A Fresh Start

About two years ago I wanted to start blogging more seriously, focusing on in-depth tech articles and tutorials. Since then I've successfully made several posts like the one about games and another about react.js.

I decided to write my own blog from scratch to provide a better blogging experience, and it has served me well. I didn't want something big and complicated to maintain like Wordpress, and I had used static generators before but in my opinion you sacrifice a lot, and there's too much friction for writing and updating posts.

Back then I wanted to learn more about node.js, redis, and a few other things. So I wrote a basic redis-backed node.js blogging engine. In a few months (working here and there), I had a site with all the basic blog pages, a markdown editor with live preview, autosaving, unpublished drafts, tags, and some basic layout options. Here is the current ugly editor:

Redis is an in-memory data store, and node handles multiple connections well by default, so my simple site scales really well. I've had posts reach #1 on Hacker News with ~750 visitors at the same time for hours (reaching about 60,000 views) with no problem at all. It may also help that my linode instance has 8 cores and I load up 4 instances of node to serve the site.

You may wonder why I don't just use something like ghost, a modern blogging platform already written in node. I tried ghost for a while but it's early software, includes complex features like multiple users which I don't need, and most importantly it was too difficult to implement my ideas. This is the kind of thing where I really want my site to be my code; it's my area to play, my grand experiment. For me, it's been working out really well (check out all of my posts).

But the cracks are showing. The code is JavaScript as I wrote it 2 years ago: ugly callbacks, poor modularity, no tests, random jQuery blobs to make the frontend work, and more. The site is stable and writing blog posts works, but implementing new features is pretty much out of the question. Since this is my site and I can do whatever I want, I'm going to commit the cardinal sin and rewrite it from scratch.

I've learned a ton over the past two years, and I'm really excited to try out some new techniques. I have a lot of the infrastructure set up already, which uses the following software:

  • react — for building user interfaces seamlessly between client/server
  • react-router — advanced route handling for react components
  • js-csp — CSP-style channels for async communication
  • mori — persistent data structures
  • gulp — node build system
  • webpack — front-end module bundler
  • sweet.js — macros
  • es6-macros — several ES6 features as macros
  • regenerator — compile generators to ES5

Check out the new version of the site at new.jlongster.com. You can see my progress there (right now it's just a glimpse of the current site). I will put it up on github soon.

I thought it would also be interesting to blog throughout the development process. I'm using some really interesting libraries in ways that are very new, so I'm eager to dump my thoughts quite often. You can expect a post a week explaining what I worked on and how I'm using a library in a certain way. It will touch on everything from build systems and cross-compiling to testing and front-end structuring. Others might learn something new as well.

Next time, I'll talk about build systems and cross-compiling infrastructure. See you then!

Frédéric Harper: The open web with Firefox OS and Firefox at Linux Montréal

Creative Commons: http://j.mp/1o9O86K


Next Tuesday I'll be at CRIM (located at 405 avenue Ogilvy, suite 101) to present on Firefox OS, and also on Firefox, to the Linux Montréal group. That evening I won't be talking with my usual audience, namely developers. The presentation will of course have technical aspects, but it is meant to be more high-level, for users with an interest in technology. Here's a taste of the evening:

Firefox OS: what was Mozilla thinking, launching yet another mobile platform onto the market! What is its goal? What are the advantages for users, and also for developers? And what about Firefox and the open web? Frédéric Harper from Mozilla will talk about these two platforms, and about Open Source and the Open Web within this extraordinary organization.

So it's a date: next Tuesday at 18:30. You can confirm your attendance on various networks: Google+, Facebook, Twitter, LinkedIn and Meetup.

Le web ouvert avec Firefox OS et Firefox à Linux Montréal is a post on Out of Comfort Zone from Frédéric Harper

Joel Maher: Say hi to Kaustabh Datta Choudhury, a newer Mozillian

A couple of months ago I ran into :kaustabh93 online as he had picked up a couple of good first bugs. Since then he has continued to work very hard and submit a lot of great pull requests to Ouija and Alert Manager (here is his github profile). After working with him for a couple of weeks, I decided it was time to learn more about him, and I would like to share that with Mozilla as a whole:

Tell us about where you live-

I live in a town called Santragachi in West Bengal. The best thing about this place is its ambience. It is not at the heart of the city but the city is easily accessible. That keeps the maddening crowd of the city away and a calm and peaceful environment prevails here.

Tell us about your school-

 I completed my schooling from Don Bosco School, Liluah. After graduating from there, now I am pursuing an undergraduate degree in Computer Science & Engineering from MCKV Institute of Engineering.

Right from when it was introduced to me, I was in love with the subject ‘Computer Science’. And introduction to coding was one of the best things that has happened to me so far.

Tell us about getting involved with Mozilla-

I was looking for some exciting real-life projects to work on during my vacation & it was then that the idea of contributing to open source projects struck me. I have been using Firefox for many years now, and that gave me an idea of where to start looking. Eventually I found the volunteer tab, and thus started my wonderful journey with Mozilla.

Right from when I was starting out till now, one thing that I liked very much about Mozilla was that help was always at hand when needed. On my first day, I popped a few questions into the IRC channel #introduction & after getting the basics of where to start out, I started working on Ouija under the guidance of ‘dminor’ & ‘jmaher’. After a few bug fixes there, Dan recommended I have a look at Alert Manager & I have been working on it ever since. The experience of working with Mozilla has been great.

Tell us what you enjoy doing-

I really love coding. But apart from it I also am an amateur photographer & enjoy playing computer games & reading books.

Where do you see yourself in 5 years?

In 5 years’ time I prefer to see myself as a successful engineer working on innovative projects & solving problems.

If somebody asked you for advice about life, what would you say?

Rather than following the crowd down the well-worn path, it is always better to explore uncharted territories with a few.

:kaustabh93 is back in school as of this week, but look for activity on bugzilla and github from him.  You will find him online once in a while in various channels, I usually find him in #ateam.

William Reynolds: Important changes to mozillians.org vouching

Today we are rolling out new changes to the vouching system on mozillians.org, our community directory, in order to make vouching more meaningful.

mozillians.org has a new vouch form

mozillians.org has a new vouch form

Vouching is the mechanism we use to give Mozillians access to special content, like viewing all profiles on mozillians.org, certain content on Air Mozilla, and Mozilla Moderator. Getting vouched as a Mozillian means you have made a meaningful contribution.

Action required

If you attended the 2013 Summit, there is no impact to how you use the site and no action required. But…we can use your help.

If you have vouched for others previously, go to their profiles and complete the vouching form to describe their contributions by September 30th. You can see the people you have vouched for listed on your mozillians.org profile.

If you did not attend the 2013 Summit, you can still use the mozillians.org site as you do now, until September 30th.

After September 30th, your profile will become unvouched, unless a Mozillian who attended the 2013 Summit vouches for you again.

Both volunteers and paid staff are being treated the same when it comes to receiving multiple vouches. That means everyone who wants to vouch for contributors needs to first receive 3 vouches – more information below.

Most importantly, no one’s vouched status is disappearing this week.

More details and an FAQ on the Vouching wiki page.

Thanks to the Mozillians.org Team who worked on this big initiative to make vouching better. The new vouching system was designed through discussions at Grow Mozilla meetings and several forum threads on community-building and dev.community-tools.

Dave Hunt: Performance testing Firefox OS on reference devices

A while back I wrote about the LEGO harness I created for Eideticker to hold both the device and camera in place. Since then there have been a couple of iterations of the harness. When we started testing against our low-cost prototype device, the harness needed modifying due to the size difference and position of the USB socket. At this point I tried to create a harness that would fit all of our current devices, with the hope of avoiding another redesign.

Eideticker harness v2.0

If you’re interested in creating one of these yourself, here’s the LEGO Digital Designer file and building guide.

Unfortunately, when I first got my hands on our reference device (codenamed ‘Flame’) it didn’t fit into the harness. I had to go back to the drawing board, and needed to be a little more creative due to the width not matching up too well with the dimensions of LEGO bricks. In the end I used some slope bricks (often used for roof tiles) to hold the device securely. A timelapse video of constructing the latest harness follows.



We are now 100% focused on testing against our reference device, so in London we have two of these set-ups dedicated to running our Eideticker tests, as shown in the photo below.

Eideticker harness for Flame

Again, if you want to build one of these for yourself, download the LEGO Digital Designer file and building guide. If you want to learn more about the Eideticker project check out the project page, or if you want to see the dashboard with the latest results, you can find it here.

Mozilla Release Management Team: Firefox 32 beta1 to beta2

  • 21 changesets
  • 71 files changed
  • 788 insertions
  • 180 deletions



List of changesets:

Nicolas B. Pierron: Bug 1006899 - Prevent stack iterations while recovering allocations. r=bhackett, a=sledru - b801d912ea0e
Ryan VanderMeulen: Bug 1006899 - Only run the test if TypedObject is enabled. rs=nbp, a=test-only - 794d229b5125
Blair McBride: Bug 1026853 - Experiment is displayed as "pending removal" in detailed view. r=irving, a=lmandel - ecdde13a0aaa
Byron Campen [:bwc]: Bug 1042873 - Add the appropriate byte-order conversion to ChildDNSRecord::GetNextAddr. r=mcmanus, a=lmandel - fe0621be0dc3
Magnus Melin: Bug 1038635 - Zoom doesn't work in the email composer. r=Neil, a=sledru - f9b0de65c69d
Brian Hackett: Bug 1024132 - Add one slot cache for stripping leading and trailing .* from RegExps for test() calls. r=jandem, a=lmandel - 2339e39f5ff4
Danny Chen: Bug 1005031 - Video controls are displayed in the middle of the video. r=wesj, a=lmandel - 5a6749df6a78
Markus Stange: Bug 995145 - Don't erase pixels from the window top when drawing the highlight line. r=smichaud, a=sledru - 4767a3451d00
Byron Campen [:bwc]: Bug 980270 - Part 1: Plug a couple of common leaks in nICEr. r=drno, a=sledru - d8e7408cb510
Matt Woodrow: Bug 1035168 - Use Map api to check if DataSourceSurfaces have data available in DrawTargetCairo. r=Bas, a=lmandel - dd517186b945
Matt Woodrow: Bug 1035168 - Avoid calling GetData/Stride on a surface that we will later Map. r=Bas, a=lmandel - 2bc7c49698cc
Marco Bonardo: Bug 1024133 - Switch to tab magic URL not trimmed on key navigation. r=mano, a=sledru - a8dd4b54e15b
Makoto Kato: Bug 984033 - Large OOM in nsStreamLoader::WriteSegmentFun. r=honza, a=lmandel, ba=comment-only-change - ecfc5bee1685
Richard Newman: Bug 1018240 - Part 1: Reinitialize nsSearchService when the browser locale changes. r=adw, a=sledru - bbfebb4ec504
Richard Newman: Bug 1018240 - Part 2: Invalidate BrowserSearch engine list when locale has changed. r=bnicholson, a=sledru - d1e5cb0fbe70
Victor Porof: Bug 1004104 - Disable caching when running netmonitor tests to hopefully fix some timeouts. r=dcamp, a=test-only - 00f079c876f6
J. Ryan Stinnett: Bug 1038695 - Show all cookies in Net Monitor, not just first. r=vporof, a=sledru - bce84b70da30
Richard Newman: Bug 1042984 - Add extensive logging and descriptive crash data for library load errors. r=mfinkle, a=sledru - 3808d8fbe348
Richard Newman: Bug 1042929 - Don't throw Errors from JSONObject parsing problems. r=mcomella, a=sledru - b295f329dfd3
Steven MacLeod: Bug 1036036 - Stop leaking docshells in Session Store tests. r=ttaubert, a=test-only - 24aaa1ae41a6
Richard Newman: Bug 1043627 - Only re-initialize nsSearchService on locale change in Fennec. r=adw, a=sledru - 61b30b605194

Prashish Rajbhandari: How #MozDrive Started

For me, it all started with a phone call…

It was a lazy Monday morning when I got a phone call from my very good friend Ankurman from Cincinnati. It was the day after the Memorial Day weekend, which meant I had slept late and had no intention of waking up early.

“Prashish! You’ve got to listen to what I’m planning to do.”

“Hey! What’s up? It’s 4 in the morning here by the way.”

I sensed his sudden realization of the time zone difference from the tone of my voice.

“I’m sorry to have awakened you. Do you want me to call you later?”

“Nah. Tell me.”

Then Ankurman started narrating his story:

 “Imagine driving across the lower 48 States in the US, exploring the unknown lands, meeting with amazing people from different backgrounds, the cultural diversity, while raising social awareness regarding the obesity epidemic in the US.”

A sudden rush of excitement got me out of bed immediately. I went straight to the kitchen to get some caffeine.

“Okaay, I’m listening.”

He told me about the idea of his campaign and what he planned to achieve. We talked for more than an hour about various other awareness campaigns.

Later that evening while I was digging through my pictures from the MozFest in London earlier this year, the Mozillian inside me had the eureka moment.

“What if I could also travel across the United States and share the true story of Mozilla with the people; that we Mozillians truly care for the open web and that to make a positive impact in the society is one of our ulterior goals.”

You see, I was easily convinced by the idea of such public awareness campaigns. I was inspired by many stories of people traveling to raise funds for charity or awareness.

I found Tenlap by Greg Hartle, in particular, very interesting. Greg gave away all his possessions and started off with $10 to understand, and to create, work in this world full of insecurity. He plans to travel to all 50 States, meet new people who are rebuilding their lives and then launch businesses in areas new to him. All in 1,000 days. Gowtham Ramamurthy from my university just recently rode to all 48 States to raise funds for children, and it was massively successful.

The web is a global, participative platform where everyone can have their own voice and opinion. Despite the presence of such a huge source of knowledge on the web, people are still confused about its possibilities.

People know Firefox as the browser with an orange fox wrapped around a blue globe.

But, we are more than a browser.

We are a global community of passionate volunteers and contributors whose mission is to serve the user above all, advance the state of the web and keep it open.

People need to know this.

People need to understand the web better so that they can take control of their online lives.

People need to know Mozilla’s mission, that we are more than a browser. They need to know what we are doing and how they can be a part of this global movement.

And it is our duty as Mozillians to spread this message to the masses.

This is how #MozDrive started.

Follow the journey: Website, Twitter, Facebook or Feed.

Original Article: #MozDrive

Filed under: Mozilla, Personal Tagged: mozdrive, mozilla, mozrep

Raluca PodiucAF (analysis framework) and the task graph story

Analysis Framework is an application written on top of TaskCluster that provides a way to execute Telemetry MapReduce jobs.

AF comprises two modules:


This module contains an example of a MapReduce node that can be executed over TaskCluster.
It contains:
  •  a custom specification for a Docker image
  •  a Makefile for creating a Docker image and pushing it to the registry
  •  a Vagrantfile useful to developers working on Mac OS X
  •  custom code for map/reduce jobs
  •  an encryption module used to decrypt STS temporary credentials

A task up and running

In order to run a task on TaskCluster you need a Docker container and custom code to be executed in the container.
A Docker container is a Docker image in execution. To obtain a custom image you can use the Docker specification in the telemetry-analysis-base repository.
As TaskCluster needs a container to run your task in, you need to push your image to the TaskCluster registry. The Makefile present in the repository takes care of creating the image and pushing it to the registry.

Because Telemetry jobs work with data that is not open to the public, you need a set of credentials to access the files in S3. The Docker container expects a set of encrypted temporary credentials in an environment variable called CREDENTIALS.
Because these credentials are visible in the task description on TaskCluster, they are encrypted and base64 encoded. The temporary credentials used are STS Federation Token credentials; they expire in 36 hours and can be obtained only by AWS users that hold a policy allowing their generation.
After being obtained, the credentials are encrypted with a symmetric key. The symmetric key is itself encrypted with a public key and sent along as an environment variable to the Docker instance. Inside the Docker container the credentials are decrypted and used to make calls to S3.
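The packaging of the credentials can be sketched roughly as follows. This is a toy illustration, not the actual encryption module: the counter-mode keystream cipher below stands in for a real symmetric cipher such as AES, the public-key step is omitted, and all names and values are made up.

```python
import base64
import hashlib
import json
import os
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy counter-mode keystream cipher (stand-in for a real AES mode)."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        # Derive a 32-byte keystream block from the key and a counter.
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

# Hypothetical STS Federation Token credentials (placeholder values).
creds = {"AccessKeyId": "ASIA-EXAMPLE", "SecretAccessKey": "secret",
         "SessionToken": "token"}

# Encrypt with a fresh symmetric key, base64-encode, pass as env var.
sym_key = secrets.token_bytes(32)
ciphertext = keystream_xor(sym_key, json.dumps(creds).encode())
os.environ["CREDENTIALS"] = base64.b64encode(ciphertext).decode()

# Inside the container: decode and decrypt with the same symmetric key.
blob = base64.b64decode(os.environ["CREDENTIALS"])
restored = json.loads(keystream_xor(sym_key, blob))
```

In the real setup the symmetric key would itself be encrypted with the container's public key and shipped alongside the ciphertext rather than shared out of band.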

Custom code

Inside the custom container resides the custom code that will receive as arguments a set of file names in S3.


After decrypting the credentials, the mapper takes the file list and starts downloading a batch of files in parallel. As files finish downloading they are stored in a directory called s3/<path in S3> and their names are sent as arguments to the mapperDriver.
MapperDriver first reads the specification of the job from analysis-tools.yml. The configuration file specifies whether the files need to be decompressed, the mapper function that needs to run, and the language that the mapper is written in.
Next, as the example provided in the repository is in Python, the driver spawns another process that executes python-helper-mapper, which reads the files, decompresses them, loads the mapper function and sends the decompressed files line by line to the mapper function.
The mapper function writes its output to result.txt. This file is an artifact of the mapper task.
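To make the shape concrete, here is a hypothetical Python mapper in the line-by-line style described above. The function name, the ping structure and the driver interface are my assumptions for illustration; the real interface is defined by telemetry-analysis-base.

```python
# Hypothetical mapper: counts Telemetry submissions per OS.
import json
from collections import Counter

def mapper(lines):
    counts = Counter()
    for line in lines:
        try:
            ping = json.loads(line)
        except ValueError:
            continue  # skip lines that are not valid JSON
        counts[ping.get("info", {}).get("OS", "unknown")] += 1
    return counts

# The driver would feed decompressed lines and write each (key, value)
# pair to result.txt, which becomes an artifact of the mapper task.
records = ['{"info": {"OS": "Linux"}}', '{"info": {"OS": "Darwin"}}',
           'garbage', '{"info": {"OS": "Linux"}}']
result = mapper(records)
```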


The reducer task requires an environment variable named INPUT_TASK_IDS specifying all the mapper task ids. Holding the list of all mappers, the reducer makes calls to fetch all the result files from the mappers. As the files finish downloading they are stored in a folder called mapperOutput.
The reducerDriver then reads the specification of the job from analysis-tools.yml, which contains the reducer function name and the language it is written in.
In the example provided in the repository the reducer is also written in Python, so it uses an intermediary module called python-helper-reducer. This module loads the reducer, removes all empty lines from the result files and feeds them to the reducer function.
The output is written to the result file, which is an artifact of the reducer task. After writing the result file, the reducer sends an email to the owner of the task containing a link to the output of the MapReduce job. The email address is given as an environment variable called OWNER.
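A matching hypothetical reducer might look like this, assuming each mapper's result file holds tab-separated key/count lines (the on-disk format here is my assumption, not taken from the repository):

```python
from collections import Counter

def reducer(lines):
    """Merge per-mapper counts; lines come from the mappers' result files."""
    totals = Counter()
    for line in lines:
        if not line.strip():
            continue  # python-helper-reducer also drops empty lines
        key, count = line.rsplit("\t", 1)
        totals[key] += int(count)
    return totals

# Lines gathered from the result files in mapperOutput/ (illustrative).
mapper_outputs = ["Linux\t2\n", "\n", "Darwin\t1\n", "Linux\t3\n"]
totals = reducer(mapper_outputs)
```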


This module constructs a taskGraph and posts it to TaskCluster.
At this point a set of credentials is needed to run a task graph:
  • credentials in AWS allowing Federation Token generation. To obtain them you need to specify a policy enabling STS credentials generation.
  • the public key associated with the private one residing in the running Docker container
  • symmetric key used to encrypt the Federation Token credentials
  • access to IndexDB
  • credentials to TaskCluster (in the future)

Example call:

./AF.js Filter.json "registry.taskcluster.net/aaa" '{"OWNER" : "unicorn@mozilla.com", "BLACK" : "black"}'

AF takes as arguments a Filter.json, a Docker image (registry.taskcluster.net/aaa) and, optionally, other arguments that will be passed as environment variables to the Docker container.
AF executes the following:

  • makes a call for a Federation Token, encrypts the credentials with the public key, and provides the base64-encoded encrypted credentials to the Docker container as the CREDENTIALS environment variable
  • using Filter.json, queries indexDB to get the specific file names and file sizes
  • creates skeletons for mapper tasks and adds load to them (file names from indexDB)
  • pushes the task definitions into the graph skeleton
  • creates the reducer task and gives it the labels of the tasks it depends on as dependencies
  • posts the graph
  • gets the graph definition, then prints it together with a link to a simple monitor page
  • once the graph finishes execution, that page will contain links to the result page
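The graph-building steps above can be sketched like this. The field names are simplified placeholders, not the actual TaskCluster task-graph schema:

```python
def build_graph(file_batches, docker_image, owner):
    """Build a simplified mapper/reducer graph skeleton."""
    tasks = []
    mapper_labels = []
    for i, batch in enumerate(file_batches):
        label = f"mapper-{i}"
        mapper_labels.append(label)
        tasks.append({
            "label": label,
            "requires": [],  # mappers have no dependencies
            "payload": {"image": docker_image,
                        "env": {"FILES": ",".join(batch), "OWNER": owner}},
        })
    # The reducer lists every mapper label as a dependency, so it only
    # runs once all mapper artifacts exist.
    tasks.append({
        "label": "reducer",
        "requires": mapper_labels,
        "payload": {"image": docker_image, "env": {"OWNER": owner}},
    })
    return {"tasks": tasks}

graph = build_graph([["s3://a", "s3://b"], ["s3://c"]],
                    "registry.taskcluster.net/aaa", "unicorn@mozilla.com")
```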

Last but not least

Analysis Framework is a really interesting and fun project. It can easily be extended or reused, it is designed as an example that can be customized, and it has some documentation too. :p

Kim Moir2014 USENIX Release Engineering Summit CFP now open

The CFP for the 2014 Release Engineering Summit (Western edition) is now open. The deadline for submissions is September 5, 2014, and speakers will be notified by September 19, 2014. The program will be announced in late September. This one-day summit on all things release engineering will be held in concert with LISA, in Seattle on November 10, 2014.

Seattle skyline © Howard Ignatius, https://flic.kr/p/6tQ3H Creative Commons by-nc-sa 2.0

From the CFP

"Suggestions for topics include (but are not limited to):
  • Best practices for release engineering
  • Practical information on specific aspects of release engineering (e.g., source code management, dependency management, packaging, unit tests, deployment)
  • Future challenges and opportunities in release engineering
  • Solutions for scalable end-to-end release processes
  • Scaling infrastructure and tools for high-volume continuous integration farms
  • War and horror stories
  • Metrics
  • Specific problems and solutions for specific markets (mobile, financial, cloud)
URES '14 West is looking for relevant and engaging speakers and workshop facilitators for our event on November 10, 2014, in Seattle, WA. URES brings together people from all areas of release engineering—release engineers, developers, managers, site reliability engineers, and others—to identify and help propose solutions for the most difficult problems in release engineering today."

War and horror stories. I like to see that in a CFP. Stories describing how you overcame problems with infrastructure and tooling to ship software are the best kind. They make people laugh. Maybe cry, as they realize they are currently living in that situation. Good times. Also, I think talks around scaling high-volume continuous integration farms will be interesting. Scaling issues are a lot of fun and expose many issues you don't see when you're only running a few builds a day.

If you have any questions about the CFP, I'm happy to help, as I'm on the program committee. (My irc nick is kmoir (#releng), as is my email id at mozilla.com.)

Mozilla Open Policy & Advocacy BlogJoin Mozilla for global teach-ins on Net Neutrality

(This is a repost from The Webmaker Blog)

At Mozilla, we exist to protect the free and open web. Today, that openness and freedom is under threat.

The open Internet’s founding principle is under attack. Policymakers in the U.S. are considering rules that would erase “Net Neutrality,” the principle that all data on the Internet should be treated equally. If these rule changes go through, many fear it will create a “two-tier” Internet, where monopolies are able to charge huge fees for special “fast lanes” while everyone else gets the slow lane. This would threaten the very openness, level playing field and innovation that make the web great — not only in the U.S., but around the world.

Using the open web to save the open web

This is a crucial moment that will affect the open web’s future. But not enough people know about it or understand what’s at stake. Net Neutrality’s opponents are banking on the fact that Net Neutrality is so “geeky,” complex, and hard to explain that people just won’t care. That’s why Mozilla is inviting you to join us and other Internet Freedom organizations to educate, empower, organize and win.

Local “teach-ins” around the world…

Join the global Mozilla community and our partners to host a series of Internet Freedom “teach-ins” around the world. Beginning Aug 4th, we’re offering free training to help empower local organizers, activists and people like you. Together we’ll share best practices for explaining what Net Neutrality is, why it matters to your local community, and how we can protect it together. Then we’ll help local organizers like you host local events and teach-ins around the world, sharing tools and increasing our impact together.

…plus global action

In addition to increasing awareness of the importance of Net Neutrality, the teach-ins will also allow participants to have an impact by taking immediate action. Imagine hundreds of videos in support of #TeamInternet and Net Neutrality, thousands of letters to the editor, and thousands of new signatures on Mozilla’s petition.

We’ll be joined by partners like reddit, Free Press, Open Media, IMLS / ALA, Media Alliance, Every Library and Engine Advocacy.

Get involved

1) Host an event. Ready to get started? Host a local meet-up or teach-in on Net Neutrality in your community. Our Maker Party event guides and platform make it easy. We even have a special guide for a 1 hour Net Neutrality Maker Party.

2) Get free training and help. Need a little help? We’ll tell you everything you need to know, from free resources and best practices for talking about Net Neutrality to nuts-and-bolts logistics and organizing. The free and open online training begins Monday, Aug 4th. All are welcome, no experience necessary. You’ll leave the training armed with everything you need to host your own local teach-in, or just better explain the issue to friends and family.

3) Use our new Net Neutrality toolkit. Our new Net Neutrality teaching kit makes it easy for educators and activists to explain the issue and empower others. We’re gathering lots more resources here.

4) Spread the word. Here are some example tweets you can use:

  • I’m on #TeamInternet! That’s why I’m joining @Mozilla’s global teach-in on Net Neutrality. http://mzl.la/globalteachin #teachtheweb
  • Join @Mozilla’s global teach-in on Net Neutrality. Let’s educate, empower, organize and win. #TeamInternet http://mzl.la/globalteachin #teachtheweb
  • The internet is under attack. Join @Mozilla’s global teach-in to preserve Net Neutrality. #TeamInternet http://mzl.la/globalteachin #teachtheweb

Mitchell BakerChris Beard Named CEO of Mozilla

I am pleased to announce that Chris Beard has been appointed CEO of Mozilla Corp. The Mozilla board has reviewed many internal and external candidates – and no one we met was a better fit.

As you will recall, Chris re-joined Mozilla in April, accepting the role of interim CEO and joining our Board of Directors.

Chris first joined Mozilla in 2004, just before we shipped Firefox 1.0 – and he’s been deeply involved in every aspect of Mozilla ever since. During his many years here, he at various times has had responsibility for almost every part of the business, including product, marketing, innovation, communications, community and user engagement.

Before taking on the interim CEO role, Chris spent close to a year as Executive-in-Residence at the venture capital firm Greylock Partners, gaining a deeper perspective on innovation and entrepreneurship. During his term at Greylock, he remained an Advisor to me in my role as Mozilla’s chair.

Over the years, Chris has led many of Mozilla’s most innovative projects. We have relied on his judgment and advice for nearly a decade. Chris has a clear vision of how to take Mozilla’s mission and turn it into industry-changing products and ideas.

The months since Chris returned in April have been a busy time at Mozilla:
•   We released major updates to Firefox, including a complete redesign, easy customization mode and new services with Firefox Accounts.
•   Firefox OS launched with new operators, including América Móvil, and new devices, like the ZTE Open C and Open II, the Alcatel ONETOUCH Fire C and the Flame (our own reference device).
•   We announced that the Firefox OS ecosystem is expanding to new markets with new partners before the end of the year.
•   We ignited policy discussion on a new path forward with net neutrality through Mozilla’s filing on the subject with the FCC.
•   In June, we kicked off Maker Party, our annual campaign to teach the culture, mechanics and citizenship of the Web through thousands of community-run events around the world. President Obama announced the news at the first-ever White House Maker Faire.

Today, online life is a combination of desktop, mobile, connected devices, cloud services, big data and social interactions. Mozilla connects all of these in an open system we call the Web – a system that puts individuals in control, offers freedom and flexibility and that is trustworthy and fun.

Mozilla builds products and communities that work to break down closed systems that limit online choice and opportunity. There is a huge need for this work today, as our digital lives become more centralized and controlled by just a few large companies. Toward that end, Mozilla builds products that put the user first, with a focus on openness, innovation and opportunity.

Chris has a keen sense of where Mozilla has been – and where we’re headed. He has unique experience connecting with every constituency that touches our products, including consumers, partners and community members. There’s simply no better person to lead Mozilla as we extend our impact from Firefox on the desktop to the worlds of mobile devices and services.

Chris, welcome back.

Prashish RajbhandariFor the love of Mozilla: #MozDrive

Hello Everyone,

I assume many of you are aware of the recent project I’ve undertaken. It has already been a few weeks since I announced it, but I had been procrastinating on announcing it here. I have also had the opportunity to test the whole idea on a recent volunteering trip to Squaw Valley (more on that later).

On 1st August 2014, I will be embarking on a journey across the lower 48 US States to spread the word and the love about Mozilla.

In 25 days, I will:

- Travel across the lower 48 states and share Mozilla’s story, vision, and mission with the people I meet along the way.

- Engage in a one-to-one interaction with the locals and document their stories for an epic MozDrive video.

- Share my journey through social media, as I go about making a difference and a positive impact on society.

Read more about the campaign here.

And please follow the entire journey and share the page (Facebook, Twitter) within your community or wherever you can in the social media space. The whole idea is to spread Mozilla love far and wide in the physical as well as the digital world.

The entire campaign is mostly sponsored by The Mozilla Foundation (I'm really grateful to them). But I will be financing my own food and miscellaneous expenses during the journey. I wanted everyone to become a part of this journey in some way. You can financially support the campaign here!

I need your full support during the entire journey.

Thanks everyone!


Here are a few pics from my recent #MozDrive test at Wanderlust, Squaw Valley.


See you on the other side!

‘Til then.

Filed under: Mozilla Tagged: mozdrive, mozilla, mozrep

Just BrowsingFastest Growing New Languages on Github are R, Rust and TypeScript (and Swift)

While researching TypeScript’s popularity I ran across a post by Adam Bard listing the most popular languages on Github (as of August 30, 2013). Adam used the Google BigQuery interface to mine Github’s repository statistics.

What really interested me was not absolute popularity but which languages are gaining adoption. So I decided to use the same approach to measure growth in language popularity, by comparing statistics for two different time periods. I used exactly the same query as Adam and ran it for the first half of 2013 (January 1st through June 30th) and then for the first half of 2014 (more details about the exact methodology at the end of this post).


Based on this analysis, the twenty fastest growing languages on Github in the past year are:

At the risk of jeopardizing my (non-existent) reputation as a programming language guru, I’ll admit that several of these are unfamiliar to me. Eliminating languages with fewer than 1,000 repos to weed out the truly obscure ones yields this revised ranking:

We are assuming that growth in Github repository count serves as a proxy for increasing popularity, but it seems unlikely that Pascal, CSS and TeX are experiencing a sudden renaissance. Some proportion of this change is due to increasing use of Github itself, and this effect is probably more marked for older, more established languages that are only now moving onto Github. If we focus on languages that have started to attract attention more recently, the biggest winners over the past year appear to be R, Rust and TypeScript.

Random thoughts

What the hell is R?

The fastest growing newish language is one that was unfamiliar to me. According to Wikipedia, R is “a free software programming language and software environment for statistical computing and graphics.” Most of the developers around the office said they had heard of it but never used it. This is a great illustration of how specialized languages can gain traction without making much of an impact on the broader developer community.

Getting Rusty

Of the newer languages with C-like syntax, both Rust and Go are gaining adoption. Go has a headstart, but a lot of the commentary I’ve seen suggests that Rust is a better language. This is supported by its impressive 220% annual growth rate on Github.

Building a better JavaScript

Two transpile-to-JavaScript languages made it onto the list: TypeScript and CoffeeScript. Since JavaScript is the only language that runs in the browser, a lot of developers are forced to use it. But that doesn’t mean we have to like it. While CoffeeScript is still ahead, TypeScript has the advantage of strong typing (something many developers feel passionate about) in addition to a prettier syntax than JavaScript. If it keeps up its 100% year-on-year growth, it may catch up soon.


According to an old saw, everyone always talks about the weather but no one ever does anything about it. The same could be said about functional languages. Programming geeks love them and insist that they lead to better quality code. But they have yet to break into mainstream usage, and not a single functional language figures in our top-20 list (although R and Rust have some characteristics of functional languages).

Swift kick

The language with the highest growth of all didn’t even show up on the list because it had no repositories at all in the first half of 2013. Only a few months after it was publicly announced, Swift already had nearly 2000 repos. While it is unlikely to keep up its infinite annual growth rate for long, it is a safe bet that Swift is destined to be very popular indeed.


The data for 2013 and 2014 from BigQuery was exported into two CSV files, which I then merged into a single consolidated file using Bash:

$ cat results-20140723-094327.csv | sort -t , -k 1,1 > results1.csv 
$ cat results-20140723-094423.csv | sort -t , -k 1,1 > results2.csv 
$ join -o '1.1,2.1,1.2,2.2' -a 1 -a 2 -t, results1.csv results2.csv | awk -F ',' '{ if ($1) printf $1; else printf $2; print "," $3 "," $4 }'

The first two commands sort the CSV files by language name (the options -t , and -k 1,1 are needed to ensure that only the language name and not the comma delimiter or subsequent text is used for sorting). The join command takes the sorted output and merges it into a single consolidated file with the format:


If the language is present in both datasets then Language1 and Language2 are identical. If it isn’t, then one of them is empty. Either way we really want to merge these into one field, which is what the awk command does. (A colleague suggested using sed -r 's/^([^,]*),\1?/\1/', but I decided that awk—or pretty much anything—is easier to read and understand.)

I then imported the entire dataset into Google Spreadsheet. The “2014 Projected” column is the 2013 value increased by the overall growth rate in Github repository count for the top 100 languages. This is used as a baseline to compare the actual 2014 figure and calculate the growth rate, since it is most interesting to measure how fast a language is gaining adoption relative to the growth of Github itself.
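The projection and growth-rate arithmetic boils down to something like the following sketch (column and variable names are my own, not the spreadsheet's):

```python
def growth_rates(rows, overall_growth):
    """rows: [(language, count_2013, count_2014)] tuples.
    overall_growth: total Github repo growth, e.g. 1.5 for +50%."""
    out = {}
    for language, c2013, c2014 in rows:
        # Baseline: what the language would have if it merely kept
        # pace with Github's own growth ("2014 Projected").
        projected_2014 = c2013 * overall_growth
        out[language] = c2014 / projected_2014 - 1.0
    return out

# Illustrative numbers: 1000 repos growing to 4800 against a 1.5x
# baseline gives 4800 / 1500 - 1 = 2.2, i.e. 220% relative growth.
rates = growth_rates([("Rust", 1000, 4800)], overall_growth=1.5)
```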

Roberto A. VitilloRegression detection for Telemetry histograms.

tldr: An automatic regression detector system for Telemetry data has been deployed; the detected regressions can be seen in the dashboard.

Mozilla is collecting over 1,000 Telemetry probes which give rise to histograms, like the one in the figure below, that change slightly every day.

Average frame interval during any tab open/close animation (excluding tabstrip scroll).


Until recently the only way to monitor those histograms was to sit down and literally stare at the screen until something interesting was spotted. Clearly there was a need for an automated system able to discern between noise and real regressions.

Noise is a major challenge, even more so than with Talos data, as Telemetry data is collected from a wide variety of computers, configurations and workloads. A reliable means of detecting regressions, improvements and changes in a measurement’s distribution is fundamental, as erroneous alerts (false positives) tend to annoy people to the point that they just ignore any warning generated by the system.

I have looked at various methods to detect changes in histograms, like

  • Correlation Coefficient
  • Chi-Square statistic
  • U statistic (Mann-Whitney)
  • Kolmogorov-Smirnov statistic of the estimated densities
  • One Class Support Vector Machine
  • Bhattacharyya Distance

Only the Bhattacharyya distance proved satisfactory for our data. There are several reasons why each of the other methods fails with our dataset.

For instance, a one-class SVM wouldn’t be a bad idea if some distributions didn’t change dramatically over the course of time due to regressions and/or improvements in our code; in other words, how do you define what a distribution should look like? You could just take the daily distributions of the past week as a training set, but that wouldn’t be enough data to get anything meaningful from an SVM. A Chi-Square statistic is not always applicable, as it doesn’t allow cells with an expected count of 0. We could go on for quite a while, and there are ways to get around those issues, but the reader is probably more interested in the final solution. I evaluated how good those methods actually are at pinpointing some past known regressions, and the Bhattacharyya distance proved able to detect the kind of pattern changes we are looking for, like distribution shifts or bin swaps, while minimizing the number of false positives.
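For two histograms over the same bins, the Bhattacharyya coefficient is the sum of sqrt(p_i * q_i) over the normalized bins, and the distance is its negative logarithm. A minimal sketch:

```python
import math

def bhattacharyya_distance(p, q):
    """p, q: histograms over the same bins (raw counts are fine;
    they are normalized to probability distributions first)."""
    sp, sq = sum(p), sum(q)
    bc = sum(math.sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))
    # BC == 1 for identical distributions, so the distance is 0;
    # clamp to guard against floating-point overshoot.
    return -math.log(min(bc, 1.0))

same = bhattacharyya_distance([10, 20, 30], [10, 20, 30])     # ~0
shifted = bhattacharyya_distance([10, 20, 30], [30, 20, 10])  # > 0
```

A bin swap or distribution shift increases the distance smoothly, which is what makes it usable as a change signal on noisy Telemetry data.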

Having a relevant distance metric is only part of the deal, since we still have to decide what to compare. Should we compare the distribution of today’s build-id against the one from yesterday? Or the one from a week ago? It turns out that trying to mimic what a human would do yields a good algorithm: if

  • the variance of the distance between the histogram of the current build-id and the histograms of the past N build-ids is small enough, and
  • the distance between the histograms of the current build-id and the previous build-id is above a cutoff value K, yielding a significant difference, and
  • a significant difference is also present in the next K build-ids,

then a distribution change is reported.

Furthermore, histograms that don’t have enough data are filtered out, and the cut-off values and parameters are determined empirically from past known regressions.
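Put together, the decision rule reads roughly as follows. The thresholds here are placeholders, not the empirically determined values mentioned above, and the function boundaries are my own framing:

```python
from statistics import pvariance

def change_detected(dists_past, dist_current, dists_next,
                    max_variance=0.001, cutoff=0.12):
    """dists_* are Bhattacharyya distances to the pre-change histograms."""
    stable_history = pvariance(dists_past) <= max_variance  # condition 1
    jump_now = dist_current > cutoff                        # condition 2
    jump_persists = all(d > cutoff for d in dists_next)     # condition 3
    return stable_history and jump_now and jump_persists

# A persistent shift is reported; a one-day blip is not.
detected = change_detected([0.01, 0.02, 0.015], 0.3, [0.25, 0.28])
transient = change_detected([0.01, 0.02, 0.015], 0.3, [0.02, 0.01])
```

Requiring the jump to persist across the following build-ids is what keeps one-off noisy days from triggering false alerts.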

I am pretty satisfied with the detected regressions so far; for instance, the system was able to correctly detect a regression caused by the OMTC patch that landed on the 20th of May, which caused a significant change in the average frame interval during tab open animation:

Average frame interval during tab open animation of about:newtab.

We will soon roll out a feature to allow histogram authors to be notified by e-mail when a histogram change occurs. In the meantime you can have a look at the detected regressions in the dashboard.

Hannah KaneMaker Party Engagement: Week 2

Two weeks in!

Let’s check in on our four engagement strategies.

First, some overall stats:

  • Events: 862 (up nearly 60% from the 541 we had last week, and more than a third of the way towards our goal of 2400)
  • Hosts: 347 (up >50% from 217 last week)
  • Expected attendees: 46,885 (up >75% from 25,930 last week)
  • Cities: 216 (goal is 450)

Note: I’ll start doing trend lines on these numbers soon, so we can see the overall shape.

Are there other things we should be tracking? For example, we have a goal of 70,000 Makes created through new user accounts, but I’m not sure if we have a way to easily get those numbers.

  • Webmaker accounts: 91,998 (I’m assuming “Users” on this dash is the number of account holders)
  • Contributors: If I understand the contributors dashboard correctly, we’re at 4,615, with 241 new this week.
  • Traffic: here’s the last three weeks. You can see we’re maintaining about the same levels as last week.


Engagement Strategy #1: PARTNER OUTREACH

  • # of confirmed event partners: 205 (5 new this week)
  • # of confirmed promotional partners: 63 (2 new this week)

We saw press releases/blog posts from these partners:

We also started engaging Net Neutrality partners by inviting them to join our global teach-ins.


Engagement Strategy #2: ACTIVE MOZILLIANS

  • Science Lab Global Sprint happened this week—I don’t yet know the total # of people who participated
  • Lots of event uploads this week from the Hive networks.


Engagement Strategy #3: OWNED MEDIA

  • Snippet: The snippet has generated nearly 350M impressions, >710K clicks, and >40,000 email sign-ups to date. We’ve nearly finalized some additional animal-themed icons to help prevent snippet fatigue, and have started drafting a two-email drip series for people who’ve provided their emails via the snippet (see the relevant bug).
  • Mozilla.org: In the first few days since the new Maker Party banner went live we saw a significant drop in Webmaker account conversions (as compared to the previous Webmaker focused banner). One likely cause is that, in addition to changing the banner itself, we also changed the target destination from Webmaker to Maker Party. We’ve rolled back the banner and target destination to the previous version, and are discussing iteration ideas here.

Analysis: We’ve learned quite a bit about which snippets perform best. The real test will be how many email sign-ups we can convert to Webmaker account holders.


Engagement Strategy #4: EARNED MEDIA

Planting seeds:

  • Mark had an interview with the Press Trust of India, India’s premier news agency, which has the largest press outreach in Asia.
  • Brett had an interview with The Next Web



What are the results of earned media efforts?

Here’s traffic coming from searches for “webmaker” and “maker party.” No boost here yet.


SOCIAL (not one of our key strategies):

#MakerParty trendline: You can see the spike we saw last week has tapered off.

See #MakerParty tweets here: https://twitter.com/search?q=%23makerparty&src=typd

Some highlights:

[Screenshots of highlighted #MakerParty tweets, July 21–24, 2014]


Jennifer BorissLooking Ahead: Challenges for the Open Web

At the end of this week, I’m moving on after six amazing years at Mozilla. On August 25, I’ll be joining Reddit - another global open source project – as their first […]

Benjamin KerensaUntil Next Year CLS!


Community Leadership Summit 2014 Group Photo

This past week marked my second year helping out as a co-organizer of the Community Leadership Summit. This year's Community Leadership Summit was especially important because not only did we introduce the new Community Leadership Forum, we also launched CLSx events and continued to make changes to our overall event format.

Like previous years, the attendance was a great mix of community managers and leaders. I was really excited to have an entire group of Mozillians who attended this year. As usual, my most enjoyable conversations took place at the pre-CLS social and in the hallway track. I was excited to briefly chat with the Community Team from Lego and also some folks from Adobe and learn about how they are building community in their respective settings.

I’m always a big advocate for community building, so for me CLS is an event I try to make it to each and every year, because I think it is great to have an event for community managers and builders that isn’t limited to any specific industry. It is a great opportunity to share best practices and learn from one another so that everyone mutually improves their toolkits and techniques.

It was apparent to me that there were even more women this year than in previous years, which was really awesome to see, considering CLS is often heavily attended by men from the tech industry.

I really look forward to seeing the CLS community continue to grow, and to participating in and co-organizing next year's event, possibly even kicking off a CLSxPortland.

A big thanks to the rest of the CLS team for helping make this free event a wonderful experience for all, and to this year's sponsors: O’Reilly, Citrix, Oracle, Linux Fund, Mozilla and Ubuntu!

Dave HusebyHow to Sanitize Thunderbird and Enigmail

How to sanitize encrypted email to not disclose Thunderbird or Enigmail.

Benjamin KerensaMozilla at O’Reilly Open Source Convention


Mozilla OSCON 2014 Team

This past week marked my fourth year of attending O’Reilly Open Source Convention (OSCON). It was also my second year speaking at the convention. One new thing that happened this year was I co-led Mozilla’s presence during the convention from our booth to the social events and our social media campaign.

Like each previous year, OSCON 2014 didn’t disappoint and it was great to have Mozilla back at the convention after not having a presence for some years. This year our presence was focused on promoting Firefox OS, Firefox Developer Tools and Firefox for Android.

While the metrics are still being tallied, I think our presence was a great success. We heard from a lot of developers who are already using our developer tools and from a lot who are not, many of whom we were able to educate about new features and why they should use our tools.


Alex shows attendee Firefox Dev Tools

Attendees were very excited about Firefox OS with a majority of those stopping by asking about the different layers of the platform, where they can get a device, and how they can make an app for the platform.

In addition to our booth, we also had members of the team such as Emma Irwin who helped support OSCON’s Children’s Day by hosting a Mozilla Webmaker event which was very popular with the kids and their parents. It really was great to see the future generation tinkering with Open Web technologies.

Finally, we had a social event on Wednesday evening that was so popular the Mozilla Portland office was packed till last call. During the social event, we had a local airbrush artist doing tattoos, with several attendees opting for a Firefox tattoo.

All in all, I think our presence last week was very positive and even the early numbers look positive. I want to give a big thanks to Stormy Peters, Christian Heilmann, Robyn Chau, Shezmeen Prasad, Dave Camp, Dietrich Ayala, Chris Maglione, William Reynolds, Emma Irwin, Majken Connor, Jim Blandy, Alex Lakatos for helping this event be a success.

Kevin NgoOff-Camera Lighting with Two Strobes

Had to grab my dad's Canon SL1, my old nifty-fifty, and install proprietary Canon RAW support for this measly shot.

One muggy evening in a dimly-lit garage. The sun had expired, and everything began to lose its supplied illumination. I saw my dear friend, Chicken, staring at me from the couch. A shipment had come in a few weeks ago: a second radio receiver for a wireless camera flash. I had two strobes (or flashes), two receivers, a camera and a trigger, a model, and darkness. It was time to do a brief experiment lighting a subject with two off-camera strobes.

Off-camera lighting allows for great creativity. Unlike an on-camera flash, which obliterates any dimension in your images, having an off-camera flash (or better yet, two) puts a paintbrush of light into your hands. Rather than relying on natural light, using strobes puts the whole scene under your control. It's more meticulous, but the results are beyond reach of a natural light snapshot.

I've read a bit of David Hobby's Strobist, the de-facto guide for making use of off-camera flashes. There are several ways to trigger an off-camera flash. I use radio triggers. The CowboyStudio NPT-04 is a ridiculously cheap but nails trigger. Place the trigger on the hotshoe, and attach the receivers to the flashes. Make sure your camera is on manual and your shutter speed is below the maximum flash sync speed. It's also nice to have a flash capable of manual operation (to set power).

A good workflow for getting a scene set up is to:

  • Start without any flashes. Lower camera exposure as much as possible while still keeping all detail and legibility in the image. We start with an underexposed image, and paint the light on.
  • Add one flash at a time, illuminating what you wish to illuminate.
  • Through trial-and-error (it gets faster over time), tweak camera exposure and flash power until the scene is lit as desired.

Example 1: The Dimly-Lit Garage

Normally, when I'm indoors, I like to bounce the flash off the ceiling for a nice even swath of light. But there's less dimension and it's a one-trick-pony technique. So let's first start without any flashes:

It was dark, so they de-strobed.

It's...dark. But no worries, a lot of the detail is still legible. We see the entire model (even the dark right hand), and details in the background. I'm shooting RAW so I still have a lot of dynamic range, and we'll layer in light in the next step. Let's add one strobe:

A bit out-of-focus since I was using a manual lens, but the light is there.

We now have a strobe on camera right. It's pointed 45-degrees towards the subject from camera right. This one isn't manual so I'm forced to use it at full power, but it is not too overpowering. Since it's only one flash from one side, we see a lot of dimension on the model with the shadows on camera left.

However, too much dimension can sometimes not be flattering. We see a lot of the model's wrinkles (she hadn't had her beauty sleep). We can flatten the lighting by adding yet another strobe:

A perfect white complexion.

And we have nice, even studio lighting. This strobe is set up 45-degrees camera left, pointed towards the subject. This setup of two strobes behind the camera, each pointed 45-degrees toward the subject, is fairly common for achieving even lighting. Here's a rough photo of the setup (one strobe on the left, one strobe within the shelves):

Lighting setup.

Example 2: Out At Dusk

I took it outside to the backyard. The sun was done, so no hope for golden hour, but as strobie-doos, we don't need no sun. We can make our own sun. I placed our model, Chicken, in front of the garden on a little wooden stool. Then one flash for a sidelight fill on camera left:

The ol' Chicken in the headlights.

With one flash, it doesn't quite look natural, though it makes for a cool scene. I imagine Chicken waiting at home for her husband on the porch, the headlights slowly drawing in towards her. So our model is now pretty well lit, but the background leaves something to be desired. We can creatively use our second flash to accent the background!

Manufacturing our own golden hour.

It looks like sunrise! I actually had to manually hold the flash pointing down 45-degrees on camera right towards the plants, while setting a 12-second timer on the manually-focused camera. It's times like these where a stand to hold the flash would come in handy. The second flash created a nice warm swath of light, overpowering the first flash to create a sense that the sun is on camera right.

From Here

Off-camera lighting gives complete dictatorship over the lighting of the scene. It can make night into day, or add an extremely dramatic punch. With no strobe, or only an on-camera flash, you are forced to make do with what you have. With multiple flashes and lighting modifiers (umbrellas, softboxes), there are no limits.

Photos taken with Pentax K-30 w/ Pentax-M 50mm f1.7.

Florian QuèzeFirefox is awesome

"Hey, you know what? Firefox is awesome!" someone exclaimed at the other end of the coworking space a few weeks ago. This piqued my curiosity, and I moved closer to see what she was so excited about.

When I saw the feature she was delighted to find, it reminded me of a similar situation several years ago. In high school, I was trying to convince a friend to switch from IE to Mozilla. The arguments about respecting web standards didn't convince him. He tried Mozilla anyway to please me, and found one feature that excited him.
He had been trying to save some images from webpages, and for some reason it was difficult to do (possibly because of context menu hijacking, which was common at the time, or maybe because the images were displayed as a background, …). He had even written some Visual Basic code to parse the saved HTML source code and find the image urls, and then downloaded them, but the results weren't entirely satisfying.
Now with Mozilla, he could just right click, select "View Page Info", click on the "Media" tab, and find a list of all the images of the page. I remember how excited he looked for one second, until he clicked a background image in the list and the preview stayed blank; he then clicked the "Save as" button anyway and… nothing happened. Turns out that "Save as" button was just producing an error in the Error Console. He then looked at me, very disappointed, and said that my Mozilla isn't ready yet.
After that disappointment, I didn't insist much on him using Mozilla instead of IE (I think he did switch anyway a few weeks or months later).

A few months later, as I had time during summer vacations, I tried to create an add-on for the last thing I could do with IE but not Firefox: list the hostnames that the browser connects to when loading a page (the add-on, View Dependencies, is on AMO). I used this to maintain a hosts file that was blocking ads on the network's gateway.
Working on this add-on project caused me to look at the existing Page Info code to find ideas about how to look through the resources loaded by the page. While doing this, I stumbled on the problem that was causing background image previews to not be displayed. Exactly 10 years ago, I created a patch, created a bugzilla account (I had been lurking on bugzilla for a while already, but without creating an account as I didn't feel I should have one until I had something to contribute), and attached the patch to the existing bug about this background preview issue.
Two days later, the patch was reviewed (thanks db48x!), I addressed the review comment, attached a new patch, and it was checked-in.
I remember how excited I was to verify the next day that the bug was gone in the next nightly, and how I checked that the code in the new nightly was actually using my patch.

A couple months later, I fixed the "Save as" button too in time for Firefox 1.0.

Back to 2014. The reason why someone in my coworking space was finding Firefox so awesome is that "You can click "View Page Info", and then view all the images of the page and save them." Wow. I hadn't heard anybody talking to me about Page Info in years. I did use it a lot several years ago, but don't use it that much these days. I do agree with her that Firefox is awesome, not really because it can save images (although that's a great feature other browsers don't have), but because anybody can make it better for his/her own use, and by doing so making it awesome for millions of other people now but also in the future. Like I did, ten years ago.

Raniere SilvaMathML July Meeting

MathML July Meeting


Sorry for the delay in writing this.

This is a report about the Mozilla MathML July IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

In the last 4 weeks the MathML team closed 4 bugs and worked on one other. These are only the ones tracked by Bugzilla.

The next meeting will be on July 14th at 8pm UTC (note that it will be at a different time from the last meeting; more information below). Please add topics in the PAD.

Read more...

Nikhil MaratheServiceWorkers in Firefox Update: July 26, 2014

(I will be on vacation July 27 to August 18 and unable to reply to any comments. Please see the end of the post for other ways to ask questions and raise issues.)
It’s been over 2 months since my last post, so here is an update. But first,
a link to a latest build (and this time it won’t expire!). For instructions on
enabling all the APIs, see the earlier post.

Download builds

Registration lifecycle

The patches related to ServiceWorker registration have landed in Nightly
builds! unregister() still doesn’t work in Nightly (but does in the build
above), since Bug 1011268 is waiting on review.
The state mechanism is not available. But the bug is easy to fix and
I encourage interested Gecko contributors (existing and new) to give it a shot.
Also, the ServiceWorker specification changed just a few days ago,
so Firefox still has the older API with everything on ServiceWorkerContainer.
This bug is another easy to fix bug.


Ben Kelly has been hard at work implementing Headers and some of them have
landed in Nightly. Unfortunately that isn’t of much use right now since the
Request and Response objects are very primitive and do not handle Headers.
We do have a spec updated Fetch API, with
Request, Response and fetch() primitives. What works and what doesn’t?
  1. Request and Response objects are available and the fetch event will hand
    your ServiceWorker a Request, and you can return it a Response and this will
    work! Only the Response(“string body”) form is implemented. You can of
    course create an instance and set the status, but that’s about it.
  2. fetch() does not work on ServiceWorkers! In documents, only the fetch(URL)
    form works.
  3. One of our interns, Catalin Badea, has taken over implementing Fetch while
    I’m on vacation, so I’m hoping to publish a more functional API once I’m
    back.


Catalin has done a great job of implementing these, and they are waiting for
review. Unfortunately I was unable to integrate his patches into the
build above, but he can probably post an updated build himself.


Another of our interns, Tyler Smith, has implemented the new Push
API! This is available for use on navigator.pushRegistrationManager
and your ServiceWorker will receive the Push notification.


Nothing available yet.


Currently neither ServiceWorker registrations, nor scripts are persisted or available offline. Andrea Marchesini is working on the former, and will be back from vacation next week to finish it off. Offline script caching is currently unassigned. It is fairly hairy, but we think we know how to do it. Progress on this should happen within the next few weeks.


Chris Mills has started working on MDN pages about ServiceWorkers.

Contributing to ServiceWorkers

As you can see, while Firefox is not in a situation yet to implement
full-circle offline apps, we are making progress. There are several employees
and two superb interns working on this. We are always looking for more
contributors. Here are various things you can do:
The ServiceWorker specification is meant to solve your needs. Yes, it
is hard to figure out what can be improved without actually trying it out, but
I really encourage you to step in there, ask questions and file issues
to improve the specification before it becomes immortal.
Improve Service Worker documentation on MDN. The ServiceWorker spec introduces
several new concepts and APIs, and the better we document them, the
faster web developers can use them.
There are several Gecko implementation bugs, here ordered in approximately
increasing difficulty:
  • 1040924 - Fix and re-enable the serviceworker tests on non-Windows.
  • 1043711 - Ensure ServiceWorkerManager::Register() can always
    extract a host from the URL.
  • 1041335 - Add mozilla::services Getter for
  • 982728 - Implement ServiceWorkerGlobalScope update() and
  • 1041340 - ServiceWorkers: Implement [[HandleDocumentUnload]].
  • 1043004 - Update ServiceWorkerContainer API to spec.
  • 931243 - Sync XMLHttpRequest should be disabled on ServiceWorkers.
  • 1003991 - Disable https:// only load for ServiceWorkers when Developer Tools are open.
  • Full list
Don’t hesitate to ask for help on the #content channel on irc.mozilla.org.

Chris McDonaldNegativity in Talks

I was at a meetup recently, and one of the organizers was giving a talk. They came across some PHP in the demo they were doing, and cracked a joke about how bad PHP is. The crowd laughed and cheered along with the joke. This isn’t an isolated incident; it happens during talks or discussions all the time. That doesn’t mean it is acceptable.

When I broke into the industry, my first gig was writing Perl, Java, and PHP. All of these languages have stigmas around them these days. Perl has its magic and the fact that only neckbeard sysadmins write it. Java is the ‘I just hit tab in my IDE and the code writes itself!’ and other comments on how ugly it is. PHP, possibly the most made fun of language, doesn’t even get a reason most of the time. It is just ‘lulz php is bad, right gaise?’

Imagine a developer who is just getting started. They are ultra proud of their first gig, which happens to be working on a Drupal site in PHP. They come to a user group for a different language they’ve read about and think sounds neat. They then hear speakers that people appear to respect making jokes about the job they are proud of, with the crowd joining in on this negativity. This is not inspiring to them; it just reinforces the impostor syndrome most of us felt as we started out in tech.

So what do we do about this? If you are a group organizer, you already have all the power you need to make the changes. Talk with your speakers when they volunteer or are asked to speak. Let them know you want to promote a positive environment regardless of background. Consider writing up guidelines for your speakers to agree to.

How about as just an attendee? The best bet is probably speaking to one of the organizers. Bring it to their attention that their speakers are alienating a portion of their audience with the language trash talking. Approach it as a problem to be fixed in the future, not as if they intended to insult.

Keep in mind I’m not opposed to direct comparison between languages. “I enjoy the lack of type inference because it makes the truth table much easier to understand than, for instance, PHP’s.” This isn’t insulting the whole language, it isn’t turning it into a joke. It is just illustrating a difference that the speaker values.

Much like other negativity in our community, this will take some time to fix. Keep in mind this isn’t just about user group or conference talks; discussions around a table suffer from it as well. The first place one should address this problem is within themselves. We are all better than this pandering; we can build ourselves up without having to push others down. Let’s go out and make our community much more positive.

Tarek ZiadéToxMail experiment

I am still looking for a good e-mail replacement that is more respectful of my privacy.

This will never happen with the existing e-mail system due to the way it works: when you send an e-mail to someone, even if you encrypt the body of your e-mail, the metadata will transit from server to server in clear, and the final destination will store it.

Every PGP UX I have tried is terrible anyways. It's just too painful to get things right for someone that has no knowledge (and no desire to have some) of how things work.

What I'm aiming for now is a separate system to send and receive mails with my close friends and my family. Something that my mother can use like regular e-mail, without any extra work.

I guess some kind of "Darknet for E-mails", where there are no intermediate servers between my mailbox and my mom's mailbox, and no way for an eavesdropper to get the content.

My requirements:
  • end-to-end encryption
  • direct network link between my mom's mail server and me
  • based on existing protocols (SMTP/IMAP/POP3) so my mom can use Thunderbird or I can set her up a Zimbra server.

Project Tox

The Tox Project is a project that aims to replace Skype with a more secure instant messaging system. You can send text, voice and even video messages to your friends.

It's based on NaCl for the crypto bits, in particular the crypto_box API, which provides high-level functions to generate public/private key pairs and to encrypt/decrypt messages with them.

The other main feature of Tox is its Distributed Hash Table that contains the list of nodes that are connected to the network with their Tox Id.

When you run a Tox-based application, you become part of the Tox network by registering to a few known public nodes.

To send a message to someone, you have to know their Tox Id, and send an encrypted message using the crypto_box API and the keypair magic.

Tox was created as an instant messaging system, so it has features to add/remove/invite friends, create groups, etc., but its core capability is to let you reach another node given its id, and communicate with it. And that can be any kind of communication.

So e-mails could transit through Tox nodes.

Toxmail experiment

Toxmail is my little experiment to build a secure e-mail system on the top of Tox.

It's a daemon that registers to the Tox network and runs an SMTP service that converts outgoing e-mails to text messages that are sent through Tox. It also converts incoming text messages back into e-mails and stores them in a local Maildir.

Toxmail also runs a simple POP3 server, so it's actually a full stack that can be used through an e-mail client like Thunderbird.

You can just create a new account in Thunderbird, point it to the Toxmail SMTP and POP3 local services, and use it like any other e-mail account.

When you want to send someone an e-mail, you have to know their Tox Id, and use TOXID@tox as the recipient.

For example:


When the SMTP daemon sees this, it tries to send the e-mail to that Tox Id. What I am planning to do is to have an automatic conversion of regular e-mail addresses using a lookup table the user can maintain: a list of contacts where you provide an e-mail address and a Tox Id for each entry.
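The recipient handling described above can be sketched in Python. This is an illustration of the idea only, not Toxmail's actual code; the function name and the contact entry are made up.

```python
# Hypothetical sketch of Toxmail-style recipient resolution:
# a TOXID@tox address is used directly, anything else is looked up
# in the user-maintained contacts table, and unknown addresses fall
# back to regular e-mail routing.

CONTACTS = {
    # e-mail address -> Tox Id (entries maintained by the user; this one is fake)
    "mom@example.com": "56A1ADE4B65B86BCD51CC73E2CD4E542179F4795",
}

def resolve_recipient(address):
    """Return ('tox', tox_id) or ('smtp', address) for an outgoing mail."""
    local, _, domain = address.rpartition("@")
    if domain == "tox":
        return ("tox", local)              # explicit TOXID@tox form
    if address in CONTACTS:
        return ("tox", CONTACTS[address])  # automatic conversion via lookup table
    return ("smtp", address)               # fall back to regular routing
```

The same function also covers the mixed-recipient caveat discussed later: addresses that resolve to `smtp` would be routed normally, with the user notified.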

End-to-end encryption, no intermediates between the user and the recipient. Ya!

Caveats & Limitations

For ToxMail to work, it needs to be registered to the Tox network all the time.

This limitation can be partially solved by adding in the SMTP daemon a retry feature: if the recipient's node is offline, the mail is stored and it tries to send it later.

But for the e-mail to go through, the two nodes have to be online at the same time at some point.
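The store-and-retry idea could look roughly like this. It is a minimal in-memory sketch under stated assumptions: the daemon would hand it some callable that attempts a Tox send and reports success; none of these names come from the Toxmail code.

```python
import time

class RetryQueue:
    """Minimal sketch of the retry idea: if the recipient's Tox node is
    offline, keep the message and try again later."""

    def __init__(self, send, retry_delay=60):
        self.send = send          # callable(tox_id, body) -> True if delivered
        self.retry_delay = retry_delay
        self.pending = []         # [(not_before_timestamp, tox_id, body)]

    def submit(self, tox_id, body):
        """Try to send now; store the message if the node is offline."""
        if not self.send(tox_id, body):
            self.pending.append((time.time() + self.retry_delay, tox_id, body))

    def flush(self):
        """Retry stored messages whose delay has elapsed."""
        now = time.time()
        still_pending = []
        for not_before, tox_id, body in self.pending:
            if now < not_before or not self.send(tox_id, body):
                still_pending.append((not_before, tox_id, body))
        self.pending = still_pending
```

A real daemon would persist `pending` to disk (both nodes still have to be online at the same time at some point, as noted above).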

Maybe a good way to solve this would be to have Toxmail run on a Raspberry Pi plugged into the home internet box. That'd make sense actually: run your own little mail server for all your family/friends conversations.

One major problem though is what to do with e-mails that are to be sent to recipients that are part of your toxmail contact list, but also to recipients that are not using Toxmail. I guess the best thing to do is to fallback to the regular routing in that case, and let the user know.

Anyways, lots of fun playing with this on my spare time.

The prototype is being built here, using Python and the PyTox binding:


It has reached a state where you can actually send and receive e-mails :)

I'd love to have feedback on this little project.

Kevin NgoPoker Sess.28 - Building an App for Tournament Players

Re-learning how to fish.

Nothing like a win to get things back on track. I went back to my bread-and-butter, the Saturday freerolls at the Final Table. Through the dozen or so times I've played these freerolls, I've amassed an insane ROI. After three hours of play, we chopped four ways for $260.

Building a Personalized Poker Buddy App

I have been thinking about building a personalized mobile app for myself to assist me in all things poker. I always try to look to my hobbies for inspiration to build and develop. With insomnia at time of writing, I was reading Rework by 37signals to pass the time. It said to make use of fleeting inspiration as it came. And this idea may click. The app will have two faces. A poker tracker on one side, and a handy tournament pocket tool on the other.

The poker tracker would track and graph earnings (and losings!) over time. Data is beautiful, and a solid green line slanting from the lower-left to the upper-right would impart some motivation. My blog (and an outdated JSON file) has been the only means I have for bookkeeping. I'd like to be able to easily input results on my phone immediately after a tournament (for those times I don't feel like blogging after a bust).

The pocket tool will act as a "pre-hand" reference during late-stage live tournaments, giving recommendations on what I should do next hand, factoring in several conditions and situations. It will be optimized to be usable before a hand, since phone use during a hand is illegal. The visuals will be obfuscated and surreptitious (maybe style it like Facebook...) such that neighboring players don't catch on. I'd input the blinds, antes, number of players, table dynamics, my stack size, and my position to determine the range of hands I can profitably open-shove.

Though it can also act as a post-hand reference, containing Harrington's pre-flop strategy charts and some hand-vs-hand race percentages.
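One concrete calculation such a tool would build on is Harrington's M-ratio, which uses exactly the inputs listed above (blinds, antes, players, stack). A sketch, with the zone boundaries as popularized in Harrington on Hold 'em; the function names are mine, not from the planned app:

```python
def m_ratio(stack, small_blind, big_blind, ante=0, players=9):
    """Harrington's M: how many orbits the stack covers at current costs."""
    cost_per_orbit = small_blind + big_blind + ante * players
    return stack / cost_per_orbit

def zone(m):
    """Harrington's color zones, from freely-opening down to desperate."""
    if m >= 20: return "green"
    if m >= 10: return "yellow"
    if m >= 5:  return "orange"
    if m >= 1:  return "red"
    return "dead"
```

For example, a 10,000 stack at 100/200 with a 25 ante nine-handed gives M of about 19, squarely in the yellow zone where the open-shove charts start to matter.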

I'd pay for this, and I'd bet other live tournament players would be interested as well. I have been in sort of an entrepreneurial mood lately, grabbing Rework and The $100 Startup. I have domain knowledge, a need for a side project, a niche, and something I could dogfood.

The Poker Bits

Won the morning freeroll, busted at the final table in the afternoon freeroll. The biggest mistake that stuck out was calling off a river bet with AQ on a KxxQx board against a maniac (who was obviously high). He raised UTG, I flatted in position with AQ. I called his half-pot cbet with a gutshot and an overcard, keeping his maniac image in mind. I improved on the turn and called a min-bet. Then I called a good-sized bet on the river. I played this way given his image and the pot odds, but I should have 3bet preflop, folded the flop, raised the turn, or folded the river given the triple barrel.

I have been doing well in online tournaments as well. A first place finish in single-table tourney, and a couple high finishes in 500-man multi-table tourneys. Though I have been doing terrible in cash games. It's weird, I used to play exclusively 6-max cash, but since I started playing full ring tourneys, I haven't gotten reaccustomed to it. I prefer the flow of tourneys; it has a start and an end, players increasingly become more aggressive, the blinds make it feel I'm always on the edge. Conversely, cash games are a boring grind.

Session Conclusions

  • Went Well: playing more conservative, decreasing cbet%, improving hand reading
  • Mistakes: should have folded a couple of one-pair hands to double/triple barrels, playing terribly at cash games, playing AQ badly preflop
  • Get Better At: understanding verbal poker tells from Elwood's new book
  • Profit: +$198

Jeff WaldenNew mach build feature: build-complete notifications on Linux

Spurred on by gps‘s recent mach blogging (and a blogging dry spell to rectify), I thought it’d be worth noting a new mach feature I landed in mozilla-inbound yesterday: build-complete notifications on Linux.

On OS X, mach build spawns a desktop notification when a build completes. It’s handy when the terminal where the build’s running is out of view — often the case given how long builds take. I learned about this feature when stuck on a loaner Mac for a few months due to laptop issues, and I found the notification quite handy. When I returned to Linux, I wanted the same thing there. evilpie had already filed bug 981146 with a patch using DBus notifications, but he didn’t have time to finish it. So I picked it up and did the last 5% to land it. Woo notifications!

(Minor caveat: you won’t get a notification if your build completes in under five minutes. Five minutes is probably too long; some systems build fast enough that you’d never get a notification. gps thinks this should be shorter and ideally configurable. I’m not aware of an existing bug for this.)
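The five-minute gate described in the caveat boils down to a simple threshold check before firing the notification. A rough Python sketch; the real mach code differs, and `default_notify` here is a hypothetical stand-in that shells out to notify-send rather than talking DBus directly:

```python
import subprocess

MIN_NOTIFY_SECONDS = 300  # mach's five-minute threshold (currently not configurable)

def default_notify(message):
    # Stand-in notifier for this sketch; the mach patch uses DBus on Linux.
    subprocess.call(["notify-send", message])

def maybe_notify(elapsed_seconds, notify=default_notify):
    """Fire a desktop notification only for builds longer than the threshold."""
    if elapsed_seconds < MIN_NOTIFY_SECONDS:
        return False
    notify("Build complete")
    return True
```

Injecting the notifier keeps the threshold logic testable (and, eventually, configurable) without needing a desktop session.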

Florian QuèzeConverting old Mac minis into CentOS Instantbird build slaves

A while ago, I received a few retired Mozilla minis. Today 2 of them started their new life as CentOS6 build slaves for Instantbird, which means we now have Linux nightlies again! Our previous Linux build slave, running CentOS5, was no longer able to build nightlies based on the current mozilla-central code, and this is the reason why we haven't had Linux nightlies since March. We know it's been a long wait, but to help our dear Linux testers forgive us, we started offering 64bit nightly builds!

For the curious, and for future reference, here are the steps I followed to install these two new build slaves:

Partition table

The Mac minis came with a GPT partition table and an hfs+ partition that we don't want. While the CentOS installer was able to detect them, the grub it installed there didn't work. The solution was to convert the GPT partition table to the older MBR format. To do this, boot into a modern Linux distribution (I used an Ubuntu 13.10 live dvd that I had around), install gdisk (sudo apt-get update && sudo apt-get install gdisk) and use it to edit the disk's partition table:

sudo gdisk /dev/sda
Press 'r' to start recovery/transformation, 'g' to convert from GPT to MBR, 'p' to see the resulting partition table, and finally 'w' to write the changes to disk (instructions initially from here).
Exit gdisk.
Now you can check the current partition table using gparted. At this point I deleted the hfs+ partition.

Installing CentOS

The version of CentOS needed to use the current Mozilla build tools is CentOS 6.2. We tried before using another (slightly newer) version, and we never got it to work.

Reboot on a CentOS 6.2 livecd (press the 'c' key at startup to force the mac mini to look for a bootable CD).
Follow the instructions to install CentOS on the hard disk.
I customized the partition table a bit (50000MB for /, 2048MB of swap space, and the rest of the disk for /home).

The only non-obvious part of the CentOS install is that the boot loader needs to be installed on the MBR rather than on the partition where the system is installed. When the installer asks where grub should be installed, set it to /dev/sda (the default is /dev/sda2, and that won't boot). Of course I got this wrong in my first attempts.

Installing Mozilla build dependencies

First, install an editor that is usable to you. I typically use emacs, so: sudo yum install emacs

The Mozilla Linux build slaves use a specifically tweaked version of gcc so that the produced binaries have low runtime dependencies, but the compiler still has the build time feature set of gcc 4.7. If you want to use something as old as CentOS6.2 to build, you need this specific compiler.

The good thing is, there's a yum repository publicly available where all the customized mozilla packages are available. To install it, create a file named /etc/yum.repos.d/mozilla.repo and make it contain this:


Adapt the baseurl to finish with i386 or x86_64 depending on whether you are making a 32 bit or 64 bit slave.
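The file listing itself didn't survive in this post. For reference, a yum repository definition of this kind generally takes the following shape; the baseurl below is a placeholder, not the actual Mozilla package repository URL (only the mozilla repo id is confirmed by the repoquery command that follows):

```ini
[mozilla]
name=Mozilla build packages
baseurl=http://example.com/path/to/releng/repo/x86_64/
enabled=1
gpgcheck=0
```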

After saving this file, you can check that it had the intended effect by running this command to list the packages from the mozilla repository: repoquery -q --repoid=mozilla -a

You want to install the version of gcc473 and the version of mozilla-python27 that appear in that list.

You also need several other build dependencies. MDN has a page listing them:

yum groupinstall 'Development Tools' 'Development Libraries' 'GNOME Software Development'
yum install mercurial autoconf213 glibc-static libstdc++-static yasm wireless-tools-devel mesa-libGL-devel alsa-lib-devel libXt-devel gstreamer-devel gstreamer-plugins-base-devel pulseaudio-libs-devel

Unfortunately, two dependencies were missing on that list (I've now fixed the page):
yum install gtk2-devel dbus-glib-devel

At this point, the machine should be ready to build Firefox.

Instantbird, because of libpurple, depends on a few more packages:
yum install avahi-glib-devel krb5-devel

And it will be useful to have ccache:
yum install ccache

Installing the buildbot slave

First, install the buildslave command, which unfortunately doesn't come as a yum package, so you need to install easy_install first:

yum install python-setuptools python-devel mpfr
easy_install buildbot-slave

python-devel and mpfr here are build-time dependencies of the buildbot-slave package; not having them installed will cause compile errors while attempting to install buildbot-slave.

We are now ready to actually install the buildbot slave. First let's create a new user for buildbot:

adduser buildbot
su buildbot
cd /home/buildbot

Then the command to create the local slave is:

buildslave create-slave --umask=022 /home/buildbot/buildslave buildbot.instantbird.org:9989 linux-sN password

The buildbot slave will be significantly more useful if it starts automatically when the OS starts, so let's edit the crontab (crontab -e) to add this entry:
@reboot PATH=/usr/local/bin:/usr/bin:/bin /usr/bin/buildslave start /home/buildbot/buildslave

The PATH environment variable has to be set here because the default path doesn't contain /usr/local/bin, which is where the mozilla-python27 package installs python2.7 (required by mach during builds).

One step in the Instantbird builds configured on our buildbot uses hg clean --all, which requires the purge mercurial extension to be enabled, so let's edit ~buildbot/.hgrc to look like this:
$ cat ~/.hgrc
[extensions]
purge =

Finally, ssh needs to be configured so that successful builds can be uploaded automatically. Copy and adapt ~buildbot/.ssh from an existing working build slave. The files that are needed are id_dsa (the ssh private key) and known_hosts (so that ssh doesn't prompt about the server's fingerprint the first time we upload something).

There we go, working Instantbird Linux build slaves! Figuring out all these details for our first CentOS 6 slave took me a few evenings, but doing it again on the second slave was really easy.

Aki Sasaki: on leaving mozilla

Today's my last day at Mozilla. It wasn't an easy decision to move on; this is the best team I've been a part of in my career. And working at a company with such idealistic principles and the capacity to make a difference has been a privilege.

Looking back at the past five-and-three-quarter years:

  • I wrote mozharness, a versatile scripting harness. I strongly believe in its three core concepts: versatile locking config; full logging; modularity.

  • I helped FirefoxOS (b2g) ship, and it's making waves in the industry. Internally, the release processes are well on the path to maturing and stabilizing, and b2g is now riding the trains.

    • Merge day: Releng took over ownership of merge day, and b2g increased its complexity exponentially. I don't think it's quite that bad :) I whittled it down from requiring someone's full mental capacity for three out of every six weeks, to several days of precisely following directions.

    • I rewrote vcs-sync to be more maintainable and robust, and to support gecko-dev and gecko-projects. Being able to support both mercurial and git across many hundreds of repos has become a core part of our development and automation, primarily because of b2g. The best thing you can say about a mission critical piece of infrastructure like this is that you can sleep through the night or take off for the weekend without worrying if it'll break. Or go on vacation for 3 1/2 weeks, far from civilization, without feeling guilty or worried.

  • I helped ship three mobile 1.0's. I learned a ton, and I don't think I could have gotten through it by myself; John and the team helped me through this immensely.

    • On mobile, we went from one or two builds on a branch to full tier-1 support: builds and tests on checkin across all of our integration-, release-, and project- branches. And mobile is riding the trains.

    • We Sim-shipped 5.0 on Firefox desktop and mobile off the same changeset. Firefox 6.0b2, and every release since then, was built off the same automation for desktop and mobile. Those were total team efforts.

    • I will be remembered for the mobile pedalboard. When we talked to other people in the industry, this was more on-device mobile test automation than they had ever seen or heard of; their solutions all revolved around manual QA.

      (full set)

    • And they are like effin bunnies; we later moved on to shoe rack bunnies, rackmounted bunnies, and now more and more emulator-driven bunnies in the cloud, each numbering in the hundreds or more. I've been hands off here for quite a while; the team has really improved things leaps and bounds over my crude initial attempts.

  • I brainstormed next-gen build infrastructure. I started blogging about this back in January 2009, based largely around my previous webapp+db design elsewhere, but I think my LWR posts in Dec 2013 had more of an impact. A lot of those ideas ended up in TaskCluster; mozharness scripts will contain the bulk of the client-side logic. We'll see how it all works when TaskCluster starts taking on a significant percentage of the current buildbot load :)

I will stay a Mozillian, and I'm looking forward to seeing where we can go from here!


Vaibhav Agrawal: Let's have more green trees

I have been working on making jobs ignore intermittent failures for mochitests (bug 1036325) on try servers, to prevent unnecessary oranges and save the resources that go into retriggering those jobs on tbpl. I am glad to announce that this has been achieved for desktop mochitests (Linux, OS X and Windows). It doesn't work for Android/B2G mochitests yet, but they will be supported in the future. This post explains how it works in detail and is a bit lengthy, so bear with me.

Let's see the patch in action. Here is an example of an almost green try push:

Tbpl Push Log

 Note: one bc1 orange job is because of a leak (Bug 1036328)

In this push the intermittents were suppressed; for example, this log shows an intermittent failure in a mochitest-4 job on Linux:


Even though there was an intermittent failure for this job, the job remains green. We can determine if a job produced an intermittent failure by inspecting the number of tests run for the job on tbpl, which will be much smaller than normal. For example, the intermittent mochitest-4 job above shows “mochitest-plain-chunked: 4696/0/23” as compared to the normal “mochitest-plain-chunked: 16465/0/1954”. Another way is to look at the log of the particular job for “TEST-UNEXPECTED-FAIL”.


The algorithm behind getting a green job even in the presence of an intermittent failure is that we recognize the failing test and run it independently 10 times. If the test fails fewer than 3 times out of 10, it is marked as intermittent and we leave it alone. If it fails 3 or more times out of 10, there is a real problem in the test, and the job turns orange.
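A minimal sketch of that retry rule (hypothetical helper code, not the actual mochitest harness logic):

```python
def classify(run_test, retries=10, threshold=3):
    """Rerun a failing test `retries` times; it only counts as an
    intermittent if it fails fewer than `threshold` of those reruns."""
    failures = sum(1 for _ in range(retries) if not run_test())
    return "intermittent" if failures < threshold else "real failure"

# A test that passes on every rerun was merely intermittent, so the job
# stays green; one that keeps failing turns the job orange.
print(classify(lambda: True))   # intermittent
print(classify(lambda: False))  # real failure
```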


Next to test the case of a “real” failure, I wrote a unit test and tested it out in the try push:


This job is orange and the log for this push is:


In this summary, a test is failing more than three times and hence we get a real failure. The important line in this summary is:

3086 INFO TEST-UNEXPECTED-FAIL | Bisection | Please ignore repeats and look for ‘Bleedthrough’ (if any) at the end of the failure list

This tells us that the bisection procedure has started and we should look out for future “Bleedthrough”, that is, the test causing the failure. And at the last line it prints the “real failure”:

TEST-UNEXPECTED-FAIL | testing/mochitest/tests/Harness_sanity/test_harness_post_bisection.html | Bleedthrough detected, this test is the root cause for many of the above failures

Aha! So we have found a permanent failing test and it is probably due to some fault in the developer’s patch. Thus, the developers can now focus on the real problem rather than being lost in the intermittent failures.

This patch has landed on mozilla-inbound and I am working on enabling it as an option on trychooser (More on that in the next blog post). However if someone wants to try this out now (works only for desktop mochitest), one can hack in just a single line:

options.bisectChunk = 'default'

such as in this diff inside runtests.py and test it out!

Hopefully, this will also take us a step closer to AutoLand (automatic landing of patches).

Other Bugs Solved for GSoC:

[1028226] – Clean up the code for manifest parsing
[1036374] – Adding a binary search algorithm for bisection of failing tests
[1035811] – Mochitest manifest warnings dumped at start of each robocop test

A big shout out to my mentor (Joel Maher) and other a-team members for helping me in this endeavour!

Just Browsing: Taming Gruntfiles

Every software project needs plumbing.

If you write your code in JavaScript chances are you’re using Grunt. And if your project has been around long enough, chances are your Gruntfile is huge. Even though you write comments and indent properly, the configuration is starting to look unwieldy and is hard to navigate and maintain (see ngbp’s Gruntfile for an example).

Enter load-grunt-config, a Grunt plugin that lets you break up your Gruntfile by task (or task group), allowing for a nice navigable list of small per-task Gruntfiles.

When used, your Grunt config file tree might look like this:

 |_ Gruntfile.coffee
 |_ grunt/
    |_ aliases.coffee
    |_ browserify.coffee
    |_ clean.coffee
    |_ copy.coffee
    |_ watch.coffee
    |_ test-group.coffee

watch.coffee, for example, might be (the watch target names here are illustrative):

module.exports = {
  scripts: {
    files: [
      '<%= buildDir %>/**/*.coffee',
      '<%= buildDir %>/**/*.js'
    ]
    tasks: ['test']
  }
  html: {
    files: ['<%= srcDir %>/**/*.html']
    tasks: ['copy:html', 'test']
  }
  css: {
    files: ['<%= srcDir %>/**/*.css']
    tasks: ['copy:css', 'test']
  }
  img: {
    files: ['<%= srcDir %>/img/**/*.*']
    tasks: ['copy:img', 'test']
  }
}

and aliases.coffee (the task lists here are placeholders for whatever your project defines):

module.exports = {
  default: [
    'build'
    'test'
  ]
  dev: [
    'build'
    'watch'
  ]
}
By default, load-grunt-config reads the task configurations from the grunt/ folder located on the same level as your Gruntfile. If there's an aliases.js|coffee|yml file in that directory, load-grunt-config will use it to load your task aliases (which is convenient because one of the problems with long Gruntfiles is that the task aliases are hard to find).

Other files in the grunt/ directory define configurations for a single task (e.g. grunt-contrib-watch) or a group of tasks.

Another nice thing is that load-grunt-config takes care of loading plugins; it reads package.json and automatically calls loadNpmTasks on all the grunt plugins it finds for you.

To sum it up, for a bigger project, your Gruntfile can get messy. load-grunt-config helps combat that by introducing structure into the build configuration, making it more readable and maintainable.

Happy grunting!

Gregory Szorc: Please run mach mercurial-setup

Hey there, Firefox developer! Do you use Mercurial? Please take the time right now to run mach mercurial-setup from your Firefox clone.

It's been updated to ensure you are running a modern Mercurial version. More awesomely, it has support for a couple of new extensions to make you more productive. I think you'll like what you see.

mach mercurial-setup doesn't change your hgrc without confirmation. So it is safe to run to see what's available. You should consider running it periodically, say once a week or so. I wouldn't be surprised if we add a notification to mach to remind you to do this.

Gervase Markham: Now We Are Five…

10 weeks old, and beautifully formed by God :-) The due date is 26th January 2015.

Maja Frydrychowicz: Database Migrations ("You know nothing, Version Control.")

This is the story of how I rediscovered what version control doesn't do for you. Sure, I understand that git doesn't track what's in my project's local database, but to understand is one thing and to feel in your heart forever is another. In short, learning from mistakes and accidents is the greatest!

So, I've been working on a Django project and as the project acquires new features, the database schema changes here and there. Changing the database from one schema to another and possibly moving data between tables is called a migration. To manage database migrations, we use South, which is sort of integrated into the project's manage.py script. (This is because we're really using playdoh, Mozilla's augmented, specially-configured flavour of Django.)

South is lovely. Whenever you change the model definitions in your Django project, you ask South to generate Python code that defines the corresponding schema migration, which you can customize as needed. We'll call this Python code a migration file. To actually update your database with the schema migration, you feed the migration file to manage.py migrate.

These migration files are safely stored in your git repository, so your project has a history of database changes that you can replay backward and forward. For example, let's say you're working in a different repository branch on a new feature for which you've changed the database schema a bit. Whenever you switch to the feature branch you must remember to apply your new database migration (migrate forward). Whenever you switch back to master you must remember to migrate backward to the database schema expected by the code in master. Git doesn't know which migration your database should be at. Sometimes I'm distracted and I forget. :(

As always, it gets more interesting when you have project collaborators because they might push changes to migration files and you must pay attention and remember to actually apply these migrations in the right order. We will examine one such scenario in detail.

Adventures with Overlooked Database Migrations

Let's call the actors Sparkles and Rainbows. Sparkles and Rainbows are both contributing to the same project and so they each regularly push or pull from the same "upstream" git repository. However, they each use their own local database for development. As far as the database goes, git is only tracking South migration files. Here is our scenario.

  1. Sparkles pushes Migration Files 1, 2, 3 to upstream and applies these migrations to their local db in that order.
  2. Rainbows pulls Migration Files 1, 2, 3 from upstream and applies them to their local db in that order.

    All is well so far. The trouble is about to start.

  3. Sparkles reverses Migration 3 in their local database (backward migration to Migration 2) and pushes a delete of the Migration 3 file to upstream.
  4. Rainbows pulls from upstream: the Migration 3 file no longer exists at HEAD but it must also be reversed in the local db! Alas, Rainbows does not perform the backward migration. :(
  5. Life goes on and Sparkles now adds Migration Files 4 and 5, applies the migrations locally and pushes the files to upstream.
  6. Rainbows happily pulls Migrations Files 4 and 5 and applies them to their local db.

    Notice that Sparkles' migration history is now 1-2-4-5 while Rainbows' is 1-2-3-4-5, even though 3 is no longer part of the up-to-date project!
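Comparing the two histories as lists makes the mismatch obvious; a toy sketch (not actual South code) of the bookkeeping git won't do for you:

```python
def diverged(applied, at_head):
    """Return (pending, orphaned): migrations at HEAD not yet applied to
    the local db, and applied migrations whose files are gone from HEAD."""
    pending = [m for m in at_head if m not in applied]
    orphaned = [m for m in applied if m not in at_head]
    return pending, orphaned

# Rainbows applied 1-2-3-4-5, but HEAD only carries 1-2-4-5:
pending, orphaned = diverged(list("12345"), list("1245"))
print(orphaned)  # ['3'] -- must be reversed from an older commit
```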

At some point Rainbows will encounter Django or South errors, depending on the nature of the migrations, because the database doesn't match the expected schema. No worries, though, it's git, it's South: you can go back in time and fix things.

I was recently in Rainbows' position. I finally noticed that something was wrong with my database when South started refusing to apply the latest migration from upstream, telling me "Sorry! I can't drop table TaskArea, it doesn't exist!"

FATAL ERROR - The following SQL query failed: DROP TABLE tasks_taskarea CASCADE;
The error was: (1051, "Unknown table 'tasks_taskarea'")
KeyError: "The model 'taskarea' from the app 'tasks' is not available in this migration."

In my instance of the Sparkles-Rainbows story, Migration 3 and Migration 5 both drop the TaskArea table; I'm trying to apply Migration 5, and South grumbles in response because I had never reversed Migration 3. As far as South knows, there's no such thing as a TaskArea table.

Let's take a look at my migration history, which is conveniently stored in the database itself:

select migration from south_migrationhistory where app_name="tasks";

The output is shown below. The lines of interest are 0010_auto__del and 0010_auto__chg; I'm trying to apply migration 0011 but I can't, because it's the same migration as 0010_auto__del, which should have been reversed a few commits ago.

|  migration                                                                   |
|  0001_initial                                                                |
|  0002_auto__add_feedback                                                     |
|  0003_auto__del_field_task_allow_multiple_finishes                           |
|  0004_auto__add_field_task_is_draft                                          |
|  0005_auto__del_field_feedback_task__del_field_feedback_user__add_field_feed |
|  0006_auto__add_field_task_creator__add_field_taskarea_creator               |
|  0007_auto__add_taskkeyword__add_tasktype__add_taskteam__add_taskproject__ad |
|  0008_task_data                                                              |
|  0009_auto__chg_field_task_team                                              |
|  0010_auto__del_taskarea__del_field_task_area__add_field_taskkeyword_name    |
|  0010_auto__chg_field_taskattempt_user__chg_field_task_creator__chg_field_ta |

I want to migrate backwards until 0009, but I can't do that directly because the migration file for 0010_auto__del is not part of HEAD anymore, just like Migration 3 in the story of Sparkles and Rainbows, so South doesn't know what to do. However, that migration does exist in a previous commit, so let's go back in time.

I figure out which commit added the migration I need to reverse:

# Display commit log along with names of files affected by each commit. 
# Once in less, I searched for '0010_auto__del' to get to the right commit.
$ git log --name-status | less

With that key information, the following sequence of commands tidies everything up:

# Switch to the commit that added migration 0010_auto__del
$ git checkout e67fe32c
# Migrate backward to a happy migration; I chose 0008 to be safe. 
# ./manage.py migrate [appname] [migration]
$ ./manage.py migrate oneanddone.tasks 0008
$ git checkout master
# Sync the database and migrate all the way forward using the most up-to-date migrations.
$ ./manage.py syncdb && ./manage.py migrate

Mark Finkle: Firefox for Android: Collecting and Using Telemetry

Firefox 31 for Android is the first release where we collect telemetry data on user interactions. We created a simple “event” and “session” system, built on top of the current telemetry system that has been shipping in Firefox for many releases. The existing telemetry system is focused more on the platform features and tracking how various components are behaving in the wild. The new system is really focused on how people are interacting with the application itself.

Collecting Data

The basic system consists of two types of telemetry probes:

  • Events: A telemetry probe triggered when the user takes an action. Examples include tapping a menu, loading a URL, sharing content or saving content for later. An Event is also tagged with a Method (how was the Event triggered) and an optional Extra tag (extra context for the Event).
  • Sessions: A telemetry probe triggered when the application starts a short-lived scope or situation. Examples include showing a Home panel, opening the awesomebar or starting a reading viewer. Each Event is stamped with zero or more Sessions that were active when the Event was triggered.

We add the probes into any part of the application that we want to study, which is most of the application.
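As a rough sketch of that model (hypothetical Python; the real probes live in Firefox for Android's Java code, and the probe names below are made up):

```python
class Telemetry:
    """Events get stamped with whichever Sessions are active when they fire."""

    def __init__(self):
        self.active_sessions = set()
        self.events = []

    def start_session(self, name):
        self.active_sessions.add(name)

    def stop_session(self, name):
        self.active_sessions.discard(name)

    def send_event(self, action, method, extras=None):
        # An Event records the action, how it was triggered (Method),
        # optional context (Extra), and the currently active Sessions.
        self.events.append({
            "action": action,
            "method": method,
            "extras": extras,
            "sessions": sorted(self.active_sessions),
        })

t = Telemetry()
t.start_session("homepanel")
t.send_event("loadurl", method="listitem")
print(t.events[0]["sessions"])  # ['homepanel']
```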

Visualizing Data

The raw telemetry data is processed into summaries, one for Events and one for Sessions. In order to visualize the telemetry data, we created a simple dashboard (source code). It’s built using a great little library called PivotTable.js, which makes it easy to slice and dice the summary data. The dashboard has several predefined tables so you can start digging into various aspects of the data quickly. You can drag and drop the fields into the column or row headers to reorganize the table. You can also add filters to any of the fields, even those not used in the row/column headers. It’s a pretty slick library.


Acting on Data

Now that we are collecting and studying the data, the goal is to find patterns that are unexpected or might warrant a closer inspection. Here are a few of the discoveries:

Page Reload: Even in our Nightly channel, people seem to be reloading the page quite a bit. Way more than we expected. It’s one of the Top 2 actions. Our current thinking includes several possibilities:

  1. Page gets stuck during a load and a Reload gets it going again
  2. Networking error of some kind, with a “Try again” button on the page. If the button does not solve the problem, a Reload might be attempted.
  3. Weather or some other frequently updated page where a Reload shows the current information.

We have started projects to explore the first two issues. The third issue might be fine as-is, or maybe we could add a feature to make updating pages easier? You can still see high uses of Reload (reload) on the dashboard.

Remove from Home Pages: The History page, primarily, and the Top Sites page see high uses of Remove (home_remove) to delete browsing information from the Home pages. People do this a lot; again, it's one of the Top 2 actions. People will do this repeatedly, over and over, clearing the entire list in a manual fashion. Firefox has a Clear History feature, but it must not be very discoverable. We also see people asking for easier ways of clearing history in our feedback, but it wasn't until we saw the telemetry data that we understood how badly this was needed. This led us to add some features:

  1. Since the History page was the predominant source of the Removes, we added a Clear History button right on the page itself.
  2. We added a way to Clear History when quitting the application. This was a bit tricky since Android doesn’t really promote “Quitting” applications, but if a person wants to enable this feature, we add a Quit menu item to make the action explicit and in their control.
  3. With so many people wanting to clear their browsing history, we assumed they didn’t know that Private Browsing existed. No history is saved when using Private Browsing, so we’re adding some contextual hinting about the feature.

These features are included in Nightly and Aurora versions of Firefox. Telemetry is showing a marked decrease in Remove usage, which is great. We hope to see the trend continue into Beta next week.

External URLs: People open a lot of URLs from external applications, like Twitter, into Firefox. This wasn’t totally unexpected, it’s a common pattern on Android, but the degree to which it happened versus opening the browser directly was somewhat unexpected. Close to 50% of the URLs loaded into Firefox are from external applications. Less so in Nightly, Aurora and Beta, but even those channels are almost 30%. We have started looking into ideas for making the process of opening URLs into Firefox a better experience.

Saving Images: An unexpected discovery was how often people save images from web content (web_save_image). We haven’t spent much time considering this one. We think we are doing the “right thing” with the images as far as Android conventions are concerned, but there might be new features waiting to be implemented here as well.

Take a look at the data. What patterns do you see?

Here is the obligatory UI heatmap, also available from the dashboard:

Gregory Szorc: Repository-Centric Development

I was editing a wiki page yesterday and I think I coined a new term which I'd like to see enter the common nomenclature: repository-centric development. The term refers to development/version control workflows that place repositories - not patches - first.

When collaborating on version controlled code with modern tools like Git and Mercurial, you essentially have two choices on how to share version control data: patches or repositories.

Patches have been around since the dawn of version control. Everyone knows how they work: your version control system has a copy of the canonical data and it can export a view of a specific change into what's called a patch. A patch is essentially a diff with extra metadata.

When distributed version control systems came along, they brought with them an alternative to patch-centric development: repository-centric development. You could still exchange patches if you wanted, but distributed version control allowed you to pull changes directly from multiple repositories. You weren't limited to a single master server (that's what the distributed in distributed version control means). You also didn't have to go through an intermediate transport such as email to exchange patches: you communicate directly with a peer repository instance.

Repository-centric development eliminates the middle man required for patch exchange: instead of exchanging derived data, you exchange the actual data, speaking the repository's native language.

One advantage of repository-centric development is it eliminates the problem of patch non-uniformity. Patches come in many different flavors. You have plain diffs. You have diffs with metadata. You have Git style metadata. You have Mercurial style metadata. You can produce patches with various lines of context in the diff. There are different methods for handling binary content. There are different ways to express file adds, removals, and renames. It's all a hot mess. Any system that consumes patches needs to deal with the non-uniformity. Do you think this isn't a problem in the real world? Think again. If you are involved with an open source project that collects patches via email or by uploading patches to a bug tracker, have you ever seen someone accidentally upload a patch in the wrong format? That's patch non-uniformity. New contributors to Firefox do this all the time. I also see it in the Mercurial project. With repository-centric development, patches never enter the picture, so patch non-uniformity is a non-issue. (Don't confuse the superficial formatting of patches with the content, such as an incorrect commit message format.)

Another advantage of repository-centric development is it makes the act of exchanging data easier. Just have two repositories talk to each other. This used to be difficult, but hosting services like GitHub and Bitbucket make this easy. Contrast with patches, which require hooking your version control tool up to wherever those patches are located. The Linux Kernel, like so many other projects, uses email for contributing changes. So now Git, Mercurial, etc all fulfill Zawinski's law. This means your version control tool is talking to your inbox to send and receive code. Firefox development uses Bugzilla to hold patches as attachments. So now your version control tool needs to talk to your issue tracker. (Not the worst idea in the world I will concede.) While, yes, the tools around using email or uploading patches to issue trackers or whatever else you are using to exchange patches exist and can work pretty well, the grim reality is that these tools are all reinventing the wheel of repository exchange and are solving a problem that has already been solved by git push, git fetch, hg pull, hg push, etc. Personally, I would rather hg push to a remote and have tools like issue trackers and mailing lists pull directly from repositories. At least that way they have a direct line into the source of truth and are guaranteed a consistent output format.

Another area where direct exchange is huge is multi-patch commits (branches in Git parlance) or where commit data is fragmented. When pushing patches to email, you need to insert metadata saying which patch comes after which. Then the email import tool needs to reassemble things in the proper order (remember that the typical convention is one email per patch and email can be delivered out of order). Not the most difficult problem in the world to solve. But seriously, it's been solved already by git fetch and hg pull! Things are worse for Bugzilla. There is no bullet-proof way to order patches there. The convention at Mozilla is to add Part N strings to commit messages and have the Bugzilla import tool do a sort (I assume it does that). But what if you have a logical commit series spread across multiple bugs? How do you reassemble everything into a linear series of commits? You don't, sadly. Just today I wanted to apply a somewhat complicated series of patches to the Firefox build system I was asked to review so I could jump into a debugger and see what was going on so I could conduct a more thorough review. There were 4 or 5 patches spread over 3 or 4 bugs. Bugzilla and its patch-centric workflow prevented me from importing the patches. Fortunately, this patch series was pushed to Mozilla's Try server, so I could pull from there. But I haven't always been so fortunate. This limitation means developers have to make sacrifices such as writing fewer, larger patches (this makes code review harder) or involving unrelated parties in the same bug and/or review. In other words, deficient tools are imposing limited workflows. No bueno.

It is a fair criticism to say that not everyone can host a server or that permissions and authorization are hard. Although I think concerns about impact are overblown. If you are a small project, just create a GitHub or Bitbucket account. If you are a larger project, realize that people time is one of your largest expenses and invest in tools like proper and efficient repository hosting (often this can be GitHub) to reduce this waste and keep your developers happier and more efficient.

One of the clearest examples of repository-centric development is GitHub. There are no patches in GitHub. Instead, you git push and git fetch. Want to apply someone else's work? Just add a remote and git fetch! Contrast with first locating patches, hooking up Git to consume them (this part was always confusing to me - do you need to retroactively have them sent to your email inbox so you can import them from there?), and finally actually importing them. Just give me a URL to a repository already. But the benefits of repository-centric development with GitHub don't stop at pushing and pulling. GitHub has built code review functionality into pushes. They call these pull requests. While I have significant issues with GitHub's implementation of pull requests (I need to blog about those some day), I can't deny the utility of the repository-centric workflow and all the benefits around it. Once you switch to GitHub and its repository-centric workflow, you more clearly see how lacking patch-centric development is and quickly lose your desire to go back to the 1990's state-of-the-art methods for software development.

I hope you now know what repository-centric development is and will join me in championing it over patch-based development.

Mozillians reading this will be very happy to learn that work is under way to shift Firefox's development workflow to a more repository-centric world. Stay tuned.

Sean Bolton: Why Do People Join and Stay Part Of a Community (and How to Support Them)

[This post is inspired by notes from a talk by Douglas Atkin (currently at AirBnB) about his work with cults, brands and community.]

We all go through life feeling like we are different. When you find people that are different the same way you are, that’s when you decide to join.

As humans, we each have a unique self narrative: “we tell ourselves a story about who we are, what others are like, how the world works, and therefore how one does (or does not) belong in order to maximize self.” We join a community to become more of ourselves – to exist in a place where we feel we don’t have to self-edit as much to fit in.

A community must have a clear ideology – a set of beliefs about what it stands for – a vision of the world as it should be rather than how it is, that aligns with what we believe. Communities form around certain ways of thinking first, not around products. At Mozilla, this is often called “the web we want” or ‘the web as it should be.’

When joining a community people ask two questions: 1) Are they like me? and 2) Will they like me? The answers to these two fundamental human questions determine whether a person will become and stay part of a community. In designing a community it is important to support potential members in answering these questions – be clear about what you stand for and make people feel welcome. The welcoming portion requires extra work in the beginning to ensure that a new member forms relationships with people in the community. These relationships keep people part of a community. For example, I don’t go to a book club purely for the book, I go for my friends Jake and Michelle. Initially, the idea of a book club attracted me, but as I became friends with Jake and Michelle, that friendship continually motivated me to show up. This is important because as the daily challenges of life show up, social bonds become our places of belonging where we can recharge.

Source: Douglas Atkin, The Glue Project

These social ties must be mixed with doing significant stuff together. In designing how community members participate, a very helpful tool is the community commitment curve. This curve describes how a new member can invest in low-barrier, easy tasks that build commitment momentum, so the member can go on to perform more challenging tasks and take on more responsibility. For example, you would not ask a new member to spend 12 hours setting up a development environment just to make their first contribution. This ask is too much for a new person because they are still trying to figure out ‘are they like me?’ and ‘will they like me?’ In addition, their sense of contribution momentum has not been built – 12 hours is a lot when your previous task took none, but not so much when your previous task took 10.

The community commitment curve is a powerful tool for community builders because it forces you to design the small steps new members can take to get involved and gives structure to how members take on more complex tasks/roles – it takes some of the mystery out! As new members invest small amounts of time, their commitment grows, which encourages them to invest larger amounts of time, continually growing both time and commitment, creating a fulfilling experience for the community and the member. I made a template for you to hack your own community commitment curve.

Social ties, combined with a well-designed commitment curve and a clearly defined purpose, are a powerful combination in supporting a community.

[Post featured on Mozilla's Community blog.]

Marco BonardoUnified Complete coming to Firefox 34

The awesomebar in Firefox Desktop has so far been driven by two autocomplete searches implemented by the Places component:

  1. history: managing switch-to-tab, adaptive and browsing history, bookmarks, keywords and tags
  2. urlinline: managing autoFill results

Moving on, we plan to improve the awesomebar contents, making them even more awesome and personal, but the current architecture complicates things.

Some of the possible improvements suggested include:

  • Better identify searches among the results
  • Allow the user to easily find their favorite search engine
  • Always show the action performed by Enter/Go
  • Separate searches from history
  • Improve the styling to make each part more distinguishable

When working on these changes we don't want to spend time fighting with outdated architecture choices:

  • Having concurrent searches makes it hard to insert new entries and ensure their order is appropriate
  • Having the first popup entry disagree with the autoFill entry makes it hard to guess the expected action
  • There's quite some duplicate code and logic
  • All of this code predates nice stuff like Sqlite.jsm, Task.jsm, Preferences.js
  • The existing code is scary to work with and sometimes obscure

For these reasons, we decided to merge the existing components into a single new component called UnifiedComplete (toolkit/components/places/UnifiedComplete.js) that will take care of both autoFill and popup results. While the component has been rewritten from scratch, we were able to re-use most of the old logic, which was well tested and appreciated. We were also able to retain all of the unit tests, which have also been rewritten to use a single harness (you can find them in toolkit/components/places/tests/unifiedcomplete/).

So, the real question is: what differences should I expect from this change?

  • The autoFill result will now always agree with the first popup entry. Note that the autoFill behavior itself didn't change: we still autoFill up to the first '/'. This means a new popup entry is inserted as the top match.
  • All initialization is now asynchronous, so the UI should no longer lag on the first awesomebar search
  • The searches are serialized differently, so responsiveness timings may differ, and usually improve
  • Installed search engines will be suggested along with other matches to improve their discoverability

The component is currently disabled, but I will shortly push a patch to flip the pref that enables it. The preference that controls whether the new or old components are used is browser.urlbar.unifiedcomplete; you can already set it to true in your current Nightly build to enable it.
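If you prefer flipping the pref from a user.js file rather than through about:config, the equivalent line would look like this (user.js lives in your Firefox profile directory; this just mirrors the preference named above):

```
// user.js in your Firefox profile directory:
// enable the new unified autocomplete component
user_pref("browser.urlbar.unifiedcomplete", true);
```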

This also means the old components will shortly be deprecated and will no longer be maintained. That won't happen until we are completely satisfied with the new component, but you should start looking at the new one if you use autocomplete in your project. Regardless, we'll add console warnings at least 2 versions before complete removal.

If you notice anything wrong with the new awesomebar behavior please file a bug in Toolkit/Places and make it block Bug UnifiedComplete so we are notified of it and can improve the handling before we reach the first Release.

Christian Heilmann[Video+Transcript]: The web is dead? – My talk at TEDx Thessaloniki

Today the good folks at TEDx Thessaloniki released the recording of my talk “The web is dead“.

Christian Heilmann at TEDx

I’ve given you the slides and notes earlier here on this blog but here’s another recap of what I talked about:

  • The excitement for the web is waning – instead apps are the cool thing
  • At first glance, the reason for this is that apps deliver much better on a mobile form factor and have better, high-fidelity interaction patterns
  • If you scratch the surface of this message though you find a few disturbing points:
    • Everything in apps is a number game – the marketplaces with the most apps win, the apps with the most users are the ones that will get more users as those are the most promoted ones
    • The form factor of an app was not really a technical necessity. Instead it makes software and services a consumable. The full control of the interface and the content of the app lies with the app provider, not the users. On the web you can change the display of content to your needs, you can even translate and have content spoken out for you. In apps, you get what the provider allows you to get
    • The web allowed anyone to be a creator. The hurdle to move from reader to writer was incredibly low. In the apps world, it becomes much harder to become a creator of functionality.
    • Content creation is easy in apps – if you create the content the app maker wants you to. The question is who owns that content, who is allowed to use it, and whether you have the right to stop app providers from analysing and re-using your content in ways you don’t want them to. Likes and upvotes/downvotes aren’t really content creation. They are easy to do and don’t mean much, but they make sure the app creator has traffic and interaction on their app – something that VCs like to see.
    • Apps are just another form factor to control software for the benefit of the publisher. Much like movies on DVDs are great, because when you scratch them you need to buy a new one, publishers can now make software services become outdated, broken and change, forcing you to buy a new one instead of enjoying new features cropping up automatically.
    • Apps lack the data interoperability of the web. If you want your app to succeed, you need to keep the users locked into yours and not go off and look at others. That way most apps are created to be highly addictive with constant stimulation and calls to action to do more with them. In essence, the business models of apps these days force developers to create needy, bullying tamagotchi and call them innovation


You might have realized I’m not from here and I’m sorry for the translators because I speak fast, but it’s a good idea to know that life is cruel and you have to do a job.

Myself, I’ve been a web developer for 17 years like something like that now. I’ve dedicated my life to the web. I used to be a radio journalist. I used to be a travel agent, but that’s all boring because you saw people today who save people’s lives and change the life of children and stand there with their stick and do all kind of cool stuff.

I’m just this geek in the corner that just wants to catch this camera… how cool is that thing? They also gave me a laser pointer but it didn’t give me kitten. This is just really annoying.

I want to talk today about my wasted life because you heard it this morning already: we have a mobile revolution and you see it in press all over: the web is dead.

You don’t see excitement about it anymore. You don’t see people like, “Go to www awesome whatever .com” Nobody talks like this anymore. They shouldn’t have talked like that in the past as well but nobody does it anymore.

The web is not the cool thing. It’s not: “I did e-commerce yesterday.” Nobody does that, instead it is, “Oh, I’ll put things on my phone.”

This is what killed the web. The mobile form factor just doesn’t lend itself to the web. It’s not fun to type in TEDxThessaloniki.com with your two thumbs, forgetting how many S’s there are in Thessaloniki, and then going to some strange website.

We text each other all the time, but typing in a URL feels icky, it feels not natural for a phone, so we needed to do something different. That’s why we came up with QR codes – robot barf, as I keep calling them. That didn’t work either. It’s beautiful, isn’t it? You go there with your phone and you start scanning it and then, two and a half minutes later, with only 30% of your battery left, it goes to some URL.

Not a single mobile operating system came out with a QR reader out of the box. It’s worrying. So I realized there had to be a change, and the change happened. The innovation, the new beginning, the new dawn of the internet was the app.

“There’s an app for that,” was the great talk.

“There’s an app for everything. They’re beautiful. Apps are great.” I can explain it to my … well, not my mom but other people’s moms: “There’s an icon, you click on it, you open it and this is the thing and you use that now. There’s no address bar, there’s nothing to understand about domains, HTTP, cookies, all kind of things. You open it, you play with it and they’re beautiful.”

The interaction between the hardware and software with apps is gorgeous. On iOS, it’s beautiful what you can do and you cannot do any of that on the web with iOS because Apple … Well, because you can’t.

Apps are focused. That’s a really good thing about them, the problem with the web was that we’re like little rabbits. We’re running around like kittens with the laser pointer and we’re like, “Oh, that’s 20 tabs open.” Your friend is uploading something and there’s downloading something in the background and it’s multi-tasking.

With apps, you do one thing and one thing well. That is good, because the web interfaces that we built over the last years were just: this much content, that much ads, and blinking stuff. People don’t want that anymore. They want to do something with an app; that’s what apps are focused on, and that makes sense.

In order not to be unemployed and make my father not proud because he said, “The computer thing will never work out,” anyways, I thought it’s a good plan to start my own app idea. I’m going to pitch that to you tonight. I know there’s this few VC people in the audience. I’m completely buy-able. A few million dollars, I’m okay with that.

When I did my research, scientific research by scientists, I found out that most apps are used in leisure time. They’re not used during their work time. You will be hard pushed to find a boss that says like, “Wilkins, by lunch break you have to have a new extra level in Candy Crush or you’re fired.” It’s not going to happen. Most companies, I don’t know – some startup maybe.

We use them in our free time, and being a public speaker and traveling all the time, I find that people use apps the most where you are completely focused and alone, in other words: public toilets. This goes so far that with every application that came out, the time spent in the facilities became longer and longer. With Snake it was like 12 minutes, with Angry Birds it was about 14, but Candy Crush and Flappy Bird…

It happens. You sit there and you hear people inside getting a new high score and like, “Yeah, look what I did?” You’re like, “Yeah, look what I want to do.” That’s when I thought, “Why leave that to chance? Why is there no app that actually makes going to the public facilities, not a boring biological thing but makes it a social thing?”

I’m proposing the app called “What’s Out.” What’s Out is a local application, much like FourSquare and others, that you can put to good use while you’re actually sitting down, doing things that you know how to do anyway without having to think about them.

You can check in, you can become the mayor, you can send reviews, you can actually check in with your friends and earn badges like “three stalls in a row.” All these things that make social apps social apps – and why not? You can actually link the photo that you took of the food on Instagram to your check-in on What’s Out, and that gets shared on the internet.

You can also pay for the full version and it doesn’t get shared on your Facebook account.

You might think I’m a genius, you might think that I have this great idea that nobody had before, but the business model is already in use and has been tested for years successfully in the canine market. The thing is dogs don’t have a thumb so they didn’t tweet about it. They also can’t write so they didn’t put a patent on it so I can do that.

Seriously now though, this is what I hate about apps. They are a hype, they’re no innovation, they’re nothing new. We had software that was doing one thing and one thing well before, we call it Word and Outlook. We called it things that we had to install and then do something with it.

The problem with apps is that the business model is all about hype. WhatsApp was not bought because it’s great software. WhatsApp was bought because millions of people use it. It’s because it actually allowed people to send text messages without paying for it.

Everybody now sees this as the new thing. “We got to have an app, you got to have an app.” For an app to be successful, it has to play a massive numbers game. An app needs millions of users continuously. Twitter has to change their business model every few months just to show more and more and more and more numbers.

It doesn’t really matter what the thing does. What the app does is irrelevant as long as it gets enough people addicted to using it continuously. It’s all about the eyeballs and you put content in these apps that advertisers can use that people can sell to other people. You are becoming the product inside a product.

That even goes into marketplaces. I work on Firefox OS and we have a marketplace for the emerging markets where people can build their first app without having to spend money or have a good computer or download a massive SDK, but people every time when I go to them like, “How many apps do you have in the marketplace?” “I don’t know. The HTML5 apps, they could be anything.”

“If it’s not a few million, the marketplace isn’t good.”

I go to a baker if they have three good things. I don’t need them to have 500 different rolls, but the marketplaces have to be full. We just go for big numbers. That to me is the problem that we have with apps. I’m not questioning that the mobile web is the coming thing and the current thing.

The desktop web is dying, it’s in decline, but apps to me are just a marketing model at the moment. They’re bringing the scratchability of CDs, the wearing out of clothes, the going out of fashion of shoes into software. It’s becoming a consumer product that can be outdated and can look boring after a while.

That to me is not innovation. This is not bringing us further in the evolution of technology, because I’ve seen the evolution. I came from radio to the internet. All of a sudden, my voice was heard worldwide and not just in my hometown, without me having to do anything extra.

Will you download a Christian Heilmann app? Probably not. Might you put my name into Google and find millions of things I’ve put out there over the last 17 years, some of which you’ll like? Probably, and you can do that as well.

For apps to be successful, they have to lock you in.

The interoperability of the internet is what made it so exciting – the things that Tim showed: I can use this thing and then I can do that, and then I use that. Then I click from Wikipedia to YouTube and from YouTube to this, and I translate it if I need to because it’s my language. None of that works in apps unless the app offers that functionality for a certain upgrade of $12.59 or something like that.

To be successful, apps have to be greedy. They have to keep you in themselves, and they cannot talk to other apps unless those are massively successful other apps. That to me doesn’t allow me as a publisher to come up with something new. It just means that the big players are getting bigger all the time, the few winners are out there, the others just go away, and a lot of money is wasted in the whole process.

In essence, apps are like Tamagotchi. Anybody old enough to remember Tamagotchi? These were little toys for kids who couldn’t afford pets – like in Japan, where pets are impossible. These little things were like, “Feed me, play with me, get me a playmate, do this for me, do that for me.” After a few years, people were like, “Whatever,” and they ended up rusting somewhere in a corner, collecting dust, and nobody cares about them any longer.

Imagine the annoyance that people had with Tamagotchi, with over a hundred apps on your phone. It happens with Android apps, for example: you leave your Android phone for a while and come back to like 600 updates – “Oh please, I need a new update because I want to show you more ads.” I don’t even have insight into what the updates do to the functionality of the app; I just have to download another 12 MB.

If I’m on a contract where I have to pay per megabyte, that’s not fun. How is that innovative? How is that helping me? It’s helping the publisher. We’re making the problem of selling software our problem and we do it just by saying it’s a nicer interface.

Apps are great. Focus on one thing, one thing well, great. The web that we know right now is too complex. We can learn a lot from that one focus thing, but we shouldn’t let ourselves be locked into one environment. You upload pictures to Instagram now, have you read the terms and conditions?

Do you know who owns these pictures? Do you know if this picture could show up next to something that you don’t agree with, like a political party, because they have the right to show it? Nobody cares about that. Nobody reads it up.

What Tim showed, the image with the globe with the pictures, that was all from Flickr. Flickr – I was part of that group – licensed everything with Creative Commons. You knew that data was yours. There’s a button for downloading all your pictures. If you don’t want it anymore: here are your pictures, thank you, we’re gone.

With other services, you get everything for free with ads next to it and your pictures might end up on like free singles in your area without you having to do anything with it. You don’t have insight. You don’t own the interface. You don’t own the software.

All in all, apps to me are a step back to the time that I replaced with the internet. A time when software came in a consumable format without me knowing what’s going on. In a browser, I can highlight part of the text, copy it into an email and send it to you. I can translate it. I can be blind and listen to a website. I can change things around. I can delete parts of it if there’s too much content. I can use an ad blocker if I don’t like ads.

On apps, I don’t have any of that. I’m just a slave to the machine, and I do it because everybody else does it. I’ve got 36,000 followers on Twitter, I don’t know why. I’m just putting things out there, but you see, for example, Beyonce has 13.3 million followers on Twitter and she did six updates.

Twitter and other apps give you the idea that you have a social life that you don’t have. We stop having experiences and talk about experiences instead. You go to concerts and you’ve got a guy with an iPad in front of you filming the band: “That’s going to be great sound, and thank you for being in my face. I wanted to see the band, that’s what I came here for.” Your virtual life is doing well, right? Everybody loves you there. You don’t have to talk to real people – that would be boring. Let’s not go back in time. Let’s not go back to where software was there for us to just consume and take in.

I would have loved Word to have more functionality in 1995. I couldn’t get it because there weren’t even add-ons. I couldn’t write an add-on. With the web, I can teach any of you in 20 minutes how to write your first website. An HTML page, an HTML5 app – give me an hour and you’ll learn it.

The technologies are decentralized. They’re open. They’re easy to learn and they’re worldwide. With apps, we go back to just one world that has it. What’s even worse is that we mix software with hardware again. “Oh, you want that cool new game. You’re on Android? No, you’ve got to wait seven months. You’ve got to have an iPhone. Wait, do you have the old iPhone? No, you’ve got to buy the new one.”

How is that innovation? How is that taking it further? Software and technology are there to enrich our lives, to make them more magical, to be entertaining, to be beautiful. Right now, the way we build apps, the economic model, means that you put your life into apps and they make money with it. Something has gone very, very wrong there. I don’t think it’s innovation, I think it’s just dirty business and making money.

I challenge you all to go out and not upload another picture into an app or not type something into another closed environment. Find a way to put something on the web. This could be a blog software. This could be a comment on a newspaper.

Everything you put on that decentralized, beautiful, linked worldwide network of computers, and television sets, and mobile phones, and wearables, and Commodore 64s that people put their own things on – anything you put there is a little sign, and a little sign can become a ripple, and if more people like it, it becomes a wave. I’m looking forward to surfing the waves that you all generate. Thanks very much.

Monica ChewDownload files more safely with Firefox 31

Did you know that the estimated cost of malware is hundreds of billions of dollars per year? Even without data loss or identity theft, the time and annoyance spent dealing with infected machines is a significant cost.

Firefox 31 offers improved malware detection. Firefox has integrated Google’s Safe Browsing API for detecting phishing and malware sites since Firefox 2. In 2012 Google expanded their malware detection to include downloaded files and made it available to other browsers. I am happy to report that improved malware detection has landed in Firefox 31, and will have expanded coverage in Firefox 32.

In preliminary testing, this feature cuts the amount of undetected malware by half. That’s a significant user benefit.

What happens when you download malware? Firefox checks URLs associated with the download against a local Safe Browsing blocklist. If the binary is signed, Firefox checks the verified signature against a local allowlist of known good publishers. If no match is found, Firefox 32 and later queries the Safe Browsing service with download metadata (NB: this happens only on Windows, because the signature verification APIs needed to suppress remote lookups are only available on Windows). If malware is detected, the Download Manager will block access to the downloaded file and remove it from disk, displaying an error in the Downloads Panel below.
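That decision flow can be sketched as a small piece of logic. This is purely an illustration of the description above, not Firefox's actual code; all the names (Verdict, check_download, and the flags) are made up for the sketch:

```rust
// Illustrative sketch of the download-check flow described above.
// The types and names here are hypothetical, not Firefox's internal API.
#[derive(Debug, PartialEq)]
enum Verdict {
    Blocked,          // URL matched the local Safe Browsing blocklist
    Allowed,          // signed by a publisher on the local allowlist
    NeedsRemoteCheck, // no local match: query the Safe Browsing service
}

fn check_download(
    url_on_blocklist: bool,
    signed_by_known_publisher: Option<bool>, // None = unsigned binary
) -> Verdict {
    // 1. Check URLs associated with the download against the local blocklist.
    if url_on_blocklist {
        return Verdict::Blocked;
    }
    // 2. If the binary is signed, check the verified signature against
    //    the local allowlist of known good publishers.
    if signed_by_known_publisher == Some(true) {
        return Verdict::Allowed;
    }
    // 3. Otherwise fall through to the remote metadata lookup
    //    (Firefox 32+, Windows only per the post).
    Verdict::NeedsRemoteCheck
}
```

Note how the remote lookup is the fallback, not the first step: both local checks run first, which is what lets signature verification suppress remote lookups entirely for known publishers.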

How can I turn this feature off? This feature respects the existing Safe Browsing preference for malware detection, so if you’ve already turned that off, there’s nothing further to do. Below is a screenshot of the new, beautiful in-content preferences (Preferences > Security) with all Safe Browsing integration turned off. I strongly recommend against turning off malware detection, but if you decide to do so, keep in mind that phishing detection also relies on Safe Browsing.

Many thanks to Gian-Carlo Pascutto and Paolo Amadini for reviews, and the Google Safe Browsing team for helping keep Firefox users safe and secure!

Erik VoldMozilla University

When I was young I only cared about math, math and math, but saw no support, no interesting future, no jobs (that I cared for), nor any respect for the field. In my last year of high school I started thinking about my future, so I sought out advice and suggestions from the adults that I respected. The only piece of advice that I cared for was from my father: he simply suggested that I start with statistics, because there are good jobs available for a person with those skills, and if I didn’t like it I would find something else. I had taken an intro class in statistics and it was pretty easy, so I decided to give it a try.

Statistics was easy, and boring, so I tried computer science, because I had been making web sites for people on the side, off and on, since I was 16, and it was interesting, especially since the best application of statistics, to my mind, is machine learning. I enjoyed the comp sci classes most of all, and it took me 5 years to get my bachelor’s degree in statistics and computer science. It was a great experience for me.

Slightly before I graduated I started working full time as a web developer. After a couple of years I started tinkering with creating add-ons, because I was spending 8+ hours a day using Firefox and I figured I could make it suit my needs a little more, and maybe others would enjoy my hacks too, so I started making userscripts, Ubiquity commands, jetpacks, and add-ons.

It’s been 5 years now since I started hacking on projects in the Mozilla community, and these last five years have been just as valuable to me as the 5 years that I spent at UBC. I consider this to be my 2nd degree.

Now when I think about how to grow the community, how to educate the masses, how to reward people for their awesome contributions, I can think of no better way than a free Mozilla University.

We have Webmaker today, and I thought it was interesting at first, so I contributed to the best of my ability for the first 2 years, but I see some fundamental issues with it. For instance, how do we measure the success of Webmaker? How do we know that we’ve affected people? How do we know whether or not these people have decided to continue their education? If they decide to continue their Webmaker education, how do we help them? Finally, do we respect the skills we teach if we do not provide credentials?

I, for one, would like to see Open Badges and Webmaker become Mozilla University, a free, open source, peer-to-peer, distributed, and widely respected place to learn.

I feel that one of the most important parts of my job at Mozilla is to teach, but how many of us are really doing this? Mozilla University could also be a way to measure our progress.

Nick CameronLibHoare - pre- and postconditions in Rust

I wrote a small macro library for writing pre- and postconditions (design-by-contract style) in Rust. It is called LibHoare (named after Hoare logic and, in turn, Tony Hoare) and is here (along with installation instructions). It should be easy to use in your Rust programs, especially if you use Cargo. If it isn't, please let me know by filing issues on GitHub.

The syntax is straightforward: you add `#[precond="predicate"]` annotations before a function, where `predicate` is any Rust expression that will evaluate to a bool. You can use any variables that would be in scope where the function is defined, as well as any arguments to the function. Preconditions are checked dynamically before a function is executed, on every call to that function.

You can also write `#[postcond="predicate"]`, which is checked on leaving a function, and `#[invariant="predicate"]`, which is checked before and after. You can write any combination of annotations too. In postconditions you can use the special variable `result` (soon to be renamed to `return`) to access the value returned by the function.

There are also `debug_*` versions of each annotation which are not checked in --ndebug builds.

The biggest limitation at the moment is that you can only write conditions on functions, not methods (even static ones). This is due to a restriction on where any annotation can be placed in the Rust compiler. That should be resolved at some point and then LibHoare should be pretty easy to update.

If you have ideas for improvement, please let me know! Contributions are very welcome.

# Implementation

The implementation of these syntax extensions is fairly simple. Where the old function used to be, we create a new function with the same signature and an empty body. Then we declare the old function inside the new function and call it with all the arguments (generating the list of arguments is the only interesting bit here because arguments in Rust can be arbitrary patterns). We then return the result of that function call as the result of the outer function. Preconditions are just an `assert!` inserted before calling the inner function and postconditions are an `assert!` inserted after the function call and before returning.
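As a rough sketch of that transformation (hand-written for illustration; the names are mine, not the macro's actual generated code), a function annotated with `#[precond="x > 0"]` and `#[postcond="result > 1"]` expands to something shaped like this:

```rust
// Hand-expanded sketch of what LibHoare conceptually generates for:
//     #[precond="x > 0"]
//     #[postcond="result > 1"]
//     fn double_positive(x: i32) -> i32 { x * 2 }
// Names and details are illustrative, not the macro's real output.
fn double_positive(x: i32) -> i32 {
    // The original body is moved into an inner function with the
    // same signature.
    fn inner(x: i32) -> i32 {
        x * 2
    }
    // Precondition: an assert! inserted before calling the inner
    // function, checked on every call.
    assert!(x > 0, "precondition violated: x > 0");
    let result = inner(x);
    // Postcondition: an assert! inserted after the call and before
    // returning; `result` is the special variable available in
    // postconditions.
    assert!(result > 1, "postcondition violated: result > 1");
    result
}
```

Calling `double_positive(3)` returns 6 after both checks pass, while calling it with a non-positive argument panics at the precondition assert before the inner function ever runs.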

Andrew OverholtWe held a Mozilla “bootcamp”. You won’t believe how it went!

For a while now a number of Mozillians have been discussing the need for some sort of technical training on Gecko and other Mozilla codebases. A few months ago, Vlad and I and a few others came up with a plan to try out a “bootcamp”-like event. We initially thought we’d have non-core developers learn from more senior developers for 4 days and had a few goals:

  • teach people not developing Mozilla code daily about the development process
  • expose Mozillians to areas with which they’re not familiar
  • foster shared ownership of areas of code and tools
  • teach people where to look in the code when they encounter a bug and to more accurately file a bug (“teach someone how to fish”)

While working towards this we realized that there isn’t as much shared ownership as there could be within Mozilla codebases so we focused on 2 engineering teams teaching other engineers. The JavaScript and Graphics teams agreed to be mentors and we solicited participants from a few paid Mozillians to try this out. We intentionally limited the audience and hand-picked them for this first “beta” since we had no idea how it would go.

The event took place over 4 days in Toronto in early June. We ended up with 5 or 6 mentors (the Graphics team having a strong employee presence in Toronto helped with pulling in experts here and there) and 9 attendees from a variety of engineering teams (Firefox OS, Desktop, and Platform).

The week’s schedule was fairly loose to accommodate questions and make it more of a conversational atmosphere. We planned sessions in an order to give a high level overview followed by deeper dives. We also made sessions about complementary Gecko components happen in a logical order (ex. layout then graphics). You can see details about the schedule we settled upon here: https://etherpad.mozilla.org/bootcamp1plans.

We collaboratively took notes and recorded everything on video. We’re still in the process of creating usable short videos out of the raw feeds we recorded. Text notes were captured on this etherpad which had some real-time clarifications made by people not physically present (Ms2ger and others) which was great.

The week taught us a few things, some obvious, some not so obvious:

  • people really want time for learning. This was noted more than once and positive comments I received made me realize it could have been held in the rain and people would have been happy
  • having a few days set aside for professional development was very much appreciated so paid Mozillians incorporating this into their goals should be encouraged
  • people really want the opportunity to learn from and ask questions of more seasoned Mozilla hackers
  • hosting this in a MozSpace ensured reliable facilities, flexibility in terms of space, and the availability of others to give ad hoc talks and answer questions when necessary. It also allowed others who weren’t official attendees to listen in for a session or two. Having it in the office also let us use the existing video recording setup and let us lean on the ever-amazing Jonathan Lin for audio and video help. I think you could do this outside a MozSpace but you’d need to plan a bit more for A/V and wifi, etc.
  • background noise (HVAC, server fans, etc.) is very frustrating for conversations and audio recording (but we already knew this)
  • this type of event is unsuitable for brand new {employees|contributors} since it’s way too much information. It would be more applicable after someone’s been involved for a while (6 months, 1 year?).

In terms of lessons for the future, a few things come to mind:

  • interactive exercises were very well received (thanks, kats!) and cemented people’s learning as expected
  • we should perhaps define homework to be done in advance and strongly encourage completion of it; videos of previous talks may be good material
  • scheduling around 2 months in advance seemed to be best to balance “I have no idea what I’ll be doing then” and “I’m already busy that week”
  • keeping the ratio of attendees to “instructors” to around 2 or 3 to 1 worked well for interactivity and ensuring the right people were present who could answer questions
  • although very difficult, attempting to schedule around major deadlines is useful (this week didn’t work for a few of the Firefox OS teams)
  • having people wear lapel microphones instead of a hand-held one makes for much better and more natural audio
  • building a schedule, mentors, and attendee list based on common topics of interest would be an interesting experiment instead of the somewhat mixed bag of topics we had this time
  • using whiteboards and live coding/demos instead of “slides” worked very well

Vlad and I think we should do this again. He proposed chaining organizers so each organizer sets one up and helps the next person do it. Are you interested in being the next organizer?

I’m very interested in hearing other people’s thoughts about this so if you have any, leave a comment or email me or find me on IRC or send me a postcard c/o the Toronto office (that would be awesome).

Swarnava Sengupta: Flashing Flame Devices with Firefox OS

Mozilla Release Management Team: Auto-comment on the Release Management flags

As implemented in bug 853108 by the bmo team, setting the tracking flags will now automatically update the comment field with a template. The goal is to reduce back and forth in Bugzilla on bug tracking. We also hope this will improve our response time.

For example, for the tracking requests (tracking-firefoxNN, tracking-firefox-esrNN or blocking-b2g), the user will see the text added into the Bugzilla comment field:

[Tracking Requested - why for this release]:

With this change, we hope to simplify the decision process for the release team.

For the relnotes-* flags:

Release Note Request (optional, but appreciated)
[Why is this notable]:
[Suggested wording]:
[Links (documentation, blog post, etc)]:

This change aims to simplify the process of writing release notes. In some cases, it can be hard for a release manager to translate a bug into a new feature description.

Flags on which this option is enabled are:

  • relnote-firefox
  • relnote-b2g
  • tracking-firefoxNN
  • tracking-firefox-esrNN
  • blocking-b2g

Finally, we reported bug 1041964 to discuss a potential auto-focus on the comment area.

Doug Belshaw: Making the web simple, but not simplistic

A couple of months ago, an experimental feature Google introduced in the ‘Canary’ build of its Chrome browser prompted a flurry of posts in the tech press. The change was to go one step further than displaying an ‘origin chip’ and do away with the URL entirely:

Hidden URL

I have to admit that when I first heard of this I was horrified – I assumed it was being done for the worst of reasons (i.e. driving more traffic to Google search). However, on reflection, I think it’s a nice example of progressive complexity. Clicking on the root name of the site reveals the URL. Otherwise, typing in the omnibox allows you to search the web:

Google Chrome experiment

Progressive complexity is something we should aspire to when designing tools for a wide range of users. It’s demonstrated well by my former Mozilla colleague Rob Hawkes in his work on ViziCities:

Progressive complexity (slides: http://slidesha.re/1kbYyYU)

Using this approach means that those who are used to manipulating URLs are catered for, but the process is simplified for novice users.

Something we forget is that URLs often depend on the file structures of web servers: http://server.com/directory/sub-directory/file.htm. There’s no particular reason why this should be the case.

Pages on OS X saving to iCloud

Google Drive interface

It’s worth noting that both Apple and Google here don’t presuppose you will create folders to organise your documents and digital artefacts. You can do so, or add tags, but it’s just as easy to dump them all in one place and search efficiently. It’s human-centred design.

My guiding principle here from a web literacy point of view is whether simplification and progressive complexity is communicated to users. Is it clear that there’s more to this than what’s presented on the surface? With the examples I’ve given in this post, I feel that they are.

Questions? Comments? I’m @dajbelshaw or you can email me at doug@mozillafoundation.org.

Pete Moore: Weekly review 2014-07-23


This week I rolled out the l10n changes, after a few more iterations of tweaks / improvements / nice-to-haves. I am coordinating with Hal about when we can cut over from legacy (as this will need his involvement), which depends a little on his availability. He has already proactively contacted me to let me know he is quite tied up at the moment, so it is unlikely we’ll be able to engage in roll-out work together for the next couple of weeks, until the hg issues have stabilised and he has completed some work with fubar/bkero and the interns.

I’ve had discussions with Aki about various vcs sync matters (both technical and business relationship-wise) and am confident I am in a good position to lead this going forward.

I also rolled out changes to the email notifications, which unfortunately I had to roll back.

Now that l10n is done (apart from the cutover), the last two parts are gecko.git and gecko-projects, which I anticipate being relatively trouble-free.

After that comes git-hg and git-git support (currently new vcs sync only supports hg-git).


Looking forward to getting involved with the release process (https://bugzilla.mozilla.org/show_bug.cgi?id=1042128).

Dave Hunt: A new home for the gaiatest documentation

The gaiatest python package provides a test framework and runner for testing Gaia (the user interface for Firefox OS). It also provides a handy command line tool and can be used as a dependency from other packages that need to interact with Firefox OS.

Documentation for this package has now been moved to gaiatest.readthedocs.org, which is generated directly from the source code whenever there’s an update. In order to make this more useful we will continue to add documentation to the Python source code. If you’re interested in helping us out please get in touch by leaving a comment, or joining #ateam on irc.mozilla.org and letting us know.

Francesca Ciceri: Adventures in Mozillaland #3

Yet another update from my internship at Mozilla, as part of the OPW.

A brief one, this time, sorry.

Bugs, Bugs, Bugs, Bacon and Bugs

I've continued with my triaging/verifying work and I feel now pretty confident when working on a bug.
On the other hand, I think I've learned more or less what was to be learned here, so I must think (and ask my mentor) where to go from now on.
Maybe focus on a specific Component?
Or steadily work on a specific channel for both triaging/poking and verifying?
Or try my hand at patches?
Not sure, yet.

Also, I'd like to point out that, while working on bug triaging, the developer's answers on the bug report are really important.
Comments like this help me as a triager to learn something new, and be a better triager for that component.
I do realize that developers cannot always take the time to put basic information in comments on how to better debug their component/product, but trust me: it will make you happy in the long run.
A wiki page with basic information on how to debug problems in your component is also a good idea, as long as that page is easy to find ;).

So, big shout-out for MattN for a very useful comment!


After much delaying, we finally managed to pick a date for the Bug Triage Workshop: it will be on July 25th. The workshop will be an online session focused on what triaging is, why it is important, how to reproduce bugs, and what information to ask the reporter for to make a bug report as complete and useful as possible.
We will do it in two different time slots, to accommodate various timezones, and it will be held on #testday on irc.mozilla.org.
Take a look at the official announcement and subscribe on the event's etherpad!

See you on Friday! :)

Gervase Markham: Fraudulent Passport Price List

This is a list (URL acquired from spam) of prices for fraudulent (but perhaps “genuine” in terms of the materials used, I don’t know) passports, driving licenses and ID cards. It is a fascinating insight into the relative security of the identification systems of a number of countries. Of course, the prices may also factor in the economic value of the passport, but it’s interesting that a Canadian passport is more expensive than a US one. That probably reflects difficulty of obtaining the passport rather than the greater desirability of Canada over the US. (Sorry, Canadians, I know you’d disagree! Still, you can be happy at the competence and lack of corruption in your passport service.)

One interesting thing to note is that one of the joint lowest-price countries, Latvia (€900), is a member of the EU. A Latvian passport allows you to live and work in any EU country, even Germany, which has the most expensive passports (€5200). The right to live anywhere in the EU – yours for only €900…

It is also interesting to sort by passport price and see whether the other prices follow the same curve. A discrepancy may indicate particularly weak or strong security. So Russian ID cards are cheaper than one might expect, whereas Belgian ones are more expensive. Austrian and Belgian driver’s licenses also seem to be particularly hard to forge, but the prize there goes to the UK, which has the top-priced spot (€2000). I wonder if that’s related to the fact that the UK doesn’t have ID cards, so the driver’s license often functions as one?

Here is the data in spreadsheet form (ODS), so you can sort and analyse, and just in case the original page disappears…


Manish Goregaokar: 200, and more!

After my last post on my running GitHub streak, I've pretty much continued to contribute to the same projects, so I didn't see much of a point of posting about it again — the fun part about these posts is talking about all the new projects I've started or joined. However, this time an arbitrary base-ten milestone comes rather close to another development on the GitHub side which is way more awesome than a streak; hence the post.

Firstly, a screenshot:

I wish there was more dark green

Now, let's have a look at the commit that made the streak reach 200. That's right, it's a merge commit to Servo — something which is created for the collaborator who merges the pull request[1]. Which is a great segue into the second half of this post:

I now have commit/collaborator access to Servo. :D

It happened around a week back. Ms2ger needed a reviewer, Lars mentioned he wanted to get me more involved, I said I didn't mind reviewing, and in a few minutes I was reviewing a pull request for the first time. A while later I had push access.

This doesn't change my own workflow while contributing to Servo, since everyone still goes through pull requests and reviews. But it gives a much greater sense of belonging to a project. Which is saying something, since Mozilla projects already give one a sense of being "part of the team" rather early on, with the ability to attend meetings, take part in decision-making, and whatnot.

I also now get to review others' code, which is a rather interesting exercise. I haven't done much reviewing before. Pull requests to my own repos don't count much since they're not too frequent and if there are small issues I tend to just merge and fix. I do give feedback for patches on Firefox (mostly for the ones I mentor or if asked on IRC), but in this situation I'm not saying that the code is worthy to be merged; I'm just pointing out any issues and/or saying "Looks good to me".

With Servo, code I review and mark as OK is ready for merging. Which is a far bigger responsibility. I make mistakes (and style blunders) in my own code, so marking someone else's code as mistake free is a bit intimidating at first. Yes, everyone makes mistakes and yet we have code being reviewed properly, but right now I'm new to all this, so I'm allowed a little uncertainty ;) Hopefully in a few weeks I'll be able to review code without overthinking things too much.

In other GitHub-ish news, a freshman of my department submitted a very useful pull request to one of my repos. This makes me happy for multiple reasons: I have a special fondness for student programmers who are not from CS (not that I don't like CS students), being one myself. Such students face an extra set of challenges of finding a community, learning the hard stuff without a professor, and juggling their hobby with normal coursework (though to be fair for most CS students their hobby programming rarely intersects with coursework either).

Additionally, the culture of improving tools that you use is one that should be spread, and it's great that at least one of the new students is a part of this culture. Finally, it means that people use my code enough to want to add more features to it :)

[1] I probably won't count this as part of my streak and make more commits later today. Reviewing is hard, but it doesn't exactly take the place of writing actual code so I may not count merge commits as part of my personal commit streak rules.

Julien Vehent: OpSec's public mailing list

Mozilla's Operations Security team (OpSec) protects the networks, systems, services and data that power the Mozilla project. The nature of the job forces us to keep a lot of our activity behind closed doors, but we strive to do as much as possible in the open, with projects like MIG, Mozdef, Cipherscan, OpenVPN-Netfilter, Duo-Unix or Audisp-Json.

Opening up security discussions to the community, and to the public, has been a goal for some time, and today we are making a step forward with the OpSec mailing list at


This mailing list is a public place for discussing general security matters among operational teams, such as public vulnerabilities, security news, best practices discussions and tools. We hope that people from inside and outside of Mozilla will join the discussions, and help us keep Mozilla secure.

So join in, and post some cool stuff!

Mike Shal: Moving Automation Steps in Tree

In bug 978211, we're looking to move the logic for the automation build steps from buildbot into mozilla-central. Essentially, we're going to convert this: Into this:

Rick Eyre: WebVTT Released in Firefox 31

If you haven't seen the release notes, WebVTT has finally been released in Firefox 31. I'm super excited about this as it's the culmination of a lot of my own and countless others' work over the last two years, especially since it had been delayed from releases 29 and 30.

That being said, there are still a few known major bugs with WebVTT in Firefox 31:

  • TextTrackCue enter, exit, and change events do not work yet. I'm working on getting them done now.
  • WebVTT subtitles do not show on audio only elements yet. This will probably be what is tackled after the TextTrackCue events (EDIT: To clarify, I meant audio only video elements).
  • There is no support for any in-band TextTrack WebVTT data yet. If you're a video or audio codec developer who wants in-band WebVTT to work in Firefox, please help out :-).
  • Oh, and there is no UI on the HTML5 video element to control subtitles... not the most convenient, but it's currently being worked on as well.
I do expect the bugs to start rolling in as well and I'm actually kind of looking forward to that as it will help improve WebVTT in Firefox.

Doug Belshaw: A list of all 15 Web Literacy 'maker' badges

Things have to be scheduled when there’s so much to ‘ship’ at an organization like Mozilla. So we’re still a couple of weeks away from a landing page for all of the badges at webmaker.org. This post has a link to all of the Web Literacy badges now available.

Web Literacy Map v1.1

We’ve just finished testing the 15 Web Literacy ‘maker’ badges I mentioned in a previous post. Each badge corresponds to the ‘Make’ part of the resources page for the relevant Web Literacy Map competency. We’re not currently badging ‘Discover’ and ‘Teach’. If this sounds confusing, you can see what I mean by viewing, as an example, the resources page for the Privacy competency.

Below is a list of the Web Literacy badges that you can apply for right now. Note that you might want to follow this guidance if and when you do!




Why not set yourself a challenge? Can you:

  1. Collect one from each strand?
  2. Collect all the badges within a given strand?
  3. Collect ALL THE BADGES?

Comments? Questions? I’m @dajbelshaw or you can email me: doug@mozillafoundation.org

Luis Villa: Slide embedding from Commons

A friend of a friend asked this morning:

I suggested Wikimedia Commons, but it turns out she wanted something like Slideshare’s embedding. So here’s a test of how that works (timely, since soon Wikimanians will be uploading dozens of slide decks!)

This is what happens when you use the default Commons “Use this file on the web -> HTML/BBCode” option on a slide deck pdf:

Wikimedia Legal overview 2014-03-19

Not the worst outcome – clicking gets you to a clickable deck. No controls inline in the embed, though. And importantly nothing to show that it is clickable :/

Compare with the same deck, uploaded to Slideshare:

Some work to be done if we want to encourage people to upload to Commons and share later.

Update: a commenter points me at viewer.js, which conveniently includes a wordpress plugin! The plugin is slightly busted (I had to move some files around to get it to work in my install) but here’s a demo:

Update2: bugs are fixed upstream and in an upcoming 0.5.2 release of the plugin. Hooray!