Christian Heilmann: Web Components and you – dangers to avoid

Legos by C Slack

Web Components are a hot topic now. Creating widgets on the web that are part of the browser’s rendering flow is amazing. So is inheriting from and enhancing existing ones. Don’t like how a SELECT looks or works? Get it and override what you don’t like. With the web increasingly consumed on mobile devices, performance is the main goal. Anything we can do to save on battery consumption and to keep interfaces responsive without being sluggish is a good thing to do.

Web Components are a natural evolution of HTML. HTML is too basic to allow us to create app interfaces. When we defined HTML5 we missed the opportunity to create the semantic widgets that already existed in other UI libraries. Instead of looking at the class names people used in HTML, it might have been more prudent to look at what other RIA environments did. We limited the scope of new elements to what people had already hacked together using JS and the DOM. Instead we should have aimed for parity with richer environments or desktop apps. But hey, hindsight is easy.

What I am more worried about right now is that there is a high chance we could mess up Web Components. It is important for every web developer to speak up now and talk to the people who build browsers. We need to make this happen in a way that lets our end users benefit the most from Web Components. We need to ensure that we focus our excitement on the long-term goal of Web Components, not on how to use them right now, when the platforms they run on aren’t quite ready yet.

What are the chances to mess up? There are a few. From what I gathered at several events and from various talks, I see the following dangers:

  • One browser solutions
  • Dependency on filler libraries
  • Creating inaccessible solutions
  • Hiding complex and inadequate solutions behind an element
  • Repeating the “just another plugin doing $x” mistake

One browser solutions

This should be pretty obvious: things that only work in one browser are only good for that browser. They can only be used when this browser is the only one available in that environment. There is nothing wrong with pursuing this as a tech company. Apple shows that when you control the software and the environment you can create superb products people love. It is, however, a loss for the web as a whole, as we just cannot force people to use a certain browser or environment. This is against the whole concept of the web. Luckily enough, different browsers support Web Components (granted, at various levels of support). We should be diligent about asking for this to go on and go further. We need this, and a great concept like Web Components shouldn’t be reliant on one company supporting it. A lot of other web innovation that was heralded as a great solution for everything went away quickly when only one browser supported it. Shared technology is safer technology. Whilst it is true that more people having a stake in something makes it harder to deliver, it also means more eyeballs to predict issues. Overall, sharing efforts prevents an open technology from becoming a vehicle for a certain product.

Dependency on filler libraries

A few years ago we had a great and – at the same time – terrible idea: let’s fix the problems in browsers with JavaScript. Let’s fix the weirdness of the DOM by creating libraries like jQuery, Prototype, MooTools and others. Let’s fix layout quirks with CSS libraries. Let’s extend the functionality of CSS with preprocessors. Let’s simulate functionality of modern browsers in older browsers with polyfills.

All these aim at a simple goal: gloss over the differences in browsers and allow people to use future technologies right now. This is on the one hand a great concept: it empowers new developers to do things without having to worry about browser issues. It also allows any developer to play with up-and-coming technology before its release date. This means we can learn from developers what they want and need by monitoring how they implement interfaces.

But we seem to forget that these solutions were built to be stop-gaps, and we have become reliant on them. Developers don’t want to go back to a standard interface of DOM interaction once they get used to $(). What people don’t use, browser makers can cross off their already full schedules. That’s why a lot of standards proposals and even basic HTML5 features are missing from browsers. Why put effort into something developers don’t use? We fall into the trap of “this works now, we have this”, which fails to help us once performance becomes an issue. Many jQuery solutions that worked fine on the desktop fail to perform well on mobile devices. Not because of jQuery itself, but because of how we used it.

Which leads me to Web Components solutions like X-Tags, Polymer and Brick. These are great as they make Web Components available right now and across various browsers. Using them gives us a glimpse of how amazing the future for us is. We need to ensure that we don’t become dependent on them. Instead we need to keep our eye on moving on with implementing the core functionality in browsers. Libraries are tools to get things done now. We should allow them to become redundant.

For now, these frameworks are small, nimble and perform well. That can change as all software tends to grow over time. In an environment strapped for resources like a $25 smartphone or embedded systems in a TV set every byte is a prisoner. Any code that is there to support IE8 is nothing but dead weight.

Creating inaccessible solutions

Let’s face facts: the average web developer is more confused about accessibility than excited by it. There are many reasons for this, none of which are worth bringing up here. The fact remains that an inaccessible interface doesn’t help anyone. We tout Flash as being evil because it blocks out people. Yet we build widgets that are not keyboard accessible. We fail to provide proper labeling. We make things too hard to use and expect the steady hand of a brain surgeon as we create tight interaction boundaries. Luckily enough, there is a new excitement about accessibility and Web Components. We have the chance to do something new and do it right this time. This means we should communicate with people of different abilities and experts in the field. Let’s not just convert our jQuery plugins to Web Components verbatim. Let’s start fresh.

Hiding complex and inadequate solutions behind an element

In essence, Web Components allow you to write custom elements that do a lot more than HTML allows you to do now. This is great, as it makes HTML extensible (and not in the weird XHTML2 way). It can also be dangerous, as it is simple to hide a lot of inefficient code in a component, much like any abstraction does. Just because we can make everything into an element now, doesn’t mean we should. What goes into a component should be exceptional code. It should perform exceptionally well and have the least dependencies. Let’s not create lots of great looking components full of great features that under the hood are slow and hard to maintain. Just because you can’t see it doesn’t mean the rules don’t apply.

Repeating the “just another plugin doing $x” mistake

You can create your own carousel using Web Components. That doesn’t mean, though, that you have to. Chances are that someone already built one, and the inheritance model of Web Components allows you to re-use this work. Just take it and tweak it to your personal needs. If you look for jQuery plugins that are image carousels right now, you had better bring some time. There are a lot out there – in various states of support and maintenance. It is simple to write one, but hard to maintain.

Writing a good widget is much harder than it looks. Let’s not create a lot of components because we can. Instead let’s pool our research and findings and build a few that do the job and override features as needed. Core components will have to change over time to cater for different environmental needs. That can only happen when we have a set of them, tested, proven and well architected.

Summary

I am super excited about this and I can see a bright future for the web ahead. This involves all of us and I would love Flex developers to take a look at what we do here and bring their experience in. We need a rich web, and I don’t see creating DOM based widgets to be the solution for that for much longer with the diversity of devices ahead.

Nick Cameron: Rust for C++ programmers - part 1: Hello world

This is the first in a series of blog posts (none written yet) which aim to help experienced C++ programmers learn Rust. Expect updates to be sporadic at best. In this first blog post we'll just get set up and do a few super basic things. Much better resources are the tutorial and reference manual.

First you need to install Rust. You can download a nightly build from http://www.rust-lang.org/install.html (I recommend the nightlies rather than 'stable' versions - the nightlies are stable in that they won't crash too much (no more than the stable versions) and you're going to have to get used to Rust evolving under you sooner or later anyway). Assuming you manage to install things properly, you should then have a `rustc` command available to you. Test it with `rustc -v`.

Now for our first program. Create a file, copy and paste the following into it and save it as `hello.rs` or something equally imaginative.
fn main() {
    println!("Hello world!");
}
Compile this using `rustc hello.rs`, and then run `./hello`. It should display the expected greeting \o/

Two compiler options you should know are `-o ex_name` to specify the name of the executable and `-g` to output debug info; you can then debug as expected using gdb or lldb, etc. Use `-h` to show other options.
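For instance, a minimal session combining those flags (the executable name `greeting` is just an invented example) might look like this:
rustc -g -o greeting hello.rs
./greeting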

OK, back to the code. A few interesting points - we use `fn` to define a function or method. `main()` is the default entry point for our programs (we'll leave program args for later). There are no separate declarations or header files as with C++. `println!` is Rust's equivalent of printf. The `!` means that it is a macro; for now you can just treat it like a regular function. A subset of the standard library is available without needing to be explicitly imported/included (we'll talk about that later). The `println!` macro is included as part of that subset.

Let's change our example a little bit:
fn main() {
    let world = "world";
    println!("Hello {}!", world);
}
`let` is used to introduce a variable; `world` is the variable name and it is a string (technically the type is `&'static str`, but more on that in a later post). We don't need to specify the type; it will be inferred for us.

Using `{}` in the `println!` statement is like using `%s` in printf. In fact, it is a bit more general than that because Rust will try to convert the variable to a string if it is not one already*. You can easily play around with this sort of thing - try multiple strings and using numbers (integer and float literals will work).
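As a quick sketch of that (the extra variable and literals here are invented for illustration), mixing a string, an integer and a float in a single `println!` works as you'd expect:
fn main() {
    let world = "world";
    let number = 42;
    // each `{}` is filled in by the matching argument, converted to a string
    println!("Hello {}! The answer is {} and pi is roughly {}.", world, number, 3.14);
}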

If you like, you can explicitly give the type of `world`:
    let world: &'static str = "world";
In C++ we write `T x` to declare a variable `x` with type `T`. In Rust we write `x: T`, whether in `let` statements or function signatures, etc. Mostly we omit explicit types in `let` statements, but they are required for function arguments. Let's add another function to see it work:
fn foo(_x: &'static str) -> &'static str {
    "world"
}

fn main() {
    println!("Hello {}!", foo("bar"));
}
The function `foo` has a single argument `_x` which is a string literal (we pass it "bar" from `main`). We don't actually use that argument in `foo`. Usually, Rust will warn us about this. By prefixing the argument name with `_` we avoid these warnings. In fact, we don't need to name the argument at all, we could just use `_`.
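As a minimal sketch of that last point, here is the same function with a fully anonymous argument:
fn foo(_: &'static str) -> &'static str {
    // `_` says we will never use the argument, so no unused-argument warning is emitted
    "world"
}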

The return type for a function is given after `->`. If the function doesn't return anything (a void function in C++), we don't need to give a return type at all (as in `main`). If you want to be super-explicit, you can write `-> ()`, `()` is the void type in Rust. `foo` returns a string literal.
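For example, here is the fully explicit form of a function that returns nothing (the function name is invented for illustration):
fn say_hello() -> () {
    println!("Hello!");
}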

You don't need the `return` keyword in Rust: if the last expression in a function body (or any other body, we'll see more of this later) is not finished with a semicolon, then it is the return value. So `foo` will always return "world". The `return` keyword still exists so we can do early returns. You can replace `"world"` with `return "world";` and it will have the same effect.
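To make the early-return point concrete, here is a sketch (the condition and strings are invented for illustration):
fn maybe_world(x: &'static str) -> &'static str {
    if x == "nothing" {
        return "void"; // an early return needs the `return` keyword
    }
    "world" // the last expression, without a semicolon, is the return value
}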



* This is a programmer-specified conversion which uses the `Show` trait, which works a bit like `toString` in Java. You can also use `{:?}` which gives a compiler-generated representation which is sometimes useful for debugging. As with printf, there are many other options.

Armen Zambrano Gasparnian: Mozilla's pushes - March 2014

Here's March's monthly analysis of the pushes to our Mozilla development trees (read about Gaia merges at the end of the blog post).
You can load the data as an HTML page or as a json file.

TRENDS

March (as February did before it) set a new all-time record for number of pushes.
We will soon have 8,000 pushes/month as our norm.
The only noticeable change in the distribution of pushes is that non-integration trees had a higher share of the cake (17.80% in March vs. 14.60% in February).

HIGHLIGHTS

  • 7,939 pushes
    • NEW RECORD
  • 284 pushes/day (average)
    • NEW RECORD
  • Highest number of pushes/day: 435 pushes on March 4th
    • NEW RECORD
  • 16.07 pushes/hour (average)

GENERAL REMARKS

Try keeps on having around 50% of all the pushes.
The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 30% of all the pushes.

RECORDS

  • March 2014 was the month with the most pushes (7,939 pushes)
  • March 2014 had the highest pushes/day average with 284 pushes/day
  • February 2014 had the highest pushes/hour average with 16.57 pushes/hour
  • March 4th, 2014 had the highest number of pushes in one day with 435 pushes



DISCLAIMERS

  • The data collected prior to 2014 could be slightly off since different data collection methods were used
  • Gaia pushes are only approximately counted. I will write a blog post about this in the near future.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Ben Hearsum: This week in Mozilla RelEng – April 17th, 2014

Major Highlights:

Completed work (resolution is ‘FIXED’):

In progress work (unresolved and not assigned to nobody):

Dietrich Ayala: Firefox OS and Academic Programs

Although Mozilla feels almost like a household name at this point, it is a relatively small organization – tiny, actually – compared to the companies that ship similar types of software [1]. We must, however, have the impact of a much larger entity in order to ensure that the internet stays an open platform accessible to all.

Producing consumer software which influences the browser and smartphone OS markets in specific ways is how we make that impact. Shipping that software requires teams of people to design, build and test it, and all the countless other aspects of the release process. We can hire some of these people, but remember: we’re relatively tiny. We cannot compete with multi-billion-dollar mega-companies while operating in traditional ways. Mozilla has to be far more than the sum of its paid-employee parts in order to accomplish audaciously ambitious things.

Open source code and open communication allow participation and contribution from people that are not employees. And this is where opportunities lie for both Mozilla and universities.

For universities, undergraduates can earn credit, get real world experience, and ship software to hundreds of millions of people. Graduate researchers can break ground in areas where their findings can be applied to real world problems, and where they can see the impact of their work in the hands of people around the world. And students of any kind can participate before, during and after any involvement that is formally part of their school program.

For Mozilla, we receive contributions that help move our products and projects forward, often in areas that aren’t getting enough attention only because we don’t have the resources to do so. We get an influx of new ideas and new directions. We gain awesome contributors and can educate tomorrow’s technology workers about our mission.

I’ve been working with a few different programs recently to increase student involvement in Firefox OS:

  • Portland State University:  The PSU CS Capstone program, run by Prof. Warren Harrison, has teams of students tackling projects for open source groups. The teams are responsible for all parts of the software life-cycle during the project. In the spring of 2013, a group of five students implemented an example messaging app using Persona and Firebase, documenting the challenges of Web platform development and the Firefox OS  development/debugging workflow. This year’s group will implement a feature inside Firefox OS itself.
  • Facebook Open Academy: This is a program coordinated by Stanford and Facebook, that puts teams from multiple universities together to build something proposed by an existing open source project. The Firefox OS team includes students from Carnegie-Mellon, Purdue, Harvard, Columbia in the US, and Tampere UT in Finland. They’re adding a new feature to Firefox OS which allows you to share apps directly between devices using NFC and Bluetooth. With 14 members across five universities, this team is collaborating via Github, Google Groups, IRC and weekly meetings for both the front-end and back-end parts, providing experience with remote working, group coordination and cross-team collaboration.
  • University of Michigan: Prof. Z. Morley Mao’s mobile research group has started looking at device and network performance in Firefox OS. They’ve got a stack of phones and SIM cards, and we’re working with them to find ways to improve battery life and network efficiency on our devices. They’ve started a collection of focus areas and related research on the Mozilla wiki.

If you’re at an academic institution and would like to learn more about how to get your department or your students involved, or if you’re a Mozillian who wants to coordinate a project with your alma mater, email me!

 

1. Mozilla has ~1000 employees. According to Wikipedia, Google has ~50,000 employees, Apple ~80,000 and Microsoft ~100,000.


Robert Nyman: The day Santa died

Today, just as we prepare for an Easter break, I heard my youngest daughter arguing with her older sister, and then the younger came running to me.

Dad, does the Easter Bunny and Santa exist?

As a parent, or anyone speaking to a child, you don’t want to take away their dreams or hopes. At the same time, it’s a rough world, so you see it as your responsibility to teach them as much as possible about how the world works – partly so they know more, and partly, to be honest, to avoid the risk of them being ridiculed for not knowing something.

So should I leave it be and then she’ll eventually find out? Or should I tell her the truth, with the risk of making her sad? I do believe in honesty but at the same time I don’t want to be cynical.

I thought a bit about it, what to say and how to approach the situation. After some thinking, I decided to ask her what she thought:

What do you think? Do you believe they exist?

Yes. Or… I think so.

Have you ever seen any of them in real life?

Well… We had a bunny at the kindergarten once. Who wore shoes. And then you were Santa once, daddy.

Right. So do you believe they exist?

Hmm… No, not really, I guess.
(sounding more like she wished they existed than actually thinking they do)

And then she bounced off, having learned another lesson about life.

Gijs Kruitbosch: Why doing visual refreshes of Firefox is hard

We’re getting closer and closer to releasing Australis with Firefox 29, and that gives me more time to write something that’s been on my mind the last couple of weeks/months. Extra impetus was provided by sentiment along the lines of “how did you possibly miss this / think fix X was a good idea?” from some people outside the core development team, responding to some of our changes.

In this post, I’d like to give you an idea of the number of combinations of options, configurations, themes, add-ons, fonts and styles. It is enormous. Firefox generally tries to fit in with your operating system as best it can, and that means we have to pay attention to lots of things. And yes, that means sometimes we miss things. Here’s a breakdown of some of the things I’ve seen fly by as we made Australis, all linked to bugs specific to particular scenarios (there are 54 individual bugs linked, the majority of which were fixed for 29).

Firefox is available on three main (tier-1) platforms: Windows, Linux, and OS X.

All three platforms support lightweight themes, of which we support light and dark text variants. On light text lightweight themes, we invert the text and icons to be bright (which usually means the theme itself has a dark background). Interactions between these themes and the OS are not always the same everywhere, which leads to bugs.

Different toolbars like the menubar and the bookmarks toolbar can be toggled on and off (which sometimes makes certain ideas more difficult), and the menubar has an autohide state, which is new on Linux and caused specific bugs there.

And although we normally always show the tabbar and the navbar, there are popup windows where we don’t (toolbar=no), which, you guessed it, causes bugs.

Then we have per-window private browsing, which has an indicator which makes things ‘fun’ (and soon private browsing will look even more different from normal browsing).

Of course, while we stick to English layout direction is more or less fixed, but we ship both LTR and RTL locales on all platforms, which changes the order of things, which frequently leads to bugs.

Then there’s the padding that we added for “customize mode”, which affects layout of the toolbars and the (‘fake’) titlebar, which had its own problems.

Some issues are specific to pinned or overflowing tabs (sometimes even particular tabs), as well as panorama/tab groups.

Beyond that, styling is somewhat platform-specific, each with their own quirks:

OS X

OS X is, in a certain sense, “easiest” because the OS doesn’t have a lot of options that mess with things (font size, for example, isn’t easily configurable). But there’s still some variation:

  • Lion vs. pre-Lion: 10.6, which we still support, has no fullscreen button in the titlebar (unlike 10.7-10.9) and has no concept of “Lion fullscreen”.
  • Spaces: causes odd bugs with panels.
  • HiDPI (“retina”): this causes bugs / missed cases. Add external displays which might not be hidpi, and you get even odder bugs (this one’s 10.9-only, too, it seems!).
  • RTL. Coupled with retina –> more bugs.
  • 10.9 broke more stuff.
  • Titlebar can be turned on/off now: cue more bugs.

Linux

Linux really means “Unix that has GTK”, as far as theming is concerned. Unfortunately that ends up being a wide spectrum of cases:

Windows

Windows really means “Windows XP, Windows Vista, Windows 7, Windows 8(.1) and all the corresponding Windows Server versions”. Which then means:

This list doesn’t include bugs caused or revealed by add-ons, but of course those also add interesting behaviour to the mix.

All in all, it’s been an interesting first year as an employee at Mozilla (I started April 1st, 2013), and I can’t wait to see all our changes ship: Firefox 29 is scheduled for release on April 29th.

Frederic Wenzel: The Case for the Ubiquitous Mobile Web

As we use more and more mobile devices in our lives, an open platform is becoming more, not less important.

In an article, "The decline of the Mobile Web", Chris Dixon worries about the future of the Web: despite the dramatic uptake in mobile device usage, mobile Web usage is rapidly declining in favor of apps.

John Gruber of Daring Fireball argues that this is a success of good user experiences over bad ones, and suggests we celebrate it all as an evolution of the Web towards a dumb pipe delivering data to whatever device and platform provides the user experience.

Apples and Oranges

Unfortunately, Gruber is drawing the wrong conclusions from a good premise. I think no one would disagree that users do (and should) gravitate towards good experiences. But when he weighs the UX of apps against not-mobile-oriented websites accessed via dedicated browsers, he's comparing apples to oranges and drawing the conclusion that the Web's underlying technologies are therefore inherently inferior.

And besides, Web technology doesn't matter to him: as long as the user-facing app uses something akin to HTTP in the background, he'll still count it as the Web:

"App Stores are walled gardens, but the apps themselves are just clients to the open web/internet."

This is a very strange use of the word "open". There's nothing open about relegating the Web to acting as a "dumb pipe" like the underlying communication protocols are supposed to be.

As it stands, Gruber is splitting hairs -- calling it the "Web" just because HTTP is involved -- but he's missing the point. Users don't care about what the data pipe looks like, they care about their window into that data.

The Ubiquitous Walled Garden

Why it is harmful to redefine the term Open Web like this becomes clear when we start our discussion from a level playing field. We should ask:

Assuming the same good user experience, is an application written on a proprietary platform just as good for the user as one written on an open stack?

Consider Evernote CEO Phil Libin's recent prediction that we're moving towards a network of connected devices, where the experiences are not enclosed in apps, but are "just there".[1]

That's a wonderful world, just so long as you only use devices blessed by a company (iPhone, iWatch, iFridge and iTV). The walled garden is beautiful on the inside, if you enjoy the exact experiences they deem suitable for you (and that don't interfere with their revenue models, etc. etc.). Who owns the platform, owns the user. And while "users flock to the best experiences", the worst part is that the owner of the platform can choke off innovation whenever they feel threatened by it, and the users may never know what they're missing out on.

The only way around this is by embracing a shared development platform that is not owned by any one competitor (hence, open). And this platform is the Web.

Will it suffice if all those services run on closed platforms, just so long as they speak HTTP in the background? No. What makes the Web open is that you can connect to its resources from anywhere, with any device, so long as a browser exists for the platform.[2] The Web, if it merely acts as a delivery vehicle for data, is not open anymore and cannot act as the level playing field for innovation and choice that it is meant to be.

Time for more, not less openness

This is, in a nutshell, why Firefox OS is such an important project. It's not yet another proprietary mobile app platform. In Firefox OS, the open Web is the platform.

Firefox OS is about weaving the open Web into the very fabric of the mobile landscape. It's about enabling the next generation of makers to hack their devices to their heart's content. It's about providing users with a platform that fosters actual innovation rather than giving them the illusion of choice.

And that's a user experience worth fighting for.


  1. He is, of course, not the first to predict such a thing, but with “smart watches” and such being released left and right, we’re certainly closer than ever to this reality.

  2. Or the platform is the browser engine, as is the case with Firefox OS.

Armen Zambrano Gasparnian: Kiss our old Mac Mini test pool goodbye

Today we have stopped running test jobs on our old Revision 3 Mac Mini test pool (see previous announcement).

There's a very, very long list of people that have been involved in this project (see bug 864866).
I want to thank ahal, fgomes, jgriffin, jmaher, jrmuizel and rail for their help on the last mile.

We're very happy to have finally decommissioned this non-datacenter-friendly infrastructure.

A bit of history

These minis were purchased back in early 2010 and we bought more than 300 of them.
At first, we ran Fedora 12, Fedora 12 x64, Windows XP, Windows 7 and Mac 10.5 on them. Later on we also added 10.6 to the mix (if my memory doesn't fail me).

Somewhere in 2012, we moved the Mac 10.6 testing to the new revision 4 Mac server minis and deprecated the 10.5 rev3 testing pool. We then re-purposed those machines to increase the Windows and Fedora pools.

By May of 2013, we stopped running Windows on them.
During 2013, we moved a lot of the Fedora testing to EC2.
Now we have managed to move the B2G reftests and Firefox debug mochitest-browser-chrome to EC2.

NOTE: I hope my memory does not fail me

Delivery of the Mac minis (photo credit to joduinn)
Racked at the datacenter (photo credit to joduinn)



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Patrick Cloke: Community and Volunteers

It was suggested that I cross-post this from mozilla.dev.planning onto my blog. This is in reply to a thread entitled "Proposal: Move Thunderbird and SeaMonkey to mozilla-central" about (essentially) merging comm-central back into mozilla-central. There have been many technical concerns raised in the thread (that I'm not going to rehash here). What I'm more interested in is the lack of community feeling there. As Nicholas Nethercote said in that thread:
"I am surprised [...] by how heartless the discussion has been."
I should note that I did have some help editing this down from my original post. Turns out I tend to write inflammatory statements that don't help get my point across. Who knew? Anyway, thanks to all of you who helped me out there!

My full post is below (with a few links added and plaintext formatting converted to HTML formatting):
On Monday, April 14, 2014 4:52:53 PM UTC-4, Nicholas Nethercote wrote:
> The technical aspects of this decision have been discussed to death,
> so I won't say anything about that. I am surprised, however, by how
> heartless the discussion has been.

I agree, the technical bits here seem to have solutions suggested by Joshua and others, but the non-technical parts of this discussion have left me feeling disheartened and confused with the Mozilla community.

I find it ironic/amusing/sad/upsetting that a few threads above this is a thread entitled "Contributor pathways, engagement points and bug mentoring" while in this thread I see community contributors being blocked at every turn!

Here I don't see people attempting to foster a community by putting their best foot forward. I see people trying to get their job done, with an attitude of "if this doesn't help me, get it outta my way!" I don't think this is the right way to grow a community. I don't think this is how Mozilla HAS grown its community. I don't think it's in line with what Mozilla expects from its community members (both employees and volunteers!)

Personally, I dislike the amount of focus on Mozilla Corporation goals in this thread. Can we have a discussion as part of a larger community? Why must it focus on Corporate goals? I'm not part of the corporation, I don't really care what its goals are or are not. I care about Mozilla, I care about providing high-quality, free, open source software to improve the experience of the Internet for everyone. And no, I'm not talking about Firefox. I'm talking about Thunderbird. I understand that Mozilla's goals are currently Firefox and Firefox OS, but these are not my personal goals.

At the Summit I had a few conversations with people about "on-boarding" new employees and getting them to understand how the community works and that interacting with the community in a positive manner is an important part of Mozilla. I don't remember the exact context, but part of it was that it is important that new employees don't think of it as "How can I use the community?", for that implies taking advantage of them, but "How can I work with the community?"

Please don't see this as an "employees vs. volunteers" argument. I believe that I'm expected to live up to these same goals. If I, as a volunteer, can help an employee achieve his goals, I'm more than willing, no... I'm EXPECTED to do that. I think this is a two-way relationship that must be fostered. It has seemed to me over the past couple of years that I've been hanging around here that there's been less and less focus on the community and more and more on the Corporation.

I understand Thunderbird and SeaMonkey may not be important to you, but it is important to me! (And others who contribute to the Thunderbird/SeaMonkey community, including employees who contribute on their spare time.) When Mozilla stopped directly supporting development of Thunderbird it was widely announced that "Thunderbird is dead!". We, as part of the Mozilla community, have been fighting to prove this wrong. Could you please respect our efforts? Merging c-c into m-c will help us focus our efforts on building a great product instead of spending significant effort on keeping a dying one on life-support. (And prove to all that "Thunderbird is dead!" was just a sensational headline.)
I don't have much else to say beyond that (besides thanks for reading this far!)

Gervase Markham: Who We Are And How We Should Be

“Every kingdom divided against itself will be ruined, and every city or household divided against itself will not stand.” — Jesus

It has been said that “Mozilla has a long history of gathering people with a wide diversity of political, social, and religious beliefs to work with Mozilla.” This is very true (although perhaps not all beliefs are represented in the proportions they are in the wider world). And so, like any collection of people who agree on some things and disagree on others, we have historically needed to figure out how that works in practice, and how we can avoid being a “kingdom divided”.

Our most recent attempt to write this down was the Community Participation Guidelines. As I see it, the principle behind the CPGs was, in regard to non-mission things: leave it outside. We agreed to agree on the mission, and agreed to disagree on everything else. And, the hope was, that created a safe space for everyone to collaborate on what we agreed on, and put our combined efforts into keeping the Internet open and free.

That principle has taken a few knocks recently, and from more than one direction.

I suggest that, to move forward, we need to again figure out, as Debbie Cohen describes it, “how we are going to be, together”. In TRIBE terms, we need a Designed Alliance. And we need to understand its consequences, commit to it as a united community, and back it up forcefully when challenged. Is that CPG principle still the right one? Are the CPGs the best expression of it?

But before we figure out how to be, we need to figure out who we are. What is the mission around which we are uniting? What’s included, and what’s excluded? Does Mozilla have a strict or expansive interpretation of the Mozilla Manifesto? I have read many articles over the past few weeks which simply assume the answer to this question – and go on to draw quite far-reaching conclusions. But the assumptions made in various quarters have been significantly different, and therefore so have the conclusions.

Now everyone has had a chance to take a breath after recent events, and with an interim MoCo CEO in place and Mozilla moving forward, I think it’s time to start this conversation. I hope to post more over the next few days about who I think we are and how I think we should be, and I encourage others to do the same.

Pete Moore: Weekly review 2014-04-16

Accomplishments & status:

Christian Heilmann: Browser inconsistencies: animated GIF and drawImage()

I just got asked why Firefox doesn’t do the same thing as Chrome does when you copy a GIF into a canvas element using drawImage(). The short answer is: Chrome’s behaviour is not according to the spec. Chrome copies the currently visible frame of the GIF whereas Firefox copies the first frame. The latter is consistent with the spec.

You can see the behaviour at this demo page:
animated GIF on canvas

Here’s the bug on Firefox and the bug report in WebKit to make it consistent. Thanks to Peter Kasting, there is also a bug filed for Blink.

The only way to make this work across browsers seems to be to convert the GIF into its frames and play them in a canvas, much like jsGIF does.

Sylvestre Ledru: Changes Firefox 29 beta7 to beta8

A bigger changelog than I would have liked. However, it is mainly about top crashes, polishing Australis and some sync bugs/improvements.

A few webapp bugs have also been fixed.

  • 57 changesets
  • 93 files changed
  • 1892 insertions
  • 409 deletions

Extension   Occurrences
js          25
css         13
cpp         8
xml         6
jsm         6
java        5
xul         4
ini         3
mn          2
html        2
h           2
json        1
inc         1
c           1

Module      Occurrences
browser     36
services    14
mobile      11
content     3
js          2
image       2
gfx         2
dom         2
toolkit     1
testing     1
netwerk     1
mozglue     1
modules     1
layout      1
accessible  1

List of changesets:

Gavin Sharp: Bug 995041 - Properly disable the problematic portions of browser_aboutHome.js. a=test-only - b1e9827af66f
Matthew Noorenberghe: Bug 992270 - ignoreAllUncaughtExceptions in the about:home test of browser_google_behavior.js. r=gavin, a=test-only - 0e107cfcd3fd
Tim Taubert: No Bug - Fix browser_net_timing-division.js leak when run as the last test on a CLOSED TREE. rs=past, a=test-only - f91bdb05883b
Ryan VanderMeulen: Bug 994798 - Disable browser_frameworker.js on Linux debug for frequent timeouts. a=test-only - e7806ccfe24f
JW Wang: Bug 945475 - Clear |mVideoFrameContainer| to stop staled callbacks which give incorrect videoWidth/videoHeight. r=roc, a=sledru - eaf92a872145
Matt Woodrow: Bug 991767 - Use Moz2D for printing surfaces. r=roc, a=sledru - 5be8148fea1f
Ryan VanderMeulen: Backed out changeset 5be8148fea1f (Bug 991767) for bustage. - 8d21ce8b440a
Chris Karlof: Bug 989549 - Call signOut() in FxAccountsClient.jsm from signOut() in FxAccounts.jsm. r=markh, a=sledru - 8b66928d0515
Mark Hammond: Bug 986636 - Use icon instead of [?] on sync prefs when master password locked. r=ttaubert, a=sledru - 785fb5b58ae5
Mark Hammond: Bug 985145 - Make node reassignment work correctly with FxA. r=rnewman, a=sledru - dcbe04c7a069
Ryan VanderMeulen: Bug 994798 - Disable browser_frameworker_sandbox.js on Linux debug for frequent timeouts. a=test-only - 66f9ad218574
Mike de Boer: [Australis] Bug 477948: Keyhole back/ forward button for Linux. r=jaws, a=sledru. - 871c60982cac
Mike de Boer: [Australis] Bug 477948: update tests for new keyhole on Linux. r=jaws, a=sledru. - ff6f90421768
Dão Gottwald: Bug 989701 - Set -moz-box-align:center for #urlbar-container and reduce the url bar's vertical margin in order to correctly align the urlbar-back-button-clip-path. r=mdeboer, a=sledru. - 4b355a2745cd
Jared Wein: Bug 971034 - Australis - [Windows] Zoom reset button isn't as tall as other zoom buttons in toolbar. r=mikedeboer, a=sledru. - 9d3d5c2225aa
Jared Wein: Bug 967110 - Add an inverted help icon and arrow to show on the menu panel anchor when the Help subview is open. r=mconley, a=sledru. - 5dcc734a5736
Mike Conley: Bug 989609 - Dynamically added toolbars with API-created widgets should not break customize mode. r=Unfocused,mdeboer, a=sledru. - 9dd4a9d6739a
Jared Wein: Bug 971630 - Australis: Far right/left selected overflow tabs look bad on session restore. r=mconley, a=sledru. - 9798420b26fc
Jared Wein: Bug 993421 - Only set the position:relative on the PanelUI-footer-inner when a subview is showing. r=Gijs, a=sledru. - 5aa6eb09fe62
Mike Conley: Bug 992373 - Items in the panel jump up slightly when the customization transition finishes. r=Gijs, mikedeboer, a=sledru. - 7c5fb4327c30
Jared Wein: Bug 993299 - Australis - There is no minimum width set in customization mode. r=mconley, a=sledru. - 3537a7b4b992
Mike Conley: Bug 990218 - Simplify OS X's titlebar styling rules for tabs in titlebar. r=MattN, feedback=Gijs, a=sledru. - 163d2250a03e
Mike Conley: Bug 994758 - Rename tabHeight define to tabMinHeight, which is more accurate. r=dao, a=sledru. - 9eca66ee5b10
Mike de Boer: Bug 989466: revert clip-path change made in Bug 893661 to fix font scaling. r=dao, a=sledru. - f1c211a4714d
Timothy Nikkel: Bug 944353. If we've encountered an error while decoding an image and the main thread has asked to do more decoding of that image before the main thread has acknowledged the error then refuse to decode more. r=seth a=sledru - 1cf083a2ffe7
Timothy Nikkel: Bug 944353. If we've had a decoder error then the image is not usable. r=seth a=sledru - d27facd3d35d
Blair McBride: Bug 990979 - about:welcomeback is missing a CSS file on aero variant of the Windows theme. r=MattN a=sylvestre - 31aca79a5126
Matthew Noorenberghe: Bug 946987 - Add 2x tab images for Windows and use them for 1.25dppx and higher. r=mconley a=Sylvestre - 975b76d0b1c0
Matthew Noorenberghe: Bug 980220 - UITour: [Linux] Change the highlight style to have better fallback without an X compositor. r=Unfocused, ui-r=mmaslaney a=sylvestre - 8e6041de3ce7
Gijs Kruitbosch: Bug 989289 - only migrate builtin toolbars, also migrate toolbox, r=mconley a=sylvestre - e946bc71ae2b
Gijs Kruitbosch: Backed out changeset 9d3d5c2225aa (Bug 971034) for stretching all the icons when navbar includes add-ons with overly large toolbar icons, a=backout, rs=mconley,MattN,gijs,jaws,Unfocused - 915700dc5198
Mark Hammond: Bug 990834 (part 1) - minor refactor of hawk tests. r=ckarlof, a=sylvestre - deb83f2f75fa
Mark Hammond: Bug 990834 (part 2) - Add support/tweak retry and backoff header support to hawk and tokenserverclient. r=rnewman, a=sylvestre - 329a2a180a8b
Mark Hammond: Bug 990834 (part 3) - Fix handling of hawk errors. r=ckarlof, a=sylvestre - b074e386a410
Jonathan Watt: Bug 991400 - Prevent gfxPath instances from being created on the stack (they're refcounted). r=dholbert, a=sylvestre - 679aa869f39f
Gijs Kruitbosch: Bug 971034 - adjust min-height of zoom control reset button only, r=jaws, a=sylvestre - c6f80ae1ee23
Gijs Kruitbosch: Bug 992747 - toolbar visibility setting doesn't work for non-customizable toolbars, r=jaws, a=sylvestre - 04e63b14de25
Gijs Kruitbosch: Bug 977572 - catch drag end and drop events from bookmarks drag handler so we can clean up, r=mak, a=sylvestre - 5eb91b9f89ec
Gijs Kruitbosch: Bug 993322 - fix widgets not showing up in toolbox, r=mconley, a=sylvestre - c1bbbe2e1309
Mike Conley: Bug 973694 - Fix glitchy-looking private browsing indicator on OS X when tabs in titlebar are disabled. r=MattN, a=sylvestre - d20804c31f61
Jeff Muizelaar: Bug 969226 - Check if there is enough data to read u32 to avoid buffer overflow. r=bgirard, a=abillings - 05c933823ad8
Valentin Gosu: Bug 991471 - Fix offset when setting host on URL. r=mcmanus, a=abillings - 1be8ef9bf661
Myk Melez: Bug 989294 - Release index when app is uninstalled. r=mfinkle, a=sledru - 7872e02410a7
Nick Alexander: Bug 981827 - Make Android and Desktop FxAccounts client use same key parameters. r=rnewman, a=sledru - 13a97e892449
Wes Johnston: Bug 994456 - Add a preview surface for webrtc in webapps. r=gcp, a=sledru - 4dd58172981c
David Major: Bug 970362 - Block F-Secure on Windows XP. r=bsmedberg, a=sledru - 756b592c869f
Mike de Boer: Bug 993932: remove border-color transition to remedy TART regression. r=dao, a=sylvestre. - 8855f67b592c
Mike Conley: Bug 995161 - Customize mode can still break after bootstrapped add-on with custom legacy:true toolbar restarts. r=Gijs, a=sylvestre. - 27221179c8b0
Gijs Kruitbosch: Bug 989683 - restrict how we inherit the menubar text color to tabsintitlebar cases on non-aero, r=dao, a=sylvestre. - bf8adf5a7040
Matthew Noorenberghe: Bug 987407 - Set the pref startup.homepage_override_url in-product for beta 29. r+a=gavin - a7433dd3312a
Bobby Holley: Bug 993918 - Shut down CAPS and XPConnect after imagelib and gfx. r=Ms2ger,bsmedberg, a=sylvestre - afc5f648e247
Tim Taubert: Bug 995266 - Prevent mochitest-browser harness from leaking due to SimpleTest overrides. r=ted, a=test-only - f11f4dda1cde
Alexander Surkov: Bug 977668 - Firefox hangs on Facebook text entry when inline lookups pop up. r=jwei, a=sledru - 109cc0131968
Wes Johnston: Bug 990395 - Use a url to find browser apps rather than a scheme. r=mfinkle, a=sledru - 2ab3be04316a
Rick Eyre: Bug 981280 - Disable WebVTT support on 29 r=bz,cpearce a=sledru - 3a3224245147
Myk Melez: Bug 990125 - Ignore automatic update checks in webapp processes. r=mfinkle, a=sledru - 3960907890b7
Jan de Mooij: Bug 991457 - Don't DCE MLoadElement if it needs a hole check. r=h4writer, a=sledru - 3437e5663d9e


r= means reviewed by
a= means uplift approved by

Previous changelogs:

Dave Townsend: Welcome to the new Toolkit peers – Paolo, Matt, Jared and Irving

Slightly belated in some cases but I’d like to formally welcome four new toolkit peers. Paolo Amadini, Matthew Noorenberghe, Jared Wein and Irving Reid have all shown themselves to be well capable of reviewing patches in any of the toolkit code. Paolo, Matt and Jared actually got added a few months ago but apparently I failed to make an announcement at the time. Irving was added just last week. Please congratulate them all and don’t go too hard on their review queues!

Also if you think there are others who should be peers of Toolkit (or current peers that are no longer relevant) then please let me know.

Rick Eyre: Getting the number of lines of text in an Element

One of the biggest problems I faced when developing vtt.js is that a lot of the layout algorithm depends on being able to know the line height of the subtitle text. This boils down to being able to know the line height of the div within which the subtitle text sits. A lot of the time this is easy to get:

  var lineHeight = div.style.lineHeight;

But, what if you haven't set a line height? Then you would need to get the computed value of the line height:

  var lineHeight = window.getComputedStyle(div, null).getPropertyValue("line-height");

This works... some of the time. On some browsers, if you try to get the computed value of the line height and you haven't explicitly set a line height, the computed property will come back as the value normal. That's helpful...

After much searching I found out that if you use getClientRects on an inline element it will return a TextRectangle box for each line of text in the inline element. At that point you can either assume that each line has the same height and just use the height property of the first TextRectangle, or, to get a somewhat more accurate number, you can take the height of the inline element and divide it by the number of TextRectangles you have.

  var inlineElement = document.getElementById("myInlineElement"),
      textRectangles = inlineElement.getClientRects(),
      container = inlineElement.getBoundingClientRect(),
      lineHeight = container.height / textRectangles.length;

  alert("The average line height is: " + lineHeight);

This works really well for the amount of actual code you need to write. I've read about more accurate methods, but they take some serious coding. Like walking-through-each-character-in-the-text-and-tracking-when-overflow-happens serious.

Now back to my original question, which was how to get the number of lines of text in a div (block-level) element. The way I did this was to wrap my div, which has my content, in another div, and set the inner div's display property to inline. Then you can calculate the line height / number of lines of text of the inner div, since it has inline display. This way you retain your content's block-level layout while being able to figure out how many lines of text it is.

This is it all put together:

  <div>
    <div id="content" style="display:inline;">
      This is all my content in here. I wonder how many lines it is?
    </div>
  </div>
  var inlineElement = document.getElementById("content"),
      textRectangles = inlineElement.getClientRects(),
      container = inlineElement.getBoundingClientRect(),
      lineHeight = container.height / textRectangles.length;

  alert("The average line height is: " + lineHeight);

Chris Cooper: If I had a million dollars

Kraft Dinner

Armen has a blog post up about the cost savings Mozilla has been able to realize in its continuous integration infrastructure in Amazon over just the last 3 months. This has been a bit of a sea change for release engineering, who have historically been conservative with regards to changing core infrastructure and practices. We’re all coming to grips with the new world order, but I’m quite excited about the possibilities.

Some quick back-of-the-envelope calculations based on other recent numbers from Armen:

  • starting with a low-ball estimate of 7,000 pushes/month, if we project the rate of spending from December ($19/push) over an entire year, we end up with $1,596,000.
  • at the new rate ($6/push), a year of AWS time will cost only $504,000.
  • that’s a yearly savings of $1,092,000.

If history has taught us anything, continued growth will eat in to at least part of that savings, but think of what Mozilla could do with an extra million dollars. Depending on where we hire them, that money could easily buy 5-10 more engineers to continue driving the mission forward.

Seif Lotfy: Ich bin ein Xamarin(er) ♥

My new home office

As of the beginning of April I am a Xamarin (that is what Xamarin employees call themselves).

At Xummit I met the rest of the Xamarins and I had an incredible time there (dare I say magical ♥).
I met old friends like Rodrigo Moya, Jason Smith, David Siegel, Cody Russell, Neil Patel, Connor Curran, Gord Allot and others, but also made new friends:

  • Zack Gramana: The right amount of crazy and creative. He is helping me with my new pet project.
  • Seth Rosetter: SF chilled out hacker with an ear for techno and extreme positive attitude, a delight to hang out with.
  • Mike Krüger: One of the friendliest people I got to meet and know with exactly my kind of humour.
  • Victoria Grothey: Incredibly nice person with lots of energy and always smiling.
  • Marek Safar: The most passionate beer expert I know, I guess. Also, rumour has it that either I am stalking him or he is stalking me.
  • Václav Vančura: An awesome designer who motivated me to start drawing again. Thanks for that.

And many, many more.

One thing I believe in is that interpersonal relationships between co-workers are a must for a community or a company to be productive and successful. Xamarin promoted (and still promotes) this positive habit, and achieved it and even more. The upbeat attitude and enthusiasm at Xamarin is infectious. Combined with the diversity in culture as well as in the stuff/tasks to do, it brings the best out of Xamarins. I will not forget the bus ride to the venue: 8 people with 7 different nationalities, but all happy and psyched about what they are doing and what others are doing ♥.

Since I joined Xamarin I started doing more Mono in my free time too. Currently I am porting Synapse to Mac (since I loved the interface and some of the functionalities I couldn’t find in Alfred and Quicksilver). Here is a small, very early sneak peek :)

Synapse for Mac in the making

I am loving Xamarin and all its stands for and brings to the table.

P.S: Hylke Bons has a fan base here at Xamarin :)

Jeff Walden: In which I demonstrate Supreme Court fitness in property law comparable to that of Justice Breyer

I said previously that I had two law posts to make. Here’s the non-Mozilla-related post.

Introduction

I’ve blogged about visiting the Supreme Court for oral arguments before. I had the opportunity to do so again for the extremely interesting week of January 13 earlier this year. I attended oral arguments concerning the Appointments Clause, assembly restrictions in Massachusetts, bankruptcy shenanigans, and railroad property law. A month ago, the first decision, in the property law case, Marvin M. Brandt Revocable Trust v. United States, was announced. I’m going to blog about it a little, because I think it’s cool and because of its impact on rail trails.

Before I do that, I’d like to note that the Marvin M. Brandt Revocable Trust v. United States article on Wikipedia is entirely my work (and my mistakes :-) ). (At present. Release the vandals in 3, 2, 1….) It’s the first article I’ve written start to finish. I’m more than a bit proud of that. And I’m particularly excited to have done it in such a cool area of law. :-)

Background

Back in the 1800s, as the United States expanded toward the Pacific Ocean, it needed to be able to efficiently transport goods and people across that distance. At the time, the solution was railroads. So Congress passed acts incenting railroad creation by granting rights of way across federal land. After initially granting rights of way to specific, named railroads in separate bills, Congress streamlined the process in the General Railroad Right-of-Way Act of 1875. Under this act, any railroad meeting certain conditions could get a right of way, until those provisions’ repeal in 1976.

The facts

Fast-forward to (coincidentally) 1976. The United States granted a land patent (that is, a document making clear — “patent” — title to land) to Melvin Brandt for 83 acres in Wyoming, as part of a land swap. One limitation on the grant was that it was subject to a railroad right-of-way originally granted to the Laramie Hahn’s Peak & Pacific Railway Company under the 1875 Act. The grant mentioned no other limitations on the right-of-way.

LHP&P never really worked as a railroad, and it passed through several hands. In 2004 the ultimate owners legally abandoned it. What happened to the right-of-way? This is where things got complicated.

The United States wanted the right-of-way land, so it filed suit to quiet title in its favor to clear up ownership. The United States resolved claims with everyone along the way — except for Marvin Brandt, Melvin’s son.

Brandt’s position

Brandt argued that the right of way was an easement. An easement is a restriction on your ownership of land, that says some other person can enter into and (perhaps) use it for some particular purpose. So your house’s land may have an easement across it for a sidewalk, that allows people to go on the sidewalk, walk through, and briefly stop on it, and you have to accept that. You still own the land; you just don’t quite have free rein over it. (This is why you’re usually responsible for clearing snow off your sidewalk. It’s your land, your fault if someone slips and twists an ankle and it was reasonably foreseeable.) When an easement terminates, the land is unburdened by the easement. No physical property changes hands, the easement just doesn’t exist, and the land owner can again prevent entry and use of his land.

Brandt buttressed this argument by pointing to Great Northern Railway Company v. United States. In this 1942 case, the Supreme Court decided whether Great Northern could drill for oil and gas on an 1875 Act right-of-way. The United States said no, it couldn’t — the right-of-way was in the nature of an easement, only an easement had been granted, all signs (language, legislative history, early interpretation, Congress’s construction of it in subsequent acts) said it was an easement. The 1942 Court agreed. Open and shut case for Brandt, right? Yes and no.

The United States’s position

The United States argued that 1875 Act rights of way were a “limited fee made on implied condition of reverter”. Let’s unpack this gibberish. “fee” is roughly “ownership”, and “reverter” refers to what happens to the property after some condition (here, abandonment) holds. The United States thought railroad rights of way were an unusual sort of easement. Easements don’t typically let you come in and tear things up, but it’s necessary for railroads to dig, bore, build up, lay track, and so on. So these “railroad easements” were a fee in those regards. And in regard to reversion after abandonment, ownership reverted to the United States.

In light of Great Northern, this may sound ridiculous. But the United States found language in earlier cases, and to an extent in Great Northern, saying that railroad easements had “attributes of the fee”. And two cases predating Great Northern had treated 1875 Act rights of way as limited fees. The problem was, in those cases the Supreme Court had conflated 1875 Act rights-of-way with rights-of-way under acts before 1871. In 1871, Congress changed policy from basically giving railroads land, to only letting them lay tracks on it. Congress wanted to encourage settlement, not just the arbitrary enrichment of railroads (who had become incredibly huge land owners in the West). The Court conflated the two because, in at least one of the cases, neither side had filed briefs, and the Court made a legal mistake.

The United States argued that Great Northern didn’t really say 1875 Act rights of way were easements.

Oral argument

Oral argument was pretty interesting. I read half a dozen briefs and the lower court opinion in the case, so I was moderately prepared to follow argument. In some ways I was almost on par with the justices. Justice Breyer candidly admitted to fumbling with his recollections of A. James Casner‘s property law class, about which he briefly rambled (as is his wont — he’s known for rambling :-) ).

Oral argument generally trended against the United States. Sparks flew when the United States attorney began argument. Justice Alito bluntly told him the United States should receive a “prize for understatement” for “acknowledg[ing in its brief] that there is language in [] Great Northern and in the government’s brief in that case that lends some support to [Brandt's] argument.” Alito recited the brief’s subject headings, all forcefully arguing that the right-of-way was an easement and only an easement.

The argument didn’t go much better from there on for the United States. Various justices wanted to know how much land would be affected by a judgment that these rights-of-way were easements — permitting takings claims for just compensation, especially when the land had already been taken by the United States. No answer was forthcoming, because the records had been taken so long ago and were so geographically distributed. Breyer in particular repeatedly asked if there were any other easement-but-not-always constructs in the common law of property.

Opinions

The Court announced an opinion on March 10, just under two months after oral argument. Fast turnarounds typically indicate uncomplicated cases, and this was such a case. The justices divided 8-1 for Brandt, uncritically adopting his position. Chief Justice Roberts wrote the opinion, which began with a half-dozen pages of history of the West and particularly of LHP&P. (Definitely give it a read if you like Western history.) Roberts emphasized that the United States lost because it had won in Great Northern and faulted it for its “stark change in position”. He also asserted that 1875 Act railroad rights of way must be analyzed as common law easements — not a strange amalgam as the United States had argued.

Justice Sotomayor dissented alone. She argued that Great Northern had decided only one aspect of the property interest in railroad rights of way, and it hadn’t decided how reversion should play out. She also thought that railroad rights of way shouldn’t be analyzed under the common law, because of the extent to which they went beyond what normal easements allowed.

In the end the United States was roundly rebuked and defeated. Sometimes 8-1 decisions are a matter of some recognized, fundamental disagreement; see for example many of Justice Thomas’s solo dissents. But when a decision goes this way, in a case barely implicating deep jurisprudential disputes, you have to second-guess yourself a bit when you’re on the losing side. It’s one thing to lose with others agreeing with you. But when no one else sees it as you do, perhaps you’re the one who’s wrong.

Why did the United States pursue the case to a resounding loss? This particular case arose a bit weirdly. It was pushed by various property-rights groups at the start. And where it was raised, in the Tenth Circuit, existing circuit precedent said Brandt’s argument would lose, and lose it did. Brandt appealed to the Supreme Court, citing the circuit split: a good way to get your case heard, but no guarantee. What possibly tipped the balance was that the United States, despite winning, agreed the Court should hear the case. Why?

It looks to me like the United States got greedy. It saw an opportunity to wipe out the other circuits’ bad precedents, and it blinded itself to the weakness of its argument.

Consequences

What happens to Brandt specifically? The case returns to the Tenth Circuit to respond to the decision, but it’s unclear to me what’s supposed to happen there. I’d think they’d just quiet title in Brandt and be done, but the Rails-to-Trails Conservancy says it’ll keep working in the Tenth Circuit to “narrow the ultimate impact of the Supreme Court’s ruling”. How they can work against a predetermined quiet title action, I don’t know. (It’s possible this is just a face-saving claim on their part.) And it’s possible the United States might just acquire the right of way using eminent domain. (Why not do that and avoid suit? Money, of course. If it owns the land, there’s no just compensation to pay. If not, that’s money out of the government’s pocket.) So Brandt’s not quite out of the woods yet, pun probably intended.

But Brandt’s particular plight isn’t the important thing here. It’s all the other places where suddenly takings claims can go forward. No one knows how many of these there are. Statutes of limitations and estoppel will preclude many claims, but not all of them. It’s still an unresolved mess.

Lessons

This touches a deeper concern. The United States acted here because it wanted to create rail trails, converting useless railroad corridors into bike trails. I like bikes. I like bike trails. But the law authorizing rail trails was enacted with flagrant disregard for the actual ownership of railroads in disuse. The CBO estimated the law wouldn’t cost a penny, but it now could cost $500 million, maybe more after this decision. We should demand a higher standard of Congress in the laws it passes.

Jeff WaldenIterating a number sequence for lulz and jail time

Hello, readers! Today I bring you two posts about law: one Mozilla-related, one not. This is the Mozilla-related post. Mozillians may already know this background, but I’ll review for those who don’t.

The “hack”

In 2010 Goatse Security (don’t look them up) discovered a flaw in AT&T’s website. AT&T’s site detected accesses from iPads, extracted a unique account number sent by the iPad, then replied with a private account email address. Account numbers were guessable, so if someone “spoofed” their UA to look like the iPad browser, they could harvest private email addresses using their guesses.

The lulz

Andrew Auernheimer ("weev") wearing an old-school AT&T baseball cap
Andrew Auernheimer, i.e. weev, CC-BY-SA

The people who figured this out were classic Internet trolls interested (to a degree) in minor mayhem (“lulz”) because they could, and they scraped 114,000+ email addresses. Eventually Andrew Auernheimer (known online as “weev”) sent the list to Gawker for an exclusive.

The sky is falling!

AT&T, Apple, the people whose addresses had been scraped, and/or the government panicked and freaked out. The government argued that Auernheimer violated the Computer Fraud and Abuse Act, “exceeding authorized access” by UA-spoofing and loading pages using guessed account numbers.

This is a broad interpretation of “authorized access”. Auernheimer evaded no security measures; he only accessed public, non-login-protected pages using common techniques. Anyone who could guess the addresses could view those pages using common browser addons. People guess at the existence of web addresses all the time. This site’s addresses are of the form “/year/month/day/post-title/”. The monthly archive links to the side on my site have the form “/year/month/”. It’s a good guess that changing these components does what you expect: no dastardly hacking skills required, just logical guesses and experimentation. And automation’s hardly nefarious.

So what’s Mozilla’s brief with this?

Developers UA-spoof all the time for a variety of innocuous reasons. Newspapers have UA-spoofed during online price discrimination investigations. If UA spoofing is a crime, many people not out for lulz are in trouble, subject to a federal attorney’s whims.
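As a concrete illustration, here's a minimal sketch (mine, not the brief's) of the sort of UA spoofing a developer might do to check how a server responds to a tablet browser. The host and UA string are made up:

var https = require('https');

// Request a page while claiming to be a (fictional) tablet browser.
// Servers often vary their response by UA; developers do this routinely
// to test mobile layouts.
https.get({
    hostname: 'example.com',  // stand-in host
    path: '/',
    headers: { 'User-Agent': 'MyTabletBrowser/1.0 (tablet)' }  // made-up UA
}, function (res) {
    console.log('status: ' + res.statusCode);  // did we get the tablet version?
    res.resume();  // drain the response so the socket can close
});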

The same is true for constructing addresses by modifying embedded numbers. I’ve provided one example. Jesse once wrote a generic implementation of the technique. Wikipedia uses these tactics internally, for example in the Supreme Court infobox template to linkify docket numbers.

Mozilla thus signed onto an amicus brief in the case. The brief laid out the reasons why the actions the government considered criminal were “commonplace, legitimate techniques”.

The cool part of the brief

I read the brief last summer through one of Auernheimer’s attorneys at the inestimable Volokh Conspiracy. I’ve been lightly meaning to blog about this discussion of number-changing ever since:

Changing the value of X in the AT&T webpage address is trivial to do. For example, to visit this Court’s homepage, one might type the address “http://www.ca3.uscourts.gov/” into the address bar of the browser window. The browser sends an HTTP request to the Court website, which will respond with this Court’s homepage. Changing the “3” to “4” by typing in the browser window address bar returns the Court of Appeals for the Fourth Circuit’s homepage. Changing the “3” to a “12” returns an error message.

Illustrating the number-guessing technique (and implying its limitations in the “12” part) via the circuit courts’ own websites? Brilliant.
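The entire technique fits in a few lines. Here's a minimal sketch of the brief's example (my own code, assuming Node.js and the URL pattern quoted above):

var https = require('https');

// Substitute each number into the circuit courts' address pattern and see
// which hosts respond. Nothing here evades any security measure; a wrong
// guess (the "12" case) simply fails.
for (var n = 1; n <= 12; n++) {
    (function (host) {
        https.get('https://' + host + '/', function (res) {
            console.log(host + ': ' + res.statusCode);  // e.g. "www.ca3.uscourts.gov: 200"
            res.resume();
        }).on('error', function (err) {
            console.log(host + ': ' + err.message);  // unresolvable or unreachable
        });
    })('www.ca' + n + '.uscourts.gov');
}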

Back to Auernheimer

The court recently threw out Auernheimer’s conviction. Not on CFAA grounds — on more esoteric matters of filing the case in the wrong court. But the opinion contains dicta implying that breaching a password gate or code-based barrier may be necessary to achieve a conviction. The government could bring the case in the right court, but with the implied warning here, it seems risky.

Sympathy

Auernheimer isn’t necessarily a sympathetic defendant. It’s arguably impolite and discourteous to publicly disclose a site vulnerability without giving the site notice and time to fix the issue. It may be “hard to feel sorry for them being handed federal criminal charges” as Ars Technica suggested.

But that doesn’t mean he committed a crime or shouldn’t be defended for doing things web developers often do. Justice means defending people who have broken no laws, when they are threatened with prosecution. It doesn’t mean failing to defend someone just because you don’t like his (legal) actions. Prosecution here was wrong.

One final note

I heard about the AT&T issue and the brief outside Mozilla. I’m unsure what Mozilla channel I should have followed to observe or discuss the decision to sign onto this brief. Mozilla was right to sign on here. But our input processes for that decision could be better.

Daniel GlazmanWelcome cbeard

Among Mozillians, there is a small (not too small, in fact..) group of people who were already here before 15-jul-2003. After that date, we saw old-time contributors rejoin Mozilla one by one, and new hires too, something we had forgotten about since the 2002 Netscape layoffs. Chris Beard was one of them, at the end of 2004 IIRC (time flies, holy cow, time flies...). While old-time Mozillians saw a necessary little shift in the local culture because of these new hires, that was clearly not the case with cbeard, who adapted so well to Mozilla that we immediately used his IRC nick to mention him. He has vision, deals very well with the community, is always open to discussion, leads new projects, and is highly respected. I'm glad he was appointed interim CEO. Welcome Chris!

Rodrigo SilveiraContributing to FirefoxOS Cordova initiative

After the project I was working on got cancelled, I started contributing to Firefox OS Cordova project. Cordova is an open source framework for writing multi-platform native mobile applications using web technology. Cordova provides you with javascript APIs and the plumbing necessary to access the device's internals, such as battery status, GPS and camera. Neat stuff. Each mobile operating system has its own platform implementation for doing the communication between cordova's javascript API and the native OS code.

This post will focus on how to get started writing the Firefox OS platform and plugins. To get a better understanding of how to use cordova to write a Firefox OS app, I highly recommend the mozilla hacks post on the subject.

Cordova is written in node.js, so you only need to understand javascript to work on it. It took me much code digging and asking around to get started, but you won't have to!

The repositories

Cordova code is organized into multiple repositories. The main ones you need to be aware of for Firefox OS development are cordova-cli, cordova-firefoxos and cordova-plugin-*. Here is a brief description of them:

  • cordova-cli - is where the code for the command line tools is located. There is some platform-specific code under src/metadata, consisting of config parsers. Firefox OS uses it to get the initial version of the manifest with the correct app name and other values.
  • cordova-firefoxos - is the repository for the Firefox OS platform tools. The code here is responsible for handling Firefox OS cordova commands and for the initial skeletal app.
  • cordova-plugin-* - are repositories for plugins. A plugin repository contains code for each supported platform too.

Running it locally

To work on the platform, you need to run the latest code from the repositories. It's super helpful to run cordova entirely from local files so that you can edit code and see the effects. With the multiple-repository organization used by cordova, this can be tricky. Make sure you have git and node.js installed. A GitHub account will be handy if you plan to send us your changes. The prompt samples below use bash.

First, let's get cordova-cli from the mozilla-cordova GitHub account and install its dependencies. From the directory where you'd like to keep cordova code, run:

$ git clone https://github.com/mozilla-cordova/cordova-cli.git
$ cd cordova-cli
$ npm install
$ cd ..

The cordova binary is located at cordova-cli/bin/cordova. From now on this is the binary we'll use for all our cordova command line needs. You can add it to your PATH if you want; I'll use the relative path for clarity. Next, let's clone the Firefox OS platform bits from the cordova-firefoxos repository:

$ git clone https://github.com/mozilla-cordova/cordova-firefoxos.git

No need to install dependencies for cordova-firefoxos, they're already part of the repository. Before creating an app, there's a little trick to tell cordova to use the local platform code we just downloaded. Create a file named firefoxos.json with the following contents:

{
    "lib": {
        "firefoxos": {
            "uri": "/<FULL PATH TO>/cordova-firefoxos",
            "version": "dev",
            "id": "cordova-firefoxos-dev"
        }
    }
}

Make sure to set the full path to the cordova-firefoxos folder under uri. We can now create a new cordova app by running create. Let's create the app in a myapp folder, give it the even more original project id of io.myapp, and name it myapp. The fourth parameter to create is the contents of the json config file we just created, passed as a string. To create the app, run:

$ cordova-cli/bin/cordova create myapp io.myapp myapp "$(cat firefoxos.json)"
$ cd myapp

Alternatively, to use a local copy of cordova-firefoxos platform code on a cordova app that already exists, you can create a json file with the same content as above under yourapp/.cordova/config.json. In fact, that fourth parameter created that file for you. Go check.

To add the platform, all you need to run is:

$ ../cordova-cli/bin/cordova platform add firefoxos

That's it. If you make any changes to cordova-firefoxos, remove and add the platform again to make sure you have the latest.

Adding a plugin

Working with local plugins is much simpler. Let's download the contacts plugin as an example:

$ cd ..
$ git clone https://github.com/mozilla-cordova/cordova-plugin-contacts.git

Adding a local version is pretty simple: just pass the path as the parameter to the plugin add command:

$ cd myapp
$ ../cordova-cli/bin/cordova plugin add ../cordova-plugin-contacts

NOTE: if at this point you hit a ReferenceError: xml_helpers is not defined error, don't despair. It's a bug in cordova-plugman code, which is responsible for plugin management. We can fix it by getting the latest version of cordova-plugman, and making sure cordova-cli uses it too. Here's how:

$ cd ..
$ git clone https://github.com/apache/cordova-plugman.git
$ cd cordova-cli
$ npm install ../cordova-plugman
$ cd ../myapp
$ ../cordova-cli/bin/cordova plugin add ../cordova-plugin-contacts

To see changes you made to plugin code, you have to remove and then add the plugin again. To remove the plugin you need to use the plugin name, not the path. Running ../cordova-cli/bin/cordova plugin ls will show you the names of installed plugins. For example, to remove the contacts plugin run ../cordova-cli/bin/cordova plugin remove org.apache.cordova.contacts.

That's it, you are now running the latest and greatest versions of it all!

Firefox OS plugin development: from javascript to javascript

Cordova provides you with a javascript API that tries to follow standards when possible. Firefox OS is built on web standards too. Sometimes they use the same API. How can a plugin developer access the Firefox OS API when they clash?

Cordova provides us with a modulemapper library to access the original values of overwritten properties. Let's take a look at how the battery-status plugin uses modulemapper:

var mozBattery = cordova.require('cordova/modulemapper').getOriginalSymbol(window, 'navigator.battery');

The variable mozBattery now points to the original navigator.battery. The first parameter to getOriginalSymbol is the context, pretty much always window. The second is the value you want to get. To find out what value to use on the second parameter, check the <js-module> element in the plugin's plugin.xml configuration file. For the battery-status plugin it is:

<js-module src="www/battery.js" name="battery">
    <clobbers target="navigator.battery" />
</js-module>

The <clobbers> element's target attribute has the value that was overwritten.
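Putting the pieces together, here's a minimal sketch of what a plugin's Firefox OS module might look like. The getLevel API and module shape are my own invention for illustration; only getOriginalSymbol and the clobbered navigator.battery value come from the plugin files above:

// Hypothetical www/battery.js-style module for the Firefox OS platform.
var modulemapper = cordova.require('cordova/modulemapper');

// The plugin's own JS API clobbers navigator.battery, so grab the original
// Firefox OS battery object that got shadowed.
var mozBattery = modulemapper.getOriginalSymbol(window, 'navigator.battery');

module.exports = {
    // Hand the native battery level straight to the caller.
    getLevel: function (successCallback) {
        successCallback(mozBattery.level);
    }
};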

Contributing

If you got this far, you're ready to get started! Open up your favorite editor and hack on. If you want to help with Firefox OS support, check out our status site and the project's wiki.

While writing this post I got news that I'm joining the team. Super excited to improve cordova support for Firefox OS! If you want to chat with us, we hang out on the #cordova channel on mozilla's irc server.

Brian KingChris Beard Stories

You may have heard that Chris Beard came back (he never really left) to Mozilla as interim CEO. I have many Chris Beard stories, but here are just a couple of personal ones.

The first was back in 2006 when I first contracted for Mozilla writing an add-on. Chris was product managing the add-on and we were on an early call with others trying to wrap up and get a first version out the door. I forget the details, but the general tone of the conversation changed for me when Chris said something to the effect of “let’s ship something we are proud of and that users will love”. Up until that time I had volunteered for many years for Mozilla with a carefree attitude. This was Chris’ way of saying that what we are doing is important, and we have to do it well. After that I contracted on other projects but also put in a lot of volunteer time. It never lost the fun aspect, but I knew what we were doing was serious and making an impact.

Fast forward to 2010, to the Mozilla Balkans Meeting in Ljubljana. We gathered in the center of the city at a typical Slovenian ‘gostilna’ (restaurant) and were told a special guest was coming. Everyone was expecting a famous Balkans singer. Instead our brand new CEO at that time, Gary Kovacs, walked in, accompanied by Chris and a few others. After all the excitement, we settled down to eat and I was sitting beside Chris. We talked about many things, but throughout he was passionate, sharing his big ideas both for what I was working on and for the opportunities Mozilla had moving forward. Every encounter with Chris brought a piece of advice, inspiration, a big idea, or all of them wrapped up in one.

Balkans Mozillians

Chris in Ljubljana with the team. Picture by Tristan Nitot on Flickr.

Somehow I feel the best Chris Beard stories are to come.


Matthew NoorenbergheAn easy way to test the New Firefox Beta look and feel before it's released

Screenshot of the new Firefox UI on Windows 7 with the menu panel open

The new Firefox Beta is faster, simplified and easier to customize, and we need your help to test it out before it gets released in a few weeks' time. There is now an easy way to review Firefox user interface changes without even installing the new version. "How is that possible?" you might ask. We have a collection of hundreds of screenshots of the new Firefox in various configurations (affecting features such as tabs, toolbars, themes, customization mode, and the new menu) that are ready for you to review. It's fast and easy to do in three simple steps:
  1. Open up the screenshot review tool and enter a nickname to log in (there may be a small reward for the most valuable contributions so keep that in mind when choosing)
  2. A random screenshot will be displayed where you can identify any visual issues you see related to the new user interface. Simply drag to select the region of the image and add a comment (and optionally a bug number). See an example.
  3. When you're done reviewing that image, click the button to get another and return to step 2. Endless fun ensues!
Of course, installing Firefox Beta, testing the functionality and filing bugs is still really valuable and encouraged. Areas to focus on include the new customization mode, menu panel, tabs, and Firefox Account Sync. So, what are you waiting for? Start reviewing now.

Mark SurmanMozilla is all of us

Ten years ago, a scrappy group of ten Mozilla staff, and thousands of volunteer Mozillians, broke up Microsoft’s monopoly on accessing the web with the release of Firefox 1.0. No single mastermind can claim credit for this achievement. Instead, it was a wildly diverse and global community brought together through their shared commitment to a singular goal: to protect and build the open web. They achieved something that seemed impossible. That’s what Mozillians can do when we’re at our best.

Over the last few years, we’ve taken on another huge challenge: building a smartphone incorporating the technology and values of the open web. In a few short years, we’ve taken Boot to Gecko, an idea for an open source operating system for mobile, all the way to the release of Firefox OS phones in 15+ countries. It was thousands of Mozillians — coders, localizers, partners, evangelists and others — that made this journey possible. These Mozillians, and the many more who will join us, will play a key role in achieving the audacious goal of putting the full power and potential of the web into the hands of the next two billion people who come online.

Over the last few weeks, the media and critics have jumped to the conclusion that our CEO defines who Mozilla is. But, that’s not the reality.

The reality is this: Mozilla is all of us. We are not one or two leaders, and we never have been. Mozilla is a global community of people building tools for a free and open web that we can’t build anywhere else. We’re people solving the tough problems on the web that most need solving. Mozilla is all of us taking action every day, wherever we are. Building. Teaching. Empowering. We all define who Mozilla is together. It’s the things we choose to build and teach and do every day that add up to ‘Mozilla’.

While hard, the past few weeks have been a reminder of that. The attention, boycotts, ire from across the political spectrum, and departure of an original founder like Brendan would have devastated most companies, leaving them wounded and floundering with their leadership gone. But, Mozilla is not like most companies. Instead, we’re a global community that rolls up our sleeves to work on a common cause, not a company with a single leader. Mozilla is all of us. As Mozillians, we need to remember this. And live it.

That’s one of the reasons I’m happy Chris Beard agreed to step in as interim CEO at the Mozilla Corporation today. Certainly, he knows technology and products, having played a key role in everything from the early success of Firefox to unveiling Firefox OS at the Mobile World Congress. But, more importantly right now, Chris is one of the best leaders I know at gathering people around Mozilla in a way that lets them have impact.

Just one example of where Chris has done this: the famous Firefox 1.0 ad in the New York Times.

Firefox 1.0 New York Times Ad

The notable thing about this ad is not its size or reach, but that Mozilla neither placed nor even paid for it. The ad was a grassroots effort, dreamed up and paid for by roughly 10,000 people who’d been using Firefox in beta and wanted the world to know that there was a real choice in how people could access the web. Chris was running marketing for Mozilla at the time. As he saw community momentum growing around the idea, he jumped in to help, bringing in more resources to make sure the ad actually made it into the Times. He did what Mozilla leaders do at their best: empower Mozillians to take concrete action to move our cause forward.

Mozilla has a tremendous amount of momentum right now. We’ve just shipped Firefox OS in 15 countries and released a $25 open source smartphone that will bring the web to tens of millions of people for the first time. We’re about to unleash the next round of events for our grassroots Maker Party campaign, which will bring in thousands of new volunteers and teach people around the world about how the web works. And we’re becoming a bigger — and more necessary — voice for trust and for privacy on the web at a time when online security is facing unprecedented threats. The things we are all working on together are exciting, and they’re important.

In all honesty, the past few weeks have taken their toll. But, as they say, never waste a good crisis. We’re already seizing the opportunity to become even better and stronger than we were a month ago.

This starts with reminding ourselves that Mozilla is at its best when we all see ourselves as leaders, when we all bring our passion and our talent full bore to building Mozilla every single day. Chris has a role in making this happen. So do people like Mitchell and me. The members of our boards play a role, too. But, it is only when all of us roll up our sleeves to lead, act and inspire that we unlock the full potential of Mozilla. That is what we need to do right now.


Lukas BlakkLearn To Teach Programming – Software Carpentry

Today, post PyCon conference, I spent the entire day immersed in an incredibly dynamic and educational workshop by Software Carpentry, “Learn to Teach Programming”. I’m going to do a mix of dumping my notes in a play-by-play fashion with possible sidebars for commenting on what I experienced personally so that I have a record of this to look back on as I move forward with Ascend Project planning and execution.

Meet Your Neighbours

The event started off, as they always do, with a go-round of people introducing themselves in short form.  As we started taking turns our teacher, Greg Wilson, asked for the person who just spoke to tap the next person to speak before sitting down.  This proved to be our first of many small applications of the science behind learning and how it can play out in real life.  While it apparently takes a room of kindergarten children 3 reminders to do this extra step during intros, it took this room of ~25 adults 14 requests before we mostly started doing so without prompting from Greg.  By the way, during the intros I learned about Dames Making Games which I can now add to my mental list of awesome women-in-tech groups and if you’re reading this and are in Toronto, check them out!

Teaching Is Performance

It raises your adrenaline, brings out your nervousness, and it’s something you need to work at. A few quick tips from Greg on preparing for your ‘performance’ as teacher: always bring cough drops, and figure out what your ‘tell’ is. Like with poker, everyone has at least one thing they do when they are nervous. I suspect my ‘tell’ is talking fast and/or having trouble not smiling too much (at least in poker, it is). This was our first introduction to how we should be reflective about our teaching – even go so far as to record yourself if you can’t get honest feedback from people around you – so that you can spot these things about your manner and work on adjusting them to ‘perform’ teaching in a more confident and reliable manner.

Improv came up as a way to work on this where you can get feedback on how you perform and also learn to keep other people engaged.  I used to do improv when I was an awkward teenager and didn’t feel like I was a superstar at it but I wonder what it could be like now that I have more confidence.  I’ll be looking for classes in SF to try it out.  What’s there to lose?

Why Don’t We Teach In Teams?

Greg pointed out how teaching, unlike music and comedy, is such a solo activity.  Musicians typically build up their experience and skills by playing with others.  The best comedians by and large spent a significant amount of time in some sort of comedy troupe before striking out on their own as a stand-up or as major film stars.  Teachers though?  Often alone in their classrooms and if my partner is an example of the ‘norm’, definitely alone while grading and preparing lessons.  This is something worth exploring: what could teaching be like for the teacher if there was team teaching?  What could we do with more feedback, more often, and with someone helping us track measurable progress towards our goals as agents inspiring learning?  Finland has an excellent system of teacher feedback and peer/mentoring for their educators.  Teacher’s college is harder to get into there than medical school (not sure that’s a good thing, but it’s what Greg told us).

Key Points About Teaching & Learning

  • People have two kinds of memory layers – short and long term – and short term memory (which is what we are working with in classroom environments) can hold 7 ± 2 items, so really we should aim for 5 in order to teach within our students’ capacity.

  • We have to balance on/off time – we lose some time switching between tasks or concepts in the teaching, but given the memory limitations mentioned above, we must let people take breaks to reset & refresh.

  • The average person can take in information for about 45 minutes before their attention wanes from exhaustion. For me, this is more like 30 minutes. Hearing this from Greg reminds me that I want to propose that all meetings I’m involved with at work move to a default length of 30 minutes, and that we have a set of rules for how to deal with ‘overage’: either an email or mailing list post, an etherpad, a follow-up meeting, or a proposal with a request for feedback – so that we are not taking an hour because we *have* an hour.

  • Apparently the military has a lot of research and effective solutions for human performance. Greg mentioned being at a naval academy where the grad students he was lecturing to dropped into doing pushups when a bell sounded on the hour. This sounds like a great practice for anyone trying to learn and be engaged with others – get your blood pumping and change your position. It reminds me to get that automated rest-taking app running on my laptop again, and to actually pay attention to it for a while instead of dismissing it over and over.

  • Continuous ‘flow’ – oh, that elusive state for programmers. There was some sort of quote about coffee but I missed the first part; the gist was that when we are immersed in something and truly engaged we can override that 45-minute intake limitation from before, but if we do more than pause (without switching contexts) we could end up breaking flow, and it takes at least 5-10 minutes to get back into it. This is key for people who work in environments full of distractions and interruptions. I’ve been thinking a lot about this one lately as I’d like to break my very unproductive cycle of checking IRC and email in a loop as though I am event-driven. I need to set aside times to get into ‘flow’ and do bigger tasks with more focus.

  • A sidebar of the distraction mention was the fact that, in programming, syntax can be the distraction – that is, errors in the syntax itself. When you get stuck trying to figure out where your semi-colon or indentation is off, you break out of ‘flow’. In a language/framework like Scratch this is not possible, as the blocks cannot be dragged and dropped into any order that creates errors, except in ways that are related to logic and program flow – worth stopping to think about (and it keeps you in your engagement ‘flow’).

  • There are roughly three types of minds out there to work with in teaching: a) Novice, b) Competent, c) Expert. The Novice doesn’t know what they don’t know, so the most important thing to do when trying to teach a Novice is to make sure their mental model of the concept you are teaching is correct. This was to become a lot of the focus for the rest of the day – methods of determining if our concept is getting across correctly. The Expert is such because they have more connections between all the facts they know about the concept/skill, so they can leap from point A to point J in one move where it takes a Competent mind all the dots in between – executed well, but with thought and intention – to get there. It is *as hard* to get Novices to become Competent as it is to get Experts to see the concept they are trying to teach as a Competent person does. Think about something you might be an Expert at and see if you can tell what steps you assume other people will know.

  • Another key point about the Expert is the idea of reflection. Being able to reflect on your skill is huge for honing it. An example: I went to a hockey skating workshop where they videotaped us skating our fastest. When I saw that video, I saw how knock-kneed I was, and how my internal map of skating with wide leg strokes did not match what was on the tape. I was a) horrified, but also b) reminded of how far I have to go and how much more work I need to do in order to reach a higher level of expertise, such as that reflected to me by the instructors.

Accepting Feedback and Critique

We spent some time talking about critique. In architecture, art, music, and many other disciplines there is a built-in system for critique.  It helps the student to build up their sense of self, to know their strengths and weaknesses.  We do not always have this in teaching.  In our workshop, Greg had people write down one piece of positive and one negative feedback on two sticky notes (yellow for positive, pink for negative) and he asked us to put them on a piece of paper at the front of the room before we headed out on our first break (just over an hour of instruction had occurred).  When we returned we discussed what the anonymous feedback had provided Greg with and what he could actually work on in the moment vs. what was useful for later.  He mentioned doing this, and letting it be anonymous, was a great way to build trust with your students. Also we talked about how to get better at accepting feedback, working with it, not letting it paralyze you or derail your lesson.

One of the key takeaways for me here was the idea that the most senior leader/teacher should model this for others.  Show that you can hear feedback, both good and negative (hopefully constructive), and be able to move forward without crumbling under the pressure.  While I’m nervous about feedback, I will do my best to ‘fake it till I make it’ on this point because it’s definitely more important to correct course and create a better experience for students than to be proud and lose their interest and especially, trust.

Concept Maps

Our next major concept was the concept map. This is a way to help yourself understand what you are trying to teach. It’s also a way to check yourself against the 7 ± 2 factor: if you have more than 5 main concepts in the concept map, it’s time to evaluate what can be put aside for now or what can become the next lesson. The concept map can also be shared with students as a way to make sure everyone is on the same page, or at least starting with the same page. Greg recommended handing out a printout of the concept map so that students could doodle and expand it in ways he might not have thought of.

We learned how the concept map should never be used for grading. It’s mostly a tool for the teacher to know if they have managed to get across the mental model well enough for the novice to reflect back a matching map and feel comfortable moving on to the next concept. It’s also a way of preventing the “blank screen”, where students can be frozen trying to come up with what to put down (in programming or in writing). Scaffolding in the form of a map, hints, or any other form of guidance can jump-start the student and hold their hand until they need less and less of it to self-start, self-direct, and truly *learn* autonomously.

We did an exercise where we drew up concept maps for how to teach a for loop.  This was my first time doing a concept map and it was hard.  Definitely will take practice and likely some more reading/looking at other concept maps to drive home the concept for myself.

concept map explaining a for loop

This is an attempt to map out the concepts required to understand a for loop – note we went over 5 items

Key points from Greg:

  • Make your concept map look ‘cheap’ so that people aren’t afraid to give you honest feedback
  • Write and share maps with each other – try this with your team at work on a project you’re starting – you might see that others have a *very* different sense of what is being attempted
  • Try not to need things in your concept map that you will “explain later” – if you can’t explain it now you’re going to disrupt the ‘flow’ of maximizing the short term memory limits
  • Transfer your map into a list of bullet points as it will help you put the most important concepts first
  • Think of concept mapping like couples dances. You both want to be doing the same dance or there will be a lot of bruised shins :)

Sticky Notes as Invaluable Teaching Tool

We used sticky notes at several points in this workshop.  While we only had two colours today, Greg recommends three colours to be used as follows:

  • Green:  Students can put this up in a visible place when they have completed the exercise currently being done
  • Yellow: Students can put this up when they have a question. This is also a great tool for ensuring more participation in the classroom setting. Some people talk more than others, and there are definitely certain types of people who take up more space, so the deal with the yellow stickies was: you get two; when you ask a question, put one aside. Another question? Put the other aside. Now you have no more questions until EVERYONE in the class has used at least one of their yellow stickies.
  • Red:  Students can pop this up in a visible place when they need help on something.  This is great for two reasons: 1) the student can keep *trying* instead of worrying about holding a hand up and waiting for eye contact with a teacher and 2) the student can request help without drawing too much attention to themselves.  This is great for classes with people who might have learned it’s best not to speak up, ask questions, or draw attention to themselves out of fear and/or shame.

Know Your End Goal

This probably shouldn’t have *blown my mind* but it did.  It’s so obvious yet I’ve never once designed curriculum with this approach. You can bet that’s all changed now.  Here’s the key point:

DESIGN YOUR LESSON BY WRITING THE ‘EXAM’ FIRST

Ya. It’s maybe obvious. You want to make sure the students leave knowing what you intended to teach them? Well, figure out how you’re going to measure that success *first*, then build your lesson up to that. “They understand the for loop” is not enough. Be specific. Have a multiple choice question that tests the output of a for loop and gives 3 plausible answers and one right answer. Use this to check if you are teaching well – their failure to choose the right answer is your failure to teach the concept correctly. This doesn’t have to be for actual grading (unless you want to grade yourself). Think of this like Test Driven Development for curriculum. Teach to the goal. You will develop lessons faster and more efficiently. Your learners will appreciate it. They can tell when they are learning vs. having a lecturer do a brain dump on them that goes nowhere in particular. Backwards design works. Greg’s book plug related to this section: “Seeing Like a State”.
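To make that concrete, here's the kind of question I have in mind (my own made-up example, not one from the workshop). Each wrong answer maps to a specific broken mental model:

// What does this loop print?
for (var i = 0; i < 3; i++) {
    console.log(i);
}
// a) 1 2 3    (plausible: thinks the loop counts from 1)
// b) 0 1 2    (correct)
// c) 0 1 2 3  (plausible: thinks the loop runs while i <= 3)
// d) 3        (plausible: thinks only the final value of i prints)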

Another tip? Create one or more user profiles for your lesson. In our workshop we created Dawn: a 15-year-old girl who is good at science and math, learning programming in a one-day workshop. Then we did an exercise in crafting a question that would confirm whether we had successfully taught her how functions work.

We learned about Allison Elliott Tew’s work and about “Concept Inventory”, which is a way to use common mistakes in mental modeling to create multiple choice questions where the incorrect answers can help you understand *how* someone has misunderstood the concept you are trying to teach. Multiple choice is great because it’s quick to get you an assessment (teacher grading time).

Peer Instruction

Related to multiple choice as a test of understanding is Peer Instruction. This is a method that uses a multiple choice question in a really interesting and engaging fashion.

Developed by Eric Mazur in the 1990s, this method expects students to have done some pre-work on the material before coming to class so that the entirety of the lesson can be used to compare and correct conceptual maps and understanding of the material. It goes like this (at least in Greg’s interpretation – Wikipedia differs as to how Eric designed it):

  1. Provide a multiple choice question based on the pre-work content.  Ensure 3 plausible answers and one correct
  2. Students select and *commit* to an answer (there is not yet software for this, though there are clickers) – you can also ask people to hold up the number of fingers for their choice and have classroom helpers count
  3. If everyone picks the right answer you can move on but otherwise you ask people to talk in groups with their neighbours to examine each other’s choices and what the correct answer might be and why.  This is great for having people explain their mental model/map
  4. Vote again and have students commit to the answer
  5. Instructor reveals the answer, as well as perhaps a single sentence explaining why
  6. Groups discuss again, this time they can explore their understanding with the correct answer alongside people who, likely, had the correct model

This teaching technique was proven in 1989 but is still widely unused (esp. in MOOCs). Greg told us that he can usually do about 10 of these types of questions in a 1 hour class. We did an example of one in the workshop to test out the method and it was a lively exercise. This was also an opportunity for Greg to help us notice how the noise in the room helps a teacher determine when is a good time to check in, continue the lesson, or make sure people aren’t stuck. Active, engaged learning is boisterous and noticeably relaxed. Quiet can mean focus, and then as people complete the exercise you can hear discussions start up as those who are done talk with each other about it. I look forward to getting a bit of expertise at this level of listening and was impressed by Greg’s skill at reading classroom energy levels.

F*ck It, I’m Outta Here

I have several more pages of notes but it’s getting late and this is a long post. There’s one more part of the workshop that I’d like to write about: the moment when you decided you didn’t want to learn something anymore.

This is a really great piece of advice for teachers.  Greg started by saying that he used to ask students what motivated them to learn, what great experience in learning they had so he could tap into that motivation as a teacher.  Now?  He asks people what DE-motivated them.  You get a lot out of people this way.  Ask someone (or think of your own experiences): “What was something you were curious about, working on, getting into, and what happened that made you say ‘f*ck it’ and drop it? If you could go back in time what would you change?”.

For my example I spoke about returning to gym class at 12 years of age after recovering for many months from a very physically traumatic incident where I was hit by a car while on my bike (15 bones broken, 6 months in a wheelchair). Being immobilized *and* being a pre-teen caused me to put on a fair amount of weight, and I was no longer very physically active or able. I also had yet-to-be-diagnosed asthma. Not only did I have to endure a gym class where those with natural talents were held up while the rest of us were discarded, but I also continued to fail tremendously at getting more than a “Participation” certificate (every other result got a very nice badge!) for the Canada Fitness Test.

My “F*ck it” moment was when I got so frustrated with never getting a badge that I stole someone’s gold badge when no one was watching. I also ended up eschewing all sports and athletic pursuits for many years if there was any hint of tryouts or actual talent needed. Years later, at 29, I taught myself how to run using a couch-to-10K program that did repetitions of running and walking in order to build up endurance. Not only did I succeed at that but I learned to *love* running and feeling healthier in my body. If I could go back in time I would become a Physical Education teacher and make sure every kid in my class knew that it’s not about natural talent at anything. It’s about setting achievable goals for yourself and comparing your results against your OWN RESULTS. Never mind some test, or the other kids. We’re all very different, but no one should be denied a sense of accomplishment. It’s what keeps you coming back to learn & build on what you’ve learned.

Badges awarded to Canada Fitness Test Participants

The coveted badges.

Now Go Read More: Keep Learning How to Teach

It was an amazing day.  I have more notes to transcribe for myself but I think I’ve managed to capture the major concepts I learned today that will all be invaluable in my work on Ascend and beyond. Greg is an experienced, passionate, driven teacher and his enthusiasm for *knowing* what works in education is contagious.  I want to be a better scientist and educator too. The Software Carpentry movement is picking up momentum.  Look for workshops, blog posts, and opportunities to participate in a town near you.   See their site for up to date information and also check out their materials page for additional resources.  I’ve got a few new books to read on the plane home tomorrow.

Pascal FinetteNever Walk - A Talk About Entrepreneurship And Running

Part 1 - Roger

2011-10 Startup Week Presentation Never Walk.001.jpg

This is one of the most inspired moments in the history of athletics: Roger Bannister crossing the finish line on 6 May 1954 during a meet between British AAA and Oxford University at Iffley Road Track in Oxford, United Kingdom, where he became the first human to run the mile in less than four minutes. An extraordinary achievement which was, at the time, considered impossible. Seeing the picture of Roger crossing the line gives me goose bumps. Each and every time. This picture evokes so many emotions in me - in a lot of ways it’s the perfect capture of the perfect moment.

But we are getting ahead of ourselves. For now - keep Roger in mind, we will meet him again later.

2011-10 Startup Week Presentation Never Walk.002.jpg

“Reaching the finish line, never walking, enjoying the race. These three, in this order, are my goals.” — Haruki Murakami

This presentation is a story about running, running a business and running through life at large. And how all these things can be treated the same. A story about lessons learned. A story about failures, perseverance, winning and the sheer joy of accomplishment - large and small. And it is a story why we should never walk in life.

Let’s get ready… toe to the starting line.

Part 2 - A True Story

2011-10 Startup Week Presentation Never Walk.003.jpg

“We embrace pain. Pain is the purifier.” — Runner’s Proverb

2011-10 Startup Week Presentation Never Walk.004.jpg

In 2008 I found myself with pretty severe depression - a condition and feeling I had never experienced before. I felt helpless. I didn’t know what to do. And I didn’t know how to get out of it.

Over the course of some months I first talked with friends and family and tried to fix it myself. Thought I could figure out what it was, mend it and move on. But it didn’t work.

Eventually, I knew that I needed help. So I searched for help. And found a fantastic therapist. She worked with me through a lot of issues in my past - but more importantly she asked me why I stopped doing sports years ago, having spent most of my youth engaging in one sport or another. I didn’t know the answer. Life just got in the way.

2011-10 Startup Week Presentation Never Walk.005.jpg

My therapist asked me which sport I enjoyed most. The answer was immediately clear to me - running. Running is primal. It’s hardwired into our brains. Humans are born to run.

So I started running again. I ran for life. For my life.

2011-10 Startup Week Presentation Never Walk.006.jpg

About 10,000 miles later, after endless hours on the roads and trails in every place I lived & visited ever since, running with and without company - I learned something. I learned that the fundamental lessons which running taught me, hold true for running a startup. And running through your life.

They are the essential rules for any entrepreneur. They are the essence of living life. At least if you want to do the impossible - and break your own four minute mile.

Part 3 - Ten Principles

2011-10 Startup Week Presentation Never Walk.007.jpg

“Somebody may beat me, but they are going to have to bleed to do it.” — Steve Prefontaine

2011-10 Startup Week Presentation Never Walk.008.jpg

Train hard. There is no way around it. It’s the foundation. Everything else will depend on it.

When I built my first startup, fresh out of university, I didn’t know anything. I had a huge ego and thought that I knew everything there is to know about building and running a startup. But I didn’t. I went into the race without training. It was ugly. I learned on the fly - which is fine. But I had people rely on me. And they suffered from my level of unpreparedness.

Train hard. If you want to race, you need to pour your heart and soul into the preparation. This is where races are lost and won.

2011-10 Startup Week Presentation Never Walk.009.jpg

Make sacrifices. Emil Zátopek is one of the greatest runners of all time. Emil wasn’t terribly talented or genetically gifted as a runner. But he made sacrifices. More than anyone else. And he won.

Building a startup requires huge sacrifices. I slept on the floor in my company when I worked through the night. I blew up a long-term relationship. I lost friends as I didn’t have the time to see them anymore. My first startup was a financial disaster. It was a sacrifice which, in the end, made me a better entrepreneur. And my following ventures so much better.

2011-10 Startup Week Presentation Never Walk.010.jpg

Make positive choices. Your life will be full of decision making points. Make sure you choose wisely. Choose the ones which will have a positive impact on you.

I made a choice in my startup which I paid dearly for - against my gut I chose the investor with the better term sheet. I wanted the money. When the company went downhill, it turned ugly. I didn’t make a positive choice - and paid dearly.

2011-10 Startup Week Presentation Never Walk.011.jpg

Seek your potential. I recently read that, unless you are an ultra-elite runner, you always have the ability to run faster. Always. I believe this is true for everything we do. Only very few people tap their whole potential.

Seek out your potential. Figure out what you’re good at and get better at it. Don’t waste time getting mediocre at something you’re bad at. It’s not worth it. I learned so much about myself doing startups, working at big, fast-growing companies and helping other entrepreneurs. I think I know my strengths now - and I am sure I haven’t reached the limits of my potential. Keep pushing. Become Muhammad Ali.

2011-10 Startup Week Presentation Never Walk.012.jpg

Set high goals. Remember Roger? When Roger set out to break the four-minute mile, people believed that the human body would never be able to run that fast. Doctors were of the opinion that the heart would explode at such a pace. And despite all this, Roger knew that it was possible - he set his goal that high. And only weeks after he broke the four-minute mark, a handful of other runners broke the same barrier. The barrier was only in their heads.

You can’t change the world if you don’t set out to do so. Be bold. Dream big. Who would have thought that we could put a man on the moon? Or that a little social network for Stanford students could become the largest website on the planet?

2011-10 Startup Week Presentation Never Walk.013.jpg

Relax under pressure. Look closely at Shalane Flanagan’s facial expression in this photo. Shalane is the American record holder for the 3000m. And she is completely relaxed and in the zone while racing.

You can’t perform to the best of your abilities if you are tense. You will annoy the people around you. I know - I was tense when I did my first company. I yelled at people. It wasn’t nice - and it didn’t help. Learn to relax under pressure. Breathe deep - it will help you.

2011-10 Startup Week Presentation Never Walk.014.jpg

Attack pain. Pain is inevitable. You will feel pain. You can choose to let it dominate you, or choose to attack it, ignore it, grind through it. In the end pain is just a neuro-signal. You can will your way through it. Pain is the purifier. Be Arnold.

I can’t count the number of times I came to a point where I just wanted to stop. Wanted to give in to the pain. Or just take a break. In running, in life, and in running my businesses. Ignore the feeling. Grind through. It’s just a neuro-signal. If it’s worth it - push on.

2011-10 Startup Week Presentation Never Walk.015.jpg

Push the pace. Go out and don’t hold back. Don’t be the guy who races in the shadow of others and tries to sneak by on the last few meters. Keep on pushing the pace. Steve Prefontaine to this day is the most courageous of runners in the world. He kept pushing the pace. Always.

You chose to start a company. Now do it properly - with every fibre of your body, continuously pushing the pace. Be bold. It’s the only way to succeed as a true leader.

2011-10 Startup Week Presentation Never Walk.016.jpg

Work as a team. Running looks from the outset like a very solitary sport. It is not. Roger had two good friends pace him - one through the first two laps, the other through the third - of his four-lap record run. Your team is everything. Without them you are nothing.

Embrace the spirit of the team in your organization. There is no room for anything else - you have to work as one, for a common goal. Even the brilliant Steve Jobs couldn’t make things happen without his team.

2011-10 Startup Week Presentation Never Walk.017.jpg

Run to win. History has it that Pheidippides died after reporting the Greek victory over Persia in the Battle of Marathon to Athens. Treat the marathon with respect. Run to win. Every time.

Don’t get into business if you aren’t in it for the win. And do what it takes to win. Honor Pheidippides. And run like Usain Bolt.

Encore - Two More

2011-10 Startup Week Presentation Never Walk.018.jpg

“I’m going to work so that it’s a pure guts race at the end, and if it is, I am the only one who can win it.” — Steve Prefontaine

2011-10 Startup Week Presentation Never Walk.019.jpg

Defeat the wall. When you run a marathon you will hit the wall. After 21 miles of running your body simply runs out of glycogen and wants to shut down. This is the point where your will is tested most. You push through it. You force carbohydrates into your body even though your stomach started cramping up at mile 15. But deep down you always knew it is possible. So you persevere and set one foot in front of the other. Repeat. And repeat.

In every venture, I hit the wall. There was always a day when I didn’t want to get out of bed, when I just wanted to throw it all down the drain and give up. Persevere. Get dressed, get to work, get going. Force yourself through it. It won’t last. You can defeat the wall.

[Slide 20]

Relentless Focus & Boring Consistency. Running is all about spending hours and hours doing the same thing - running. You need to have laser-sharp focus and be consistent. There is no way around it.

In your company there is nothing more important than making the main thing the main thing and then executing on it. It’s not flashy & glamorous - but it is how you will get to your goal. I ignored this piece of advice in my first company. I kept chasing the next new thing. And failed.

Again

[Slide 21]

  1. Train Hard
  2. Make Sacrifices
  3. Make Positive Choices
  4. Seek Your Potential
  5. Set High Goals
  6. Relax Under Pressure
  7. Attack Pain
  8. Push The Pace
  9. Work As A Team
  10. Run To Win
  11. Defeat The Wall
  12. Relentless Focus & Boring Consistency

One Rule

[Slide 22]

“The man who can drive himself further once the effort gets painful is the man who will win.” — Roger Bannister

[Slide 23]

The first rule is actually the first and second rule of everything you do.

[Slide 24]

If you don’t have a big, fat grin on your face when you run, don’t do it. Have fun while you’re out there. It is your race.

Remember

[Slide 25]

“Nach dem Spiel ist vor dem Spiel. – After the game is before the game.” — Sepp Herberger

[Slide 26]

“The only good race pace is suicide pace, and today looks like a good day to die.” — Steve Prefontaine

NEVER WALK.

Joel Maher: browser-chrome is greener and in many chunks

On Friday we rolled out a big change to split up our browser-chrome tests. It started out as a great idea to split the devtools out into their own suite; then, after testing, we ended up chunking the remaining browser-chrome tests into 3 chunks.

No more 200-minute wait times; in fact, we are probably running too many chunks. A lot of heavy lifting took place, much of it in releng from Armen and Ben, and much work from Gavin and RyanVM, who pushed hard and proposed great ideas to see this through.

What is next?

There are a few more test cases to fix and to get all these changes on Aurora.  We have more work we want to do (lower priority) on running the tests differently to help isolate issues where one test affects another test.

In the next few weeks I want to put together a list of projects and bugs that we can work on to make our tests more useful and reliable.  Stay tuned!

Geoff Lankow: Weird network problem - help!

I know that strictly speaking I'm posting this to Planet Mozilla, and it's about Chrome/Chromium, but someone here will be able to point me in the right direction.

I'm having odd trouble with Chrome establishing an SSL connection to my webserver. Not only does it not connect, it cuts off any communication to the server for 5 minutes.

Steps to reproduce:

  1. Ping darktrojan.net. It resolves to 64.13.238.140 and responds as you'd expect.
  2. Visit https://www.darktrojan.net/ in Chrome. It gives a cert error in Firefox, but in Chrome it fails to connect.
  3. Ping darktrojan.net again. No response.

This issue has appeared in the last few days - right when I need it to be working most - which suggests it's a (recently released) Chrome 34 problem, except that I can reproduce it in Chromium 33. I don't use either on a regular basis so I don't know if that has anything to do with anything. I also wonder if it's something to do with Heartbleed but my webhost have said the site was never vulnerable so I assume nothing's changed there.

Please email or tweet at me if you have any idea what's going on. I'm tearing my hair out here.

Ian Barlow: Notes from UX Immersion Mobile Conference 2014

Last week I was in Denver for a three day conference put on by User Interface Engineering. I met lots of great people, and the workshops and talks were fantastic. Would highly recommend to anyone looking for a good UX conference to attend.

http://uxim14.uie.com/

Brad Frost


We don’t know what will be under the Christmas tree in two years, but that is what we need to design for.

Principles of Adaptive Design
  • Ubiquity
  • Flexibility
  • Performance
  • Enhancement
  • Future Friendly
Tools
Atomic Design

Break down design elements into reusable components of a system:

  • Atoms
  • Molecules
  • Organisms
  • Templates
  • Pages


More details on Atomic Design here: http://bradfrostweb.com/blog/post/atomic-web-design/

Ben Callahan


Dissecting Design

Part 1: Establish the Aesthetic

Use tools you are comfortable with to establish the aesthetic

Part 2: Solve the Problem
  • Static design tools (photoshop, etc)
  • Responsive design tools
  • html/css

You best solve problems using tools you are fluent with

Part 3: Refine the Solution
  • Static tools
  • Instead of static design hand-offs, consider design pairing: one engineer, one designer, working together side by side.

Efficiency is key with refining a design solution

Group improvisation


The fact is, there is no one way to design for screens. Every project is different. Every team is different. It’s interesting to look at it as a form of group improvisation, where everyone is contributing in the way that makes this particular project work.

“Group improvisation is a challenge. Aside from the weighty technical problem of collective coherent thinking, there is the very human, even social need for sympathy from all members to bend for the common result.”

Group Improvisation requires individuals on a team to be…

  • fluent
  • humble
  • empathetic

Ben’s Theory on Web Process

Create guidelines instead of rigid processes. “The amount of process required is inversely proportional to the skill, humility, and empathy of your team.”

More details on Dissecting Design here: http://seesparkbox.com/foundry/dissecting_design

Luke Wroblewski


Mobile Growth

Mobile shopping in US

  • 2011: 14%
  • 2012: 30%
  • 2013: 50%

Paypal mobile payments

  • 2010: $750M
  • 2011: $4B
  • 2012: $14B
  • 2013: $27B

Mobile revenue

  • Yelp: 40%
  • Facebook: 53%
  • Twitter: 75%


We’ve only had about 6 years to figure out mobile design, vs 30 years of figuring out PCs. We have lots to learn. And more importantly, lots to unlearn.

On the hamburger menu
  • A test showed that a button that reads “MENU” was selected 20% more often than a hamburger menu
  • Interesting Polar Mobile case study, where hiding content under a menu vs using a segmented control showed an instant and major drop off in usage as soon as they changed it
  • Measure measure measure
On the importance of good inputs
  • Airport wifi login – 23 steps on mobile to pay money to get online
  • Designers talked to Luke about how they cut it down to 19.
  • Luke’s response – I have an idea that uses *4* inputs.
  • Hotel Tonight — using a signature gesture to solve the “baby books a hotel room” problem. So good.
  • Booking a hotel happens in 3 taps and a swipe, giving them a competitive advantage


On Startups
  • Release – As quick as you can
  • Refine – by observing real use
  • Repeat – design is never done
Idea: Preemptive customer service

They were watching the user logs, and when they saw bugs they fixed them before users complained, and then reached out to let them know they had fixed something. User feedback was 100% positive. Brilliant.


Jared Spool

Designing Designers

Job interview test
  • Present candidate with a messy sketch of a web form
  • A good designer cleans it up
  • A better designer simplifies
  • An even better designer asks why we need this info at all

Side comment about unintentional design: what happens when you spend time working with everything in the system *except* the user’s experience.

The need for design talent is growing, massively. How do we staff it?

IBM is investing $100M to expand its design business. 1000+ UX designers are going to IBM. This means all the big corporations are going to start hiring UX like crazy. How do we as the design community even staff that? Especially since, today, all design unicorns are self-taught.

How to become a design unicorn <3
  1. Train yourself
  2. Practice your skills
  3. Deconstruct as many designs as you can
  4. Seek out feedback (and listen to it)
  5. Teach others

It doesn’t happen like this in school, though.
  • Schools have too many constraints
  • Out of date (3yr accreditation process)
  • There aren’t enough schools to keep up with the new jobs in demand
  • Schools don’t go deep enough
  • The semester / class based school system can’t support the kind of learning designers need to do to develop their skills

Tying the education problem back to Unintentional Design. We focused so much on the system that we forgot what we were actually trying to do.

Changes to education system?

What if design school were more like medical education, which combines theory and craft? Think of pre-med, medical school, internships, residencies, and finally fellowships.

Changes to our workplace?
  • We are the future managers of this next wave. What can we do?
  • Building a culture of learning
  • Integrating *practice* into our routines (critiques, sketching, what else?)
  • Apply our design skills to design learning

Jared is exploring this idea with The Center Centre — formerly known as the Unicorn Institute.

Nate Schutta

JQuery Mobile Prototyping Workshop

“If a picture is worth a thousand words, how many meetings is a prototype worth?”

Useful links:


Toni Hermoso Pulido: Authentication with Persona and MySQL in an Express application

I have liked Persona (also known as BrowserID) since its beginning, because it:

  • technically supports a more decentralised Internet
  • makes authentication easier for users

Shame on me, I only found time to play with it a few weeks ago. As a proof of concept, I prepared an Express application that connects to MySQL so I could have a better understanding of how this authentication system actually works in practice (from a developer’s point of view).

You can find the code here: Express Persona MySQL Example.

The application is essentially based on the Express Persona authentication module, but it separates the client part from the server side and adds a MySQL layer. So, instead of NodeJS Express for the server side, we could also use any other language - say, Perl Mojolicious - while continuing to use the same code for the client webapp.

An example MySQL dump and an Apache virtual host configuration are provided as well (the latter for proxying requests from the client to the server and for ensuring the 'same origin policy' is respected). We must not forget that Persona takes care only of authentication, so account creation must be handled separately.

One thing that can help when designing an application/service is knowing that custom Persona URLs can also be used. For instance, in the client code: /login/persona/verify is forwarded to http://localhost:4646/persona/verify (via Apache proxy) and this latter URL can also be further customised thanks to the Express-persona module (verifyPath optional parameter).

On the other hand, as a reference, the magic at the client side is done by navigator.id.watch.
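
As a rough sketch of that client-side wiring (the verify and logout paths below follow this example’s proxy setup and are illustrative, not fixed by Persona):

navigator.id.watch({
  loggedInUser: currentUser, // email of the signed-in user, or null (assumed set elsewhere)
  onlogin: function (assertion) {
    // Send the assertion to the server for verification.
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/login/persona/verify", true);
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(JSON.stringify({ assertion: assertion }));
  },
  onlogout: function () {
    // Tell the server to destroy the session (illustrative path).
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/login/persona/logout", true);
    xhr.send();
  }
});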

In the slides below, Alina gives a few more details (in Spanish) about Persona and how to deploy the code I describe:

Hope this helps to get more people to try Persona!

Peter Bengtsson: COPYFILE_DISABLE and python distutils in python 2.6

My friend and colleague Jannis Leidel (aka jezdez) saved my bacon today when I had gotten completely stuck.

So, I have this python2.6 virtualenv and whenever I ran python setup.py sdist upload it would upload a really nasty tarball to PyPI. What would happen is that when people did pip install premailer it would fail horribly and look something like this:

...
IOError: [Errno 2] No such file or directory: '/path/to/virtual-env/build/premailer/setup.py'

What?!?! If you download the tarball and unpack it you'll see that there definitely is a setup.py file in there.

Anyway. What happened, which I didn't realize at first, was that within the .tar.gz file there were these strange copies of files. For example, for every file.py there was a ._file.py, etc.

Here's what the file looked like after a tarball had been created:

(premailer26)peterbe@mpb:~/dev/PYTHON/premailer (master)$ tar -zvtf dist/premailer-2.0.2.tar.gz
-rwxr-xr-x  0 peterbe staff     311 Apr 11 15:51 ./._premailer-2.0.2
drwxr-xr-x  0 peterbe staff       0 Apr 11 15:51 premailer-2.0.2/
-rw-r--r--  0 peterbe staff     280 Mar 28 10:13 premailer-2.0.2/._LICENSE
-rw-r--r--  0 peterbe staff    1517 Mar 28 10:13 premailer-2.0.2/LICENSE
-rw-r--r--  0 peterbe staff     280 Apr  9 21:10 premailer-2.0.2/._MANIFEST.in
-rw-r--r--  0 peterbe staff      34 Apr  9 21:10 premailer-2.0.2/MANIFEST.in
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/._PKG-INFO
-rw-r--r--  0 peterbe staff    7226 Apr 11 15:51 premailer-2.0.2/PKG-INFO
-rwxr-xr-x  0 peterbe staff     311 Apr 11 15:51 premailer-2.0.2/._premailer
drwxr-xr-x  0 peterbe staff       0 Apr 11 15:51 premailer-2.0.2/premailer/
-rwxr-xr-x  0 peterbe staff     311 Apr 11 15:51 premailer-2.0.2/._premailer.egg-info
drwxr-xr-x  0 peterbe staff       0 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/
-rw-r--r--  0 peterbe staff     280 Mar 28 10:13 premailer-2.0.2/._README.md
-rw-r--r--  0 peterbe staff    5185 Mar 28 10:13 premailer-2.0.2/README.md
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/._setup.cfg
-rw-r--r--  0 peterbe staff      59 Apr 11 15:51 premailer-2.0.2/setup.cfg
-rw-r--r--  0 peterbe staff     280 Apr  9 21:09 premailer-2.0.2/._setup.py
-rw-r--r--  0 peterbe staff    2079 Apr  9 21:09 premailer-2.0.2/setup.py
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._dependency_links.txt
-rw-r--r--  0 peterbe staff       1 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/dependency_links.txt
-rw-r--r--  0 peterbe staff     280 Apr  9 21:04 premailer-2.0.2/premailer.egg-info/._not-zip-safe
-rw-r--r--  0 peterbe staff       1 Apr  9 21:04 premailer-2.0.2/premailer.egg-info/not-zip-safe
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._PKG-INFO
-rw-r--r--  0 peterbe staff    7226 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/PKG-INFO
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._requires.txt
-rw-r--r--  0 peterbe staff      23 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/requires.txt
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._SOURCES.txt
-rw-r--r--  0 peterbe staff     329 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/SOURCES.txt
-rw-r--r--  0 peterbe staff     280 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/._top_level.txt
-rw-r--r--  0 peterbe staff      10 Apr 11 15:51 premailer-2.0.2/premailer.egg-info/top_level.txt
-rw-r--r--  0 peterbe staff     280 Apr  9 21:21 premailer-2.0.2/premailer/.___init__.py
-rw-r--r--  0 peterbe staff      66 Apr  9 21:21 premailer-2.0.2/premailer/__init__.py
-rw-r--r--  0 peterbe staff     280 Apr  9 09:23 premailer-2.0.2/premailer/.___main__.py
-rw-r--r--  0 peterbe staff    3315 Apr  9 09:23 premailer-2.0.2/premailer/__main__.py
-rw-r--r--  0 peterbe staff     280 Apr  8 16:22 premailer-2.0.2/premailer/._premailer.py
-rw-r--r--  0 peterbe staff   15368 Apr  8 16:22 premailer-2.0.2/premailer/premailer.py
-rw-r--r--  0 peterbe staff     280 Apr  8 16:22 premailer-2.0.2/premailer/._test_premailer.py
-rw-r--r--  0 peterbe staff   37184 Apr  8 16:22 premailer-2.0.2/premailer/test_premailer.py

Strangely, this only happened in a Python 2.6 environment. The problem went away when I created a brand new Python 2.7 environment with the latest setuptools.

So basically, the fault lies with OSX and a strange interaction between OSX and tar.
This superuser.com answer does a much better job explaining this "flaw".

So, the solution to the problem is to create the distribution like this instead:

$ COPYFILE_DISABLE=true python setup.py sdist

If you do that, you get a healthy-looking tarball that actually works with pip install. Thanks jezdez for pointing that out!

Anthony Hughes: Firefox 27 Bug Statistics

I’m writing today to present the bug statistics for Firefox 27. My apologies for the tardiness of this blog post; too many things have got in my way recently. I try to get these posts out at the end of life of the respective Firefox version, as that allows me to present the statistics across the entire life-cycle of a Firefox version. For Firefox 27, this should have coincided with Firefox 28’s release a few weeks ago. Again, my apologies for getting this out later than usual.

The first story I want to tell is about the high-level breakdown of all tracked bugs in this release. As you can see below there was a marked drop in the total bug volume in Firefox 27. Perhaps unsurprisingly, this allowed us to focus a bit more, which resulted in a smaller number of unresolved and unconfirmed bugs shipping in this release. The numbers are still much higher than we would like, but it is a small victory for the overall quality of Firefox if these numbers continue to trend downward.

[Chart: Firefox 27 total tracked bugs]

The second story I want to tell is about the percentage of incoming bugs confirmed. This is typically an indication of the effectiveness of our incoming bug triage practices. As the volume of incoming bugs decreases we like to see the number of confirmed bugs increase. Unfortunately we have been trending in the opposite direction for some time. Previously I had attributed this to the ever-increasing volume of bugs, but I can no longer rely on this excuse. Looking forward to Firefox 28 I can say that we’ve made remarkable improvement in this area in an effort to reverse this trend. I’ll share more on that in a few weeks.

[Chart: Firefox 27 percentage of incoming bugs confirmed]

The third story I’d like to share is that of when fixes landed for Firefox 27. In the following chart I’ve plotted the average time-line for the past few releases along with Firefox 27’s time-line. In general we expect to see an ever-increasing curve through the Nightly cycle, trailing off as we proceed through Aurora and Beta, with spikes in the first half of these cycles.

Firefox 27 appeared to be trending higher than average as we approached the end of each cycle. While these numbers are not completely out of control, it does put a bit of extra strain on QA. After all, the later a fix lands, the less time we have to test it. Ultimately this creates risk to the quality of the product we ship, but as long as we recognize that, we can try to plan for it accordingly.

[Chart: Firefox 27 fixes by landing date]

The fourth story I want to tell is about the number of bugs reopened. We typically reopen a bug when something is fundamentally flawed with the initial implementation and/or if a patch needs to be backed out. Even in cases where a regression is found, we tend to leave the bug closed and deal with the regression in its own bug report. As such, a high volume of bugs being reopened is usually indicative of a release that saw much churn and may point to quality issues in release.

Unfortunately Firefox 27 continues the story of many of the versions before it and represents a marginal increase in the number of bugs reopened. Of course, the other side of this story may be that testing was more effective. It’s hard to say concretely just looking at the bug numbers.

[Chart: Firefox 27 reopened bugs]

The fifth story I want to tell is one of stability. The following chart shows the number of topcrash bugs reported against Firefox 27 as compared to previous releases. For those unaware, topcrash bugs are those crashes which show up most frequently in the wild and present the greatest risk to quality and security for our users. The unfortunate story for Firefox 27 is that we’ve seen an end to the downward trend that started with Firefox 25 and continued with Firefox 26. The volume of topcrashes puts Firefox 27 in the same ballpark as the rash of point-releases we saw in Firefox’s teens.

Of course there are two sides to every story. The other side of this may very well be that we got better at reporting stability issues, and that resulted in a higher volume of known bugs. It’s hard to say for sure.

[Chart: Firefox 27 topcrash bugs]

The final story I want to tell today is about the percentage of regressions reported post-release. As we hone our processes, bring on more engineers, and get assistance from more contributors, we’ve been getting better at finding and fixing regressions. More code landing in a release inevitably increases the potential for regression. Naturally this leads to an increase in the total number of regressions reported. Firefox 27 was no different, so I thought I’d look at regressions a little differently this time around.

The following chart shows the ratio of regressions reported before release to regressions reported after release. A release with a high volume of post-release regressions is a failure from a QA perspective, because it means many bugs slipped through our fingers. I wouldn’t expect the number of post-release regressions to ever be 0, but we need to strive to always be better.

Firefox 27 represents a huge victory on this front. We saw a huge drop in the number of Firefox 27 regressions reported post-release. For months we’ve sought to improve our triage processes, engage more with developers, and work harder to involve volunteers in our day to day efforts. It’s nice to see these efforts finally paying off.

[Chart: Firefox 27 pre- vs post-release regressions]

That’s Firefox 27, in a nutshell, from a QA perspective. I think it’s useful to be able to reflect on the bug numbers and see what kind of an impact our efforts are having on the product. I really do enjoy visualizing the data and talking about our “victories”, but it’s just as interesting seeing what the data is telling us about where we may have failed. I believe that learning from failures has far more impact than building on successes and acts as a great motivator. What we want to avoid is those crippling failures. I think Firefox 27 is a nice iterative step forward.

Rick Eyre: Hosting your JavaScript library builds for Bower

A while ago I blogged about the troubles of hosting a pre-built distribution of vtt.js for Bower. The issue was that there is a build step we have to do to get a distributable file that Bower can use. So we couldn't just point Bower at our repo and be done with it, as we weren't checking in the builds. I decided on hosting these builds in a separate repo instead of checking the builds into the main repo. However, this got troublesome after a while (as you might be able to imagine) since I was building and committing the Bower updates manually instead of writing a script like I should have. It might be a good thing that I didn't end up automating it with a script, since we decided to switch to hosting the builds in the same repo as the source code.

The way I ended up solving this was to build a grunt task that utilizes a number of other tasks to build and commit the files while bumping our library version. This way we're not checking in new dist files with every little change to the code - dist files which wouldn't even be available through Bower or node because they're not attached to a particular version. We only need to build and check in the dist files when we're ready to make a new release.

I called this grunt task release and it utilizes the grunt-contrib-concat, grunt-contrib-uglify, and grunt-bump modules.

  grunt.registerTask( "build", [ "uglify:dist", "concat:dist" ] );

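  // Note: `exec` below is Node's child_process.exec, assumed to be required
  // near the top of the Gruntfile: var exec = require( "child_process" ).exec;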
  grunt.registerTask( "stage-dist", "Stage dist files.", function() {
    exec( "git add dist/*", this.async() );
  });

  grunt.registerTask("release", "Build the distributables and bump the version.", function(arg) {
    grunt.task.run( "build", "stage-dist", "bump:" + arg );
  });

I've also separated builds into dev builds and dist builds. This way, in the normal course of development, we don't build dist files - which are tracked by git - and have to worry about not committing those changes. That would otherwise happen because our test suite needs to build the library in order to test it.

  grunt.registerTask( "build", [ "uglify:dist", "concat:dist" ] );
  grunt.registerTask( "dev-build", [ "uglify:dev", "concat:dev" ] );
  grunt.registerTask( "default", [ "jshint", "dev-build" ]);

Then when we're ready to make a new release with a new dist we would just run:

  grunt release:patch // Or major or minor if we want to.
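
For completeness, a minimal Gruntfile wiring these pieces together might look like the sketch below. The file paths and config values are assumptions for illustration, not vtt.js's actual build configuration.

  // Hypothetical Gruntfile sketch; paths are illustrative.
  module.exports = function( grunt ) {
    grunt.loadNpmTasks( "grunt-contrib-concat" );
    grunt.loadNpmTasks( "grunt-contrib-uglify" );
    grunt.loadNpmTasks( "grunt-bump" );

    grunt.initConfig({
      concat: {
        dist: { src: [ "lib/*.js" ], dest: "dist/vtt.js" },
        dev:  { src: [ "lib/*.js" ], dest: "build/vtt.js" }
      },
      uglify: {
        dist: { files: { "dist/vtt.min.js": [ "lib/*.js" ] } },
        dev:  { files: { "build/vtt.min.js": [ "lib/*.js" ] } }
      },
      bump: {
        options: { files: [ "package.json", "bower.json" ] }
      }
    });

    // ...plus the registerTask() calls shown above.
  };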

Matt Thompson: Writing for Webmaker’s new “Explore” page

[Mockup: Webmaker Explore page]

What should this copy say?

We’re shipping a new “explore” page for Webmaker. The goal: help users get their feet wet, quickly grokking what they can do on Webmaker.org. Plus: make it easy to browse through the list of skills in the Web Literacy Standard, finding resources and teaching kits for each.

It’s like an interactive text book for teaching web literacy.

The main writing challenge: what should the top panel say? The main headline and two blurbs that follow.

In my mind, this section should try to do three things:

  1. State what this is. And why you care.
  2. Tell a story about the list of skills at left. When you hit this page, you see a list of rainbow-coloured words that can be confusing or random if you’re here for the first time. “Sharing. Collaborating. Community Participation…. Hmmm…. what does that all actually mean?”
  3. Focus on what users can do here. What does exploring those things do for you? What’s the action or value?

[Mockup: Explore page copy, first draft]

First draft

Here’s a start:

Teach the web with Webmaker

Explore creative ways to teach
 digital skills…
through fun making and sharing, backed by the
global Mozilla community’s Web Literacy Standard.

Free. Open source. Fun.

Each skill has free resources and teaching kits anyone can use to teach others –
to help create a more web literate world.

Next steps

Ben Hearsum: This week in Mozilla RelEng – April 11th, 2014

Major highlights:

Completed work (resolution is ‘FIXED’):

In progress work (unresolved and not assigned to nobody):

Pascal Finette: Technology Trends (April 2014)

Earlier this month I was asked to present my thoughts and observations on “Technology Trends” in front of a group of Dutch business leaders. A lot of my thinking these days circles around the notion of “exponential growth” and the disruptive forces which come with this (full credit goes to Singularity University for putting these ideas into my head) and the notion of “ambient/ubiquitous computing” (full credit to my former colleague and friend Allen Wirfs-Brock).

In summary, I believe we are truly in the midst of a new era, with fundamental changes coming at us at an ever-increasing pace.

Here’s my deck – it mostly works standalone.


Sylvestre Ledru: Changes Firefox 29 beta6 to beta7

This beta is a bit bigger than beta6. It fixes some UI bugs, two bugs in the Gamepad API, and some top crash bugs like bug 976536 or bug 987248.

  • 32 changesets
  • 50 files changed
  • 1414 insertions
  • 522 deletions

Extension  Occurrences
cpp        21
js         10
h          5
css        4
xul        1
mn         1
mk         1
json       1
jsm        1
java       1
ini        1
in         1
build      1

Module   Occurrences
browser  16
js       8
image    5
layout   4
gfx      4
mobile   3
toolkit  2
xpfe     1
widget   1
view     1
netwerk  1
media    1
hal      1
content  1

List of changesets:

Nick Alexander: Bug 967022 - Fix Gingerbread progressbar animation bustage. r=rnewman, a=sylvestre - 26f9d2df24af
Neil Deakin: Bug 972566, when a window is resized, panels should be repositioned after the view reflow rather than within the webshell listener, r=tn, a=lsblakk. - 1a92004a684f
Mike Conley: Bug 989289 - Forcibly set the 'mode' attribute to 'icons' on toolbar construction. r=jaws, a=sledru. - 85d2c5b844bc
Gijs Kruitbosch: Bug 988191 - change to WCAG algorithm for titlebar font, r=jaws, a=sledru. - 5e0b16fe8951
Mike de Boer: [Australis] Bug 986324: small refactor of urlbar and search field styles. r=dao, a=sledru. - 274d760590d5
Mike Conley: Backed out changeset 9fc38ffaff75 (Bug 986920) - a90a4219b520
Mike Conley: Bug 989761 - Make sure background tabs have the right z-index in relation to the classic theme fog. r=dao, a=sledru. - 552251cb84b9
Mike Conley: Bug 984455 - Bookmarks menu and toolbar context menus can be broken after underflowing from nav-bar chevron. r=mak,mdeboer,Gijs. a=sledru. - 3f2d6f68c415
Jan de Mooij: Bug 986678 - Fix type check in TryAddTypeBarrierForWrite. r=bhackett, a=abillings - c19e0e0a8535
Jon Coppeard: Bug 986843 - Don't sweep empty zones if they contain marked compartments. r=terrence, a=sledru - ed9793adc2c7
Douglas Crosher: Bug 919592 - Ionmonkey (ARM): Guard against branches being out of range and bail out of compilation if so. r=mjrosenb, a=sledru - 7be150811dd8
Richard Marti: Bug 967674 - Port new Fxa sync options work to in-content prefs. r=markh, a=sledru - c8bcfc32f855
Till Schneidereit: Bug 976536 - Don't relazify inlined functions. r=jandem, a=sledru - ee6aea5824b7
Ted Mielczarek: Bug 980876 - Be smarter about sending gamepad updates from the background thread. r=smaug, a=sledru - 7ccc27d5c8f4
Ted Mielczarek: Bug 980876 - Null check GamepadService in case of events still in play during shutdown. r=smaug, a=sledru - 30c45853f8cb
Bobby Holley: Bug 913138 - Release nsLayoutStatics when the layout module is unloaded. r=bsmedberg - 64fcbdc63ed7
Bobby Holley: Bug 913138 - Shut down gfx at the end of layout shutdown. r=bsmedberg - 6899f7b4f57c
Bobby Holley: Bug 913138 - Move imgLoader singleton management out of nsContentUtils. r=bsmedberg - 58786efcdbbb
Bobby Holley: Bug 913138 - Shut down imagelib at the end of layout shutdown. r=bsmedberg a=sylvestre - 968f7b3ff551
Nick Alexander: Bug 988437 - Part 1: Allow unpickling across Android Account types; bump pickle version. r=rnewman, a=sylvestre - 5dfea367b8b9
Nick Alexander: Bug 988437 - Part 2: Make Firefox Account Android Account type unique per package. r=rnewman, a=sylvestre - 47c8852fde22
Matthew Noorenberghe: Bug 972684 - Don't use about:home in browser_findbar.js since it leads to intermittent failures and isn't necessary for the test. r=mikedeboer, a=test-only - b39c5ca49785
Edwin Flores: Bug 812881 - Ensure OMX plugins instantiate only one OMXClient instance. r=sotaro, a=sledru - 14b8222e1a24
Nicholas Hurley: Bug 987248 - Prevent divide-by-zero in seer. r=mcmanus, a=sledru - afdcb5d5d7cc
Tim Chien: Bug 963590 - [Mac] Make sure lightweight themes don't affect fullscreen toolbar height/position. r=MattN, a=sledru - 2d58340206f4
Gijs Kruitbosch: Bug 979653 - Fix dir attribute checks for url field in rtl mode. r=ehsan, a=sledru - 44a94313968a
Jeff Gilbert: Bug 963962 - Fix use of CreateDrawTargetForData in CanvasLayerD3D9/10. r=Bas, a=sledru - 635f912b3164
Gijs Kruitbosch: Backed out changeset 85d2c5b844bc (Bug 989289) because we realized it'd break add-on toolbars, a=backout - 1244d500650c
Blair McBride: Bug 987492 - CustomizableUI.jsm should provide convenience APIs around windows, r=gijs,mconley, a=sledru. - 9c70e4856b3f
Mike de Boer: Bug 990533: use correct toolbar icon for the Home button when placed on the Bookmarks toolbar. r=mak, a=sledru. - 2948b8b5d51d
Mike de Boer: Bug 993265: preserve bookmark folder icons on the Bookmarks toolbar. r=mak, a=sledru. - 32d5b6ea4a64
Matt Woodrow: Bug 988862 - Treat DIRECT2D render mode as GDI when drawing directly to the window through BasicLayers. r=jrmuizel, a=sledru. - f5622633b23f

r= means reviewed by
a= means uplift approved by

Previous changelogs:

Soledad Penades: What have I been working on? (2014/03)

So it’s April the 11th already and here I am writing about what I did in March. Oh well!

I spent a bunch of time gathering and discussing requirements/feedback for AppManager v2, which implied

  1. thinking about the new ideas we sketched while at the Portland work week in February
  2. thinking about which AppManager questions to ask my team and the Partner Engineering team when we all were at Mountain View – because you can’t show up at a meeting without a set of questions ready to be asked
  3. then summarise the feedback and transmit it to Darrin, our UX guy who couldn’t be at the meeting in Mountain View
  4. then discuss the new questions with Paul & team who are going to implement it

And we were at Mountain View for the quarterly Apps meeting, where those of us in the Apps+Marketplace section of Mozilla get to talk apps and strategy and stuff for two days or more. It’s always funny when you meet UK-based workmates at another office and realise you’ve never spoken before, or you have, but didn’t associate their faces with their irc nicknames.

It was also the last of the meetings held at the already ex-Mozilla office in Castro Street, so it was sort of sad and bitter to leave Ten Forward (the big meeting room where most of the meetings and announcements have been happening) for the very last time. Mozilla HQ is now somewhere else in Mountain View, but I’ll always remember the Castro St. office with a smile because that’s where my first week at Mozilla happened :-)

But before I flew to San Francisco I attended GinJS, which I had been willing to attend for ages and couldn’t (because I’m never in town when it happens). I hadn’t even planned to go to that one, but some folks from Telefonica Digital were going and sort of convinced me to attend too. It was funny to sign up for the meeting while walking down Old Street on the way to the pub where GinJS was held. That’s what the mobile internet was designed for! Then at GinJS I met a number of cool people - some I had spoken to before, some I hadn’t. I recommend you attend it, if you can make it :-)

I was also on the Components panel in EdgeConf. I still haven’t written about the experience and the aftermath of the conference because I basically fell ill at the end of it, and was very busy after that, but I’ll do it. I promise!

I also attended the first instalment of TRIBE, a sort of internal personal development program that is run at Mozilla. The first unwritten rule of TRIBE is you don’t talk about TRIBE… nah, I’m just joking! The first session is about “becoming aware of yourself”, and it was quite interesting to observe myself and my reactions in a conscious way rather than in the totally reactive, subconscious-led mode we tend to operate under most of the time. It was also interesting to speak to other attendees and see things from their own point of view. I know it sounds tacky but it has led me to consider treating others more compassionately, or at least try to empathise more rather than instantly judge. This kind of seminar should be mandatory, whether you work in open source or not.

This session was held in Paris, so that gave me the chance to try and find the best croissant place in the morning and have a look at some nice views in the evening when we finished. Also, I went through the most amazingly thrilling and mindblowingly scary experience in a long time: a taxi ride to the airport during rush hour. I thought we were going to die on each of the multiple and violent street turns we made. I saw the Eiffel Tower in the distance and thought “Goodbye Paris, goodbye life” as we sped past a bus, just a few centimeters apart. Or as a bike almost ran over the taxi (and the opposite, too). I’m pretty sure I left a mark on the floor of the taxi as my foot involuntarily tried to brake all the time. And I thought that traffic in Rome was crazy… hah!

In between all this I managed to update a bunch of the existing Mortar templates, help improve some Brick components, publish an article in Mozilla Hacks, and give a ton of feedback on miscellaneous things (code, sites, peers, potential screencasts, conference talks, articles). Did I say a ton? Make it a ton and a half. Oh, and I also interviewed another potential intern. I’m starting to enjoy that - I wonder if it’s bad!

And I took a week of holidays.

I initially planned on hacking with WebMIDI and a KORG nanoKEY2, but my brain wasn’t willing to collaborate, so I just accepted that fact, and tried not to think much about it. The weather has been really warm and sunny so far so I’ve been hiking and staying outside as much as possible, and that’s been good after spending so much time indoors (or in planes).


Jan de Mooij: Fast arrow functions in Firefox 31

Last week I spent some time optimizing ES6 arrow functions. Arrow functions allow you to write function expressions like this:

a.map(s => s.length);

Instead of the much more verbose:

a.map(function(s){ return s.length });

Arrow functions are not just syntactic sugar though, they also bind their this-value lexically. This means that, unlike normal functions, arrow functions use the same this-value as the script in which they are defined. See the documentation for more info.
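
As a small illustration (my example, not from the original post), the arrow function below sees the same this as the enclosing method:

var counter = {
  count: 0,
  start: function () {
    // `this` inside the arrow function is `counter`, the same `this`
    // as in the enclosing start() method.
    [1, 2, 3].forEach(n => { this.count += n; });
    return this.count; // 6
  }
};
counter.start();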

Firefox has had support for arrow functions since Firefox 22, but they used to be slower than normal functions for two reasons:

  1. Bound functions: SpiderMonkey used to do the equivalent of |arrow.bind(this)| whenever it evaluated an arrow expression. This made arrow functions slower than normal functions because calls to bound functions are currently not optimized or inlined in the JITs. It also used more memory because we’d allocate two function objects instead of one for arrow expressions.
    In bug 989204 I changed this so that we treat arrow functions exactly like normal function expressions, except that we also store the lexical this-value in an extended function slot. Then, whenever this is used inside the arrow function, we get it from the function’s extended slot. This means that arrow functions behave a lot more like normal functions now. For instance, the JITs will optimize calls to them and they can be inlined.
  2. Ion compilation: IonMonkey could not compile scripts containing arrow functions. I fixed this in bug 988993.

With these changes, arrow functions are about as fast as normal functions. I verified this with the following micro-benchmark:

function test(arr) {
    var t = new Date;
    arr.reduce((prev, cur) => prev + cur);
    alert(new Date - t);
}
var arr = [];
for (var i=0; i<10000000; i++) {
    arr.push(3);
}
test(arr);

I compared a nightly build from April 1st to today’s nightly and got the following results:
[Chart: benchmark results, April 1st nightly vs today’s nightly]

We’re 64x faster because Ion is now able to inline the arrow function directly without going through relatively slow bound function code on every call.

Other browsers don’t support arrow functions yet, so they are not used a lot on the web, but it’s important to offer good performance for new features if we want people to start using them. Also, Firefox frontend developers love arrow functions (grepping for “=>” in browser/ shows hundreds of them) so these changes should also help the browser itself :)

Planet Mozilla Interns: Tiziana Sellitto: Outreach Program For Women a year later

A year has passed and a new summer will begin… a new summer for the women who will be chosen and who will soon start GNOME’s Outreach Program for Women.

This summer Mozilla will participate with three different projects, listed here; among them is the Mozilla Bug Wrangler for Desktop QA, which is the one I applied for last year. It has been a great experience for me and I want to wish good luck to everyone who submitted an application.

I hope you’ll have a wonderful and productive summer :)

Andrew Halberstadt: Part 2: How to deal with IFFY requirements

My last post was basically a very long-winded way of saying, "we have a problem". It kind of did a little dance around "why is there a problem" and "how do we fix it", but I want to explore these two questions in a bit more detail. Specifically, I want to return to the two case studies and explore why our test harnesses don't work and why mozharness does work, even though both have IFFY (in flux for years) requirements. Then I will explore how to use the lessons learned to improve our general test harness design.

DRY is not everything

I talked a lot about the DRY principle [1] in the last article. Basically the conclusion about it was that it is very useful, but that we tend to fixate on it to the point where we ignore other equally useful principles. Having reached this conclusion, I did a quick internet search and found an article [2] by Joel Abrahamsson arguing the exact same point (albeit much more succinctly than me). Through his article I found out about the SOLID principles [3] of object oriented design (have I been living under a rock?). They are all very useful guidelines, but there are two that immediately made me think of our test harnesses in a bad way. The first is the single responsibility principle [4] (which I was delighted to find is meant to mitigate requirement changes) and the second is the open/closed principle [5].

The single responsibility principle states that a class should only be responsible for one thing, and responsibility for that thing should not be shared with other classes. What is a responsibility? A responsibility is defined as a reason to change. To use the wikipedia example, a class that prints a block of text can undergo two changes. The content of the text can change, or the format of the text can change. These are two different responsibilities that should be split out into different classes.

The open/closed principle states that software should be open for extension, but closed for modification. In other words, it should be possible to change the behaviour of the software only by adding new code without needing to modify any existing code. A popular way of implementing this is through abstract base classes. Here the interface is closed for modification, and each new implementation is an extension of that.

Our test harnesses fail miserably at both of these principles. Instead of having several classes each with a well defined responsibility, we have a single class responsible for everything. Instead of being able to add some functionality without worrying about breaking something else, you have to take great pains that your change won't affect some other platform you don't even care about! Mozharness on the other hand, while not perfect, does a much better job at both principles. The concept of actions makes it easy to extend functionality without modifying existing code. Just add a new action to the list! The core library is also much better separated by responsibility. There is a clear separation between general script, build, and testing related functionality.

Inheritance is evil

This is probably old news to many people, but this is something that I'm just starting to figure out on my own. I like Zed Shaw's analogy from Learn Python the Hard Way [6] the best. Instead of butchering it, here it is in its entirety.

    In the fairy tales about heroes defeating evil villains there's always a dark forest of some kind. It could be a cave, a forest, another planet, just some place that everyone knows the hero shouldn't go. Of course, shortly after the villain is introduced you find out, yes, the hero has to go to that stupid forest to kill the bad guy. It seems the hero just keeps getting into situations that require him to risk his life in this evil forest.

    You rarely read fairy tales about the heroes who are smart enough to just avoid the whole situation entirely. You never hear a hero say, "Wait a minute, if I leave to make my fortunes on the high seas leaving Buttercup behind I could die and then she'd have to marry some ugly prince named Humperdink. Humperdink! I think I'll stay here and start a Farm Boy for Rent business." If he did that there'd be no fire swamp, dying, reanimation, sword fights, giants, or any kind of story really. Because of this, the forest in these stories seems to exist like a black hole that drags the hero in no matter what they do.

    In object-oriented programming, Inheritance is the evil forest. Experienced programmers know to avoid this evil because they know that deep inside the Dark Forest Inheritance is the Evil Queen Multiple Inheritance. She likes to eat software and programmers with her massive complexity teeth, chewing on the flesh of the fallen. But the forest is so powerful and so tempting that nearly every programmer has to go into it, and try to make it out alive with the Evil Queen's head before they can call themselves real programmers. You just can't resist the Inheritance Forest's pull, so you go in. After the adventure you learn to just stay out of that stupid forest and bring an army if you are ever forced to go in again.

    This is basically a funny way to say that I'm going to teach you something you should avoid called Inheritance. Programmers who are currently in the forest battling the Queen will probably tell you that you have to go in. They say this because they need your help since what they've created is probably too much for them to handle. But you should always remember this:

    Most of the uses of inheritance can be simplified or replaced with composition, and multiple inheritance should be avoided at all costs.

I had never heard the (apparently popular) term "composition over inheritance". Basically, unless you really really mean it, always go for "X has a Y" instead of "X is a Y". Never do "X is a Y" for the sole purpose of avoiding code duplication. This is exactly the mistake we made in our test harnesses. The Android and B2G runners just inherited everything from the desktop runner, but oops, turns out all three are actually quite different from one another. Mozharness, while again not perfect, does a better job at avoiding inheritance. While it makes heavy use of the mixin pattern (which, yes, is still inheritance) at least it promotes separation of concerns more than classic inheritance.

Practical Lessons

So this is all well and great, but how can we apply all of this to our automation code base? A smarter way to approach our test harness design would have been to have most of the shared code between the three platforms in a single (relatively) bare-bones runner that has a target environment (e.g. desktop Firefox, Fennec or B2G in this case). In this model there is no inheritance, and no code duplication. It is easy to extend without modifying (just add a new target environment) and there are clear and distinct responsibilities between managing tests/results and actually launching them. In fact this is how the gaia team implemented their marionette-js-runner [7]. I'm not sure if that pattern is common to node's mocha runner [8] or something of their design, but I like it. (A small sketch of this composition-based design follows at the end of this post.)

I'd also like our test harnesses to employ mozharness' concept of actions. Each action could be a setup step that is as atomic as possible. For example, setting preferences in the profile is a single action. Setting environment is another. Parsing a manifest could be a third. Each target environment would consist of a list of actions that are run in a particular order. If code needs to be shared, simply add the corresponding action to whichever targets need it. If not, just don't include the action in the list for targets that don't need it.

My dream end state here is that there is no distinction between test runners and mozharness scripts. They are both trying to do the same thing (perform setup, launch some code, collect results) so why bother wrapping one around the other? The test harness should just be a mozharness script and vice versa. This would bring actions into test harnesses, and allow mozharness scripts to live in-tree.

Conclusion

Is it possible to avoid code duplication with a project that has IFFY requirements? I think yes. But I still maintain it is exceptionally hard. It wasn't until after it was too late and I had a lot of time to think about it that I realized the mistakes we made. And even with what I know now, I don't think I would have fared much better given the urgency and time constraints we were under. Though next time, I think I'll at least have a better chance.

[1] http://en.wikipedia.org/wiki/DRY_principle
[2] http://joelabrahamsson.com/the-dry-obsession
[3] http://en.wikipedia.org/wiki/SOLID
[4] http://en.wikipedia.org/wiki/Single_responsibility_principle
[5] http://en.wikipedia.org/wiki/Open/closed_principle
[6] http://learnpythonthehardway.org/book/ex44.html
[7] https://github.com/mozilla-b2g/marionette-js-runner
[8] http://visionmedia.github.io/mocha
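
To make the composition idea concrete, here is a small JavaScript sketch of a runner that has a target environment and a list of atomic actions. All names here are hypothetical illustrations of the design described above, not actual mozharness or marionette-js-runner APIs.

// Hypothetical sketch of a composition-based runner. Each action is an
// atomic setup step; targets differ only in their action lists and launch logic.
function setPreferences(env) { env.profile.prefs = { "browser.dom.window.dump.enabled": true }; }
function setEnvironment(env) { env.vars.MOZ_CRASHREPORTER = "1"; }
function parseManifest(env)  { env.tests = ["test_foo.js", "test_bar.js"]; }

// The runner *has a* target environment; it does not inherit from one.
function Runner(target) {
  this.target = target;
}
Runner.prototype.run = function () {
  var env = this.target.createEnvironment();
  this.target.actions.forEach(function (action) { action(env); });
  return this.target.launch(env); // launch tests and collect results
};

var desktopFirefox = {
  actions: [setPreferences, setEnvironment, parseManifest],
  createEnvironment: function () { return { profile: {}, vars: {}, tests: [] }; },
  launch: function (env) { /* start the browser with env, report results */ }
};

var b2g = {
  // No profile prefs needed on device in this sketch: just skip that action.
  actions: [setEnvironment, parseManifest],
  createEnvironment: function () { return { vars: {}, tests: [] }; },
  launch: function (env) { /* boot the emulator/device and push tests */ }
};

new Runner(desktopFirefox).run();
new Runner(b2g).run();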

Henrik Skupin: Firefox Automation report – week 9/10 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 9 and 10. I myself was on vacation for a week - a bit of relaxing before work on the TPS test framework gets started.

Highlights

In preparation for running Mozmill tests for Firefox Metro in our Mozmill-CI system, Andreea has started to get support for Metro builds and the appropriate tests included.

With help from Henrik we got Mozmill 2.0.6 released. It contains a helpful fix for waitForPageLoad(), which lets you know which page was being loaded and its status in case of a failure. This might help us to nail down the intermittent failures when loading remote and even local pages. But the most important part of this release is indeed the support for mozcrash. Even though we cannot have full support yet due to missing symbol files for daily builds on ftp.mozilla.org, we can at least show that a crash happened during a testrun, and let the user know about the local minidump files.
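
For context, a minimal Mozmill-style test that exercises waitForPageLoad() might look like the sketch below (illustrative only; the URL is a placeholder):

var setupModule = function (aModule) {
  aModule.controller = mozmill.getBrowserController();
};

var testLoadPage = function () {
  controller.open("http://example.org/");
  // With Mozmill 2.0.6 a failure here reports the page being loaded
  // and its load status, which helps diagnose intermittent failures.
  controller.waitForPageLoad();
};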

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 9 and week 10.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 9 and week 10.

Gervase Markham: 21st Century Nesting

Our neighbours have acquired a 21st century bird’s nest:

Not only is it behind a satellite dish but, if you look closely, large parts of it are constructed from the wire ties that the builders (who are still working on our estate) use for tying layers of bricks together. We believe it belongs to a couple of magpies, and it contains six (low-tech) eggs.

I have no idea what effect this has on their reception…

Chris Double: Preventing heartbleed bugs with safe programming languages

The Heartbleed bug in OpenSSL has resulted in a fair amount of damage across the internet. The bug itself was quite simple and is a textbook case for why programming in unsafe languages like C can be problematic.

As an experiment to see if a safer systems programming language could have prevented the bug I tried rewriting the problematic function in the ATS programming language. I’ve written about ATS as a safer C before. This gives a real world testcase for it. I used the latest version of ATS, called ATS2.

ATS compiles to C code. The function interfaces it generates can exactly match existing C functions and be callable from C. I used this feature to replace the dtls1_process_heartbeat and tls1_process_heartbeat functions in OpenSSL with ATS versions. These two functions are the ones that were patched to correct the heartbleed bug.

The approach I took was to follow something similar to that outlined by John Skaller on the ATS mailing list:

ATS on the other hand is basically C with a better type system.
You can write very low level C like code without a lot of the scary
dependent typing stuff and then you will have code like C, that
will crash if you make mistakes.

If you use the high level typing stuff coding is a lot more work
and requires more thinking, but you get much stronger assurances
of program correctness, stronger than you can get in Ocaml
or even Haskell, and you can even hope for *better* performance
than C by elision of run time checks otherwise considered mandatory,
due to proof of correctness from the type system. Expect over
50% of your code to be such proofs in critical software and probably
90% of your brain power to go into constructing them rather than
just implementing the algorithm. It's a paradigm shift.

I started with wrapping the C code directly and calling that from ATS. From there I rewrote the C code into unsafe ATS. Once that worked I added types to find errors.

I’ve put the modified OpenSSL code in a github fork. The two branches there, ats and ats_safe, represent the latter two stages in implementing the functions in ATS.

I’ll give a quick overview of the different paths I took then go into some detail about how I used ATS to find the errors.

Wrapping C code

I’ve written about this approach before. ATS allows embedding C directly, so the first step was to embed the dtls1_process_heartbeat C code in an ATS file, call that from an ATS function, and expose that ATS function as the real dtls1_process_heartbeat. The code for this is in my first attempt of d1_both.dats.

Unsafe ATS

The second stage was to write the functions using ATS but unsafely. This code is a direct translation of the C code, with no additional typechecking via ATS features - it uses unsafe ATS code. The rewritten d1_both.dats contains this version.

The code is quite ugly but compiles and matches the C version. When installed on a test system it still shows the heartbleed bug. It uses all the same pointer arithmetic and hard-coded offsets as the C code. Here’s a snippet of one branch of the function:

val buffer = OPENSSL_malloc(1 + 2 + $UN.cast2int(payload) + padding)
val bp = buffer

val () = $UN.ptr0_set<uchar> (bp, TLS1_HB_RESPONSE)
val bp = ptr0_succ<uchar> (bp)
val bp = s2n (payload, bp)
val () = unsafe_memcpy (bp, pl, payload)
val bp = ptr_add (bp, payload)
val () = RAND_pseudo_bytes (bp, padding)
val r = dtls1_write_bytes (s, TLS1_RT_HEARTBEAT, buffer, 3 + $UN.cast2int(payload) + padding)
val () = if r >=0 && ptr_isnot_null (get_msg_callback (s)) then
           call_msg_callback (get_msg_callback (s),
                              1, get_version (s), TLS1_RT_HEARTBEAT,
                              buffer, $UN.cast2uint (3 + $UN.cast2int(payload) + padding), s,
                              get_msg_callback_arg (s))
val () = OPENSSL_free (buffer)

It should be pretty easy to follow this comparing the code to the C version.

Safer ATS

The third stage was adding types to the unsafe ATS version to check that the pointer arithmetic is correct and no bounds errors occur. This version of d1_both.dats fails to compile if certain bounds checks aren’t asserted. If the assertloc at line 123, line 178 or line 193 is removed then a constraint error is produced. This error is effectively preventing the heartbleed bug.

Testable Version

The last stage was to implement the fix to the tls1_process_heartbeat function and factor out some of the helper routines so they could be shared. This is in the ats_safe branch, which is where the newer changes are happening. This version removes the assertloc usage and changes to graceful failure so it could be tested on a live site.

I tested this version of OpenSSL and heartbleed test programs fail to dump memory.

The approach to safety

The tls_process_heartbeat function obtains a pointer to data provided by the sender and the amount of data sent from one of the OpenSSL internal structures. It expects the data to be in the following format:

 byte = hbtype
 ushort = payload length
 byte[n] = bytes of length 'payload length'
 byte[16]= padding

The existing C code makes the mistake of trusting the ‘payload length’ the sender supplies and passes that to a memcpy. If the actual length of the data is less than the ‘payload length’ then random data from memory gets sent in the response.
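
To illustrate the mistake in JavaScript terms (a sketch with a hypothetical record structure, not the actual OpenSSL C code):

// `record` models the received heartbeat message: a byte array plus the
// *actual* number of bytes received (hypothetical structure for illustration).
function processHeartbeat(record) {
  var hbtype = record.data[0];
  // The payload length is read out of the packet itself, so it is
  // attacker-controlled.
  var payloadLength = (record.data[1] << 8) | record.data[2];

  // The heartbleed bug: the buggy code skipped this comparison against
  // the real record length before copying payloadLength bytes.
  if (record.length < 1 + 2 + payloadLength + 16) {
    return null; // silently discard malformed records, as the patched code does
  }

  var payload = record.data.slice(3, 3 + payloadLength);
  return { type: hbtype, payload: payload }; // echoed back to the sender
}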

In ATS pointers can be manipulated at will but they can’t be dereferenced or used unless there is a view in scope that proves what is located at that memory address. By passing around views, and subsets of views, it becomes possible to check that ATS code doesn’t access memory it shouldn’t. Views become like capabilities. You hand them out when you want code to have the capability to do things with the memory safely and take it back when it’s done.

Views

To model what the C code does I created an ATS view that represents the layout of the data in memory:

dataview record_data_v (addr, int) =
  | {l:agz} {n:nat | n > 16 + 2 + 1} make_record_data_v (l, n) of (ptr l, size_t n)
  | record_data_v_fail (null, 0) of ()

A ‘view’ is like a standard ML datatype but exists at type checking time only. It is erased in the final version of the program so has no runtime overhead. This view has two constructors. The first is for data held at a memory address l of length n. The length is constrained to be greater than 16 + 2 + 1 which is the size of the ‘byte’, ‘ushort’ and ‘padding’ mentioned previously. By putting the constraint here we immediately force anyone creating this view to check the length they pass in. The second constructor, record_data_v_fail, is for the case of an invalid record buffer.

The function that creates this view looks like:

implement get_record (s) = let
  val len = get_record_length (s)
  val data = get_record_data (s)
in
  if len > 16 + 2 + 1 then
    (make_record_data_v (data, len) | data, len)
  else
    (record_data_v_fail () | null_ptr1 (), i2sz 0)
end

Here the len and data are obtained from the SSL structure. The length is checked and the view is created and returned along with the pointer to the data and the length. If the length check is removed there is a compile error due to the constraint we placed earlier on make_record_data_v. Calling code looks like:

val (pf_data | p_data, data_len) = get_record (s)

p_data is a pointer. data_len is an unsigned value and pf_data is our view. In my code the pf_ prefix denotes a proof of some sort (in this case the view) and p_ denotes a pointer.

Proof functions

In ATS we can’t do anything at all with the p_data pointer other than increment, decrement and pass it around. To dereference it we must obtain a view proving what is at that memory address. To get specialized views for the data we want I created some proof functions that convert the record_data_v view to views that provide access to memory. These are the proof functions:

(* These proof functions extract proofs out of the record_data_v
   to allow access to the data stored in the record. The constants
   for the size of the padding, payload buffer, etc are checked
   within the proofs so that functions that manipulate memory
   are checked that they remain within the correct bounds and
   use the appropriate pointer values
*)
prfun extract_data_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (array_v (byte, l, n),
                array_v (byte, l, n) -<lin,prf> record_data_v (l,n))
prfun extract_hbtype_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (byte @ l, byte @ l -<lin,prf> record_data_v (l,n))
prfun extract_payload_length_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (array_v (byte, l+1, 2),
                array_v (byte, l+1, 2) -<lin,prf> record_data_v (l,n))
prfun extract_payload_data_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (array_v (byte, l+1+2, n-16-2-1),
                array_v (byte, l+1+2, n-16-2-1) -<lin,prf> record_data_v (l,n))
prfun extract_padding_proof {l:agz} {n:nat} {n2:nat | n2 <= n - 16 - 2 - 1}
               (pf: record_data_v (l, n), payload_length: size_t n2):
               (array_v (byte, l + n2 + 1 + 2, 16),
                array_v (byte, l + n2 + 1 + 2, 16) -<lin, prf> record_data_v (l, n))

Proof functions are run at type checking time. They manipulate proofs. Let’s break down what the extract_hbtype_proof function does:

prfun extract_hbtype_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (byte @ l, byte @ l -<lin,prf> record_data_v (l,n))

This function takes a single argument, pf, that is a record_data_v instance for an address l and length n. This proof argument is consumed: once the function is called, the argument cannot be accessed again (it is a linear proof). The function returns two things. The first is a proof byte @ l which says “there is a byte stored at address l”. The second is a linear proof function that takes the first proof we returned, consumes it so it can’t be reused, and returns the original proof we passed in as an argument.

This is a fairly common idiom in ATS. It takes a proof, destroys it, returns a new one, and provides a way of destroying the new one to bring back the old one. Here’s how the function is used:

prval (pf, pff) = extract_hbtype_proof (pf_data)
val hbtype = $UN.cast2int (!p_data)
prval pf_data = pff (pf)

prval is a declaration of a proof variable. pf is my idiomatic name for a proof and pff is what I use for proof functions that destroy proofs and return the original.

The !p_data is similar to *p_data in C. It dereferences what is held at the pointer. When this happens in ATS it searches for a proof that we can access some memory at p_data. The pf proof we obtained says we have a byte @ p_data so we get a byte out of it.

A more complicated proof function is:

prfun extract_payload_length_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (array_v (byte, l+1, 2),
                array_v (byte, l+1, 2) -<lin,prf> record_data_v (l,n))

The array_v view represents a contiguous array of memory. The three arguments it takes are the type of data stored in the array, the address of the beginning, and the number of elements. So this function consumes the record_data_v and produces a proof saying there is a two element array of bytes held at the 1st byte offset from the original memory address held by the record view. Code holding this proof cannot access the entire memory buffer held by the SSL record; it can only get the 2 bytes at the 1st offset.

Safe memcpy

One more complicated view:

prfun extract_payload_data_proof {l:agz} {n:nat}
               (pf: record_data_v (l, n)):
               (array_v (byte, l+1+2, n-16-2-1),
                array_v (byte, l+1+2, n-16-2-1) -<lin,prf> record_data_v (l,n))

This returns a proof for an array of bytes starting at the 3rd byte of the record buffer. Its length is equal to the length of the record buffer less the size of the padding and the first two data items (the heartbeat type and the payload length). It’s used during the memcpy call like so:

prval (pf_dst, pff_dst) = extract_payload_data_proof (pf_response)
prval (pf_src, pff_src) = extract_payload_data_proof (pf_data)
val () = safe_memcpy (pf_dst, pf_src |
           add_ptr1_bsz (p_buffer, i2sz 3),
           add_ptr1_bsz (p_data, i2sz 3),
           payload_length)
prval pf_response = pff_dst(pf_dst)
prval pf_data = pff_src(pf_src)

By having a proof that provides access to only the payload data area we can be sure that the memcpy can not copy memory outside those bounds. Even though the code does manual pointer arithmetic (the add_ptr1_bsz function) this is safe. An attempt to use a pointer outside the range of the proof results in a compile error.

The same concept is used when setting the padding to random bytes:

prval (pf, pff) = extract_padding_proof (pf_response, payload_length)
val () = RAND_pseudo_bytes (pf |
                            add_ptr_bsz (p_buffer, payload_length + 1 + 2),
                            padding)
prval pf_response = pff(pf)

Runtime checks

The code does runtime checks that constrain the bounds of various length variables:

if payload_length > 0 then
    if data_len >= payload_length + padding + 1 + 2 then
    ...
...

Without those checks a compile error is produced. The original heartbeat flaw was the absence of similar runtime checks. The code as structured can’t suffer from that flaw and still be compiled.
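
For comparison, the official C fix that shipped in OpenSSL 1.0.1g enforces the same bounds discipline, but as an ordinary runtime check that nothing obliges you to write (abridged; the comment is mine):

/* silently discard heartbeats whose claimed payload, plus the
   type byte, length field and minimum padding, doesn't fit in
   the record that actually arrived */
if (1 + 2 + payload + 16 > s->s3->rrec.length)
    return 0;

In the ATS version the type checker refuses to build the code when such a check is missing; in C its absence went unnoticed for over two years.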

Testing

This code can be built and tested. First step is to install ATS2:

$ tar xvf ATS2-Postiats-0.0.7.tgz
$ cd ATS2-Postiats-0.0.7
$ ./configure
$ make
$ export PATSHOME=`pwd`
$ export PATH=$PATH:$PATSHOME/bin

Then compile the openssl code with my ATS additions:

$ git clone https://github.com/doublec/openssl
$ cd openssl
$ git checkout -b ats_safe origin/ats_safe
$ ./config
$ make
$ make test

Try changing some of the code, modifying the constraints and tests, etc., to get an idea of what it is doing.

For testing in a VM, I installed Ubuntu, set up an nginx instance serving an HTTPS site and did something like:

$ git clone https://github.com/doublec/openssl
$ cd openssl
$ git diff 5219d3dd350cc74498dd49daef5e6ee8c34d9857 >~/safe.patch
$ cd ..
$ apt-get source openssl
$ cd openssl-1.0.1e/
$ patch -p1 <~/safe.patch
  ...might need to fix merge conflicts here...
$ fakeroot debian/rules build binary
$ cd ..
$ sudo dpkg -i libssl1.0.0_1.0.1e-3ubuntu1.2_amd64.deb \
               libssl-dev_1.0.1e-3ubuntu1.2_amd64.deb 
$ sudo /etc/init.d/nginx restart

You can then use a heartbleed tester on the HTTPS server and it should fail.

Conclusion

I think the approach of converting unsafe C code piecemeal worked quite well in this instance. Being able to combine existing C code and ATS makes this much easier. I only concentrated on detecting this particular programming error but it would be possible to use other ATS features to detect memory leaks, abstraction violations and other things. It’s possible to get very specific in defining safe interfaces at a cost of complexity in code.

Although I’ve used ATS for production code this is my first time using ATS2. I may have missed idioms and library functions to make things easier so try not to judge the verbosity or difficulty of the code based on this experiment. The ATS community is helpful in picking up the language. My approach was basically to add types, then work through the compiler errors, fixing each one until it built.

One immediate question is “How do you trust your proof?” The record_data_v view and the proof functions that manipulate it define the level of checking that occurs. If they are wrong then the constraints checked by the program will be wrong. It comes down to having a trusted kernel of code (in this case the view and proof functions); users of that kernel can then be trusted to be correct, since incorrect use of the kernel results in compile errors rather than weakened safety. From an auditing perspective it’s easier to check the small trusted kernel and then know the compiler will make sure pointer manipulations are correct.

The ATS specific additions are in the following files:

Anthony HughesGoogle RMA, or how I finally got to use Firefox OS on Wind Mobile

The last 24 hours have really been quite an adventure in debugging. It all started last week when I decided to order a Nexus 5 from Google. It arrived yesterday, on time, and I couldn’t wait to get home to unbox it. Soon after unboxing my new Nexus 5 I discovered something was not well.

After setting up my Google account and syncing all my data I usually like to try out the camera. This did not go very well. I was immediately presented with a “Camera could not connect” error. After rebooting a couple times the error continued to persist.

I then went to the internet to research my problem and got the usual advice: clear the cache, force quit any unnecessary apps, or do a factory reset. Try as I might, all of these efforts would fail. I actually tried a factory reset three times and that’s where things got weirder.

On the third factory reset I decided to opt out of syncing my data and just try the camera with a completely stock install. However, this time the camera icon was completely missing. It was absent from my home screen and the app drawer. It was absent from the Gallery app. The only way I was able to get the Camera app to launch was to select the camera button on my lock screen.

Now that I finally got to the Camera app I noticed it had defaulted to the front camera, so naturally I tried to switch to the rear. However when I tried this, the icon to switch cameras was completely missing. I tried some third party camera apps but they would just crash on startup.

After a couple hours jumping through these hoops between factory resets I was about to give in. I gave it one last-ditch effort and flashed the phone using Google’s stock Android 4.4 factory image. It took me about another hour between getting my environment set up and getting the image flashed to the phone. However the result was the same: missing camera icons and crashing all over the place.

It was now past 1am, I had been at this for hours. I finally gave in and called up Google. They promptly sent me an RMA tag and I shipped the phone back to them this morning for a full refund. And so began the next day of my adventure.

I was now at a point where I had to decide what I wanted to do. Was I going to order another Nexus 5 and trust that one would be fine or would I save myself the hassle and just dig out an old Android phone I had lying around?

I remembered that I still had a Nexus S which was perfectly fine, albeit getting a bit slow. After a bit of research on MDN I decided to try flashing the Nexus S to use B2G. I had never successfully flashed any phone to B2G before and I thought yesterday’s events might have been pushing toward this moment.

I followed the documentation, checked out the source code, sat through the lengthy config and build process (this took about 2 hours), and pushed the bits to my phone. I then swapped in my SIM card and crossed my fingers. It worked! It seemed like magic, but it worked. I can again do all the things I want to: make phone calls, take pictures, check email, and tweet to my heart’s content; all on a phone powered by the web.

I have to say the process was fairly painless (apart from the hours spent troubleshooting the Nexus 5). The only problem I encountered was a small hiccup in the config.sh process. Fortunately, I was able to work around this pretty easily thanks to Bugzilla. I can’t help but recognize my success was largely due to the excellent documentation provided by Mozilla and the work of developers, testers, and contributors alike who came before me.

I’ve found the process to be pretty rewarding. I built B2G, which I’ve never succeeded at before; I flashed my phone, which I’ve never succeeded at before; and I feel like I learned something new today.

I’ve been waiting a long time to be able to test B2G 1.4 on Wind Mobile, and now I can. Sure I’m sleep deprived, and sure it’s not an “official” Firefox OS phone, but that does not diminish the victory for me; not one bit.


Gervase MarkhamCopyright and Software

As part of our discussions on responding to the EU Copyright Consultation, Benjamin Smedberg made an interesting proposal about how copyright should apply to software. With Chris Riley’s help, I expanded that proposal into the text below. Mozilla’s final submission, after review by various parties, argued for a reduced term of copyright for software of 5-10 years, but did not include this full proposal. So I publish it here for comment.

I think the innovation, which came from Benjamin, is the idea that the spirit of copyright law means that proprietary software should not be eligible for copyright protections unless the source code is made freely available to the public by the time the copyright term expires.

We believe copyright terms should be much shorter for software, and that there should be a public benefit tradeoff for receiving legal protection, comparable to other areas of IP.

We start with the premise that the purpose of copyright is to promote new creation by giving authors an exclusive right, but that this right is necessarily time-limited because the public as a whole benefits from the public domain and the free sharing and reproduction of works. Given this premise, copyright policy has failed in the domain of software. All software has a much, much shorter life than the standard copyright term; by the end of the period, there is no longer any public benefit to be gained from the software entering the public domain, unlike virtually all other categories of copyrighted works. There is already more obsolete software out there than anyone can enumerate, and software as a concept is barely even 50 years old, so none is in the public domain. Any which did fall into the public domain after 50 or 70 years would be useful to no-one, as it would have been written for systems long obsolete.

We suggest two ideas to help the spirit of copyright be more effectively realized in the software domain.

Proprietary software (that is, software for which the source code is not immediately available for reuse anyway) should not be eligible for copyright protections unless the source code is made freely available to the public by the time the copyright term expires. Unlike a book, which can be read and copied by anyone at any stage before or after its copyright expires, software is often distributed as binary code which is intelligible to computers but very hard for humans to understand. Therefore, in order for software to properly fall into the public domain at the end of the copyright term, the source code (the human-readable form) needs to be made available at that time – otherwise, the spirit of copyright law is not achieved, because the public cannot truly benefit from the copyrighted material. An escrow system would be ideal to implement this.

This is also similar to the tradeoff between patent law and trade secret protection; you receive a legal protection for your activity in exchange for making it available to be used effectively by the broader public at the end of that period. Failing to take that tradeoff risks the possibility that someone will reverse engineer your methods, at which point they are unprotected.

Separately, the term of software copyright protection should be made much shorter (through international processes as relevant), and fixed for software products. We suggest that 14 years is the most appropriate length. This would mean that, for example, Windows XP would enter the public domain in August 2015, which is a year after Microsoft ceases to support it (and so presumably no longer considers it commercially viable). Members of the public who wish to continue to run Windows XP therefore have an interest in the source code being available so technically-capable companies can support them.

Christie KoehlerAn Explanation of the Heartbleed bug for Regular People

I’ve put this explanation together for those who want to understand the Heartbleed bug, how it fits into the bigger picture of secure internet browsing, and what you can do to mitigate its effects.

HTTPS vs HTTP (padlock vs no padlock)

When you are browsing a site securely, you use https and you see a padlock icon in the url bar. When you are browsing insecurely you use http and you do not see a padlock icon.

Firefox url bar for HTTPS site (above) and non-HTTPS (below).

HTTPS relies on something called SSL/TLS.

Understanding SSL/TLS

SSL stands for Secure Sockets Layer and TLS stands for Transport Layer Security. TLS is the later version of the original, proprietary, SSL protocol developed by Netscape. Today, when people say SSL, they generally mean TLS, the current, standard version of the protocol.

Public and private keys

The TLS protocol relies heavily on public-key or asymmetric cryptography. In this kind of cryptography, two separate but paired keys are required: a public key and a private key. The public key is, as its name suggests, shared with the world and is used to encrypt plain-text data or to verify a digital signature. (A digital signature is a way to authenticate identity.) A matching private key, on the other hand, is used to decrypt data and to generate digital signatures. A private key should be safeguarded and never shared. Many private keys are protected by pass-phrases, but merely having access to the private key means you can likely use it.

Authentication and encryption

The purpose of SSL/TLS is to authenticate and encrypt web traffic.

Authenticate in this case means “verify that I am who I say I am.” This is very important because when you visit your bank’s website in your browser, you want to feel confident that you are visiting the web servers of — and thereby giving your information to — your actual bank and not another server claiming to be your bank. This authentication is achieved using something called certificates that are issued by Certificate Authorities (CA). Wikipedia explains thusly:

The digital certificate certifies the ownership of a public key by the named subject of the certificate. This allows others (relying parties) to rely upon signatures or assertions made by the private key that corresponds to the public key that is certified. In this model of trust relationships, a CA is a trusted third party that is trusted by both the subject (owner) of the certificate and the party relying upon the certificate.

In order to obtain a valid certificate from a CA, website owners must submit, at minimum, their server’s public key and demonstrate that they have access to the website (domain).

Encrypt in this case means “encode data such that only authorized parties may decode it.” Encrypting internet traffic is important for sensitive or otherwise private data because it is trivially easy to eavesdrop on internet traffic. Information transmitted without SSL is usually sent in plain text and as such is clearly readable by anyone. This might be acceptable for general internet browsing. After all, who cares who knows which NY Times article you are reading? But it is not acceptable for a range of private data including user names, passwords and private messages.

Behind the scenes of an SSL/TLS connection

When you visit a website with HTTPS enabled, a multi-step process occurs so that a secure connection can be established. During this process, the server and client (browser) send messages back and forth in order to a) authenticate the server’s (and sometimes the client’s) identity and b) negotiate what encryption scheme, including which cipher and which key, they will use for the session. Identities are authenticated using the digital certificates mentioned previously.

When all of that is complete, the secure connection is established and the server and client send traffic back and forth to each other.

All of this happens without you ever knowing about it. Once you see your bank’s login screen the process is complete, assuming you see the padlock icon in your browser’s url bar.

Keepalives and Heartbeats

Even though establishing an SSL connection happens almost imperceptibly to you, it does have an overhead in terms of computer and network resources. To minimize this overhead, network connections are often kept open and active until a given timeout threshold is exceeded. When that happens, the connection is closed. If the client and server wish to communicate again, they need to re-negotiate the connection and re-incur the overhead of that negotiation.

One way to forestall a connection being closed is via keepalives. A keepalive message is used to tell a server “Hey, I know I haven’t used this connection in a little while, but I’m still here and I’m planning to use it again really soon.”

Keepalive functionality was added to the TLS protocol specification via the Heartbeat Extension. Instead of “Keepalives,” they’re called “Heartbeats,” but they do basically the same thing.

Specification vs Implementation

Let’s pause for a moment to talk about specifications vs implementations. A protocol is a defined way of doing something. In the case of TLS, that something is encrypted network communications. When a protocol is standardized, it means that a lot of people have agreed upon the exact way that protocol should work and this way is outlined in a specification. The specification for TLS is collaboratively developed, maintained and promoted by the standards body Internet Engineering Task Force (IETF). A specification in and of itself does not do anything. It is a set of documents, not a program. In order for a specification to do something, it must be implemented by programmers.

OpenSSL implementation of TLS

OpenSSL is one implementation of the TLS protocol. There are others, including the open source GnuTLS as well as proprietary implementations. OpenSSL is a library, meaning that it is not a standalone software package, but one that is used by other software packages. These include the very popular webserver Apache.

The Heartbleed bug only applies to webservers with SSL/TLS enabled, and only those using specific versions of the open source OpenSSL library, because the bug relates to an error in the code of that library, specifically the heartbeat extension code. It is not related to any errors in the TLS specification or in any of the underlying cipher suites.

Usually this would be good news. However, because OpenSSL is so widely used, particularly the affected versions, this simple bug has tremendous reach in terms of the number of servers, and therefore the number of users, it potentially affects.

What the heartbeat extension is supposed to do

The heartbeat extension is supposed to work as follows:

  • A client sends a heartbeat message to the server.
  • The message contains two pieces of data: a payload and the size of that payload. The payload can be anything up to 64KB.
  • When the server receives the heartbeat message, it is to add a bit of extra data to it (padding) and send it right back to the client.

Pretty simple, right? Heartbeat isn’t supposed to do anything other than let the server and client know they are each still there and accepting connections.
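
For the curious, the message layout can be sketched as a C struct (illustrative only – the real wire format is defined in RFC 6520):

struct heartbeat_message {
    unsigned char  type;            /* request or response */
    unsigned short payload_length;  /* the size the sender claims the payload is */
    unsigned char  payload[];       /* 'payload_length' bytes, then at least 16 bytes of padding */
};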

What the heartbeat code actually does

In the code for affected versions (1.0.1-1.0.1f) of the OpenSSL heartbeat extension, the programmer(s) made a simple but horrible mistake: They failed to verify the size of the received payload. Instead, they accepted what the client said was the size of the payload and returned this amount of data from memory, thinking it should be returning the same data it had received. Therefore, a client could send a payload of 1KB, say it was 64KB and receive that amount of data back, all from server memory.
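
In rough, simplified C – the names here are mine, not the real OpenSSL identifiers – the flawed logic boils down to this:

/* take the length field at face value (attacker-controlled)... */
payload_length = claimed_length_from_message;  /* hypothetical helper/field */
/* ...and copy that many bytes into the response, whatever actually arrived */
memcpy(response_payload, received_payload, payload_length);

The missing step is a check that payload_length is no larger than the amount of data the client really sent.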

If that’s confusing, try this analogy: Imagine you are my bank. I show up and make a deposit. I say the deposit is $64, but you don’t actually verify this amount. Moments later I request a withdrawal of the $64 I say I deposited. In fact, I really only deposited $1, but since you never checked, you have no choice but to give me $64, $63 of which doesn’t actually belong to me.

And this is exactly how someone could exploit this vulnerability. What comes back from memory doesn’t belong to the client that sent the heartbeat message, but it’s given a copy of it anyway. The data returned is random, but it is data that the OpenSSL library had been storing in memory. This can include pre-encryption (plain-text) data, such as your user names and passwords. It could also technically be your server’s private key (because that is used in the securing process) and/or your server’s certificate (which is also not something you should share).

The ability to retrieve a server’s private key is very bad because that private key could be used to decrypt all past, present and future traffic to the server. The ability to retrieve a server’s certificate is also bad because it gives the ability to impersonate that server.

This, coupled with the widespread use of OpenSSL, is why this bug is so terribly bad. Oh, and it gets worse…

Taking advantage of this vulnerability leaves no trace

What’s worse is that logging isn’t part of the Heartbeat extension. Why would it be? Keepalives happen all the time and generally do not represent transmission of any significant data. There’s no reason to take up valuable time accessing the physical disk or taking up storage space to record that kind of information.

Because there is no logging, there is no trace left when someone takes advantage of this vulnerability.

The code that introduced this bug has been part of OpenSSL for 2+ years. This means that any data you’ve communicated to servers with this bug since then has the potential to be compromised, but there’s no way to determine definitively if it was.

This is why most of the internet is collectively freaking out.

What do server administrators need to do?

Server (website) administrators need to, if they haven’t already:

  1. Determine whether or not their systems are affected by the bug. (test)
  2. Patch and/or upgrade affected systems. (This will require a restart)
  3. Revoke and reissue keys and certificates for affected systems.

Furthermore, I strongly recommend you enable Perfect forward secrecy to safeguard data in the event that a private key is compromised:

When an encrypted connection uses perfect forward secrecy, that means that the session keys the server generates are truly ephemeral, and even somebody with access to the secret key can’t later derive the relevant session key that would allow her to decrypt any particular HTTPS session. So intercepted encrypted data is protected from prying eyes long into the future, even if the website’s secret key is later compromised.

What do users (like me) need to do?

The most important thing regular users need to do is change your passwords on critical sites that were vulnerable (but only after they’ve been patched). Do you need to change all of your passwords everywhere? Probably not. Read You don’t need to change all your passwords for some good tips.

Additionally, if you’re not already using a password manager, I highly recommend LastPass, which is cross-platform and works on pretty much every device. Yesterday LastPass announced they are helping users to know which passwords they need to update and when it is safe to do so.

If you do end up trying LastPass, check out my guide for setting it up with two-factor auth.

Further Reading


If you like visuals, check out this great video showing how the Heartbleed exploit works.

If you’re interested in learning more about networking, I highly recommend Ilya Grigorik‘s High Performance Browser Networking, which you can also read online for free.

If you want some additional technical details about Heartbleed (including actual code!) check out these posts:

Oh, and you can listen to Kevin and me talk about Heartbleed on In Beta episode 96, “A Series of Mathy Things.”


Henrik SkupinFirefox Automation report – week 7/8 2014

The current workload is still cutting into my time for getting out our automation status reports. The following updates are a bit old but still worth mentioning. So let’s get them out.

Highlights

As mentioned in my last report, we had issues with Mozmill on Windows while running our restart tests. During a restart of Firefox, Mozmill wasn’t waiting long enough to detect that the old process was gone, so a new instance was started immediately. Sadly that instance failed with the “profile already in use” error. The reason was broken handling of process.poll() in mozprocess, which Henrik fixed in bug 947248. Thankfully no new Mozmill release was necessary; we only re-created our Mozmill environments with the new mozprocess version.

With the ongoing pressure to get automated tests implemented for the Metro mode of Firefox, our Softvision automation team members concentrated their work on adding new library modules and enhancing existing ones. Another big step was the addition of Metro support to our Mozmill dashboard by Andrei. With that we got new filter settings to show only results for Metro builds.

When Henrik noticed one morning that our Mozmill-CI staging instance was running out of disk space, he did some investigation and saw that we were using the identical build retention settings as production. With no more than 40GB of disk space we were trying to keep the latest 100 builds for each job. That certainly won’t work, so Cosmin reduced the number of retained builds to 5. Besides this we were also able to fix our l10n testrun for localized builds of Firefox Aurora.

Given that we stopped support for Mozmill 1.5 and moved on to Mozmill 2.0 a while ago, we totally missed updating our Mozmill tests documentation on MDN. Henrik has been working on getting all of it up to date, so new community members won’t struggle anymore.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 7 and week 8.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda and notes from the Firefox Automation meetings of week 7 and week 8.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [924265] add an alert date field/mechanism to the “Infrastructure & Operations” product
  • [987765] Updates to Developer Documentation product’s “form.doc” form and configuration
  • [891757] “Additional hours worked” displayed inline with comments is now superfluous
  • [986590] Confusing error message when not finding reviewer
  • [987940] arbitrary product name (text) injection in guided workflow
  • [988175] Needinfo dropdown should include “myself”
  • [961843] Reset password token leakage
  • [984505] Link component and product to browse for other bugs in this category
  • [987521] flag activity api needs to prohibit requests which return the entire table
  • [990982] Use <optgroup/> to group products into classifications in the product drop-down on show_bug.cgi
  • [991477] changing a tracking flag’s value doesn’t result in the value being updated on bugs
  • [988414] The drop down menu icon next to the user name is not visible in Chrome on Mac OSX
  • [994467] remove “anyone” from the needinfo menu

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Selena DeckelmannPython Core Summit: notes from my talk today

I gave a short talk today about new coders and contributors to developer documentation. Here are my notes!

Me: Selena Deckelmann. Data Architect, Mozilla. Major contributor to PostgreSQL; PyLadies organizer in Portland, OR.

Focusing on Documentation, Teaching and Outreach

Two main forks of thought around teaching and outreach:

  1. Brand new coders: PyLadies, Software Carpentry and University are the main communities represented
  2. New contributors to Python & ecosystem

1. Brand new coders: PyLadies, Software Carpentry and University are the main communities represented

(a) Information architecture of the website

Where do you go if you are a teacher or want to teach a workshop? Totally unclear on python.org. The website could really use a section, or a microsite, for this.

Version 2 vs 3 is very confusing for new developers. Most workshops default to 2; some workshops now require 3. Maybe clearly mark which version each workshop uses. Generally this is a very confusing issue when encountering the site for the first time.

Possible solution: Completely separate “brand new coder” tutorial. Jessica McKellar would like to write this.

(b) Packaging and Installation problems — see earlier long conversation in this meeting about this. Many problems linked to having to compile C code while installing with pip

(c) New coder contribution can come through documenting issues around install and setup. We could make this easier — maybe direct initial reports to stack overflow, and then float solutions to bugs.python.org

2. New contributors to Python & ecosystem — with a focus on things useful for keeping documentation and tutorials up-to-date and relevant

(a) GNOME Outreach Program for Women: Python is participating!

More people from core should participate as mentors! PSF is funding 2-3 students this cycle, Twisted has participated for a while and had a great experience. This program is great because:

  • Supports code and non-code contribution
  • Developer community seems very cohesive, participants seem to join communities and stick around
  • Strong diversity support
  • Participants don’t have to be students
  • Participants are paid for 3 months
  • Participants come from geographically diverse communities
  • To participate, applicants must submit a patch or provide some other pre-defined contribution before their application is even accepted

Jessica McKellar and Lynn Root are mentors for Python itself. See them for more details about this round! Selena is a coordinator and former mentor for Mozilla’s participation and also available to answer questions.

(b) Write the Docs conference is a python-inspired community around documentation.

(c) Openstack – Anne Gentle & her blog. 3-year participant in OpenStack community and great resource for information about building technical documentation community.

(d) Better tooling for contribution could be a great vector for getting new contributors.

  • Wiki is a place for information to go and die (no clear owners, neglected SEO etc) – Maybe separate documentation repos from core code repos for tutorials
  • carefully consider the approval process – put the people who are most dedicated to maintaining the tutorials in charge of maintaining them

(e) bugs.python.org

Type selection is not relevant to ‘documentation’ errors/fixes. Either remove ‘type’ from the UI or provide relevant types. I recommend removing ‘type’ as a required (or implied required) form field when entering a bug.

The larger issue here is around how we design for contribution of docs:

  • What language do we use in our input systems?
  • What workflow do we expect technical writers to follow to get their contributions included?
  • What is the approval process?

Also see the “tooling for contribution” notes above.

Gervase MarkhamRecommended Reading

This response to my recent blog post is the best post on the Brendan situation that I’ve read from a non-Mozillian. His position is devastatingly understandable.

Joel Maheris a phone too hard to use?

Working at Mozilla, I get to see a lot of great things.  One of them is collaborating with my team (as we are almost all remoties) and I have been doing that for almost 6 years.  Sometime around 3 years ago we switched to using Vidyo as a way to communicate in meetings.  This is great, we can see and hear each other.  Unfortunately heartbleed came out and affected Mozilla’s Vidyo servers.  So yesterday and today we have been without Vidyo.

Now I am getting meeting cancellation notices, why are we cancelling meetings?  Did meetings not happen 3 years ago?  Mozilla actually creates an operating system for a … phone.  In fact our old teleconferencing system is still in place.  I thought about this earlier today and wondered why we are cancelling meetings.  Personally I always put Vidyo in the background during meetings and keep IRC in the foreground.  Am I a minority?

I am not advocating for scrapping Vidyo; instead I would like to attend meetings, and if we find they cannot be held without Vidyo, we should cancel them (and not reschedule them).

Meetings existed before Vidyo and Open Source existed before GitHub, we don’t need the latest and greatest things to function in life. Pick up a phone and discuss what needs to be discussed.


Robert O'CallahanGetting Back To Work

... is what we need now. So let me give a brief summary of what's happening with me work-wise.

Last year I fully divested my direct reports. No more management work to do, yay! I think they probably all ended up with a better manager than they had.

I've been involved in a lot of different projects and a lot of helping other people get their work done. In between I managed to carve out a few things of my own:

  • I built reftests for async panning to try to stem the tide of regressions we were encountering there. Unfortunately that's not quite done because most of those tests aren't running on TBPL yet.
  • I worked on CSS scroll snapping with an intern. Unfortunately the spec situation got bogged down; there's an impasse between us and Microsoft over the design of the CSS feature, and Google has decided not to do a CSS feature at all for now and try to do it with JS instead. I'm skeptical that will work well, and am looking forward to their proposal, but it's taking a while.
  • I landed an implementation of the CSS OM GeometryUtils interface, described in hacks.mozilla.org blog posts here and here. This fixes a functionality gap in the Web platform and was needed by our devtools team. Almost everything you would need to know about the CSS box geometry of your page is now easy to get.
  • Lately I've been doing work on rr. People are trying to use it, and they've been uncovering bugs and inconveniences that Chris Jones and I have been fixing as fast as we can. Terrence Cole used it successfully to help fix a JS engine bug! I'm using it a lot myself for general-purpose debugging, and enjoying it. I want to spend a few more days getting some of the obvious rough edges we've found filed off and then make a new release.

Looking forward, I expect to be working on making our async scrolling fully generic (right now there are some edge cases where we can't do it), and working on some improvements to our MediaStreamGraph code for lower latency video and audio.

Robert O'CallahanFighting Media Narratives

... is perhaps futile. A lot of what I have to say has already been said. Yet, in case it makes a difference:

  • Almost all Mozilla staff supported keeping Brendan Eich as CEO, including many prominent LGBT staff, and many made public statements to that effect. A small number of Tweeters calling for him to step down got all the media attention. The narrative that Mozilla staff as a group "turned against Brendan" is false. It should go without saying, but most Mozilla staff, especially me, are very upset that he's gone. I've known him, worked with him and fought alongside him (and sometimes against him :-) ) for fourteen years and having him ripped away like this is agonizing.
  • The external pressure for Brendan to step down was the primary factor driving the entire situation. The same issue exploded in 2012 but there was less pressure and he got through it. No doubt Mozilla could have handled it better but the narrative that blames Mozilla for Brendan's departure misses the big picture. Boycotting Mozilla (or anyone for that matter) for cracking under intense pressure is like shooting a shell-shocked soldier.
  • As a Christian, Mozilla is as friendly a workplace as any tech organization I've known --- which is to say, not super friendly, but unproblematic. Because of our geographic spread --- not just of headcount, but of actual power --- and our broad volunteer base I think we have more real diversity than many of our competitors. The narrative that Mozilla as a group has landed on one side of the culture war is false, or at least no more true than for other tech organizations. In fact one thing I've really enjoyed over the last couple of weeks is seeing a diverse set of Mozilla people pull together in adversity and form even closer bonds.

I'll also echo something else a lot of people are saying: we have to fix Internet discourse somehow. It's toxic. I wrote about this a while back, and this episode has made me experience the problem at a whole new level. I'll throw one idea out there: let's communicate using only recorded voice+video messages, no tweets, no text. If you want to listen to what I have to say, you have to watch me say it, and hopefully that will trigger some flickers of empathy. If you want me to listen to you, you have to show me your face. Want to be anonymous, do it the old-fashioned way and wear a mask. Yeah I know we'd have to figure out searchability, skimmability, editing, etc etc. Someone get to work on it.

Gervase MarkhamHeartbleed

For the good of the Internet, someone needs to write an SSL library in Rust. Who would be up to such a task?

Christian HeilmannOn Windows XP and IE6

On Tuesday, Microsoft announced the end of support for Windows XP. For web developers, this meant much rejoicing as we are finally rid of the yoke that is Internet Explorer 6 and can now use all the cool things HTML5, CSS3 and other tech has to offer. Right? Maybe.

xp

When I started web development my first real day-to-day browser was IE4 and then Netscape Navigator Gold, moving on to Netscape Communicator 4. I saw the changes of IE5, 5.5 and finally IE6. I was pretty blown away by the abilities IE6 had. You had filters, page transitions, rotation, blurring, animation using marquee and even full-screen applications using the .hta extension. In these applications you had full JScript access to the system: you could read and write files, traverse folders and much more. Small detail: so could attackers, as the security model wasn’t the best, but hey, details…

None of this was a standard, and none of it got taken on by other browsers. That probably wasn’t possible as features of browsers were the main differentiator and companies protected their USPs.

IE was never and will never be just a browser: it is an integral part of the operating system itself. For better or worse, Microsoft chose to make the web consumption tool also the file browsing and document display tool. Many of the very – at that time – futuristic features of IE6 were in there as they were needed for Powerpoint-style presentations.

That’s why the end of XP is a light at the end of the tunnel for all those suffering the curse that is IE6. Many users just didn’t bother upgrading their browser as what the OS came with was good enough.

A cracker’s paradise

Of course we now have a security problem: not all XP installs will be replaced and the lack of security patches will result in many a hacked machine. Which is scary seeing that many ATMs run on XP and lots of government computers (the UK government alone spent 5.5m GBP on getting extended support for XP as moving on seems to be hard to do with that many machines and that much red tape). XP and IE6 weren’t just a nuisance for web developers – they have been a real threat to internet security and people’s online identities for a long time now.

The fast innovator in a closed environment dilemma

You can say what you want about IE6 – and it has been a running joke for a long time – but having it, and having it as the nemesis of web standards based browsers (Opera, Netscape 6 and subsequently Firefox), taught us a lot. Having a browser that dared to dabble with applications in HTML years before the W3C widget spec or Adobe Air was interesting. Having a browser in the operating system that naturally was the first thing people clicked to go online helped the internet’s popularity. It didn’t help the internet as a whole though.

The big issue of course was that people didn’t upgrade and the OS didn’t force-upgrade the browser. This meant that companies had a fixed goal to train people on: if it works in IE6, it is good enough for us. That’s why we have hundreds of large systems that only work in IE. Many of those are enterprise systems: CRM, Asset management, Ticketing, CMS, Document management – all these fun things with lots of menus and trees and forms with lots of rules.

Nobody likes using these things. People don’t care for them, they just see them as a necessary thing to do their job and something created by magical hairy pixies called the IT department. When you don’t like something but need to use it, any change in it is scary, which is why a lot of attempts to replace these systems with more user-friendly and cross-platform ones are met with murmurings or open revolt. I call this the Stockholm syndrome of interfaces: I suffered it for years, so I must like it, right? All the other stuff means more work.

Back to the browser thing though: The issue wasn’t IE6, the issues were its ubiquity, an audience that wasn’t quite web savvy yet and didn’t crave choice but instead used what was there, and Microsoft’s tooling centering around creating amazing things for IE first and foremost and maybe a fallback for other browsers. The tools locked into IE6 were most of the time not created by web developers, but by developers of .NET, classic ASP, Sharepoint and many other – great – tools for the job at hand. Everything seemed easy, the tools seemed far superior to those that cover several targets, and when you stayed inside the ecosystem things were a breeze. You didn’t even have to innovate yourself – you just waited until the platform added the next amazing feature as part of the build process (this even happened at awesome events that only cost your employer money and meant you got an awesome T-shirt to boot). Sounds eerily familiar to what’s happening now in closed platforms and abstracted developer tools, doesn’t it? Look – it’s the future, now – if you use platform x or browser y.

What should we take away from this?

Which brings me to what we should take away from these years of building things for a doomed environment: browsers change, operating systems change, form factors change. What we think is state-of-the-art and the most amazing thing right now will be laughable at best or destructive to innovation at worst just a year from now.

And it is not Microsoft’s fault alone. Microsoft have seen the folly of their ways (OK, with some lawsuits as extra encouragement) and did a great job telling people to upgrade their systems and stop targeting OldIE. They understand that not every developer uses Windows and made testing with virtualisation much easier. They are also much more open in their messaging about what standards new IE supports. If they understand this, we should, too.

Here are the points we should keep in our heads:

  • Bolting a browser into an operating system makes it harder to upgrade it – you see this now in Android stock browsers or iOS. Many of the amazing features of HTML5 need to be poly-filled, not for old IE, but for relatively new browsers that will not get upgraded because the OS can’t get updated (at times on hardware that was $500 just a few months ago)
  • Building software for the current state of the browser is dangerous – especially when you can’t trust the current state to even be stable. Many solutions relying on the webkit prefix functionality already look as silly as a “if (document.layers || document.all) {}” does.
  • Stop pretending you can tell end users what browser to use – this is sheer arrogance. Writing software means dealing with the unknown and preparing for it. Error handling is more important than the success case. Great UX is invisible – the thing just works. Bad error handling creates unhappy users and there is nothing more annoying than being on a pay-by-the-minute connection in a hotel to be told I need to use another browser or update mine. Stop pretending your work is so important people have to change for you if all you need to do is be more creative in your approach.

There are only a few of us unlucky enough to have to support IE6 in a pixel-perfect manner right now. The death of XP wasn’t the big liberation we really needed. And by all means it should not mean that you write web apps and web sites now that rely on bleeding edge technology in newer browsers without testing for it. This will never go away, and it shouldn’t. It makes us craftsmen, it keeps us on the ball. We need to think before we code, and – to me – that is never a bad idea.

The rules did not change:

  • HTML is king – it will display when everything else fails, it will perform amazingly well.
  • Progressive Enhancement means you write for now and for tomorrow – expect things to break, and plan for it, and you can never be surprised.
  • Browser stats are fool’s gold – who cares how many people in North America with a certain statistics package installed use browser x or browser y. What do your end-users use? Optimise for form factors and interaction, not for browsers. These will always change.
  • Writing for one browser helps that one in the competition with others, but it hurts the web as a whole – we’re right now in a breakneck speed rat-race about browser innovation. This yields a lot of great data but doesn’t help developers if the innovations vanish a few versions later. We have jobs to do and projects to deliver. There is not much time to be guinea pigs
  • Real innovation happens when we enhance the platform – we need WebComponents in the browsers, we need WebRTC in browsers, we need a more stable Offline and Local Storage solution. What we don’t need is more polyfills as these tend to become liabilities.

So, RIP XP, thanks for all the hardship and confusion that made us analyse what we do and learn from mistakes. Let’s crack on with it and not build the next XP/IE6 scenario because we like our new shiny toys.

Will Kahn-Greenepyvideo status: April 9th, 2014

What is pyvideo.org

pyvideo.org is an index of Python-related conference and user-group videos on the Internet. Saw a session you liked and want to share it? It's likely you can find it, watch it, and share it with pyvideo.org.

Status

I fixed a few issues and finally (finally) pushed out major site updates. Some of them are implemented in the worst possible way (e.g. facet filters for the search page), but some of them are great (e.g. Amara subtitle support).

I'm still struggling with a lot of technical debt on the site and a lack of time to really focus on it. That's mostly what's been making fixing the issues, improving the site and adding conferences take so long.

Sheila and I will be at PyCon US and hanging around for sprint days. If anyone is interested in sprinting, we'll be there. Even if we don't get any coding done, figuring out how to solve some of the bigger problems and planning what should be done in the next year would be a huge accomplishment.

If you're at PyCon and see either of us, feel free to give us a piece of your mind in regards to how you use PyVideo and what could be better.

Robert O'CallahanMozilla Matters

How much does the world need Mozilla? A useful, if uncomfortable, thought experiment is to consider what the world would be like without Mozilla.

Consider the world of Web standards. Microsoft doesn't contribute much to developing new Web features, and neither does Apple these days. Mozilla and Google do. Google, per Blink's own policy (mirroring our own), relies on feedback and implementation by other browser vendors, i.e. usually us. If you take Mozilla out of the equation, it's going to be awfully hard to apply the "two independent implementations" test for new Web features. (Especially since Webkit and Blink still have so much shared heritage.) For example it's hard to see how important stuff like Web Audio, WebGL and WebRTC would have become true multi-vendor standards without us. Without us, most progress would depend on unilateral extensions by individual vendors. That has all the downsides of a single-implementation ecosystem --- a single implementation's bugs become the de-facto standard; the standards process, if there even is one, becomes irrelevant; and even more power accrues to the dominant vendor.

In the bigger picture, it would be dangerous to leave the Web --- and the entire application platform space --- in the hands of three very large US corporations who have few scruples and who each have substantial non-Web interests to protect, including proprietary desktop and mobile platforms. It has always been very important that there be a compelling vendor-neutral platform for people to deploy content and apps on, a platform without gatekeepers and without taxes. The Web is that platform --- for now. Mozilla is dedicated to preserving and enhancing that, but the other vendors are not.

Mozilla has plenty of faults, and lots of challenges, but our mission is as important as ever ... probably more important, given how computing devices and the Internet penetrate more and more of our lives. More than ever, pursuing our mission is the greatest good I can do with the talents God has given me.

Will Kahn-GreeneSite development using pagekite

Problem

I have this basic problem where I do a lot of web-site work and I need to show people what I've done so far so they can review it and help me make it better or make it suit their needs better. Screenshots aren't very helpful because the site is interactive. Further, the site needs to get tested on multiple devices/platforms/browsers. Also, I need to make sure that the site is only accessed via https.

What I've been doing up to now is failing miserably: I'd push work to our staging server for people to test out, but that sucks as an answer and affects my co-workers and makes a mess of our staging server. Plus iterating on things is difficult.

So, requirements:

  1. endpoint must be https-only
  2. must be easy to set up and take down
  3. must be easy to access so people can easily test things on my local machine

Solution

I looked around, and this would be pretty easy to do if I didn't have the https-only requirement. That requirement makes things difficult without a lot of work.

Then I found pagekite. They make it really easy.

Here's how you set it up:

  1. Download and install the pagekite software: http://pagekite.net/downloads/

  2. Run your website. In my case, I'm working on Django sites, so I launch like this:

    $ ./manage.py runserver
    

    That runs the Django project I'm working on on localhost:8000.

  3. Run pagekite:

    $ pagekite.py 8000 YOUR_NAME.pagekite.me:443
    

    That creates a tunnel from your machine to the pagekite.me server. When someone accesses https://YOUR_NAME.pagekite.me/, the request goes through the tunnel to your pagekite backend and that performs the request over http to your local webserver (in my case, the Django project) bound to localhost:8000.

    Access is https-only. If anyone tries to access http://YOUR_NAME.pagekite.me/, then they get a connection error.

    The https-only requirement is satisfied by restricting the kite to only listening to port 443--the https port. That's pretty key.

This lets me run my Django project locally on http without dealing with self-signed certificates, but still require https access so data isn't floating around in clear text.

The one problem with this is that my local server thinks it's running plain http, so any redirects that include the protocol go to http rather than https.

If you don't already have an account, I'm pretty sure step 3 will walk you through setting one up. Free accounts are limited in what they can do.

Also, they hang out on #pagekite on Freenode. I had a problem, asked a question and got a super helpful reply. The code is Open Source, so it's possible to look through it and debug it.

I'll be using this going forward.

Why write this?

This is a common use case for web developers. I figured I'd write this up because the https-only part is pretty key and it was the part that I had to ask for help with.

Kumar McMillan: How To Protect Against Heartbleed And Other Vulnerabilities

The OpenSSL heartbleed bug was a serious kick to the Internet's collective ass. This video provides a quick overview if you want the details. In summary, an attacker could craft a payload with a fake size (up to 64k) and trick openssl into sending a random chunk of server memory. WTF?! To understand how bad this was I spent a minute hacking on this script that was going around. I pointed it at login.yahoo.com (which is no longer vulnerable) and tried to see if I could catch a username and password flying by. I had one within 30 seconds. That's how bad it was; you could read random parts of the server's memory which may contain passwords, private keys, or whatever else OpenSSL was processing for current site visitors.
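
The shape of the bug is easy to sketch. Here is a toy model in JavaScript (an illustration of the bug class only, not OpenSSL's actual C heartbeat handler):

    // Toy model: the server echoes back as many bytes as the attacker
    // *claims* to have sent, without checking the claim.
    var serverMemory = new Uint8Array(65536); // pretend process memory

    function vulnerableHeartbeat(payloadOffset, actualLength, claimedLength) {
      // A correct implementation would reject claimedLength > actualLength.
      // This one reads past the real payload, leaking whatever happens to
      // sit next to it in memory -- up to 64k at a time.
      return serverMemory.subarray(payloadOffset, payloadOffset + claimedLength);
    }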

I had stolen someone's credentials. Game over, right? How do you protect yourself against something as bad as this? ...

[read entire article]

Kumar McMillan: Ramblings

Oh, hey! I almost forgot I have a blog. Well, the colors are annoying to me and my comment system sucks but, meh. I wanted to write a quick note about where you can find stuff I write.

Carla Casilli: Open Badges, wicked problems, and that thing called hope

"feather bad weather" by Erik bij de Vaate

“feather bad weather” ©2008 Erik bij de Vaate, used under CC-BY-SA

Open badges: they are so tantalizing to so many people, so full of possibility. They appear to offer so many solutions to so many different problems. They encourage us to look at old problems with new eyes. And precisely because of their dynamism, their precious novelty, we occasionally find ourselves overwhelmed with the hope that they’ll solve all of the problems. Everything.

This, my friends, this is precisely what’s at issue with introducing badges to our current social structure: recognizing that there are problems with existing acknowledgement and recognition systems. Problems that have not been adequately addressed. We need to crack that nut wide open as we begin to figure out how badges might change the game. We need to figure out what works and what’s worth saving in this new badge world. We need to look hard at the wicked problems that they might at least influence.

The issues most often raised about badges—accessibility, injustice, value, meaning, and rigor—are not necessarily about badges themselves but instead are rooted in wicked problems, the larger systemic social, political, and economic issues that surround learning and recognition. When viewed from this perspective, it’s obvious that badges are not a panacea. So, let’s be realistic in our discussions about the ability of badges to solve all issues of access, fairness, and equity: nothing so far has solved those issues and badges alone won’t do it, either. This is a known known; let’s not waste time arguing this point. Instead, let’s wrestle mightily with the all-too-familiar feeling of impotence when discussing any possible inroad to wicked problems. Because discuss them we must.

On the plus side of this discussion, here’s a tiny sample of what badges can do. They can provide markers of social and professional possibilities, they can acknowledge varying degrees of expertise in social skills, they can indicate job skills compatibility, they can evidence a variety of important learning experiences including capturing prior learning, they can demonstrate continued professional engagement, they can represent vastly different company and brand values, and perhaps most importantly, they can provide important and meaningful personal insight.

So for now, while we’re building this ecosystem together, let’s hold tight to that thing with feathers—our sense of hope, our sense of possibility—for when seeking change, particularly systemic change, odd though it may feel and sound to outsiders, optimism is a feature not a bug.


If you’re reading this and nodding your head, you might also appreciate this related post from Badge Alliance Executive Director, Erin Knight: More Beefs

Much more soon. carla [at] badgealliance [dot] org



Tagged: badges, identity, learning, mozilla, Open badges, openbadges, politics, tools, wicked problems

Sylvestre Ledru: Changes Firefox 29 beta5 to beta6

The number of changesets has decreased (23 for beta6 compared to 43 for beta5). This is a good sign as we approach the release date of 29. In this release, some top crashes have been fixed and some of the last bugs for Australis have been addressed.

  • 23 changesets
  • 55 files changed
  • 435 insertions
  • 704 deletions

By extensions:

Extension    Occurrences
cpp          12
js           10
css          8
h            7
ini          6
html         4
mm           2
xul          1
xml          1
json         1
jsm          1
inc          1

By modules:

Module       Occurrences
browser      19
js           13
dom          6
widget       5
content      5
toolkit      2
gfx          2
modules      1
layout       1

List of changesets:

Tim Nguyen: Bug 989449 - fix menu-button dropmarker corners to have border-radii on Windows 7, Vista and XP. r=mikedeboer, a=sylvestre. - bc6c34299b03
Tim Nguyen: Bug 980339 - Remove border-radius from add-on manager on Windows 8. r=mikedeboer, sr=Unfocused, a=sylvestre. - cb7f81834560
Mike de Boer: Bug 991072 - fix zoom percentage label to be centered in any toolbar. r=mconley, a=sylvestre. - 3e69377c027a
Gijs Kruitbosch: Bug 946595 - High contrast themes on Windows 8 shouldn't be considered the default theme in CSS, r=jimm, a=sylvestre. - 250d63775815
Boris Zbarsky: Bug 976920 - Mostly back out Bug 932322 for now; only define the unforgeable properties on the window object itself. r=jst, a=sledru - aecbb562466a
Robert Strong: Bug 982448 - Some fxmetro pref's still being left behind with values without --enable-metro in the mozconfig. r=bbondy, a=sledru - 6f0ad6b259ca
Jan de Mooij: Bug 983709 - Simple branch patch for uplift. r=hv1989, a=sledru - 81285325c7db
Jon Coppeard: Bug 986864. r=sfink, a=sledru - e6b88dfe88cd
Phil Ringnalda: Bug 986760 (with a dash of 989101 added in) - disable browser_UITour3.js on Linux for excessive failures and lack of action taken toward fixing them. a=test-only - 6c1da25749a0
Matthew Noorenberghe: Bug 990384 - Define tabToolbarNavbarOverlap to reduce magic numbers in CSS for the overlap between the tabs and nav-bar. r=mconley a=sylvestre - a2fccb7d55f7
Matthew Noorenberghe: Bug 878436 - Update Lion Fullscreen window styling offsets to avoid themes shifting position. r=timdream a=sylvestre - 4d27870d3fdc
Matthew Noorenberghe: Bug 990387 - Toolbar buttons on the TabsToolbar appear below the nav-bar border with a theme. r=dao a=sylvestre - 81075b35ee13
Matthew Noorenberghe: Bug 973855 - [Australis] Include browser-bottombox in the customization mode padding. r=jaws a=sylvestre - 75c7e2c98e0c
Jan Beich: Bug 948946 - Use private-browsing indicator with GTK theme on non-Linux as well. r=MattN a=sylvestre - f7faeaf19dfa
John Daggett: Bug 975460 - disable async font loader on OSX 10.6 (beta/aurora). r=smichaud,mkato a=sylvestre - 79c61c6f632d
Joel Maher: Bug 987892 - Clear up oranges for debug mochitest-browser-chrome jobs on Mozilla-Beta. r=armenzg a=test-only - 13bf6fe8df1f
Benjamin Bouvier: Bug 969203 - Take out non strictly commutative Float32 functions. r=sstangl, a=sledru - 7e9b33204db9
Bobby Holley: Bug 980537 - Only store FakeBackstagePass instances in mThisObjects. r=khuey, a=sledru - 9933fa36efa5
Mike Kaply: Backing out Bug 889085 (dddfd63f1414, f8c14bd80676) due to regression Bug 987783. r=roc, a=sledru - 51e5b0ec21b3
Garrett Robinson: Bug 971341 - Fix infinite tab loading due to missing characters in CSP's path regexes. r=sstamm, a=lsblakk - fe5d67aa5366
Shane Caraveo: Bug 992398 - Fix domain for cdn deployment of directory site. r=gavin, a=sledru - ea5b3027bb42
Karl Tomlinson: Bug 990794 - Crash on ovrfl in SharedBuffer::Create(). r=roc, a=sledru - 51a84afe085d
Karl Tomlinson: Bug 990794 - Crash on ovrfl in AllocateAudioBlock. r=roc, a=sledru - 004a7c15d761

r= means reviewed by
a= means uplift approved by

Previous changelogs:

Joel Maher: polishing browser-chrome – coming to a branch near you soon

For the last 2 weeks I have gone head first into resolving some issues with our mochitest browser-chrome tests, together with RyanVM, Armen, and the help of Gavin and many developers who are fixing problems left and right.

There are 3 projects I have been focusing on:

1) Moving our Linux debug browser chrome tests off our old fedora slaves in a datacenter and running them on ec2 slave instances, in bug 987892.

These are live and green on all Firefox 29, 30, and 31 trees!  More work is needed for Firefox-28 and ESR-24 which should be wrapped up this week.  Next week we can stop running all linux unittests on fedora slaves.

2) Splitting all the developer tools tests out of the browser-chrome suite into their own suite in bug 984930.

browser-chrome tests have been a thorn in the side of the sheriff team for many months. More and more, the rapidly growing features and tests of developer tools have been causing the entire browser-chrome suite to fail, and in the case of debug builds, to run for hours. Splitting this out gives us a small shield of isolation. In fact, we have this running well on Cedar, and we are pushing hard to have it rolled out to our production and development branches by the end of this week!

3) Splitting the remaining browser chrome tests into 3 chunks, in bug 819963.

Just like the developer tools, we have been running browser-chrome in 3 chunks on Cedar. With just 7 tests disabled, we are consistently green.


While there are a lot of other changes going on under the hood, what will be seen by next week on your favorite branch of Firefox will be:

  • ‘dt’ jobs for opt, and ‘dt1’, ‘dt2’, ‘dt3’ jobs for debug
  • the ‘bc’ job will turn into ‘bc1’, ‘bc2’, ‘bc3’
  • much faster turnaround times on bc tests (62 minutes is the slowest right now, the rest are averaging ~20 minutes/job)
  • less random orange cluttering up results



Staś Małolepszy: Refactored l10n.js landed in Gaia

Today Zibi and I landed the refactored code of shared/js/l10n.js in Gaia master. A culmination of months of hard work, the refactor is also a first step of a larger initiative to innovate in the area of mobile localization and eventually, to implement L20n in Gaia.

Firefox OS faces a number of challenges related to localization: growing the locale coverage beyond the 20 supported locales; adapting to multiple screen sizes and form factors; ensuring that performance and memory consumption remain good when the DOM is localized on the fly. In order to respond to these needs and the many more that we will encounter in the future, we need a flexible and modular codebase, with a thought-out API designed to perform well in asynchronous scenarios.

We decided to re-write Gaia's l10n.js drawing from our experiences from developing L20n. The underlying concepts are similar: the code is organized into useful abstractions like the localization Context managing the fallback chain of Locale objects, or the Entity class which represents a single translation unit.

It's not L20n just yet. The library still uses the .properties file format, there are no custom macros or arbitrary dicts which allow localizers greater flexibility in creating translations, and the language fallback is limited to two locales. It still is a good first step towards empowering developers and localizers alike.

To minimize the risk of regressions, the refactored code was almost a drop-in replacement for the old l10n.js library. We decided to keep the exact same API of the navigator.mozL10n object. Everything should just work as it did before. We feel that there's room for improvement in the API design, and we'll soon start suggesting changes to it (e.g. bug 993188 has made many developers implement workarounds in their apps, which would break if we fixed it right away in our refactored drop-in patch).

Notable changes

  • Modular code which encapsulates main concepts of localization with a clean OOP approach.

  • Better security thanks to the lazy compilation step.

  • Better error reporting, especially at build time:

    /build_stage/browser/index.html: [l10n] [ar]: 3 missing in the visible 
      DOM: enter-search-or-address, top-sites-startPage, 
      browserBrandShortName
    /build_stage/browser/index.html: [l10n] [ar]: 10 missing compared to 
      en-US: brandShortName, browserBrandShortName, browserBrandFullName, 
      enter-search-or-address, top-sites-startPage, top-sites-tab, 
      search-engines, default-search-engine, edit-bookmark, 
      edit-bookmark-header
    
  • The internal API (currently hidden behind the navigator.mozL10n wrapper) is ready to fully support asynchronous parsing, compilation and fetching of translations, which we think will be important when implementing language packages for Firefox OS.

  • You can now freely nest placeables, meaning that this will work (see the sketch after this list):

    foo = Foo
    foobar = {{ foo }} Bar
    foobarbaz = {{ foobar }} Baz
    
  • Good test coverage, both in our source repo and as part of the Gallery app test suite.
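
Since placeables can reference other placeables, resolution boils down to recursive substitution. Here is a toy sketch of that idea in JavaScript, with a guard against cyclic references (a model of the concept only, not the actual l10n.js implementation):

    // Toy resolver: replaces each {{ ref }} with the resolved value of
    // that entity, and throws on cycles like foo -> bar -> foo.
    function resolve(entities, id, stack) {
      stack = stack || [];
      if (stack.indexOf(id) !== -1) {
        throw new Error("Cyclic reference: " + id);
      }
      return entities[id].replace(/\{\{\s*(\w+)\s*\}\}/g, function (match, ref) {
        return resolve(entities, ref, stack.concat(id));
      });
    }

    var entities = { foo: "Foo", foobar: "{{ foo }} Bar", foobarbaz: "{{ foobar }} Baz" };
    resolve(entities, "foobarbaz"); // "Foo Bar Baz"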

Feedback and bugs

The top priority for this refactor was to be 100% compatible with the old l10n.js library. We made sure all tests pass on Travis and on TBPL, and put a lot of effort into writing additional tests. We will be monitoring the tree for any regressions. If you notice something weird going on with your device or your app, for instance related to language switching, please let us know.
File bugs in the newly created Gaia::L10n component in Bugzilla or find us in #gaia and #l20n.

Next steps

The list of tasks on our to-do list is long, but we couldn't be more excited about them. Ranging from UX improvements to developer-friendly APIs to giving localizers more control over their translations, the next months are going to be great for localization and Firefox OS. A sneak peek, in no particular order:

  • Revisit the build time optimizations currently used in production Firefox OS builds, which we suspect cripple the performance of builds with more than 20 locales installed.

  • Implement responsive localization tools, such as the @formFactor macro which makes it possible to define different translations for tablets and phones.

  • Clean up the app startup API; implement mozL10n.ready() and mozL10n.once() as separate methods (bug 993188) and move away from listening to the 'localized' events.

  • Improve app launch performance by delaying localization when possible.

  • Research the use of Mutation Observers to automatically localize new DOM nodes when they are inserted into the existing DOM (a rough sketch follows this list).

  • Research langpacks, possibly using the DataStores API.

  • Implement better language negotiation and language fallback via the ECMAScript Internationalization API and the navigator.languages API.
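
For the Mutation Observer item, a rough sketch of what that could look like (translateFragment is a hypothetical helper standing in for whatever routine localizes a DOM subtree; the real integration would live inside l10n.js):

    // Rough sketch only: localize any element nodes added to the document.
    var observer = new MutationObserver(function (mutations) {
      mutations.forEach(function (mutation) {
        Array.prototype.forEach.call(mutation.addedNodes, function (node) {
          if (node.nodeType === Node.ELEMENT_NODE) {
            translateFragment(node); // hypothetical helper
          }
        });
      });
    });
    observer.observe(document.body, { childList: true, subtree: true });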

Ron Piovesan: Quality content rules the Facebook news feed

With over 1 billion users worldwide coupled with deep local reach and engagement, Facebook is a key social platform for financial services professionals to reach out and connect with customers.

Every day, millions of agents and advisers use their Facebook Business Page to share content, post updates on their business, and provide useful insights into what is happening in the market.

But it is a simple truth that as more people engage on Facebook Business Pages, the more crowded a user’s news feed will be. At best, a user can review dozens or maybe a hundred updates a day.

To reduce noise and keep a user’s news feed as relevant as possible, Facebook uses over 1000 filters to determine which posts should get highlighted.

In a recent TechCrunch article, Facebook News Feed Director of Product Management Will Cathcart highlighted the main filters (a toy sketch of how they might combine follows the list):

  • How popular (Liked, commented on, shared, clicked) are the post creator’s past posts with everyone
  • How popular is this post with everyone who has already seen it
  • How popular have the post creator’s past posts been with the viewer
  • Does the type of post (status update, photo, video, link) match what types have been popular with the viewer in the past
  • How recently was the post published
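
Treating those five factors as inputs, a toy version of the ranking might look like this sketch (all weights and property names are invented for illustration; Facebook's actual system combines over 1000 signals):

    // Toy model of feed ranking. Weights and property names are invented.
    function recencyBoost(ageHours) {
      return 1 / (1 + ageHours); // newer posts score higher
    }

    function score(post, viewer) {
      return 0.3 * post.creatorPastEngagement              // creator's past posts, everyone
           + 0.3 * post.engagementSoFar                    // this post, people who already saw it
           + 0.2 * viewer.affinityWithCreator              // creator's past posts, this viewer
           + 0.1 * (viewer.typePreference[post.type] || 0) // post type match
           + 0.1 * recencyBoost(post.ageHours);            // recency
    }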

As an agent or adviser, you’ve worked hard to build a loyal and active following on your Facebook Business Page. Make sure you keep those followers engaged and up-to-date by posting timely, relevant content.

Focusing on consistency and quality of content will help determine how prominently your posts appear in your followers’ news feeds.


David Rajchenbach Teller: Recent changes to OS.File

A quick post to summarize some of the recent improvements to OS.File.

Encoding/decoding

To write a string, you can now pass the string directly to writeAtomic:

OS.File.writeAtomic(path, "Here is a string", { encoding: "utf-8" });

Similarly, you can now read strings from read:

OS.File.read(path, { encoding: "utf-8" } ); // Resolves to a string.

Doing this is at least as fast as calling TextEncoder/TextDecoder yourself (see below).
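
For comparison, the manual route looks roughly like this (a sketch in the same Task.jsm yield style as the snippets above; OS.File.read without an encoding option resolves to a Uint8Array):

let bytes = yield OS.File.read(path);                // raw bytes (Uint8Array)
let string = new TextDecoder("utf-8").decode(bytes); // decode them yourself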

Native implementation

OS.File.read has been reimplemented in C++. The main consequence is that this function can now be used safely during startup, without having to wait for the underlying OS.File ChromeWorker to start. Also, decoding (see above) is performed off the main thread, which makes it much faster.

According to my benchmarks, using OS.File.read to read strings is about 2-5x faster than NetUtil.asyncFetch on large files and doesn’t block the main thread for more than 5ms, while asyncFetch performs string decoding on the main thread. Also, unlike NetUtil.asyncFetch, it doesn’t perform any main thread I/O.

Backups

When using writeAtomic, it is now possible to request existing files to be backed up almost atomically. In many cases, this is a good strategy to ensure that data is safely written to disk, without having to use a flush, which would be expensive for the whole system.

yield OS.File.writeAtomic(path, data, { tmpPath: path + ".tmp", backupTo: path + ".backup" });

Compression

writeAtomic and read both now support an implementation of lz4 compression:

yield OS.File.writeAtomic(path, data, { compression: "lz4"});
yield OS.File.read(path, { compression: "lz4"});

Note that this format will not be understood by any command-line tool. It is somewhat proprietary. Also note that (de)compression is performed on the ChromeWorker thread for the time being, so it doesn’t benefit from the native reimplementation mentioned above.

Creating directories recursively

makeDir can now create a whole hierarchy at once: the from option names an existing ancestor directory, and every missing directory between it and the target is created.

let dir = OS.Path.join(OS.Constants.Path.profileDir, "a", "b", "c", "d");
yield OS.File.makeDir(dir, { from: OS.Constants.Path.profileDir });

Laura Hilliger: Training with Friends

This weekend, I’ll be leading a Webmaker Training for the National Citizens Service (NCS). NCS is an organization in the UK that provides learning opportunities for young people living in England and Northern Ireland – young people who are encouraged to lead positive change within their communities. For the first time ever, NCS has invited graduates from their programs to become Digital Champions, a group of people who will lead social action projects and spread web literacy skills in their local communities. This is the Teaching Kit we’ll be using to guide us during the event. Let me tell you why I’m SO EXCITED to be doing this:

This is the first official “Webmaker Training”

I run trainings all the time, but they’re always one-offs, offshoots, and truncated versions of my dream learning scenario. In 2013 we ran two prototypes – a live training for Mozilla Reps called Training Days and an online training called Teach the Web – and both were hugely successful. My dream learning scenario combines these two initiatives. I think a blended-learning program that is open, inclusive, and pedagogically sound – something that helps people teach the culture, mechanics, and citizenship of the Web – is what a Mozilla professional development program should be. Why? Because open.

The NCS has been great to work with

(image from https://secure.flickr.com/photos/centurionbd/)

I expect the young people who participate in the NCS Community are amazing as well. The partnership started when one of our Sr Directors, the fantastic Paula Le Dieu, opened a conversation with some folks at the NCS to explain that Mozilla isn’t just a technology company, and the Web is not just a delivery mechanism for content. She talked to them about what it truly means to be part of the Open Community and our values resonated. We were asked if we could teach some of the values and skills around openness and web literacy while overlapping with NCS values around social action, personal responsibility and leadership. Spoiler Alert: Yeah, we totally can and will! I’m truly excited to share what I love about the open source community with the NCS Digital Champions, while helping them level up their social and technical skills. I’m excited to hear their ideas, push them to think bigger, and introduce them to the support networks on the web. I’m excited to learn from them. As an educator, I view the goals of this partnership (and future partnerships centered on Training) as being less about specific skills and more about big brained theories of education that say things like “You are educated when you can confidently and empathetically participate in society and the world.”

The Digital Champions will help us grow

Last year, the Training Days graduates and the Teach the Web participants ran hundreds and hundreds of events, spreading Webmaker and digital skills. Our community's honesty, participation and drive has made Webmaker what it is today. The 42 NCS Digital Champions are committing to running their own Maker Parties later this year. They’re also committing to spreading web literacy within their local communities and among their peers in the NCS community. We’ll be inviting them to become mentors within our online training initiatives. In May, we’ll be inviting any and every one to participate in an online learning experience that will help you teach the web and become part of the open community. I’m hoping that this weekend seeds enough interest for the NCS Digital Champions to want to play around with the new and improved Training content and discussion platform*.

It's going to be fun!

People who know me, know that I don’t really get invested in things that don’t entertain me. One of the reasons I love teaching is because I think it’s fun. It’s fun to watch people learn, see what people make, share ideas and talk about stuff. I even think it’s fun to watch myself fail at relating to people. It’s fun to learn about myself, other people, the world, technology… Our agenda has random, fun activities (ahem) that are designed to get people moving, thinking and growing. I’m enthusiastic about what I do, and enthusiasm is contagious. So, yeah, it’s going to be fun for everyone involved. All of this means more people will become web literate, more people will spread openness, more people will champion the values we have. *If YOU’RE interested in helping make the online components of Webmaker Training better, help us test them!

Yunier José Sosa Vázquez: Add-on SDK 1.16 available

Version 1.16 of the Add-on SDK is now with us. Download Add-on SDK 1.16.

According to the Mozilla Add-ons blog, this minor release aims to provide compatibility with Firefox 29 and to make use of the new APIs that Australis provides.

With Australis, the use of buttons is expanded: panels, frames and toolbars can now be attached to them. Some of these features are not present in Firefox 29, but they are in version 30.

Several bugs have also been fixed, such as:

  • Bug 958609 – “Add-on SDK 1.15 is incompatible with Python 2.7.6”
  • Bug 944951 – “bootstrap.js should remove the addition of resource: URIs on load”

For other details, you can read the release notes.

Before downloading Add-on SDK 1.16, remember that you can contribute to improving it by reporting bugs, looking through the code and contributing your solutions, or simply leaving your impressions of this new version.

Rizky Ariestiyansyah: OpenX Quiz is featured on Firefox Marketplace, Awesome!

A good day to get awesome news from Mozilla. Like any other day, I was checking my email in the morning. The awesome news is that my OpenX Quiz app is featured on Firefox Marketplace, this application...

Benjamin Kerensa: North America Mozilla Reps Meetup


Our group photo

This weekend, North America Mozilla Reps gathered in not-so-sunny Portland, Oregon. We worked from the Portland office over the weekend, collaborating on plans for North America for the next six-month period. We also tackled a number of topics, from websites to refining our priority cities, which will help us be more successful in our mission to grow contributors in North America.

We were very fortunate to have some new people participate this time round including Lukas Blakk, Janet Swisher, Larissa Shapiro, Joanna Mazgaj, Robby Sayles, Prashish Rajbhandari, Tanner Filip, Dan Gherman and Christie Koehler. It was excellent to have a larger group because this brought ideas from people who see things through different lenses.


Voodoo Donuts delivered Firefox Donuts 2.0

All in all, I feel we tackled a lot more work this time than at our previous meetup in San Francisco last year, and we decided to have our next meetup in Portland again. One of my favorite activities during the meetup was a diversity exercise that Lukas led, one that many of us hope to run with our own communities.

We closed off the meetup with a trip to the Ground Kontrol Arcade and Bar where there were many games of Pac Man and Dance Dance Revolution.


David Rajchenbach Teller: A curse and a blessing

The curse

When Brendan Eich stepped in as CEO, he and Mozilla were immediately faced with a storm demanding his resignation because of his political opinions. To the best of my knowledge, none of those responsible for the storm were employees of the Mozilla Corporation, and only 4 or 5 of them were members of the Mozilla Community (they were part of the Mozilla Foundation, which is a different organization).

When Brendan Eich resigned from his position as an employee of Mozilla, Mozilla was immediately faced with a storm assuming that Brendan Eich had been fired, either because of his opinions or as a surrender to the first storm.

Both storms are still raging, fueled by angry (and dismayed and saddened) crowds and incompetent news reporting.

We will miss Brendan. We have suffered, and we will continue to suffer, from these storms. But we can also salvage something from them.

The blessing

Think about it. We are being criticized by angry crowds. But the individuals who form these crowds are not our enemies. Many of them care deeply about Freedom of Speech and are shocked because they believe that we are extinguishing this freedom. Others care primarily about equality, an equality that can seldom be achieved wherever there is no Freedom of Speech.

Freedom of Speech. This is one of the core values of Mozilla, one of the values for which we have been fighting all these years.

We are being criticized by some of the people who need us most. They are our users, or our potential users, and they are getting in touch with us. Through Facebook, through Twitter, through the contribute form, through the governance mailing-list, through our blogs, or in real life discussions.

Some will say that we should ignore them. Some will be tempted to answer anger with anger and criticism with superiority.

Do neither. They are our users. They deserve to be heard.

We should listen to them. We should answer their concerns, not with FAQs or with press releases, but with individual answers, because these concerns are valid. We should explain what really happened. We should show them how Mozilla is largely about defending Freedom of Speech through the Open Web.

So please join the effort to answer the angry crowds. If you can, please reach out to media and the public and get the story out there. If only one person out of a hundred angry users receives the message and decides to join the community and the fight for the open web, we will have salvaged a victory out of the storm.