Adam Okoye: OPW Week Two – Coming to a Close

Ok so technically it’s the first full week because OPW started last Tuesday, December 9th, but either way – my first almost two weeks of OPW are coming to a close. My experience has been really good so far and I think I’m starting to find my groove in terms of how and where I work best, location-wise. I also think I’ve been pretty productive considering this is the first internship or job I’ve had (not counting the Ascend Project) in over 5 years.

So far I’ve resolved three bugs (some of which were actually feature requests) and have a fourth pull request waiting. I’ve been working entirely in Django so far, primarily editing model and template files. One of the really nice things that I’ve gotten out of my project is the learning that happens while working on bugs/features.

Four weeks ago I hadn’t done any work with Django and realized that it would likely behoove me to dive into some tutorials (originally my OPW project with Input was going to be primarily in JavaScript, but in reality doing things in Python and Django makes a lot more sense, hence the switch, despite the fact that I’m still working with Input). I started three or four basic tutorials and completed one and a half of them – in short, I didn’t have a whole lot of experience with Django ten days ago. Despite that, all of the looking through and editing of files that I’ve done has really improved my skills, both in terms of syntax and in terms of being able to find information – both where to take information from and where to put it. I look forward to all of the new things that I will learn and put into practice.

Mozilla Fundraising: A/B Testing: ‘Sequential form’ vs ‘Simple PayPal’

This post is one of a series where we’re sharing things we’ve learned while running A/B tests during our End of Year fundraising campaign. At Mozilla we strive to ‘work open’, to make ourselves more accountable, and to encourage others …

Dietrich Ayala: Remixable Quilts for the Web

Atul Varma set up a quilt for MozCamp Asia 2012 which I thought was a fantastic tool for that type of event. It provided an engaging visualization, was collaboratively created, and allowed a quick and easy way to dive further into the details about the participating groups.

I wanted to use it for a couple of projects, but the code was tied pretty closely to that specific content and layout.

This week I finally got around to moving the code over to Mozilla Webmaker, so it could be easily copied and remixed. I made a couple of changes:

  • Update font to Open Sans
  • Make it easy and clear how to re-theme the colors
  • Allow arbitrary content in squares

The JS code is still a bit too complex for what’s needed, but it works on Webmaker now!

View my demo quilt. Hit the “remix” button to clone it and make your own.

The source for the core JS and CSS is at https://github.com/autonome/quilt.


Sean McArthur: hyper

Rust is a shiny new systems language that the lovely folks at Mozilla are building. It focuses on complete memory safety and on being very fast. Its speed is equivalent to C++ code, but you don’t have to manage pointers and the like; the language does that for you. It also catches a lot of irritating runtime errors at compile time, thanks to its fantastic type system. That should mean fewer crashes.

All of this sounds fantastic, so let’s use it to make server software! It will be faster, and crash less. One speed bump: there’s no real Rust HTTP library.

rust-http and Teepee

There were two prior attempts at HTTP libraries, but the former (rust-http) has been ditched by its creator, and isn’t very "rust-like". The latter, Teepee, started in an excellent direction, but life has gotten in the way of its author.1

For the client side only, there exists curl-rust, which is just bindings to libcurl. Ideally, we’d like to have all of the code written in Rust, so we don’t have to trust that the curl developers have written perfectly memory-safe code.

So I started a new one. I called it hyper, cause, y’know, hyper-text transfer protocol.

embracing types

The type system in Rust is quite phenomenal. Wait, what? Did I just say that? Huh, I guess I did. I know, I know, we hate wrestling with type systems. I can’t touch any Java code without cursing the type system. Thanks to Rust’s type inference, though, it’s not irritating at all.

In contrast, I’ve gotten tired of stringly-typed languages; chief among them is JavaScript. Everything is a string. Even property lookups. document.onlood = onload; is perfectly valid, since it just treats onlood as a string. You know a big problem with strings? Typos. If you write JavaScript, you will write typos that aren’t caught until your code is in production, and you see that an event handler is never triggered, or undefined is not a function.

I’m done with that. But if you still want to be able to use strings in your rust code, you certainly can. Just use something else besides hyper.

Now then, how about some examples. It’s most noticeable when using headers. In JavaScript, you’d likely do something like:

req.headers['content-type'] = 'application/json';

Here’s how to do the same using hyper:

req.headers.set(ContentType(Mime(Application, Json, vec![])));

Huh, interesting. Looks like more code. Yes, yes it is. But it’s also code that has been checked by the compiler. It has made sure there are no typos. It has also made sure you didn’t try to set the wrong format for a header. To get the header back out:

match req.headers.get() {
    Some(&ContentType(Mime(Application, Json, _))) => "its json!",
    Some(&ContentType(Mime(top, sub, _))) => "we can handle top and sub",
    None => "le sad"
}

Here’s an example that makes sure the format is correct:

req.headers.set(Date(time::now_utc()));
// ...
match req.headers.get() {
    Some(&Date(ref tm)) => {
        // tm is a Tm instance, without you dealing with
        // the various allowed formats in the HTTP spec.
    }
    // ...
}

Yea, yea, there is a stringly-typed API, for those rare cases you might need it, but it’s purposefully not easy to use. You shouldn’t use it. Maybe you think you have a good reason; no you don’t. Don’t use it. Let the compiler check for errors before you hit production.

Let’s look at status codes. Can you tell me what exactly this response means, without looking it up?

res.status = 307;

How about this instead:

res.status = StatusCode::MovedTemporarily;

Clearly better. You’ve seen code like this:

if res.status / 100 == 4 {}

What if we could make it better:

if res.status.is_client_error() {}

Message WriteStatus

I’ve been bitten by this before, and I can only bet you have been too: trying to write headers after they’ve already been sent. Hyper makes this a compile-time check. If you have a Request<Fresh>, then there exists a headers_mut() method to get a mutable reference to the headers, so you can add some. You can’t accidentally write to a Request<Fresh>, since it doesn’t implement Writer. When you are ready to start writing the body, you must specifically convert to a Request<Streaming> using req.start().

Likewise, a Request<Streaming> does not contain a headers_mut() accessor. You cannot change the headers once streaming has started. You can still inspect them, if that’s needed, but no setting! The compiler will make sure you don’t have that mistake in your code.

NetworkStreams

Both the Server and the Client are generic over NetworkStreams. The default is to use HttpStream, which can handle HTTP over TCP, and HTTPS using openssl. This design also allows something like Servo to implement a ServoStream or something, which could handle HTTPS using NSS instead.

Goals

These are some high level goals for the library, so you can see the direction:

  • Be fast!
    • The benchmarks preach that we’re already faster than both rust-http and libcurl. And we all know science doesn’t lie.
  • Embrace types.
    • See the above post for how we’re doing this.
  • Provide an excellent http server library for rust webdev.
    • Currently used by Iron, Rustless, Sserve, and others
  • Provide an excellent http client that can be used in place of curl.

The first step for hyper was to get the streams and types working correctly and quickly. With that working underneath, it allows others to write specific implementations without re-doing all of HTTP, such as implementing the XHR2 spec in Servo. Work since then has been on providing ergonomic Client and Server implementations.

It looks increasingly likely that hyper will be available to use on Rust-1.0-day. There will be an HTTP library for Rust 1.0!


  1. Teepee provided excellent inspiration in some of the design, and all that credit should go to its creator, Chris Morgan. He’s continued to provide insight into the development of hyper, so <3! 

  2. Yes, it differs. It’s been a delight to see that developers are never content with an existing spec.

Gervase Markham: Google Concedes Google Code Not Good Enough?

Google recently released an update to End-to-End, their communications security tool. As part of the announcement, they said:

We’re migrating End-To-End to GitHub. We’ve always believed strongly that End-To-End must be an open source project, and we think that using GitHub will allow us to work together even better with the community.

They didn’t specifically say how it was hosted before, but a look at the original announcement tells us it was here – on Google Code. And indeed, when you visit that link now, it says “Project “end-to-end” has moved to another location on the Internet”, and offers a link to the Github repo.

Is Google admitting that Google Code just doesn’t cut it any more? It certainly doesn’t have anything like the feature set of Github. Will we see it in the next round of Google spring-cleaning in 2015?

Mozilla Open Policy & Advocacy Blog: The Benefits of Fellowship

In just a few weeks, the application window to be a 2015 Ford-Mozilla Open Web Fellow will close. In its first year, the Fellows program will place emerging tech leaders at five of the world’s leading nonprofits fighting to keep the Internet as a shared, open and global resource.

We’ve already seen hundreds of applicants from more than 70 countries apply, and we wanted to answer one of the primary questions we’ve heard: why should I be a Fellow?

Fellowships offer unique opportunities to learn, innovate and gain credentials.

Fellowships offer unique opportunities to learn. Representing the notion that ‘the community is the classroom’, Ford-Mozilla Open Web Fellows will have a set of experiences in which they can learn and have an impact while working in the field. They will be at the epicenter of informing how public policy shapes the Internet. They will be working and collaborating together with a collection of people with diverse skills and experiences. They will be learning from other fellows, from the host organizations, and from the broader policy and advocacy ecosystem.

Fellowships offer the ability to innovate in policy and technology. The Fellowship offers the ability to innovate, using technology and policy as your toolset. We believe that the phrase ‘Move fast. Break things.’ is not reserved for technology companies – it is a way of being that Fellows will get to experience first-hand at our host organizations and working with Mozilla.

The Ford-Mozilla Fellowship offers a unique and differentiating credential. Our Fellows will be able to reference this experience as they continue in their career. As they advance in their chosen fields, alums of the program will be able to draw upon their experience leading in the community and working in the open. This experience will also enable them to expand their professional network as they continue to practice at the intersection of technology and policy.

We’ve also structured the program to remove barriers and assemble a Fellowship class that reflects the diversity of the entire community.

This is a paid fellowship with benefits to allow Fellows to focus on the challenging work of protecting the open Web through policy and technology work. Fellows will receive a $60,000 stipend for the 10-month program. In addition, we’ve created a series of supplements including support for housing, relocation, childcare, healthcare, continuing education and technology. We’re also offering visa assistance in order to ensure global diversity in participants.

In short, the Ford-Mozilla Open Web Fellowship is a unique opportunity to learn, innovate and gain credentials. It’s designed to enable Fellows to focus on the hard job of protecting the Internet.

More information on the Fellowship benefits can be found at https://advocacy.mozilla.org/. Good luck to the applicants of the 2015 Fellowship class.


The Ford-Mozilla Open Web Fellows application deadline is December 31, 2014. Apply at https://advocacy.mozilla.org/.

Daniel Glazman: Bulgaria Web Summit

I will be speaking at the Bulgaria Web Summit 2015 in Sofia, Bulgaria, on the 18th of May.

Henrik Skupin: Firefox Automation report – week 43/44 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 43 and 44.

Highlights

In preparation for the QA-wide demonstration of Mozmill-CI, Henrik reorganized our documentation to allow everyone a simple local setup of the tool. Along that we did the remaining deployment of latest code to our production instance.

Henrik also worked on the upgrade of Jenkins to the latest LTS version 1.565.3, and we were able to push this upgrade to our staging instance for observation. Furthermore, he got Pulse Guardian support implemented.

Mozmill 2.0.9 and Mozmill-Automation 2.0.9 have been released, and if you are curious what is included, you will want to check this post.

One of our major goals over the next 2 quarters is to replace Mozmill as test framework for our functional tests for Firefox with Marionette. Together with the A-Team Henrik got started on the initial work, which is currently covered in the firefox-greenlight-tests repository. More to come later…

Beside all that work we have to say goodbye to one of our SoftVision team members. October the 29th was the last day for Daniel on the project. So thanks for all your work!

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 43 and week 44.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 43 and week 44.

Mike Hommey: Initial support for git pushes to mercurial, early testers needed

This push to try was not created by mercurial.

I just landed initial support for pushing to mercurial from git. Considering the scary fact that it’s possible to screw up a repository with bundles with missing content (and, guess what, I found that out the hard way), I have restricted it to local mercurial repositories until I am more confident.

As such, I would need volunteers to use and test it on local mercurial repositories. On top of being limited to local mercurial repositories, it doesn’t support pushing merges that would have been created by git, nor does it support pushing a root commit (one with no parent).

Here’s how you can use it:

$ git clone https://github.com/glandium/git-remote-hg
$ export PATH=$PATH:$(pwd)/git-remote-hg
$ git clone hg::/path/to/mercurial-repository
$ # work work, commit, commit
$ git push

[ Note: you can still pull from remote mercurial repositories ]

This will push to your local repository, where it would be useful if you could check the push didn’t fuck things up.

$ cd /path/to/mercurial-repository
$ hg verify

That’s the long, thorough version. You may just want to simply do this:

$ cd /path/to/mercurial-repository
$ hg log --stat

Hopefully, you won’t see messages like:

abort: data/build/mozconfig.common.override.i@56d6fdb13666: no match found!

Update: You can also add the following to /path/to/mercurial-repository/.hg/hgrc, which should prevent corruptions from entering the mercurial repository at all:

[server]
validate = True

Then you can push with mercurial.

$ hg push

Please note that this is integrated in git in such a way that it’s possible to pass refspecs to git push and do other fancy stuff. Be aware that there are still rough edges on that part, but that your commits will be pushed, even if the resulting state under refs/remotes/ is not very consistent.

I’m planning a replay of several repositories to fully validate pushes don’t send broken bundles, but it’s going to take some time before I can set things up. I figured I’d rather crowdsource until then.

Gregory Szorc: mach sub-commands

mach - the generic command line dispatching tool that powers the mach command to aid Firefox development - now has support for sub-commands.

You can now create simple and intuitive user interfaces involving sub-actions. e.g.

mach device sync
mach device run
mach device delete

Before, to do something like this would require a universal argument parser or separate mach commands. Both constitute a poor user experience (confusing array of available arguments or proliferation of top-level commands). Both result in mach help being difficult to comprehend. And that's not good for usability and approachability.

Nothing in Firefox currently uses this feature, although there is an in-progress patch in bug 1108293 for providing a mach command to analyze C/C++ build dependencies. It is my hope that others write useful commands and functionality on top of this feature.
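
For anyone curious what this looks like in code, here is a rough sketch of how the hypothetical device command above might be registered. CommandProvider and Command are mach's existing decorators; the sub-command support presumably exposes something along the lines of a SubCommand decorator, so treat the exact names and signatures below as illustrative rather than the definitive API:

from mach.decorators import Command, CommandProvider, SubCommand

@CommandProvider
class DeviceCommands(object):
    # Hypothetical provider backing the `mach device <action>` examples above.

    @Command('device', category='misc',
             description='Interact with a connected device.')
    def device(self):
        # Invoked when `mach device` is run without a sub-command.
        pass

    @SubCommand('device', 'sync',
                description='Sync files to the device.')
    def device_sync(self):
        pass

    @SubCommand('device', 'run',
                description='Run a program on the device.')
    def device_run(self):
        pass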

The documentation for mach has also been rewritten. It is now exposed as part of the in-tree Sphinx documentation.

Everyone should thank Andrew Halberstadt for promptly reviewing the changes!

Advancing Content: Getting Tiles Data From Firefox

Following the launch of Tiles in November, I wanted to provide more information on how data is transmitted into and from Firefox.  Last week, I described how we get Tiles data into Firefox differently from the usual cookie-identified requests.  In this post, I will describe how we report on users’ interactions with Tiles.

As a reminder, we have three kinds of Tiles: the History Tiles, which were implemented in Firefox in 2012, Enhanced Tiles, where we have a custom creative design for a Tile for a site that a user has an existing relationship with, and Directory Tiles, where we place a Tile in a new tab page for users with no browsing history in their profile.  Enhanced and Directory Tiles may both be sponsored, involving a commercial relationship, or they may be Mozilla projects or causes, such as our Webmaker initiative.


We need to be able to report data on users’ interactions with Tiles for two main reasons:

  • to determine if the experience is a good one
  • to report to our commercial partners on volumes of interactions by Firefox users

And we do these things in accordance with our data principles both to set the standards we would like the industry to follow and, crucially, to maintain the trust of our users.


Unless a user has opted out by switching to Classic or Blank, Firefox currently sends a list of the Tiles on a user’s new tab page to Mozilla’s servers, along with data about the user’s interaction with the Tiles, e.g., view, click, or pin.

Directory and Enhanced Tiles are identified by a Tile id (e.g., the “Firefox for Android” Tile has an id of 499 for American English-speaking users, while “Firefox pour Android” has an id of 510 for French-speaking users).  History Tiles do not have an id, so we can only know that the user saw a history screenshot but not what page — except for early release-channel Telemetry experiments, we do not currently send URL information for Tiles, although of course we are able to infer it for the Directory and Enhanced Tiles that we have sent to Firefox.


Our implementation of Tiles uses the minimal actionable dataset, and we protect that data with multiple layers of security.  This means:

  • cookie-less requests
  • encrypted transmission
  • aggressive cleaning of data

We also break up the data into smaller pieces that cannot be reconstructed to the original data.  When our server receives a list of seen Tiles from an IP address, we record that the specific individual Tiles were seen and not the whole list.

Sample POST from opening a new tab


With the data aggregated across many users, we can now calculate how many total times a given Tile has been seen and visited.  By dividing the number of clicks by the number of views, we get a click-through-rate (CTR) that represents how valuable users find a particular tile, as well as a pin-rate and a block-rate.  This is sufficient for us to determine both if we think a Tile is useful for a user and also for us to report to a commercial partner.


Calculating the CTR for each tile and comparing them helps us decide if a Tile is useful to many users.  We can already see that the most popular tiles are “Customize Firefox” and “Firefox for Android” (Tile 499, remember) both in terms of clicks and pins.

For an advertiser, we create reports from our aggregated data, and they in turn can see the traffic for their URLs and are able to measure goal conversions on their back end.  Since the Firefox 10th anniversary announcement, which included Tiles and the Firefox Developer Edition, we ran a Directory Tile for the Webmaker initiative.  After 25 days, it had generated nearly 1 billion views, 183 thousand clicks, and 14 thousand pins.
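
To make the click-through-rate arithmetic concrete, here is the calculation for the Webmaker figures just quoted (the view count is rounded to “nearly 1 billion”, so treat the percentages as approximate):

views = 1000000000   # "nearly 1 billion" views over 25 days (rounded)
clicks = 183000
pins = 14000

ctr = clicks / float(views)       # click-through rate
pin_rate = pins / float(views)

print('CTR: {:.4%}'.format(ctr))            # roughly 0.0183%
print('Pin rate: {:.4%}'.format(pin_rate))  # roughly 0.0014%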

The Webmaker Tile (static and rollover states)

The Webmaker team, meanwhile, are able to see the traffic coming in (as the Tile directs traffic to a distinct URL), and they are able to give attribution to the Tile and track conversions from there:

Webmaker.org’s Analytics dashboard: 182,488 sessions and 3,551 new Webmaker users!


We started with a relatively straightforward implementation to be able to measure how users are interacting with Tiles.  But we’ve already gotten some good ideas on how to make things even better for improved accuracy with less data.  For example, we currently cannot accurately measure how many unique users have seen a given Tile, and traditionally unique identifiers are used to measure that, but HyperLogLog has been suggested as a privacy-protecting technique to get us that data.  A separate idea is that we can use statistical random sampling that doesn’t require all Firefox users to send data while still getting the numbers we need. We’ll test sampling through Telemetry experiments to measure site popularity, and we’ll share more when we get those results.
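
As a toy illustration of the sampling idea (the rate below is made up, not a real Firefox setting): if each client reports with a known probability, the server can scale the observed counts back up to estimate the population totals without collecting data from everyone.

import random

P = 0.01  # hypothetical sampling rate: each client reports with 1% probability

def maybe_report(send_report):
    # Client side: only a random fraction of Firefox instances send their tile data at all.
    if random.random() < P:
        send_report()

# Server side: scale the observed counts back up to estimate the true totals.
observed_clicks = 1830
estimated_clicks = observed_clicks / P  # about 183,000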

We would love to hear your thoughts on how we treat users’ data to find the Tiles that users want.  And if you have ideas on how we can improve our data collection, please send them over as well!

Ed Lee on behalf of the Tiles team.

Mark Surman: David, Goliath and empires of the web

People in Mozilla have been talking a lot about radical participation recently. As Mitchell said recently, participation will be key to our success as we move into ’the third era of Mozilla’ — the era where we find ways to be successful beyond the desktop browser.

This whole conversation has prompted me to reflect on how I think about radical participation today. And about what drew me to Mozilla in the first place more than five years ago.

For me, a big part of that draw was an image in my mind of Mozilla as the David who had knocked over Microsoft’s Goliath. Mozilla was the successful underdog in a fight I really cared about. Against all odds, Mozilla shook the foundation of a huge empire and changed what was possible with the web. This was magnetic. I wanted to be a part of that.

I started to think about this more the other day: what does it really mean for Mozilla to be David? And how do we win against future Goliaths?

Malcolm Gladwell wrote a book last year that provides an interesting angle on this. He said: we often take the wrong lesson from the David and Goliath story, thinking that it’s surprising that such a small challenger could fell such a large opponent.

Gladwell argues that Goliath was much more vulnerable than we think. He was large. But he was also slow, lumbering and had bad eyesight. Moreover, he used the most traditional fighting techniques of his time: the armour and brute force of infantry.

David, on the other hand, actually had a significant set of strategic advantages. He was nimble and good with a sling. A sling used properly, by the way, is a real weapon: it can project a rock at the speed of a .45 caliber pistol. Instead of confronting Goliath with brute force, he used a different and surprising technique to knock over his opponent. He wasn’t just courageous and lucky, he was smart.

Most other warriors would have seen Goliath as invincible. Not David: he was playing the game by his own rules.

In many ways, the same thing happened when we took on Microsoft and Internet Explorer. They didn’t expect the citizens of the web to rally against them: to build — and then choose by the millions — an unknown browser. Microsoft didn’t expect the citizens of the web to sling a rock at their weak spot, right between their eyes.

As a community, radical participation was our sling and our rock. It was our strategic advantage and our element of surprise. And it is what shook the web loose from Microsoft’s imperial grip on the web.

Of course, participation still is our sling. It is still part of who we are as an organization and a global community. And, as the chart above shows, it is still what makes us different.

But, as we know, the setting has changed dramatically since Mozilla first released Firefox. It’s not just — or even primarily — the browser that shapes the web today. It’s not just the three companies in this chart that are vying for territorial claim. With the internet growing at breakneck speed, there are many Goliaths on many fronts. And these Goliaths are expanding their scope around the world. They are building empires.

This has me thinking a lot about empire recently: about how the places that were once the subjects of the great European empires are by and large the same places we call “emerging markets”. These are the places where billions of people will be coming online for the first time in coming years. They are also the places where the new economic empires of the digital age are most aggressively consolidating their power.

Consider this: In North America, Android has about 68% of smartphone market share. In most parts of Asia and Africa, Android market share is in the 90% range – give or take a few points by country. That means Google has a near monopoly not only on the operating system on these markets, but also on the distribution of apps and how they are paid for. Android is becoming the Windows 98 of emerging economies, the monopoly and the control point; the arbiter of what is possible.

Also consider that Facebook and WhatsApp together control 80% of the messaging market globally, and are owned by one company. More scary: when we do market research with new smartphone users in countries like Bangladesh and Kenya, we usually ask people: do you use the internet? Do you use the internet on your phone? The response is often: “what’s the Internet?” “What do you use your phone for?”, we ask. The response: “Oh, Facebook and WhatsApp.” Facebook’s internet is the only internet these people know of or can imagine.

It’s not the Facebooks and Googles of the world that concern me, per se. I use their products and in many cases, I love them. And I also believe they have done good in the world.

What concerns me is that, like the European powers in the 18th and 19th centuries, these companies are becoming empires that control both what is possible and what is imaginable. They are becoming monopolies that exert immense control over what people can do and experience on the web. And over what the web – and human society as a whole – may become.

One thing is clear to me: I don’t want this sort of future for the web. I want a future where anything is possible. I want a future where anything is imaginable. The web can be about these kinds of unlimited possibilities. That’s the web that I want everyone to be able to experience, including the billions of people coming online for the first time.

This is the future we want as Mozilla. And, as a community, we are going to need to take on some of these Goliaths. We are going to need to reach down into our pocket and pull out that rock. And we are going to need to get some practice with our sling.

The truth is: Mozilla has become a bit rusty with it. Yes, participation is still a key part of who we are. But, if we’re honest, we haven’t relied on it as much of late.

If we want to shake the foundations of today’s digital empires, we need to regain that practice and proficiency. And find new and surprising ways to use that power. We need to aim at new weak spots in the giant.

We may not know what those new and surprising tactics are yet. But there is an increasing consensus that we need them. Chris Beard has talked recently about thinking differently about participation and product, building participation into the actual features and experience of our software. And we have been talking for the last couple of years about the importance of web literacy — and the power of community and participation to get people teaching each other how to wield the web. These are the kinds of directions we need to take, and the strategies we need to figure out.

It’s not only about strategy, of course. Standing up to Goliaths and using participation to win are also about how we show up in the world. The attitude each of us embodies every day.

Think about this. Think about the image of David. The image of the underdog. Think about the idea of independence. And, then think of the task at hand: for all of us to bring more people into the Mozilla community and activate them.

If we as individuals and as an organization show up again as a challenger — like David — we will naturally draw people into what we’re doing. It’s a part of who we are as Mozillians, and it’s magnetic when we get it right.


Yunier José Sosa Vázquez: Firefox and Thunderbird Channels Updated

Updates are available for Firefox and Thunderbird. This includes version 15 of the Adobe Flash Player plugin and the Android versions of Firefox.

Release: Firefox 34.0.5, Thunderbird 31.3.0, Firefox Mobile 34.0

Beta: Firefox 35.0b4, Firefox Mobile 35.0b4

Aurora/Developer Edition: Firefox 36.0a2, Firefox Mobile 36.0a2 (located in the Nightly channel)

Nightly: Firefox 37 (with separate processes thanks to Electrolysis) and Thunderbird 36

This is an ideal time to update before the end of the year and bring the latest Firefox and Thunderbird to our friends where we live.

Go to Downloads

Andrew Halberstadt: How to Consume Structured Test Results

You may not know that most of our test harnesses are now outputting structured logs (thanks in large part to :chmanchester's tireless work). Saying a log is structured simply means that it is in a machine readable format, in our case each log line is a JSON object. When streamed to a terminal or treeherder log, these JSON objects are first formatted into something that is human readable, aka the same log format you're already familiar with (which is why you may not have noticed this).

While this might not seem all that exciting it lets us do many things, such as change the human readable formats and add metadata, without needing to worry about breaking any fragile regex based log parsers. We are now in the process of updating much of our internal tooling to consume these structured logs. This will let us move faster and provide a foundation on top of which we can build all sorts of new and exciting tools that weren't previously possible.

But the benefits of structured logs don't need to be constrained to the Tools and Automation team. As of today, anyone can consume structured logs for use in whatever crazy tools they can think of. This post is a brief guide on how to consume structured test results.

A High Level Overview

Before diving into code, I want to briefly explain the process at a high level.

  1. The test harness is invoked in such a way that it streams a human formatted log to stdout, and a structured log to a file.
  2. After the run is finished, mozharness uploads the structured log to a server on AWS using a tool called blobber. Mozharness stores a map of uploaded file names to blobber urls as a buildbot property. The structured logs are just one of several files uploaded via blobber.
  3. The pulse build exchange publishes buildbot properties, though the messages are based on buildbot events and can be difficult to consume directly.
  4. A tool called pulsetranslator consumes messages from the build exchange, cleans them up a bit and re-publishes them on the build/normalized exchange.
  5. Anyone creates a NormalizedBuildConsumer in pulse, finds the url to the structured log and downloads it.

Sound complicated? Don't worry, the only step you're on the hook for is step 5.

Creating a Pulse Consumer

For anyone not aware, pulse is a system at Mozilla for publishing and subscribing to arbitrary events. Pulse has all sorts of different applications, one of which is receiving notifications whenever a build or test job has finished.

The Setup

First, head on over to https://pulse.mozilla.org/ and create an account. You can sign in with Persona, and then create one or more pulse users. Next you'll need to install the mozillapulse python package. First make sure you have pip installed, then:

$ pip install mozillapulse

As usual, I recommend doing this in a virtualenv. That's it, no more setup required!

The Execution

Creating a pulse consumer is pretty simple. In this example we'll download all logs pertaining to mochitests on mozilla-inbound and mozilla-central. This example depends on the requests package; you'll need to pip install it if you want to run it locally:

import json
import sys
import traceback

import requests

from mozillapulse.consumers import NormalizedBuildConsumer

def run(args=sys.argv[1:]):
    pulse_args = {
        # a string to identify this consumer when logged into pulse.mozilla.org
        'applabel': 'mochitest-log-consumer',

        # each message contains a topic. Only messages that match the topic specified here will
        # be delivered. '#' is a wildcard, so this topic matches all messages that start with
        # 'unittest'.
        'topic': 'unittest.#',

        # durable queues will store messages inside pulse even if your consumer goes offline for
        # a bit. Otherwise, any messages published while the consumer is not explicitly
        # listening will be lost forever. Keep it set to False for testing purposes.
        'durable': False,

        # the user you created on pulse.mozilla.org
        'user': 'ahal',

        # the password you created for the user
        'password': 'hunter1',

        # a callback that will get invoked on each build event
        'callback': on_build_event,
    }


    pulse = NormalizedBuildConsumer(**pulse_args)

    while True:
        try:
            pulse.listen()
        except KeyboardInterrupt:
            # without this ctrl-c won't work!
            raise
        except IOError:
            # sometimes you'll get a socket timeout. Just call listen again and all will be
            # well. This was fairly common and probably not worth logging.
            pass
        except:
            # it is possible for rabbitmq to throw other exceptions. You likely
            # want to log them and move on.
            traceback.print_exc()


def on_build_event(data, message):
    # each message needs to be acknowledged. This tells the pulse queue that the message has been
    # processed and that it is safe to discard. Normally you'd want to ack the message when you know
    # for sure that nothing went wrong, but this is a simple example so I'll just ack it right away.
    message.ack()

    # pulse data has two main properties, a payload and metadata. Normally you'll only care about
    # the payload.
    payload = data['payload']
    print('Got a {} job on {}'.format(payload['test'], payload['tree']))

    # ignore anything not from mozilla-central or mozilla-inbound
    if payload['tree'] not in ('mozilla-central', 'mozilla-inbound'):
        return

    # ignore anything that's not mochitests
    if not payload['test'].startswith('mochitest'):
        return

    # ignore jobs that don't have the blobber_files property
    if 'blobber_files' not in payload:
        return

    # this is a message we care about, download the structured log!
    for filename, url in payload['blobber_files'].iteritems():
        if filename == 'raw_structured_logs.log':
            print('Downloading a {} log from revision {}'.format(
                   payload['test'], payload['revision']))
            r = requests.get(url, stream=True)

            # save the log
            with open('mochitest.log', 'wb') as f:
                for chunk in r.iter_content(1024):
                    f.write(chunk)
            break

    # now time to do something with the log! See the next section.

if __name__ == '__main__':
    sys.exit(run())

A Note on Pulse Formats

Each pulse publisher can have its own custom topics and data formats. The best way to discover these formats is via a tool called pulse-inspector. To use it, type in the exchange and routing key, click Add binding then Start Listening. You'll see messages come in which you can then inspect to get an idea of what format to expect. In this case, use the following:

Pulse Exchange: exchange/build/normalized
Routing Key Pattern: unittest.#

Consuming Log Data

In the last section we learned how to obtain a structured log. Now we learn how to use it. All structured test logs follow the same structure, which you can see in the mozlog documentation. A structured log is a series of line-delimited JSON objects, so the first step is to decode each line:

# `log` here is the full text of the structured log file downloaded earlier
lines = [json.loads(l) for l in log.splitlines()]
for line in lines:
    pass  # do something with each parsed log line

If you have a large number of log lines, you'll want to use a generator. Another common use case is registering callbacks on specific actions. Luckily, mozlog provides several built-in functions for dealing with these common cases. There are two main approaches, registering callbacks or creating log handlers.
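
For example, a lazy version of the snippet above that never holds the whole file in memory might look like this (a sketch, reading the mochitest.log downloaded earlier):

import json

def read_log(path):
    # Yield one parsed log line at a time instead of building a big list.
    with open(path, 'r') as log:
        for line in log:
            line = line.strip()
            if line:
                yield json.loads(line)

for item in read_log('mochitest.log'):
    if item.get('action') == 'test_end':
        pass  # do something with each test result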

Examples

The rest depends on what you're trying to accomplish. It now becomes a matter of reading the docs and figuring out how to do it. Below are several examples to help get you started.

List all failed tests by registering callbacks:

from mozlog.structured import reader

failed_tests = []
def append_if_failed(log_item):
    if 'expected' in log_item:
        failed_tests.append(log_item['test'])

with open('mochitest.log', 'r') as log:
    iterator = reader.read(log)
    action_map = { 'test_end': append_if_failed }
    reader.each_log(iterator, action_map)

print('\n'.join(failed_tests))

List the time it took to run each test using a log handler:

import json

from mozlog.structured import reader

class TestDurationHandler(reader.LogHandler):
    test_duration = {}
    start_time = None

    def test_start(self, item):
        self.start_time = item['timestamp']

    def test_end(self, item):
        duration = item['timestamp'] - self.start_time
        self.test_duration[item['test']] = duration

handler = TestDurationHandler()
with open('mochitest.log', 'r') as log:
    iterator = reader.read(log)
    reader.handle_log(iterator, handler)

print(json.dumps(handler.test_duration, indent=2))

How to consume the log is really up to you. The built-in methods can be helpful, but are by no means required. Here is a more complicated example that receives structured logs over a socket, and spawns an arbitrary number of threads to process and execute callbacks on them.

If you have questions, comments or suggestions, don't hesitate to speak up!

Finally, I'd also like to credit Ahmed Kachkach, an intern who not only worked on structured logging in mochitest over the summer, but also created the system that manages pulse users and queues.

William Reynolds: Removing “Legacy” vouches on Mozillians.org

We announced changes to our vouching system on mozillians.org on July 29. These changes require you to receive a new vouch by December 18 to keep your vouched status. On that day we will remove “Legacy” vouches, which are vouches that do not have a description and were made before July 29. This is the last step in having the site fully transition to the improved vouching system that gives a shared understanding of vouching and describes each vouch.

Being “vouched” means you have made a meaningful contribution to the Project and, because of that, have access to special content like all profiles on mozillians.org, certain content on Air Mozilla and Mozilla Moderator, and you receive messages that are sent to vouched Mozillians. Having to get re-vouched means our community directory, and vouching overall, is more meaningful.

Since we first announced this change, 3,600 out of the 6,000 Mozillians have been re-vouched. Cheers! That also means about 2,400 will not be, unless they get vouched by a Mozillian who has vouching permissions by December 18.

Here’s what you need to do:

– Check your profile to see if you have a new vouch (anything other than a “Legacy vouch”). All Summit 2013 participants and paid staff have already received a new vouch. If you don’t have a new vouch, ask someone who knows your contributions to vouch for you.

– Help those who have made meaningful contributions, get a new vouch (if they need one). You can vouch for others if you have three vouches or more on your profile.

All “Legacy vouches” (those before July 29) will be removed on December 18, and only contributors with a new (non-Legacy) vouch will remain vouched. Losing your vouched status means you will not be able to access vouched Mozillians content or get Mozillians email communications until someone vouches for you.

You can learn more on the Vouching FAQ wiki page.

Pomax: Let's make a Firefox Extension, the painless way

Ever had a thing you really wanted to customise about Firefox, but you couldn't because it wasn't in any regular menu, advanced menu, or about:config?

For instance, you want to be able to delete elements on a page for peace of mind from the context menu. How the heck do you do that? Well, with the publication of the new node-based jpm, the answer to that question is "pretty dang simply"...

Let's make our own Firefox extension with a "Delete element" option added to the context menu:

a screenshot of the Firefox page context menu with a 'delete element' option

We're going to make that happen in five steps.

  1. Install jpm -- in your terminal simply run: npm install -g jpm (make sure you have node.js installed) and done (this is mostly prerequisite to developing an extension, so you only have to do this once, and then never again. For future extensions, you start at step 2!)
  2. Create a dir for working on your extension wherever you like, navigate to it in the terminal and run: jpm init to set up the standard files necessary to build your extension. Good news: it's very few files!
  3. Edit the index.js file that command generated, writing whatever code you need to do what you want to get done,
  4. Turn your code into an .xpi extension by running: jpm xpi,
  5. Install the extension by opening the generated .xpi file with Firefox

Of course, step (3) is the part that requires some effort, but let's run through this together. We're going to pretty much copy/paste the code straight from the context menu API documentation:

      // we need to make sure we have a hook into "things" we click on:
  1:  var self = require("sdk/self");

      // and we'll be using the context menu, so let's make sure we can:
  2:  var contextMenu = require("sdk/context-menu");

      // let's add a menu item!
  3:  var menuItem = contextMenu.Item({
        // the label is pretty obvious...
  4:    label: "Delete Element",

        // the context tells Firefox which things should have this in their context
        // menu, as there are quite a few elements that get "their own" menu,
        // like "the page" vs "an image" vs "a link". .. We pretty much want
        // everything on a page, so we make that happen:
  5:    context: contextMenu.PredicateContext(function(data) { return true; }),

        // and finally the script that runs when we select the option. Delete!
  6:    contentScript: 'self.on("click", function (node, data) { node.outerHTML = ""; });'
      });

The only changes here are that we want "delete" for everything, so the context is simply "for anything that the context menu opens up on, consider that a valid context for our custom script" (which we do by using the widest context possible on line 5), and of course the script itself is different because we want to delete nodes (line 6).

The contentScript property is a string, so we're a little restricted in what we can do without all manner of fancy postMessages, but thankfully we don't need it: the addon mechanism will always call the contentScript function with two arguments, "node" and "data", and the "node" argument is simply the HTML element you clicked on, which is what we want to delete. So we do! We don't even try to be clever here, we simply set the element's .outerHTML property to an empty string, and that makes it vanish from the page.

If you expected more work, then good news: there isn't any, we're already done! Seriously: run jpm run yourself to test your extension, and after verifying that it indeed gives you the new "Delete element" option in the context menu and deletes nodes when used, move on to steps (4) and (5) for the ultimate control of your browser.

Because here's the most important part: the freedom to control your online experience, and Firefox, go hand in hand.

Mark Côté: Searching Bugzilla

BMO currently supports five—count ‘em, five—ways to search for bugs. Whenever you have five different ways to perform a similar function, you can be pretty sure the core problem is not well understood. Search has been rated, for good reason, one of the least compelling features of Bugzilla, so the BMO team want to dig in there and make some serious improvements.

At our Portland get-together a couple weeks ago, we talked about putting together a vision for BMO. It’s a tough problem, since BMO is used for so many different things. We did, however, manage to get some clarity around search. Gerv, who has been involved in the Bugzilla project for quite some time, neatly summarized the use cases. People search Bugzilla for only two reasons:

  • to find a set of bugs, or
  • to find a specific bug.

That’s it. The fact that BMO has five different searches, though, means either we didn’t know that, or we just couldn’t find a good way to do one, or the other, or both.

We’ve got the functionality of the first use case down pretty well, via Advanced Search: it helps you assemble a set of criteria of almost limitless specificity that will result in a list of bugs. It can be used to determine what bugs are blocking a particular release, what bugs a particular person has assigned to them, or what bugs in a particular Product have been fixed recently. Its interface is, admittedly, not great. Quick Search was developed as a different, text-based approach to Advanced Search; it can be quicker to use but definitely isn’t any more intuitive. Regardless, Advanced Search fulfills its role fairly well.

The second use of Search is how you’d answer the question, “what was that bug I was looking at a couple weeks ago?” You have some hazy recollection of a bug. You have a good idea of a few words in the summary, although you might be slightly off, and you might know the Product or the Assignee, but probably not much else. Advanced Search will give you a huge, useless result set, but you really just want one specific bug.

This kind of search isn’t easy; it needs some intelligence, like natural-language processing, in order to give useful results. Bugzilla’s solutions are the Instant and Simple searches, which eschew the standard Bugzilla::Search module that powers Advanced and Quick searches. Instead, they do full-text searches on the Summary field (and optionally in Comments as well, which is super slow). The results still aren’t very good, so BMO developers tried outsourcing the feature by adding a Google Search option. But despite Google being a great search engine for the web, it doesn’t know enough about BMO data to be much more useful, and it doesn’t know about new or confidential bugs at all.

Since Bugzilla’s search engines were originally written, however, there have been many advances in the field, especially in FLOSS. This is another place where we need to bring Bugzilla into the modern world; MySQL full-text searches are just not good enough. In the upcoming year, we’re going to look into new approaches to search, such as running different databases in tandem to exploit their particular abilities. We plan to start with experiments using Elasticsearch, which, as the name implies, is very good at searching. By standing up an instance beside the main MySQL db and mirroring bug data over, we can refer specific-bug searches to it; even though we’ll then have to filter based on standard bug-visibility rules, we should have a net win in search times, especially when searching comments.
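
The post describes a plan rather than an implementation, but as a purely illustrative sketch of the tandem-database idea, mirroring a bug into Elasticsearch and running a specific-bug search against it could look roughly like this (index layout and field names are made up, not BMO’s actual design, and the results would still need to be filtered by the usual bug-visibility rules):

from elasticsearch import Elasticsearch  # the official Python client

es = Elasticsearch(['localhost:9200'])

# Mirror a bug's searchable fields into the index whenever the bug changes in MySQL.
es.index(index='bugs', doc_type='bug', id=12345, body={
    'summary': 'Searching for one specific bug returns a huge result set',
    'product': 'Bugzilla',
})

# A "specific bug" search: fuzzy full-text match on the summary field.
results = es.search(index='bugs', body={
    'query': {'match': {'summary': 'specific bug huge result set'}},
})
for hit in results['hits']['hits']:
    print(hit['_id'], hit['_source']['summary'])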

In sum, Mozilla developers, we understand your tribulations with Bugzilla search, and we’re on it. After all, we all have a reputation to maintain as the Godzilla of Search Engines!

Henrik Skupin: Firefox Automation report – week 41/42 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 41 and 42.

With the beginning of October we also have some minor changes in responsibilities. While our team members from SoftVision mainly take care of any kind of Mozmill test-related requests and related CI failures, Henrik is doing all the rest, including the framework and the maintenance of Mozmill CI.

Highlights

With support for testing all locales in Mozmill-CI for any Firefox beta and final release, Andreea finished her blacklist patch. With that we can easily mark locales not to be tested, and get rid of the long whitelist entries.

We spun up our first OS X 10.10 machine in our staging environment of Mozmill CI for testing the new OS version. We hit a couple of issues, especially some incompatibilities with mozrunner, which need to be fixed before we can start running our tests on 10.10.

In the second week of October Teodor Druta joined the Softvision team, and he will assist all the others with working on Mozmill tests.

But we also had to fight a lot with Flash crashes on our testing machines. We have seen about 23 crashes on Windows machines per day, and that all with the regular release version of Flash, which we re-installed because a crash we had seen before was fixed. But the healthy period didn't last long, and we had to revert back to the debug version without the protected mode. Let's see for how long we have to keep the debug version active.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 41 and week 42.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 41 and week 42.

Andy McKay: Developers First

A while back we developed Marketplace Payments. The first version of those was for Firefox OS and it was tough. There were lots of things happening at once: building out a custom API with a payment provider, a backend to talk to our payment provider through multiple security hoops, integrating the relatively new Persona, working on the Trusted UI and mozPay, and so on.

At the moment we are prototyping and shipping desktop payments as part of our final steps in Marketplace Payments. One thing that became clear a while ago was that desktop payments are much, much, much easier to use, test and debug.

Desktop payments are easier for the developers who work on payments. That means they are easier to get team members working on, easier to demo, easier to record, easier to debug, easier to test and so on. That dramatically decreases the development time.

In the meantime we've also built out things that make this much easier: a Docker development environment that sets things up correctly and a fake backend so you don't need to process money to test things out.

Hindsight is wonderful thing, but at the time we were actively discouraged from doing desktop development. "Mobile first" and "Don't slow down mobile development".

But inadvertently we slowed down mobile development by not being developer first.

Nicholas Nethercote: Using Gmail filters to identify important Bugzilla mail in 2014

Many email filtering systems are designed to siphon each email into a single destination folder. Usually you have a list of rules which get applied in order, and as soon as one matches an email the matching process ends.

Gmail’s filtering system is different; it’s designed to add any number of labels to each email, and the rules don’t get applied in any particular order. Sometimes it’s really useful to be able to apply multiple labels to an email, but if you just want to apply one in a fashion that emulates folders, it can be tricky.

So here’s a non-trivial example of how I filter bugmail into two “folders”. The first “folder” contains high-priority bugmail.

  • Review/feedback/needinfo notifications.
  • Comments in bugs that I filed or am assigned to or am CC’d to.
  • Comments in secure bugs.
  • Comments in bugs in the DMD and about:memory components.

For the high priority bugmail, on Gmail’s “Create a Filter” screen, in the “From:” field I put:

bugzilla-daemon@mozilla.org

and in the “Has the words:” field I put:

"you are the assignee" OR "you reported" OR "you are on the CC list" OR subject:"granted:" OR subject:"requested:" OR subject:"canceled:" OR subject:"Secure bug" OR "Product/Component: Core :: DMD" OR "Product/Component: Toolkit :: about:memory" OR "Your Outstanding Requests"

For the low priority bugmail, on Gmail’s “Create a Filter” screen, in the “From:” field put:

bugzilla-daemon@mozilla.org

and in the “Doesn’t have:” field put:

("you are the assignee" OR "you reported" OR "you are on the CC list" OR subject:"granted:" OR subject:"requested:" OR subject:"canceled:" OR subject:"Secure bug" OR "Product/Component: Core :: DMD" OR "Product/Component: Toolkit :: about:memory" OR "Your Outstanding Requests")

(I’m not certain if the parentheses are needed here. It’s otherwise identical to the contents in the previous case.)

I’ve modified them a few times and they work very well for me. Everyone else will have different needs, but this might be a useful starting point.

This is just one way to do it. See here for an alternative way. (Update: Byron Jones pointed out that my approach assumes that the wording used in email bodies won’t change, and so the alternative is more robust.)

Finally, if you’re wondering about the “in 2014” in the title of this post, it’s because I wrote a very similar post four years ago, and my filters have evolved slightly since then.

Will Kahn-Greene: Dennis v0.6 released! Line numbers, double vowels, better cli-fu, and better output!

What is it?

Dennis is a Python command line utility (and library) for working with localization. It includes:

  • a linter for finding problems in strings in .po files like invalid Python variable syntax which leads to exceptions
  • a template linter for finding problems in strings in .pot files that make translator's lives difficult
  • a statuser for seeing the high-level translation/error status of your .po files
  • a translator for strings in your .po files to make development easier

v0.6 released!

Since v0.5, I've done the following:

  • Rewrote the command line handling using click and added an exception handler.
  • Merged the lint and linttemplate commands. Why should you care which file you're linting when the linter can figure it out for you?
  • Added the whimsical double vowel transform.
  • Added line numbers in the lint output. This will make it possible to find those pesky problematic strings in your .po/.pot files.
  • Added a line reporter to the linter.

Getting pretty close to what I want for a 1.0, so I'm pretty excited about this version.

Denise update

I've updated Denise with the latest Dennis and moved it to a better url. Lint your .po/.pot files via web service using http://denise.paas.allizom.org/.

Where to go for more

For more specifics on this release, see here: http://dennis.readthedocs.org/en/latest/changelog.html#version-0-6-december-16th-2014

Documentation and quickstart here: http://dennis.readthedocs.org/en/v0.6/

Source code and issue tracker here: https://github.com/willkg/dennis

Source code and issue tracker for Denise (Dennis-as-a-service): https://github.com/willkg/denise

6 out of 8 employees said Dennis helps them complete 1.5 more deliverables per quarter.

Michael KaplyManaging Firefox with Group Policy and PolicyPak

A lot of people ask me how to manage Firefox using Windows Group Policy. To that end, I have been working with a company called PolicyPak to help enhance their product to have more of the features that people are asking for (not just controlling preferences.) It's taken about a year, but the results are available for download now.

You can now manage the following things (and more) using PolicyPak, Group Policy and Firefox:

  • Set and lock almost all preference settings (homepage, security, etc) plus most settings in about:config
  • Set site specific permissions for pop-ups, cookies, camera and microphone
  • Add or remove bookmarks on the toolbar or in the bookmarks folder
  • Blacklist or whitelist any type of add-on
  • Add or remove certificates
  • Disable private browsing
  • Turn off crash reporting
  • Prevent access to local files
  • Always clear saved passwords
  • Disable safe mode
  • Remove Firefox Sync
  • Remove various buttons from Options

If you want to see it in action, you can check out these videos.

And if you've never heard of PolicyPak, you might have heard of the guy who runs it - Jeremy Moskowitz. He's a Group Policy MVP and literally wrote the book on Group Policy.

On a final note, if you decide to purchase, please let them know you heard about it from me.

Jennie Rose HalperinLeaving Mozilla as staff

December 31 will be my last day as paid staff on the Community Building Team at Mozilla.

One year ago, I settled into a non-stop flight from Raleigh, NC to San Francisco and immediately fell asleep. I was exhausted; it was the end of my semester and I had spent the week finishing a difficult databases final, which I emailed to my professor as soon as I reached the hotel, marking the completion of my coursework in Library Science and the beginning of my commitment to Mozilla.

The next week was one of the best of my life. While working, hacking, and having fun, I started on the journey that has carried me through the past exhilarating months. I met more friendly faces than I could count and felt myself becoming part of the Mozilla community, which has embraced me. I’ve been proud to call myself a Mozillian this year, and I will continue to work for the free and open Web, though currently in a different capacity as a Rep and contributor.

I’ve met many people through my work and have been universally impressed with your intelligence, drive, and talent. To David, Pierros, William, and particularly Larissa, Christie, Michelle, and Emma, you have been my champions and mentors. Getting to know you all has been a blessing.

I’m not sure what’s next, but I am happy to start on the next step of my career as a Mozillian, a community mentor, and an open Web advocate. Thank you again for this magical time, and I hope to see you all again soon. Let me know if you find yourself in Boston! I will be happy to hear from you and pleased to show you around my hometown.

If you want to reach out, find me on IRC: jennierose. All the best wishes for a happy, restful, and healthy holiday season.

Mike HommeyOne step closer to git push to mercurial

In case you missed it, I’m working on a new tool to use mercurial remotes in git. Since my previous post, I landed several fixes making clone and pull more reliable:

  • Of 247316 unique changesets in the various mozilla-* repositories, now only two (but both in fact come from the same patch, one of the changesets being a backport to aurora of the other) are “corrupted” because their mercurial dates have a timezone that includes seconds.
  • Of 23542 unique changesets in the canonical mercurial repository, only three are “corrupted” because their raw mercurial data contains, for an unknown reason, a whitespace after the timezone.

By corrupted, here, I mean that the round-trip hg->git->hg doesn’t lead to matching their sha1. They will be fixed eventually, but I haven’t decided how yet, because they’re really edge cases. They’re old enough that they don’t really matter for push anyways.

Pushing to mercurial, however, is still not there, but it’s getting closer. It involves several operations:

  • Negotiating with the mercurial server what it doesn’t have that we do.
  • Creating mercurial changesets, manifests and files for local git commits that were not imported from mercurial.
  • Creating a bundle of the mercurial changesets, manifests and files that we have that the server doesn’t.
  • Pushing that bundle to the server.

The first step is mostly covered by the pull code, which does a similar negotiation. I now have the third step covered (although I cheated around the “corruptions” mentioned above):

$ git clone hg::http://selenic.com/hg
Cloning into 'hg'...
(...)
Checking connectivity... done.
$ cd hg
$ git hgbundle > ../hg.hg
$ mkdir ../hg2
$ cd ../hg2
$ hg init
$ hg unbundle ../hg.hg
adding changesets
adding manifests
adding file changes
added 23542 changesets with 44305 changes to 2272 files
(run 'hg update' to get a working copy)
$ hg verify
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
2272 files, 23542 changesets, 44305 total revisions

Note: that hgbundle command won’t actually exist. It’s just an intermediate step allowing me to work incrementally.

In case you wonder what happens when the bundle contains bad data, mercurial fortunately rejects it:

$ cd ../hg
$ git hgbundle-corrupt > ../hg.hg
$ mkdir ../hg3
$ cd ../hg3
$ hg unbundle ../hg.hg
adding changesets
transaction abort!
rollback completed
abort: integrity check failed on 00changelog.i:3180!

Andrea MarchesiniPriv8 is out!

Download page: click here

What is priv8? This is a Firefox addon that uses part of the security model of Firefox OS to create sandboxed tabs. Each sandbox is a completely separated world: it doesn’t share cookies, storage, and a lots of other stuff with the rest of Firefox, but just with other tabs from the same sandbox.

Each sandbox has a name and a color, so it will always be easy to identify which tab is sandboxed.

Also, these sandboxes are permanent! So, when you open one of them the second time, maybe after a restart, that sandbox will still have the same cookies, same storage, etc - as you left the previous time.

You can also switch between sandboxes using the context menu for the tab.

Here’s an example: with priv8 you can read your gmail webmail in one tab, and another gmail webmail in another tab at the same time. Similarly, you can be logged in on Facebook in one tab and not in the others. This is nice!

Moreover, if you are a web developer and you want to test a website using multiple accounts, priv8 gives you the opportunity to have each account in a sandboxed tab. Much easier than having multiple profiles or logging in and out manually every time!

Is it stable? I don’t know :) It works, but more testing must be done. Help needed!

Known issues?

  • window.open() doesn’t work from a sandbox
  • e10s is not supported yet.
  • The UI must be improved.

Screenshots:

The manager

This is the manager, where you can “manage” your sandboxes.

The panel

The panel is always accessible from the firefox toolbar.

Context menu

The context menu allows you to switch between sandboxes for the current tab. This will reload the tab after the switch.

3 gmail tabs

3 separate instances of Gmail at the same time.

License: Priv8 is released under Mozilla Public License.
Source code: bakulf :: priv8

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1063818] Updates to form.dev-engagement-event
  • [1111954] Updates to Spreadsheet Data in form.dev-engagement-event
  • [1092578] Decide if an email needs to be encrypted at the time it is generated, not at the time it is sent
  • [1107275] Include Build.PL file for bmo/4.2 to install Perl dependencies (useful for Travis CI, etc.)
  • [829358] Changing the name of a private attachment in an unhidden bug results in the name change being sent unencrypted
  • [1104291] The form.web.bounty page does not say it’s a bounty form
  • [1105585] Fix bug bounty form to validate its input more and relax the restriction on the paid field to include -+? suffix
  • [1105155] Indicate that an existing comment has been modified for tracking flags with prefill text
  • [1105745] changes made via the bounty form are not emailed immediately
  • [1111862] HTML code injection in review history page

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Cameron McCormackSubmission

254 pages, eleven and a half years of my life.

Me submitting my thesis at the Monash Institute of Graduate Research office.

My thesis front cover: Authoring and Publishing Adaptive Diagrams.

Now for the months-long wait for the examiners to review it.

Paul RougetFirefox.html screencast

Firefox.html screencast. Contribute: http://github.com/paulrouget/firefox.html.

Youtube video: https://www.youtube.com/watch?v=IBzrCmGVDkA

David HumphreyVideo killed the radio star

One of the personal experiments I'm considering in 2015 is a conscious movement away from video-based participation in open source communities. There are a number of reasons, but the main one is that I have found the preference for "realtime," video-based communication media inevitably leads to ever narrowing circles of interaction, and eventually, exclusion.

I'll speak about Mozilla, since that's the community I know best, but I suspect a version of this is happening in other places as well. At some point in the past few years, Mozilla (the company) introduced a video conferencing system called Vidyo. It's pretty amazing. Vidyo makes it trivial to setup a virtual meeting with many people simultaneously or do a 1:1 call with just one person. I've spent hundreds of hours on Vidyo calls with Mozilla, and other than the usual complaints one could level against meetings in general, I've found them very productive and useful, especially being able to see and hear colleagues on the other side of the country or planet.

Vidyo is so effective that for many parts of the project, it has become the default way people interact. If I need to talk to you about a piece of code, for example, it would be faster if we both just hopped into Vidyo and spent 10 minutes hashing things out. And so we do. I'm guilty of this.

I'm talking about Vidyo above, but substitute Skype or Google Hangouts or appear.in or some cool WebRTC thing your friend is building on Github. Video conferencing isn't a negative technology, and provides some incredible benefits. I believe it's part of what allows Mozilla to be such a successful remote-friendly workplace (vs. project). I don't believe, however, that it strengthens open source communities in the same way.

It's possible on Vidyo to send an invitation URL to someone without an account (you need an account to use it, by the way). You have to be invited, though. Unlike irc, for example, there is no potential for lurking (I spent years learning about Mozilla code by lurking on irc in #developers). You're in or you're out, and people need to decide which it will be. Some people work around this by recording the calls and posting them online. The difficulty here is that doing so converts what was participation into performance--one can watch what happened, but not engage it, not join the conversation and therefore the decision making. And the more we use video, the more likely we are to have that be where we make decisions, further making it difficult for those not in the meeting to be part of the discussion.

Even knowing that decisions have been made becomes difficult in a world where those decisions aren't sticky, and go un-indexed. If we decided in a mailing list, bug, irc discussion, Github issue, etc. we could at least hope to go back and search for it. So too could interested members of the community, who may wish to follow along with what's happening, or look back later when the details around how the decision came to be become important.

I'll go further and suggest that in global, open projects, the idea that we can schedule a "call" with interested and affected parties is necessarily flawed. There is no time we can pick that has us all, in all timezones, able to participate. We shouldn't fool ourselves: such a communication paradigm is necessarily geographically rooted; it includes people here, even though it gives the impression that everyone and anyone could be here. They aren't. They can't be. The internet has already solved this problem by privileging asynchronous communication. Video is synchronous.

Not everything can or should be open and public. I've found that certain types of communication work really well over video, and we get into problems when we do too much over email, mailing lists, or bugs. For example, a conversation with a person that requires some degree of personal nuance. We waste a lot of time, and cause unnecessary hurt, when we always choose open, asynchronous, public communication media. Often scheduling an in person meeting, getting on the phone, or using video chat would allow us to break through a difficult impasse with another person.

But when all we're doing is meeting as a group to discuss something public, I think it's worth asking the question: why aren't we engaging in a more open way? Why aren't we making it possible for new and unexpected people to observe, join, and challenge us? It turns out it's a lot easier and faster to make decisions in a small group of people you've pre-chosen and invited; but we should consider what we give up in the name of efficiency, especially in terms of diversity and the possibility of community engagement.

When I first started bringing students into open source communities like Mozilla, I liked to tell them that what we were doing would be impossible with other large products and companies. Imagine showing up at the offices of Corp X and asking to be allowed to sit quietly in the back of the conference room while the engineers all met. Being able to take them right into the heart of a global project, uninvited, and armed only with a web browser, was a powerful statement; it says: "You don't need permission to be one of us."

I don't think that's as true as it used to be. You do need permission to be involved with video-only communities, where you literally have to be invited before taking part. Where most companies need to guard against leaks and breaches of many kinds, an open project/company needs to regularly audit to ensure that its process is porous enough for new things to get in from the outside, and for those on the inside to regularly encounter the public.

I don't know what the right balance is exactly, and as with most aspects of my life where I become unbalanced, the solution is to try swinging back in the other direction until I can find equilibrium. In 2015 I'm going to prefer modes of participation in Mozilla that aren't video-based. Maybe it will mean that those who want to work with me will be encouraged to consider doing the same, or maybe it will mean that I increasingly find myself on the outside. Knowing what I do of Mozilla, and its expressed commitment to working open, I'm hopeful that it will be the former. We'll see.

Daniel StenbergCan curl avoid to be in a future funnily named exploit that shakes the world?

During this year we’ve seen heartbleed and shellshock strike (and a  few more big flaws that I’ll skip for now). Two really eye opening recent vulnerabilities in projects with many similarities:

  1. Popular cornerstones of open source stacks and internet servers
  2. Mostly run and maintained by volunteers
  3. Mature projects that have been around since “forever”
  4. Projects believed to be fairly stable and relatively trustworthy by now
  5. A myriad of features, switches and code that build on many platforms, with some parts of code only running on a rare few
  6. Written in C in a portable style

Does it sound like the curl project to you too? It does to me. Sure, this description also matches a slew of other projects but I lead the curl development so let me stay here and focus on this project.

Are we in jeopardy? I honestly don’t know, but I want to explain what we do in our project in order to minimize the risk and maximize our ability to find problems on our own before they become serious attack vectors somewhere!

previous flaws

There’s no secret that we have let security problems slip through at times. We’re right now working toward our 143rd release during our roughly 16 years of life-time. We have found and announced 28 security problems over the years. Looking at these found problems, it is clear that very few security problems are discovered quickly after introduction. Most of them linger around for several years until found and fixed. So, realistically speaking based on history: there are security bugs still in the code, and they have probably been present for a while already.

code reviews and code standards

We try to review all patches from people without push rights in the project. It would probably be a good idea to review all patches before they go in for real, but that just wouldn’t work with the (lack of) manpower we have in the project while we at the same time want to develop curl, move it forward and introduce new things and features.

We maintain code standards and formatting to keep code easy to understand and follow. We keep individual commits smallish for easier review now or in the future.

test cases

As simple as it is, we test that the basic stuff works. We don’t and can’t test everything, but having test cases for most things gives us the confidence to change code when we see problems, as we then remain fairly sure things keep working the same way as long as the tests go through. In projects with much less test coverage, you become much more conservative with what you dare to change, and that also makes you more vulnerable.

We always want more test cases, and we want to improve on how we add them: ideally we add new test cases whenever we add new features, and also when we fix bugs, so that we know we don’t introduce the same bug again in the future.

static code analysis

We regularly scan our code base using static code analyzers. Both clang-analyzer and coverity are good tools, and they help us by pointing out code that looks wrong or suspicious. By making sure we have very few or no such flaws left in the code, we minimize the risk. A static code analyzer is better than run-time tools for cases where it can check code flows that are hard to repeat in a local environment.
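
As a rough sketch (the exact invocation will differ from the project’s real scripts), clang-analyzer is typically pointed at an autotools-style build by wrapping configure and make with scan-build:

$ scan-build ./configure
$ scan-build make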

valgrind

Valgrind is an awesome tool to detect memory problems at run-time: leaks, or just stupid uses of memory or related functions. We have our test suite automatically use valgrind when it runs tests, if it is present, and it helps us make sure that all situations we test for are also error-free from valgrind’s point of view.
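
Outside of the test harness, a one-off manual check looks roughly like this (the binary path and URL are only illustrative):

$ valgrind --leak-check=full --error-exitcode=1 ./src/curl -o /dev/null https://example.com/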

autobuilds

Building and testing curl on a plethora of platforms non-stop is also useful to make sure we don’t depend on behaviors of particular library implementations or non-standard features and more. Testing it all is basically the only way to make sure everything keeps working over the years while we continue to develop and fix bugs. We would of course be even better off with more platforms that would test automatically and with more developers keeping an eye on problems that show up there…

code complexity

Arguably, one of the best ways to avoid security flaws, and bugs in general, is to keep the source code as simple as possible. Complex functions need to be broken down into smaller functions that are possible to read and understand. A good way to identify functions suitable for fixing is pmccabe.
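
For instance, something like the following surfaces the most complex functions, since pmccabe prints a complexity score as the first field of each line (the path is illustrative):

$ pmccabe lib/*.c | sort -rn | head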

essential third parties

curl and libcurl are usually built to use a whole bunch of third party libraries in order to perform all the functionality. In order to not have any of those uses turn into a source for trouble we must of course also participate in those projects and help them stay strong and make sure that we use them the proper way that doesn’t lead to any bad side-effects.

You can help!

All this takes time, energy and system resources. Your contributions and help will be appreciated wherever among these tasks you can pitch in. We could do more of all this, more often and more thoroughly, if only we were more people involved!

Julien VehentStripe's AWS-Go and uploading to S3

Yesterday, I discovered Stripe's AWS-Go library, and the magic of auto-generated API clients (which is one fascinating topic that I'll have to investigate for MIG).

I took on the exercise of writing a simple file upload tool using aws-go. It was fairly easy to achieve, considering the complexity of AWS's APIs. I would have to evaluate aws-go further before recommending it as a comprehensive AWS interface, but so far it seems complete. Check out http://godoc.org/github.com/stripe/aws-go/gen for a detailed doc.

The source code is below. It reads credentials from ~/.awsgo:

$ cat ~/.awsgo
[credentials]
    accesskey = "AKI...."
    secretkey = "mw0...."

It takes a file to upload as the only argument, and returns the URL where it is posted.

$ ./s3up s3up
https://s3.amazonaws.com/testawsgo/s3up

AWS-Go is not revolutionary compared to python & boto, but benefits from Go’s very clean approach to programming. And getting rid of install dependencies, pip and python{2{6,7},3} hell is kinda nice!

package main

import (
	"code.google.com/p/gcfg"
	"fmt"
	"github.com/stripe/aws-go/aws"
	"github.com/stripe/aws-go/gen/s3"
	"os"
)

// conf takes an AWS configuration from a file in ~/.awsgo
// example:
//
// [credentials]
//    accesskey = "AKI...."
//    secretkey = "mw0...."
//
type conf struct {
	Credentials struct {
		AccessKey string
		SecretKey string
	}
}

func main() {
	var (
		err         error
		conf        conf
		bucket      string = "testawsgo" // change to your convenience
		fd          *os.File
		contenttype string = "binary/octet-stream"
	)
	// obtain credentials from ~/.awsgo
	credfile := os.Getenv("HOME") + "/.awsgo"
	_, err = os.Stat(credfile)
	if err != nil {
		fmt.Println("Error: missing credentials file in ~/.awsgo")
		os.Exit(1)
	}
	err = gcfg.ReadFileInto(&conf, credfile)
	if err != nil {
		panic(err)
	}

	// create a new client to S3 api
	creds := aws.Creds(conf.Credentials.AccessKey, conf.Credentials.SecretKey, "")
	cli := s3.New(creds, "us-east-1", nil)

	// open the file to upload
	if len(os.Args) != 2 {
		fmt.Printf("Usage: %s <inputfile>\n", os.Args[0])
		os.Exit(1)
	}
	fi, err := os.Stat(os.Args[1])
	if err != nil {
		fmt.Printf("Error: no input file found in '%s'\n", os.Args[1])
		os.Exit(1)
	}
	fd, err = os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer fd.Close()

	// create a bucket upload request and send
	objectreq := s3.PutObjectRequest{
		ACL:           aws.String("public-read"),
		Bucket:        aws.String(bucket),
		Body:          fd,
		ContentLength: aws.Integer(int(fi.Size())),
		ContentType:   aws.String(contenttype),
		Key:           aws.String(fi.Name()),
	}
	_, err = cli.PutObject(&objectreq)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
	} else {
		fmt.Printf("%s\n", "https://s3.amazonaws.com/"+bucket+"/"+fi.Name())
	}

	// list the content of the bucket
	listreq := s3.ListObjectsRequest{
		Bucket: aws.StringValue(&bucket),
	}
	listresp, err := cli.ListObjects(&listreq)
	// only one error check is needed here; report the failure and skip listing
	if err != nil {
		fmt.Printf("Error: %v\n", err)
	} else {
		fmt.Printf("Content of bucket '%s': %d files\n", bucket, len(listresp.Contents))
		for _, obj := range listresp.Contents {
			fmt.Println("-", *obj.Key)
		}
	}
}

PomaxLet's make a Firefox Extension, the painless way

Ever had a thing you really wanted to customise about Firefox, but you couldn't because it wasn't in any regular menu, advanced menu, or about:config?

For instance, you want to be able to delete elements on a page for peace of mind from the context menu. How the heck do you do that? Well, with the publication of the new node-based jpm, the answer to that question is "pretty dang simply"...

Let's make our own Firefox extension with a "Delete element" option added to the context menu:

a screenshot of the Firefox page context menu with a 'delete element' option

We're going to make that happen in five steps.

  1. Install jpm -- in your terminal simply run: npm install -g jpm (make sure you have node.js installed) and done (this is mostly prerequisite to developing an extension, so you only have to do this once, and then never again. For future extensions, you start at step 2!)
  2. Create a dir for working on your extension wherever you like, navigate to it in the terminal and run: jpm init to set up the standard files necessary to build your extension. Good news: it's very few files!
  3. Edit the index.js file that command generated, writing whatever code you need to do what you want to get done,
  4. Turn your code into an .xpi extension by running : jpm xpi,
  5. Install the extension by opening the generated .xpi file with Firefox

Of course, step (3) is the part that requires some effort, but let's run through this together. We're going to pretty much copy/paste the code straight from the context menu API documentation:

      // we need to make sure we have a hook into "things" we click on:
  1:  var self = require("sdk/self");

      // and we'll be using the context menu, so let's make sure we can:
  2:  var contextMenu = require("sdk/context-menu");

      // let's add a menu item!
  3:  var menuItem = contextMenu.Item({
        // the label is pretty obvious...
  4:    label: "Delete Element",

        // the context tells Firefox which things should have this in their context
        // menu, as there are quite a few elements that get "their own" menu,
        // like "the page" vs "an image" vs "a link". .. We pretty much want
        // everything on a page, so we make that happen:
  5:    context: contextMenu.PredicateContext(function(data) { return true; }),

        // and finally the script that runs when we select the option. Delete!
  6:    contentScript: 'self.on("click", function (node, data) { node.outerHTML = ""; });'
      });

The only changes here are that we want "delete" for everything, so the context is simply "for anything that the context menu opens up on, consider that a valid context for our custom script" (which we do by using the widest context possible on line 5), and of course the script itself is different because we want to delete nodes (line 6).

The contentScript property is a string, so we're a little restricted in what we can do without all manner of fancy postMessages, but thankfully we don't need it: the addon mechanism will always call the contentScript function with two arguments, "node" and "data", and the "node" argument is simply the HTML element you clicked on, which is what we want to delete. So we do! We don't even try to be clever here, we simply set the element's .outerHTML property to an empty string, and that makes it vanish from the page.

If you expected more work, then good news: there isn't any, we're already done! Seriously: run jpm run yourself to test your extension, and after verifying that it indeed gives you the new "Delete element" option in the context menu and deletes nodes when used, move on to steps (4) and (5) for the ultimate control of your browser.

Because here's the most important part: the freedom to control your online experience, and Firefox, go hand in hand.

Mozilla Open Policy & Advocacy BlogSpotlight on Public Knowledge: A Ford-Mozilla Open Web Fellow Host

(This is the fourth in our series spotlighting host organizations for the 2015 Ford-Mozilla Open Web Fellowship. For years, Public Knowledge has been at the forefront of fighting for citizens and informing complex telecommunications policy to protect people. Working at Public Knowledge, the Fellow will be at the center of emerging policy that will shape the Internet as we know it. Apply to be a Ford-Mozilla Open Web Fellow and use your tech skills at Public Knowledge to protect the Web.)

Spotlight on Public Knowledge: A Ford-Mozilla Open Web Fellow Host
by Shiva Stella, Communications Manager of Public Knowledge

This year has been especially intense for policy advocates passionate about protecting a free and open internet, user protections, and our digital rights. Make no mistake: From net neutrality to the Comcast/Time Warner Cable merger, policy makers will continue to have an outsized influence over the web.

In order to enhance our advocacy efforts, Public Knowledge is hosting a Ford-Mozilla Open Web Fellow. We are looking for a leader with technical skills and drive to defend the internet, focusing on fair-use copyright and consumer protections. There’s a lot of important work to be done, and we know the public could use your help.

Public Knowledge Long

Public Knowledge works steadfastly in the telecommunications and digital rights space. Our goal is to inform the public of key policies that impact and limit a wide range of technology and telecom users. Whether you’re the child first responders fail to locate accurately because you dial 911 from a cell phone or the small business owner who can’t afford to “buy into” the internet “fast lane,” these policies affect your digital rights – including the ability to access, use and own communications tools like your set-top box (which you currently lease forever from your cable company, by the way) and your cell phone (which your carrier might argue can’t be used on a competing network due to copyright law).

There is no doubt that public policy impacts people’s lives, and Public Knowledge is advocating for the public interest at a critical time when special interests are attempting to shape policy that benefits them at our cost or that overlooks an issue’s complexity.

Indeed, in this interconnected world, the right policy outcome isn’t always immediately clear. Location tracking, for example, can impact people’s sense of privacy; and yet, when deployed in the right way, can lead to first responders swiftly locating someone calling 911 from a mobile device. Public Knowledge sifts through the research and makes sure consumers have a seat at the table when these issues are decided.

Public policy in this area can also impact the broader economy, and raises larger questions: Should we have an internet with a “fast lane“ for the relatively few companies that can afford it, and a slow lane for the rest of us? What would be the impact on innovation and small business if we erase net neutrality as we know it?

The answers to these questions require a community of leaders to advocate for policies that serve the public interest. We need to state in clear language the impact of ill-informed policies and how they affect people’s digital rights —including the ability to access, use and own communications tools, as well as the ability to create and innovate.

Even as the U.S. Federal Communications Commission reviews millions of net neutrality comments and considers approving huge mergers that put consumers at risk, the cable industry is busy hijacking satellite bills (STAVRA), stealthily slipping “pro-cable” provisions into legislation that must be passed so 1.5 million satellite subscribers may continue receiving their (non-cable!) service. Public Knowledge shines light on these policies to prevent them from harming innovation or jeopardizing our creative and connected future. To this end we advocate for an open internet and public access to affordable technologies and creative works, engaging policy makers and the public in key policy decisions that affect us all.

Let us be clear: private interests are hoping you won’t notice or just don’t care about these issues. We’re betting that’s not the case. Please apply today to join the Public Knowledge team as a Ford-Mozilla Open Web Fellow to defend the internet we love.


Apply to be a Ford-Mozilla Open Web Fellow. Application deadline for the 2015 Fellowship is December 31, 2014.

Gervase MarkhamFirefoxOS 3 Ideas: Hack The Phone Call

People are brainstorming ideas for FirefoxOS 3, and how it can be more user-centred. Here’s one:

There should be ways for apps to transparently be hooked into the voice call creation and reception process. I want to use the standard dialer and address book that I’m used to (and not have to use replacements written by particular companies or services), and still e.g.:

  • My phone company can write a Firefox OS extension (like TU Go on O2) such that when I’m on Wifi, all calls transparently use that
  • SIP or WebRTC contacts appear in the standard contacts app, but when I press “Call”, it uses the right technology to reach them
  • Incoming calls can come over VoIP, the phone network or any other way and they all look the same when ringing
  • When I dial, I can configure rules such that calls to certain prefixes/countries/numbers transparently use a dial-through operator, or VoIP, or a particular SIM
  • If a person has 3 possible contact methods, it tries them in a defined order, or all simultaneously, or best quality first, or whatever I want

These functions don’t have to be there by default; what I’m arguing for is the necessary hooks so that apps can add them – an app from your carrier, an app from your SIP provider, an app from a dial-through provider, or just a generic app someone writes to define call routing rules. But the key point is, you don’t have to use a new dialer or address book to use these features – they can be UI-less (at least when not explicitly configuring them.)

In other words, I want to give control over the phone call back to the user. At the moment, doing SIP on Android requires a new app. TU Go requires a new app. There’s no way to say “for all international calls, when I’m in the UK, use this dial-through operator”. I don’t have a dual-SIM Android phone, so I’m not sure if it’s possible on Android to say “all calls to this person use SIM X” or “all calls to this network (defined by certain number prefixes) use SIM Y”. But anyway, all these things should be possible on FirefoxOS 3. They may not be popular with carriers, because they will all save the user money. But if we are being user-centric, we should do them.

Benjamin KerensaGive a little

Give by Time Green (CC-BY-SA)

The year is coming to an end and I would encourage you all to consider making a tax-deductible donation (If you live in the U.S.) to one of the following great non-profits:

Mozilla

The Mozilla Foundation is a non-profit organization that promotes openness, innovation and participation on the Internet. We promote the values of an open Internet to the broader world. Mozilla is best known for the Firefox browser, but we advance our mission through other software projects, grants and engagement and education efforts.

EFF

The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.

ACLU

The ACLU is our nation’s guardian of liberty, working daily in courts, legislatures and communities to defend and preserve the individual rights and liberties that the Constitution and laws of the United States guarantee everyone in this country.

Wikimedia Foundation

The Wikimedia Foundation, Inc. is a nonprofit charitable organization dedicated to encouraging the growth, development and distribution of free, multilingual, educational content, and to providing the full content of these wiki-based projects to the public free of charge. The Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia, a top-ten internet property.

Feeding America

Feeding America is committed to helping people in need, but we can’t do it without you. If you believe that no one should go hungry in America, take the pledge to help solve hunger.

Action Against Hunger

ACF International, a global humanitarian organization committed to ending world hunger, works to save the lives of malnourished children while providing communities with access to safe water and sustainable solutions to hunger.

These six non-profits are just a few of the many causes to support, but these ones specifically are playing a pivotal role in protecting the internet, protecting liberties, educating people around the globe, or helping reduce hunger.

Even if you cannot support one of these causes, consider giving this post a share to add visibility to your friends and family and help support these causes in the new year!


Erik VoldHow is the Jetpack/Add-on SDK used?

This is a follow-up post to my What is the Jetpack/Add-on SDK post, in which I wish to discuss the ways Jetpack/Add-on SDK is currently being used (that I know about).

Firefox DevTools

The first obvious place to mention is Firefox DevTools. Not long after the Add-on SDK team was merged with the Firefox DevTools team, Dave Camp began a process of molding the Firefox DevTools code to use the CommonJS module structure supported by the Jetpack SDK loader. Additionally, new DevTools features have been prototyped with CFX/JPM (the SDK’s associated CLIs), such as Valence (aka Firefox Tools Adapter) (source code), Firebug Next (source code), and the WebIDE (source code).

Firefox OS Simulator

The Firefox OS Simulator (source code) was built using the Jetpack/Add-on SDK. One core feature it utilized was the subprocess module (now called child_process for NodeJS parity), and it even used third-party modules to add UI like toolbar buttons and menuitems. Finally, it used the SDK test framework.

Click here to find the Firefox OS Simulator on AMO

Firefox Testpilot

Firefox Test Pilot is an opt-in program by which feedback on things like features is collected. With Test Pilot, Firefox can experiment with new features (alone or in an A/B scenario) to see if and how they are used, how much they are used, and to determine whether or not they need more work.

You can explore the experiments here

Australis

Australis was the codename for Firefox’s UI redesign project. It was prototyped with the Jetpack SDK; the source code is here

New Tab Tiles

The new tab page, which uses tiles (some of which are ads), was also prototyped with the Jetpack/Add-on SDK; the source code is here

Mozilla Labs

Lightbeam (aka Collusion)

Before the Mozilla Labs project ended, parts of the team used the Jetpack/Add-on SDK to develop ideas for Firefox. One highlight was Lightbeam (formerly known as Collusion) (source code), which was the topic of a TED talk that has currently been viewed more than 1.5 million times.

Click here to find Lightbeam on AMO

Prospector

There was also a sub project of MozLabs called Prospector which also used the Jetpack/Add-on SDK to build feature prototypes such as:

Click here for the full AMO list

Old School Add-ons

  • Scriptish which is a fork of Greasemonkey which uses the Jetpack/Add-on SDK loader even though it is still technically an old school add-on.

A few old school add-ons are also using the DevTools loader, such as Firebug 2.

New School Add-ons

Mozilla Add-ons
Community Add-ons

Finally I’d like to mention some of the add-ons that were developed outside of Mozilla, which was one of the primary goals that led to the project’s conception. These are some of my favorites:

Click here to see the full AMO list

  • Note: user counts were taken on Oct 15th 2014 and the numbers will obviously change over time

Addons.Mozilla.Org (AMO)

It should be no surprise that there is a fast track on AMO for extensions built with the Add-on SDK; this is because there is less code to review, and the reviews are generally easier. This is good news for the Mozilla community in three ways: first, there are more people developing add-ons; second, the review times are shorter than they would be otherwise; and last (but not least), reviewing add-ons is easier, which results in more reviewers.

Summary

There are many not-so-obvious ways in which the Jetpack/Add-on SDK is used by, adds value to, and is an essential part of the Mozilla mission and community. Furthermore, all of these use cases now depend on the Jetpack/Add-on SDK and all have to be factored into the team’s decision making, because bugs come from all of these important sources. So the team can no longer merely focus on new school add-on metrics, imho.

Next I want to describe areas the project could work on in the future.

Related Links

Nigel BabuMozlandia - Arrival

Portland. The three words that come to mind are overwhelmed, cold, and exhilarating. Getting there was a right pain, I’d have to admit. Though, flying around the US the weekend after Black Friday isn’t the best idea anyway. According to my rough calculations, it took about 25 hours from takeoff in Delhi to wheels down in Portland. That’s a heck of a lot of time on planes and at airports. But hey, I’ve been doing this for weeks in a row at this point.

At the airport, I ran into people holding up the Mozilla board. As I waited for the shuttle, I was very happy to run into Luke, from the MDN team. We met at the summit and he was a familiar face. We were chatting all the way to the hotel about civic hacking.

This work week is the most exciting Mozilla event that I’ve attended. I’m finally getting to meet a lot of people I know and renewing friendships from the last few events. I started contributing to Mozilla by contributing to the Webdev team. My secret plan at this work week was to meet all the folks from the old Webdev team in person. I’ve known them for more than 3 years and never quite managed to meet everyone in person.

After a quick shower, I decided to step out to the Mozilla PDX office. According to Google Maps, it was a quick walk away and I was trying not to sleep all day despite my body trying to convince me it was a good idea. At the office, I met Fred’s team and we sat around talking for a while. It was good to meet Christie again too! That’s when a wave of exhaustion hit. I didn’t see it coming. Suddenly, I felt sluggish and a warm bed seemed very tempting. After lunch with Jen, Sole, and Matt, I quickly retired to bed.

Sole and the Whale

When I got down after the nap, there was a small group headed to the opening event. This was good, because I got very confused with Google Maps (paper maps were much more helpful).

Whoa, people overload. I walked around a few rounds meeting lots of people. It was fun running into a lot of people from IRC in the flesh. I enjoyed meeting the folks from the Auckland office (I often back them out :P). And I finally met Laura and her team. For a change, I’m visiting bkero’s town this time instead of him visiting mine ;)

The crowd

The rest of the evening is a bit of a blur. Eventually, I was exhausted and walked back to the hotel for a good night’s sleep before the fun really started!

Andy McKaySelf Examination

A few weeks ago we had the Mozilla Mozlandia meet up in Portland. I had a few things on my agenda going into that meeting. My biggest was to critically examine the project my team and I have been working on for almost two years.

That project is Marketplace Payments, which we provide through the Firefox Marketplace for developers. We don't limit what kind of payment system you use in Web Apps, unlike Google or Apple.

In Mozlandia, I was arguing (along with some colleagues) that there really is little point in working on this much anymore. There are many reasons for this, but here's the high level:

  • Providing a payments service that competes against every other web based payment service in existence is outside of our core goals

  • We can't actually compete against every other web based payment service without significant investment

  • Developer uptake doesn't support further investment in the project.

There was mostly agreement on this, so we've agreed to complete our existing work on it and then leave it as it is for a while. We'll watch the metrics, see what happens and make some decisions based on that.

But really the details of this are not that important. What I believe is really, really important is the ability to critically examine your job and projects and examine their worth.

What normally happens is that you get a group of people and tell them to work on project X. They will iterate through features and complete features. And repeat and keep going. And if you don't stop at some point and critically examine what is going on, it will keep repeating. People will find new features, new enhancements, new areas to add to the project. Just as they have been trained to do so. And the project will keep growing.

That's a perfectly normal thing for a team to do. It's harder to call a project done, the features complete and realize that there might be an end.

Normally that happens externally. Sometimes it's done in a positive way, sometimes it's done negatively. In the latter case, people get upset and recriminations and accusations fly. It's not a fun time.

But being able to step aside and declare the project done internally can be hard for one main reason: people fear for their job.

That's what some people said to me in Mozlandia: "Andy, you've just talked yourself out of a job" or "You've just thrown yourself under a bus".

Maybe, but so be it. I have no fear that there's important stuff to be doing at Mozilla and that my awesome team will have plenty to do.

Right, next project.

Update: Marketplace Payments are still there and we are completing the last projects we have for them. But we aren't going to be doing development beyond that on them for a while. Let's see what the data shows.

Doug BelshawBittorrent's Project Maelstrom is 'Firecloud' on steroids

Earlier this week, BitTorrent, Inc. announced Project Maelstrom. The idea is to apply the bittorrent technologies and approaches to more of the web.

Project Maelstrom

Note: if you can’t read the text in the image, it says: “This is a webpage powered by 397 people + You. Not a central server.” So. Much. Win.

The blog post announcing the project doesn’t have lots of details, but a follow-up PC World article includes an interview with a couple of the people behind it.

I think the key thing comes in this response from product manager Rob Velasquez:

We support normal web browsing via HTTP/S. We only add the additional support of being able to browse the distributed web via torrents

This excites me for a couple of reasons. First, I’ve thought on-and-off for years about how to build a website that’s untakedownable. I’ve explored DNS based on the technology powering Bitcoin, experimented with the PirateBay’s now-defunct blogging platform Baywords, and explored the dark underbelly of the web with sites available only through Tor.

Second, Vinay Gupta and I almost managed to get a project off the ground called Firecloud. This would have used a combination of interesting technologies such as WebRTC, HTML5 local storage and DHT to provide distributed website hosting through a Firefox add-on.

I really, really hope that BitTorrent turn this into a reality. I’d love to be able to host my website as a torrent. :-D

Update: People pay more attention to products than technologies, but I’d love to see Webtorrent get more love/attention/exposure.


Comments? Questions? Email me: doug@mozillafoundation.org

Mozilla FundraisingPrivacy-Forward Fundraising

There are a lot of ways that fundraising at Mozilla is very different than the fundraising I’ve done at other non-profit organizations. One of the most striking differences is how our Privacy Principles guide our donor experience, our fundraising systems, … Continue reading

Fabien Cazenave"pip install" & "gem install" without sudo

Following yesterday’s post about using “npm install -g” without root privileges, here are the Python and Ruby counterparts for your beloved OSX or Linux box.

By default, pip install and gem install try to install stuff in /usr/, which requires root privileges. Hence, most users will “naturally” do a sudo to perform the install — which is, in my opinion at least, a very bad idea (do you really want to give root privileges to packages that haven’t been reviewed?). Fortunately, there’s more than the default setting.

Python: pip install --user

With Python 2.6 and later you can avoid “sudoing” your pip install by using the --user argument (thanks @cmdevienne for the tip!). Let’s test this with html-linter:

$ pip install --user html-linter

By default on Linux and OSX (non-framework builds) this will install your package into ~/.local, which is just fine for me. All executables are in ~/.local/bin/, which is included in my $PATH, and all Python libraries are in ~/.local/lib/python2.7/. The world couldn’t be any better.

You can specify a custom destination by setting the PYTHONUSERBASE environment variable:

$ export PYTHONUSERBASE=/myappenv
$ pip install --user html-linter

Of course, you’ll have to add that to your $PATH to make it work. You can add the following lines to your ~/.profile:

export PYTHONUSERBASE=/myappenv
PATH="$PYTHONUSERBASE/bin:${PATH}"

The only downside (compared to npm) is that you’ll have to remember to use the --user argument when installing Python packages. If there’s a way to make it the default mode, please let me know.

EDIT: a good workaround is to define a custom pip function in your ~/.bash_aliases (or bashrc, zshrc, whatever), as suggested in comment #1.
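
Something along these lines should do the trick (a sketch for bash/zsh; the function simply forces --user for the install sub-command):

# in ~/.bash_aliases: always pass --user to "pip install"
pip() {
  if [ "$1" = "install" ]; then
    shift
    command pip install --user "$@"
  else
    command pip "$@"
  fi
}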

Ruby: gem install --user-install

gem’s --user-install argument is quite similar. One good thing is that you can easily make it the default mode:

$ echo "gem: --user-install" >> ~/.gemrc

Now let’s try that with the most valuable gem I know:

$ gem install vimgolf
Fetching: vimgolf-0.4.6.gem (100%)
WARNING:  You don't have /home/kaze/.gem/ruby/1.8/bin in your PATH,
          gem executables will not run.

As you can see, gem installs everything in ~/.gem by default; unfortunately, the file structure does not allow putting executables in the same ~/.local/bin/ directory. Never mind, we’ll add those ~/.gem/ruby/*/bin/ directories to the $PATH manually by adding these lines to the ~/.profile:

for dir in $HOME/.gem/ruby/*; do
  [ -d "$dir/bin" ] && PATH="${dir}/bin:${PATH}"
done

Source your ~/.profile, you’re done.

Joel MaherTracking Firefox performance as we uplift – the volume of alerts we get

For the last year, I have been focused on ensuring we look at the alerts generated by Talos.  For the last 6 months I have also looked a bit more carefully at the uplifts we do every 6 weeks.  In fact we wouldn’t generate alerts when we uplifted to beta because we didn’t run enough tests to verify a sustained regression in a given time window.

Lets look at data, specifically the volume of alerts:

Trend of improvements/regressions from Firefox 31 to 36 as we uplift to Aurora

This is a stacked graph; you can interpret it as Firefox 32 having a lot of improvements and Firefox 33 having a lot of regressions.  I think what is more interesting is how many performance regressions are fixed or added when we go from Aurora to Beta.  There is minimal data available for Beta.  This next image will compare alert volume for the same release on Aurora and then on Beta:

Side by side stacked bars for the regressions going into Aurora and then going onto Beta.

One way to interpret the above graph is to see that we fixed a lot of regressions on Aurora while Firefox 33 was on there, but for Firefox 34, we introduced a lot of regressions.

The above data is just my interpretation of it; here are links to a more fine-grained view of the data:

As always, if you have questions, concerns, praise, or other great ideas- feel free to chat via this blog or via irc (:jmaher).


Mozilla Reps CommunityReps Weekly Call – December 11th 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

  • FOSDEM update.
  • Portland Work Week.
  • ReMo/Mozillians websites testing.
  • End of year receipts campaign.
  • Remo challenges.
  • Stumbling in a box events.
  • Reps Monthly newsletter.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Blair McBrideUX Design Day, Dunedin 2014

Things I’ve been saying for a long time: I need to blog more. I haven’t been very good at achieving that.

So, recently I was at UX Design Day – a one-day conference focused on UX and design. It’s the only conference of its kind in NZ, and it started here in Dunedin. Working remotely and not really part of the design community, I don’t often get a chance to sit down and talk UX/design in-person with people. This year the conference was back in Dunedin, so I jumped at the chance to attend.

UX Design Day intro slide

I was impressed by the diverse turnout this year. Interaction design, visual design, content strategy, marketing, education, user research, and software development were all represented. I had tried to drum up support from the local developer community to attend, and that seemed to have worked well. Too often do I see developers ignoring UX/design issues – either being very dismissive, or claiming it’s another person’s job; so this felt like a good sign.

Along those lines, one of the things that stuck with me was the talk around not having UX teams separate from everything else. The largest example talked about was UX and content strategy, but I think it applies equally to software development teams too. Having these two groups work closely together, not segregated, helps bring so much context to both teams.

The other important take-away for me was the importance of not accepting crap. That is, experiences or systems that are, intentionally or not, lacking in design forethought and therefore lead to unnecessarily difficult experiences, or a design that by default leads to harm. The primary concrete example here was physical safety in various workplaces, where people were put at needless risk due to the lack of safety-by-default design. I think this is a very relevant point for those of us building software, given that we so often experience design in software that feels broken, but too often don’t do anything constructive to help fix it.

Obligatory wall of Post-It notes

On the whole, I enjoyed the conference. However, since the talks covered such a wide corpus, I feel it didn’t provide enough time for any one area. Diversity is an asset, but I would have liked time for more in-depth explorations of topics.

Guillaume DestuynderVPN and DNS

The problem

Note

TLDR

With split-view DNS over a VPN, your web browsing and whatnot gets slower due to slower DNS resolution. This is a “solution” mainly for Linux and OSX.

When connecting to a VPN, usually, it’s going to push its own DNS name servers. It does this because many, or dare I say most, networks behind the VPN actually have hostnames that are “internal” and will only resolve on the internal name server. This situation is also called “split-view DNS”.

The internal name server also resolves public hostnames - but because of the VPN round-trip this is slower. In some cases, it can be much slower (for example if your company’s VPN is in the USA and you live in Europe... hint).

dnsmasq to the rescue

dnsmasq is a well-known DNS caching, DHCP, TFTP, PXE (and recently even RA) server. You can configure it so that requests for certain domains are resolved with a specific name server.

For example, you would want to forward all internal domains to the DNS name server that is provided by the VPN:

File: /etc/resolv.conf

nameserver 127.0.0.1

File: /etc/resolv2.conf

#Your local/ISP nameserver(s)
nameserver 192.168.0.1
nameserver 8.8.8.8

File: /etc/dnsmasq.conf

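# Forward lookups for these internal zones to the name server pushed by the VPN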
server=/scl3.mozilla.com/10.0.0.1/
server=/phx1.mozilla.com/10.0.0.1/

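# Everything else is resolved through the upstream servers listed in this file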
resolv-file=/etc/resolv2.conf

Note

In this example, *.scl3.mozilla.com will resolve through the name server at 10.0.0.1

If you use openresolv (if you don’t know, you probably do...) you’ll have to instruct it to always use your local DNS cache (dnsmasq) as well so that it doesn’t override your settings.

File: /etc/resolvconf.conf

#Optional, if you use openresolv
name_servers=127.0.0.1

And off you go! Don’t forget to restart dnsmasq ;)

systemctl restart dnsmasq
# or..
/etc/init.d/dnsmasq restart

Jeff WaldenIntroducing the JavaScript Internationalization API

(also cross-posted on the Hacks blog — comment over there if you have anything to say)

Firefox 29 issued half a year ago, so this post is long overdue. Nevertheless I wanted to pause for a second to discuss the Internationalization API first shipped on desktop in that release (and passing all tests!). Norbert Lindenberg wrote most of the implementation, and I reviewed it and now maintain it. (Work by Makoto Kato should bring this to Android soon; b2g may take longer due to some b2g-specific hurdles. Stay tuned.)

What’s internationalization?

Internationalization (i18n for short — i, eighteen characters, n) is the process of writing applications in a way that allows them to be easily adapted for audiences from varied places, using varied languages. It’s easy to get this wrong by inadvertently assuming one’s users come from one place and speak one language, especially if you don’t even know you’ve made an assumption.

function formatDate(d)
{
  // Everyone uses month/date/year...right?
  var month = d.getMonth() + 1;
  var date = d.getDate();
  var year = d.getFullYear();
  return month + "/" + date + "/" + year;
}

function formatMoney(amount)
{
  // All money is dollars with two fractional digits...right?
  return "$" + amount.toFixed(2);
}

function sortNames(names)
{
  function sortAlphabetically(a, b)
  {
    var left = a.toLowerCase(), right = b.toLowerCase();
    if (left > right)
      return 1;
    if (left === right)
      return 0;
    return -1;
  }

  // Names always sort alphabetically...right?
  names.sort(sortAlphabetically);
}

JavaScript’s historical i18n support is poor

i18n-aware formatting in traditional JS uses the various toLocaleString() methods. The resulting strings contained whatever details the implementation chose to provide: no way to pick and choose (did you need a weekday in that formatted date? is the year irrelevant?). Even if the proper details were included, the format might be wrong e.g. decimal when percentage was desired. And you couldn’t choose a locale.

As for sorting, JS provided almost no useful locale-sensitive text-comparison (collation) functions. localeCompare() existed but with a very awkward interface unsuited for use with sort. And it too didn’t permit choosing a locale or specific sort order.
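By way of a quick sketch (this snippet isn’t from the original post), a legacy locale-aware sort looked something like this — with no way to choose the locale or the collation rules:

var names = ["Hönigswald", "Hochberg", "Holzman"];

// Every comparison re-runs localeCompare() with whatever default locale
// and sort order the implementation happens to use.
names.sort(function (a, b) {
  return a.localeCompare(b);
});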

These limitations are bad enough that — this surprised me greatly when I learned it! — serious web applications that need i18n capabilities (most commonly, financial sites displaying currencies) will box up the data, send it to a server, have the server perform the operation, and send it back to the client. Server roundtrips just to format amounts of money. Yeesh.

A new JS Internationalization API

The new ECMAScript Internationalization API greatly improves JavaScript’s i18n capabilities. It provides all the flourishes one could want for formatting dates and numbers and sorting text. The locale is selectable, with fallback if the requested locale is unsupported. Formatting requests can specify the particular components to include. Custom formats for percentages, significant digits, and currencies are supported. Numerous collation options are exposed for use in sorting text. And if you care about performance, the up-front work to select a locale and process options can now be done once, instead of once every time a locale-dependent operation is performed.

That said, the API is not a panacea. The API is “best effort” only. Precise outputs are almost always deliberately unspecified. An implementation could legally support only the oj locale, or it could ignore (almost all) provided formatting options. Most implementations will have high-quality support for many locales, but it’s not guaranteed (particularly on resource-constrained systems such as mobile).

Under the hood, Firefox’s implementation depends upon the International Components for Unicode library (ICU), which in turn depends upon the Unicode Common Locale Data Repository (CLDR) locale data set. Our implementation is self-hosted: most of the implementation atop ICU is written in JavaScript itself. We hit a few bumps along the way (we haven’t self-hosted anything this large before), but nothing major.

The Intl interface

The i18n API lives on the global Intl object. Intl contains three constructors: Intl.Collator, Intl.DateTimeFormat, and Intl.NumberFormat. Each constructor creates an object exposing the relevant operation, efficiently caching locale and options for the operation. Creating such an object follows this pattern:

var ctor = "Collator"; // or the others
var instance = new Intl[ctor](locales, options);

locales is a string specifying a single language tag or an arraylike object containing multiple language tags. Language tags are strings like en (English generally), de-AT (German as used in Austria), or zh-Hant-TW (Chinese as used in Taiwan, using the traditional Chinese script). Language tags can also include a “Unicode extension”, of the form -u-key1-value1-key2-value2..., where each key is an “extension key”. The various constructors interpret these specially.

options is an object whose properties (or their absence, by evaluating to undefined) determine how the formatter or collator behaves. Its exact interpretation is determined by the individual constructor.

Given locale information and options, the implementation will try to produce the closest behavior it can to the “ideal” behavior. Firefox supports 400+ locales for collation and 600+ locales for date/time and number formatting, so it’s very likely (but not guaranteed) the locales you might care about are supported.

Intl generally provides no guarantee of particular behavior. If the requested locale is unsupported, Intl allows best-effort behavior. Even if the locale is supported, behavior is not rigidly specified. Never assume that a particular set of options corresponds to a particular format. The phrasing of the overall format (encompassing all requested components) might vary across browsers, or even across browser versions. Individual components’ formats are unspecified: a short-format weekday might be “S”, “Sa”, or “Sat”. The Intl API isn’t intended to expose exactly specified behavior.

Date/time formatting

Options

The primary options properties for date/time formatting are as follows:

weekday, era
"narrow", "short", or "long". (era refers to typically longer-than-year divisions in a calendar system: BC/AD, the current Japanese emperor’s reign, or others.)
month
"2-digit", "numeric", "narrow", "short", or "long"
year
day
hour, minute, second
"2-digit" or "numeric"
timeZoneName
"short" or "long"
timeZone
Case-insensitive "UTC" will format with respect to UTC. Values like "CEST" and "America/New_York" don’t have to be supported, and they don’t currently work in Firefox.

The values don’t map to particular formats: remember, the Intl API almost never specifies exact behavior. But the intent is that "narrow", "short", and "long" produce output of corresponding size — “S” or “Sa”, “Sat”, and “Saturday”, for example. (Output may be ambiguous: Saturday and Sunday both could produce “S”.) "2-digit" and "numeric" map to two-digit number strings or full-length numeric strings: “70” and “1970”, for example.

The final used options are largely the requested options. However, if you don’t specifically request any weekday/year/month/day/hour/minute/second, then year/month/day will be added to your provided options.
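As a quick sketch of that defaulting (not from the original article — exact output is implementation- and locale-dependent):

var plainDate = new Intl.DateTimeFormat("en-US"); // no options at all

// year/month/day get imputed, so this yields a short date string along
// the lines of "12/18/2014"; the precise phrasing may vary.
print(plainDate.format(new Date()));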

Beyond these basic options are a few special options:

hour12
Specifies whether hours will be in 12-hour or 24-hour format. The default is typically locale-dependent. (Details such as whether midnight is zero-based or twelve-based and whether leading zeroes are present are also locale-dependent.)

There are also two special properties, localeMatcher (taking either "lookup" or "best fit") and formatMatcher (taking either "basic" or "best fit"), each defaulting to "best fit". These affect how the right locale and format are selected. The use cases for these are somewhat esoteric, so you should probably ignore them.

Locale-centric options

DateTimeFormat also allows formatting using customized calendaring and numbering systems. These details are effectively part of the locale, so they’re specified in the Unicode extension in the language tag.

For example, Thai as spoken in Thailand has the language tag th-TH. Recall that a Unicode extension has the format -u-key1-value1-key2-value2.... The calendaring system key is ca, and the numbering system key is nu. The Thai numbering system has the value thai, and the Chinese calendaring system has the value chinese. Thus to format dates in this overall manner, we tack a Unicode extension containing both these key/value pairs onto the end of the language tag: th-TH-u-ca-chinese-nu-thai.

For more information on the various calendaring and numbering systems, see the full DateTimeFormat documentation.

Examples

After creating a DateTimeFormat object, the next step is to use it to format dates via the handy format() function. Conveniently, this function is a bound function: you don’t have to call it on the DateTimeFormat directly. Then provide it a timestamp or Date object.
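Because format() is bound, it’s convenient to pass around on its own — for example straight into Array.prototype.map. A small sketch (not from the original post; exact strings vary):

var shortDay =
  new Intl.DateTimeFormat("en-US", { weekday: "short" }).format;

// July 17–18, 2014 were a Thursday and a Friday.
var days = [new Date(2014, 6, 17), new Date(2014, 6, 18)];
print(days.map(shortDay).join(", ")); // something like "Thu, Fri"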

Putting it all together, here are some examples of how to create DateTimeFormat options for particular uses, with current behavior in Firefox.

var msPerDay = 24 * 60 * 60 * 1000;

// July 17, 2014 00:00:00 UTC.
var july172014 = new Date(msPerDay * (44 * 365 + 11 + 197));

Let’s format a date for English as used in the United States. Let’s include two-digit month/day/year, plus two-digit hours/minutes, and a short time zone to clarify that time. (The result would obviously be different in another time zone.)

var options =
  { year: "2-digit", month: "2-digit", day: "2-digit",
    hour: "2-digit", minute: "2-digit",
    timeZoneName: "short" };
var americanDateTime =
  new Intl.DateTimeFormat("en-US", options).format;

print(americanDateTime(july172014)); // 07/16/14, 5:00 PM PDT

Or let’s do something similar for Portuguese — ideally as used in Brazil, but in a pinch Portugal works. Let’s go for a little longer format, with full year and spelled-out month, but make it UTC for portability.

var options =
  { year: "numeric", month: "long", day: "numeric",
    hour: "2-digit", minute: "2-digit",
    timeZoneName: "short", timeZone: "UTC" };
var portugueseTime =
  new Intl.DateTimeFormat(["pt-BR", "pt-PT"], options);

// 17 de julho de 2014 00:00 GMT
print(portugueseTime.format(july172014));

How about a compact, UTC-formatted weekly Swiss train schedule? We’ll try the official languages from most to least popular to choose the one that’s most likely to be readable.

var swissLocales = ["de-CH", "fr-CH", "it-CH", "rm-CH"];
var options =
  { weekday: "short",
    hour: "numeric", minute: "numeric",
    timeZone: "UTC", timeZoneName: "short" };
var swissTime =
  new Intl.DateTimeFormat(swissLocales, options).format;

print(swissTime(july172014)); // Do. 00:00 GMT

Or let’s try a date in descriptive text by a painting in a Japanese museum, using the Japanese calendar with year and era:

var jpYearEra =
  new Intl.DateTimeFormat("ja-JP-u-ca-japanese",
                          { year: "numeric", era: "long" });

print(jpYearEra.format(july172014)); // 平成26年

And for something completely different, a longer date for use in Thai as used in Thailand — but using the Thai numbering system and Chinese calendar. (Quality implementations such as Firefox’s would treat plain th-TH as th-TH-u-ca-buddhist-nu-latn, imputing Thailand’s typical Buddhist calendar system and Latin 0-9 numerals.)

var options =
  { year: "numeric", month: "long", day: "numeric" };
var thaiDate =
  new Intl.DateTimeFormat("th-TH-u-nu-thai-ca-chinese", options);

print(thaiDate.format(july172014)); // ๒๐ 6 ๓๑

Calendar and numbering system bits aside, it’s relatively simple. Just pick your components and their lengths.

Number formatting

Options

The primary options properties for number formatting are as follows:

style
"currency", "percent", or "decimal" (the default) to format a value of that kind.
currency
A three-letter currency code, e.g. USD or CHF. Required if style is "currency", otherwise meaningless.
currencyDisplay
"code", "symbol", or "name", defaulting to "symbol". "code" will use the three-letter currency code in the formatted string. "symbol" will use a currency symbol such as $ or £. "name" typically uses some sort of spelled-out version of the currency. (Firefox currently only supports "symbol", but this will be fixed soon.)
minimumIntegerDigits
An integer from 1 to 21 (inclusive), defaulting to 1. The resulting string is front-padded with zeroes until its integer component contains at least this many digits. (For example, if this value were 2, formatting 3 might produce “03”.)
minimumFractionDigits, maximumFractionDigits
Integers from 0 to 20 (inclusive). The resulting string will have at least minimumFractionDigits, and no more than maximumFractionDigits, fractional digits. The default minimum is currency-dependent (usually 2, rarely 0 or 3) if style is "currency", otherwise 0. The default maximum is 0 for percents, 3 for decimals, and currency-dependent for currencies.
minimumSignificantDigits, maximumSignificantDigits
Integers from 1 to 21 (inclusive). If present, these override the integer/fraction digit control above to determine the minimum/maximum significant figures in the formatted number string, as determined in concert with the number of decimal places required to accurately specify the number. (Note that in a multiple of 10 the significant digits may be ambiguous, as in “100” with its one, two, or three significant digits.)
useGrouping
Boolean (defaulting to true) determining whether the formatted string will contain grouping separators (e.g. “,” as English thousands separator).

NumberFormat also recognizes the esoteric, mostly ignorable localeMatcher property (see the sketch below for the more commonly useful options in action).
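Here is a small sketch of the significant-digits and grouping options together (not from the article; the output shown is merely what you’d typically expect):

var approx =
  new Intl.NumberFormat("en-US",
                        { maximumSignificantDigits: 3,
                          useGrouping: false });

// Rounded to three significant figures, with no thousands separator:
// typically "123000" rather than "123,456.789".
print(approx.format(123456.789));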

Locale-centric options

Just as DateTimeFormat supported custom numbering systems in the Unicode extension using the nu key, so too does NumberFormat. For example, the language tag for Chinese as used in China is zh-CN. The value for the Han decimal numbering system is hanidec. To format numbers for these systems, we tack a Unicode extension onto the language tag: zh-CN-u-nu-hanidec.

For complete information on specifying the various numbering systems, see the full NumberFormat documentation.

Examples

NumberFormat objects have a format function property just as DateTimeFormat objects do. And as there, the format function is a bound function that may be used in isolation from the NumberFormat.

Here are some examples of how to create NumberFormat options for particular uses, with Firefox’s behavior. First let’s format some money for use in Chinese as used in China, specifically using Han decimal numbers (instead of much more common Latin numbers). Select the "currency" style, then use the code for Chinese renminbi (yuan), grouping by default, with the usual number of fractional digits.

var hanDecimalRMBInChina =
  new Intl.NumberFormat("zh-CN-u-nu-hanidec",
                        { style: "currency", currency: "CNY" });

print(hanDecimalRMBInChina.format(1314.25)); // ¥ 一,三一四.二五

Or let’s format a United States-style gas price, with its peculiar thousandths-place 9, for use in English as used in the United States.

var gasPrice =
  new Intl.NumberFormat("en-US",
                        { style: "currency", currency: "USD",
                          minimumFractionDigits: 3 });

print(gasPrice.format(5.259)); // $5.259

Or let’s try a percentage in Arabic, meant for use in Egypt. Make sure the percentage has at least two fractional digits. (Note that this and all the other RTL examples may appear with different ordering in RTL context, e.g. ٤٣٫٨٠٪ instead of ٤٣٫٨٠٪.)

var arabicPercent =
  new Intl.NumberFormat("ar-EG",
                        { style: "percent",
                          minimumFractionDigits: 2 }).format;

print(arabicPercent(0.438)); // ٤٣٫٨٠٪

Or suppose we’re formatting for Persian as used in Afghanistan, and we want at least two integer digits and no more than two fractional digits.

var persianDecimal =
  new Intl.NumberFormat("fa-AF",
                        { minimumIntegerDigits: 2,
                          maximumFractionDigits: 2 });

print(persianDecimal.format(3.1416)); // ۰۳٫۱۴

Finally, let’s format an amount of Bahraini dinars, for Arabic as used in Bahrain. Unusually compared to most currencies, Bahraini dinars divide into thousandths (fils), so our number will have three places. (Again note that apparent visual ordering should be taken with a grain of salt.)

var bahrainiDinars =
  new Intl.NumberFormat("ar-BH",
                        { style: "currency", currency: "BHD" });

print(bahrainiDinars.format(3.17)); // د.ب.‏ ٣٫١٧٠

Collation

Options

The primary options properties for collation are as follows:

usage
"sort" or "search" (defaulting to "sort"), specifying the intended use of this Collator. (A search collator might want to consider more strings equivalent than a sort collator would.)
sensitivity
"base", "accent", "case", or "variant". This affects how sensitive the collator is to characters that have the same “base letter” but have different accents/diacritics and/or case. (Base letters are locale-dependent: “a” and “ä” have the same base letter in German but are different letters in Swedish.) "base" sensitivity considers only the base letter, ignoring modifications (so for German “a”, “A”, and “ä” are considered the same). "accent" considers the base letter and accents but ignores case (so for German “a” and “A” are the same, but “ä” differs from both). "case" considers the base letter and case but ignores accents (so for German “a” and “ä” are the same, but “A” differs from both). Finally, "variant" considers base letter, accents, and case (so for German “a”, “ä, “ä” and “A” all differ). If usage is "sort", the default is "variant"; otherwise it’s locale-dependent.
numeric
Boolean (defaulting to false) determining whether complete numbers embedded in strings are considered when sorting. For example, numeric sorting might produce "F-4 Phantom II", "F-14 Tomcat", "F-35 Lightning II"; non-numeric sorting might produce "F-14 Tomcat", "F-35 Lightning II", "F-4 Phantom II".
caseFirst
"upper", "lower", or "false" (the default). Determines how case is considered when sorting: "upper" places uppercase letters first ("B", "a", "c"), "lower" places lowercase first ("a", "c", "B"), and "false" ignores case entirely ("a", "B", "c"). (Note: Firefox currently ignores this property.)
ignorePunctuation
Boolean (defaulting to false) determining whether to ignore embedded punctuation when performing the comparison (for example, so that "biweekly" and "bi-weekly" compare equivalent).

And there’s that localeMatcher property that you can probably ignore.

Locale-centric options

The main Collator option specified as part of the locale’s Unicode extension is co, selecting the kind of sorting to perform: phone book (phonebk), dictionary (dict), and many others.

Additionally, the keys kn and kf may, optionally, duplicate the numeric and caseFirst properties of the options object. But they’re not guaranteed to be supported in the language tag, and options is much clearer than language tag components. So it’s best to only adjust these options through options.

These key-value pairs are included in the Unicode extension the same way they’ve been included for DateTimeFormat and NumberFormat; refer to those sections for how to specify these in a language tag.

Examples

Collator objects have a compare function property. This function accepts two arguments x and y and returns a number less than zero if x compares less than y, 0 if x compares equal to y, or a number greater than zero if x compares greater than y. As with the format functions, compare is a bound function that may be extracted for standalone use.

Let’s try sorting a few German surnames, for use in German as used in Germany. There are actually two different sort orders in German, phonebook and dictionary. Phonebook sort emphasizes sound, and it’s as if “ä”, “ö”, and so on were expanded to “ae”, “oe”, and so on prior to sorting.

var names =
  ["Hochberg", "Hönigswald", "Holzman"];

var germanPhonebook = new Intl.Collator("de-DE-u-co-phonebk");

// as if sorting ["Hochberg", "Hoenigswald", "Holzman"]:
//   Hochberg, Hönigswald, Holzman
print(names.sort(germanPhonebook.compare).join(", "));

Some German words conjugate with extra umlauts, so in dictionaries it’s sensible to order ignoring umlauts (except when ordering words differing only by umlauts: schon before schön).

var germanDictionary = new Intl.Collator("de-DE-u-co-dict");

// as if sorting ["Hochberg", "Honigswald", "Holzman"]:
//   Hochberg, Holzman, Hönigswald
print(names.sort(germanDictionary.compare).join(", "));

Or let’s sort a list of Firefox versions with various typos (different capitalizations, random accents and diacritical marks, extra hyphenation), in English as used in the United States. We want to sort respecting version number, so do a numeric sort so that numbers in the strings are compared, not considered character-by-character.

var firefoxen =
  ["FireFøx 3.6",
   "Fire-fox 1.0",
   "Firefox 29",
   "FÍrefox 3.5",
   "Fírefox 18"];

var usVersion =
  new Intl.Collator("en-US",
                    { sensitivity: "base",
                      numeric: true,
                      ignorePunctuation: true });

// Fire-fox 1.0, FÍrefox 3.5, FireFøx 3.6, Fírefox 18, Firefox 29
print(firefoxen.sort(usVersion.compare).join(", "));

Last, let’s do some locale-aware string searching that ignores case and accents, again in English as used in the United States.

// Comparisons work with both composed and decomposed forms.
var decoratedBrowsers =
  [
   "A\u0362maya",  // A͢maya
   "CH\u035Brôme", // CH͛rôme
   "FirefÓx",
   "sAfàri",
   "o\u0323pERA",  // ọpERA
   "I\u0352E",     // I͒E
  ];

var fuzzySearch =
  new Intl.Collator("en-US",
                    { usage: "search", sensitivity: "base" });

function findBrowser(browser)
{
  function cmp(other)
  {
    return fuzzySearch.compare(browser, other) === 0;
  }
  return cmp;
}

print(decoratedBrowsers.findIndex(findBrowser("Firêfox"))); // 2
print(decoratedBrowsers.findIndex(findBrowser("Safåri")));  // 3
print(decoratedBrowsers.findIndex(findBrowser("Ãmaya")));   // 0
print(decoratedBrowsers.findIndex(findBrowser("Øpera")));   // 4
print(decoratedBrowsers.findIndex(findBrowser("Chromè")));  // 1
print(decoratedBrowsers.findIndex(findBrowser("IË")));      // 5

Odds and ends

It may be useful to determine whether support for some operation is provided for particular locales, or to determine whether a locale is supported. Intl provides supportedLocalesOf() functions on each constructor, and resolvedOptions() functions on each prototype, to expose this information.

var navajoLocales =
  Intl.Collator.supportedLocalesOf(["nv"], { usage: "sort" });
print(navajoLocales.length > 0
      ? "Navajo collation supported"
      : "Navajo collation not supported");

var germanFakeRegion =
  new Intl.DateTimeFormat("de-XX", { timeZone: "UTC" });
var usedOptions = germanFakeRegion.resolvedOptions();
print(usedOptions.locale);   // de
print(usedOptions.timeZone); // UTC

Legacy behavior

The ES5 toLocaleString-style and localeCompare functions previously had no particular semantics, accepted no particular options, and were largely useless. So the i18n API reformulates them in terms of Intl operations. Each method now accepts additional trailing locales and options arguments, interpreted just as the Intl constructors would do. (Except that for toLocaleTimeString and toLocaleDateString, different default components are used if options aren’t provided.)

For brief use where precise behavior doesn’t matter, the old methods are fine to use. But if you need more control or are formatting or comparing many times, it’s best to use the Intl primitives directly.
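As a rough illustration (not from the original post; output phrasing still varies by implementation), the reformulated legacy methods now take the same locales and options arguments:

// Number formatting through the legacy method:
print((1234.56).toLocaleString("de-DE",
                               { style: "currency", currency: "EUR" }));
// typically something like "1.234,56 €"

// Date formatting through the legacy method:
print(new Date().toLocaleDateString("ja-JP",
                                    { year: "numeric", month: "long" }));
// typically something like "2014年12月"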

Conclusion

Internationalization is a fascinating topic whose complexity is bounded only by the varied nature of human communication. The Internationalization API addresses a small but quite useful portion of that complexity, making it easier to produce locale-sensitive web applications. Go use it!

(And a special thanks to Norbert Lindenberg, Anas El Husseini, Simon Montagu, Gary Kwong, Shu-yu Guo, Ehsan Akhgari, the people of #mozilla.de, and anyone I may have forgotten [sorry!] who provided feedback on this article or assisted me in producing and critiquing the examples. The English and German examples were the limit of my knowledge, and I’d have been completely lost on the other examples without their assistance. Blame all remaining errors on me. Thanks again!)

(and to reiterate: comment on the Hacks post if you have anything to say)

Sriram RamasubramanianCentered Buttons

How can we use the same hack as Multiple Text Layout in some UI we need most of the time? Let’s take buttons for example. If we want the glyph in the button to be centered along with the text, we cannot use compound drawables — as they are always drawn along the edges of the container.

Centered Buttons

We could use our getCompoundPaddingLeft() to pack the glyph with the text.

    @Override
    public int getCompoundPaddingLeft() {
        // Ideally we should be overriding getTotalPaddingLeft().
        // However, android doesn't make use of that method,
        // instead uses this method for calculations.
        int paddingLeft = super.getCompoundPaddingLeft();
        paddingLeft += mDrawableWidth + getCompoundDrawablePadding();
        return paddingLeft;
    }

This offsets the space on the left and Android will take care of placing the text accordingly. Now we can place the Drawable in the space we created.

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);

        int paddingLeft = getPaddingLeft();
        int paddingRight = getPaddingRight();

        int drawableVerticalHeight = mDrawableHeight + getPaddingTop() + getPaddingBottom();
        int width = getMeasuredWidth();
        int height = Math.max(drawableVerticalHeight, getMeasuredHeight());
        setMeasuredDimension(width, height);

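        // Center the glyph, the drawable padding and the first line of text
        // within the padded width; center the glyph vertically in the height.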
        int compoundPadding = getCompoundDrawablePadding();
        float totalWidth = mDrawableWidth + compoundPadding + getLayout().getLineWidth(0);
        float offsetX = (width - totalWidth - paddingLeft - paddingRight)/2.0f;
        mTranslateX = offsetX + paddingLeft;
        mTranslateY = (height - mDrawableHeight)/2.0f;
    }

The mTranslateX and mTranslateY fields hold how far to translate when drawing the drawable. Either the Drawable’s bounds can be shifted inside onMeasure() to reflect the translation, or the Canvas can be translated inside onDraw(). This lets us draw the glyph centered along with the text as part of a Button!


Kim MoirReleng 2015 CFP now open

Florence, Italy.  Home of beautiful architecture.

Il Duomo di Firenze by ©runner310, Creative Commons by-nc-sa 2.0


Delicious food and drink.

Panzanella by © Pete Carpenter, Creative Commons by-nc-sa 2.0

Caffè ristretto by © Marcelo César Augusto Romeo, Creative Commons by-nc-sa 2.0


And next May, release engineering :-)

The CFP for Releng 2015 is now open.  The deadline for submissions is January 23, 2015.  It will be held on May 19, 2015 in Florence, Italy, co-located with ICSE 2015.  We look forward to seeing your proposals about the exciting work you're doing in release engineering!

If you have questions about the submission process or anything else, please contact any of the program committee members. My email is kmoir and I work at mozilla.com.

Naoki HirataEinstein Quote for Mozillians

“Out of clutter, find simplicity. From discord find harmony. In the middle of difficulty lies opportunity.” – Albert Einstein

From : http://www.folderarchy.com/albert-einstein/


Filed under: Planet Tagged: Planet

Fabien CazenaveKompoZer 0.8b2

KompoZer logo

KompoZer 0.8b2 is finally ready. Few visible changes, but a lot of bugfixes and code cleaning under the hood.

You can grab KompoZer 0.8b2 here: http://kompozer.net/download.php

Enjoy, and please report bugs!

Bug Fixes

We’ve tried to solve the most frequently reported bugs:

  • the CSS Editor shouldn't add those annoying “*|” strings in the selectors any more
  • the preview in the “Image Properties” box now works properly
  • better FTP support (right-click in the Site Manager context menu)
  • the markup cleaner doesn't crash on nested lists any more
  • Enter in a paragraph now creates a new paragraph
  • the “Credits” panel in the About box is back ;-)

KompoZer 0.8b2 is now a more reliable editor: the regressions in the CSS editor were a complete blocker for myself, so I guess it’s been a real nightmare for most users. We’ve fixed a lot of small bugs and I think the overall user experience should be much better than with the previous versions.

18*4 Localized Binaries

Cédric Corazza, our l10n lead, has done a great job to release localized binaries for all the supported languages at once. This time he’s had much more work than for the previous beta:

  • we had 9 locales for the 0.8b1 release, there are 18 locales for 0.8b2:
    • Catalan, Dutch, Hungarian, Japanese got ready after the 0.8b1 release
    • Simplified Chinese, Esperanto, Finnish, Portuguese, Upper Sorbian have been added for the 0.8b2
  • Cédric has made Windows™ installers, which should put an end to one of the most frequent feature requests
  • he’s built all binaries manually, as we don’t have any kind of script to ease this task (I considered that as a typical “l10n lead job”)

Cédric, congrats! and go get some sleep, the Korean and Bulgarian locales are getting ready. ;-) I’ll definitely write a few scripts to ease your work for the next release.

Inline Spell Checker

The inline spell checker in KompoZer 0.7.10 was inherited from Nvu, it was implemented with a specific patch against the Gecko 1.7 core and it caused a lot of freezes and crashes. As a result, most users (including myself) disabled it and I didn’t see it as an important feature to bring back in KompoZer 0.8.

As you can guess, a lot of users had a very different opinion on this. :-)

Unlike Gecko 1.7, Gecko 1.8.1 has a very good built-in inline spell checker. I’ve had a look at Thunderbird’s code and I found out enabling the inline spell checker in KompoZer was a snap. I’m sorry I didn’t do it sooner — but now it’s done, and it’s working fine as far as I know.

DOM Explorer Sidebar

I’m working with Fabien ’Kasparov’ Rocu on the next version of the DOM Explorer. As Fabien is implementing his ideas in an extension, I had to clean up the DOM Explorer and add a few hooks for his addon. To ease the development of his add-on, we’ve decided to implement a part of his work directly in KompoZer 0.8b2:

  • the DOM Explorer now shows the HTML attributes of the current element
  • a double-click on an element in the DOM Explorer brings up its “Property” dialog

The real improvement will come with Fabien’s extension, which should be released in April 2010. I’ll come back to this in another blog post.

New Keyboard Shortcuts

I’m known to be a dangerous pervert when it comes to computer keyboards — I admit I hate having to use a mouse when I’m editing text. These new keyboard shortcuts aren’t documented; you can see them as a hidden bonus:

  • Ctrl+(Up|Down) moves the caret to the (beginning|end) of the current element
  • Ctrl(+Shift)+Enter adds a new line after (before) the current element
  • Alt+Shift+Enter switches to “Source” view

The Ctrl+Up/Down shortcut is more than a productivity booster. One of the known problems of the Mozilla editor component is that in some situations, it can be difficult to put the caret where you want it: for instance, there’s no easy way to put the caret right after a <div> block if it’s the last block in the page. With KompoZer 0.7.10 you had to select the <div> in the status bar, press the right arrow and hit Return; now all you need is to do a Ctrl+Down.

The “Source” View Still Sucks…

…and I’m aware of that. Please configure KompoZer to use your favorite text editor to work on the HTML source, there’s a specific “HTML” button by default in the main toolbar for that. I can’t help it, I hate the “Source” view in Nvu and KompoZer 0.7:

  • I don’t see much point in pseudo syntax highlighting that doesn’t update as you type
  • I don’t see any point in showing line numbers that don’t match the *real* line numbers in the HTML file
  • nobody understands why the “Source” view hides the document tabs
  • it was the main source of crashes for KompoZer 0.7

The SeaMonkey-like plaintext editor, in my opinion, is much better at the moment — and on my first trunk builds (KompoZer 0.9a1pre / Gecko 1.9.3), Bespin is already working quite well.

Again, I understand a lot of users have a very different opinion on this, so I’ve tried an interesting experiment with this “Source” view: basically, I’ve re-written the main <tabeditor> element so it includes its own source editor. This embedded source editor could be used either for the “Split” view or for the “Source” view, and I could switch to “Source” mode without losing the document tabs.

Unfortunately, this new <tabeditor> element raised a few problems that I couldn’t solve easily for this 0.8b2 release, so I’ve had to revert to the good old plaintext editor. For the 0.8b3 I’ll probably re-implement an Nvu-like “Source” view, rather than spending too much time on a feature that won’t work as well as Bespin: I prefer to release KompoZer 0.8 sooner in order to propose a Bespin-based KompoZer 0.9 as soon as possible.

The HTML Serializer Still Sucks…

…but we’re working on it. As you may have noticed, the HTML output of KompoZer 0.8 is already much cleaner than the one we had in KompoZer 0.7, especially if you check the “reformat HTML source” option: the most visible point is, there are (almost) no empty lines any more in the output files. But your well-defined indentation is still destroyed by KompoZer, which is a real pain when switching to “Source” mode.

Of course, you can use HTML Tidy as a workaround; I even used to design an Nvu extension for that. But this means dealing with temp files, serializing the files twice (once with KompoZer + reformatting with Tidy), and risking data losses (especially in utf-8, don’t ask me why). And the HTML code in the “Source” view is still a mess.

The great news is, Laurent Jouanneau has backported his XHTML serializer to Gecko 1.8.1 so I could use it for KompoZer 0.8 — and the first results look great! See this small example I saved with KompoZer 0.7.10, KompoZer 0.8b2 and KompoZer 0.8b3pre. Looks like we can finally get rid of HTML Tidy!

Almost Done

There are four main points to address before we can release a third (and hopefully last) beta:

  • adapt KompoZer 0.8 to the new HTML serializer;
  • get some kind of colored source view working;
  • fix the bugs in the “Split” view so people start using it;
  • work on FTP support to replace the current “Publish” button.

Please test this new version and report bugs. Many thanks to all users who donated or gave some time to keep this project running!

Fabien CazenaveKompoZer 0.8b3

KompoZer logo

We’ve just released KompoZer 0.8b3:

Localized binaries are available on the official download page: http://kompozer.net/download.php.

This maintenance release fixes two regressions that were introduced in the previous beta:

  • bug #2957813, the "Source" mode was not applying modifications properly
  • bug #2959534, the "class" drop-down list was broken by a dirty attempt to make it UTF8-friendly

I didn’t want to take the risk of addressing other bugs but I did work on bug 1831943 by disabling line wrapping for Asian users. The relevant preference (editor.htmlWrapColumn) is now set to zero for Chinese (zh-CN, zh-TW) and Japanese (ja) builds, and it should be read properly by KompoZer — both when switching to “Source” mode and when saving HTML documents. This is still experimental, so your feedback will be welcome.

We’ve spent a few hours designing a bash/python script to make localized binaries for the 18 languages that are currently supported by KompoZer. This script works fine on Linux and OSX and it can build win32 installers by launching the InnoSetup compiler through Wine. It also checks that I haven't forgotten to include the MSVC7 DLLs in the win32 binaries, which should spare us a few bad surprises for the next releases…

For the next beta we’ll focus on the “Source” view and the FTP module. We’ll do our best to release it in March.

EDIT In case you’ve downloaded a Windows build with missing MSVC7 dlls, I’ve just changed the path of all Windows binaries on SourceForge. Please download KompoZer 0.8b3 again, the problem should be solved. Sorry for the trouble. :-/

Chris IliasMy Installed Add-ons – Context Search

I love finding new extensions that do things I never even thought to search for. One of the best ways to find them is through word of mouth. In this case, I guess you can call it “word of blog”. I’d like to start a series of blog posts about the extensions I use, and maybe you’ll see one that you want to use.

The first one is Context Search. Context Search is one of those extensions I think should be part of Firefox. Context Search allows you to choose which search engine you use for each search. If it’s a word you aren’t familiar with, you can choose the Websters search engine. If it’s an acronym you aren’t familiar with, you can choose the Acronym Finder search engine.

Without the extension, when you highlight text then right-click, the menu will contain an item to search your preferred search engine for the text that is highlighted. With Context Search, you are instead given a list of your installed search engines, so you can pick which one to use. The search results will open in a new tab. I find myself using it more than the search bar.

Here’s a screenshot:

You can install it via the Mozilla Add-ons site.

Yunier José Sosa VázquezUpdating Add-ons

addons

After some time without updating the add-ons available on our AMO (Addons.Mozilla.Org), we are once again offering this service.

We apologize for the inconvenience; we hadn’t published more because an update to Mozilla’s services had left the folder on its public FTP outdated — the add-ons are now stored in another location.

Little by little we will keep updating and publishing new extensions for the whole Cuban community. If there is an add-on that hasn’t been updated — there are quite a few — let us know and we will give it priority in the queue.

Robert O'Callahanrr 3.0 Released With x86-64 Support

I just pushed the release of rr 3.0. The big milestone for this release is that x86-64 support is fully functional! On 64-bit machines, we build a 64-bit rr that can record and replay 64-bit processes with the same set of features and performance-enhancing tricks that we developed for 32-bit. Not only that, but 64-bit rr can also record and replay 32-bit processes or even a mix of 32-bit and 64-bit processes in the same process tree. 64-bit support is mostly due to Nathan Froyd and we hope it significantly lowers barriers to using rr.

Many other internal improvements and sundry bug fixes have landed since 2.0. Some highlights:

  • rr can record executables that use a #! interpreter, e.g. bash scripts. Naturally the entire process tree is recorded, so this simplifies debugging of applications that use complex wrapper scripts.
  • A new rr ps command lists the processes recorded in an rr trace, making it easier to select the process you want to debug.
  • To make that even easier, rr replay -p <command> automatically attaches to the first exec of <command>. E.g. to debug e10s Firefox's first content process, use rr replay -p plugin-container.
  • Sometimes you record a trace, then modify the program but later want to replay the original trace, which requires the original program binaries. rr now uses hardlinks to save files in the trace so that in many cases, old traces are still replayable after a rebuild. Thanks to Karl Tomlinson for the idea.
  • Some small changes in command-line syntax were made to regularize the syntax and prepare for future expansion.
  • Many bug fixes to broaden the range of recordable applications. E.g. LibreOffice and QEMU work now.

Development carries on; visit rr-dev for exciting updates.

Have fun using rr!

Nicholas NethercoteCumulative heap profiling in Firefox with DMD

DMD is a tool that I originally created to help identify where new memory reporters should be added to Firefox in order to reduce the “heap-unclassified” measurement in about:memory. (The name is actually short for “Dark Matter Detector”, because we sometimes call the “heap-unclassified” measurement “dark matter“.)

Recently, I’ve modified DMD to become a more general heap profiling tool. It now has three distinct modes of operation.

  1. “Dark matter”: this mode gives you DMD’s original behaviour.
  2. “Live”: this mode tracks all the live blocks on the system heap, and lets you take snapshots at particular points in time.
  3. “Cumulative”: this mode tracks all the blocks that have ever been allocated on the system heap, and so gives you information about all the allocations done by Firefox during an entire session.

Most memory profilers (including about:memory) are snapshot-based, and so work much like DMD’s “live” mode. But “cumulative” mode is equally interesting.

In particular, unlike “live” mode, “cumulative” mode tells you about parts of the code that are responsible for allocating many short-lived heap blocks (sometimes called “heap churn”). Such allocations can hurt performance: allocations and deallocations themselves aren’t free, particularly because they require obtaining a global lock; such code often involves unnecessary initialization or copying of heap data; and if these allocations come in a variety of sizes they can cause additional heap fragmentation.

Another nice thing about cumulative heap profiling is that, unlike live heap profiling, you don’t have to decide when to take snapshots. You can just profile an entire workload of interest and get the results at the end.

I’ve used DMD’s cumulative mode to find inefficiencies in SpiderMonkey’s source compression  and code generation, SQLite, NSS, nsTArray, XDR encoding, Gnome start-up, IPC messaging, nsStreamLoader, cycle collection, and safe-browsing. There are “start doing something clever” optimizations and then there are “stop doing something stupid” optimizations, and every one of these fixes has been one of the latter. Each change has avoided cumulative heap allocations ranging from tens to thousands of MiBs.

It’s typically difficult to quantify any speed-ups from these changes, because the workloads are noisy and non-deterministic, but I’m convinced that simple changes to fix these issues are worthwhile. For one, I used cumulative profiling (via a different tool) to drive the major improvements I made to pdf.js earlier this year. Also, Chrome developers have found that “Chrome in general is *very* close to the threshold where heap lock contention causes noticeable UI lag”.

So far I have only profiled a few simple workloads. There are all sorts of things I haven’t tried: text-heavy pages, image-heavy pages, audio and video, WebRTC, WebGL, popular benchmarks… the list goes on. I intend to do more profiling and fix things where I can, but it would be great to have help from domain experts with this stuff. If you want to try out cumulative heap profiling in Firefox, please read the DMD instructions and feel free to ask me for help. In particular, I now have a good feel for which hot allocations are unavoidable and reasonable — plenty of them, of course — and which are avoidable. Let’s get rid of the avoidable ones.

Fabien CazenaveBack from Mozilla

During the last 3 years I’ve worked full-time for Mozilla Corp, and now it’s more than time to move on.

Leaving the MoCo has been a very difficult step for me. I’ve been a Mozillian for the last 8 years, and it’s been much more than a friendly community or a challenging job for me. I’ve had a lot of fun, met amazing people, worked on exciting technologies. I’m very proud of what we did, and I’m even prouder of why we did it. Working for Mozilla felt like a love story, and ending a love story is always painful.

I just took a long, refreshing, offline break. Sorry if you tried to reach me during this period — I’m getting through the mailbox hell, and I’ll do my best to reply to every message.

Best wishes to all Mozillians, especially to the folks in the Paris office, the Spanish Connection, and my drinking pals all around the globe. I’ll be happy to share a few beers with you at a web or FLOSS event someday. :-)

Jared WeinStatus Update – In-content Preferences, part 3

icons@2x

Since the last update, bug 1022582 (checkboxes and radio buttons in about:preferences lack any indication they’re checked/selected when using High Contrast mode) has been marked ready for check-in and should be marked fixed by late tomorrow.

This marks the last of the high-contrast bugs blocking the release :)

We also have had great progress on bug 1043346 (dialogs should have their dimensions reset after closing). This bug looks ready to land after the patch gets updated.

I’m optimistic that we can get a fix in for bug 1008172 ([linux-only] scrolling up and down on pages with scrollbars in about:preferences will change subgroups) soon. I put some tips in the bug, and Richard Marti has said that he will try to take a look at it.

As for bug 1056478 (resizer of sub-dialog follows only half of the movement of mouse pointer), I’m not sure if we should keep it as a blocker. This bug reveals an underlying issue in Gecko (bug 1105906) that exists with any centered resizable element. WebKit doesn’t suffer from this issue, so we’re hopeful that we will find a solution, but I don’t know if this quirk should stop us from shipping.

The next bugs that I would like to see attention on are:
Bug 1043612: Persist the size of resizable in-content subdialogs
Bug 1044597: in-content preferences: resized dialogs should not push buttons into overflow
Bug 1044600: in-content preferences: empty dialogs after pressing backspace or the Back button

All three of these bugs are part of the subdialogs category. Thanks for the work to those contributing patches and helping test, we couldn’t do this without you! :)


Tagged: firefox, planet-mozilla, ux

Joel MaherLanguage hacking – changing the choice of words we use

Have you ever talked to someone who continues to use the same word over and over again?  Then you find that many people you chat with end up using the same choice of words quite frequently.  My wife and I see this quite often, usually with the word ‘Amazing’, ‘cool’, and ‘hope’.

  • Are these words necessary in conversation?
  • Do these words we choose lose value due to overuse?
  • Are we communicating effectively?
  • Do others who are jaded by these words associate other meanings or intentions to the words we use?

Let’s focus on the word “hope”.  There are many places where hope is appropriate, but I find that most people misuse the word.  For example:

I hope to show up at yoga on Saturday

I heard this sentence and wonder:

  • do you want to show up to yoga on Saturday?
  • are you saying this to make me feel good?
  • are there other things preventing you from committing to yoga on Saturday?

What could be said is:

I am planning to show up at yoga on Saturday

or:

I have a lot of things going on, if all goes well I will show up at yoga on Saturday

or:

I don’t want to hurt your feelings by saying no, so to make you feel good I will be non committal about showing up to yoga on Saturday even though I have no intentions.

There are many ways to replace the word “hope”, and all of them achieve a clearer communication between two people.

Now with that said, what am I hacking?  For the last few months I have been reducing (almost removing) the words ‘awesome’, ‘amazing’, ‘hate’, and ‘hope’ from my vocabulary.

Why am I writing about this?  I might as well be in the open about this and invite others to join me in being deliberate about how we speak.  Once a month I will post a new word, feel free to join me in this effort and see how thinking about what you say and how you say it impacts your communications.

Also please leave comments on this post about specific words that you feel are overused – I could use suggestions of words.


Daniel Stenberglibcurl multi_socket 3333 days later

.SE-logo

On October 25, 2005 I sent out the announcement about “libcurl funding from the Swedish IIS Foundation”. It was the beginning of what would eventually become the curl_multi_socket_action() function and its related API features — the API we provide for event-driven applications. This API is the most suitable one in libcurl if you intend to scale your client up to and beyond hundreds or thousands of simultaneous transfers.

Thanks to this funding from IIS, I could spend a couple of months working full-time on implementing the ideas I had. They paid me the equivalent of 19,000 USD back then. IIS is the non-profit foundation that runs the .se TLD and they fund projects that help internet and internet usage, in particular in Sweden. IIS usually just call themselves “.se” (dot ess ee) these days.

Event-based programming isn’t generally the easiest approach so most people don’t easily take this route without careful consideration, and also if you want your event-based application to be portable among multiple platforms you also need to use an event-based library that abstracts the underlying function calls. These are all reasons why this remains a niche API in libcurl, used only by a small portion of users. Still, there are users and they seem to be able to use this API fine. A success in my eyes.

One dollar bill

Part of that improvement project to make libcurl scale and perform better was also to introduce HTTP pipelining support. I didn’t quite manage that part within the scope of that project, but the pipelining support in libcurl was born in that period (autumn 2006) and had to be improved several times over the years until it became decently good just a few years ago – and we’re just now (still) fixing more pipelining problems.

On December 10, 2014 there are exactly 3333 days since that initial announcement of mine. I’d like to highlight this occasion by thanking IIS again. Thanks IIS!

Current funding

These days I’m spending a part of my daytime job working on curl with my employer’s blessing and that’s the funding I have – most of my personal time spent is still spare time. I certainly wouldn’t mind seeing others help out, but the best funding is provided as pure man power that can help out and not by trying to buy my time to add your features. Also, I will decline all (friendly) offers to host the web site on your servers since we already have a fairly stable and reliable infrastructure sponsored.

I’m not aware of anyone else who is spending (much) paid work time on curl code, although I know there are quite a few who do it every now and then – especially to fix problems that occur in commercial products or services, or to add features to such.

IIS still donates money to internet related projects in Sweden but I never applied for any funding from them again. Mostly because it has been hard to sync with my normal life and job situation. If you’re a Swede or just live in Sweden, do consider checking this out for your next internet adventure!

Advancing ContentGetting Tiles Data Into Firefox

A month ago, we announced that the new Tiles experience is available in the stable Firefox build. We experimented earlier this year and found that users do want more than empty boxes or screenshots in Firefox.  The Mozilla project is also working hard to set higher standards for the industry around transparency and control of the use of a user’s data.  With that in mind, we wanted to make it clear what data we do and do not pass from Mozilla servers to Firefox to enable the Tiles experience, and here, we’ll address how Tiles get into Firefox.  You can, of course, inspect the source code, but this might be easier.

Typically, if a web page adds external content, it does so by embedding an unencrypted remote request with cookied identifiers, so the server can respond as fast as possible with relevant content. The server also wants to remember as much as possible about the context of the request such as IP addresses and referrals, so this all is tracked through a persistent identifier that could be shared and triangulated with other data sources. Frequently, where a server wants to gather as much data as possible about users, invisible image beacons and iframes/scripts may be used to gather more information.  This is all done with little understanding by, or permission from, the user.

Tiles Request Headers Graphic

No special request headers or body data sent

What we are doing with Tiles is different

With the November 10th release, Firefox sends a single custom request once per day and then saves the data locally, to be reused for however many new tabs the user opens.  The Mozilla server sees the IP address of the request, and uses that to determine geotargeting at a country level (e.g., nothing more granular than “United States” or “Germany”).  The request URL contains the locale of the Firefox build to ensure the content Firefox shows is language and location appropriate.

Having received the request, we respond with the appropriate content and then delete the raw data with IP addresses within a week. Even with this minimal actionable data set, we are able to determine the volume of requests for specific country/locale pairs. This helps us decide which tiles to translate to provide a useful experience to new users in those regions.

The JSON response contains display information: Tiles graphics resources, URLs and titles, and is defined for a specific country/locale pair.

Code snippet

Sample en-US response with an array of tile data objects

Tiles is still evolving, and this is an initial release, but there are several things that we like about how we get Tiles data into Firefox.

Very importantly, this is an experience that is controlled by the user and involves the minimal actionable dataset.  “Tracking” has become a loaded term, and we do not consider Tiles to be tracking.  We do not need or try to identify an individual with these data requests, meaning that no cookies need to be set (and no other techniques to “fingerprint” are used).  However, we assume that anyone who has previously set their DNT preference to “do not track” wishes to opt out of the experience, and so no request is sent.  DNT is not how a user will opt out of the Tiles experience: a user who has Tiles can also set “do not track” independently of their Tiles settings.

The architecture reduces the latency requirements of our Tiles experience compared to traditional web pages: because Firefox can request and cache the data, it can pre-load assets, pre-compute personalization, and pre-populate the new tab page, so when users open a new tab, it’s fast and immediately usable.

And even though we send little data and save little data, we secure that data with transport encryption and prevent attacks with key pinning.

There are downsides to this approach: the data file could be outdated in both space and time. For example, a user could download tiles for their home country but then travel elsewhere, or we might accidentally show a Happy New Year tile after that date has passed. Also, because we package additional information in order to let Firefox make decisions, Firefox ends up with some title and URL data of Enhanced tiles that are not shown to users.  These are all potential improvements we’re looking forward to making.

Happy New Year Tile… might be a day late!

I’m sure we can improve on our implementation.  We would also love to hear your thoughts on our approach to getting relevant Tiles data to power the default Directory and Enhanced Tiles experiences.  Next time, we will detail how we report data on interactions with Tiles.

Ed Lee on behalf of the Tiles team.

Nicholas NethercoteBetter documentation for memory profiling and leak detection tools

Until recently, the documentation for all of Mozilla’s memory profiling and leak detection tools had some major problems.

  • It was scattered across MDN, the Mozilla Wiki, and the Mozilla archive site (yes, really).
  • Documentation for several tools was spread across multiple pages.
  • Documentation for some tools was meagre, non-existent, or overly verbose.
  • Some of the documentation was out of date, e.g. describing tools that no longer exist.

A little while back I fixed these problems.

  • The documentation for these tools is now all on MDN. If you look at the MDN Performance page in the “Memory profiling and leak detection tools” section, you’ll see a brief description of each tool that explains the circumstances in which it is useful, and a link to the relevant documentation.
  • The full list of documented tools includes: about:memory, DMD, areweslimyet.com, BloatView, Refcount tracing and balancing, GC and CC logs, Valgrind, LeakSanitizer, Apple tools, TraceMalloc, Leak Gauge, and LogAlloc.
  • As well as consolidating all the pages in one place, I also improved some of the pages (with the help of people like Andrew McCreight). In particular, about:memory now has reasonably detailed documentation, something it has lacked until now.

Please take a look, and if you see any problems let me know. Or, if you’re feeling confident just fix things yourself! Thanks.

Laura ThomsonTry server performance

In Q4, Greg Szorc and the Developer Services team generally have been working on a headless try implementation to solve try performance issues as the number of heads increases. In addition, he’s made a number of performance improvements to hg, independent of the headless implementation.

I think these graphs speak for themselves, so I’m just going to leave them here.

Try queue depth (people waiting on an in-flight push to complete before their push can go through):

Try push median wait times:

(Thanks, too, to Hal Wine for setting up a bunch of analytics so we can see the effects of the work represented in such shiny graphs.)

Karl DubostPop Out The Table

The CSS table display pattern has emerged among Web designers. CSS table display is very well supported across browsers, and it gives the ability to keep table layout properties without using HTML table markup.

The Issue!

But it also creates Web compatibility issues when combined with max-width: 100%;. This is a rendering of the current mobile Ebay site in Blink and Gecko. There is an open Web Compatibility bug about it.

Screenshots of mobile ebay site

The markup for each individual item in the page follows this structure:

<a href="http://…">
  <div class="verticalAlign">
    <img class="stackImg" src="….jpg" data-src="….jpg">
  </div>
</a>

associated with the following CSS:

.mainCatCntr .stackone .catImgBkg a {
    display: table;
    width: 100%;
    height: 100%;
    padding-bottom: 45px;
    padding-top: 10px;
}

.mainCatCntr .stackone .catImgBkg a .verticalAlign {
    display: table-cell;
    vertical-align: middle;
}

.mainCatCntr .stackone .stackImg {
    display: block;
    position: absolute;
    top: 0px;
    z-index: 1;
    max-height: 94px;
    max-width: 100%;
    border-radius: 2px 2px 0px 0px;
}

As we can see, there is an outer element with display: table;, then a nested element with display: table-cell;, and finally, inside that one, an element specified with max-width: 100%;.

How To Fix It?

A first approximation of a fix is to replace max-width: 100%; with width: 100%;, but then, when the available space is bigger than the image, the image stretches. It is not a very compelling fix: maybe it's alright for sweaters and mirrors, but less so for cameras.

The fix is this one:

.mainCatCntr .stackone .catImgBkg a {
    display: table;
    table-layout: fixed;
    width: 100%;
    /* height: 100%; */
    padding-bottom: 45px;
    padding-top: 10px;
}

We added table-layout: fixed; just after display: table;. That fixed the images overflowing their boxes. We still had an issue with the vertical alignment, which was fixed by removing the height: 100%;, but that was not the core issue. Once these changes were in place, we got the same rendering as the Blink rendering engine.

Screenshots of devtools on ebay site

And most importantly, the same fix doesn't break the layout in Blink. The work now is to convince ebay to modify their CSS so we can have a version that works across browsers. Note that this type of issue could have been detected through automated testing and screenshots.

Differences In Rendering

But wait… it's not over yet. Which rendering engine implements the correct behavior? Note that the answer is always difficult; it can be one of the following:

  1. The specification describes the correct behavior and one or more browsers is not implementing correctly the specification.
  2. The browser with the dominant market share defines the correct behavior, and we should modify our implementation to match what that browser is doing. (Yeah. It's the it-sucks part of the story.)

The thing is that users do not care which of these it is. What the user sees is that the page is broken in one browser and not in the other. In the same way, designers treat the behavior of a browser with dominant market share, bugs included, as the normal behavior, because they just don't know what the correct behavior should be. This is what Web Compatibility is about.

I found a couple of bugs at Mozilla talking about the same issue.

I'll let you read the comments on the two bugs and try to decipher the interpretation of the specification text. If you have no trouble and understand everything, you get your degree in Web Compatibility Shamanism. Maybe we should start delivering these. Wouldn't Web Compatibility Shaman be as ridiculous as JavaScript Ninja or Python Rockstar?

Otsukare.

Nicholas Nethercotemfbt/SegmentedVector.h

I just landed a new container type called mozilla::SegmentedVector in MFBT. It’s similar to nsTArray and mozilla::Vector, but the element storage is broken into segments rather than being contiguous. This makes it less flexible than those types — currently you can only append elements and iterate over all elements.

However, in cases where those operations suffice, you should strongly consider using it. It uses multiple moderate-sized storage chunks instead of a single large storage chunk, which greatly reduces the likelihood of OOM crashes, especially on Win32 where large chunks of address space can be difficult to find. (See bug 1096624 for a good example; use of a large nsTArray was triggering ~1% of all OOM crashes.) It also avoids the need for repeatedly allocating new buffers and copying existing data into them as it grows.

The declaration is as follows.

template<typename T,
         size_t IdealSegmentSize,
         typename AllocPolicy = MallocAllocPolicy>
class SegmentedVector

  • T is the element type.
  • IdealSegmentSize is the size of each segment, in bytes. It should be a power of two (to avoid slop), not too small (so you can fit a reasonable number of elements in each chunk, which keeps the per-segment book-keeping overhead low) and not too big (so virtual OOM crashes are unlikely). A value like 4,096 or 8,192 or 16,384 is likely to be good.
  • AllocPolicy is the allocation policy. A SegmentedVector can be made infallible by using InfallibleAllocPolicy from mozalloc.h.
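
For illustration, here is a minimal sketch of the append-and-iterate pattern (a usage sketch only; mfbt/SegmentedVector.h itself is the authoritative reference for the exact API):

#include <stdint.h>
#include "mozilla/SegmentedVector.h"

// Append-only storage in 4 KiB segments, followed by a single pass over
// all elements (the two operations SegmentedVector currently supports).
static uint64_t
SumOfSquares(uint32_t aCount)
{
  // 4096-byte segments; the default MallocAllocPolicy is fallible, so
  // Append() reports allocation failure and must be checked.
  mozilla::SegmentedVector<uint64_t, 4096> squares;

  for (uint32_t i = 0; i < aCount; i++) {
    if (!squares.Append(uint64_t(i) * i)) {
      return 0;  // OOM; bail out (or handle it however is appropriate)
    }
  }

  // Iterate over every element in insertion order.
  uint64_t sum = 0;
  for (auto iter = squares.Iter(); !iter.Done(); iter.Next()) {
    sum += iter.Get();
  }
  return sum;
}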

If you want to use SegmentedVector but it doesn’t support the operations you need, please talk to me. While it will never be as flexible as a contiguous vector type, there’s definitely scope for adding new operations.

Andreas GalIt takes many to build the Web we want

Mozilla is announcing today the creation of a WebRTC competency center jointly with Telenor.

Mozilla’s purpose is to build the Web. We do so by building Firefox and Firefox OS. The Web is pretty unusual when it comes to interoperable technology stacks, because it is not built by standards bodies. Instead, the Web is built by the browser vendors: what they implement in their browsers, in the end, pretty much defines what the Web is.

The Web adds new technologies whenever a majority of browser vendors agree to extend it in an interoperable way. Standards bodies merely help coordinate this process. Very rarely do new Web capabilities originate in a standards body. New Web capabilities merely end up there eventually, once there is sufficient interest from multiple browser vendors to warrant standardization.

Mozilla doesn’t — and can’t — build the Web alone. What makes the Web unique is that it is owned by no-one, and cannot be held back by anyone. It doesn’t take unanimous consent to extend the Web. A mere majority of browser vendors can popularize a new Web capability, forcing the rest of the browser vendors to eventually come along.

While several browser vendors build the Web, Mozilla has a unique vision for the Web that is driven by our mission as a non-profit foundation. Whereas all other browser vendors are for-profit corporations, advancing the Web in the interest of their shareholders, Mozilla advances the Web for users.

The primary browser vendors today are Google, Apple, Microsoft and Mozilla. These four organizations have a direct path to bring new technologies to the Web. While many other technology companies have a strong interest in the Web, they lack the ability to directly move the Web ahead because only these four browser vendors develop a rendering engine that implements the Web stack.

There is one more aspect that sets Mozilla apart from its browser vendor competitors. We are several orders of magnitude smaller than our peers. While this might appear to be a market disadvantage at first, combined with our neutral and non-profit status it actually creates a unique opportunity. Many more technology companies have an interest in working on the Web, but if you aren’t Google, Apple, or Microsoft it’s very difficult to contribute core technologies to the Web. These three companies have direct control over a rendering engine. No other technology company can equally influence the Web. Mozilla is looking to change that.

Jointly with Telenor we are launching a new initiative that will allow parties with a strong technology interest in WebRTC to participate as an equal in the development process of the WebRTC standard. Since standards are really just a result of delivering new Web technologies in a rendering engine, Telenor will assign Telenor engineering staff to work on Mozilla’s implementation of WebRTC in Firefox and Firefox OS.

The goal of this new center is to implement WebRTC with a broad, neutral vision that captures the technology needs of many, not just the technology needs of individual browser vendors.

Mozilla is an open source project where every opinion and technical contribution matters. The WebRTC Competency Center will accelerate the development of WebRTC, and ensure that WebRTC serves the diverse technology interests of many. If you would like to see WebRTC (or any other part of the Web) grow capabilities that are important to you, join us.



Armen ZambranoRunning Mozharness in developer mode will only prompt once for credentials

Thanks to Mozilla's contributor kartikgupta0909 we now only have to enter LDAP credentials once when running the developer mode of Mozharness.

He accomplished it in bug 1076172.

Thank you Kartik!


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Joel Maher5 days in Portland with Mozillians and 10 great things that came from it

I took a lot of notes in Portland last week.  One might not know that based on the fact that I talked so much my voice ran out of steam by the second day.  Either way, in chatting with some co-workers yesterday about what we took away from Portland, I realized that there is a long list of awesomeness.

Let me caveat this by saying that some of these ideas have been talked about in the past, but despite our efforts to work with others and field interesting and useful ideas, there is a big list of great things that came to light while chatting in person:

  • :bgrins mentioned a mozscreenshot tool and the need to get screenshots of new features in development on various platforms so UX can review the changes.  Currently the method is to ask UX to download the build from try or some other location and run it locally to see the changes.
  • :heycam/:jwatt – had a great and interesting Talos discussion, mostly around how to run it and validate patches/fixes locally and on try server (check out bug 1109243).
  • :glandium is looking at doing some changes (I recall something with build/pgo) and wanted to know how to compare some Talos numbers to help make the right decision – this can be done with either bug 1109243, or the existing compare.py in the Talos repo (we might need some cleanup on this)
  • :bobowen has been working to get csb tests working – after chatting in line to board a plane, it became clear he needs to solve some finer-grained test selection problems, many of which the ateam has on a roadmap in Q2/Q3 – I see some tighter collaboration happening here.
  • Thanks to chatting with :lsblakk, I am motivated to expand the talos sheriff team and look for dedicated Mozillians (or soon to become Mozillians) to work with in keeping a lid on the alerts and overall state of performance (based on what we measure).
  • :lightsofapollo had a great conversation with me about TaskCluster and what barriers stood in the way of running Talos on it – this will result in some initial investigation work!
  • :kats was asking me how to generate alerts for areweslimyet.com.  This is very doable via posting data to graph server.
  • After a good session on how to handle intermittents (seems like the same people have this conversation every time a bunch of Mozillians get together), I am motivated to push Titanic further to find the root cause of an intermittent via brute force retriggers (ideally on weekends).  In fact :dbaron has done this a few times in the last month and so have the sheriffs.  This is similar to what we do to verify a talos regression, just with some different parameters.
  • The same conversation about intermittents yielded a stronger desire to look at new tests coming into the system and to validate their stability.  The simple solution is to run the job 100 times, verify that the new test didn’t have issues, and then leave it alone.  Of course we could get smart and do this for all test_* files that are edited in the tree.  Thanks to :ehsan for spawning this conversation.
  • Discussing the idea of a Talos Sheriff with a few folks, it seems like further conversations are needed with the existing Sheriff team, as well as chats with :vladan and :avih, about what type of policy we should have for existing performance failures which are detected.  I would expect some changes to be made early next year as we have more tests and need more help.  My initial thoughts are specifically about responding to regressions or getting backed out in XX hours.  Yeah that sounds nasty, but there are probably cut and dry parameters we can set and start enforcing.

Those are 10 specific topics which, despite everybody knowing how to contact me or the ateam to share great ideas or frustrations, only came out of being in the same place at the same time.

Thinking through this, when I see these folks in a real office while working from there for a few days or a week, it seems as though the conversations are smaller and not as intense – usually just small talk whilst waiting for a build to finish.  I believe that not being expected to focus on our day-to-day work, and instead making plans for the future, is the real innovation behind getting these topics surfaced.


Kevin GhimInterest Dashboard Launch - Follow up

A month ago, we released the first iteration of our Interest Dashboard.  This is a very ambitious project: interpreting and representing back to the user an analysis of their tasks and interests.  Many online services already make some representation of this back to the user (think of Amazon’s suggestions), but I cannot think of anyone who does this with the broad scope that the Firefox Interest Dashboard does, and under the control of the user.  Rather than something like Amazon suggestions, when I looked at my Interest Dashboard, I was more reminded of a scene from You Are What You Eat where I was confronted with my intake of the Web.

Anyway: it’s often hard to understand the scope of the project and the current and planned features without the full product vision. And there were a couple of misconceptions that we evidently helped create last week which I would like to clear up.

First: we stated in our FAQ that data is stored in the client.  That is the case.  A few commentators noted that the Privacy Notice states that you may (with your express consent) share your data with other websites.  The FAQ is correct; the data is stored in the client because this functionality is not yet enabled in the add-on.  At such time as we have the functionality available, we will update the FAQ and be sure to notify users of the add-on.  We felt the best approach was to be transparent in the privacy notice about the direction we will go, but be very clear in the current FAQ (we felt this was a better approach than updating the privacy notice with the release).   We do expect the user to be able to share their intentions with websites in the future, and as we make clear, this will be with the user’s express consent.  That is in the future.

There were two other misapprehensions about the launch last week which I would like to clear up.  I read at least one comment where a user stated they had cleared their browser’s cache but still saw data in the Interest Dashboard.  This is fairly straightforward: the dashboard is constructed from the browser’s history, not what is in cache.  Also, one commentator noted that some reviews had been removed from AMO (some are preserved here).  This was actually news to us too, and on checking in with the AMO team, we understood these reviews were removed by members of the AMO community for violating their guidelines for relevance.  

The last point that I read in the discussion was about whether or not Interest Dashboard would exacerbate the Filter Bubble problem.  It’s an excellent point, and one we think about quite a bit. In fact, I believe we can help address the issue. The Web should not seduce the user into bad or repetitious habits. A first step here (just as in You Are What You Eat) is to be presented with what you consume (“What gets measured, gets managed”).

But this raises the principal question we’d like to answer through this experimental add-on: whether it’s possible to have a user-centered recommendation system. I use the word “recommendation” loosely because not everything we get recommended online needs to be in the form of an advertisement. We intend to build this recommendation system by first providing users transparency into how interest categorization works - this is the current state of the Interest Dashboard. We are also intensely focused on providing in-depth analysis of the user’s long-term interests and dynamic short-term interests. Most recommendation systems, including ads, only focus on the short-term intent of the user in the hope of converting the user down the purchase funnel.

Shortly, we hope to connect the user’s long and short-term interests, with their explicit consent, to pieces of content that they might find interesting. The connection part can be done several different ways: 1) use a content crawler and index content by category, or 2) partner with an aggregation service. In the case of the latter, we can either get a daily massive dump of indexed content or use an API that takes the user’s interest category and returns a matching piece of content. We are internally experimenting with these options before we release this as a feature of the add-on. That’s a long way of explaining how part of the experimental feature is reflected in the privacy notice.

A content recommendation system is certainly not for everyone and there are some bad examples of how companies can abuse personal data. We want to counter the bad actors by providing choice.

If the user wants it, we can provide a platform based on what/how/when the user wants. We’re hoping the Interest Dashboard is the beginning of this exploration. But this is a sensitive area.  Ideas and advice are always welcomed, as is scrutiny.  We are serious about what we are building, and the only way it will be adopted is if it is trusted, and it will only be trusted if it is scrutinized.  Please, keep the scrutiny coming.

Gervase MarkhamUnanimity Is Not a Requirement

People sometimes wonder how to best involve all of the affected/important/relevant parts of the Mozilla community in a decision. The prospect of doing this can lead to a certain amount of fear – of criticism, bike-shedding, etc.

At the last All Hands in October 2013, at a session in Brussels, we produced a Best Practices document called “Productive Discussion” to help with exactly this problem. Given Mitchell’s keynote at the recent All Hands, I thought it was worth reflagging its existence.

Adam OkoyeOPW Internship

Tomorrow, December 9th (which is only about 45 minutes away at this point), is the start of my OPW internship with Mozilla. I’ll be working on the SUMO/Input Web Designer/Developer project and, from my understanding, primarily working on the “thank you” page that people see after they leave feedback. The goal is for people who leave feedback, especially negative feedback, to not feel brushed off, but rather to feel like their feedback was well received. We also want to be able to a. point them to knowledge base articles that might mitigate the issue(s) they are having with Firefox based on their feedback (what they wrote in the text field) and b. point them towards additional ways that they can become involved with Mozilla.

Like I said above, the internship starts on December 9th and ends on March 9th. The internship, like all OPW projects, is remote, but because there is a Portland Mozilla office I will be able to work in one of their conference rooms.  Most of the programming I will be doing will be in Python, and I will also be doing a lot of work with Django. That said, I will also likely be doing some work in HTML, CSS, and JavaScript. In addition to the thank you page, I’m also going to be working on other assorted Input bugs.

As part of the agreement for my internship I will be posting at least one internship-related post every two weeks. In reality I hope to post at least one internship-related post a week, as it will get me back into the practice of blogging and I will also have a backup plan if there is a week when I can’t post for whatever reason.

Here’s to a productive three months!

Nick AlexanderThe Firefox for Android build system in 2015

My colleagues @lucasratmundo, @mleibovic, @michaelcomella, and vivekb and I attended the Community Building discussion at #mozlandia (notes and slides are available). @mhoye presented his thinking about community building and engagement at Mozilla and beyond. I interpreted Mike’s presentation through a bifurcated lens: I came away feeling that there are social aspects to community engagement, such as providing positive rewards and recognition; and there are technical aspects to community engagement, such as maintaining documentation and simplifying tooling requirements [1].

People like @lsblakk are able to bring new people into our community with phenomenal outreach programs like the Ascend Project, but that’s not my skill-set. I deeply admire the social work Lukas (and others!) are doing, but I personally am most able to empower the Mozilla community to own Firefox for Android by addressing the technical aspects Mike discussed.

Making it easier to contribute to Firefox for Android

In this context, the following initiatives will drive the Firefox for Android tooling:

  1. making it easier to build Firefox for Android the first time;
  2. reducing the edit-compile-test cycle time;
  3. making the Firefox for Android development process look and feel like the standard Android development process.

Making it easier to build Firefox for Android the first time

One strong claim made by mhoye — and supported by many others in the room — is that mach bootstrap has significantly reduced the technical accessibility barrier to building Firefox for Desktop. We need to implement mach bootstrap for Firefox for Android.

For those who don’t know, mach bootstrap is a script that prepares the Firefox build environment, including fetching, updating, and installing the pre-requisites needed for building Firefox. It automates the (often difficult!) task of fetching dependencies; ensures that known-good versions of dependencies are installed; and sets development environment defaults. mach bootstrap is the first thing that should be run in a fresh Firefox source tree [2].

Firefox for Android has more complicated dependencies than Firefox for Desktop, including some that cannot be easily distributed or installed: the Java development kit and run-time environment, the Android SDK and NDK; Google’s Play Services libraries, etc. We can save new contributors a long dependency chase before they see a positive result. In addition, seasoned developers spend an unknown-but-large amount of time discovering that the required dependencies have advanced. Pinning the build to known-good versions, failing the build when said versions are not present, and providing mach bootstrap to update to known-good versions will reduce this frustration.

A contributor started writing a shell script that does the work of mach bootstrap. Bug 1108771 tracks building upon this effort. I’ve also filed Bug 1108782 to track pinning the Firefox for Android build requirements to known-good versions.

Reducing the Firefox for Android edit-compile-test cycle time

Firefox for Android is an unusual Android application: a large C++ library backing a medium-sized Java front-end, all plumbed together with a JavaScript-based message passing system. Right now, building the C++ library takes roughly 12 minutes on my system. Building the Java front-end takes roughly 2 minutes, and the JavaScript parts are essentially free. In 2015, glandium has taken a first quarter goal to make it possible to build Firefox (for Desktop and for Android) without building that large C++ library at all [3]. In the future, purely front-end developers (XUL/JavaScript developers on Desktop; Java/JavaScript developers on Android) will download and cache the C++ build artifacts and build the application on top of the cached artifacts. Firefox for Android is really well-suited to this mode of operation because our dependencies are so well-defined. I’ve filed Bug 1093242 to track part of this work.

The previous work will make it faster to build Firefox for Android the first time, because we won’t build C++ libraries. We’re also going to invest in making each incremental build faster, and there’s some low-hanging fruit here. Right now, the most costly parts of our build are compiling individual JAR libraries and DEXing all of the resulting JAR libraries. Every time we split our JAR libraries, we can parallelize a small part of our build and reduce the wall-clock time of our Java compilation. Right now we could split our single third-party JAR library and save ourselves compile time. And we’re very close to being able to split the Background Services (Sync, Firefox Accounts, Firefox Health Report, etc) library out of Fennec proper, which will save even more compile time.

Improving our DEXing time is more difficult. Android’s DEX processor is a byte-code transformation step that turns Java’s byte-code into Dalvik VM byte-code. For historical reasons, we DEX the entirety of Fennec’s byte-code in one DEX invocation, and it’s both a single-process bottleneck and terribly expensive. For some time, it has been possible to DEX each individual library in parallel and to merge the resulting DEX files. All modern Android build systems (such as buck or Gradle) support this. We could support this in the Firefox for Android build system as well, but I think we should move to use a more featured build system under the hood instead. Android build systems are very complicated; we don’t want to write our own, and we definitely don’t want to write our own in Make syntax. In 2015, we’ll push to use a full-featured build tool that brings this DEX-time improvement. More on this in a future post.

Making the Firefox for Android development process "standards compliant"

This point is more philosophical than the others. Firefox for Android wins when we engage our community. The community of Android developers is large and constantly pushing the boundaries of what’s possible on a device. We want to tap into that well-spring of talent and innovation, and everything we do that’s non-standard to an Android developer makes it harder for us to do this. Contributor @fedepaol wrote a blog post about how difficult this used to be.

The good news is, we’re getting better: we have rudimentary Gradle support and you can use IntelliJ now. But we still have a long, long way to go. We’ve got lots of easy wins just waiting for us: tickets like Bug 1107811 will go a long way towards making the Fennec "Android standards compliant" IntelliJ experience better. I have a technical plan to land in-source-tree IntelliJ configurations, so developers can open mobile/android directly in IntelliJ and get to a working Firefox for Android APK in the IDE in one step.

At a lower level, tickets like Bug 1074258 will let us use the IntelliJ design view more easily, and landing Android JUnit 3 Instrumentation test runners in automation (Bug 1064004) will make local testing significantly easier than the convoluted Robocop process we have right now. The list goes on and on.

Conclusion

The Firefox for Android team moved strongly towards easier builds and ubiquitous tooling in 2014. 2015 is going to be even better. We’re going to improve our technical experience in (at least!) three ways: making the first build easier; making the next builds faster; and unlocking the power of the standard Android developer toolchain.

Join us! Discussion is best conducted on the mobile-firefox-dev mailing list and I’m nalexander on irc.mozilla.org and @ncalexander on Twitter.

Notes

[1] I believe there is an over-arching third aspect, that of the system in which we do our work and interact with the community, but right-here-right-now I don’t feel empowered to change this. Systemic change requires making community engagement part of every team’s top-level goals, and achieving such goals requires resources that are allocated well above my pay-grade.
[2] In fact, the bootstrapper does not even require a source check-out — you can download just the script and it will fetch enough to bootstrap itself. So it’s more accurate to say just bootstrap rather than mach bootstrap, but mach bootstrap has won the vocabulary battle in this arena.
[3] glandium has written a very informative blog post about the future of the Firefox build system at http://glandium.org/blog/?p=3318. The section relevant to this discussion is Specialized incremental builds.

Paul RougetFirefox.html

Firefox.html screenshot

I just posted on the Firefox-dev mailing list about Firefox.html, an experimental re-implementation of the Firefox UI in HTML. If you have comments, please post on the mailing list.

Code, builds, screenshots: https://github.com/paulrouget/firefox.html.

Lukas BlakkAscend New Orleans: We need a space!

I’m trying to bring the second pilot of the Ascend Project http://ascendproject.org to New Orleans in February and am looking for a space to hold the program. We have a small budget to rent space but would prefer to find a partnership and/or sponsor if possible to help keep costs low.

The program takes 20 adults who are typically marginalized in technology/open source and offers them a 6 week accelerated learning environment where they build technical skills by contributing to open source – specifically, Mozilla. Ascend provides the laptops, breakfast, lunch, transit & childcare reimbursement, and a daily stipend in order to lift many of the barriers to participation.

Our first pilot completed 6 weeks ago in Portland, OR and it was a great success with 18 participants completing the 6 week course and fixing many bugs in a wide range of Mozilla projects. They have now continued on to internships both inside and outside of Mozilla as well as seeking job opportunities in the tech industry.

To do this again, in New Orleans, Ascend needs a space to hold the classes!

Space requirements are simple:

* Room for 25 people to comfortably work on laptops
* Strong & reliable internet connectivity
* Ability to bring in our own food & beverages

Bonus if the space helps network participants with other tech workers, has projector/whiteboards (though we can bring our own in), or video capability.

Please let me know if you have a connection who can help with getting a space booked for this project and if you have any other leads I can look into, I’d love to hear about them.

Armen ZambranoTest mozharness changes on Try

You can now push to your own mozharness repository (even a specific branch) and have it be tested on Try.

A few weeks ago we developed mozharness pinning (aka mozharness.json) and recently we have enabled it for Try. Read the blog post to learn how to make use of it.

NOTE: This currently only works for desktop, mobile and b2g test jobs. More to come.
NOTE: We only support named branches, tags or specific revisions. Do not use bookmarks, as they don't work.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Sean MartellThank You, Mozlandia

Well, that was a week.

Sitting here on the Monday after, coffee in hand and reading all of the fresh new posts detailing our recent Coincidental Work Week, I’ve decided to share a few quick thoughts while they’re still fresh in my mind.

For me, last week was a particularly emotionally overwhelming one. There was high energy around once again gathering as a whole, sadness around friends/family moving on, fear in what’s next, excitement in what’s next, and a fine juggling act of trying to manage all those feels as they kicked in all at once.

The work week itself (the actual work part) was just amazing and I’m pretty sure it was the most productive travel week I’ve ever had in any job setting. Things were laid out, solutions discussed, alliances forged. Good stuff.

Then Friday hit.

So did all the emotions. All the feels. All of them.

The night started with me traveling through the swarms of Mozillians getting folks to sign a farewell card for Johnny Slates, my partner in crime for the majority of my Mozilla experience. A tough start. Tears were shed, but really they were thank you tears, in thanks for an awesome time shared at Mozilla.

Later, as I stood in a sea of Mozillians dancing, cheering and smiles all around, I was standing once again in tears. I was watching Mozillians letting loose. I was watching Mozillians get pumped for the future of the Internetz and our role in it. Even though I was listening to lyrics on topics that have brought Mozilla together and torn us apart all at the same time, we were dancing together and having fun.

I felt like I was watching my work family heal.

It was a very, very happy cry.

Thank you to my past, current and future Mozilla family members. To me, there is no old or new guard, just an ever evolving extended family.

<3