Chris McDonald: Negativity in Talks

I was at a meetup recently, and one of the organizers was giving a talk. They came across some PHP in the demo they were doing and cracked a joke about how bad PHP is. The crowd laughed and cheered along. This isn’t an isolated incident; it happens during talks and discussions all the time. That doesn’t mean it is acceptable.

When I broke into the industry, my first gig was writing Perl, Java, and PHP. All of these languages have stigmas around them these days. Perl gets jokes about its magic and how only neckbeard sysadmins write it. Java gets the ‘I just hit tab in my IDE and the code writes itself!’ line and other comments about how ugly it is. PHP, possibly the most made-fun-of language, doesn’t even get a reason most of the time. It is just ‘lulz php is bad, right gaise?’

Imagine a developer who is just getting started. They are ultra proud of their first gig, which happens to be working on a Drupal site in PHP. They come to a user group for a different language they’ve read about and think sounds neat. They then hear speakers whom people appear to respect making jokes about the job they are proud of, and the crowd joining in on this negativity. This is not inspiring to them; it just reinforces the impostor syndrome most of us felt as we started out in tech.

So what do we do about this? If you are a group organizer, you already have all the power you need to make the changes. Talk with your speakers when they volunteer or are asked to speak. Let them know you want to promote a positive environment regardless of background. Consider writing up guidelines for your speakers to agree to.

How about as just an attendee? The best bet is probably speaking to one of the organizers. Bring it to their attention that their speakers are alienating a portion of their audience with the language trash talking. Approach it as a problem to be fixed in the future, not as if they intended to insult.

Keep in mind I’m not opposed to direct comparison between languages. “I enjoy the lack of type coercion because it makes the truth table much easier to understand than, for instance, PHP’s.” This isn’t insulting the whole language, and it isn’t turning it into a joke. It is just illustrating a difference that the speaker values.

Much like other negativity in our community, this will take some time to fix. Keep in mind it isn’t limited to user group or conference talks; discussions around a table suffer from this as well. The first place to address the problem is within ourselves. We are all better than this pandering; we can build ourselves up without having to push others down. Let’s go out and make our community much more positive.


Tarek Ziadé: ToxMail experiment

I am still looking for a good e-mail replacement that is more respectful of my privacy.

This will never happen with the existing e-mail system due to the way it works: when you send an e-mail to someone, even if you encrypt the body of your e-mail, the metadata will transit from server to server in the clear, and the final destination will store it.

Every PGP UX I have tried is terrible anyway. It's just too painful to get things right for someone who has no knowledge (and no desire to acquire any) of how things work.

What I am aiming for now is a separate system to send and receive mail with my close friends and family. Something that my mother can use like regular e-mail, without any extra work.

I guess some kind of "Darknet for E-mails" where there are no intermediate servers between my mailbox and my mom's mailbox, and no way for an eavesdropper to get the content.

Ideally:

  • end-to-end encryption
  • direct network link between my mom's mail server and me
  • based on existing protocols (SMTP/IMAP/POP3) so my mom can use Thunderbird, or I can set up a Zimbra server for her.

Project Tox

The Tox Project aims to replace Skype with a more secure instant messaging system. You can send text, voice and even video messages to your friends.

It's based on NaCl for the crypto bits, in particular the crypto_box API, which provides high-level functions to generate public/private key pairs and encrypt/decrypt messages with them.
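As a rough illustration of the crypto_box flow, here is a minimal sketch using the PyNaCl bindings directly (Toxmail itself goes through PyTox, so treat this as a simplified stand-in, not the project's actual code):

from nacl.public import PrivateKey, Box
from nacl.utils import random

# Each side generates a keypair; public keys are exchanged out of band.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts for Bob with her private key and his public key.
nonce = random(Box.NONCE_SIZE)
ciphertext = Box(alice, bob.public_key).encrypt(b"hello bob", nonce)

# Bob decrypts with the mirror-image Box.
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"hello bob"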

The other main feature of Tox is its Distributed Hash Table that contains the list of nodes that are connected to the network with their Tox Id.

When you run a Tox-based application, you become part of the Tox network by registering to a few known public nodes.

To send a message to someone, you have to know their Tox Id and send an encrypted message using the crypto_box API and the keypair magic.

Tox was created as an instant messaging system, so it has features to add/remove/invite friends, create groups, etc., but its core capability is to let you reach another node given its id and communicate with it. And that can be any kind of communication.

So e-mails could transit through Tox nodes.

Toxmail experiment

Toxmail is my little experiment to build a secure e-mail system on top of Tox.

It's a daemon that registers to the Tox network and runs an SMTP service that converts outgoing e-mails to text messages that are sent through Tox. It also converts incoming text messages back into e-mails and stores them in a local Maildir.

Toxmail also runs a simple POP3 server, so it's actually a full stack that can be used through an e-mail client like Thunderbird.

You can just create a new account in Thunderbird, point it to the Toxmail SMTP and POP3 local services, and use it like any other e-mail account.

When you want to send someone an e-mail, you have to know their Tox Id, and use TOXID@tox as the recipient.

For example:

7F9C31FE850E97CEFD4C4591DF93FC757C7C12549DDD55F8EEAECC34FE76C029@tox

When the SMTP daemon sees this, it tries to send the e-mail to that Tox Id. What I am planning to do next is automatic conversion of regular e-mail addresses, using a lookup table the user can maintain: a list of contacts where each entry provides an e-mail address and a Tox Id.
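Here is a minimal sketch of what that conversion could look like. The contacts file format and function names are made up for illustration; this is not what Toxmail actually implements:

import json

def load_contacts(path="contacts.json"):
    # e.g. {"mom@example.com": "7F9C31FE850E97CE...EECC34FE76C029"}
    with open(path) as f:
        return json.load(f)

def to_tox_recipient(address, contacts):
    """Rewrite a regular address into the TOXID@tox form, if we know it."""
    tox_id = contacts.get(address.lower())
    if tox_id is None:
        return None  # unknown contact: fall back to regular routing
    return tox_id + "@tox"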

End-to-end encryption, no intermediaries between the sender and the recipient. Ya!

Caveats & Limitations

For Toxmail to work, it needs to stay connected to the Tox network all the time.

This limitation can be partially solved by adding a retry feature to the SMTP daemon: if the recipient's node is offline, the mail is stored and sent again later.

But for the e-mail to go through, the two nodes have to be online at the same time at some point.

Maybe a good way to solve this would be to have Toxmail run on a Raspberry Pi plugged into the home internet box. That'd make sense actually: run your own little mail server for all your family/friends conversations.

One major problem, though, is what to do with e-mails addressed both to recipients that are part of your Toxmail contact list and to recipients that are not using Toxmail. I guess the best thing to do is to fall back to regular routing in that case, and let the user know.

Anyway, lots of fun playing with this in my spare time.

The prototype is being built here, using Python and the PyTox binding:

https://github.com/tarekziade/toxmail

It has reached a state where you can actually send and receive e-mails :)

I'd love to have feedback on this little project.

Jeff Walden: New mach build feature: build-complete notifications on Linux

Spurred on by gps’s recent mach blogging (and a blogging dry spell to rectify), I thought it’d be worth noting a new mach feature I landed in mozilla-inbound yesterday: build-complete notifications on Linux.

On OS X, mach build spawns a desktop notification when a build completes. It’s handy when the terminal where the build’s running is out of view — often the case given how long builds take. I learned about this feature when stuck on a loaner Mac for a few months due to laptop issues, and I found the notification quite handy. When I returned to Linux, I wanted the same thing there. evilpie had already filed bug 981146 with a patch using DBus notifications, but he didn’t have time to finish it. So I picked it up and did the last 5% to land it. Woo notifications!

(Minor caveat: you won’t get a notification if your build completes in under five minutes. Five minutes is probably too long; some systems build fast enough that you’d never get a notification. gps thinks this should be shorter and ideally configurable. I’m not aware of an existing bug for this.)
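For the curious, the general shape of such a notification fits in a few lines of Python. This is a simplified stand-in that shells out to notify-send rather than using DBus the way the actual patch does, with the five-minute threshold from the caveat above:

import subprocess
import time

MIN_BUILD_SECONDS = 300  # only notify for builds longer than five minutes

def notify_build_complete(elapsed_seconds, success=True):
    if elapsed_seconds < MIN_BUILD_SECONDS:
        return
    summary = "Build complete" if success else "Build failed"
    body = "Finished in %.1f minutes" % (elapsed_seconds / 60.0)
    # notify-send ships with libnotify on most Linux desktops
    subprocess.call(["notify-send", summary, body])

start = time.time()
# ... run the build here ...
notify_build_complete(time.time() - start)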

Florian Quèze: Converting old Mac minis into CentOS Instantbird build slaves

A while ago, I received a few retired Mozilla minis. Today two of them started their new life as CentOS 6 build slaves for Instantbird, which means we now have Linux nightlies again! Our previous Linux build slave, running CentOS 5, was no longer able to build nightlies based on the current mozilla-central code, and that is why we haven't had Linux nightlies since March. We know it's been a long wait, but to help our dear Linux testers forgive us, we started offering 64-bit nightly builds!

For the curious, and for future reference, here are the steps I followed to install these two new build slaves:

Partition table

The Mac minis came with a GPT partition table and an hfs+ partition that we don't want. While the CentOS installer was able to detect them, the grub it installed there didn't work. The solution was to convert the GPT partition table to the older MBR format. To do this, boot into a modern Linux distribution (I used an Ubuntu 13.10 live DVD that I had around), install gdisk (sudo apt-get update && sudo apt-get install gdisk) and use it to edit the disk's partition table:

sudo gdisk /dev/sda
Press 'r' to start recovery/transformation, 'g' to convert from GPT to MBR, 'p' to see the resulting partition table, and finally 'w' to write the changes to disk (instructions initially from here).
Exit gdisk.
Now you can check the current partition table using gparted. At this point I deleted the hfs+ partition.

Installing CentOS

The version of CentOS needed to use the current Mozilla build tools is CentOS 6.2. We previously tried another (slightly newer) version and never got it to work.

Reboot on a CentOS 6.2 livecd (press the 'c' key at startup to force the mac mini to look for a bootable CD).
Follow the instructions to install CentOS on the hard disk.
I customized the partition table a bit (50000MB for /, 2048MB of swap space, and the rest of the disk for /home).

The only non-obvious part of the CentOS install is that the boot loader needs to be installed on the MBR rather than on the partition where the system is installed. When the installer asks where grub should be installed, set it to /dev/sda (the default is /dev/sda2, and that won't boot). Of course I got this wrong in my first attempts.

Installing Mozilla build dependencies

First, install an editor that is usable to you. I typically use emacs, so: sudo yum install emacs

The Mozilla Linux build slaves use a specifically tweaked version of gcc so that the produced binaries have low runtime dependencies, while the compiler still has the build-time feature set of gcc 4.7. If you want to build on something as old as CentOS 6.2, you need this specific compiler.

The good thing is, there's a publicly available yum repository where all the customized Mozilla packages live. To install it, create a file named /etc/yum.repos.d/mozilla.repo and make it contain this:

[mozilla]
name=Mozilla
baseurl=http://puppetagain.pub.build.mozilla.org/data/repos/yum/releng/public/CentOS/6/x86_64/
enabled=1
gpgcheck=0

Adapt the baseurl to finish with i386 or x86_64 depending on whether you are making a 32 bit or 64 bit slave.

After saving this file, you can check that it had the intended effect by running this command to list the packages from the mozilla repository: repoquery -q --repoid=mozilla -a

You want to install the versions of gcc473 and mozilla-python27 that appear in that list.

You also need several other build dependencies. MDN has a page listing them:

yum groupinstall 'Development Tools' 'Development Libraries' 'GNOME Software Development'
yum install mercurial autoconf213 glibc-static libstdc++-static yasm wireless-tools-devel mesa-libGL-devel alsa-lib-devel libXt-devel gstreamer-devel gstreamer-plugins-base-devel pulseaudio-libs-devel

Unfortunately, two dependencies were missing on that list (I've now fixed the page):
yum install gtk2-devel dbus-glib-devel

At this point, the machine should be ready to build Firefox.

Instantbird, because of libpurple, depends on a few more packages:
yum install avahi-glib-devel krb5-devel

And it will be useful to have ccache:
yum install ccache

Installing the buildbot slave

First, install the buildslave command, which unfortunately doesn't come as a yum package, so you need to install easy_install first:

yum install python-setuptools python-devel mpfr
easy_install buildbot-slave

python-devel and mpfr here are build time dependencies of the buildbot-slave package, and not having them installed will cause compiling errors while attempting to install buildbot-slave.

We are now ready to actually install the buildbot slave. First let's create a new user for buildbot:

adduser buildbot
su buildbot
cd /home/buildbot

Then the command to create the local slave is:

buildslave create-slave --umask=022 /home/buildbot/buildslave buildbot.instantbird.org:9989 linux-sN password

The buildbot slave will be significantly more useful if it starts automatically when the OS starts, so let's edit the crontab (crontab -e) to add this entry:
@reboot PATH=/usr/local/bin:/usr/bin:/bin /usr/bin/buildslave start /home/buildbot/buildslave

The reason why the PATH environment variable has to be set here is that the default path doesn't contain /usr/local/bin, but that's where the mozilla-python27 package installs python2.7 (which is required by mach during builds).

One step in the Instantbird builds configured on our buildbot uses hg clean --all, and this requires the purge mercurial extension to be enabled, so let's edit ~buildbot/.hgrc to look like this:
$ cat ~/.hgrc
[extensions]
purge =

Finally, ssh needs to be configured so that successful builds can be uploaded automatically. Copy and adapt ~buildbot/.ssh from an existing working build slave. The files that are needed are id_dsa (the ssh private key) and known_hosts (so that ssh doesn't prompt about the server's fingerprint the first time we upload something).

Here we go, working Instantbird linux build slaves! Figuring out all these details for our first CentOS6 slave took me a few evenings, but doing it again on the second slave was really easy.

Aki Sasaki: on leaving mozilla

Today's my last day at Mozilla. It wasn't an easy decision to move on; this is the best team I've been a part of in my career. And working at a company with such idealistic principles and the capacity to make a difference has been a privilege.

Looking back at the past five-and-three-quarter years:

  • I wrote mozharness, a versatile scripting harness. I strongly believe in its three core concepts: versatile locking config; full logging; modularity.



  • I helped FirefoxOS (b2g) ship, and it's making waves in the industry. Internally, the release processes are well on the path to maturing and stabilizing, and b2g is now riding the trains.

    • Merge day: Releng took over ownership of merge day, and b2g increased its complexity exponentially. I don't think it's quite that bad :) I whittled it down from requiring someone's full mental capacity for three out of every six weeks, to several days of precisely following directions.

    • I rewrote vcs-sync to be more maintainable and robust, and to support gecko-dev and gecko-projects. Being able to support both mercurial and git across many hundreds of repos has become a core part of our development and automation, primarily because of b2g. The best thing you can say about a mission critical piece of infrastructure like this is that you can sleep through the night or take off for the weekend without worrying if it'll break. Or go on vacation for 3 1/2 weeks, far from civilization, without feeling guilty or worried.


  • I helped ship three mobile 1.0's. I learned a ton, and I don't think I could have gotten through it by myself; John and the team helped me through this immensely.

    • On mobile, we went from one or two builds on a branch to full tier-1 support: builds and tests on checkin across all of our integration-, release-, and project- branches. And mobile is riding the trains.

    • We Sim-shipped 5.0 on Firefox desktop and mobile off the same changeset. Firefox 6.0b2, and every release since then, was built off the same automation for desktop and mobile. Those were total team efforts.

    • I will be remembered for the mobile pedalboard. When we talked to other people in the industry, this was more on-device mobile test automation than they had ever seen or heard of; their solutions all revolved around manual QA.




    • And they are like effin bunnies; we later moved on to shoe rack bunnies, rackmounted bunnies, and now more and more emulator-driven bunnies in the cloud, each numbering in the hundreds or more. I've been hands off here for quite a while; the team has really improved things leaps and bounds over my crude initial attempts.


  • I brainstormed next-gen build infrastructure. I started blogging about this back in January 2009, based largely around my previous webapp+db design elsewhere, but I think my LWR posts in Dec 2013 had more of an impact. A lot of those ideas ended up in TaskCluster; mozharness scripts will contain the bulk of the client-side logic. We'll see how it all works when TaskCluster starts taking on a significant percentage of the current buildbot load :)

I will stay a Mozillian, and I'm looking forward to seeing where we can go from here!




Vaibhav Agrawal: Let's have more green trees

I have been working on making jobs ignore intermittent failures for mochitests (bug 1036325) on try servers, to prevent unnecessary oranges and save the resources that go into retriggering those jobs on tbpl. I am glad to announce that this has been achieved for desktop mochitests (Linux, OS X and Windows). It doesn’t work for Android/B2G mochitests yet, but they will be supported in the future. This post explains how it works in detail and is a bit lengthy, so bear with me.

Let's see the patch in action. Here is an example of an almost green try push:

Tbpl Push Log

 Note: one bc1 orange job is because of a leak (Bug 1036328)

In this push the intermittents were suppressed; for example, this log shows an intermittent failure in a mochitest-4 job on Linux:

tbpl1

Even though there was an intermittent failure for this job, the job remains green. We can determine if a job produced an intermittent failure by inspecting the number of tests run for the job on tbpl, which will be much smaller than normal. For example, the above intermittent mochitest-4 job shows “mochitest-plain-chunked: 4696/0/23” as compared to the normal “mochitest-plain-chunked: 16465/0/1954”. Another way is to look in the log of the particular job for “TEST-UNEXPECTED-FAIL”.

<algorithm>

The algorithm behind getting a green job even in the presence of an intermittent failure is this: we recognize the failing test and run it independently 10 times. If the test fails fewer than 3 times out of 10, it is marked as intermittent and we leave it. If it fails 3 or more times out of 10, there is a real problem in the test, and the job turns orange.

</algorithm>
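A sketch of that decision in Python (the names here are illustrative; the real logic lives in the mochitest harness):

def classify(run_test, test_path, runs=10, threshold=3):
    """Re-run a failing test in isolation and decide whether it is intermittent."""
    failures = sum(1 for _ in range(runs) if not run_test(test_path))
    if failures >= threshold:
        return "real failure"  # the job turns orange
    return "intermittent"      # suppressed, the job stays green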

Next, to test the case of a “real” failure, I wrote a unit test and tested it out in the try push:

tbpl4

This job is orange and the log for this push is:

tbpl3

In this summary, a test fails three or more times and hence we get a real failure. The important line in this summary is:

3086 INFO TEST-UNEXPECTED-FAIL | Bisection | Please ignore repeats and look for ‘Bleedthrough’ (if any) at the end of the failure list

This tells us that the bisection procedure has started and we should look out for future “Bleedthrough”, that is, the test causing the failure. And at the last line it prints the “real failure”:

TEST-UNEXPECTED-FAIL | testing/mochitest/tests/Harness_sanity/test_harness_post_bisection.html | Bleedthrough detected, this test is the root cause for many of the above failures

Aha! So we have found a permanently failing test, and it is probably due to some fault in the developer’s patch. Thus, the developers can now focus on the real problem rather than being lost in the intermittent failures.

This patch has landed on mozilla-inbound, and I am working on enabling it as an option on trychooser (more on that in the next blog post). However, if someone wants to try this out now (it works only for desktop mochitests), one can hack in just a single line:

options.bisectChunk = 'default'

such as in this diff inside runtests.py and test it out!

Hopefully, this will also take us a step closer to AutoLand (automatic landing of patches).

Other Bugs Solved for GSoC:

[1028226] – Clean up the code for manifest parsing
[1036374] – Adding a binary search algorithm for bisection of failing tests
[1035811] – Mochitest manifest warnings dumped at start of each robocop test

A big shout out to my mentor (Joel Maher) and other a-team members for helping me in this endeavour!


Just Browsing: Taming Gruntfiles

Every software project needs plumbing.

If you write your code in JavaScript chances are you’re using Grunt. And if your project has been around long enough, chances are your Gruntfile is huge. Even though you write comments and indent properly, the configuration is starting to look unwieldy and is hard to navigate and maintain (see ngbp’s Gruntfile for an example).

Enter load-grunt-config, a Grunt plugin that lets you break up your Gruntfile by task (or task group), allowing for a nice navigable list of small per-task Gruntfiles.

When used, your Grunt config file tree might look like this:

./
 |_ Gruntfile.coffee
 |_ grunt/
    |_ aliases.coffee
    |_ browserify.coffee
    |_ clean.coffee
    |_ copy.coffee
    |_ watch.coffee
    |_ test-group.coffee

watch.coffee, for example, might be:

module.exports = {
  sources:
    files: [
      '<%= buildDir %>/**/*.coffee',
      '<%= buildDir %>/**/*.js'
    ]
    tasks: ['test']

  html:
    files: ['<%= srcDir %>/**/*.html']
    tasks: ['copy:html', 'test']

  css:
    files: ['<%= srcDir %>/**/*.css']
    tasks: ['copy:css', 'test']

  img:
    files: ['<%= srcDir %>/img/**/*.*']
    tasks: ['copy:img', 'test']
}

and aliases.coffee:

module.exports = {
  default: [
    'clean'
    'browserify:libs'
    'browserify:dist'
  ]

  dev: [
    'clean'
    'connect'
    'browserify:libs'
    'browserify:dev'
    'mocha_phantomjs'
    'watch'
  ]
}

By default, load-grunt-config reads the task configurations from the grunt/ folder located on the same level as your Gruntfile. If there’s an aliases.js|coffee|yml file in that directory, load-grunt-config will use it to load your task aliases (which is convenient because one of the problems with long Gruntfiles is that the task aliases are hard to find).

Other files in the grunt/ directory define configurations for a single task (e.g. grunt-contrib-watch) or a group of tasks.

Another nice thing is that load-grunt-config takes care of loading plugins; it reads package.json and automatically calls loadNpmTasks on all the Grunt plugins it finds for you.

To sum it up, for a bigger project, your Gruntfile can get messy. load-grunt-config helps combat that by introducing structure into the build configuration, making it more readable and maintainable.

Happy grunting!

Gregory Szorc: Please run mach mercurial-setup

Hey there, Firefox developer! Do you use Mercurial? Please take the time right now to run mach mercurial-setup from your Firefox clone.

It's been updated to ensure you are running a modern Mercurial version. More awesomely, it has support for a couple of new extensions to make you more productive. I think you'll like what you see.

mach mercurial-setup doesn't change your hgrc without confirmation. So it is safe to run to see what's available. You should consider running it periodically, say once a week or so. I wouldn't be surprised if we add a notification to mach to remind you to do this.

Gervase Markham: Now We Are Five…

10 weeks old, and beautifully formed by God :-) The due date is 26th January 2015.

Maja Frydrychowicz: Database Migrations ("You know nothing, Version Control.")

This is the story of how I rediscovered what version control doesn't do for you. Sure, I understand that git doesn't track what's in my project's local database, but to understand is one thing and to feel in your heart forever is another. In short, learning from mistakes and accidents is the greatest!

So, I've been working on a Django project and as the project acquires new features, the database schema changes here and there. Changing the database from one schema to another and possibly moving data between tables is called a migration. To manage database migrations, we use South, which is sort of integrated into the project's manage.py script. (This is because we're really using playdoh, Mozilla's augmented, specially-configured flavour of Django.)

South is lovely. Whenever you change the model definitions in your Django project, you ask South to generate Python code that defines the corresponding schema migration, which you can customize as needed. We'll call this Python code a migration file. To actually update your database with the schema migration, you feed the migration file to manage.py migrate.
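In day-to-day use that boils down to two commands (myapp is a placeholder app name here):

# Generate a migration file describing the model changes.
$ ./manage.py schemamigration myapp --auto
# Apply it to the local database.
$ ./manage.py migrate myapp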

These migration files are safely stored in your git repository, so your project has a history of database changes that you can replay backward and forward. For example, let's say you're working in a different repository branch on a new feature for which you've changed the database schema a bit. Whenever you switch to the feature branch you must remember to apply your new database migration (migrate forward). Whenever you switch back to master you must remember to migrate backward to the database schema expected by the code in master. Git doesn't know which migration your database should be at. Sometimes I'm distracted and I forget. :(

As always, it gets more interesting when you have project collaborators because they might push changes to migration files and you must pay attention and remember to actually apply these migrations in the right order. We will examine one such scenario in detail.

Adventures with Overlooked Database Migrations

Let's call the actors Sparkles and Rainbows. Sparkles and Rainbows are both contributing to the same project and so they each regularly push or pull from the same "upstream" git repository. However, they each use their own local database for development. As far as the database goes, git is only tracking South migration files. Here is our scenario.

  1. Sparkles pushes Migration Files 1, 2, 3 to upstream and applies these migrations to their local db in that order.
  2. Rainbows pulls Migration Files 1, 2, 3 from upstream and applies them to their local db in that order.

    All is well so far. The trouble is about to start.

  3. Sparkles reverses Migration 3 in their local database (backward migration to Migration 2) and pushes a delete of the Migration 3 file to upstream.
  4. Rainbows pulls from upstream: the Migration 3 file no longer exists at HEAD but it must also be reversed in the local db! Alas, Rainbows does not perform the backward migration. :(
  5. Life goes on and Sparkles now adds Migration Files 4 and 5, applies the migrations locally and pushes the files to upstream.
  6. Rainbows happily pulls Migrations Files 4 and 5 and applies them to their local db.

    Notice that Sparkles' migration history is now 1-2-4-5 while Rainbows' migration history is 1-2-3-4-5, even though 3 is no longer part of the up-to-date project!

At some point Rainbows will encounter Django or South errors, depending on the nature of the migrations, because the database doesn't match the expected schema. No worries, though, it's git, it's South: you can go back in time and fix things.

I was recently in Rainbows' position. I finally noticed that something was wrong with my database when South started refusing to apply the latest migration from upstream, telling me "Sorry! I can't drop table TaskArea, it doesn't exist!"

tasks:0011_auto__del_taskarea__del_field_task_area__add_field_taskkeyword_name
FATAL ERROR - The following SQL query failed: DROP TABLE tasks_taskarea CASCADE;
The error was: (1051, "Unknown table 'tasks_taskarea'")
>snip
KeyError: "The model 'taskarea' from the app 'tasks' is not available in this migration."

In my instance of the Sparkles-Rainbows story, Migration 3 and Migration 5 both drop the TaskArea table; I'm trying to apply Migration 5, and South grumbles in response because I had never reversed Migration 3. As far as South knows, there's no such thing as a TaskArea table.

Let's take a look at my migration history, which is conveniently stored in the database itself:

select migration from south_migrationhistory where app_name="tasks";

The output is shown below. The lines of interest are 0010_auto__del and 0010_auto__chg; I'm trying to apply migration 0011 but I can't, because it's the same migration as 0010_auto__del, which should have been reversed a few commits ago.

+------------------------------------------------------------------------------+
|  migration                                                                   |
+------------------------------------------------------------------------------+
|  0001_initial                                                                |
|  0002_auto__add_feedback                                                     |
|  0003_auto__del_field_task_allow_multiple_finishes                           |
|  0004_auto__add_field_task_is_draft                                          |
|  0005_auto__del_field_feedback_task__del_field_feedback_user__add_field_feed |
|  0006_auto__add_field_task_creator__add_field_taskarea_creator               |
|  0007_auto__add_taskkeyword__add_tasktype__add_taskteam__add_taskproject__ad |
|  0008_task_data                                                              |
|  0009_auto__chg_field_task_team                                              |
|  0010_auto__del_taskarea__del_field_task_area__add_field_taskkeyword_name    |
|  0010_auto__chg_field_taskattempt_user__chg_field_task_creator__chg_field_ta |
+------------------------------------------------------------------------------+

I want to migrate backwards until 0009, but I can't do that directly because the migration file for 0010_auto__del is not part of HEAD anymore, just like Migration 3 in the story of Sparkles and Rainbows, so South doesn't know what to do. However, that migration does exist in a previous commit, so let's go back in time.

I figure out which commit added the migration I need to reverse:

# Display commit log along with names of files affected by each commit. 
# Once in less, I searched for '0010_auto__del' to get to the right commit.
$ git log --name-status | less

With that key information, the following sequence of commands tidies everything up:

# Switch to the commit that added migration 0010_auto__del
$ git checkout e67fe32c
# Migrate backward to a happy migration; I chose 0008 to be safe. 
# ./manage.py migrate [appname] [migration]
$ ./manage.py migrate oneanddone.tasks 0008
$ git checkout master
# Sync the database and migrate all the way forward using the most up-to-date migrations.
$ ./manage.py syncdb && ./manage.py migrate

Mark Finkle: Firefox for Android: Collecting and Using Telemetry

Firefox 31 for Android is the first release where we collect telemetry data on user interactions. We created a simple “event” and “session” system, built on top of the current telemetry system that has been shipping in Firefox for many releases. The existing telemetry system is focused more on the platform features and tracking how various components are behaving in the wild. The new system is really focused on how people are interacting with the application itself.

Collecting Data

The basic system consists of two types of telemetry probes:

  • Events: A telemetry probe triggered when the user takes an action. Examples include tapping a menu, loading a URL, sharing content or saving content for later. An Event is also tagged with a Method (how the Event was triggered) and an optional Extra tag (extra context for the Event).
  • Sessions: A telemetry probe triggered when the application starts a short-lived scope or situation. Examples include showing a Home panel, opening the awesomebar or starting a reading viewer. Each Event is stamped with zero or more Sessions that were active when the Event was triggered.

We add the probes into any part of the application that we want to study, which is most of the application.
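To make that concrete, here is a purely illustrative sketch of the information a single recorded event carries; the field names and values are mine, not the actual format the client sends:

event = {
    "event": "loadurl",           # the action the user took
    "method": "listitem",         # how it was triggered (e.g. tapping a list item)
    "extras": "bookmarks",        # optional extra context
    "sessions": ["homepanel.1"],  # sessions active when the event fired
}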

Visualizing Data

The raw telemetry data is processed into summaries, one for Events and one for Sessions. In order to visualize the telemetry data, we created a simple dashboard (source code). It’s built using a great little library called PivotTable.js, which makes it easy to slice and dice the summary data. The dashboard has several predefined tables so you can start digging into various aspects of the data quickly. You can drag and drop the fields into the column or row headers to reorganize the table. You can also add filters to any of the fields, even those not used in the row/column headers. It’s a pretty slick library.

uitelemetry-screenshot-crop

Acting on Data

Now that we are collecting and studying the data, the goal is to find patterns that are unexpected or might warrant a closer inspection. Here are a few of the discoveries:

Page Reload: Even in our Nightly channel, people seem to be reloading the page quite a bit. Way more than we expected. It’s one of the Top 2 actions. Our current thinking includes several possibilities:

  1. Page gets stuck during a load and a Reload gets it going again
  2. Networking error of some kind, with a “Try again” button on the page. If the button does not solve the problem, a Reload might be attempted.
  3. Weather or some other update-able page where a Reload shows the current information.

We have started projects to explore the first two issues. The third issue might be fine as-is, or maybe we could add a feature to make updating pages easier? You can still see high uses of Reload (reload) on the dashboard.

Remove from Home Pages: The History page, primarily, and the Top Sites page see high uses of Remove (home_remove) to delete browsing information from the Home pages. People do this a lot; again, it’s one of the Top 2 actions. People will do this repeatedly, over and over, clearing the entire list in a manual fashion. Firefox has a Clear History feature, but it must not be very discoverable. We also see people asking for easier ways of clearing history in our feedback, but it wasn’t until we saw the telemetry data that we understood how badly this was needed. This led us to add some features:

  1. Since the History page was the predominant source of the Removes, we added a Clear History button right on the page itself.
  2. We added a way to Clear History when quitting the application. This was a bit tricky since Android doesn’t really promote “Quitting” applications, but if a person wants to enable this feature, we add a Quit menu item to make the action explicit and in their control.
  3. With so many people wanting to clear their browsing history, we assumed they didn’t know that Private Browsing existed. No history is saved when using Private Browsing, so we’re adding some contextual hinting about the feature.

These features are included in Nightly and Aurora versions of Firefox. Telemetry is showing a marked decrease in Remove usage, which is great. We hope to see the trend continue into Beta next week.

External URLs: People open a lot of URLs from external applications, like Twitter, into Firefox. This wasn’t totally unexpected (it’s a common pattern on Android), but the degree to which it happened versus opening the browser directly was somewhat unexpected. Close to 50% of the URLs loaded into Firefox are from external applications. Less so in Nightly, Aurora and Beta, but even those channels are almost 30%. We have started looking into ideas for making the process of opening URLs into Firefox a better experience.

Saving Images: An unexpected discovery was how often people save images from web content (web_save_image). We haven’t spent much time considering this one. We think we are doing the “right thing” with the images as far as Android conventions are concerned, but there might be new features waiting to be implemented here as well.

Take a look at the data. What patterns do you see?

Here is the obligatory UI heatmap, also available from the dashboard:
uitelemetry-heatmap

Gregory Szorc: Repository-Centric Development

I was editing a wiki page yesterday and I think I coined a new term which I'd like to enter the common nomenclature: repository-centric development. The term refers to development/version control workflows that place repositories - not patches - first.

When collaborating on version controlled code with modern tools like Git and Mercurial, you essentially have two choices on how to share version control data: patches or repositories.

Patches have been around since the dawn of version control. Everyone knows how they work: your version control system has a copy of the canonical data and it can export a view of a specific change into what's called a patch. A patch is essentially a diff with extra metadata.

When distributed version control systems came along, they brought with them an alternative to patch-centric development: repository-centric development. You could still exchange patches if you wanted, but distributed version control allowed you to pull changes directly from multiple repositories. You weren't limited to a single master server (that's what the distributed in distributed version control means). You also didn't have to go through an intermediate transport such as email to exchange patches: you communicate directly with a peer repository instance.

Repository-centric development eliminates the middle man required for patch exchange: instead of exchanging derived data, you exchange the actual data, speaking the repository's native language.

One advantage of repository-centric development is it eliminates the problem of patch non-uniformity. Patches come in many different flavors. You have plain diffs. You have diffs with metadata. You have Git style metadata. You have Mercurial style metadata. You can produce patches with various lines of context in the diff. There are different methods for handling binary content. There are different ways to express file adds, removals, and renames. It's all a hot mess. Any system that consumes patches needs to deal with the non-uniformity. Do you think this isn't a problem in the real world? Think again. If you are involved with an open source project that collects patches via email or by uploading patches to a bug tracker, have you ever seen someone accidentally upload a patch in the wrong format? That's patch non-uniformity. New contributors to Firefox do this all the time. I also see it in the Mercurial project. With repository-centric development, patches never enter the picture, so patch non-uniformity is a non-issue. (Don't confuse the superficial formatting of patches with the content, such as an incorrect commit message format.)

Another advantage of repository-centric development is it makes the act of exchanging data easier. Just have two repositories talk to each other. This used to be difficult, but hosting services like GitHub and Bitbucket make this easy. Contrast with patches, which require hooking your version control tool up to wherever those patches are located. The Linux Kernel, like so many other projects, uses email for contributing changes. So now Git, Mercurial, etc all fulfill Zawinski's law. This means your version control tool is talking to your inbox to send and receive code. Firefox development uses Bugzilla to hold patches as attachments. So now your version control tool needs to talk to your issue tracker. (Not the worst idea in the world I will concede.) While, yes, the tools around using email or uploading patches to issue trackers or whatever else you are using to exchange patches exist and can work pretty well, the grim reality is that these tools are all reinventing the wheel of repository exchange and are solving a problem that has already been solved by git push, git fetch, hg pull, hg push, etc. Personally, I would rather hg push to a remote and have tools like issue trackers and mailing lists pull directly from repositories. At least that way they have a direct line into the source of truth and are guaranteed a consistent output format.

Another area where direct exchange is huge is multi-patch commits (branches in Git parlance) or where commit data is fragmented. When pushing patches to email, you need to insert metadata saying which patch comes after which. Then the email import tool needs to reassemble things in the proper order (remember that the typical convention is one email per patch and email can be delivered out of order). Not the most difficult problem in the world to solve. But seriously, it's been solved already by git fetch and hg pull! Things are worse for Bugzilla. There is no bullet-proof way to order patches there. The convention at Mozilla is to add Part N strings to commit messages and have the Bugzilla import tool do a sort (I assume it does that). But what if you have a logical commit series spread across multiple bugs? How do you reassemble everything into a linear series of commits? You don't, sadly. Just today I wanted to apply a somewhat complicated series of patches to the Firefox build system I was asked to review so I could jump into a debugger and see what was going on so I could conduct a more thorough review. There were 4 or 5 patches spread over 3 or 4 bugs. Bugzilla and its patch-centric workflow prevented me from importing the patches. Fortunately, this patch series was pushed to Mozilla's Try server, so I could pull from there. But I haven't always been so fortunate. This limitation means developers have to make sacrifices such as writing fewer, larger patches (this makes code review harder) or involving unrelated parties in the same bug and/or review. In other words, deficient tools are imposing limited workflows. No bueno.

It is a fair criticism to say that not everyone can host a server or that permissions and authorization are hard. Although I think concerns about impact are overblown. If you are a small project, just create a GitHub or Bitbucket account. If you are a larger project, realize that people time is one of your largest expenses and invest in tools like proper and efficient repository hosting (often this can be GitHub) to reduce this waste and keep your developers happier and more efficient.

One of the clearest examples of repository-centric development is GitHub. There are no patches in GitHub. Instead, you git push and git fetch. Want to apply someone else's work? Just add a remote and git fetch! Contrast with first locating patches, hooking up Git to consume them (this part was always confusing to me - do you need to retroactively have them sent to your email inbox so you can import them from there?), and finally actually importing them. Just give me a URL to a repository already. But the benefits of repository-centric development with GitHub don't stop at pushing and pulling. GitHub has built code review functionality into pushes. They call these pull requests. While I have significant issues with GitHub's implementation of pull requests (I need to blog about those some day), I can't deny the utility of the repository-centric workflow and all the benefits around it. Once you switch to GitHub and its repository-centric workflow, you more clearly see how lacking patch-centric development is and quickly lose your desire to go back to the 1990's state-of-the-art methods for software development.

I hope you now know what repository-centric development is and will join me in championing it over patch-based development.

Mozillians reading this will be very happy to learn that work is under way to shift Firefox's development workflow to a more repository-centric world. Stay tuned.

Sean Bolton: Why Do People Join and Stay Part Of a Community (and How to Support Them)

[This post is inspired by notes from a talk by Douglas Atkin (currently at AirBnB) about his work with cults, brands and community.]

We all go through life feeling like we are different. When you find people that are different the same way you are, that’s when you decide to join.

As humans, we each have a unique self narrative: “we tell ourselves a story about who we are, what others are like, how the world works, and therefore how one does (or does not) belong in order to maximize self.” We join a community to become more of ourselves – to exist in a place where we feel we don’t have to self-edit as much to fit in.

A community must have a clear ideology – a set of beliefs about what it stands for – a vision of the world as it should be rather than how it is, that aligns with what we believe. Communities form around certain ways of thinking first, not around products. At Mozilla, this is often called “the web we want” or ‘the web as it should be.’

When joining a community people ask two questions: 1) Are they like me? and 2) Will they like me? The answers to these two fundamental human questions determine whether a person will become and stay part of a community. In designing a community it is important to support potential members in answering these questions – be clear about what you stand for and make people feel welcome. The welcoming portion requires extra work in the beginning to ensure that a new member forms relationships with people in the community. These relationships keep people part of a community. For example, I don’t go to a book club purely for the book, I go for my friends Jake and Michelle. Initially, the idea of a book club attracted me but as I became friends with Jake and Michelle, that friendship continually motivated me to show up. This is important because as the daily challenges of life show up, social bonds become our places of belonging where we can recharge.


Source: Douglas Atkin, The Glue Project

These social ties must be mixed with doing significant stuff together. In designing how community members participate, a very helpful tool is the community commitment curve. This curve describes how a new member can invest in low-barrier, easy tasks that build commitment momentum so the member can perform more challenging tasks and take on more responsibility. For example, you would not ask a new member to spend 12 hours setting up a development environment just to make their first contribution. This ask is too much for a new person because they are still trying to figure out ‘are they like me?’ and ‘will they like me?’ In addition, their sense of contribution momentum has not been built – 12 hours is a lot when your previous task is 0 but 12 is not so much when your previous was 10.

The community commitment curve is a powerful tool for community builders because it forces you to design the small steps new members can take to get involved and shows structure to how members take on more complex tasks/roles – it takes some of the mystery out! As new members invest small amounts of time, their commitment grows, which encourages them to invest larger amounts of time, continually growing both time and commitment, creating a fulfilling experience for the community and the member. I made a template for you to hack your own community commitment curve.

Social ties combined with a well-designed commitment curve, for a clearly defined purpose, are a powerful combination in supporting a community.


Marco Bonardo: Unified Complete coming to Firefox 34

The awesomebar in Firefox Desktop has so far been driven by two autocomplete searches implemented by the Places component:

  1. history: managing switch-to-tab, adaptive and browsing history, bookmarks, keywords and tags
  2. urlinline: managing autoFill results

Moving on, we plan to improve the awesomebar contents, making them even more awesome and personal, but the current architecture complicates things.

Some of the possible improvements suggested include:

  • Better identify searches among the results
  • Allow the user to easily find their favorite search engine
  • Always show the action performed by Enter/Go
  • Separate searches from history
  • Improve the styling to make each part more distinguishable

When working on these changes we don't want to spend time fighting with outdated architecture choices:

  • Having concurrent searches makes it hard to insert new entries and ensure the order is appropriate
  • Having the first popup entry disagree with the autoFill entry makes it hard to guess the expected action
  • There's quite some duplicate code and logic
  • All of this code predates nice stuff like Sqlite.jsm, Task.jsm, Preferences.js
  • The existing code is scary to work with and sometimes obscure

For these reasons, we decided to merge the existing components into a single new component called UnifiedComplete (toolkit/components/places/UnifiedComplete.js), which will take care of both autoFill and popup results. While the component has been rewritten from scratch, we were able to re-use most of the old logic, which was well tested and appreciated. We were also able to retain all of the unit tests, which have also been rewritten to use a single harness (you can find them in toolkit/components/places/tests/unifiedcomplete/).

So, the actual question is: which differences should I expect from this change?

  • The autoFill result will now always agree with the first popup entry. Note, though, that the behavior didn't change: we still autoFill up to the first '/'. This means a new popup entry is inserted as the top match.
  • All initialization is now asynchronous, so the UI should not lag anymore at the first awesomebar search
  • The searches are serialized differently, so responsiveness timing may differ and should usually improve
  • Installed search engines will be suggested along with other matches to improve their discoverability

The component is currently disabled, but I will shortly push a patch to flip the pref that enables it. The preference that controls whether to use the new or old components is browser.urlbar.unifiedcomplete; you can already set it to true in your current Nightly build to enable it.
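If you'd rather not flip it by hand in about:config, the equivalent user.js line (in your profile folder) is:

user_pref("browser.urlbar.unifiedcomplete", true);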

This also means the old components will shortly be deprecated and won't be maintained anymore. That won't happen until we are completely satisfied with the new component, but you should start looking at the new one if you use autocomplete in your project. Regardless, we'll add console warnings at least 2 versions before complete removal.

If you notice anything wrong with the new awesomebar behavior please file a bug in Toolkit/Places and make it block Bug UnifiedComplete so we are notified of it and can improve the handling before we reach the first Release.

Christian Heilmann: [Video+Transcript]: The web is dead? – My talk at TEDx Thessaloniki

Today the good folks at TEDx Thessaloniki released the recording of my talk “The web is dead”.

Christian Heilmann at TEDx

I’ve given you the slides and notes earlier here on this blog but here’s another recap of what I talked about:

  • The excitement for the web is waning – instead apps are the cool thing
  • At first glance, the reason for this is that apps deliver much better on a mobile form factor and have better, higher-fidelity interaction patterns
  • If you scratch the surface of this message though you find a few disturbing points:
    • Everything in apps is a number game – the marketplaces with the most apps win, the apps with the most users are the ones that will get more users as those are the most promoted ones
    • The form factor of an app was not really a technical necessity. Instead it makes software and services a consumable. The full control of the interface and the content of the app lies with the app provider, not the users. On the web you can change the display of content to your needs, you can even translate and have content spoken out for you. In apps, you get what the provider allows you to get
    • The web allowed anyone to be a creator. The barrier to move from reader to writer was incredibly low. In the apps world, it becomes much harder to become a creator of functionality.
    • Content creation is easy in apps, as long as you create the content the app maker wants you to. The question is who the owner of that content is, who is allowed to use it, and whether you have the right to stop app providers from analysing and re-using your content in ways you don’t want them to. Likes and upvotes/downvotes aren’t really content creation. They are easy to do and don’t mean much, but they make sure the app creator has traffic and interaction on their app – something that VCs like to see.
    • Apps are just another form factor to control software for the benefit of the publisher. Much like DVDs are great for publishers because when you scratch one you need to buy a new one, publishers can now let software services become outdated, broken or changed, forcing you to buy a new one instead of enjoying new features cropping up automatically.
    • Apps lack the data interoperability of the web. If you want your app to succeed, you need to keep the users locked into yours and not go off and look at others. That way most apps are created to be highly addictive with constant stimulation and calls to action to do more with them. In essence, the business models of apps these days force developers to create needy, bullying tamagotchi and call them innovation

Transcript



You might have realized I’m not from here and I’m sorry for the translators because I speak fast, but it’s a good idea to know that life is cruel and you have to do a job.

Myself, I’ve been a web developer for 17 years or something like that now. I’ve dedicated my life to the web. I used to be a radio journalist. I used to be a travel agent, but that’s all boring because you saw people today who save people’s lives and change the life of children and stand there with their stick and do all kind of cool stuff.

I’m just this geek in the corner that just wants to catch this camera… how cool is that thing? They also gave me a laser pointer but it didn’t give me kitten. This is just really annoying.

I want to talk today about my wasted life because you heard it this morning already: we have a mobile revolution and you see it in press all over: the web is dead.

You don’t see excitement about it anymore. You don’t see people like, “Go to www awesome whatever .com” Nobody talks like this anymore. They shouldn’t have talked like that in the past as well but nobody does it anymore.

The web is not the cool thing. It’s not: “I did e-commerce yesterday.” Nobody does that, instead it is, “Oh, I’ll put things on my phone.”

This is what killed the web. The mobile form factor just doesn’t lend itself to the web. It’s not fun to type in TEDxThessaloniki.com with your two thumbs, forgetting how many S’s there are in Thessaloniki, and then going to some strange website.

We text each other all the time, but typing in a URL feels icky, it feels unnatural on a phone, so we needed to do something different. That’s why we came up with QR codes – robot barf as I keep calling them – because that didn’t work either. It’s beautiful, isn’t it? You go there with your phone and you start scanning it and then, two and a half minutes later, with only 30% of your battery left, it goes to some URL.

Not a single mobile operating system came out with a QR reader out of the box. It’s worrying, so I realized there had to be a change, and the change happened. The innovation, the new beginning, the new dawn of the internet was the app.

“There’s an app for that,” was the great talk.

“There’s an app for everything. They’re beautiful. Apps are great.” I can explain it to my … well, not my mom but other people’s moms: “There’s an icon, you click on it, you open it and this is the thing and you use that now. There’s no address bar, there’s nothing to understand about domains, HTTP, cookies, all kind of things. You open it, you play with it and they’re beautiful.”

The interaction between the hardware and software with apps is gorgeous. On iOS, it’s beautiful what you can do and you cannot do any of that on the web with iOS because Apple … Well, because you can’t.

Apps are focused. That’s a really good thing about them. The problem with the web was that we’re like little rabbits. We’re running around like kittens with the laser pointer and we’re like, “Oh, there’s 20 tabs open.” Your friend is uploading something, something is downloading in the background, and it’s multi-tasking.

With apps, you do one thing and one thing well. That is good, because the web interfaces that we built over the last years were just like this much content, that much ads and blinking stuff. People don’t want that anymore. They want to do something with an app, and that’s why apps are focused and make sense.

In order not to be unemployed, and to prove my father wrong – he said, “The computer thing will never work out” – I thought it was a good plan to start my own app idea. I’m going to pitch that to you tonight. I know there are a few VC people in the audience. I’m completely buy-able. A few million dollars, I’m okay with that.

When I did my research, scientific research by scientists, I found out that most apps are used in leisure time. They’re not used during work time. You will be hard pushed to find a boss that says, “Wilkins, by lunch break you have to have a new extra level in Candy Crush or you’re fired.” It’s not going to happen at most companies – I don’t know, some startup maybe.

We use them in our free time, and being a public speaker and traveling all the time, I find that people use apps most where they are completely focused and alone, in other words: public toilets. This goes so far that with every application that came out, the time spent in the facilities became longer and longer. With Snake it was like 12 minutes, with Angry Birds about 14, but with Candy Crush and Flappy Bird…

It happens. You sit there and you hear people inside getting a new high score and like, “Yeah, look what I did!” You’re like, “Yeah, look what I want to do.” That’s when I thought, “Why leave that to chance? Why is there no app that actually makes going to the public facilities not a boring biological thing but a social thing?”

I’m proposing the app called “What’s Out.” What’s Out is a local application, much like FourSquare and others, that you can put to good use while you’re actually sitting down, doing things that you know how to do anyway without having to think about them.

You can check in, you can become the mayor, you can send reviews, you can actually check in with your friends and earn badges like “three stalls in a row.” All these things that make social apps social apps – and why not? You can actually link the photo that you took of the food on Instagram to your check-in on What’s Out and that gets shared on the internet.

You can also pay for the full version and it doesn’t get shared on your Facebook account.

You might think I’m a genius, you might think that I have this great idea that nobody had before, but the business model is already in use and has been tested for years successfully in the canine market. The thing is dogs don’t have a thumb so they didn’t tweet about it. They also can’t write so they didn’t put a patent on it so I can do that.

Seriously now though, this is what I hate about apps. They are a hype, they’re no innovation, they’re nothing new. We had software that was doing one thing and one thing well before; we called it Word and Outlook. We called it things that we had to install and then do something with.

The problem with apps is that the business model is all about hype. WhatsApp was not bought because it’s great software. WhatsApp was bought because millions of people use it. It’s because it actually allowed people to send text messages without paying for it.

Everybody now sees this as the new thing. “We got to have an app, you got to have an app.” For an app to be successful, it has to play a massive numbers game. An app needs millions of users continuously. Twitter has to change their business model every few months just to show more and more and more and more numbers.

It doesn’t really matter what the thing does. What the app does is irrelevant as long as it gets enough people addicted to using it continuously. It’s all about the eyeballs and you put content in these apps that advertisers can use that people can sell to other people. You are becoming the product inside a product.

That even goes into marketplaces. I work on Firefox OS and we have a marketplace for the emerging markets where people can build their first app without having to spend money or have a good computer or download a massive SDK, but every time I go to people they ask, “How many apps do you have in the marketplace?” “I don’t know. They’re HTML5 apps, they could be anything.”

“If it’s not a few million, the marketplace isn’t good.”

I go to a baker if they have three good things. I don’t need them to have 500 different rolls, but the marketplaces have to be full. We just go for big numbers. That to me is the problem that we have with apps. I’m not questioning that the mobile web is the coming thing and is the current thing.

The desktop web is dying, it’s in decline, but apps to me are just a marketing model at the moment. They’re bringing the scratchability of CDs, the wearing out of clothes, the dated look of shoes into software. It’s becoming a consumer product that can be outdated and can look boring after a while.
That to me is not innovation. This is not bringing us further in the evolution of technology, because I’ve seen the evolution. I came from radio to the internet. All of a sudden, my voice was heard worldwide and not just in my hometown, without me having to do anything extra.

Will you download a Christian Heilmann app? Probably not. Might you put my name into Google and find millions of things that I put out there over the last 17 years, some of which you like? Probably, and you can do that as well.
For apps to be successful, they have to lock you in.

The interoperability of the internet is what made it so exciting, the things that Tim showed: I can use this thing and then I can do that, and then I use that. Then I click from Wikipedia to YouTube and from YouTube to this, and I translate it if I need to because it’s my language. None of that works in apps unless the app offers that functionality for a certain upgrade of $12.59 or something like that.

To be successful, apps have to be greedy. They have to keep you in themselves and they cannot talk to other apps unless those are other massively successful apps. That to me doesn’t allow me as a publisher to come up with something new. It just means that the big players are getting bigger all the time, there are a few winners out there, the others just go away, and a lot of money has been wasted in the whole process.

In essence, apps are like Tamagotchi. Anybody old enough to remember Tamagotchi? These were little toys for kids who couldn’t afford pets – like in Japan, impossible. These little things were like, “Feed me, play with me, get me a playmate, do me these kind of things, do me this kind of thing.” After a few years, people were like, “Whatever.” Then they were rusting somewhere in a corner collecting dust and nobody cared about them any longer.

Imagine the annoyance that people had with Tamagotchi, with over a hundred apps on your phone. It happens with Android apps, for example: you leave your Android phone for a while and come back to like 600 updates, like, “Oh please, I need a new update because I want to show you more ads.” I don’t even have insight into what the updates do to the functionality of the app, it’s just that I have to download another 12 MB.

If I’m on a contract where I have to pay per megabyte, that’s not fun. How is that innovative? How is that helping me? It’s helping the publisher. We’re making the problem of selling software our problem and we do it just by saying it’s a nicer interface.

Apps are great. Focus on one thing, one thing well, great. The web that we know right now is too complex. We can learn a lot from that one focus thing, but we shouldn’t let ourselves be locked into one environment. You upload pictures to Instagram now, have you read the terms and conditions?

Do you know who owns these pictures? Do you know if this picture could show up next to something that you don’t agree with, like a political party, because they have the right to show it? Nobody cares about them. Nobody reads them.
What Tim showed, the image with the globe with the pictures, that was all from Flickr. Flickr – I was part of that group – licensed everything with Creative Commons. You knew that the data was yours. There’s a button for downloading all your pictures. If you don’t want it anymore, here are your pictures, thank you, we’re gone.

With other services, you get everything for free with ads next to it, and your pictures might end up on something like “free singles in your area” without you having anything to do with it. You don’t have insight. You don’t own the interface. You don’t own the software.

All in all, apps to me are a step back to the time that I replaced with the internet. A time when software came in a consumable format without me knowing what’s going on. In a browser, I can highlight part of the text, I can copy it into your email and send it to you. I can translate it. I can be blind and listen to a website. I can change things around. I can delete parts of it if it’s too much content there. I can use an ad blocker if I don’t like ads.
On apps, I don’t have any of that. I’m just the slave to the machine and I do it because everybody else does it. I’ve got 36,000 followers on Twitter, I don’t know why. I’m just putting things out there, but you see for example, Beyonce has 13.3 million followers on Twitter and she did six updates.

Twitter and other apps give you the idea that you have a social life that you don’t have. We stop having experiences and we talk about experiences instead. You go to concerts and you get a guy with an iPad in front of you filming the band, like, “That’s going to be great sound, and thank you for being in my face. I wanted to see the band, that’s what I came here for.” Your virtual life is doing well, right? Everybody loves you there. You don’t have to talk to real people. That would be boring. Let’s not go back in time. Let’s not go back to where software was there for us to just consume and take in.

I would have loved Word to have more functionality in 1995. I couldn’t get it because there weren’t even add-ons; I couldn’t write any add-on. With the web, I can teach any of you in 20 minutes how to write your first website. An HTML page, an HTML5 app – give me an hour and you’ll learn it.

The technologies are decentralized. They’re open. They’re easy to learn and they’re worldwide. With apps, we go back to just one world that has it. What’s even worse is that we mix software with hardware again. “Oh you want that cool new game. You’re on Android? No, you got to wait seven months. You got to have an iPhone. Wait, do you have the old iPhone? No you got to buy the new one.”
How is that innovation? How is that taking it further? Software and technology are there to enrich our lives, to make them more magical, to be entertaining, to be beautiful. Right now, the way we build apps – the economic model – means that you put your life into apps and they make money with it. Something has gone very, very wrong there. I don’t think it’s innovation, I think it’s just dirty business and making money.

I challenge you all to go out and not upload another picture into an app or not type something into another closed environment. Find a way to put something on the web. This could be a blog software. This could be a comment on a newspaper.

Everything you put on that decentralized, beautiful, linked worldwide network of computers, and television sets, and mobile phones, and wearables, and Commodore 64s that people put their own things in, anything you put there is a little sign, and a little sign can become a ripple, and if more people like it, it becomes a wave. I’m looking forward to surfing the waves that you all generate. Thanks very much.

Jennie Rose HalperinVideo about Open Source and Community

Building and Leveraging an Open Source Developer Community.

This talk by Jade Wang is really great. Thanks to Adam Lofting for turning me onto it!

 

Monica ChewDownload files more safely with Firefox 31


Did you know that the estimated cost of malware is hundreds of billions of dollars per year? Even without data loss or identity theft, the time and annoyance spent dealing with infected machines is a significant cost.

Firefox 31 offers improved malware detection. Firefox has integrated Google’s Safe Browsing API for detecting phishing and malware sites since Firefox 2. In 2012 Google expanded their malware detection to include downloaded files and made it available to other browsers. I am happy to report that improved malware detection has landed in Firefox 31, and will have expanded coverage in Firefox 32.

In preliminary testing, this feature cuts the amount of undetected malware by half. That’s a significant user benefit.

What happens when you download malware? Firefox checks URLs associated with the download against a local Safe Browsing blocklist. If the binary is signed, Firefox checks the verified signature against a local allowlist of known good publishers. If no match is found, Firefox 32 and later queries the Safe Browsing service with download metadata (NB: this happens only on Windows, because signature verification APIs to suppress remote lookups are only available on Windows). In case malware is detected, the Download Manager will block access to the downloaded file and remove it from disk, displaying an error in the Downloads Panel below.
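
To make the order of those checks concrete, here is a rough sketch in Python. This is not Firefox's implementation; the lists, helper names and data structure below are invented purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative stand-ins for the local Safe Browsing blocklist, the local
# allowlist of known good publishers, and the remote reputation lookup.
LOCAL_URL_BLOCKLIST = {"http://malware.example/payload.exe"}
PUBLISHER_ALLOWLIST = {"Known Good Publisher Inc."}

@dataclass
class Download:
    urls: List[str]                   # URLs associated with the download
    signer: Optional[str] = None      # verified publisher, if the binary is signed
    metadata: dict = field(default_factory=dict)

def remote_reputation_is_malware(metadata: dict) -> bool:
    # Stand-in for the Safe Browsing service query made with download
    # metadata (per the post, only performed on Windows, in Firefox 32+).
    return metadata.get("reputation") == "malware"

def classify(download: Download) -> str:
    # 1. Check the download's URLs against the local blocklist.
    if any(url in LOCAL_URL_BLOCKLIST for url in download.urls):
        return "blocked"
    # 2. A signed binary from an allowlisted publisher needs no remote lookup.
    if download.signer in PUBLISHER_ALLOWLIST:
        return "allowed"
    # 3. Otherwise query the remote service; on a malware verdict the
    #    Download Manager blocks access and removes the file from disk.
    return "blocked" if remote_reputation_is_malware(download.metadata) else "allowed"
```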


How can I turn this feature off? This feature respects the existing Safe Browsing preference for malware detection, so if you’ve already turned that off, there’s nothing further to do. Below is a screenshot of the new, beautiful in-content preferences (Preferences > Security) with all Safe Browsing integration turned off. I strongly recommend against turning off malware detection, but if you decide to do so, keep in mind that phishing detection also relies on Safe Browsing.
Many thanks to Gian-Carlo Pascutto and Paolo Amadini for reviews, and the Google Safe Browsing team for helping keep Firefox users safe and secure!

Erik VoldMozilla University

When I was young I only cared about Math, math and math, but saw no support, no interesting future, no jobs (that I cared for), nor any respect for the field. In my last year of high school I started thinking about my future, so I sought out advice and suggestions from the adults that I respected. The only piece of advice that I cared for was from my father, he simply suggested that I start with statistics because there are good jobs available for a person with those skills and if I didn’t like it I would find something else. I had taken an intro class to statistics and it was pretty easy, so I decided to give it a try.

Statistics was easy, and boring, so I tried computer science because I had been making web sites for people on the side, off and on, since I was 16, and it was interesting, especially since the best application of statistics to my mind is computer learning. I enjoyed the comp sci classes most of all, and I took 5 years to get my bachelor’s degree in statistics and computer science. It was a great experience for me.

Slightly before I graduated I started working full time as a web developer. After a couple of years I started tinkering with creating add-ons, because I was spending 8+ hours a day using Firefox and I figured I could make it suit my needs a little more, and maybe others would enjoy my hacks too, so I started making userscripts, Ubiquity commands, jetpacks, and add-ons.

It’s been 5 years now since I started hacking on projects in the Mozilla community, and these last five years have been just as valuable to me as the 5 years that I spent at UBC. I consider this to be my 2nd degree.

Now when I think about how to grow the community, how to educate the masses, how to reward people for their awesome contributions, I can think of no better way than a free Mozilla University.

We have Webmaker today, and I thought it was interesting at first, so I contributed to the best of my ability for the first 2 years, but I see some fundamental issues with it. For instance, how do we measure the success of Webmaker? How do we know that we’ve affected people? How do we know whether or not these people have decided to continue their education? If they decide to continue their Webmaker education, then how do we help them? Finally, do we respect the skills we teach if we do not provide credentials?

I, for one, would like to see Open Badges and Webmaker become Mozilla University, a free, open source, peer-to-peer, distributed, and widely respected place to learn.

I feel that one of the most important parts of my job at Mozilla is to teach, but how many of us are really doing this? Mozilla University could also be a way to measure our progress.

Nick CameronLibHoare - pre- and postconditions in Rust

I wrote a small macro library for writing pre- and postconditions (design by contract style) in Rust. It is called LibHoare (named after the logic and in turn Tony Hoare) and is here (along with installation instructions). It should be easy to use in your Rust programs, especially if you use Cargo. If it isn't, please let me know by filing issues on GitHub.

The syntax is straightforward: you add `#[precond="predicate"]` annotations before a function, where `predicate` is any Rust expression which will evaluate to a bool. You can use any variables which would be in scope where the function is defined and any arguments to the function. Preconditions are checked dynamically before a function is executed, on every call to that function.

You can also write `#[postcond="predicate"]`, which is checked on leaving a function, and `#[invariant="predicate"]`, which is checked before and after. You can write any combination of annotations too. In postconditions you can use the special variable `result` (soon to be renamed to `return`) to access the value returned by the function.

There are also `debug_*` versions of each annotation which are not checked in --ndebug builds.

The biggest limitation at the moment is that you can only write conditions on functions, not methods (even static ones). This is due to a restriction on where any annotation can be placed in the Rust compiler. That should be resolved at some point and then LibHoare should be pretty easy to update.

If you have ideas for improvement, please let me know! Contributions are very welcome.

# Implementation

The implementation of these syntax extensions is fairly simple. Where the old function used to be, we create a new function with the same signature and an empty body. Then we declare the old function inside the new function and call it with all the arguments (generating the list of arguments is the only interesting bit here because arguments in Rust can be arbitrary patterns). We then return the result of that function call as the result of the outer function. Preconditions are just an `assert!` inserted before calling the inner function and postconditions are an `assert!` inserted after the function call and before returning.
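
The same wrapping idea is easy to sketch outside Rust. Below is a rough Python analogy of the technique (not LibHoare itself, which works through Rust syntax extensions): a wrapper keeps the original signature, asserts the precondition, calls the original function, asserts the postcondition on the result, and returns it.

```python
import functools

def contract(precond=None, postcond=None):
    """Design-by-contract wrapper, analogous in spirit to LibHoare's
    expansion: assert before the call, assert on the result afterwards."""
    def decorate(inner):
        @functools.wraps(inner)
        def outer(*args, **kwargs):
            if precond is not None:
                assert precond(*args, **kwargs), "precondition failed"
            result = inner(*args, **kwargs)
            if postcond is not None:
                assert postcond(result), "postcondition failed"
            return result
        return outer
    return decorate

# Example: the argument must be non-negative, and so must the result.
@contract(precond=lambda x: x >= 0, postcond=lambda r: r >= 0)
def square_root(x):
    return x ** 0.5
```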

Andrew OverholtWe held a Mozilla “bootcamp”. You won’t believe how it went!

For a while now a number of Mozillians have been discussing the need for some sort of technical training on Gecko and other Mozilla codebases. A few months ago, Vlad and I and a few others came up with a plan to try out a “bootcamp”-like event. We initially thought we’d have non-core developers learn from more senior developers for 4 days and had a few goals:

  • teach people not developing Mozilla code daily about the development process
  • expose Mozillians to areas with which they’re not familiar
  • foster shared ownership of areas of code and tools
  • teach people where to look in the code when they encounter a bug and to more accurately file a bug (“teach someone how to fish”)

While working towards this we realized that there isn’t as much shared ownership as there could be within Mozilla codebases so we focused on 2 engineering teams teaching other engineers. The JavaScript and Graphics teams agreed to be mentors and we solicited participants from a few paid Mozillians to try this out. We intentionally limited the audience and hand-picked them for this first “beta” since we had no idea how it would go.

The event took place over 4 days in Toronto in early June. We ended up with 5 or 6 mentors (the Graphics team having a strong employee presence in Toronto helped with pulling in experts here and there) and 9 attendees from a variety of engineering teams (Firefox OS, Desktop, and Platform).

The week’s schedule was fairly loose to accommodate questions and make it more of a conversational atmosphere. We planned sessions in an order to give a high level overview followed by deeper dives. We also made sessions about complementary Gecko components happen in a logical order (ex. layout then graphics). You can see details about the schedule we settled upon here: https://etherpad.mozilla.org/bootcamp1plans.

We collaboratively took notes and recorded everything on video. We’re still in the process of creating usable short videos out of the raw feeds we recorded. Text notes were captured on this etherpad which had some real-time clarifications made by people not physically present (Ms2ger and others) which was great.

The week taught us a few things, some obvious, some not so obvious:

  • people really want time for learning. This was noted more than once and positive comments I received made me realize it could have been held in the rain and people would have been happy
  • having a few days set aside for professional development was very much appreciated so paid Mozillians incorporating this into their goals should be encouraged
  • people really want the opportunity to learn from and ask questions of more seasoned Mozilla hackers
  • hosting this in a MozSpace ensured reliable facilities, flexibility in terms of space, and the availability of others to give ad hoc talks and answer questions when necessary. It also allowed others who weren’t official attendees to listen in for a session or two. Having it in the office also let us use the existing video recording setup and let us lean on the ever-amazing Jonathan Lin for audio and video help. I think you could do this outside a MozSpace but you’d need to plan a bit more for A/V and wifi, etc.
  • background noise (HVAC, server fans, etc.) is very frustrating for conversations and audio recording (but we already knew this)
  • this type of event is unsuitable for brand new {employees|contributors} since it’s way too much information. It would be more applicable after someone’s been involved for a while (6 months, 1 year?).

In terms of lessons for the future, a few things come to mind:

  • interactive exercises were very well received (thanks, kats!) and cemented people’s learning as expected
  • we should perhaps define homework to be done in advance and strongly encourage completion of it; videos of previous talks may be good material
  • scheduling around 2 months in advance seemed to be best to balance “I have no idea what I’ll be doing then” and “I’m already busy that week”
  • keeping the ratio of attendees to “instructors” to around 2 or 3 to 1 worked well for interactivity and ensuring the right people were present who could answer questions
  • although very difficult, attempting to schedule around major deadlines is useful (this week didn’t work for a few of the Firefox OS teams)
  • having people wear lapel microphones instead of a hand-held one makes for much better and more natural audio
  • building a schedule, mentors, and attendee list based on common topics of interest would be an interesting experiment instead of the somewhat mixed bag of topics we had this time
  • using whiteboards and live coding/demos instead of “slides” worked very well

Vlad and I think we should do this again. He proposed chaining organizers so each organizer sets one up and helps the next person do it. Are you interested in being the next organizer?

I’m very interested in hearing other people’s thoughts about this so if you have any, leave a comment or email me or find me on IRC or send me a postcard c/o the Toronto office (that would be awesome).

Swarnava SenguptaFlashing Flame Devices with Firefox OS

Doug BelshawMaking the web simple, but not simplistic

A couple of months ago, an experimental feature Google introduced in the ‘Canary’ build of its Chrome browser prompted a flurry of posts in the tech press. The change was to go one step further than displaying an ‘origin chip’ and do away with the URL entirely:

Hidden URL

I have to admit that when I first heard of this I was horrified – I assumed it was being done for the worst of reasons (i.e. driving more traffic to Google search). However, on reflection, I think it’s a nice example of progressive complexity. Clicking on the root name of the site reveals the URL. Otherwise, typing in the omnibox allows you to search the web:

Google Chrome experiment

Progressive complexity is something we should aspire to when designing tools for a wide range of users. It’s demonstrated well by my former Mozilla colleague Rob Hawkes in his work on ViziCities:

Progressive complexity (slides: http://slidesha.re/1kbYyYU)

Using this approach means that those that are used to manipulating URLs are catered for – but the process is simplified for novice users.


Something we forget is that URLs often depend on the file structures of web servers: http://server.com/directory/sub-directory/file.htm. There’s no particular reason why this should be the case.

Pages on Mac OS X saving to iCloud

Google Drive interface

It’s worth noting that both Apple and Google here don’t presuppose you will create folders to organise your documents and digital artefacts. You can do so, or add tags, but it’s just as easy to dump them all in one place and search efficiently. It’s human-centred design.

My guiding principle here from a web literacy point of view is whether simplification and progressive complexity is communicated to users. Is it clear that there’s more to this than what’s presented on the surface? With the examples I’ve given in this post, I feel that they are.


Questions? Comments? I’m @dajbelshaw or you can email me at doug@mozillafoundation.org.

Pete MooreWeekly review 2014-07-23

Highlights

This week I rolled out the l10n changes, after a few more iterations of tweaks / improvements / nice-to-haves. I am coordinating with Hal about when we can cut over from legacy (as this will need his involvement), which depends a little bit on his availability. He has already proactively contacted me to let me know he is quite tied up at the moment, so it is unlikely we’ll be able to engage in roll-out work together for the next couple of weeks, until the hg issues have stabilised and he has completed some work with fubar/bkero and the interns.

I’ve had discussions with Aki about various vcs sync matters (both technical and business relationship-wise) and am confident I am in a good position to lead this going forward.

I also rolled out changes to the email notifications, which unfortunately I had to roll back.

Now that l10n is done (apart from the cutover), the last two parts are gecko.git and gecko-projects, which I anticipate being relatively trouble-free.

After that comes git-hg and git-git support (currently new vcs sync only supports hg-git).

Other

Looking forward to getting involved with release process (https://bugzilla.mozilla.org/show_bug.cgi?id=1042128)

Dave HuntA new home for the gaiatest documentation

The gaiatest python package provides a test framework and runner for testing Gaia (the user interface for Firefox OS). It also provides a handy command line tool and can be used as a dependency from other packages that need to interact with Firefox OS.

Documentation for this package has now been moved to gaiatest.readthedocs.org, which is generated directly from the source code whenever there’s an update. In order to make this more useful we will continue to add documentation to the Python source code. If you’re interested in helping us out please get in touch by leaving a comment, or joining #ateam on irc.mozilla.org and letting us know.

Francesca CiceriAdventures in Mozillaland #3

Yet another update from my internship at Mozilla, as part of the OPW.

A brief one, this time, sorry.

Bugs, Bugs, Bugs, Bacon and Bugs

I've continued with my triaging/verifying work and I feel now pretty confident when working on a bug.
On the other hand, I think I've learned more or less what was to be learned here, so I must think (and ask my mentor) where to go from now on.
Maybe focus on a specific Component?
Or steadily work on a specific channel for both triaging/poking and verifying?
Or try my hand at patches?
Not sure, yet.

Also, I'd like to point out that, while working on bug triaging, the developer's answers on the bug report are really important.
Comments like this help me as a triager to learn something new, and be a better triager for that component.
I do realize that developers cannot always take the time to put basic information in comments on how to better debug their component/product, but trust me: this will make you happy in the long run.
A wiki page with basic information on how to debug problems for your component is also a good idea, as long as that page is easy to find ;).

So, big shout-out for MattN for a very useful comment!

Community

After much delaying, we finally managed to pick a date for the Bug Triage Workshop: it will be on July 25th. The workshop will be an online session focused on what triaging is, why it is important, how to reproduce bugs, and what information to ask the reporter for to make a bug report as complete and useful as possible.
We will do it in two different time slots, to accommodate various timezones, and it will be held on #testday on irc.mozilla.org.
Take a look at the official announcement and subscribe on the event's etherpad!

See you on Friday! :)

Gervase MarkhamFraudulent Passport Price List

This is a list (URL acquired from spam) of prices for fraudulent (but perhaps “genuine” in terms of the materials used, I don’t know) passports, driving licenses and ID cards. It is a fascinating insight into the relative security of the identification systems of a number of countries. Of course, the prices may also factor in the economic value of the passport, but it’s interesting that a Canadian passport is more expensive than a US one. That probably reflects difficulty of obtaining the passport rather than the greater desirability of Canada over the US. (Sorry, Canadians, I know you’d disagree! Still, you can be happy at the competence and lack of corruption in your passport service.)

One interesting thing to note is that one of the joint lowest-price countries, Latvia (€900), is a member of the EU. A Latvian passport allows you to live and work in any EU country, even Germany, which has the most expensive passports (€5200). The right to live anywhere in the EU – yours for only €900…

Also interesting is to sort by passport price and look if the other prices follow the same curve. A discrepancy may indicate particularly weak or strong security. So Russian ID cards are cheaper than one might expect, whereas Belgian ones are more expensive. Austrian and Belgian driver’s licenses also seem to be particularly hard to forge, but the prize there goes to the UK, which has the top-priced spot (€2000). I wonder if that’s related to the fact that the UK doesn’t have ID cards, so the driver’s license often functions as one?

Here is the data in spreadsheet form (ODS), so you can sort and analyse, and just in case the original page disappears…

Sylvestre LedruAuto-comment on the Release Management flags

Implemented in bug 853108 by the bmo team, using the tracking flags will now automatically update the comment field with some templates.
The goal is to reduce back and forth in Bugzilla on bug tracking. We also hope that it is going to improve our response time.

For example, for the tracking requests (tracking-firefoxNN, tracking-firefox-esrNN or blocking-b2g), the user will see the text added into the Bugzilla comment field:

[Tracking Requested - why for this release]:

With this change, we hope to simplify the decision process for the release team.

For the relnotes-* flags:

Release Note Request (optional, but appreciated)
[Why is this notable]:
[Suggested wording]:
[Links (documentation, blog post, etc)]:

This change aims to simplify the process of writing release notes. In some cases, it can be hard for a release manager to translate a bug into a new feature description.

Flags on which this option is enabled are:

  • relnote-firefox
  • relnote-b2g
  • tracking-firefoxNN
  • tracking-firefox-esrNN
  • blocking-b2g

Finally, we reported bug 1041964 to discuss a potential auto-focus on the comment area.

Manish Goregaokar200, and more!

After my last post on my running GitHub streak, I've pretty much continued to contribute to the same projects, so I didn't see much of a point of posting about it again — the fun part about these posts is talking about all the new projects I've started or joined. However, this time an arbitrary base-ten milestone comes rather close to another development on the GitHub side which is way more awesome than a streak; hence the post.

Firstly, a screenshot:

I wish there was more dark green

Now, let's have a look at the commit that made the streak reach 200. That's right, it's a merge commit to Servo — something which is created for the collaborator who merges the pull request[1]. Which is a great segue into the second half of this post:

I now have commit/collaborator access to Servo. :D

It happened around a week back. Ms2ger needed a reviewer, Lars mentioned he wanted to get me more involved, I said I didn't mind reviewing, and in a few minutes I was reviewing a pull request for the first time. A while later I had push access.

This doesn't change my own workflow while contributing to Servo, since everyone still goes through pull requests and reviews. But it gives a much greater sense of belonging to a project. Which is saying something, since Mozilla projects already give one a sense of being "part of the team" rather early on, with the ability to attend meetings, take part in decision-making, and whatnot.

I also now get to review others' code, which is a rather interesting exercise. I haven't done much reviewing before. Pull requests to my own repos don't count much since they're not too frequent and if there are small issues I tend to just merge and fix. I do give feedback for patches on Firefox (mostly for the ones I mentor or if asked on IRC), but in this situation I'm not saying that the code is worthy to be merged; I'm just pointing out any issues and/or saying "Looks good to me".

With Servo, code I review and mark as OK is ready for merging. Which is a far bigger responsibility. I make mistakes (and style blunders) in my own code, so marking someone else's code as mistake free is a bit intimidating at first. Yes, everyone makes mistakes and yet we have code being reviewed properly, but right now I'm new to all this, so I'm allowed a little uncertainty ;) Hopefully in a few weeks I'll be able to review code without overthinking things too much.



In other GitHub-ish news, a freshman of my department submitted a very useful pull request to one of my repos. This makes me happy for multiple reasons: I have a special fondness for student programmers who are not from CS (not that I don't like CS students), being one myself. Such students face an extra set of challenges of finding a community, learning the hard stuff without a professor, and juggling their hobby with normal coursework (though to be fair for most CS students their hobby programming rarely intersects with coursework either).

Additionally, the culture of improving tools that you use is one that should be spread, and it's great that at least one of the new students is a part of this culture. Finally, it means that people use my code enough to want to add more features to it :)

[1] I probably won't count this as part of my streak, and will make more commits later today. Reviewing is hard, but it doesn't exactly take the place of writing actual code, so I may not count merge commits as part of my personal commit streak rules.

Julien VehentOpSec's public mailing list

Mozilla's Operations Security team (OpSec) protects the networks, systems, services and data that power the Mozilla project. The nature of the job forces us to keep a lot of our activity behind closed doors, but we strive to do as much as possible in the open, with projects like MIG, Mozdef, Cipherscan, OpenVPN-Netfilter, Duo-Unix or Audisp-Json.

Opening up security discussions to the community, and to the public, has been a goal for some time, and today we are taking a step forward with the OpSec mailing list at

https://lists.mozilla.org/listinfo/opsec

This mailing list is a public place for discussing general security matters among operational teams, such as public vulnerabilities, security news, best practices discussions and tools. We hope that people from inside and outside of Mozilla will join the discussions, and help us keep Mozilla secure.

So join in, and post some cool stuff!

Mike ShalMoving Automation Steps in Tree

In bug 978211, we're looking to move the logic for the automation build steps from buildbot into mozilla-central. Essentially, we're going to convert the existing buildbot-defined steps into steps defined in the tree.

Rick EyreWebVTT Released in Firefox 31

If you haven't seen the release notes, WebVTT has finally been released in Firefox 31. I'm super excited about this as it's the culmination of a lot of my own and countless others' work over the last two years, especially since it was delayed for releases 29 and 30.

That being said, there are still a few known major bugs with WebVTT in Firefox 31:

  • TextTrackCue enter, exit, and change events do not work yet. I'm working on getting them done now.
  • WebVTT subtitles do not show on audio only elements yet. This will probably be what is tackled after the TextTrackCue events (EDIT: To clarify, I meant audio only video elements).
  • There is no support for any in-band TextTrack WebVTT data yet. If you're a video or audio codec developer that wants in-band WebVTT to work in Firefox, please help out :-).
  • Oh, and there is no UI on the HTML5 Video element to control subtitles... not the most convenient, but it's currently being worked on as well.
I do expect the bugs to start rolling in as well and I'm actually kind of looking forward to that as it will help improve WebVTT in Firefox.

Doug BelshawA list of all 15 Web Literacy 'maker' badges

Things have to be scheduled when there’s so much to ‘ship’ at an organization like Mozilla. So we’re still a couple of weeks away from a landing page for all of the badges at webmaker.org. This post has a link to all of the Web Literacy badges now available.

Web Literacy Map v1.1

We’ve just finished testing the 15 Web Literacy ‘maker’ badges I mentioned in a previous post. Each badge corresponds to the ‘Make’ part of the resources page for the relevant Web Literacy Map competency. We’re not currently badging ‘Discover’ and ‘Teach’. If this sounds confusing, you can see what I mean by viewing, as an example, the resources page for the Privacy competency.

Below is a list of the Web Literacy badges that you can apply for right now. Note that you might want to follow this guidance if and when you do!

EXPLORING

BUILDING

CONNECTING

Why not set yourself a challenge? Can you:

  1. Collect one from each strand?
  2. Collect all the badges within a given strand?
  3. Collect ALL THE BADGES?

Comments? Questions? I’m @dajbelshaw or you can email me: doug@mozillafoundation.org

Luis VillaSlide embedding from Commons

A friend of a friend asked this morning:

I suggested Wikimedia Commons, but it turns out she wanted something like Slideshare’s embedding. So here’s a test of how that works (timely, since soon Wikimanians will be uploading dozens of slide decks!)

This is what happens when you use the default Commons “Use this file on the web -> HTML/BBCode” option on a slide deck pdf:

Wikimedia Legal overview 2014-03-19

Not the worst outcome – clicking gets you to a clickable deck. No controls inline in the embed, though. And importantly nothing to show that it is clickable :/

Compare with the same deck, uploaded to Slideshare:

Some work to be done if we want to encourage people to upload to Commons and share later.

Update: a commenter points me at viewer.js, which conveniently includes a wordpress plugin! The plugin is slightly busted (I had to move some files around to get it to work in my install) but here’s a demo:

Update2: bugs are fixed upstream and in an upcoming 0.5.2 release of the plugin. Hooray!

Mozilla Privacy BlogPrefer:Safe — Making Online Safety Simpler in Firefox

Mozilla believes users have the right to shape the Internet and their own experiences on it. However, there are instances when people seek to shape not only their own experiences, but also those of young users and family members whose … Continue reading

Yunier José Sosa VázquezNew certificate verification, debuggers and features for Firefox

Browsers are an essential part of our lives these days: with them we can browse the Internet, play games, shop, listen to music, watch videos, and so on. A video may be recorded in a language we don't understand, so we need subtitles to follow what is being said. On the web, these files follow the WebVTT format for displaying text tracks via the <track> element, which can be used inside <video> to add subtitles. From now on, Firefox users can enjoy subtitles on web videos, and developers can make use of them.

A new library for verifying the authenticity of certificates and increasing end-user security is now used in this new version of Firefox. mozilla::pkix — as it is called — is more robust, easier to maintain, and improves memory usage. Its code can be reviewed by anyone from here.

To make searching easier, a search field has been added to the New Tab page, from which you can choose the search engine Firefox uses.

Add-ons are one of the Firefox features you like most: with them you can add functionality the browser doesn't ship by default and personalize it your way. For that reason, an add-on debugger has been implemented, giving developers a tool to help them find errors and test their creations.

The Firefox Hub APIs let add-on developers add their own content to the home page — where users find bookmarks, top sites, and so on — and increase users' interaction with it. For more details you can read the documentation on MDN and look at some example add-ons.

Also included are the stability and performance improvements to the APK Factory, which provide a better “native experience” for web apps on Android. Using the APK Factory, Firefox OS app developers can reach millions of Android users without changing a line of code. The APK Factory also ensures that apps run in an up-to-date runtime, so they won't suffer degradation or compatibility problems.

For Android

  • The ability to reorder the existing panels in about:home has been added.
  • Support for more languages: Assamese [as], Bengali [bn-IN], Gujarati [gu-IN], Hindi [hi-IN], Kannada [kn], Maithili [mai], Malayalam [ml], Marathi [mr], Oriya [or], Punjabi [pa-IN], Tamil [ta], Telugu [te].
  • A button to refresh manually on the synced tabs page.

Other news

  • The navigator.sendBeacon preference is enabled by default.
  • .PDF and .OGG files are handled by Firefox if no application is specified to handle them.
  • Improvements to the code editor (Developer Tools).
  • Partial implementation of the OpenType math tables (see the documentation).
  • CSS3 variables implemented and enabled.
  • New Eyedropper tool to pick colors easily (Developer Tools).
  • Editable box model when inspecting HTML elements (Developer Tools).
  • A Canvas debugger (Developer Tools).
  • Many more changes.

If you want to know more, you can read the release notes.

You can get this version from our Downloads section in Spanish and English for Linux, Mac, Windows and Android. Remember that to browse through proxy servers you must set the network.negotiate-auth.allow-insecure-ntlm-v1 preference to true in about:config.

 

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1039198] Field name is still “Bug #” for the Bug Created field
  • [922482] Change all bugs with general@js.bugs assignee to nobody@mozilla.org
  • [713307] Please add FlagTypeComments for tracking/approval flags
  • [1037571] Change Several Bugs at Once Does Not Allow Modification of the QA Whiteboard
  • [1032883] update FlagDefaultRequestee extension to use object hooks
  • [1040580] Bugzilla detects Firefox OS device as Hardware:Other, OS:Other
  • [1026416] “blocks” field is present as empty string when empty, rather than null or []
  • [1040841] Provide good error message when people can’t use form.legal
  • [1041538] A few more “Bugmail filtering” fields need to be excluded from the prefs UI
  • [1041559] “Please wait while your bugs are retrieved” shown above menu header for search error pages
  • [936468] Move OS: Windows 8 Metro to Windows 8.1

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Jennie Rose HalperinNumbers are not enough: Why I will only attend conferences with explicitly enforceable Codes of Conduct and a commitment to accessibility

I recently had a bad experience at a programming workshop where I was the only woman in attendance and eventually had to leave early out of concern for my safety.

Having to repeatedly explain the situation to a group of men who promised me that “they were working on fixing this community” was not only degrading, but also unnecessary. I was shuttled to three separate people, eventually receiving some of my money back approximately a month later (which was all I asked for) along with promises and placating statements about “improvement.”

What happened could have been prevented: each participant signed a “Code of Conduct” that was buried in the payment for the workshop, but there was no method of enforcement and nowhere to turn when issues arose.

At one point while I was attempting to resolve the issue, this community’s Project Manager told me, “Three other women signed up, but they dropped out at the last minute because they had to work. It was very strange and unexpected that you were the only woman.” I felt immediately silenced. The issue is not numbers, but instead inviting people to safe spaces and building supportive structures where people feel welcomed and not marginalized. Increasing the variety of people involved in an event is certainly a step, but it is only part of the picture. I realize now that the board members of this organization were largely embarrassed, but they could have handled my feelings in a way where I didn’t feel like their “future improvements” were silencing my very real current concerns.

Similarly, I’ve been thinking a lot about a conversation I had with some members of the German Python community a few months ago. Someone told me that Codes of Conduct are an American hegemonic device and that introducing the idea of abuse opens the community up for it, particularly in places that do not define “diversity” in the same way as Americans. This was my first exposure to this argument, and it definitely gave me a lot of food for thought, though I adamantly disagree.

In my opinion, the open-source tech community is a multicultural community and organizers and contributors have the responsibility to set their rules for participation. Mainstream Western society, which unfortunately dictates many of the social rules on the Internet, does a bad job teaching people how to interact with one another in a positive and genuine way, and going beyond “be excellent to one another, we’re all friends here!” argument helps us participate in a way in which people feel safe both on and off the Web.

At a session at the Open Knowledge Festival this week, we were discussing accessibility and realized that the Code of Conduct (called a “User Guide”) was not easily located and many participants were probably not aware of its existence. The User Guide is quite good: it points to other codes of conduct, provides clear enforcement, and emphasizes collaboration and diversity.

At the festival, accessibility was not addressed in any kind of cohesive manner: the one gender-neutral bathroom in the huge space was difficult to find, sessions were loud and noisy and often upstairs, making it impossible for anyone with any kind of hearing or mobility issue to participate, and finally, the conference organizers did not inform participants that food would not be free, causing the conference’s ticket price to increase dramatically in an expensive neighborhood in Berlin.

In many ways, I’m conflating two separate issues here (accessibility and behavior of participants at an event.) I would counter that creating a safe space is not only about behavior on the part of the participants, but also on the part of the conference organizers. Thinking about how participants interact at your event not only has to do with how people interact with one another, but also how people interact with the space. A commitment to accessibility and “diversity” hinges upon more than words and takes concerted and long term action. It may mean choosing a smaller venue or limiting the size of the conference, but it’s not impossible, and incredibly important. It also doesn’t have to be expensive!  A small hack that I appreciated at Ada Camp and Open Source Bridge was a quiet chill out room. Being able to escape from the hectic buzz was super appreciated.

Ashe Dryden writes compellingly about the need for better Codes of Conduct and the impetus to not only have events be a reflection of what a community looks like, but also where they want to see them go. As she writes,

I worry about the conferences that are adopting codes of conduct without understanding that their responsibility doesn’t end after copy/pasting it onto their site. Organizers and volunteers need to be trained about how to respond, need to educate themselves about the issues facing marginalized people attending their events, and need to more thoughtfully consider their actions when responding to reports.

Dryden’s  Code of Conduct 101 and FAQ should be required reading for all event organizers and Community Managers. Codes of Conduct remove the grey areas surrounding appropriate and inappropriate behavior and allow groups to set the boundaries for what they want to see happening in their communities. In my opinion, there should not only be a Code of Conduct, but also an accessibility statement that collaboratively outlines what the organizers are doing to make the space accessible and inclusive and addresses and invites concerns and edits.  In her talk at the OKFestival, Penny pointed out that accessibility and inclusion actually makes things better for everyone involved in an event. As she said, “No one wants to sit in a noisy room! For you, it may be annoying, but for me it’s impossible.”

Diversity is not only about getting more women in the room, it is about thinking intersectionally and educating oneself so that all people feel welcome regardless of class, race, physicality, or level of education. I’ve had the remarkable opportunity to go to conferences all over the world this year, and the spaces that have made an obvious effort to think beyond “We have 50% women speakers!” are almost immediately obvious. I felt safe and welcomed at Open Source Bridge and Ada Camp. From food I could actually eat to lanyards that indicated comfort with photography to accessibility lanes, the conference organizers were thoughtful, available, and also kind enough that I could approach them if I needed anything or wanted to talk.

From now on, unless I’m presented a Code of Conduct that is explicit in its enforcement, defines harassment in a comprehensive manner, makes accessibility a priority, and provides trained facilitators to respond to issues, you can count me out of your event.

We can do better in protecting our friends and communities, but change can only begin internally. I am a Community Manager because we get together to educate ourselves and each other as a collaborative community of people from around the world. We should feel safe in the communities of practice that we choose, whether that community is the international Python community, or a local soccer league, or a university. We have the power to change our surroundings and, by extension, our future, but it will take a solid commitment from each of us.

Events will never be perfect, but I believe that at least in this respect, we can come damn close.

Andy McKayThe scourge of settings files

When zamboni started on Django, many years ago, we started writing settings files. These Python files are used by Django to determine site behaviour. It started off optimistically: developers would have a local settings file, the production servers would have a slightly different one, and that's that.

Except then tests get run and tests might need different settings, so a setting file was created for that. Then multiple staging servers got added, so multiple settings file were added for that. Then we split the site into two sites (AMO and Marketplace) so another settings file was added. That meant we could refactor some settings into a base file (my fault).

Each time a new feature was added emails were sent saying how to adjust your settings file locally. Then a settings change log was added that detailed what you needed to change in your settings file.

By now settings files were hundreds of lines long. When new developers joined, they were just sent a copy of someone else's settings file and that got them up and running.

I started thinking about how to make a virtual machine for the Marketplace and started to think about tools for manipulating settings files, like Puppet. That's when I realised I was looking at the problem all wrong: the problem was that there was a need for a settings file at all. That was the root cause that needed to be fixed.

So we set down a new path. Since all the settings for production servers are in source control, the default should be that each setting has a sane value and works out of the box without custom changes. Production servers can then override them.

Looking at the settings files, we found:

  • Values that needed to be the same, but were set differently in three different projects. Those were cleaned up.
  • Values that had to be overridden in the local settings files because they were based upon the value of an earlier setting. Those were moved into a lookup method in code.
  • Values that were set because of security concerns, e.g.: SECRET_KEY. Those were set to default values. A check method was added to startup to raise an error if those were not altered on production servers.
  • Values that were just never used. Those were deleted.

We then looked at the settings and split them into two groups:

  • Settings that are likely to be changed.
  • Settings that might be changed locally for development or testing, but it's pretty unusual.

For the former, we used environment variables instead of settings files. This has the advantage that the environment variables are shared across the projects. We've currently got only five of those, and they cover databases and URLs. They default to sane values, so even those are optional.
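
As a minimal sketch of that pattern (not the actual zamboni/Marketplace settings — the names and defaults below are made up), a settings module can read the handful of environment-specific values from environment variables and fall back to sane defaults:

```python
import os

# Environment-specific values come from environment variables, with sane
# defaults so a fresh checkout runs without any custom settings file.
DATABASE_URL = os.environ.get('MARKETPLACE_DATABASE_URL',
                              'mysql://root:@localhost/marketplace')
SITE_URL = os.environ.get('MARKETPLACE_SITE_URL', 'http://localhost:8000')

# Security-sensitive values get an obviously unsafe development default,
# plus a startup check so production servers are forced to override them.
SECRET_KEY = os.environ.get('SECRET_KEY', 'insecure-dev-key')

def check_production_settings(debug):
    if not debug and SECRET_KEY == 'insecure-dev-key':
        raise RuntimeError('SECRET_KEY must be overridden on production servers')
```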

Finally we re-ordered the settings files into three categories:

  • Environment settings: paths, hostnames, that sort of thing.
  • Django-specific settings, which don't really need documentation.
  • Application-specific settings, which are well commented.

Each of those categories is alphabetized, unless something needs to go before another.

The end result is that the Marketplace can be installed and run without any custom settings files. That's quicker, easier and saner to set up and maintain.

Mozilla WebDev CommunityBeer and Tell July 2014

Once a month, web developers across the Mozilla community get together to share what side projects or cool stuff we’ve been working on in our spare time. This monthly tribute is known as “Beer and Tell”.

There’s a wiki page listing the presenters and links to what they’re showing off and useful side information. There’s also a recording on Air Mozilla of the meeting.

Pomax: RGBAnalyse and nrAPI

This month Pomax had two projects to show. The first, RGBAnalyse, is a JavaScript library that generates histographical data about the colors in an image. Originally created so he could sort ink colors by hue, the library not only generates the data, but also generates images (available as data-uris) of histograms using that data.

The second project Pomax shared was nrAPI, a Node.js-based REST API for a website for learning Japanese: nihongoresources.com. The API lets you search for basic dictionary info, data on specific Kanji, sound effects, and Japanese names. Search input is accepted in English, Romaji, Hiragana, Katakana, or Kanji.

HTML result from nrAPI search for “tiger”.

Bill Walker: Photo Mosaic via CSS Multi-Column

Next, bwalker shared his personal birding photo site, and talked about a new photo layout he’s been playing with that uses a multi-column layout via CSS. The result is an attractive grid of photos of various sizes without awkward gaps, that can also be made responsive without the use of JavaScript. bwalker also shared the blog post that he learned the technique from.

Dean Johnson: MusicDownloader

deanj shared MusicDownloader, a Python-based program for downloading music from SoundCloud, YouTube, Rdio, Pandora, and HypeScript. The secret sauce is in the submodules, which implement the service-specific download code.

Chris Lonnen: Alonzo

Lastly, lonnen shared alonzo, a Scheme interpreter written in Haskell as part of a mad attempt to learn both languages at the same time. It uses Parsec to implement parsing, and so far implements binary numeric operations, basic conditionals, and even rudimentary error checking. The development roughly follows "Write Yourself a Scheme in 48 Hours" by Jonathan Tang.

Sample Runs of alonzo


Thanks to all of our presenters for sharing! If you’re interested in attending or presenting at the next Beer and Tell, subscribe to the dev-webdev mailing list! The wiki page and connection info is shared a few days before each meeting.

See you next month!

David BoswellCommunity Building Stories

One of Mozilla’s goals for 2014 is to grow the number of active contributors by 10x. For the first half of the year, the Community Building team has been supporting other teams as they connect more new contributors to their projects.

Everyone on the team recently blogged about their experience supporting projects. The stories below illustrate different stages in the lifecycle of communities and show how we're helping projects progress through the phases of starting, learning, scaling and then sustaining communities.

We’ve learned a lot from these experiences that will help us complete the goal in the second half of the year. For example, the Geolocation pilot event in Bangalore will be a template for more events that will connect more people to the Location Services project.

Photo courtesy of Galaxy Kadiyala

These are just a few of the stories of community building though. There are many other blog posts to check out and even a video Dia made about how contributors made the Web We Want video available in 29 different languages.

dia_video_poster

I’d love to hear what you’ve been doing to connect with more contributors and to hear about what you’ve learned. Feel free to leave links to your stories in the comments below.


Selena DeckelmannMy recent op-ed published about Portland and startups

I was featured in the Portland Business Journal last Friday! I wrote an essay on startups and the experiences of women in the Portland tech community that have caused me to not refer women into startups for jobs unless the startups are run by fellow PyLadies.

Some excerpts:

It takes more than one CEO’s alleged behavior to cause 56 percent of women to leave technology related fields by mid-career, according to a Harvard Business Review study. That’s twice the rate that men leave the tech industry.

After all, 63 percent of women in STEM industries (science, technology, engineering and math) have experienced sexual harassment, according to a 2008 study.

I can’t recommend that women work for startups in Portland.

Startup funders should keep holding executives accountable. Company cultures grow from the seeds planted by their leaders.

These companies need [qualified HR, skilled with workforce diversity issues], and our tech leaders should demand it.

Read the whole thing at the Portland Business Journal’s site!

Doug BelshawFirefoxOS v2.0 is possibly the easiest-to-use smartphone operating system I’ve experienced

My father’s always been a fairly early adopter of technology. He happily uses a device that wirelessly connects his golf club to his iPhone, for example. My mother? Not so much. Until this weekend she was still sporting an old Nokia feature phone. She kind of wanted a smartphone, but didn’t want the complexity, nor the expense.

FirefoxOS v2.0

Meanwhile, I’ve been using a Geeksphone Peak smartphone recently. It’s not the latest FirefoxOS device (that would be the Flame), but it’s a significant step up from last year’s Geeksphone Keon. I’ve been using the pre-release channel of v2.0 of FirefoxOS, which is a departure from previous versions. Whereas they were similar in look and feel to Android, FirefoxOS v2.0 is different.

Every weekend, we go over to my parents’ house for Sunday lunch. Yesterday, we got talking about technology and I showed my parents my FirefoxOS device. One thing led to another, and (because all of my stuff was backed up) I wiped the phone, transferred my mother’s contacts, and swapped SIM cards. My wife gave her some tips, and then we drove off into the sunset with her Nokia phone.

I don't think I would have felt comfortable leaving her without her old phone to revert back to if I was giving her an Android device or iPhone. There's something so simple yet so powerful about FirefoxOS v2.0; I'm happy to use it myself and hand it over to other, more technophobic people. Yes, I understand that I'm a Mozilla employee fully invested in the mission, but those who know me understand I also don't say positive things about specific technologies without good reason.

According to the roadmap, today's the day that FirefoxOS v2.0 becomes feature complete. There are some really nice features in there too, like WebRTC (imagine Skype/FaceTime, but just using web technologies), edge gestures (something I really missed from my old Nokia N9) and Sync. If you haven't had a chance to try one out, I'd take a look at a FirefoxOS device over the coming months. The operating system is also being tested on tablets and TVs, which means, of course, smart devices without the usual vendor lock-in.


Note: the screenshots are from this post as I forgot to take some before wiping the phone and lending it to my mother!

Michael VerdiLocalized screencasts perform better – go figure

Planet Mozilla viewers – you can watch this video on YouTube.

I created this video about bookmarks for Firefox 29. It’s in English and has closed captions for a few languages, including German. But you can see from this audience retention data that German speakers don’t watch the video as much as English speakers.
english

So, with Kadir‘s help, I made a German version (above). You can see that this video performs much better in German speaking locales. Of course this is what we expected but it’s cool to see how plainly it shows up.
german

Note: Rewinding and re-watching can result in values higher than 100%.

Mike RatcliffeView DOM Events in Firefox Developer Tools

I recently realized that support for inspecting DOM events is very poor in pretty much all developer tools. Having seen Opera Dragonfly's implementation some time ago, I liked the way you could very easily see the scope of an event.

I have used a similar design to add DOM event inspection to Firefox Developer Tools. The event icons are visible in the markup view, and if you click on them you can see information about the event, including its handler. If you click the debugger icon at the top left of the popup, it will take you to that handler in the debugger.

Visual Events in Firefox Developer Tools

Whilst developing this feature I noticed that my workflow changed considerably. I found myself repeatedly looking at the event handlers attached to e.g. a button, clicking the debug icon, adding a breakpoint and clicking the button.

We hope this feature will be useful to you. If you have any idea how we can improve this feature then please let us know via our feedback channel or Bugzilla.

Hub FiguièreGoing to Guadec

For the first time since 2008, when it was in Istanbul, I'm coming to Guadec. This time it is in Strasbourg, France, thanks to a work week scheduled just before in Paris.

I won't present anything this year, but I hope to be able to catch up a bit more with the Gnome community. I was already at the summit last fall, as it was being held in Montréal, but Guadec participation is usually broader and wider.

Hannah KaneMaker Party Engagement: Week 1

Maker Party is here!

Last week Geoffrey sent out the Maker Party Marketing Plan and outlined the four strategies we’re using to engage the community in our annual campaign to teach the web.

Let’s see how we’re doing in each of those four areas.

First, some overall stats:

  • Events: 541 as of this writing (with more to be uploaded soon)
  • Hosts: 217 as of this writing
  • Expected attendees: 25,930 as of this writing
  • Contributors: See Adam’s post
  • Traffic: see the image below, which shows traffic to Webmaker.org during the last month. The big spike at the end of June/early July corresponds to the launch of the snippet. You can see another smaller spike at the launch of Maker Party itself.

Screen Shot 2014-07-20 at 9.29.00 AM

——————————————————————–

Engagement Strategy #1: PARTNER OUTREACH

  • # of confirmed event partners: 200 as of this writing
  • # of confirmed promotional partners: 61 as of this writing

We can see from analytics on the RIDs that Web 2.0 Labs/Learning Revolution and National 4H are the leading partners in terms of generating traffic to Webmaker.org. Links attributed to them generated 140 and 68 sessions, respectively.

Additionally, we saw blog posts from these partners:

——————————————————————–

Engagement Strategy #2: ACTIVE MOZILLIANS

  • Appmaker trainings happened at Cantinas in MozSpaces around the world last Thursday. Waiting to hear a tally of how many Mozillians were engaged through those events.
  • You’ve probably seen the event reports on the Webmaker listserve from Reps and Mentors around the world who are throwing Maker Parties.
  • Hives are in full effect! Lots of event uploads this week from the Hive networks.

Note re: metrics—though there’s evidence of a lot of movement within this strategy, I’m not quite sure how to effectively measure it. Would love to brainstorm with others.

——————————————————————–

Engagement Strategy #3: OWNED MEDIA

  • Snippet: The snippet has generated nearly 300M impressions, ~610K clicks, and ~33,500 email sign-ups to date. We now have a solid set of baseline data for the initial click-through rate, and will shift our focus to learning as much as we can about what happens after the initial click. We are working on creating several variants of the most successful icon/copy combination to avoid snippet fatigue. Captured email addresses will be a part of an engagement email campaign moving forward.
  • Mozilla.org: The Maker Party banner went live on July 16 in EN, FR, DE, and es-ES. So far there’s been no correlative spike in traffic, but it’s too early to draw any conclusions about its effectiveness.

——————————————————————–

Engagement Strategy #4: EARNED MEDIA

Our partners at Turner4D have set up several interviews for Mark and Chris as well as Mozillians in Uganda and Kenya.

Radio

Print

English:

Indonesian:

German:

Spanish:

Importantly, Maker Party was included in a Dear Colleague Letter to 435 members of the U.S. Congress this week.

What are the results of earned media efforts?

None of the press we’ve received so far can be directly correlated with a bump in traffic. Because press, when combined with social media and word of mouth, can increase general brand awareness of Mozilla and Maker Parties, one of the data points we are tracking is traffic coming from searches for brand terms like “webmaker” and “maker party.” The graph below shows a spike in that kind of searching the day before the launch, followed by a return to more average levels.

Screen Shot 2014-07-20 at 10.13.35 AM
SOCIAL:

We do not consider social media to be a key part of our strategy to draw in contributors, but it is a valuable supplement to our other efforts, as it allows us to amplify and respond to the community voice.

You can see a big spike in mentions on this #MakerParty trendline: trendline

See #MakerParty tweets here: https://twitter.com/search?q=%23makerparty&src=typd

Some highlights:

tweet1 tweet2 tweet3 tweet4 tweet5 tweet6 tweet7

That's all for this week. Stay tuned. The analysis will get deeper as we collect more data.


Nigel BabuOKFestival - Berlin, 2014

For the first time, I actually attended the OKFestival. I didn't get to attend many sessions, but the conversations I had were spectacular.

The first surprise was meeting malev. A couple of years ago, we both worked together on the Ubuntu project. Now, he's an Open News Fellow and I work at Open Knowledge. The FOSS world is truly small :-)

I finally got to meet Christie! I'd been hearing about Christie since right before she started at Mozilla, around when I first heard of Open Source Bridge, and later she joined Mozilla Webdev, which I was closely involved with back then.

Georg came over to say hi on Tuesday. When I realized that he was in Uganda for the Mozfest East Africa, I introduced him to Ketty who was also there, leading to an interesting conversation and great connection.

George Sattler works for XVT solutions in Australia and is our partner. He is fairly certain that I don’t sleep ;) We’ve been having conversations over email for quite a long time and it was great to meet George in person.

The Venue

It’s been a long time since I’ve met Adam Green, the editor of Public Domain Review. It was nice catching up with him. Also, Joris! I hadn’t seen him since he moved on from OKF :-)

I hadn't seen Riju since he moved to Delhi, and then I met him in Berlin! Totally random and great running into him :)

The last time I met Kaustubh was at Pranesh's farewell party in October (?). We had a good time catching up.

I also met folks from local groups across OKF. As a part-time sysadmin, I talk to most of the OKF community folks at some point through RT. Additionally, I was going around asking for feedback for the sysadmin team. It was great for me to put a face to names, and I suspect vice versa as well.

The usual suspects, who were of course great to meet, are my lovely teammates. It's nice to meet in person, grab a drink, and talk.

Congratulations again to Bea, Megan, Lou, and Naomi for making OKFestival happen!

Cutting the Cake

Will Kahn-GreeneInput status: July 20th, 2014

Summary

This is the status report for development on Input. I publish a status report to the input-dev mailing list every couple of weeks or so covering what was accomplished and by whom and also what I'm focusing on over the next couple of weeks. I sometimes ruminate on some of my concerns. I think one time I told a joke.

Last status report was at the end of June. This status report covers the last few things we landed in 2014q2 as well as everything we've done so far in 2014q3.

Development

Landed and deployed:

  • 6ecd0ce [bug 1027108] Change default doc theme to mozilla sphinx (Anna Philips)
  • 070f992 [bug 1030526] Add cors; add api feedback get view
  • f6f5bc9 [bug 1030526] Explicitly declare publicly-visible fields
  • c243b5d [bug 1027280] Add GengoHumanTranslater.translate; cleanup
  • 3c9cdd1 [bug 1027280] Add human tests; overhaul Gengo tests
  • ff39543 [bug 1027280] Add support for the Gengo sandbox
  • 258c0b5 [bug 1027280] Add test for get_balance
  • 44dd8e5 [bug 1027280] Implement Gengo Human push_translations
  • 35ae6ec [bug 1027280] Clean up API code
  • a7bf90a [bug 1027280] Finish pull_translations and tests
  • c9db147 [bug 1027286] Gengo translation system status
  • f975f3f [bug 1027291] Implement spot Gengo human translation
  • f864b6b [bug 1027295] Add translation_sync cron job
  • c58fd44 [bug 1032226] en-GB should copyover, too
  • 7480f87 [bug 1032226] Tweak the code to be more defensive
  • 7ac1114 [bug 1032571] CSRF exempt the API
  • ac856eb [bug 1032571] Fix tests to catch csrf issues in the api
  • 74e8e09 [bug 1032967] Handle unsupported language pairs
  • 74a409e [bug 1026503] First pass at vagrantification
  • a7a440f Continued working on docs; ditched hacking howto
  • 44e702b [bug 1018727] Backfill translations
  • 69f9b5b Fix date_end issue
  • e59d4f6 [bug 1033852] Better handle unsupported src languages
  • cc3c4d7 Add list of unsupported languages to admin
  • 32e7434 [bug 1014874] Fix translate ux
  • 672abba [bug 1038774] Hide responses from hidden products
  • e23eca5 Fix a goof in the last commit
  • 6f78e2e [bug 947767] Nix authentication for API stuff
  • a9f2179 Fix response view re: non-existent products
  • e4c7c6c [Bug 1030905] fjord feedback api tests for dates (Ian Kronquist)
  • 0d8e024 [bug 935731] Add FactoryBoy
  • 646156f Minor fixes to the existing API docs
  • f69b58b [bug 1033419] Heartbeat backend prototype
  • f557433 [bug 1033419] Add docs for heartbeat posting

Landed, but not deployed:

  • 7c7009b [bug 935731] Switch all tests to use FactoryBoy
  • 2351fb5 Generate locales so ubuntu will quite whining (Ian Kronquist)

Current head: 7ea9fc3

High-level

At a high level, this is:

  1. Landed automated Gengo human translation and a bunch of minor fixes to make it work more smoothly.
  2. Reworked how we build development environments to use vagrant. This radically simplifies the instructions and should make it a lot easier for contributors to build a development environment. This in turn should lead to more people working on Input.
  3. Fixed a bug where products marked as "hidden" were still showing up in the dashboard.
  4. Implemented a GET API for Input responses. (https://wiki.mozilla.org/Firefox/Input/Dashboards_for_Everyone)
  5. Implemented the backend for the Heartbeat prototype. (https://wiki.mozilla.org/Firefox/Input/Heartbeat)
  6. Also, I'm fleshing out the Input section in the wiki complete with project plans. (https://wiki.mozilla.org/Firefox/Input)

Over the next two weeks

  1. Continue fleshing out project plans for in-progress projects on the wiki.
  2. Gradient sentiment and product picker work.

What I need help with

  1. We have a new system for setting up development environments. I've tested it on Linux. Ian has, too (pretty sure he's using Linux). We could use some help testing it on Windows and Mac OSX.

Do the instructions work on Windows? Do the instructions work on Mac OSX? Are there important things the instructions don't cover? Is there anything confusing?

http://fjord.readthedocs.org/en/latest/getting_started.html

  2. I'm changing the way I'm managing Fjord development. All project plans will be codified in the wiki. A rough roadmap of which projects are on the drawing board, in-progress, completed, etc is also on the wiki. I threw together a structure for all of this that I think is good, but it could use some review.

Do these project plans provide useful information? Are there important questions that need answering that the plans do not answer?

https://wiki.mozilla.org/Firefox/Input

If you're interested in helping, let me know! We hang out on #input on irc.mozilla.org and there's the input-dev mailing list.

I think that covers it!

Soledad Penades“Just turn it into a node module”, and other mantras Edna taught me

Here are the screencast and the write-up of the talk I gave today at One-Shot London NodeConf. As usual I diverged a bit from the initial narrative and forgot to mention a couple of topics I wanted to highlight, but I have had a horrible week and, considering that, this has turned out pretty well!

It was great to meet so many interesting people at the conference and seeing old friends again! Also now I’m quite excited to hack with a few things. Damn (or yay!).

Slides and code for the slides.

Creative technologist


A little more than a year ago I started working at an agency in London as a creative technologist. It was fun to be trying out all the new things and building cool experiences with them. So whatever new APIs were out there, we would come up with some idea to use them. Sometimes that would be because the client wanted to be “first” in using that browser functionality so theirs would be the first website that did X and that would drive a ton of traffic to them, and other times we just wanted to do some cool experiment for our own website so we would master that technology and also attract traffic and get new clients–that’s how it works in advertising.

We mostly used JavaScript on the front-end (except some old remains from the past built in Flash). On the server, we used a mix of Python and node.js. Most of the Python was actually for setting up websites and authentication in Google App Engine which is what they used to host the websites (so the server wouldn’t go down when the site got popular), but for all the real time communication we used Socket.io running in node.js because it was way easier than Python alternatives.

I had been toying with node.js on and off for a while already, but this was when I started to actually believe that I could just use JS for everything… even build scripts!

The worst habits

But I had the worst habits after a life of many other programming languages, and although I wasn’t consciously aware of them, I knew that something didn’t feel quite right. But what? It was hard to say when everyone else in your environment was OK with whatever solution you came up with as long as it worked well enough and you delivered on time.

This was also part of why I joined Mozilla–I was feeling super guilty that we were building things that were not using standards and they would break in the future, or even worse, set bad precedents and habits. I didn’t want to contribute to a web where the cool experiences only worked on one browser, but I wanted to contribute to make the web more powerful and expressive.

My buddy Jen

cult leader

A couple of months later, I was in Mountain View for my onboarding week. I was disoriented and jetlagged, but also very excited and scared! And more people from my future team were in Mountain View for that week. One of them, Jen, sent me a message pretty much just as I checked into the hotel: "hey, ready for dinner?"

I hadn't even met her or spoken to her during the interviews, so I wasn't even sure what she looked like in real life. I washed my face and told myself in the mirror that it was 6pm, NOT 2 AM as my body was trying to protest, and that everything was OK. And went downstairs to meet her.

She was waiting in the parking lot. I had only seen a tiny picture of her with shorter hair in her (very prolific) github profile, but honestly, who waits alone in the parking lot of a hotel in Mountain View at 6pm on a Sunday? We said "hi" and she said: "it's my birthday today, and I want to have a nice dinner". Would I oppose such a thing? No!

Jen’s a philosopher, therefore she philosophises

best github account ever

We walked to Castro Street, spending some time admiring the peculiarities of the businesses on either side of El Camino Real. You might have a chiropractor, a massage parlour, a beauty salon, and a gun seller, all side by side. It was all very amusing. Then we went into a Moroccan restaurant, and we had to prove our age by showing our respective passports, which was amusing again.

So we started talking about the food first… how it didn't taste like anything Moroccan I had had before, and whether the Moroccan food I had had in London or Paris could be considered authentic, or more authentic than this, based on closeness to the "source". But you can only analyse food so much, so we switched to other topics, and at the end of dinner she was telling me how she was going to build a distributed blog system, but she would build it as a module so she could then reuse it for other things because it would be generic enough and… with the wine and the jetlag I was really finding it hard to follow.

She continued discussing and elaborating on these ideas during the week. She was hacking on a module that would be called meatspace. And she excitedly told me: "to empty it you would just call the flush method!". Not being a native English speaker, I didn't understand what 'meatspace' meant initially, so the whole naming seemed disgusting to me. Flushing meat down the drain to empty the stored messages! GROSS.

rtcamera

My first task to get acquainted with the Mozilla process was to port or rewrite one of my existing Android apps to the web. WebRTC support was coming up soon in Firefox, so I opted to build something similar to Nerdstalgia. And I built something, and then I had Jen code review it. I didn’t know it initially, but she had been appointed my “Moz buddy”, to guide me around and introduce me to the usual processes.

She would keep mentioning this notion of “making things into modules” but I still didn’t quite get it. I regularly extracted my code into libraries, right? So why all this insistence on modules?

Modules

Intrigued (or sparked) by her insistence, I started extracting modules out of rtcamera. The first one was the Animated_GIF library, and then gumHelper. This was quite providential because a while later she was exploring this idea of a distributed multiuser chat that could use your webcam to capture your facial expression, and because we had these libraries handy, adding them to the stack was very easy. You might, or might not, have heard of a thing called Meatspace Chat.

Frida is my muse
Frida is one of Jen’s cats. This is Frida after seeing comma-first JS, according to Potch.

Something that really helped me "get the node way" was her advice on how to pass parameters and callbacks. This was one of the reasons my node.js code didn't feel 'right': I was using my own ad hoc style, which was the result of having programmed in many languages without being proficient in node.js. I wasn't using its idioms, so my code felt weird when used alongside other people's code, even built-in modules such as fs.

She insisted a lot on using a standard “callback signature” — the function(err, result) style which honestly drove me a bit nuts at the beginning. But if you’re using the same style throughout the code you can exchange node modules or even send the callback to another function, and it’s easier than if you have different signatures on each library.
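
To illustrate what that convention looks like in practice (the file name and the loadConfig helper below are made up for illustration, not taken from any of the projects mentioned):

    // Error-first callbacks: the error always travels in the first argument.
    var fs = require('fs');

    function loadConfig(path, callback) {
      fs.readFile(path, 'utf8', function (err, contents) {
        if (err) {
          return callback(err);                 // failure: pass the error, nothing else
        }
        callback(null, JSON.parse(contents));   // success: null error, then the result
      });
    }

    loadConfig('./config.json', function (err, config) {
      if (err) {
        return console.error('could not load config:', err);
      }
      console.log('loaded', config);
    });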

Simplify

Another of her lessons was that if you were trapped in callback hell, maybe you were doing it conceptually wrong. Maybe you should simplify your code so you make calls in a different way. I am not totally sure yet which I like most, promises or just plain callbacks, but I see her point, and oftentimes I would bring a Promises library into my project, then after refactoring the code to be suitable for promises I would find that I didn't really need them at all.

Likewise for user interfaces–most of the time we agonise over how pretty something has to look but the fact is that the site might not provide any value to a user and so they leave. We should focus on the experience first, and then maybe we can make the things prettier.

npm

Another important lesson: it’s totally OK to use other people’s modules! Believe it or not, my initial node code almost didn’t use anyone’s modules, and if I used external code that was because I downloaded the code and copied it to the src folder AND loaded it with a local require path. npm? What was that thing?

Funny fact: I was watching RealtimeConf’s live stream because Jen was doing a talk on all the experiments she had been working on and was going to present Meatspace chat for the first time, and so I stayed for a while. And then I learnt a nice lesson not from her directly but from Max Ogden on his RealtimeConf talk: you don’t need to care about the code style in a node module, as long as it works. If it doesn’t, either you replace that module with another one, or you fix it. And again, if you’re using the same signature, this is easier to accomplish–you’re just replacing “boxes”.

Having tests is incredibly useful for this. Jen often writes the module tests first and then the module code itself–so she defines the API that way and sees if it feels natural and intuitive.

At this point there's no UI code yet, just tests that make sure the logic works. If you can run the same test input data through two different modules, you can make sure they do what they are supposed to do, and can also compare their performance or whatever it is that makes you want to switch modules. This again is way easier if your function signatures are "standard node.js style".

Browserify

I heard about this one actually the same week I started in Mozilla. But I was unable to appreciate its awesomeness–I was just used to applying Google Closure compiler to my concatenated code and calling it a day. I either concatenated the code with a nice

cat *.js > all.js

or maybe I built Python scripts that would read certain code files and join them together before either invoking a local version of the Closure Compiler (which was Java) or sending the uncompressed code to the Google Closure service, hopefully getting it back if there weren't any errors.

But I quickly forgot about it. Some time later, I was looking into building a somewhat complex thing for my JSConf.EU project, and somehow Jen reminded me about it.

This project was a fun one, because I was using everything on it: server-side node with Express.js serving the slides, which were advanced according to the music player with virtual Web Audio-based instruments that was running in the browser, plus I had Socket.io sending data to and from a hardware MIDI device through OSC. So there was lots of data to pass back and forth and lots of modules to orchestrate, and including script tags in the browser wasn't going to work well if I wanted to stay sane. So all the front-end code was bundled with Browserify.
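
Roughly, that workflow looks like this (the file and module names are invented for illustration, not taken from the actual project):

    // main.js -- browser entry point, written in the same Node style as the server.
    var io = require('socket.io-client');       // an npm module
    var sequencer = require('./lib/sequencer'); // a hypothetical local module

    var socket = io();
    socket.on('midi', function (message) {
      sequencer.play(message);
    });

    // Bundle everything require() pulls in into a single file:
    //   browserify main.js -o bundle.js
    // and load bundle.js from one <script> tag instead of many.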

Another favourite anecdote in this project is that I in fact extracted several modules out of it, with tests and all, that I then reused in other projects. So I was taking advantage of my own work later on, and I like to think that when this happens, more people might find it useful too.

Multiplatform

Finally, and this is not a thing that only Jen taught me, one of the reasons we like node a lot at Mozilla is that it makes it so much easier to write code that works everywhere. And by that I mean different platforms. As long as node can run on that system, the code can run.

This is very important because most of the time developers assume that everyone else is running the same system they are developing on, and in rich countries this often means the assumption that people use Macs. But that's not the case everywhere, and certainly not in poorer countries. They use Windows or Linux, and setting up a toolchain with a working make tool to run Makefiles is either hard or not for the faint of heart.

So in this context, distributing build scripts written for node.js is way more democratic and helps us get our code to more people than if we used Make or Bash scripts.

And here comes one of my favourite stories–when one of the meatspacers sent me a PR to improve the build system of one of the libraries I had extracted and make it use node.js with uglify instead of the bash script I was using. That simple gesture enabled all the Windows developers to contribute to my module!
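
A build script along those lines might look something like this; this is only a sketch assuming uglify-js 3's minify() API, and the file names are made up rather than taken from the actual library:

    // build.js -- run with `node build.js` on any platform where Node runs.
    var fs = require('fs');
    var UglifyJS = require('uglify-js');

    var source = fs.readFileSync('src/library.js', 'utf8');
    var result = UglifyJS.minify(source);

    if (result.error) {
      console.error(result.error);
      process.exit(1);
    }

    fs.writeFileSync('dist/library.min.js', result.code);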

Conclusions

  • node modularity is awesome, but it takes time to 'get it'. It's OK not to get things on the first try.
  • If you can find a mentor, it will help you ‘get it’ faster.
  • Otherwise maybe hang out on the proper channels (irc, user groups, blogs, confs), study other people's code and BE A SPONGE (a nodesponge?)
  • Don’t be afraid to experiment but also use the safety harness: tests!
  • And don’t be afraid to publish your code – maybe someone else will find it useful OR give you advice to improve it!


Adam Lofting2014 Contributor Goals: Half-time check-in

We’re a little over halfway through the year now, and our dashboard is now good enough to tell us how we’re doing.

TL;DR:

  • The existing trend lines won’t get us to our 2014 goals
    • but knowing this is helpful
    • and getting there is possible
  • Ask less: How do we count our contributors?
  • Ask more: What are we doing to grow the contributor community? And, are we on track?

Changing the question

Our dashboard now needs to move from being a project to being a tool that helps us do better. After all, Mozilla’s unique strength is that we’re a community of contributors and this dashboard, and the 2014 contributor goal, exist to help us focus our workflows, decisions and investments in ways that empower the community. Not just for the fun of counting things.

The first half of the year focused us on the question “How do we count contributors?”. By and large, this has now been answered.

We need to switch our focus to:

  1. Are we on track?
  2. What are we doing to grow the contributor community?

Then repeating these two questions regularly throughout the year, and adjusting our strategy as we go.

Are we on track?

Wearing my cold-dispassionate-metrics hat, and not my “I know how hard you’re all working already” hat, I have to say no (or, not yet).

I’m going to look at this team by team and then look at the All Mozilla Foundation view at the end.

Your task, for each graph below, is to take an imaginary marker pen and draw the line for the rest of the year based on the data you can see to date. And only on the data you can see to date.

  • What does your trend line look like?
  • Is it going to cross the dotted target line in 2014?

OpenNews

Screen Shot 2014-07-18 at 19.48.44

Based on the data to-date, I’d draw a flat line here. Although there are new contributors joining pretty regularly, the overall trend is flat. In marketing terms there is ‘churn’; not a nice term, but a useful one to talk about the data. To use other crass marketing terms, ‘retention’ is as important as ‘acquisition’ in changing the shape of this graph.

Science Lab

Screen Shot 2014-07-18 at 19.49.55

Dispassionately here, I’d have to draw a trend line that’s pointing slightly down. One thing to note in this view is that the Science Lab team have good historic data, so what we’re seeing here is the result of the size of the community in early 2013, and some drop-off from those people.

Appmaker

Screen Shot 2014-07-18 at 19.50.57

This graph is closest to what we want to see generally, i.e. pointing up. But I’ll caveat that with a couple of points. First, taking the imaginary marker pen, this isn’t going to cross the 2014 target line at the current rate. Second, unlike the Science Lab and OpenNews data above, much of this Appmaker counting is new. And when you count things for the first time, a 12 month rolling active total has a cumulative effect in the first year, which increases the appearance of growth, but might not be a long term trend. This is because Appmaker community churn won’t be a visible thing until next year when people could first drop out of the twelve month active time-frame.

Webmaker

Screen Shot 2014-07-18 at 19.51.47

This graph is the hardest to extend with our imaginary marker pen, especially with the positive incline we can see as Maker Party kicks off. The Webmaker plan expects much of the contributor community growth to come from the Maker Party campaign, so a steady incline was not the expectation across the year. But, we can still play with the imaginary marker pen.

I’d do the following exercise: In the first six months, active contributors grew by ~800 (~130 per month), so assuming that’s a general trend (big assumption) and you work back from 10k in December you would need to be at ~9,500 by the end of September. Mark a point at 9,500 contributors above the October tick and look at the angle of growth required throughout Maker Party to get there. That’s not impossible, but it’s a big challenge and I don’t have any historic data to make an informed call here.

Note: the Appmaker/Webmaker separation here is a legacy thing from the beginning of the year when we started this project. The de-duped datastore we’re working on next will allow us to graph: Webmaker Total > Webmaker Tools > Appmaker as separate graphs with separate goals, but which get de-duped and roll-up into the total numbers above, and in turn roll-up into the Mozilla wide total at areweamillionyet.org – this will better reflect the actual overlaps.

Metrics

[ 0 contributors ]

The MoFo metrics team currently has zero active volunteer contributors, and based on the data available to date is trending absolutely flat. Action is required here, or this isn’t going to change. I also need to set a target. Growing 0 by 10X doesn’t really work. So I’ll aim for 10 volunteer contributors in 2014.

All Mozilla Foundation

Screen Shot 2014-07-18 at 19.52.40

Here we’re adding up the other graphs and also adding in ~900 people who contributed to MozFest in October 2013. That MozFest number isn’t counted towards a particular team and simply lifts the total for the year. There is no trend for the MozFest data because all the activity happened at once, but if there wasn’t a MozFest this year (don’t worry, there is!) in October the total line would drop by 900 in a single week. Beyond that, the shape of this line is the cumulative result of the team graphs above.

In Q3, we'll be able to de-dupe this combined number as there are certainly contributors working across MoFo teams. In a good way, our total will be less than the sum of our parts.

Where do we go from here?

First, don’t panic. Influencing these trend lines is not like trying to shift a nation’s voting trends in the next election. Much of this is directly under our control, or if not ‘control’, then it’s something we can strongly influence. So long as we work on it.

Next, it’s important to note that this is the first time we’ve been able to see these trends, and the first time we can measure the impact of decisions we make around community building. Growing a community beyond a certain scale is not a passive thing. I’ve found David Boswell’s use of the term ‘intentional’ community building really helpful here. And much more tasteful than my marketing vocabulary!

These graphs show where we’re heading based on what we’re currently doing, and until now we didn’t know if we were doing well, or even improving at all. We didn’t have any feedback mechanism on decisions we’d make relating to community growth. Now we do.

Trend setting

Here are some initial steps that can help with the ‘measuring’ part of this community building task.

Going back to the marker pen exercise, take another imaginary color and rather than extrapolate the current trend, draw a positive line that gets you to your target by the end of the year. This doesn’t have to be a straight line; allow your planned activity to shape the growth you want to see. Then ask:

  • Where do you need to be in Aug, Sep, Oct, Nov, Dec?
  • How are you going to reach each of these smaller steps?

Schedule a regular check-in that focuses on growing your contributor community and check your dashboard:

  • Are your current actions getting you to your goals?
  • What are the next actions you’re going to take?

The first rule of fundraising is ‘Ask for money’. People often overlook this. By the same measure, are you asking for contributions?

  • How many people are you asking this week or month to get involved?
  • What percentage of them do you expect to say yes and do something?

Multiply those numbers together and see if that prediction can get you to your next step towards your goal.
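
For example, with entirely made-up numbers: if you ask 200 people this month and expect roughly 10% of them to say yes and do something, that's about 20 new active contributors, which you can compare directly against the step you marked for that month.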

Asking these questions alone won’t get us to our goals, but it helps us to know if our current approach has the capacity to get there. If it doesn’t we need to adjust the approach.

Those are just the numbers

I could probably wrap up this check-in from a metrics point of view here, but this is not a numbers game. The Total Active Contributor number is a tool to help us understand scale beyond the face-to-face relationships we can store in our personal memories.

We’re lucky at Mozilla that so many people already care about the mission and want to get involved, but sitting and waiting for contributors to show up is not going to get us to our goals in 2014. Community building is an intentional act.

Here’s to setting new trends.

Kevin NgoPoker Sessions #19 to #27 - Downswing

Just keep Hulking through the drops.

Since returning from a three-week personal/work/family trip in Florida, things have not gone too hot. Busted out of a $5K (-$80), out of a freeroll (-$60), a $3K (-$90), a couple of small $300s (-$90), a couple of $1500s (-$100), and a couple more $5Ks (-$160). That totals for a -$480 dip. Though I try not to be results-oriented.

The first couple of tournaments were rust. I chalk the rest up to "that's tournament poker". MTTs are naturally swingy, despite playing pretty solid. Most bust-outs were failed steals in the higher all-in-or-fold blind levels; a couple were suckouts. But I won't recite every bust-out hand.

Though I have been doing pretty solid live, I have been getting undisciplined in my online poker play. It's time to hit the books and tighten up. Harrington has a solid guide for preflop play that I need to freshen up on.

After doing some bookkeeping, my poker bankroll after 27 sessions is +$3272.

Sessions Conclusions

  • Went Well: improving on hand-reading, taking less marginal lines, super patience
  • Mistakes: some loose push-fold play, thinking limpers always have marginal hands
  • Get Better At: studying on late push-fold play, whether it needs to tighten up
  • Profit: -$480

Kim MoirReminder: Release Engineering Special Issue submission deadline is August 1, 2014

Just a friendly reminder that the deadline for the Release Engineering Special Issue is August 1, 2014. If you have any questions about the submission process or a topic that you'd like to write about, the guest editors, including myself, are happy to help you!

Kim MoirMozilla pushes - June 2014

Here's June 2014's analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a JSON file.

Trends
This was another record-breaking month with a total of 12534 pushes. As a note of interest, this is over double the number of pushes we had in June 2013. So big kudos to everyone who helped us scale our infrastructure and tooling. (Actually, we had 6,433 pushes in April 2013, so against that month we're just under double; June 2013 was a bit of a dip. But still impressive :-)

Highlights
  • 12534 pushes
    • new record
  • 418 pushes/day (average)
    • new record
  • Highest number of pushes/day: 662 pushes on June 5, 2014
    • new record
  • Highest pushes/hour (average): 23.17
    • new record

General Remarks
The introduction of Gaia-try in April has been very popular and comprised around 30% of pushes in June compared to 29% last month.
The Try branch itself consisted of around 38% of pushes.
The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 21% of all the pushes, compared to 22% in the previous month.

Records
June 2014 was the month with the most pushes (12534 pushes)
June 2014 has the highest pushes/day average with 418 pushes/day
June 2014 has the highest "pushes-per-hour" average with 23.17 pushes/hour
June 4th, 2014 had the highest number of pushes in one day with 662 pushes





Mark SurmanQuick thoughts from Kenya

Going anywhere in Africa always energizes me. It surprises me. Challenges my assumptions. Gives me new ideas. And makes me smile. The week I just spent in Nairobi did all these things.

Airtel top up agent in Nairobi

The main goal of my trip was to talk to people about the local content and simple appmaking work Mozilla is doing. I spent an evening talking with Mozilla community members, a day and a bit with people at Equity Bank and a bunch of time with people from iHub. Here are three of the many thoughts I had while reflecting on the flight home:

Microbusiness is our biggest opportunity for AppMaker

I talked to A LOT of people about the idea of non-techie smartphone users being able to make their own apps.

My main question was: who would want to make their own app rather than just use Facebook? Most of the good answers had to do with someone running a very small business. A person selling juice to office workers who wastes a lot of travel time taking orders. An up-and-coming musician who wants a way to pre-sell tickets to loyal fans using mobile money. A chicken farmer outside Nairobi who is always on the phone with the hotels she sells to (pic below, met her and her husband while on a trip with Equity Bank folks). The common thread: simple-to-make-and-remix apps could be very useful to very small real-world businesses that would benefit from better communications, record keeping and transaction processing via mobile phone.

IMG_20140717_085731~2

Our main priority with AppMaker (or whatever we call it) right now is to get a first cut at on-device authoring out there. In the background, we also really need to be pushing on use cases like these — and the kind of app templates that would enable them. Some people at the iHub in Nairobi have offered to help with prototyping template apps specific to Kenya over the next few months, which will help with figuring this out.

Even online is offline in much of Africa

As I was reminded at MozFest East Africa, even online is offline in much of Africa (and many other parts of the world). In the city, data for high-bandwidth applications like media streaming — or running a Webmaker workshop — is expensive. And, outside the city, huge areas have connections that are spotty or non-existent.

BRCK-in-use

It was great to meet the BRCK people who are building products to address issues like this. Specifically: BRCK is a ruggedized wifi router with a SIM card, useful I/O ports and local storage. Brainstorming with Juliana and Erik from iHub, it quickly became clear that it could be useful for things like Webmaker workshops in places where connectivity is expensive, slow or even non-existent. If you popped a Raspberry Pi on the side, you might even be able to create a working version of Webmaker tools like Thimble and Appmaker that people could use locally — with published web pages and apps trickling back or syncing once the BRCK had a connection. The Kenyan Mozillians I talked to were very excited about this idea. Worth exploring.

People buy brands

During a dinner with local Mozillians, a question was raised: ‘what will it take for Firefox OS to succeed in Kenya?’ A debate ensued. “Price,” said one person, “you can’t get a $30 smartphone like the one Mozilla is going to sell.” “Yes you can!”, said another. “But those are China phones,” said someone else. “People want real phones backed by a real brand. If people believe Firefox phones are authentic, they will buy them.”

IMG_20140717_103451~4

Essentially, they were talking about the tension between brand / authenticity / price in commodity markets like smartphones. The contention was: young Kenyans are aspiring to move up in the world. An affordable phone backed by a global brand like Mozilla stands for this. Of course, we know this. But it's a good reminder from the people who care most about Mozilla (our community, pic below of Mozillians from Kenya) that the Firefox brand really needs to shine through on our devices and in the product experience as we roll out phones in more parts of the world.

Mozillians from Nairobi

I've got a lot more than this rumbling around in my head, of course. My week in Uganda and Kenya really has my mind spinning. In a good way. It's all a good reminder that the diverse perspectives of our community and our partners are one of our greatest strengths. As an organization, we need to tap into that even more than we already do. I truly believe that the big brain that is the Mozilla Community will be a key factor in winning the next round in our efforts to stand up for the web.


Filed under: mozilla, webmakers

Doug BelshawHOWTO: apply for Webmaker badges

super-mentor-02.jpg

We’re in full swing with Webmaker contribution and web literacy badges at the moment, so I wanted to take a moment to give some advice to people thinking of applying. We already have a couple of pages on the Webmaker blog for the Mentor and Super Mentor badges:

However, I wanted to give some general advice and fill in the gaps.

First of all, it's worth sharing the guidance page for the people reviewing your application. In the case of a Webmaker Super Mentor badge, this will be a Mozilla paid contributor (i.e. staff member), but for all other badges it may be a community member who has unlocked the necessary privileges.

To be clear:


The best applications we've seen for the Webmaker badges so far take the time to explain how the applicant meets each one of the criteria on the badge detail page.

For example, this was Stefan Bohacek’s successful application for the Sharing ‘maker’ badge:

1) Sharing a resource using an appropriate tool and format for the audience: I wrote tutorials for people learning to make websites and web apps and shared them on my blog: http://blog.fourtonfish.com/tagged/tutorial. These also exist as a teaching kit on Webmaker – see my blogpost with a link here: http://blog.fourtonfish.com/post/89157427285/mozilla-webmaker-featured-teaching-kit. Furthermore I created resources for web developers such as http://simplesharingbuttons.com (also see: http://badges.p2pu.org/en/project/477) and some other (mini-)projects here: https://github.com/fourtonfish

2) Tracking changes made to co-created Web resources: I use GitHub for some of my personal projects (although I only received a handful of opened issues) and GitLab with clients I worked with/for.

3) Using synchronous and asynchronous tools to communicate with web communities, networks and groups https://twitter.com/fourtonfish – I follow some of the members of Webmaker (and seldomly frustrate Doug Belshaw with questions) https://plus.google.com/+StefanBohacek/posts – I am a member of the Webmaker community http://webdevrefinery.com/forums/user/18887-ftfish/ – I (infrequently) post here, share ideas, comment on ideas of others etc. stefan@fourtonfish.com – I wouldn’t be able to finish my teaching kit without the help of other webmakers and my email account to communicate with them

Note that Stefan earned his badge for numbers 1) and 3) in the above example. This was enough to meet the requirements as the badge is awarded for meeting any two of the criteria listed on the badge detail page. He did not provide any evidence for using GitHub, as mentioned in 2), so this was not used as evidence by the person reviewing his application.


Applying for a badge is just like applying for anything in life:

  • Make the reviewer’s job easy – they’re looking at lots of applications!
  • Tell the reviewer which of the criteria you think you have met.
  • Include a link for each of the criteria – more than one if you can.
  • If you are stuck, ask for help. A good place to start is the Webmaker discussion forum, or if you know someone who’s already got that badge, ask them to help you!

Questions? Comments? I’m @dajbelshaw or you can email me at doug@mozillafoundation.org. Note that specific badge questions should go in the discussion forum.


Image CC Mozilla in Europe

Botond BalloTrip Report: C++ Standards Committee Meeting in Rapperswil, June 2014

Summary / TL;DR

Project Status
  • C++14: On track to be published late 2014
  • C++17: A few minor features so far, including for (elem : range)
  • Networking TS: Ambitious proposal to standardize sockets based on Boost.ASIO
  • Filesystems TS: On track to be published late 2014
  • Library Fundamentals TS: Contains optional, any, string_view and more. Progressing well, expected early 2015
  • Library Fundamentals TS II: Follow-up to Library Fundamentals TS; will contain array_view and more. In early stage
  • Array Extensions TS: Completely stalled. No proposal related to runtime-sized arrays/objects currently has consensus
  • Parallelism TS: Progressing well; expected 2015
  • Concurrency TS: Executors and resumable functions need more work
  • Transactional Memory TS: Progressing well; expected 2015-ish
  • Concepts ("Lite") TS: Progressing well; expected 2015
  • Reflection: A relatively full-featured compile-time introspection proposal was favourably reviewed. Might target a TS or C++17
  • Graphics: Moving forward with a cairo-based API, to be published in the form of a TS
  • Modules: Clang has a complete implementation for C++, plan to push it for C++17

Introduction

Last week I attended another meeting of the ISO C++ Standards Committee in Rapperswil, Switzerland (near Zurich). This is the third Committee meeting I have attended; you can find my reports about the previous two here (September 2013, Chicago) and here (February 2014, Issaquah). These reports, particularly the Issaquah one, provide useful context for this post.

With C++14's final ballot still in progress, the focus of this meeting was on the various language and library Technical Specifications (TS) that are planned as follow-ups to C++14, and on C++17.

C++14

C++14 is currently out for its “DIS” (Draft International Standard) ballot (see my Issaquah report for a description of the procedure for publishing a new language standard). This ballot was sent out at the end of the Issaquah meeting, and will close mid-August. If no national standards body poses an objection by then – an outcome considered very likely – then the standard will be published before the end of the year.

Since a ballot was in progress, no changes to the C++14 draft were made during the Rapperswil meeting.

C++17, and what’s up with all these TS’s?

ISO procedure allows the Committee to publish two types of documents:

  • International Standards (IS). These are official standards with stringent backwards-compatibility requirements.
  • Technical Specifications (TS) (formerly called Technical Reports (TR)). These are for things that are not quite ready to be enshrined into an official standard yet, and have no backwards-compatibility requirements. Specifications contained in a TS may or may not be added, possibly with modifications, into a future IS.

C++98 and C++11 are IS’s, as will be C++14 and C++17. The TS/TR form factor has, up until recently, only been used once by the Committee: for TR1, the 2005 library spec that gave us std::tr1::shared_ptr and other library enhancements that were then added into C++11.

Since C++11, in an attempt to make the standardization process more agile, the Committee has been aiming to place significant new language and library features into TS’s, published on a schedule independent of the IS’s. The idea is that being in a TS allows the feature to gain user and implementation experience, which the committee can then use to re-evaluate and possibly revise the feature before putting it into an IS.

As such, much of the standardization work taking place concurrently with and immediately after C++14 is in the form of TS’s, the first wave of which will be published over the next year or so, and the contents of which may then go into C++17 or a subsequent IS, as schedules permit.

Therefore, at this stage, only some fairly minor features have been added directly to C++17.

The most notable among them is the ability to write a range-based for loop of the form for (elem : range), i.e. with the type of the element omitted altogether. As explained in detail in my Issaquah report, this is a shorthand for for (auto&& elem : range) which is almost always what you want. The Evolution Working Group (EWG) approved this proposal in Issaquah; in Rapperswil it was also approved by the Core Working Group (CWG) and voted into C++17.
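To make the shorthand concrete, here is a minimal sketch. The terse form is shown only in a comment, since it is merely in the working draft and was not yet implemented by compilers at the time of writing; the names used are made up for illustration.

#include <vector>
#include <iostream>

int main() {
    std::vector<int> v{1, 2, 3};

    // Today's spelling: deduce the element with a forwarding reference.
    for (auto&& elem : v)
        std::cout << elem << '\n';

    // The newly voted-in shorthand is defined to mean exactly the auto&& form
    // above; it is left commented out because compilers did not yet accept it.
    // for (elem : v)
    //     std::cout << elem << '\n';
}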

Other minor things voted into C++17 include (a short combined sketch follows this list):

  • static_assert(condition), i.e. with the message omitted. An implementation-defined message is displayed.
  • auto var{expr}; is now valid and equivalent to T var{expr}; (where T is the deduced type)
  • A template template parameter can now be written as template <...> typename Name in addition to template <...> class Name, to mirror the way a type template parameter can be written as typename Name in addition to class Name
  • Trigraphs (an obscure feature that allowed certain characters, such as #, which are not present on some ancient keyboards, to be written as a three-character sequence, such as ??=) were removed from the language
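A small combined sketch of the first three items, assuming a compiler that implements these C++17 changes (older compilers will reject parts of it; Wrapper and Holder are made-up names for illustration):

#include <type_traits>

// A template template parameter spelled with 'typename', mirroring 'class'.
template <template <typename> typename Container>
struct Wrapper {};

template <typename T>
struct Holder {};

int main() {
    // Message omitted: an implementation-defined message is shown on failure.
    static_assert(sizeof(int) >= 2);

    // Under the new rule this deduces int, i.e. it is equivalent to int x{42};
    auto x{42};
    static_assert(std::is_same<decltype(x), int>::value, "x should deduce as int");

    Wrapper<Holder> w;
    (void)w;
    (void)x;
}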

Evolution Working Group (EWG)

As with previous meetings, I spent most of my time in the Evolution Working Group, which spends its time looking at proposals for new language features that either do not fall into the scope of any Study Group, or have already been approved by a Study Group. There was certainly no lack of proposals at this meeting; to EWG’s credit, it got through all of them, at least the ones which had papers in the official pre-Rapperswil mailing.

Incoming proposals were categorized into three rough categories:

  • Approved. The proposal is approved without design changes. It is sent on to CWG, which revises it at the wording level, and then puts it in front of the committee at large to be voted into whatever IS or TS it is targeting.
  • Further Work. The proposal’s direction is promising, but it is either not fleshed out well enough, or there are specific concerns with one or more design points. The author is encouraged to come back with a modified proposal that is more fleshed out and/or addresses the stated concerns.
  • Rejected. The proposal is unlikely to be accepted even with design changes.

Accepted proposals:

  • Opening two nested namespaces with namespace A::B { in addition to namespace A { namespace B { (a short sketch follows this list)
  • “Making return explicit”. This means that if a class A has an explicit constructor which takes a parameter of type B, then, in a function whose return type is A, it is valid to write return b; where b has type B. (Currently, one has to write return A(b);.) The idea is to avoid repetition; a very common use case is A being std::unique_ptr<T> for some T, and B being T*. This proposal was relatively controversial; it passed with a weak consensus in EWG, and was also discussed in the Library Evolution Working Group (LEWG), where there was no consensus for it. I was surprised that EWG passed this to CWG, given the state of the consensus; in fact, CWG indicated that they would like additional discussion of it in a forum that includes both EWG and LEWG members, before looking at it in CWG.
  • A preprocessor feature for testing for the presence of a (C++11-style) attribute: __has_cpp_attribute(attribute_name)
  • A couple of items that I already mentioned above, because they were also passed by CWG and voted into C++17 at the same meeting.
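A short sketch of the nested-namespace shorthand and the attribute-test macro, assuming a compiler that implements both; the names (A::B::Widget, old_api, the deprecated attribute) are just examples:

// The single line below opens both namespaces, equivalent to
// namespace A { namespace B { ... } }.
namespace A::B {
    struct Widget {};
}

// __has_cpp_attribute expands to a non-zero value if the attribute is known
// to the implementation, and 0 otherwise.
#if __has_cpp_attribute(deprecated)
    #define MY_DEPRECATED [[deprecated]]
#else
    #define MY_DEPRECATED
#endif

MY_DEPRECATED void old_api();

int main() {
    A::B::Widget w;
    (void)w;
}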

Proposals for which further work is encouraged:

  • A proposal to make C++ more friendly for embedded systems development by reducing the overhead of exception handling, and further expanding the expressiveness of constexpr. EWG encouraged the author to gather people interested in this topic and form a Study Group to explore it in more detail.
  • A proposal to convey information to the compiler about aliasing, via attributes. This is intended to be an improvement to C99’s restrict.
  • A way to get the compiler to, on an opt-in basis, generate equality and comparison operators for a structure/class. Everyone wanted this feature, but there were disagreements about how terse the syntax should be, whether complementary operators should be generated automatically (e.g. != based on ==), how exactly the compiler should implement the operators (particularly for comparison – questions of total order vs. weaker orders came up), and how mutable members should be handled.
  • A proposal for a per-platform portable C++ ABI. I will talk about this in more detail below.
  • A way to force the compiler to omit padding between two structure fields
  • A way to specify that a class should be converted to another type in auto initialization. That is, for a class C, to specify that in auto var = c; (with c having type C), the type of var should actually be some other type D. The motivating use here is expression templates; in Matrix X, Y; auto Z = X * Y; we want the type of Z to be Matrix even if the type of X * Y is some expression template type. EWG liked the motivation, but the proposal tried to modify the semantics of template parameter deduction for by-value parameters so as to remain consistent with auto, and EWG was concerned that this was starting to encroach on too many areas of the language. The author was encouraged to come back with a more limited-scope proposal that concerned auto initialization only.
  • Fixed-size template parameter packs (typename...[K]), and packs where all parameters must be of the same type (T...[N]). EWG liked the idea, but had some concerns about syntactic ambiguities. The proposal also generated renewed interest in the idea of subscripting parameter packs (e.g. Pack[0] gives you the first parameter), to avoid having to use recursion to iterate over the parameters in many cases.
  • Expanding parameter packs as expressions. Currently, if T is a parameter pack bound to parameters A, B, and C, then T... expands to A, B, C; this expansion is allowed in various contexts where a comma-separated list of things (types or expressions, as the parameters may be) is allowed. The proposal here is to allow things like T +..., which would expand to A + B + C and be allowed in an expression context (see the short sketch after this list).
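As a sketch of what such an expansion buys, here is the recursive formulation required today, with the proposed expansion-as-expression spelling shown only in a comment (that spelling follows the proposal as described above and is not valid C++ at the time of writing; sum is a made-up example):

#include <iostream>

// Today, summing a parameter pack requires a recursive helper.
int sum() { return 0; }

template <typename Head, typename... Tail>
int sum(Head head, Tail... tail) {
    return head + sum(tail...);
}

// Under the proposal, something along these lines could replace the recursion
// (illustrative spelling only):
//
// template <typename... Ts>
// int sum(Ts... args) {
//     return args +...;   // expands to a1 + a2 + ... + aN
// }

int main() {
    std::cout << sum(1, 2, 3) << '\n';   // prints 6
}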

Rejected proposals:

  • Objects of runtime size. This would have allowed a pure library implementation of something like std::dynarray (and allowed users to write similar classes of their own), but it unfortunately failed to gain consensus. More about this in the Array Extensions TS section.
  • Additional flow control mechanisms like break label;, continue label; and goto case value;. EWG thought these encouraged hard-to-follow control flow.
  • Allowing specifiers such as virtual, static, override, and some others, to apply to a group of members the way access specifiers (private: etc.) currently do. The basis for rejection here was that separating these specifiers from the members they apply to can make class definitions less readable.
  • Specializing an entity in a different namespace without closing the namespaces you are currently in. Rejected because it’s not clear what would be in scope inside the specialization (names from the entity’s namespace, the current namespace, or both).
  • <<< and >>> operators for circular bit-shifts. EWG felt these would be more appropriate as library functions.
  • A rather complicated-looking proposal for annotating template parameter packs that appeared to be a generalization of the proposal for fixed-size template parameter packs. The discussion of this was more preliminary in nature because the paper was not in the pre-Rapperswil mailing, and so many attendees did not have time to look at it in advance. The initial impression was that it would make the language much more complicated, while the benefit would mostly be for template metaprogramming; also, several of the use cases can be satisfied with Concepts instead. Some people felt that more limited extensions to the language, with library features providing more advanced functionality, might be more appropriate. That said, if the author follows up with a formal proposal, EWG is likely to have another look at this.
  • Throwing an exception on stack exhaustion. The implementers in the room felt this was not implementable.

I should also mention the proposal for named arguments that Ehsan and I have been working on. We did not prepare this proposal in time to get it into the pre-Rapperswil mailing, and as such, EWG did not look at it in Rapperswil. However, I did have some informal discussions with people about it. The main concerns were:

  • consideration for constructor calls with {...} syntax and, by extension, aggregate initialization
  • the relationship to C99 designated initializers (if we are covering aggregate initialization, then these can be viewed as competing syntaxes)
  • most significantly: parameter names becoming part of a library’s interface that library authors then have to be careful not to break

Assuming we are able to address these concerns, we will likely write an updated proposal, get it into the pre-Urbana mailing (Urbana-Champaign, Illinois is the location of the next Committee meeting in November), and present it at the Urbana meeting.

Portable C++ ABI

One of the most exciting proposals at this meeting, in my opinion, was Herb Sutter’s proposal for a per-platform portable C++ ABI.

A per-platform portable ABI means that, on a given platform (where “platform” is generally understood to mean the combination of an operating system, processor family, and bitness), binary components can be linked together even if they were compiled with different compilers, or different versions of the same compiler, or different compiler options. The current lack of this in C++ is, I think, one of C++’s greatest weaknesses compared to other languages like C or Java.

More specifically, there are two aspects to ABI portability: language and library. On the language side, portability means that binary components can be linked together as long as, for any interface between two components (for example, for a function that one component defines and the other calls, the interface would consist of the function’s declaration, and the definitions of any types used as parameters or return type), the two components are compiled from identical standard-conforming source code for that interface. On the library side, portability additionally means that interfaces between components can make use of standard library types (this does not follow solely from the language part, because different compilers may not have identical source code for their standard library types).

It has long been established that it is out of scope for the C++ Standard to prescribe an ABI that vendors should use (among other reasons, because parts of an ABI are inherently platform-specific, and the standard cannot enumerate every platform and prescribe something for each one). Instead, Herb’s proposal is that the standard codify the notions of a platform and a platform owner (an organization/entity who controls the platform); require that platform owners document an ABI (in the case of the standard library, this means making available the source code of a standard library implementation) which is then considered the platform ABI; and require compiler and standard library vendors to support the platform ABI to be conformant on any given platform.

In order to ease transitioning from the current world where, on a given platform, the ABI can be highly dependent on the compiler, the compiler version, or even compiler options, Herb also proposes some mechanisms for delineating a portion of one’s code which should be ABI-portable, and therefore compiled using the platform ABI. These mechanisms are a new linkage (extern "abi") on the language side, and a new namespace (std::abi, containing the same members as std) on the library side. The idea is that one can restrict the use of these mechanisms to code that constitutes component interfaces, thereby achieving ABI portability without affecting other code.
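As a purely illustrative sketch of the language-side mechanism (extern "abi" and std::abi are the proposal's inventions, not standard C++, and the Widget interface here is a made-up example):

// widget.h -- a hypothetical component interface.

// What we can write today: the binary interface of these declarations depends
// on the compiler, its version, and its options.
struct Widget {
    int id;
    double value;
};
Widget make_widget(int id);

// Under the proposal, wrapping the interface in the new linkage would pin it
// to the documented platform ABI, so components built by different compilers
// on the same platform could link against it (illustrative only):
//
// extern "abi" {
//     struct Widget { int id; double value; };
//     Widget make_widget(int id);
// }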

This proposal was generally well-received, and certainly people agreed that a portable ABI is something C++ needs badly, but some people had concerns about the specific approach. In particular, implementers were uncomfortable with the idea of potentially having to support two different ABI’s side-by-side in the same program (the platform ABI for extern "abi" entities, and the existing ABI for other entities), and, especially, with having two copies of every library entity (one in std and one in std::abi). Other concerns about std::abi were raised as well, such as the performance penalty arising from having to convert between std and std::abi types in some places, and the duplication being difficult to teach. It seemed that a modified proposal that concerned the language only and dropped std::abi would have greater consensus.

Array Extensions TS

The Array Extensions TS was initially formed at the Chicago meeting (September 2013) when the committee decided to pull arrays of runtime bound (ARBs, the C++ version of C’s VLAs) and dynarray, the standard library class for encapsulating them, out of C++14 and into a TS. This was done mostly because people were concerned that dynarray required too much compiler magic to implement. People expressed a desire for a language feature that would allow them to implement a class like dynarray themselves, without any compiler magic.

In Issaquah a couple of proposals for such a language feature were presented, but they were relatively early-stage proposals, and had various issues such as having quirky syntax and not being sufficiently general. Nonetheless there was consensus that a library component is necessary, and we’d rather not have ARBs at all than get them without a library component to wrap them into a C++ interface.

At this meeting, a relatively fully fleshed-out proposal was presented that gave programmers a fairly general/flexible way to define classes of runtime size. Unfortunately, it was considered a complex change that touches many parts of the language, and there was no consensus for going forward with it.

As a result, the Array Extensions TS is completely stalled: ARBs themselves are ready, but we don’t want them without a library wrapper, and no proposal for a library wrapper (or mechanism that would enable one to be written) has consensus. This means that the status quo of not being able to use VLAs in C++ (unless a vendor enables C-style VLAs in C++ as an extension) will remain for now.

Library / Library Evolution Working Groups (LWG and LEWG)

Library work at this meeting included the Library Fundamentals TS (and its planned follow-up, Library Fundamentals II), the Filesystems TS and Networking TS (about which I’ll talk in the SG 3 and SG 4 sections below), and reviewing library components of other projects like the Concurrency TS.

The Library Fundamentals TS was in the wording review stage at this meeting, with no new proposals being added to it. It contains general library utilities such as optional, any, string_view, and more; see my Issaquah report for a full list. The current draft of the TS can be found here. At the end of the meeting, the Committee voted to send out the TS for its first national body ballot, the PDTS (Preliminary Draft TS) ballot. This ballot concludes before the Urbana meeting in November; if the comments can be addressed during that meeting and the TS sent out for its second and final ballot, the DTS (Draft TS) ballot, it could be published in early 2015.

The Committee is also planning a follow-up to the Library Fundamentals TS, called the Library Fundamentals TS II, which will contain general utilities that did not make it into the first one. Currently, it contains one proposal, a generalized callable negator; another proposal, containing library facilities for contract programming, got rejected for several reasons, one of them being that it is expected to be obsoleted in large part by reflection. A number of other proposals are under consideration to be added.

Study Groups

SG 1 (Concurrency)

SG 1 focuses on two areas, concurrency and parallelism, and there is one TS in the works for each.

I don’t know much about the Parallelism TS other than it’s in good shape and was sent out for its PDTS ballot at the end of the meeting, which could lead to publication in 2015.

The status of the Concurrency TS is less certain. Coming into the Rapperswil meeting, the Concurrency TS contained two things: improvements to std::future (notably a then() method for chaining a second operation to it), and executors and schedulers, with resumable function being slated for addition.

However, the pre-Rapperswil mailing contained several papers arguing against the existing designs for executors and resumable functions, and proposing alternative designs instead. These papers led to executors and schedulers being removed from the Concurrency TS, and resumable functions not being added, until people come to a consensus regarding the alternative designs. I’m not sure whether publication of the Concurrency TS (which now contains only the std::future improvements) will proceed, leaving executors and resumable functions for a follow-up TS, or be stalled until consensus on the latter topics is reached.

For resumable functions, I was in the room during the technical discussion, and found it quite interesting. The alternative proposal is a coroutines library based on Boost.Coroutine. The two proposals differ both in syntax (new keywords async and await vs. the magic being hidden entirely behind a library interface), and implementation technique for storing the local variables of a resumable function (heap-allocated “activation frames” vs. side stacks). The feedback from SG 1 was to disentangle these two aspects, possibly yielding a proposal where either syntax could be matched with either implementation technique.
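For a feel of the keyword-based style under discussion, here is a sketch. The async/await spelling is shown only in a comment, since it is proposal syntax that compilers did not accept at the time; the blocking version is plain C++11, and compute_async/twice are made-up names.

#include <future>
#include <iostream>

// Some asynchronous computation returning a future.
std::future<int> compute_async() {
    return std::async(std::launch::async, [] { return 21; });
}

// Today, composing on the result means blocking on get() (or hand-written
// continuation plumbing).
int twice_blocking() {
    return compute_async().get() * 2;
}

// The keyword-based proposal would allow something along these lines
// (illustrative spelling only):
//
// std::future<int> twice() async {
//     int v = await compute_async();   // suspends instead of blocking a thread
//     return v * 2;
// }

int main() {
    std::cout << twice_blocking() << '\n';   // prints 42
}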

There are also other concurrency-related proposals before SG 1, such as ostream buffers, latches and barriers, shared_mutex, atomic operations on non-atomic data, and a synchronized wrapper. I assume these will go into either the current Concurrency TS, or a follow-up TS, depending on how they progress through the committee.

SG 2 (Modules)

Modules is, in my opinion, one of the most sorely needed features in C++. They have the potential of decreasing compile times by an order of magnitude or more, thus bringing compile-time performance more in line with more modern languages, and of solving the combinatorial explosion problem, caused by macros, that hampers the development of powerful tooling such as automated refactoring.

The standardization of modules has been a slow process for two reasons. First, it’s an objectively difficult problem to solve. Second, the solution shares some of the implementation difficulties of the export keyword, a poorly thought-out feature in C++98 that sought to allow separate compilation of templates; export was only ever implemented by one compiler (EDG), and the implementation process revealed flaws that led not only to other compilers not even bothering to implement it, but also to the feature being removed from the language in C++11. This bad experience with export led people to be uncertain about whether modules are even implementable. As a result, while some papers have been written proposing designs for modules (notably, one by Daveed Vandevoorde a couple of years back, and one by Gabriel Dos Reis very recently), what everyone has really been holding their breath for was an implementation (of any variation/design), to see that one was possible.

Google and others have been working on such an implementation in Clang, and I was very excited to hear Chandler Carruth (head of Google’s team working on Clang) report that they have now completed it! As this work was completed only very recently, prior to the meeting, they did not get a chance to write a paper to present at this meeting, but Chandler said one will be forthcoming for the next meeting.

EWG held a session on Modules, where Gaby presented his paper, and the Clang folks discussed their implementation. There were definitely some differences between the two. Gaby’s proposal came across as more idealistic: this is what a module system should look like if you’re writing new code from scratch with modules. Clang’s implementation is more practical: this is how we can start using modules right away in existing codebases. For example, in Gaby’s proposal, a macro defined in a module is not visible to an importing module; in Clang’s implementation, it is, reflecting the reality that today’s codebases still use macros heavily. As another example, in Gaby’s proposal, a module writer must say which declarations in the module file are visible to importing modules by surrounding them in an export block (not to be confused with the failed language feature I talked about above); in Clang’s implementation, a header file can be used as a module without any changes, using an external “module map” file to tell the compiler it is a module. Another interesting design question that came up was whether private members of a class exported from a module are “visible” to an importing module (in the sense that importing modules need to be recompiled if such a private member is added or modified); in Clang’s implementation, this is the case, but there would certainly be value in avoiding this (among other things, it would obsolete the laborious “Pimpl” design pattern).
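To make the contrast concrete, here is a rough sketch of the “export block” style described for Gaby's proposal. The syntax is schematic and shown only in comments, since no shipping compiler accepted it at the time, and the module name and types (math.vec, Vec3) are made up:

// A schematic module file in the style described above (illustrative syntax only):
//
// module math.vec;                     // this file defines the module math.vec
//
// export {
//     struct Vec3 { double x, y, z; }; // visible to importing modules
//     Vec3 add(Vec3 a, Vec3 b);
// }
//
// static Vec3 normalize_impl(Vec3);    // not exported: implementation detail
//
// In Clang's implementation, by contrast, an existing header can be used as a
// module without changes, with a separate "module map" file describing it to
// the compiler, and macros defined in the module remain visible to importers.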

The takeaway was that, while everyone wants this feature, and everyone is excited about there finally being an implementation, several design points still need to be decided. EWG deemed that it was too early to take any polls on this topic, but instead encouraged the two parties (the Clang folks, and Gaby, who works for Microsoft and hinted at a possible Microsoft implementation effort as well) to collaborate on future work. Specifically, EWG encourages that the following papers be written for Urbana: one about what is common to the various proposals, and one or more about the remaining deep technical issues. I eagerly await such future work and papers.

SG 3 (Filesystems)

At this meeting, the Library Working Group finished addressing the ballot comments for the Filesystem TS’s PDTS ballot, and sent out the TS for the final “DTS” ballot. If this ballot is successful, the Filesystems TS will be published by the end of 2014.

Beman (the SG 3 chair) stated that SG 3 will entertain new filesystem-related proposals that build upon the Filesystems TS, targeting a follow-up Filesystems TS II. To my knowledge no such proposals have been submitted so far.

SG 4 (Networking)

SG 4 had been working on standardizing basic building blocks related to networking, such as IP addresses and URIs. However, these efforts are stalled.

As a result, the LEWG decided at this meeting to take it upon itself to advance networking-related proposals, and they set their sights on something much more ambitious than IP addresses and URIs: a full-blown sockets library, based on Boost.ASIO. The plan is basically to pick up Chris Kohlhoff’s (the author of ASIO) 2007 paper proposing ASIO for standardization, incorporating changes to update the library for C++11, as well as C++14 (forthcoming). This idea received very widespread support in LEWG; the group decided to give people another meeting to digest the new direction, and then propose adopting these papers as the working paper for the Networking TS in Urbana.

This change in pace and direction might seem radical, but it’s in line with the committee’s philosophy for moving more rapidly with TS’s. Adopting the ASIO spec as the initial Networking TS working paper does not mean that the committee agrees with every design decision in ASIO; on the contrary, people are likely to propose numerous changes to it before it gets standardized. However, having a working paper will give people something to propose changes against, and thus facilitate progress.

SG 5 (Transactional Memory)

The Transactional Memory TS is progressing well through the committee. CWG began reviewing its wording at this meeting, and referred one design issue to EWG. (The issue concerned functions that were declared to be neither transaction-safe nor transaction-unsafe, and defined out of line (so the compiler cannot compute the transaction safety from the definition). The state of the proposal coming into the discussion was that for such functions, the compiler must assume that they can be either transaction-safe or transaction-unsafe; this resulted in the compiler sometimes needing to generate two versions of some functions, with the linker stripping out the unused version if you’re lucky. EWG preferred avoiding this, and instead assuming that such functions are transaction-unsafe.) CWG will continue reviewing the wording in Urbana, and hopefully send out the TS for its PDTS ballot then.

SG 6 (Numerics)

Did not meet in Rapperswil, but plans to meet in Urbana.

SG 7 (Reflection)

SG 7 met for an evening session and looked at three papers:

  • The latest version of the source code information capture proposal, which aims to replace the __LINE__, __FILE__, and __FUNCTION__ macros with a first-class language feature. There was a lot of enthusiasm for this idea in Issaquah, and now that it’s been written up as a paper, SG 7 is moving on it quickly, deciding to send it right on to LEWG with only minor changes. The publication vehicle – with possible choices being Library Fundamentals TS II, a hypothetical Reflection TS, or C++17 – will be decided by LEWG.
  • The type member property queries proposal by Andrew Tomazos. This is an evolution of an earlier proposal which concerned enumerations only, and which was favourably reviewed in Issaquah; the updated proposal extends the approach taken for enumerations, to all types. The result is already a quite featureful compile-time introspection facility, on top of which facilities such as serialization can be built. It does have one significant limitation: it relies on forming pointers to members, and thus cannot be used to introspect members to which pointers cannot be formed – namely, references and bitfields. The author acknowledged this, and pointed out that supporting such members with the current approach would require language changes. SG 7 did not deem this a deal-breaker problem, possibly out of optimism that such language changes would be forthcoming if this facility created a demand for them. Overall, the general direction of the proposal had basically unanimous support, and the author was encouraged to come back with a revised proposal that splits out the included compile-time string facility (used to represent names of members for introspection) into a separate, non-reflection-specific proposal, possibly targeted at Library Fundamentals II. The question of extending this facility to introspect things other than types (notably, namespaces, although there was some opposition to being able to introspect namespaces) also came up; the consensus here was that such extensions can be proposed separately when desired.
  • A more comprehensive static reflection proposal was looked at very, very briefly (the author was not present to speak about it in detail). This was a higher-level and more featureful proposal than Tomazos’ one; the prevailing opinion was that it is best to standardize something lower-level like Tomazos’ proposal first, and then consider standardizing higher-level libraries that build on it if appropriate.

SG 8 (Concepts)

The Concepts TS (formerly called “Concepts Lite”, but then people thought “Lite” was too informal to be in the title of a published standard) is still in the CWG review stage. Even though CWG looked at it in Issaquah, and the author and project editor, Andrew Sutton, revised the draft TS significantly for Rapperswil, the feature touches many areas of the language, and as such more review of the wording was required; in fact, CWG spent almost three full days looking at it this time.

The purpose of a CWG review of a new language feature is twofold: first, to make sure the feature meshes well with all areas of the language, including interactions that the author and EWG may have glossed over; and second, to make sure that the wording reflects the author’s intention accurately. In fulfilling the first objective, CWG often ends up making minor changes to a feature, while staying away from making fundamental changes to the design (sometimes, recommendations for more significant changes do come up during a CWG review – these are run by EWG before being acted on).

In the case of the Concepts TS, CWG made numerous minor changes over the course of the review. It was initially hoped that there would be time to revise the wording to reflect these changes, and put the revised wording out for a PDTS ballot by the end of the meeting, but the changes were too numerous to make this feasible. Therefore, the PDTS ballot proposal was postponed until Urbana, and Andrew has until then to implement the wording changes.

SG 9 (Ranges)

SG 9 did not meet in Rapperswil, but does plan to meet in Urbana, and I anticipate some exciting developments in Urbana.

First, I learned that Eric Niebler, who in Issaquah talked about an idea for a Ranges proposal that I thought was very exciting (I describe it in my Issaquah report), plans to write up his idea as a proposal and present it in Urbana.

Second, one of the attendees at Rapperswil, Fabio Fracassi, told me that he is also working on a (different) Ranges proposal that he plans to present in Urbana as well. I’m not familiar with his proposal, but I look forward to it. Competition is always healthy at this early stage of a standards proposal, when an approach to solving the problem is still being chosen.

SG 10 (Feature Test)

I didn’t follow the work of SG 10 very closely. I assume that, in addition to the __has_cpp_attribute() preprocessor feature that I mentioned above in the EWG section, they are kept busy by the flow of new features being added into working papers, for each of which they have to decide whether the feature deserves a feature-test macro, and if so standardize a name for one.

Clark (the SG 10 chair) did mention that the existence of TS’s complicates matters for SG 10, but I suppose that’s a small price to pay for the benefits of TS’s.

SG 12 (Undefined Behaviour)

Did not meet in Rapperswil, but plans to meet in Urbana.

SG 13 (Human Interaction, formerly “Graphics”)

SG 13 met for a quarter-day session, during which Herb presented an updated version of the proposal for a cairo-based 2D drawing API. A few interesting points came up in the discussion:

  • The interface being standardized is a C++ wrapper interface around cairo that was generated using a set of mechanical transformation rules applied to cairo’s interface. The transformation rules are not being standardized, only their result (so the standard interface can potentially diverge from cairo in the future, though presumably this wouldn’t be done without a very good reason).
  • I noted that Mozilla is moving away from cairo, in part due to inefficiencies caused by cairo being a stateful API (as explained here). It was pointed out that this inefficiency is an issue of implementation (due to cairo using a stateless layer internally), not one of interface. This is a good point, although I’m not sure how much it matters in practice, as standard library vendors are much more likely to ship cairo’s implementation than write their own. (Jonathan Wakely said so about libstdc++, but I think it’s very likely the case for other vendors as well.)
  • Regarding the tradeoff between a nice interface and high performance, Herb said the goal was to provide a nice interface while providing as good performance as we can get, without necessarily squeezing every last ounce of performance.
  • The library has the necessary extension points in place to allow for uses such as hooking into drawing onto a drawing surface of an existing library, such as a Qt canvas (with the cooperation of the existing library, cairo, and the platform, of course).

The proposal is moving forward: the authors are encouraged to come back with wording.

TS Content Guidelines

One mildly controversial issue that came to a vote in the plenary meeting at the end of the week, is the treatment of modifications/extensions to standard library types in a Technical Specification. One group held that the simplest thing to do for users is to have the TS specify modifications to the types in std:: themselves. Another group was of the view that, in order to make life easier for a third-party library vendor to implement a TS, as well as to ensure that it remains practical for a subsequent IS to break the TS if it needs to, the types that are modified should be cloned into an std::experimental:: namespace, and the modifications applied there. This second view prevailed.
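For example, under this decision the Library Fundamentals types live in std::experimental rather than std. With an implementation that ships the TS, and assuming it provides an <experimental/optional> header, usage would look roughly like this:

#include <experimental/optional>

int main() {
    // The TS version of optional lives in std::experimental, leaving the std
    // name free for whatever a future IS decides to standardize.
    std::experimental::optional<int> maybe = 3;
    return maybe.value_or(0);
}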

Next Meeting

The next Committee meeting (“Urbana”) will be at the University of Illinois at Urbana-Champaign, the week of November 3rd.

Conclusion

The highlights of the meeting for me, personally, were:

  • The revelation that Clang has completed their modules implementation, that they will be pushing it for C++17, and that they are fairly confident that they will be able to get it in. The adoption of a proper modules system has the potential to revolutionize compilation speeds and the tooling landscape – revolutions that C++ needs badly.
  • Herb’s proposal for a portable C++ ABI. It is very encouraging to see the committee, which has long held this issue to be out of its scope, looking at a concrete proposal for solving a problem which, in my opinion, plays a significant role in hampering the use of C++ interfaces in libraries.
  • LEWG looking at bringing the entire Boost.ASIO proposal into the Networking TS. This dramatically brings forward the expected timeframe of having a standard sockets library, compared to the previous approach of standardizing first URIs and IP addresses, and then who knows what before finally getting to sockets.

I eagerly await further developments on these fronts and others, and continue to be very excited about the progress of C++ standardization.


Ryan KellyAn Experiment in Improving Compressibility

Henrik SkupinFirefox Automation report – week 25/26 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 25 and 26.

Highlights

June the 11th was actually the last Automation Training day for our team in Q3. You can read about the results here. We will implement some changes for the next quarter, when we will most likely host two of them.

Henrik finally got the time to upgrade our Mozmill-CI systems to the latest LTS version of Jenkins. A few changes were necessary, but in general everything went fine this time, and we can see some great improvements. In particular, the long delays when sending out job results seem to be gone.

Henrik also investigated the slow behavior of the mozmill-ci production master when it is under load, e.g. when QA runs on-demand update tests for Firefox releases. The main problem lies with Java, which takes up about 100% of the CPU. Because of this, the integrated web server cannot serve pages in a timely manner. Adding a second CPU to this node gave us much better response times.

Given that the new version of Ubuntu came out back in April, we want our Mozmill tests to also run on that platform version. So we got a new VM spun up by IT, which we now have to puppetize and bring online. This may still take a while, given the remaining blockers for using PuppetAgain.

While we are talking about Puppet: we got the next big change reviewed and landed. With bug 1021230 we now have our own user account, which can be customized to our needs. And that’s exactly what we need, given that our infrastructure is so different from the Releng one.

We also made progress on TPS, and the new TPS-CI production machine came online. It cannot yet replace the current CI due to a fair number of open blockers, but hopefully by the end of July we should be able to flip the switch.

Individual Updates

For more granular updates from each individual team member, please visit our weekly team etherpads for week 25 and week 26.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 25 and week 26.

David Rajchenbach TellerThe Battle of Session Restore – Season 1 Episode 3 – All With Measure

Plot: For the second time, our heroes prepared for battle. The startup of Firefox was too slow and Session Restore was one of the battlefields.

When Firefox starts, Session Restore is in charge of restoring the browser to its previous state, in case of a crash, a restart, or for the users who have configured Firefox to resume from its previous state. This entails numerous activities during startup:

  1. read sessionstore.js from disk, decode it and parse it (recall that the file is potentially several Mb large), handling errors;
  2. back up sessionstore.js in case of a startup crash;
  3. create windows, tabs, frames;
  4. populate history, scroll position, forms, session cookies, session storage, etc.

It is common wisdom that Session Restore must have a large impact on Firefox startup. But before we could minimize this impact, we needed to measure it.

Benchmarking is not easy

When we first set foot on Session Restore territory, the contribution of that module to startup duration was uncharted. This was unsurprising, as this aspect of the Firefox performance effort was still quite young. To this day, we have not finished charting startup, or even Session Restore’s startup.

So how do we measure the impact of Session Restore on startup?

A first tool we use is Timeline Events, which let us determine how long it takes to reach a specific point of startup. Session Restore has had events `sessionRestoreInitialized` and `sessionRestored` for years. Unfortunately, these events did not tell us much about Session Restore itself.

The first serious attempt at measuring the impact of Session Restore on startup performance was actually not due to the Performance team but rather to the metrics team. Indeed, data obtained through Firefox Health Report participants indicated that something had gone wrong.

Oops, something is going wrong

Indicator `d2` in the graph measures the duration between `firstPaint` (which is the instant at which we start displaying content in our windows) and `sessionRestored` (which is the instant at which we are satisfied that Session Restore has opened its first tab). While this measure is imperfect, the dip was worrying – indeed, it represented startups that lasted several seconds longer than usual.

Upon further investigation, we concluded that the performance regression was indeed due to Session Restore. While we had not planned to start optimizing the startup component of Session Restore, this battle was forced upon us. We had to recover from that regression and we had to start monitoring startup much better.

A second tool is Telemetry Histograms for measuring duration of individual operations, such as reading sessionstore.js or parsing it. We progressively added measures for most of the operations of Session Restore. While these measures are quite helpful, they are also unfortunately very unstable in real-world conditions, as they are affected both by scheduling (the operations are asynchronous), by the work load of the machine, by the actual contents of sessionstore.js, etc.

The following graph displays the average duration of reading and decoding sessionstore.js among Telemetry participants:

Difference in colors represent successive versions of Firefox. As we can see, this graph is quite noisy, certainly due to the factors mentioned above (the spikes don’t correspond to any meaningful change in Firefox or Session Restore). Also, we can see a considerable increase in the duration of the read operation. This was quite surprising for us, given that this increase corresponds to the introduction of a much faster, off the main thread, reading and decoding primitive. At the time, we were stymied by this change, which did not correspond to our experience. We have now concluded that by changing the asynchronous operation used to read the file, we have simply changed the scheduling, which makes the operation appear longer, while in practice it simply does not block the rest of the startup from taking place on another thread.

One major tool was missing from our arsenal: a stable benchmark, always executed on the same machine, with the same contents of sessionstore.js, and that would let us determine more exactly (almost daily, actually) the impact of our patches upon Session Restore (the Session Restore Talos test):

This test, based on our Talos benchmark suite, has proved both to be very stable, and to react quickly to patches that affected its performance. It measures the duration between the instant at which we start initializing Session Restore (a new event `sessionRestoreInit`) and the instant at which we start displaying the results (event `sessionRestored`).

With these measures at hand, we are now in a much better position to detect performance regressions (or improvements) to Session Restore startup, and to start actually working on optimizing it – we are now preparing to use this suite to experiment with “what if” situations to determine which levers would be most useful for such an optimization work.

Evolution of startup duration

Our first benchmark measures the time elapsed between start and stop of Session Restore if the user has requested all windows to be reopened automatically:

As we can see, the performance on Linux 32 bits, Windows XP and Mac OS 10.6 is rather decreasing, while the performance on Linux 64 bits, Windows 7 and 8 and MacOS 10.8 is improving. Since the algorithm used by Session Restore upon startup is exactly the same for all platforms, and since “modern” platforms are speeding up while “old” platforms are slowing down, this suggests that the performance changes are not due to changes inside Session Restore. The origin of these changes is unclear. I suspect the influence of newer versions of the compilers or some of the external libraries we use, or perhaps new and improved (for some platforms) gfx.

Still, seeing the modern platforms speed up is good news. As of Firefox 31, any change we make that causes a slowdown of Session Restore will cause an immediate alert so that we can react immediately.

Our second benchmark measures the time elapsed if the user does not wish windows to be reopened automatically. We still need to read and parse sessionstore.js to find whether it is valid, so as to decide whether we can show the “Restore” button on about:home.

We see peaks in Firefox 27 and Firefox 28, as well as a slight decrease of performance on Windows XP and Linux. Again, in the future, we will be able to react better to such regressions.

The influence of factors upon startup

With the help of our benchmarks, we were able to run “what if” scenarios to find out which of the data manipulated by Session Restore contributed to startup duration. We did this in a setting in which we restore windows:

and in a setting in which we do not:


Interestingly, increasing the size of sessionstore.js has apparently no influence on startup duration. Therefore, we do not need to optimize reading and parsing sessionstore.js. Similarly, optimizing history, cookies or form data would not gain us anything.

The single most expensive piece of data is the set of open windows – interestingly, this is the case even when we do not restore windows. More precisely, any optimization should target, by order of priority:

  1. the cost of opening/restoring windows;
  2. the cost of opening/restoring tabs;
  3. the cost of dealing with windows data, even when we do not restore them.

What’s next?

Now that we have information on which parts of Session Restore startup need to be optimized, the next step is to actually optimize them. Stay tuned!


Nick CameronRust for C++ programmers - part 9: destructuring pt2 - match and borrowing

(Continuing from part 8, destructuring).

(Note this was significantly edited as I was wildly wrong the first time around. Thanks to /u/dpx-infinity for pointing out my mistakes.)

When destructuring there are some surprises in store where borrowing is concerned. Hopefully, there is nothing surprising once you understand borrowed references really well, but it is worth discussing (it took me a while to figure out, that's for sure. Longer than I realised, in fact, since I screwed up the first version of this blog post).

Imagine you have some `&Enum` variable `x` (where `Enum` is some enum type). You have two choices: you can match `*x` and list all the variants (`Variant1 => ...`, etc.) or you can match `x` and list reference to variant patterns (`&Variant1 => ...`, etc.). (As a matter of style, prefer the first form where possible since there is less syntactic noise). `x` is a borrowed reference and there are strict rules for how a borrowed reference can be dereferenced; these interact with match expressions in surprising ways (at least surprising to me), especially when you are modifying an existing enum in a seemingly innocuous way and then the compiler explodes on a match somewhere.

Before we get into the details of the match expression, let's recap Rust's rules for value passing. In C++, when assigning a value into a variable or passing it to a function there are two choices - pass-by-value and pass-by-reference. The former is the default case and means a value is copied either using a copy constructor or a bitwise copy. If you annotate the destination of the parameter pass or assignment with `&`, then the value is passed by reference - only a pointer to the value is copied and when you operate on the new variable, you are also operating on the old value.

Rust has the pass-by-reference option, although in Rust the source as well as the destination must be annotated with `&`. For pass-by-value in Rust, there are two further choices - copy or move. A copy is the same as C++'s semantics (except that there are no copy constructors in Rust). A move copies the value but destroys the old value - Rust's type system ensures you can no longer access the old value. As examples, `int` has copy semantics and `Box<int>` has move semantics:

fn foo() {
    let x = 7i;
    let y = x;                // x is copied
    println!("x is {}", x);   // OK

    let x = box 7i;
    let y = x;                // x is moved
    //println!("x is {}", x); // error: use of moved value: `x`
}

Rust determines if an object has move or copy semantics by looking for destructors. Destructors probably need a post of their own, but for now, an object in Rust has a destructor if it implements the `Drop` trait. Just like C++, the destructor is executed just before an object is destroyed. If an object has a destructor then it has move semantics. If it does not, then all of its fields are examined and if any of those do then the whole object has move semantics. And so on down the object structure. If no destructors are found anywhere in an object, then it has copy semantics.

Now, it is important that a borrowed object is not moved, otherwise you would have a reference to the old object which is no longer valid. This is equivalent to holding a reference to an object which has been destroyed after going out of scope - it is a kind of dangling pointer. If you have a pointer to an object, there could be other references to it. So if an object has move semantics and you have a pointer to it, it is unsafe to dereference that pointer. (If the object has copy semantics, dereferencing creates a copy and the old object will still exist, so other references will be fine).

OK, back to match expressions. As I said earlier, if you want to match some `x` with type `&T` you can dereference once in the match clause or match the reference in every arm of the match expression. Example:

enum Enum1 {
    Var1,
    Var2,
    Var3
}

fn foo(x: &Enum1) {
    match *x {  // Option 1: deref here.
        Var1 => {}
        Var2 => {}
        Var3 => {}
    }

    match x {
        // Option 2: 'deref' in every arm.
        &Var1 => {}
        &Var2 => {}
        &Var3 => {}
    }
}

In this case you can take either approach because `Enum1` has copy semantics. Let's take a closer look at each approach: in the first approach we dereference `x` to a temporary variable with type `Enum1` (which copies the value in `x`) and then do a match against the three variants of `Enum1`. This is a 'one level' match because we don't go deep into the value's type. In the second approach there is no dereferencing. We match a value with type `&Enum1` against a reference to each variant. This match goes two levels deep - it matches the type (always a reference) and looks inside the type to match the referred type (which is `Enum1`).

Either way, we (or rather, the compiler) must ensure that we respect Rust's invariants around moves and references - we must not move any part of an object if it is referenced. If the value being matched has copy semantics, that is trivial. If it has move semantics then we must make sure that moves don't happen in any match arm. This is accomplished either by ignoring data which would move, or making references to it (so we get by-reference passing rather than by-move).

enum Enum2 {
    // Box has a destructor so Enum2 has move semantics.
    Var1(Box<int>),
    Var2,
    Var3
}

fn foo(x: &Enum2) {
    match *x {
        // We're ignoring nested data, so this is OK
        Var1(..) => {}
        // No change to the other arms.
        Var2 => {}
        Var3 => {}
    }

    match x {
        // We're ignoring nested data, so this is OK
        &Var1(..) => {}
        // No change to the other arms.
        &Var2 => {}
        &Var3 => {}
    }
}

In either approach we don't refer to any of the nested data, so none of it is moved. In the first approach, even though `x` is referenced, we don't touch its innards in the scope of the dereference (i.e., the match expression) so nothing can escape. We also don't bind the whole value (i.e., bind `*x` to a variable), so we can't move the whole object either.

We can take a reference to any variant in the second match, but not in the dereferenced version. So, in the second approach replacing the second arm with `a @ &Var2 => {}` is OK (`a` is a reference), but under the first approach we couldn't write `a @ Var2 => {}` since that would mean moving `*x` into `a`. We could write `ref a @ Var2 => {}` (in which `a` is also a reference), although it's not a construct you see very often.

But what about if we want to use the data nested inside `Var1`? We can't write:

    match *x {
        Var1(y) => {}
        _ => {}
    }

or

    match x {
        &Var1(y) => {}
        _ => {}
    }

because in both cases it means moving part of `x` into `y`. We can use the 'ref' keyword to get a reference to the data in `Var1`: `&Var1(ref y) => {}`. That is OK, because now we are not dereferencing anywhere and thus not moving any part of `x`. Instead we are creating a pointer which points into the interior of `x`.

Alternatively, we could destructure the Box (this match is going three levels deep): `&Var1(box y) => {}`. This is OK because `int` has copy semantics and `y` is a copy of the `int` inside the `Box` inside `Var1` (which is 'inside' a borrowed reference). Since `int` has copy semantics, we don't need to move any part of `x`. We could also create a reference to the int rather than copy it: `&Var1(box ref y) => {}`. Again, this is OK, because we don't do any dereferencing and thus don't need to move any part of `x`. If the contents of the Box had move semantics, then we could not write `&Var1(box y) => {}`, we would be forced to use the reference version. We could also use similar techniques with the first approach to matching, which look the same but without the first `&`. For example, `Var1(box ref y) => {}`.

Now let's get more complex. Let's say you want to match against a pair of reference-to-enum values. Now we can't use the first approach at all:

fn bar(x: &Enum2, y: &Enum2) {
    // Error: x and y are being moved.
    // match (*x, *y) {
    //     (Var2, _) => {}
    //     _ => {}
    // }

    // OK.
    match (x, y) {
        (&Var2, _) => {}
        _ => {}
    }
}

The first approach is illegal because the value being matched is created by dereferencing `x` and `y` and then moving them both into a new tuple object. So in this circumstance, only the second approach works. And of course, you still have to follow the rules above for avoiding moving parts of `x` and `y`.

If you do end up only being able to get a reference to some data and you need the value itself, you have no option except to copy that data. Usually that means using `clone()`. If the data doesn't implement clone, you're going to have to further destructure to make a manual copy or implement clone yourself.

What if we don't have a reference to a value with move semantics, but the value itself? Now moves are OK, because we know no one else has a reference to the value (the compiler ensures that if they do, we can't use the value). For example,

fn baz(x: Enum2) {
    match x {
        Var1(y) => {}
        _ => {}
    }
}

There are still a few things to be aware of. Firstly, you can only move to one place. In the above example we are moving part of `x` into `y` and we'll forget about the rest. If we wrote `a @ Var1(y) => {}` we would be attempting to move all of `x` into `a` and part of `x` into `y`. That is not allowed; an arm like that is illegal. Making one of `a` or `y` a reference (using `ref a`, etc.) is not an option either; then we'd have the problem described above where we move whilst holding a reference. We can make both `a` and `y` references and then we're OK - neither is moving, so `x` remains intact and we have pointers to the whole and a part of it.

Similarly (and more common), if we have a variant with multiple pieces of nested data, we can't take a reference to one datum and move another. For example if we had a `Var4` declared as `Var4(Box<int>, Box<int>)` we can have a match arm which references both (`Var4(ref y, ref z) => {}`) or a match arm which moves both (`Var4(y, z) => {}`) but you cannot have a match arm which moves one and references the other (`Var4(ref y, z) => {}`). This is because a partial move still destroys the whole object, so the reference would be invalid.

Marco ZeheQuick tip: Add someone to circles on Google Plus using a screen reader

In my “WAI-ARIA for screen reader users” post in early May, I was asked by Donna to talk a bit about Google Plus. In particular, she asked how to add someone to circles. Google Plus has learned a thing or two about screen reader accessibility recently, but the fact that there is still no official documentation on the Google Accessibility entry page suggests that people inside Google are not yet satisfied with the quality of Google Plus accessibility, or are not placing a high enough priority on it. That quality, however, has improved, so adding someone to one or more circles using a screen reader is not that difficult any more.

Note that I tested the below steps with Firefox 31 Beta (out July 22) and NVDA 2014.2. Other screen reader/browser combos may vary in the way they output stuff or switch between their virtual cursor and focus/forms modes.

Here are the steps:

  1. Log into Google Plus. If you already have a profile, just go ahead and find someone. If not, create a profile and add people.
  2. The easiest way to find people is to go to the People tab. Note that the tabs currently have no “selected” state, but they do have the word “active” as part of the link text.
  3. Once you found someone in the list of suggestions, find the “Add to circles” menu button, and press the Space Bar. Note that it is very important that you press Space here, not Enter!
  4. NVDA now automatically switches to focus mode. What happened is that a popup menu opened that has a list of current circles, and an item at the bottom that allows you to create a new circle on the fly. The circles themselves are checkable menu items. Use the up and down arrows to select a circle, for example Friends or Acquaintances, and press the Space Bar to add the person. The number of people in that circle will dynamically increase by one, and the state will be changing to “checked”. Likewise, if you want to remove a person from a particular circle, press Space Bar just the same. These all act like regular check boxes, and the menu stays active so you can shuffle that person around your circles as you please.
  5. At the bottom, there is a non-checkable menu item called “Add new circle”. Here, you have to press Enter. If you do this, a panel opens inside the menu, and focus lands on a text field where you can enter the name of a new circle, for example Web Developers. Press Tab to reach the Create Circle button and press Space Bar. The new circle will be added, the person you’re adding to circles will automatically be added to that circle, and you’re back in the menu of circle checkboxes.
  6. Once you’re done, press Escape twice. The first will end NVDA’s focus mode, the second will close the Add to Circles menu. Focus will land back on the button for that person, but the label will change to the name of a single circle, if you added the person to only one circle, or the label “x Circles”, where x is the number of circles you just put that person into.

The above steps also work on the menu button that you find if you open the profile page of an individual person, not just in the list of suggested people or any other list of people. The interaction is exactly the same.

Hope this helps you get around in Google Plus a bit more efficiently!

Kent JamesFollowing Wikipedia, Thunderbird Could Raise $1,600,000 in annual donations

What will it take to keep Thunderbird stable and vibrant? Although there is a dedicated, hard-working team of volunteers trying hard to keep Thunderbird alive, there has been very little progress on improvements since Mozilla drastically reduced their funding. I’ve been an advocate for some time that Thunderbird needs income to fulfill its potential, and that the best way to generate that income would be to appeal directly to its users for donations.

One internet organization that has done this successfully has been Wikipedia. How much income could Thunderbird generate if they received the same income per user as Wikipedia? Surely our users, who rely on Thunderbird for critical daily communications, are at least as willing to donate as Wikipedia users.

Estimates of income from Wikipedia’s annual fundraising drive to users are around $20,000,000 per year. Wikipedia recently reported about 11,824 million page views per month and 5 page views per user, which works out to roughly 78 million daily users (11,824 million ÷ ~30 days ÷ 5 page views per user). Thunderbird, by contrast, has about 6 million daily users (estimated from hits per day to the update checks), or about 8% of Wikipedia’s daily users.

If Thunderbird were willing to directly engage users asking for donations, at the same rate per user as Wikipedia, there would be the potential to raise about 8% of $20,000,000, or $1,600,000, per year. That would certainly be enough income to maintain a serious team to move forward.

Wikipedia’s donation requests were fairly intrusive, with large banners at the top of all Wikipedia pages. When Firefox did a direct appeal to users early this year, the appeal was very subtle (did you even notice it?). I tried to scale the Firefox results to Thunderbird, and estimated that a similar subtle appeal might raise $50,000 – $100,000 per year in Thunderbird. That is not sufficient to make a significant impact. We would have to be willing to be a little intrusive, like Wikipedia, if we are going to be successful. This will generate pushback, as has Wikipedia’s campaign, so we would have to be willing to live with the pushback.

But is it really in the best interest of our users to spare them an annual, slightly intrusive appeal for donations, while letting the product that they depend on each day slowly wither away? I believe that if we truly care about our users, we will take the necessary steps to ensure that we give them the best product possible, including undertaking fundraising to keep the product stable and vibrant.

Kevin NgoMore Happiness for Your Buck

Disney is the happiest place on Earth, but also one of the most expensive. It might be well worth the wallet hit, though.

With increasing assets, I have been thinking lately about what to purchase next: a home, vacation planning, investments. You know, personal finances. That got me wondering how we spend in order to make ourselves happier. How can we use our money most efficiently to make ourselves happiest?

We have fine choices between 65" 3D plasma TVs, media-integrated BMWs and Audis, and Tudor-style houses on the tree-lined avenue. Although we're all aware of the American Dream, and although we might even consciously scoff at it, is it really ingrained in our heads enough to affect our purchases? Despite being aware of materialism, we still spend on items such as Apple product upgrades or matching furniture sets. But really, compared to what we could potentially be allocating our money towards, are they really worth it? Buck by buck, there are happier things to spend money on and happier ways to spend it.

Experiences Trump Stuff

The happiness attained from a new toy is fleeting. When I buy a gadget, I get really excited about it for a couple of weeks, and then it's just another item on the shelf. Once in freshman year, I dropped $70 on an HD camcorder. "Think about all the cool life experiences I could record!", I thought. After playing around with it for a bit, it got stowed away, just as Woody was when Buzz came to town. It wasn't the actual camcorder that I really wanted; it was thinking about the future experiences I could have.

Thinking back, the best things I have ever spent my money on were experiences: trips around the world, places like the cultural streets of Beijing, the serenity of Oahu, or the cold isolation of Alaska. They bring back warm (or perhaps cold) memories and instill a rush of nostalgia. They brought about happiness in a way that those $100 beat-up sneakers or that now-stolen iPod touch never did.

It's changed my thoughts on getting a nice house or car. Why spend to be stuck at a mundane home or spend to be stuck in traffic (just in cushier seats)? I'd rather use the money saved from not splurging $400K on a house to see the world. Spend money to be with people, go to places, attend shows, try new things. You won't forget it.

Instant Gratification is a Drag

It's not only what we spend on that makes us happy, it's how we spend. When we spend in a way that brings instant gratification, such as buying that new fridge in-store on credit or getting that candy bar now, it destroys the whole fun of the waiting game. Have you ever eagerly awaited a package for weeks? Thinking about all the future possibilities, all the things you can do, all the fun you will have once that package comes. We are happier when we await something eagerly in anticipation. It's about the journey and not the destination.

Just yesterday, I booked my flight and hotel to Florida to visit my girlfriend working at Disney. It's almost two months out. But every day, I'll be thinking about how much fun we'll have watching the Fantasmic fireworks, how relaxing it will be staying at a 1940s Atlantic City-themed Disney inn, all the delicious food at the Flying Fish. With the date marked on my calendar, it makes me happier every day just eagerly anticipating it.

When you spend on something now and defer the actual consumption or experience for later, you will be much more gratified. Try pre-ordering something you enjoy, planning trips months ahead, or purchasing online. By practicing patience, you'll probably even save a bit of cash.

Make It Scarce

Experiencing something too frequently makes it less of an experience. If you drink a frothy mocha cappuccino every day, you become more and more desensitized to its creamy joys. By not buying or experiencing something too often, you make it scarce, and it becomes more of a treat. So if you're eating out for lunch every day at nice restaurants, you might want to think about only eating out once a week. Or only get expensive coffees on Fridays. It'll make those times you do go out that much more satisfying, and your wallet will thank you.

Time Trumps Money

Don't dwell too much on pinching pennies at the cost of your time. So Starbucks is giving out free 12oz coffees today? Free sounds enticing, but is it really worth the gas, the time in dreadful traffic, and the wait in line? View time as happiness. If you have more time, you can do more of the things you want to do. If you just feel like you have a lot of time, you feel much more free.

With that in mind, you should consider how purchases will affect your future time. Ask "will this really make me happier next week?". If you are contemplating a new TV, you might think it'll make you happier. Have so many friends over to play FIFA on the so-much-HD. But television doesn't make you happier or any less stressed. It's a numbing time-sink. Or perhaps think when you are debating between two similar products such as a Nexus 5 or an HTC One. Sure, when placed side-by-side, those extra megapixels and megahertz might seem like a huge advantage. But think about the product in isolation and see if it will really benefit your future time.

Give it Away

Warren Buffett pledged to give away 99% of his wealth, whether in his lifetime or posthumously. Giving away, paying it forward, being charitable makes people happy, even happier than if they had splurged on themselves.

Helping others in need makes it feel like you have a lot of extra free time to give away. And feeling like you have a lot of free time is less of a boulder on your back. So invest in others and invest in relationships. We're inherently social creatures, though sometimes selfish, and that selfishness works against us. Donate to a charity where you know exactly where your money is going, or buy something nice for a family member or friend without pressure. It's money happily spent.

Mark SurmanHow do we get depth *and* scale?

We want millions of people learning about the web every day with Mozilla. The ‘why’ is simple: web literacy is quickly becoming just as important as reading, writing and math. By 2024, there will be more than 5 billion people on the web. And, by then, the web will shape our everyday lives even more than it does today. Understanding how it works, how to build it and how to make it your own will be essential for nearly everyone.

Maker Party Uganda

The tougher question is ‘how’ — how do we teach the web with both the depth *and* scale that’s needed? Most people who tackle a big learning challenge pick one path or the other. For example, the educators in our Hive Learning Networks are focused on depth of learning. Everything they do is high touch, hands-on and focused on innovating so learning happens in a deep way. On the flip side, MOOCs have quickly shown what scale looks like, but they almost universally have high dropout rates and limited learning impact for all but the most motivated learners. We rarely see depth and scale go together. Yet, as the web grows, we need both. Urgently.

I’m actually quite hopeful. I’m hopeful because the Mozilla community is deeply focused on tackling this challenge head on, with people rolling up their sleeves to help people learn by making and organizing themselves in new ways that could massively grow the number of people teaching the web. We’re seeing the seeds of both depth and scale emerge.

This snapped into focus for me at MozFest East Africa in Kampala a few days ago. Borrowing from the MozFest London model, the event showcased a variety of open tech efforts by Mozilla and others: FirefoxOS app development; open data tools from a local org called Mountabatten; Mozilla localization; Firefox Desktop engineering; the work of the Ugandan National Information Technology Agency. It also included a huge Maker Party, with 200 young Ugandans showing up to learn and hack with Webmaker tools.

Maker Party Uganda

The Maker Party itself was impressive — pulled off well despite rain and limited connectivity. But what was more impressive was seeing how the Mozilla community is stepping up to plant the seeds of teaching the web at depth and scale, which I’d call out as:

Mentors: IMHO, a key to depth is humans connecting face to face to learn. We’ve set up a Webmaker Mentors program in the last year to encourage this kind of learning. The question has been: will people step up to do this kind of teaching and mentoring, and do it well? MozFest EA was a promising start: 30 motivated mentors showed up prepared, enthusiastic and ready to help the 200 young people at the event learn the web.

Curriculum: one of the hard parts of scaling a volunteer-based mentor program is getting people to focus their teaching on the most important web literacy skills. We released a new collection of open source web literacy curriculum over the past couple of months designed to solve this problem. We weren’t sure how things would work out, but I’d say MozFestEA is early evidence that curriculum can do a good job of helping people quickly understand what and how to teach. Here, each of the mentors was confidently and articulately teaching a piece of the web literacy framework using Webmaker tools.

Making as learning: another challenge is getting people to teach / learn deeply based on written curriculum. Mozilla focuses on ‘making as learning’ as a way past this — putting hands-on, project-based learning at the heart of most of our Webmaker teaching kits. For example, the basic remix teaching kit gets learners quickly hacking and personalizing their favourite big brand web site, which almost always gets people excited and curious. More importantly: this ‘making as learning’ approach lets mentors adapt the experience to a learner’s interests and local context in real time. It was exciting to see the Ugandan mentors having students work on web pages focused on local school tasks and local music stars, which worked well in making the standard teaching kits come to life.

Clubs: mentors + curriculum + making can likely get us to our 2014 goal of 10,000 people around the world teaching web literacy with Mozilla. But the bigger question is: how do we keep the depth while scaling to a much bigger level? One answer is to create more ‘nodes’ in the Webmaker network and get them teaching all year round. At MozFest EA, there was a session on Webmaker Clubs — after school web literacy clubs run by students and teachers. This is an idea that floated up from the Mozilla community in Uganda and Canada. In Uganda, the clubs are starting to form. For me, this is exciting. Right now we have 30 contributors working on Webmaker in Uganda. If we opened up clubs in schools, we could imagine 100s or even 1000s. I think clubs like this are a key next step towards scale.

Community leadership: the thing that most impressed me at MozFestEA was the leadership from the community. San Emmanuel James and Lawrence Kisuuki have grown the Mozilla community in Uganda in a major way over the last couple of years. More importantly, they have invested in building more community leaders. As one example, they organized a Webmaker train-the-trainer event a few weeks before MozFestEA. The result was what I described above: confident mentors showing up ready to teach, including people other than San and Lawrence taking leadership within the Maker Party side of the event. I was impressed. This is key to both depth and scale: building more and better Mozilla community leaders around the world.

Of course, MozFestEA was just one event for one weekend. But, as I said, it gave me hope: it made me feel that the Mozilla community is taking the core building blocks of Webmaker and shaping them into something that could have a big impact.

With Maker Party kicking off this week, I suspect we’ll see more of this in coming months. We’ll see more people rolling up their sleeves to help people learn by making. And more people organizing themselves in new ways that could massively grow the number of people teaching the web. If we can make this happen this summer, much bigger things lie on the path ahead.


Filed under: education, mozilla, webmakers

Gregory SzorcUpdates to firefoxtree Mercurial extension

My Please Stop Using MQ post has been generating a lot of interest in bookmark-based workflows at Mozilla. To make adoption easier, I quickly authored an extension to add remote refs of the Firefox repositories to Mercurial.

There was still a bit of confusion, and enough gripes about workflows, that I thought it would be best to update the extension to make things more pleasant.

Automatic tree names

People wanted the ability to easily pull/aggregate the various Firefox trees without adding extra configuration to an hgrc file.

With firefoxtree, you can now hg pull central or hg pull inbound or hg pull aurora and it just works.

Pushing with aliases doesn't yet work. It is slightly harder to do in the Mercurial API. I have a solution, but I'm validating some code paths to ensure it is safe. This feature will likely appear soon.

fxheads command

Once people adopted unified repositories with heads from multiple repositories, they asked how they could quickly identify the heads of the pulled Firefox repositories.

firefoxtree now provides a hg fxheads command that prints a concise output of the commits constituting the heads of the Firefox repos. e.g.

$ hg fxheads
224969:0ec0b9ac39f0 aurora (sort of) bug 898554 - raise expected hazard count for b2g to 4 until they are fixed, a=bustage+hazbuild-only
224290:6befadcaa685 beta Tagging /src/mdauto/build/mozilla-beta 1772e55568e4 with FIREFOX_RELEASE_31_BASE a=release CLOSED TREE
224848:8e8f3ba64655 central Merge inbound to m-c a=merge
225035:ec7f2245280c fx-team fx-team/default Merge m-c to fx-team
224877:63c52b7ddc28 inbound Bug 1039197 - Always build js engine with zlib. r=luke
225044:1560f67f4f93 release release/default tip Automated checkin: version bump for firefox 31.0 release. DONTBUILD CLOSED TREE a=release

Please note that the output is based upon local-only knowledge: you'll need to pull to ensure data is current.

Reject pushing multiple heads

People were complaining that bookmark-based workflows resulted in Mercurial trying to push multiple heads to a remote. This complaint stems from the fact that Mercurial's default push behavior is to find all commits missing from the remote and push them. This behavior is extremely frustrating for Firefox development because the Firefox repos only have a single head and pushing multiple heads will only result in a server hook rejecting the push (after wasting a lot of time transferring that commit data).

firefoxtree now will refuse to push multiple heads to a known Firefox repo before any commit data is sent. In other words, we fail fast so your time is saved.

firefoxtree also changes the default behavior of hg push when pushing to a Firefox repo. If no -r argument is specified, hg push to a Firefox repo will automatically remap to hg push -r .. In other words, we attempt to push the working copy's commit by default. This change establishes a sensible default and likely working behavior when typing just hg push.

I am a bit on the fence about changing the default behavior of hg push. On one hand, it makes total sense. On the other, silently changing the default behavior of a built-in command is a little dangerous. I can easily see this backfiring when people interact with non-Firefox repos. I encourage people to get in the habit of typing hg push -r . because that's what you should be doing.
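
For illustration, here is roughly how that plays out in practice (the repository URL is just an example of a known Firefox repo):

# Without firefoxtree, a bare push tries to send every head missing from the remote.
# With firefoxtree enabled and a Firefox repo as the target, this:
hg push ssh://hg.mozilla.org/integration/mozilla-inbound

# behaves as if you had typed:
hg push -r . ssh://hg.mozilla.org/integration/mozilla-inbound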

Installing firefoxtree

Within the next 48 hours, mach mercurial-setup should prompt to install firefoxtree. Until then, clone https://hg.mozilla.org/hgcustom/version-control-tools and ensure your ~/.hgrc file has the following:

[extensions]
firefoxtree = /path/to/version-control-tools/hgext/firefoxtree

You likely already have a copy of version-control-tools in ~/.mozbuild/version-control-tools.

It is completely safe to install firefoxtree globally: the extension will only modify behavior of repositories that are clones of Firefox repositories.

Pete MooreWeekly review 2014-07-16

Highlights

Last week was build duty, so there is much less to report this week. I think we’ll have plenty to talk about though (wink wink).

The l10n vcs sync review was done by aki; I posted my responses and am writing up a patch which I hope to land in the next 24 hours. That will be l10n done.

I’ve been busy triaging queues too, inviting people to meetings that I don’t attend myself, and cleaning up a lot of bugs (not just the triaging, but in general).

Today’s major incident was fallout from the panda train 3 move - finally resolved now (yay). Basically, devices.json was out-of-date on the foopies. Disappointingly, I thought to check devices.json but did not consider that it would be out-of-date on the foopies, since I knew we’d been having lots of reconfigs every day. For other reasons, though, the foopy updates were not performed (hanging ssh sessions when updating them), so it took a while until this was discovered (by dustin!). In the meantime I had to disable and re-enable more than 250 pandas.

Other than that, working ferociously on finishing off vcs sync.

I think I probably updated 200 bugs this week! It was quite a clean-up.

Frédéric HarperCommunity Evangelist: Firefox OS developer outreach at MozCamp India

Copyright Ratnadeep Debnath
http://j.mp/1jIYxWb (click to enlarge)

At the end of June, I was in India to do a train-the-trainer session at MozCamp India. The purpose of the session Janet Swisher (the first time we worked together, and I think we made a winning combo) and I delivered was to help Mozillians become Community Evangelists. Our goal was to help them become part of our Technical Evangelist team: helping us inspire and enable developers in India to be successful with Firefox OS (we are starting with this technology because of the upcoming launch).

We could have filled a full day or more on developer outreach, but we only had three hours, in which we showed the attendees how they can contribute, ran a fun speaker idol, and worked on their project plans. Contribution can happen at many levels: public speaking, helping developers build Firefox OS applications, answering questions on StackOverflow, and more.

Since we had parallel tracks during our session, we gave it twice so attendees had the chance to attend more than one track. For those who were there for the Saturday session, the following slides are the ones we used:

I also recorded the session for those of you that would like to refresh your memory:

For the session on Sunday, we fixed some slides and adapted our session to give us more time for the speaker idol as well as the project plan. Here are the slides:

If you were not there, I would suggest you follow the slides and the video from the second day, as it’s an improved version of the first one (not that the first one was not good, but it was the first time we gave this session).

From the feedback we got, it was a pretty good session, and we were happy to see the excitement of the Indian community about this community evangelist role. I can’t wait to see what the Mozilla community in India will do! If you too, Mozillian or not, have any interest in evangelizing the open web, you should join the Mozilla Evangelism mailing list.

 


--
Community Evangelist: Firefox OS developer outreach at MozCamp India is a post on Out of Comfort Zone from Frédéric Harper

Luis VillaDesigners and Creative Commons: Learning Through Wikipedia Redesigns

tl;dr: Wikipedia redesigns mostly ignore attribution of Wikipedia authors, and none approach the problem creatively. This probably says as much or more about Creative Commons as it does about the designers.

disclaimer-y thing: so far, this is for fun, not work; haven’t discussed it at the office and have no particular plans to. Yes, I have a weird idea of fun.

A mild refresh from interfacesketch.com.

It is no longer surprising when a new day brings a new redesign of Wikipedia. After seeing one this weekend with no licensing information, I started going back through seventeen of them (most of the ones listed on-wiki) to see how (if at all) they dealt with licensing, attribution, and history. Here’s a summary of what I found.

Completely missing

Perhaps not surprisingly, many designers completely remove attribution (i.e., history) and licensing information in their designs. Seven of the seventeen redesigns I surveyed were in this camp. Some of them were in response to a particular, non-licensing-related challenge, so it may not be fair to lump them into this camp, but good designers still deal with real design constraints, and licensing is one of them.

History survives – sometimes

The history link is important, because it is how we honor the people who wrote the article, and comply with our attribution obligations. Five of the seventeen redesigns lacked any licensing information, but at least kept a history link.

Several of this group included some legal information, such as links to the privacy policy or, in one case, to the Wikimedia Foundation trademark page. This suggests that our current licensing information may be presented in a worse way than some of our other legal information, since it seems to be getting cut out even by designers who are tolerant of some of our other legalese.

Same old, same old

Four of the seventeen designs keep the same old legalese, though one fails to comply by making it impossible to get to the attribution (history) page. Nothing wrong with keeping the existing language, but it could reflect a sad conclusion that licensing information isn’t worth the attention of designers; or (more generously) that they don’t understand the meaning/utility of the language, so it just gets cargo-culted around. (Credit to Hamza Erdoglu, who was the only mockup designer who specifically went out of his way to show the page footer in one of his mockups.)

A winner, sort of!

Of the seventeen sites I looked at, exactly one did something different: Wikiwand. It is pretty minimal, but it is something. The one thing: as part of the redesign, it adds a big header/splash image to the page, and then adds a new credit specifically for the author of the header/splash image down at the bottom of the page with the standard licensing information. Arguably it isn’t that creative, just complying with their obligations from adding a new image, but it’s at least a sign that not everyone is asleep at the wheel.

Observations

This is surely not a large or representative sample, so all my observations from this exercise should be taken with a grain of salt. (They’re also speculative since I haven’t talked to the designers.) That said, some thoughts besides the ones above:

  • Virtually all of the designers who wrote about why they did the redesign mentioned our public-edit-nature as one of their motivators. Given that, I expected history to be more frequently/consistently addressed. Not clear whether this should be chalked up to designers not caring about attribution, or the attribution role of history being very unclear to anyone who isn’t an expert. I suspect the latter.
  • It was evident that some of these designers had spent a great deal of time thinking about the site, and yet were unaware of licensing/attribution. This suggests that people who spend less time with the site (i.e., 99.9% of readers) are going to be even more ignorant.
  • None of the designers felt attribution and licensing was even important enough to experiment on or mention in their writeups. As I said above, this is understandable but sort of sad, and I wonder how to change it.

Postscript, added next morning:

I think it’s important to stress that I didn’t link to the individual sites here, because I don’t want to call out particular designers or focus on their failures/oversights. The important (and as I said, sad) thing to me is that designers are, historically, a culture concerned with licensing and attribution. If we can’t interest them in applying their design talents to our problem, in the context of the world’s most famously collaborative project, we (lawyers and other Commoners) need to look hard at what we’re doing, and how we can educate and engage designers to be on our side.

I should also add that the WMF design team has been a real pleasure to work with on this problem, and I look forward to doing more of it. Some stuff still hasn’t made it off the drawing board, but they’re engaged and interested in this challenge. Here is one example.

Byron Joneshappy bmo push day!

switching the default monospace font on bmo yesterday highlighted a few issues with the fira-mono typeface that we’d like to see addressed before we use it.  as a result comments are now displayed using their old font.

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Jess KleinThe first 6 weeks of Hive Labs

Six weeks ago, Atul Varma, Chris Lawrence, Kat Baybrooke and I embarked on an experiment we call Hive Labs. Let me tell you about it with a little slideshow I made about our first 6 weeks, to the tune of Josh Gad singing In Summer from the movie Frozen.





So, in summary (or if you aren't the musical slideshare type), the first 6 weeks have been great. We did a bunch of listening and research, including attending events and hackjams run by and for Hive members. Here's a neat worksheet from a Mouse-run Webmaker training in New York.

We did some research and design on tools and resources to support prototyping:




Sherpa is a codename for a tool that helps prototypers define a design opportunity and openly work through the process of designing a solution. We designed some mockups to see if this is a direction that we should pursue. Sherpa could be a back-end for the "Cupcake dashboard" or be a standalone tool. We spun up an instance of the "Cupcakes" dashboard designed by the Firefox UX team to help figure out if it is a useful tool to surface prototypes.

We also prototyped a snippet for Firefox to promote Maker Party, worked on an idea for self guided Webmaking and began work on a Net Neutrality Teaching Kit.

Finally, we've shipped some things:

The No-Fi Lo-Fi Teaching Kit asks participants the question: how can we empower educators to teach the web in settings where connectivity isn't guaranteed?

With the Mobile Design Teaching Kit, participants play with, break apart and modify mobile apps in order to understand how they work as systems. This teaching kit is designed to explore a few activities that can be mixed and mashed into workshops for teens or adults who want to design mobile apps. Participants will tinker with paper prototyping, design mindmaps and program apps while learning basic design and webmaking concepts.

A local and a global Hive Learning Network directory

... and a section on Webmaker.org to help guide mentors through making Teaching Kits and Activities:



The first 6 weeks have been great, and we are going to continue to listen, create and deliver based on needs from the community. We have lots more to build. We want to do this incrementally, partly to release sooner, and partly to build momentum through repeated releases.


Armen ZambranoDeveloping with GitHub and remote branches

I have recently started contributing using Git by using GitHub for the Firefox OS certification suite.

It has been interesting switching from Mercurial to Git. I honestly believed it would be more straightforward, but I have to re-read things again and again until the new ways sink in.

jgraham shared some notes with me (thanks!) about what his workflow looks like, and I want to document it for my own sake and perhaps yours:
git clone git@github.com:mozilla-b2g/fxos-certsuite.git

# Time passes

# To develop something on master
# Pull in all the new commits from master

git fetch origin

# Create a new branch (this will track master from origin,
# which we don't really want, but that will be fixed later)

git checkout -b my_new_thing origin/master

# Edit some stuff

# Stage it and then commit the work

git add -p
git commit -m "New awesomeness"

# Push the work to a remote branch
git push --set-upstream origin HEAD:jgraham/my_new_thing

# Go to the GH UI and start a pull request

# Fix some review issues
git add -p
git commit -m "Fix review issues" # or use --fixup

# Push the new commits
git push

# Finally, the review is accepted
# We could rebase at this point, however,
# we tend to use the Merge button in the GH UI
# Working off a different branch is basically the same,
# but you replace "master" with the name of the branch you are working off.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Alon ZakaiMassive, a new work-in-progress asm.js benchmark - feedback is welcome!

Massive is a new benchmark for asm.js. While many JavaScript benchmarks already exist, asm.js - a strict subset of JavaScript, designed to be easy to optimize - poses some new challenges. In particular, asm.js is typically generated by compiling from another language, like C++, and people are using that approach to run large asm.js codebases, by porting existing large C++ codebases (for example, game engines like Unity and Unreal).

Very large codebases can be challenging to optimize for several reasons: often they contain very large functions, for example, which stress register allocation and other compiler optimizations. Total code size can also cause pauses while the browser parses and prepares to execute a very large script. Existing JavaScript benchmarks typically focus on small programs and tend to measure throughput, ignoring things like how responsive the browser is (which matters a lot for the user experience). Massive does focus on those things, by running several large real-world codebases compiled to asm.js and testing them on throughput, responsiveness, preparation time and variance. For more details, see the FAQ at the bottom of the benchmark page.

Massive is not finished yet, it is a work in progress - the results should not be taken seriously yet (bugs might cause some things to not be measured accurately, etc.). Massive is being developed as an open source project, so please test it and report your feedback. Any issues you find or suggestions for improvements are very welcome!

Gregory SzorcPython Packaging Do's and Don'ts

Are you someone who casually interacts with Python but doesn't know its inner workings? Then this post is for you. Read on to learn why some things are the way they are and how to avoid making some common mistakes.

Always use Virtualenvs

It is an easy trap to view virtualenvs as an obstacle, a distraction from accomplishing something. People see me adding virtualenvs to build instructions and they say, "I don't use virtualenvs; they aren't necessary; why are you doing that?"

A virtualenv is effectively an overlay on top of your system Python install. Creating a virtualenv can be thought of as copying your system Python environment into a local location. When you modify virtualenvs, you are modifying an isolated container. Modifying virtualenvs has no impact on your system Python.

A goal of a virtualenv is to isolate your system/global Python install from unwanted changes. When you accidentally make a change to a virtualenv, you can just delete the virtualenv and start over from scratch. When you accidentally make a change to your system Python, it can be much, much harder to recover from that.

Another goal of virtualenvs is to allow different versions of packages to exist. Say you are working on two different projects and each requires a specific version of Django. With virtualenvs, you install one version in one virtualenv and a different version in another virtualenv. Things happily coexist because the virtualenvs are independent. Contrast with trying to manage both versions of Django in your system Python installation. Trust me, it's not fun.
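
As a minimal sketch of that scenario (paths and version numbers are only illustrative):

# One isolated environment per project, each with its own Django:
virtualenv ~/venvs/project-a
virtualenv ~/venvs/project-b
~/venvs/project-a/bin/pip install "Django==1.6.5"
~/venvs/project-b/bin/pip install "Django==1.4.13"

# Activate whichever one you are working in; the other is untouched:
source ~/venvs/project-a/bin/activate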

Casual Python users may not encounter scenarios where virtualenvs make their lives better... until they do, at which point they realize their system Python install is beyond saving. People who eat, breathe, and die Python run into these scenarios all the time. We've learned how bad life without virtualenvs can be and so we use them everywhere.

Use of virtualenvs is a best practice. Not using virtualenvs will result in something unexpected happening. It's only a matter of time.

Please use virtualenvs.

Never use sudo

Do you use sudo to install a Python package? You are doing it wrong.

If you need to use sudo to install a Python package, that almost certainly means you are installing a Python package to your system/global Python install. And this means you are modifying your system Python instead of isolating it and keeping it pristine.

Instead of using sudo to install packages, create a virtualenv and install things into the virtualenv. There should never be permissions issues with virtualenvs - the user that creates a virtualenv has full control over it.

Never modify the system Python environment

On some systems, such as OS X with Homebrew, you don't need sudo to install Python packages because the user has write access to the Python directory (/usr/local in Homebrew).

For the reasons given above, don't muck around with the system Python environment. Instead, use a virtualenv.

Beware of the package manager

Your system's package manager (apt, yum, etc) is likely using root and/or installing Python packages into the system Python.

For the reasons given above, this is bad. Try to use a virtualenv, if possible. Try to not use the system package manager for installing Python packages.

Use pip for installing packages

Python packaging has historically been a mess. There are a handful of tools and APIs for installing Python packages. As a casual Python user, you only need to know of one of them: pip.

If someone says install a package, you should be thinking create a virtualenv, activate a virtualenv, pip install <package>. You should never run pip install outside of a virtualenv. (The exception is to install virtualenv and pip itself, which you almost certainly want in your system/global Python.)
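
As a concrete sketch of that workflow (the package name is just an example):

virtualenv venv              # create an isolated environment in ./venv
source venv/bin/activate     # activate it (venv\Scripts\activate on Windows)
pip install requests         # installs into the virtualenv only, never the system Python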

Running pip install will install packages from PyPI, the Python Package Index, by default. It's Python's official package repository.

There are a lot of old and outdated tutorials online about Python packaging. Beware of bad content. For example, if you see documentation that says use easy_install, you should be thinking, easy_install is a legacy package installer that has largely been replaced by pip, I should use pip instead. When in doubt, consult the Python packaging user guide and do what it recommends.

Don't trust the Python in your package manager

The more Python programming you do, the more you learn to not trust the Python package provided by your system / package manager.

Linux distributions such as Ubuntu that sit on the forward edge of versions are better than others. But I've run into enough problems with the OS or package manager maintained Python (especially on OS X), that I've learned to distrust them.

I use pyenv for installing and managing Python distributions from source. pyenv also installs virtualenv and pip for me, packages that I believe should be in all Python installs by default. As a more experienced Python programmer, I find pyenv just works.
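
A typical pyenv session looks roughly like this (the version number is just an example, and the last line assumes pyenv's shims are on your PATH):

pyenv install 2.7.8     # build and install a Python from source under ~/.pyenv
pyenv global 2.7.8      # make it the default Python for your user
python --version        # now resolves to the pyenv-managed interpreter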

If you are just a beginner with Python, it is probably safe to ignore this section. Just know that as soon as something weird happens, start suspecting your default Python install, especially if you are on OS X. If you suspect trouble, use something like pyenv to enforce a buffer so the system can have its Python and you can have yours.

Recovering from the past

Now that you know the preferred way to interact with Python, you are probably thinking oh crap, I've been wrong all these years - how do I fix it?

The goal is to get a Python install somewhere that is as pristine as possible. You have two approaches here: cleaning your existing Python or creating a new Python install.

To clean your existing Python, you'll want to purge it of pretty much all packages not installed by the core Python distribution. The exception is virtualenv, pip, and setuptools - you almost certainly want those installed globally. On Homebrew, you can uninstall everything related to Python and blow away your Python directory, typically /usr/local/lib/python*. Then, brew install python. On Linux distros, this is a bit harder, especially since most Linux distros rely on Python for OS features and thus they may have installed extra packages. You could try a similar approach on Linux, but I don't think it's worth it.
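
On Homebrew, that clean-up is roughly the following (paths are the typical defaults; double-check yours before deleting anything):

pip freeze                           # see what has accumulated in the system Python
brew uninstall python                # remove the Homebrew-managed Python
rm -rf /usr/local/lib/python2.7      # blow away the leftover Python directory
brew install python                  # reinstall a clean copy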

Cleaning your system Python and attempting to keep it pure are ongoing tasks that are very difficult to keep up with. All it takes is one dependency to get pulled in that trashes your system Python. Therefore, I shy away from this approach.

Instead, I install and run Python from my user directory. I use pyenv. I've also heard great things about Miniconda. With either solution, you get a Python in your home directory that starts clean and pure. Even better, it is completely independent from your system Python. So if your package manager does something funky, there is a buffer. And, if things go wrong with your userland Python install, you can always nuke it without fear of breaking something in system land. This seems to be the best of both worlds.

Please note that installing packages in the system Python shouldn't be evil. When you create virtualenvs, you can - and should - tell virtualenv to not use the system site-packages (i.e. don't use non-core packages from the system installation). This is the default behavior in virtualenv. It should provide an adequate buffer. But from my experience, things still manage to bleed through. My userland Python install is extra safety. If something wrong happens, I can only blame myself.
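
In current versions of virtualenv that isolation is the default, but you can see (or deliberately override) it explicitly; a quick sketch, with arbitrary environment names:

virtualenv venv                          # isolated from system site-packages (the default)
virtualenv --system-site-packages venv2  # opts back in; usually not what you want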

Conclusion

Python's long and complicated history of package management makes it very easy for you to shoot yourself in the foot. The long list of outdated tutorials on The Internet make this a near certainty for casual Python users. Using the guidelines in this post, you can adhere to best practices that will cut down on surprises and rage and keep your Python running smoothly.

Darrin HeneinSide Tabs: Prototyping An Unexpected Productivity Hack

A few months ago, I came across an interesting Github repo authored by my (highly esteemed!) colleague Vlad Vukicevic called VerticalTabs. This is a Firefox add-on which moves your tabs, normally organized horizontally along the top of your browser, to a vertical list docked to either the left or right side of the window. Vlad’s add-on worked great, but I saw a few areas where a small amount of UX and visual design love could make this something I’d be willing to trial. So, I forked.

 

After cloning the repo, I spent a couple of days modifying some of the layout, adding a new dark theme to the CSS, and replacing a handful of the images and icons with my own. Ultimately, it was probably a single-digit number of hours spent getting my code to a place that I was happy with. Certainly, there are some issues on certain operating systems, and things like Firefox’s pinned tabs don’t get the treatment I would love them to have, but that was not the point. The point of my experiment was to learn.

Learn? Learn what?

Let’s step back for a moment. Here at Mozilla, we like to experiment. Hack, Play, Make… whatever you’d like to call it. But we don’t like to waste time: we do things with purpose. If we build something, we try to make sure it’s worth the time and effort involved. As a Design Engineer on the UX team, I (along with others) work hard to make clear the value of prototyping to my colleagues. What is the minimal thing we can make to test our assumptions? The reality is that when designing digital products, how it works is equally (arguably more) important than how it looks. Steve Jobs said it best:

Design is how it works.

 

Let’s bring it back to Side Tabs now (I’ll be using Side Tabs and VerticalTabs interchangeably). The hypothesis I was hoping to validate was that there was a subset of the Firefox user base that would find value in the layout that Side Tabs enabled. I wanted to bring this add-on to a level where users would find it delightful and usable enough to at least give it a fair shot.

It’s critically important that before you unleash your experiment and start learning from it, you mitigate (as much as possible) any sources of bias or false-negatives. Make (or fake) it to a point where the data you collect is not influenced by poor design, conflated features, or poor performance/usability. It turned out that this delta, from Vlad’s work to my own version, was a small amount of work. I went for it, pushed it a few steps in the right direction, and shared it with as many people as I could.

I want to restate: the majority of the credit for my particular version of VerticalTabs goes to those who published the work on top of which I built, namely Vlad and Philipp von Weitershausen. Furthermore, the incredibly talented Stephen Horlander has explored the idea of Side Tabs as well, and you will notice his work helped inspire the visual language I used in my implementation. This is how great things are built; rarely from scratch, but more commonly on the shoulders and brilliance of those who came before you.

My Github repo (at time of writing) has 13 stars and is part of a graph with 19 forks. Similarly, I’ve had colleagues fork my repo and take it further, adding improvements and refinements as they go (see my colleague Hayden’s work for one promising effort). I’ve had great response on Twitter from developers and users who love the add-on and who can’t wait to share their ideas and thoughts. It’s awesome to see ideas take shape and grow so organically like this. This is collaboration.

I’ve been using Side Tabs full-time in my default browser (Firefox Nightly) for 5 or 6 months now, and I’ve learned a ton. Aside from now preferring a horizontal layout (made possible by stacking tabs vertically) on a screen pressed for vertical space, I’ve discovered a use case that I never would have imagined had I simply mocked this idea up in Photoshop.

I use productivity tools heavily, from calendars to to-do lists and beyond. One common scenario is this: I click on a link, and it’s something I find interesting or valuable, but I don’t want to address it right now. I’ve experimented with Pocket (I still use this for longer form writing I wish to read later) but find that most of my Read Later lists are Should-but-Never-Actually-Read-Later lists. Out of sight, out of mind, right?

Saving for Later

The Firefox UX Team has actually done some great research on Save for Later behaviour. There is a great blog post here as well as a more detailed research summary here.

By quite a pleasant surprise, the vertical layout of Side Tabs surfaced a solution to me. I found myself appropriating my tab list into a priority stack, always giving my focus to the tab at the bottom of the list. When I open something I want to keep around, I simply drag it in the list to a spot based on its relative importance; right to the top for ‘Someday’ items, 2nd or 3rd from the bottom if I want to take a peek once I’m done my task at hand (which is always the bottom tab). I’ve even moved to having two Firefox windows open, which essentially gives me two separate task lists: one for personal tabs/to-dos and one for work.

 

So where does this leave us? Quite clearly, it’s shown the immediate value of investing in an interactive prototype versus using only static mockups: people can use the design, see how it works, and in this case, expose a usage pattern I hadn’t seen before. The most common argument against prototyping is the cost involved (time, chiefly), and in my experience the value of building your designs (to varying levels of fidelity, based on the project/hypothesis) always outweighs the cost. Building the design sheds light on design problems and solutions that traditional, static mockups often fail to illuminate.

With regard to Side Tabs itself, I learned that in some cases, users treat their tabs as tasks to accomplish, and when a task is completed, its tab is closed. Increasingly, our work and personal tasks exist online (email, banking, shopping, etc.) and are accessed through the browser. Some tasks (tabs) have higher priority or urgency than others, and whether visible or not, there is an implicit order in which a user will attend to their tabs. Helping users better organize their tabs made using the browser a more productive, delightful experience. And anything that can make an experience more delightful or useful is not only of great value and importance to the product or team I work with, but also to me as a designer.

Get the Add-on Side Tabs on Github