Brian Birtles: After 10 years

Yesterday marked 10 years to the day since I posted my first patch to Bugzilla. It was a small patch to composite SVG images with their background (and not just have a white background).

Since then I’ve contributed to Firefox as a volunteer, an intern, a contractor, and, as of 3 years ago tomorrow, a Mozilla Japan employee.

It’s still a thrill and privilege to contribute to Firefox. I’m deeply humbled by the giants I work alongside who support me like I was one of their own. In the evening when I’m tired from the day I still often find myself bursting into a spontaneous prayer of thanks that I get to work on this stuff.

So here are 8 reflections from the last 10 years. It should have been 10 but I ran out of steam.

Why I joined

  1. I got involved with Firefox because, as a Web developer, I wanted to make the Web platform better. Firefox was in a position of influence and anyone could join in. It was open technically and culturally. XPCOM took a little getting used to but everyone was very supportive.

What I learned

  1. Don’t worry about the boundaries. When I first started hacking on SVG code I would be afraid to touch any source file outside /content/svg/content/src (now, thankfully, dom/svg!). When I started on the SVG working group I would think, “we can’t possibly change that, that’s another working group’s spec!” But when Cameron McCormack joined Mozilla I was really impressed how he fearlessly fixed things all over the tree. As I’ve become more familiar and confident with Firefox code and Web specs I’ve stopped worrying about artificial boundaries like folders and working groups and become more concerned with fixing things properly.

  2. Blessed are the peacemakers. It’s really easy to get into arguments on the Internet that don’t help anyone. I once heard a colleague consider how Jesus’ teaching applies to the Internet. He suggested that sometimes when someone makes a fool of us on the Internet the best thing is just to leave it and look like a fool. I find that hard to do and don’t do it often, but I’m always glad when I do.

    Earlier this year another colleague impressed me with her very graceful response to Brendan’s appointment as CEO. I thought it was a great example of peace-making.

  3. Nothing is new. I’ve found these words very sobering:

    What has been will be again,
      what has been done will be done again;
      there is nothing new under the sun.
    Is there anything of which one can say,
      “Look! This is something new”?
    It was here already, long ago;
      it was here before our time. (Ecclesiastes 1:9–10)

    It’s so easy to get caught up in some new technology—I got pretty caught up defending SVG Animation (aka SMIL) for a while when I worked on it. Taking a step back though, that new thing has almost invariably been done before in some form, and it will certainly be superseded in my lifetime. In fact, every bit of code I’ve ever written will almost certainly be either rewritten or abandoned altogether within my lifetime.

    In light of that I try to fixate less on each new technology and more on the process: what kind of person was I when I implemented that (now obsolete) feature? What motivated me to work at it each day? That, I believe, is eternal.

How I hope Mozilla will shape up over the next 10 years

  1. I hope we’ll be the most welcoming community on the Web

    I don’t mean that we’ll give free hugs to new contributors, or that we’ll accept any patch that manages to enter Bugzilla, or we’ll entertain any troublemaker who happens upon #developers. Rather, I hope that anyone who wants to help out finds overwhelming encouragement and enthusiasm, without having to sign up to an ideological agenda first. Something like this interaction.

  2. I hope we’ll stay humble

    I’d love to see Mozilla be known as servants of the Web but when things go well there’s always the danger we’ll become arrogant, less welcoming of others’ ideas, and deaf to our critics. I hope we can celebrate our victories while taking a modest view of ourselves. Who knows, maybe our harshest critics will become some of our most valuable contributors.

  3. I hope we’ll talk less, show more

    By building amazing products through the input of thousands of people around the world we can prove Open works, we can prove you don’t need to choose between privacy and convenience. My initial interest in Mozilla was because of its technical excellence and welcoming community. The philosophy came later.

  4. I hope we’ll make fewer t-shirts

    Can we do, I don’t know, a shirt once in a while? Socks even? Pretty much anything else!

Daniel Stenberg: Changing networks on Mac with Firefox

Not too long ago I blogged about my work to better deal with changing networks while Firefox is running. That job was basically two parts.

A) generic code to handle receiving such a network-changed event and then

B) a platform-specific part, in this case for Windows, that detected such a network change and sent the event

Today I’ve landed yet another fix for part B called bug 1079385, which detects network changes for Firefox on Mac OS X.

I’ve never programmed anything on the Mac before, so this was sort of my christening in this environment. I mean, I’ve written countless POSIX-compliant programs, including curl and friends, that certainly build and run on Mac OS just fine, but I had never before used the Mac-specific APIs to do things.

I got a mac mini just two weeks ago to work on this. Getting it up and running and building my first Firefox from source took, all in all, less than three hours. Learning the details of the Mac API world was much more trouble, and I can’t say that I’m mastering it now either, but I did at least figure out how to detect when IP addresses on the interfaces change, and a changed address is a pretty good signal that the network changed somehow.
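
For anyone curious what listening for such changes can look like on OS X, here is a minimal sketch using the SystemConfiguration framework’s dynamic store. This is my own illustration of one possible approach, not code from the actual patch in bug 1079385, which may well use a different mechanism.

#include <SystemConfiguration/SystemConfiguration.h>
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

// Called whenever one of the watched dynamic-store keys changes, for
// example the global IPv4 state after switching networks.
static void NetworkChanged(SCDynamicStoreRef store, CFArrayRef changedKeys,
                           void *info) {
  printf("network configuration changed\n");
  // A real implementation would re-enumerate interface addresses here
  // and forward a network-changed event to the rest of the application.
}

int main() {
  SCDynamicStoreRef store = SCDynamicStoreCreate(
      NULL, CFSTR("network-change-watcher"), NetworkChanged, NULL);

  // Watch the global IPv4 state; it changes when addresses or routes change.
  CFStringRef key = SCDynamicStoreKeyCreateNetworkGlobalEntity(
      NULL, kSCDynamicStoreDomainState, kSCEntNetIPv4);
  CFArrayRef keys =
      CFArrayCreate(NULL, (const void **)&key, 1, &kCFTypeArrayCallBacks);
  SCDynamicStoreSetNotificationKeys(store, keys, NULL);

  CFRunLoopSourceRef source = SCDynamicStoreCreateRunLoopSource(NULL, store, 0);
  CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopDefaultMode);
  CFRunLoopRun();  // blocks; the callback fires on this run loop

  CFRelease(source);
  CFRelease(keys);
  CFRelease(key);
  CFRelease(store);
  return 0;
}

It needs to be linked against the SystemConfiguration and CoreFoundation frameworks; the same pattern works from C++ code.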

Nathan Froyd: porting rr to x86-64

(TL;DR: rr from git can record and replay 64-bit programs.  Try it for yourself!)

Over the last several months, I’ve been devoting an ever-increasing amount of my time to making rr able to trace x86-64 programs.  I’ve learned a lot along the way and thought I’d lay out all the major pieces of work that needed to be done to make this happen.

Before explaining the major pieces, it will be helpful to define some terms: the host architecture is the architecture that the rr binary itself is compiled for.  The target architecture is the architecture of the binary that rr is tracing.  These are often equivalent, but not necessarily so: you could be tracing a 64-bit binary with a 64-bit rr (host == target), but then the program starts to run a 32-bit subprocess, which rr also begins to trace (host != target).  And you have to handle both cases in a single rr session, with a single rr binary.  (64-bit rr doesn’t handle the host != target case quite yet, but all the infrastructure is there.)

All of the pieces described below are not new ideas: the major programs you use for development (compiler, linker, debugger, etc.) all have done some variation of what I describe below.  However, it’s not every day that one takes a program written without any awareness of host/target distinctions and endows it with the necessary awareness.

Quite often, a program written exclusively for 32-bit hosts has issues when trying to compile for 64-bit hosts, and rr was no exception in this regard.  Making the code 64-bit clean by fixing all the places that triggered compiler warnings on x86-64, but not on i386, was probably the easiest part of the whole porting effort.  Format strings were a big part of this: writing %llx when you wanted to print a uint64_t, for instance, which assumes that uint64_t is implemented as unsigned long long (not necessarily true on 64-bit hosts).  There were several places where long was used instead of uint32_t.  And there were even places that triggered signed/unsigned comparison warnings on 64-bit platforms only.  (Exercise for the reader: construct code where this happens before looking at the solution.)
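
As a small illustration of the format-string problem (my own example, not code from rr), the portable fix is to use the <inttypes.h> conversion macros instead of guessing the underlying type:

#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main() {
  uint64_t trace_offset = 0x1234abcd;

  // Not portable: assumes uint64_t is unsigned long long on every host.
  // On x86-64 Linux uint64_t is unsigned long, so this triggers a warning.
  // printf("offset = %llx\n", trace_offset);

  // Portable: PRIx64 expands to the correct conversion for the host.
  printf("offset = %" PRIx64 "\n", trace_offset);
  return 0;
}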

Once all the host issues are dealt with, removing all the places where rr assumed semantics or conventions of the x86 architecture was the next step.  In short, all of the code assumed host == target: we were compiled on x86, so that must be the architecture of the program we’re debugging.  How many places actually assumed this, though?  Consider what the very simplified pseudo-code of the rr main recording loop looks like:

while (true) {
  wait for the tracee to make a syscall
  grab the registers at the point of the syscall
  extract the syscall number from the registers (1)
  switch (syscall_number) {
    case SYS_read: (2)
      extract pointer to the data read from the registers (3)
      record contents of data
      break;
    case SYS_clock_gettime:
      extract pointers for argument structures from the registers
      record contents of those argument structures (4)
      break;
    case SYS_mmap: (5)
      ...
    case SYS_mmap2: (6)
      ...
    case SYS_clone: (7)
      ...
    ...
    default:
      complain about an unhandled syscall
  }
  let the tracee resume
}

Every line marked with a number at the end indicates a different instance where host and target differences come into play and/or the code might have assumed x86 semantics.  (And the numbering above is not exhaustive!)  Taking them in order:

  1. You can obtain the registers of your target with a single ptrace call, but the layout of those registers depends on your target.  ptrace returns the registers as a struct user_regs, which differs between targets; the syscall number location obviously differs between different layouts of struct user_regs.
  2. The constant SYS_read refers to the syscall number for read on the host.  If you want to identify the syscall number for the target, you’ll need to do something different.
  3. This instance is a continuation of #1: syscall arguments are passed in different registers for each target, and those registers differ in size and location between different layouts of struct user_regs.
  4. SYS_clock_gettime takes a pointer to a struct timespec.  How much data should we read from that pointer for recording purposes?  We can’t just use sizeof(struct timespec), since that’s the size for the host, not the target.
  5. Like SYS_read, SYS_mmap refers to the syscall number for mmap on the host, so we need to do something similar to SYS_read here.  But just because two different architectures have a SYS_mmap, it doesn’t mean that the calling conventions for those syscalls at the kernel level are identical.  (This distinction applies to several other syscalls as well.)  SYS_mmap on x86 takes a single pointer argument, pointing to a structure that contains the syscall’s arguments.  The x86-64 version takes its arguments in registers.  We have to extract arguments appropriately for each calling convention.
  6. SYS_mmap2 only exists on x86; x86-64 has no such syscall.  So we have to handle host-only syscalls or target-only syscalls in addition to things like SYS_read.
  7. SYS_clone has four (!) different argument orderings at the kernel level, depending on the architecture, and x86 and x86-64 of course use different argument orderings.  You must take these target differences into account when extracting arguments.  SYS_clone implementations also differ in how they treat the tls parameter, and those differences have to be handled as well.

So, depending on the architecture of our target, we want to use different constants, different structures, and do different things depending on calling conventions or other semantic differences.
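
To make the first point concrete, here is a hedged sketch (again my own, not rr’s code) of fetching a stopped tracee’s registers with ptrace when host == target. Supporting both targets from one binary means keeping both register layouts around and choosing at runtime rather than with the preprocessor, which is exactly what the machinery below addresses.

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>  // struct user_regs_struct for the *host* architecture

// Returns the syscall number of a stopped tracee, assuming the tracee has
// the same architecture as the tracer. The field name and the struct layout
// are architecture-specific, which is what rr has to abstract over.
long get_syscall_number(pid_t tracee) {
  struct user_regs_struct regs;
  ptrace(PTRACE_GETREGS, tracee, nullptr, &regs);
#if defined(__x86_64__)
  return regs.orig_rax;  // 64-bit layout
#else
  return regs.orig_eax;  // 32-bit layout
#endif
}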

The approach rr uses is that the Registers of every rr Task (rr’s name for an operating system thread) has an architecture, along with a few other things like recorded events.  Every structure for which the host/target distinction matters has an arch() accessor.  Additionally, we define some per-architecture classes.  Each class contains definitions for important kernel types and structures, along with enumerations for syscalls and various constants.

Then we try to let C++ templates do most of the heavy lifting.  In code, it looks something like this:

enum SupportedArch {
  x86,
  x86_64,
};

class X86Arch {
  /* many typedefs, structures, enums, and constants defined... */
};

class X64Arch {
  /* many typedefs, structures, enums, and constants defined... */
};

#define RR_ARCH_FUNCTION(f, arch, args...) \
  switch (arch) { \
    default: \
      assert(0 && "Unknown architecture"); \
    case x86: \
      return f<X86Arch>(args); \
    case x86_64: \
      return f<X64Arch>(args); \
  }

class Registers {
public:
  SupportedArch arch() const { ... }

  intptr_t syscallno() const {
    switch (arch()) {
      case x86:
        return u.x86.eax;
      case x86_64:
        return u.x64.rax;
    }
  }

  // And so on for argN accessors and so forth...

private:
  union RegisterUnion {
    X86Arch::user_regs x86;
    X64Arch::user_regs x64;
  } u;
};

template <typename Arch>
static void process_syscall_arch(Task* t, int syscall_number) {
  switch (syscall_number) {
    case Arch::read:
      remote_ptr buf = t->regs().arg2();
      // do stuff with buf
      break;
    case Arch::clock_gettime:
      // We ensure Arch::timespec is defined with the appropriate types so it
      // is exactly the size |struct timespec| would be on the target arch.
      remote_ptr tp = t->regs().arg2();
      // do stuff with tp
      break;
    case Arch::mmap:
      switch (Arch::mmap_argument_semantics) {
        case Arch::MmapRegisterArguments:
          // x86-64
          break;
        case Arch::MmapStructArguments:
          // x86
          break;
      }
      break;
    case Arch::mmap2:
      // Arch::mmap2 is always defined, but is a negative number on architectures
      // where SYS_mmap2 isn't defined.
      // do stuff
      break;
    case Arch::clone:
      switch (Arch::clone_argument_ordering) {
        case Arch::FlagsStackParentTLSChild:
          // x86
          break;
        case Arch::FlagsStackParentChildTLS:
          // x86-64
          break;
      }
      break;
    ...
  }
}

void process_syscall(Task* t) {
  int syscall_number = t->regs().syscallno();
  RR_ARCH_FUNCTION(process_syscall_arch, t->arch(), t, syscall_number);
}

The definitions of X86Arch and X64Arch also contain static_asserts to try and ensure that we’ve defined structures correctly for at least the host architecture.  And even now the definitions of the structures aren’t completely bulletproof; I don’t think the X86Arch definitions of some structures are robust on a 64-bit host because of differences in structure field alignment between 32-bit and 64-bit, for instance.  So that’s still something to fix in rr.
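
The alignment issue is easy to demonstrate with a toy struct (an illustration, not an actual rr structure):

#include <cstdint>

// On i386, uint64_t only needs 4-byte alignment inside a struct, so this is
// 12 bytes there; on x86-64 it is 16 bytes because of 8-byte alignment and
// padding. A hand-written 32-bit replica of such a structure therefore needs
// explicit packing or padding to have the right layout on a 64-bit host.
struct example {
  uint32_t a;
  uint64_t b;
};

static_assert(sizeof(example) == 12 || sizeof(example) == 16,
              "size depends on the target ABI");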

Templates handle the bulk of target-specific code in rr.  There are a couple of places where we need to care about how the target implements mmap and other syscalls which aren’t amenable to templates (or, at least, we didn’t use them; it’s probably possible to (ab)use templates for these purposes), and so we have code like:

Task* t = ...
if (has_mmap2_syscall(t->arch())) {
  // do something specifically for mmap2
} else {
  // do something with mmap
}
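
Such a predicate can be a one-liner over SupportedArch; a hypothetical sketch (the helper name comes from the snippet above, the body is my guess at its shape):

// SYS_mmap2 exists on 32-bit x86 but not on x86-64.
static bool has_mmap2_syscall(SupportedArch arch) {
  return arch == x86;
}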

Finally, various bits of rr’s main binary and its testsuite are written in assembly, so of course those needed to be carefully ported over.

That’s all the major source-code related work that needed to be done. I’ll leave the target-specific runtime work required for a future post.

x86-64 support for rr hasn’t been formally released, but the x86-64 support in the github repository is functional: x86-64 rr passes all the tests in rr’s test suite and is able to record and replay Firefox mochitests.  I will note that it’s not nearly as fast as the x86 version; progress is being made in improving performance, but we’re not quite there yet.

If you’re interested in trying 64-bit rr out, you’ll find the build and installation instructions helpful, with one small modification: you need to add the command-line option -Dforce64bit=ON to any cmake invocations.  Therefore, to build with Makefiles, one needs to do:

git clone https://github.com/mozilla/rr.git
mkdir obj64
cd obj64
cmake -Dforce64bit=ON ../rr
make -j4
make check

Once you’ve done that, the usage instructions will likely be helpful.  Please try it out and report bugs if you find any!

Joel Maher: A case of the weekends?

Case of the Mondays

What was famous 15 years ago as a case of the Mondays has manifested itself in Talos.  In fact, I wonder why I get so many regression alerts on Monday as compared to other days.  More to the point, we have less noise in our Talos data on weekends.

Take for example the test case tresize:

linux32 (in fact we see this on other platforms as well: linux32/linux64/osx10.8/windowsXP):

[Graph: 30 days of linux tresize]

Many other tests exhibit this.  What is different about weekends?  Are there just fewer data points?

I do know our volume of tests goes down on weekends, mostly as a side effect of fewer patches being landed on our trees.

Here are some ideas I have to debug this more:

  • Run massive retrigger scripts for talos on weekends to validate whether the number of samples is or is not the problem
  • Reduce the volume of talos on weekdays to validate whether the overall system load in the datacenter is or is not the problem
  • Compare the load of the machines across all branches and wait times to the noise we have in certain tests/platforms
  • Look at platforms like windows 7, windows 8, and osx 10.6 and ask why they have more noise on weekends or are more stable.  Finding the delta between platforms would help provide answers

If you have ideas on how to uncover this mystery, please speak up.  I would be happy to have this gone and make any automated alerts more useful!


Peter Bengtsson: Shout-out to eventlog

If you do things with the Django ORM and want an audit trail of all changes, you have two options:

  1. Insert some cleverness into a pre_save signal that writes down all changes some way.

  2. Use eventlog and manually log things in your views.

(you have other options too but I'm trying to make a point here)

eventlog is almost embarrassingly simple. It's basically just a model with three fields:

  • User
  • An action string
  • A JSON dump field

You use it like this:

from eventlog.models import log

def someview(request):
    if request.method == 'POST':
        form = SomeModelForm(request.POST)
        if form.is_valid():
            new_thing = form.save()
            log(request.user, 'mymodel.create', {
                'id': new_thing.id,
                'name': new_thing.name,
                # You can put anything JSON 
                # compatible in here
            })
            return redirect('someotherview')
    else:
        form = SomeModelForm()
    return render(request, 'view.html', {'form': form})

That's all it does. You then have to do something with it. Suppose you have an admin page that only privileged users can see. You can make a simple table/dashboard with these like this:

from eventlog.models import Log  # Log the model, not log the function

def all_events(request):
    all = Log.objects.all()
    return render(request, 'all_events.html', {'all': all})

And something like this in all_events.html:

<table>
  <tr>
    <th>Who</th><th>When</th><th>What</th><th>Details</th>
  </tr>
  {% for event in all %}
  <tr>
    <td>{{ event.user.username }}</td>
    <td>{{ event.timestamp | date:"D d M Y" }}</td>
    <td>{{ event.action }}</td>
    <td>{{ event.extra }}</td>
  </tr>
  {% endfor %}
</table>

What I like about it is that it's very deliberate. By putting it into views at very specific points you're making it an audit log of actions, not of data changes.

Projects with overly complex model save signals tend to dig themselves into holes that make things slow and complicated. And it's not unrealistic that you'll then record events that aren't particularly important to review. For example, a cron job that increments a little value or something. It's more interesting to see what humans have done.

I just wanted to thank the Eldarion guys for eventlog. It's beautifully simple and works perfectly for me.

Adam Lofting: Learning about Learning Analytics @ #Mozfest

If I find a moment, I’ll write about many of the fun and inspiring things I saw at Mozfest this weekend, but this post is about a single session I had the pleasure of hosting alongside Andrew, Doug and Simon: Learning Analytics for Good in the Age of Big Data.

We had an hour, no idea if anyone else would be interested, or what angle people would come to the session from. And given that, I think it worked out pretty well.

We had about 20 participants, and broke into four groups to talk about Learning Analytics from roughly 3 starting points (though all the discussions overlapped):

  1. Practical solutions to measuring learning as it happens online
  2. The ethical complications of tracking (even when you want to optimise for something positive – e.g. Learning)
  3. The research opportunities for publishing and connecting learning data

But, did anyone learn anything in our Learning Analytics session?

Well, I know for sure the answer is yes… as I personally learned things. But did anyone else?

I spoke to people later in the day who told me they learned things. Is that good enough?

As I watched the group during the session I saw conversations that bounced back and forth in a way that rarely happens without people learning something. But how does anyone else who wasn’t there know if our session had an impact?

How much did people learn?

This is essentially the challenge of Learning Analytics. And I did give this some thought before the session…

As a meta-exercise, everyone who attended the session had a question to answer at the start and end. We also gave them a place to write their email address and to link their ‘learning data’ to them in an identifiable way. It was a little bit silly, but it was something to think about.

This isn’t good science, but it tells a story. And I hope it was a useful cue for the people joining the session.

Response rate:

  • We had about 20 participants
  • 10 returned the survey (i.e. opted in to ‘tracking’), by answering question 1
  • 5 of those answered question 2
  • 5 gave their email address (not exactly the same 5 who answered both questions)

Here is our Learning Analytics data from our session

[Screenshot: survey responses]

Is that demonstrable impact?

Even though this wasn’t a serious exercise, I think we can confidently argue that some people did learn, in much the same way certain newspapers can make a headline out of two data points…

What, and how much they learned, and if it will be useful later in their life is another matter.

Even with the deliberate choice of a question that made it almost impossible not to show improvement from the start to the end of the session, one respondent claims to be less sure what the session was about after attending (but let’s not dwell on that!).

Post-it notes and scribbles

If you were at the session and want to jog your memory about what we talked about, I kind-of documented the various things we captured on paper.

[Gallery of photos: click for bigger images]

Into 2015

I’m looking forward to exploring Learning Analytics in the context of Webmaker much more in 2015.

And to think that this was just one hour in a weekend full of the kinds of conversations that repeat in your mind all the way until next Mozfest. It’s exhausting in the best possible way.

Tim Taubert: Why including a backup pin in your Public-Key-Pinning header is a good idea

In my last post “Deploying TLS the hard way” I explained how TLS and its extensions (as well as a few HTTP extensions) work and what to watch out for when enabling TLS for your server. One of the HTTP extensions mentioned is HTTP Public-Key-Pinning (HPKP). As a short reminder, the header looks like this:

Public-Key-Pins:
  pin-sha256="GRAH5Ex+kB4cCQi5gMU82urf+6kEgbVtzfCSkw55AGk=";
  pin-sha256="lERGk61FITjzyKHcJ89xpc6aDwtRkOPAU0jdnUqzW2s=";
  max-age=15768000; includeSubDomains

You can see that it specifies two pin-sha256 values, that is the pins of two public keys. One is the public key of your currently valid certificate and the other is a backup key in case you have to revoke your certificate.

I received a few questions as to why I suggest including a backup pin and what the requirements for a backup key would be. I will try to answer those with a more detailed overview of how public key pinning and TLS certificates work.

How are RSA keys represented?

Let us go back to the beginning and start by taking a closer look at RSA keys:

$ openssl genrsa 4096

The above command generates a 4096 bit RSA key and prints it to the console. Although it says -----BEGIN RSA PRIVATE KEY----- it does not only return the private key but an ASN.1 structure that also contains the public key - we thus actually generated an RSA key pair.

A common misconception when learning about keys and certificates is that the RSA key itself for a given certificate expires. RSA keys however never expire - after all they are just three numbers. Only the certificate containing the public key can expire and only the certificate can be revoked. Keys “expire” or are “revoked” as soon as there are no more valid certificates using the public key, and you threw away the keys and stopped using them altogether.

What does the TLS certificate contain?

When you submit a Certificate Signing Request (CSR) containing your public key to a Certificate Authority, it will issue a valid certificate. That will again contain the public key of the RSA key pair we generated above and an expiration date. Both the public key and the expiration date will be signed by the CA so that modification of either of the two would render the certificate invalid immediately.

For simplicity I left out a few other fields that X.509 certificates contain to properly authenticate TLS connections, for example your server’s hostname and other details.

How does public key pinning work?

The whole purpose of public key pinning is to detect when the public key of a certificate for a specific host has changed. That may happen when an attacker compromises a CA such that they are able to issue valid certificates for any domain. A foreign CA might also just be the attacker, think of state-owned CAs that you do not want to be able to {M,W}ITM your site. Any attacker intercepting a connection from a visitor to your server with a forged certificate can only be prevented by detecting that the public key has changed.

After the server sent a TLS certificate with the handshake, the browser will look up any stored pins for the given hostname and check whether any of those stored pins match any of the SPKI fingerprints (the output of applying SHA-256 to the public key information) in the certificate chain. The connection must be terminated immediately if pin validation fails.

If the browser does not find any stored pins for the current hostname then it will directly continue with the usual certificate checks. This might happen if the site does not support public key pinning and does not send any HPKP headers at all, or if this is the first time visiting and the server has not seen the HPKP header yet in a previous visit.

Pin validation should happen as soon as possible and thus before any basic certificate checks are performed. An expired or revoked certificate will be happily accepted at the pin validation stage early in the handshake when any of the SPKI fingerprints of its chain match a stored pin. Only a little later the browser will see that the certificate already expired or was revoked and will reject it.

Pin validation also works for self-signed certificates, but they will of course raise the same warnings as usual as soon as the browser determined they were not signed by a trusted third-party.

What if your certificate was revoked?

If your server was compromised and an attacker obtained your private key you have to revoke your certificate as the attacker can obviously fully intercept any TLS connection to your server and record every conversation. If your HPKP header contained only a single pin-sha256 token you are out of luck until the max-age directive given in the header lets those pins expire in your visitors’ browsers.

Pin validation requires checking the SPKI fingerprints of all certificates in the chain. When for example StartSSL signed your certificate you have another intermediate Class 1 or 2 certificate and their root certificate in the chain. The browser trusts only the root certificate but the intermediate ones are signed by the root certificate. The intermediate certificate in turn signs the certificate deployed on your server and that is called a chain of trust.

To prevent getting stuck after your only pinned key was compromised, you could for example provide the SPKI fingerprint of StartSSL’s Class 1 intermediate certificate. An attacker would now have to somehow get a certificate issued by StartSSL’s Class 1 tier to successfully impersonate you. You are however again out of luck should you decide to upgrade to Class 2 in a month because you decided to start paying for a certificate.

Pinning StartSSL’s root certificate would let you switch Classes any time and the attacker would still have to get a certificate issued by StartSSL for your domain. This is a valid approach as long as you are trusting your CA (really?) and as long as the CA itself is not compromised. In case of a compromise however the attacker would be able to get a valid certificate for your domain that passes pin validation. After the attack was discovered StartSSL would quickly revoke all currently issued certificates, generate a new key pair for their root certificate and issue new certificates. And again we would be out of luck because suddenly pin validation fails and no browser will connect to our site.

Include the pin of a backup key

The safest way to pin your certificate’s public key and be prepared to revoke your certificate when necessary is to include the pin of a second public key: your backup key. This backup RSA key should in no way be related to your first key, just generate a new one.

It is good advice to keep this backup key pair (especially the private key) in a safe place until you need it. Uploading it to the server is dangerous: if your server is compromised you lose both keys at once and have no backup key left.

Generate a pin for the backup key exactly as you did for the current key and include both pin-sha256 values as shown above in the HPKP header. In case the current key is compromised make sure all vulnerabilities are patched and then remove the revoked pin. Generate a CSR for the backup key, let your CA issue a new certificate, and revoke the old one. Upload the new certificate to your server and you are done.
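
For reference, a pin-sha256 value is just the base64 encoding of the SHA-256 hash of the DER-encoded public key information, so one way to compute the pin for a key, assuming for illustration that your backup key lives in a file called backup.key, is:

$ openssl rsa -in backup.key -pubout -outform der | openssl dgst -sha256 -binary | base64

The same pipeline run on your current key is a handy way to double-check the pin you are already serving.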

Finally, do not forget to generate a new backup key and include that pin in your HPKP header again. Once a browser successfully establishes a TLS connection the next time, it will see your updated HPKP header and replace any stored pins with the new ones.

Gregory Szorc: Soft Launch of MozReview

We performed a soft launch of MozReview, Mozilla's new code review tool, yesterday!

What does that mean? How do I use it? What are the features? How do I get in touch or contribute? These are all great questions. The answers to those and more can all be found in the MozReview documentation. If they aren't, it's a bug in the documentation. File a bug or submit a patch. Instructions to do that are in the documentation.

Kev Needham: things that interest me this week – 29 oct 2014

Quick Update: A couple of people mentioned there are no Mozilla items in here. They’re right, and it’s primarily because the original audience of this type of thing was Mozilla. I’ll make sure I add them where relevant, moving forward.

Every week I put together a bunch of news items I think are interesting to the people I work with, and that’s usually limited to a couple wiki pages a handful of people read. I figured I may as well put it in a couple other places, like here, and see if people are interested. Topics focus on the web, the technologies that power it, and the platforms that make use of it. I work for Mozilla, but these are my own opinions and takes on things.

I try to have three sections:

  • Something to Think About – Something I’m seeing a company doing that I think is important, why I think it’s important, and sometimes what I think should be done about it. Some weeks these won’t be around, because they tend to not show their faces much.
  • Worth a Read – Things I think are worth the time to read if you’re interested in the space as a whole. Limited to three items max, but usually two. If you don’t like what’s in here, tell me why.
  • Notes – Bits and bobs people may or may not be interested in, but that I think are significant, bear watching, or are of general interest.

I’ll throw these out every Wednesday, and standard disclaimers apply – this is what’s in my brain, and isn’t representative of what’s in anyone else’s brain, especially the folks I work with at Mozilla. I’ll also throw a mailing list together if there’s interest, and feedback is always welcome (your comment may get stuck in a spam-catcher, don’t worry, I’ll dig it out).

– k

Something to Think About

Lifehacker posted an article this morning around all the things you can do from within Chrome’s address bar. Firefox can do a number of the same things, but it’s interesting to see the continual improvements the Chrome team has made around search (and service) integration, and also the productivity hacks (like searching your Google drive without actually going there) that people come up with to make a feature more useful than its intended design.

Why I think people should care: Chrome’s modifications to the address bar aren’t ground-breaking, nor are they changes that came about overnight. They are a series of iterative changes to a core function that work well with Google’s external services, and focus on increasing utility which, not coincidentally, increases the value and stickiness of the Google experience as a whole. Continued improvements to existing features (and watching how people are riffing on those features) are a good thing, and something to consider as part of our general product upkeep, particularly around the opportunity to do more with services (both ours, and others) that promote the open web as a platform.

Worth a Read

  • Benedict Evans updated his popular “Mobile Is Eating the World” presentation, and posits that mobile effectively ”is” everything technology today. I think it needs a “Now” at the end, because what he’s describing has happened before, and will happen again. Mobile is a little different currently, mainly because of the gigantic leaps in hardware for fewer dollars that continue to be made as well as carrier subsidies fueling 2-year upgrade cycles. Mobile itself is also not just phones, it’s things other than desktops and laptops that have a network connection. Everything connected is everything. He’s also put together a post on Tablets, PCs and Office that goes a little bit into technology cycles and how things like tablets are evolving to fill more than just media consumption needs, but the important piece he pushes in both places is the concept of network connected screens being the window to your stuff, and the platform under the screen being a commodity (e.g. processing power is improving on every platform to the point the hardware platform is mattering less) that is really simply the interface that better fits the task at hand.
  • Ars Technica has an overview of some of the more interesting changes in Lollipop which focus on unbundling apps and APIs to mitigate fragmentation risk, an enhanced setup process focusing on user experience, and the shift in the Nexus brand from a market-share builder to a premium offering.
  • Google’s Sundar Pichai was promoted last week in a move that solidifies Google’s movement towards a unified, backend-anchored, multi-screen experience. Pichai is a long time Google product person, and has been fronting the Android and Chrome OS (and a couple other related services) teams, and now takes on Google’s most important web properties as well, including Gmail, Search, AdSense, and the infrastructure that runs it. This gives those business units inside Google better alignment around company goals, and shows the confidence Google has in Pichai. Expect further alignment in Google’s unified experience movement through products like Lollipop, Chrome OS, Inbox and moving more Google Account data (and related experiences like notifications and Web Intents) into the cloud, where it doesn’t rely on a specific client and can be shared/used on any connected screen.

Notes

Mozilla Release Management Team: Firefox 34 beta3 to beta4

  • 38 changesets
  • 64 files changed
  • 869 insertions
  • 625 deletions

Extension   Occurrences
js          16
cpp         16
jsm         9
h           9
java        4
xml         2
jsx         2
html        2
mn          1
mm          1
list        1
css         1

Module      Occurrences
browser     19
gfx         10
content     8
mobile      6
services    5
layout      4
widget      3
netwerk     3
xpfe        2
toolkit     2
modules     1
accessible  1

List of changesets:

Nicolas Silva: Bug 1083071 - Backout the additional blacklist entries. r=jmuizelaar, a=sledru - 31acf5dc33fc
Jeff Muizelaar: Bug 1083071 - Disable D3D11 and D3D9 layers on broken drivers. r=bjacob, a=sledru - 618a12c410bb
Ryan VanderMeulen: Backed out changeset 6c46c21a04f9 (Bug 1074378) - 3e2c92836231
Cosmin Malutan: Bug 1072244 - Correctly throw the exceptions in TPS framework. r=hskupin a=testonly DONTBUILD - 48e3c2f927d5
Mark Banner: Bug 1081959 - "Something went wrong" isn't displayed when the call fails in the connection phase. r=dmose, a=lmandel - 8cf65ccdce3d
Jared Wein: Bug 1062335 - Loop panel size increases after switching themes. r=mixedpuppy, a=lmandel - 033942f8f817
Wes Johnston: Bug 1055883 - Don't reshow header when hitting the bottom of short pages. r=kats, a=lmandel - 823ecd23138b
Patrick McManus: Bug 1073825 - http2session::cleanupstream failure. r=hurley, a=lmandel - eed6613c5568
Paul Adenot: Bug 1078354 - Part 1: Make sure we are not waking up an OfflineGraphDriver. r=jesup, a=lmandel - 9d0a16097623
Paul Adenot: Bug 1078354 - Part 2: Don't try to measure a PeriodicWave size when an OscillatorNode is using a basic waveform. r=erahm, a=lmandel - b185e7a13e18
Gavin Sharp: Bug 1086958 - Back out change to default browser prompting for Beta 34. r=Gijs, a=lmandel - d080a93fd9e1
Yury Delendik: Bug 1072164 - Fixes pdf.js for CMYK jpegs. r=bdahl, a=lmandel - d1de09f2d1b0
Neil Rashbrook: Bug 1070768 - Move XPFE's autocomplete.css to communicator so it doesn't conflict with toolkit's new global autocomplete.css. r=Ratty, a=lmandel - 78b9d7be1770
Markus Stange: Bug 1078262 - Only use the fixed epsilon for the translation components. r=roc, a=lmandel - 2c49dc84f1a0
Benjamin Chen: Bug 1079616 - Dispatch PushBlobRunnable in RequestData function, and remove CreateAndDispatchBlobEventRunnable. r=roc, a=lmandel - d9664db594e9
Brad Lassey: Bug 1084035 - Add the ability to mirror tabs from desktop to a second screen, don't block browser sources when specified in constraints from chrome code. r=jesup, a=lmandel - 47065beeef20
Gijs Kruitbosch: Bug 1074520 - Use CSS instead of hacks to make the forget button lay out correctly. r=jaws, a=lmandel - 46916559304f
Markus Stange: Bug 1085475 - Don't attempt to use vibrancy in 32-bit mode. r=smichaud, a=lmandel - 184b704568ff
Mark Finkle: Bug 1088952 - Disable "Enable wi-fi" toggle on beta due to missing permission. r=rnewman, a=lmandel - 9fd76ad57dbe
Yonggang Luo: Bug 1066459 - Clamp the new top row index to the valid range before assigning it to mTopRowIndex when scrolling. r=kip a=lmandel - 4fd0f4651a61
Mats Palmgren: Bug 1085050 - Remove a DEBUG assertion. r=kip a=lmandel - 1cd947f5b6d8
Jason Orendorff: Bug 1042567 - Reflect JSPropertyOp properties more consistently as data properties. r=efaust, a=lmandel - 043c91e3aaeb
Margaret Leibovic: Bug 1075232 - Record which suggestion of the search screen was tapped in telemetry. r=mfinkle, a=lmandel - a627934a0123
Benoit Jacob: Bug 1088858 - Backport ANGLE fixes to make WebGL work on Windows in Firefox 34. r=jmuizelaar, a=lmandel - 85e56f19a5a1
Patrick McManus: Bug 1088910 - Default http/2 off on gecko 34 after EARLY_BETA. r=hurley, a=lmandel - 74298f48759a
Benoit Jacob: Bug 1083071 - Avoid touching D3D11 at all, even to test if it works, if D3D11 layers are blacklisted. r=Bas, r=jmuizelaar, a=sledru - 6268e33e8351
Randall Barker: Bug 1080701 - TabMirror needs to be updated to work with the chromecast server. r=wesj, r=mfinkle, a=lmandel - 0811a9056ec4
Xidorn Quan: Bug 1088467 - Avoid adding space for bullet with list-style: none. r=surkov, a=lmandel - 2e54d90546ce
Michal Novotny: Bug 1083922 - Doom entry when parsing security info fails. r=mcmanus, a=lmandel - 34988fa0f0d8
Ed Lee: Bug 1088729 - Only allow http(s) directory links. r=adw, a=sledru - 410afcc51b13
Mark Banner: Bug 1047410 - Desktop client should display Call Failed if an incoming call - d2ef2bdc90bb
Mark Banner: Bug 1088346 - Handle "answered-elsewhere" on incoming calls for desktop on Loop. r=nperriault a=lmandel - 67d9122b8c98
Mark Banner: Bug 1088636 - Desktop ToS url should use hello.firefox.com not call.mozilla.com. r=nperriault a=lmandel - 45d717da277d
Adam Roach [:abr]: Bug 1033579 - Add channel to POST calls for Loop to allow different servers based on the channel. r=dmose a=lmandel - d43a7b8995a6
Ethan Hugg: Bug 1084496 - Update whitelist for screensharing r=jesup a=lmandel - 080cfa7f5d79
Ryan VanderMeulen: Backed out changeset 043c91e3aaeb (Bug 1042567) for debug jsreftest failures. - 15bafc2978d8
Jim Chen: Bug 1066982 - Try to not launch processes on pre-JB devices because of Android bug. r=snorp, a=lmandel - 5a4dfee44717
Randell Jesup: Bug 1080755 - Push video frames into MediaStreamGraph instead of waiting for pulls. r=padenot, a=lmandel - 22cfde2bf1ce

David Boswell: Please complete and share the contributor survey

We are conducting a research project to learn about the values and motivations of Mozilla’s contributors (both volunteers and staff) and to understand how we can improve their experiences.

Part of this effort is a survey for contributors that has just been launched at:

http://www.surveygizmo.com/s3/1852460/Mozilla-Community-Survey

Please take a few minutes to fill this out and then share this link with the communities you work with. Having more people complete this will give us a more complete understanding of how we can improve the experience for all contributors.

We plan to have results from this survey and the data analysis project available by the time of the Portland work week in December.


K Lars Lohn: Judge the Project, Not the Contributors

I recently read a blog posting titled The 8 Essential Traits of a Great Open Source Contributor. I am disturbed by this posting. While clearly not the intended effect, I feel the posting just told a huge swath of people that they are neither qualified nor welcome to contribute to Open Source. The intent of the posting was to say that there is a wide range of skills needed in Open Source. Even if a potential contributor feels they lack an essential technical skill, here's an enumeration of other skills that are helpful.
“Over the years, I’ve talked to many people who have wanted to contribute to open source projects, but think that they don’t have what it takes to make a contribution. If you’re in that situation, I hope this post helps you get out of that mindset and start contributing to the projects that matter to you.”
See? The author has completely good intentions. My fear is that the posting has the opposite effect. It raises the bar as if it were an ad for a paid technical position. He uses superlatives that say to me, “we are looking for the top people as contributors, not common people”.

Unfortunately, to me this blog posting does not communicate the need for a wide range of skills; it communicates that if you contribute, you'd better be great at doing so. In fact, if you do not have all these skills, you cannot be considered great. So where is the incentive to participate? It makes Open Source sound as if it is an invitation to be judged as either great or inadequate.

Ok, I know this interpretation is through my own jaundiced eyes. So to see if my interpretation was just a reflection of my own bad day, I shared the blog posting with a couple of colleagues.  Both colleagues are women who judge their own skills unnecessarily harshly but who, in my judgement, are really quite good. I chose these two specifically because I knew both suffer from “imposter syndrome”, a largely unshakable feeling of inadequacy that is quite common among technical people.  Both reacted badly to the posting, one saying that it sounded like a job posting for a position she would have no hope of ever landing.

I want to turn this around. Let's not judge the contributors, let's judge the projects instead. In fact, we can take these eight traits and boil them down to one:
Essential trait of a great open source project:
Leaders & processes that can advance the project while marshalling imperfect contributors gracefully.
That's a really tall order. By that standard, my own Open Source projects are not great. However, I feel much more comfortable saying that the project is not great, rather than sorting the contributors.

If I were paying people to work on my project, I'd have no qualms about judging their performance anywhere along a continuum of “great” to “inadequate”. Contributors are NOT employees subject to performance review.  In my projects, if someone contributes, I consider both the contribution and the contributor to be “great”. The contribution may not make it into the project, but it was given to me for free, so it is naturally great by that aspect alone.

Contribution: Voluntary Gift

Perhaps if the original posting had said, "these are the eight gifts we need" rather than saying the gifts are traits of people we consider "great", I would not have been so uncomfortable.

A great Open Source project is one that produces a successful product and is inclusive. An Open Source project that produces a successful product, but is not inclusive, is merely successful.

Tim Taubert: Talk: Keeping secrets with JavaScript - An Introduction to the WebCrypto API

With the web slowly maturing as a platform the demand for cryptography in the browser has risen, especially in a post-Snowden era. Many of us have heard about the upcoming Web Cryptography API but at the time of writing there seem to be no good introductions available. We will take a look at the proposed W3C spec and its current state of implementation.

Video

Slides

Code

https://github.com/ttaubert/secret-notes

Soledad Penades: Native smooth scrolling with JS

There’s a new way of invoking the scroll functions in JavaScript where you can specify how you want the scroll to behave: smoothly, immediately, or auto (whatever the user agent wants, I guess).

window.scrollBy({ top: 100, behavior: 'smooth' });

(note it’s behavior, not behaviour, argggh).

I read this post yesterday saying that it would be available (via this tweet from @FirefoxNightly) and immediately wanted to try it out!

I made sure I had an updated copy of Firefox Nightly—you’ll need a version from the 28th of October or later. Then I enabled the feature by going to about:config and changing layout.css.scroll-behavior.enabled to true. No restart required!

My test looks like this:

[Animation: native smooth scrolling demo]

(source code)

You can also use it in CSS code:

#myelement {
  scroll-behavior: smooth;
}

but my example doesn’t. Feel like building one yourself? :)

The reason why I’m so excited about this is that I’ve had to implement this behaviour many, many times with plug-ins and whatnots that tend to interfere with the rendering pipeline, and it’s amazing that this is going to be native to the browser, as it should be smooth and posh. And also because other native platforms have it too and it makes the web look “not cool”. Well, not anymore!

The other cool aspect is that it degrades gracefully: if the option is not recognised by the engine you will just get… a normal abrupt behaviour, but it will still scroll.

I’m guessing that you can still use your not-so-performant plug-ins if you really want your own scroll algorithm (maybe you want it to bounce in a particular way, etc). Just use instant instead of smooth, and you should be good to go!

SCROLL SCROLL SCROLL SCROLL!

Update: Frontender magazine translated this post to Russian.

Andreas Gal: HTML5 reaches the Recommendation stage

Today HTML5 reached the Recommendation stage inside the W3C, the last stage of W3C standards. Mozilla was one of the first organizations to become deeply involved in the evolution and standardization of HTML5, so today’s announcement by the W3C has a special connection to Mozilla’s mission and our work over the last 10 years.

Mozilla has pioneered many widely adopted technologies such as WebGL which further enhance HTML5 and make it a competitive and compelling alternative to proprietary and native ecosystems. With the entrance of Firefox OS into the smartphone market we have also made great progress in advancing the state of the mobile Web. Many of the new APIs and capabilities we have proposed in the context of Firefox OS are currently going through the standards process, bringing capabilities to the Web that were previously only available to native applications.

W3C Standards go through a series of steps, ranging from proposals to Editors’ Drafts to Candidate Recommendations and ultimately Recommendations. While reaching the Recommendation stage is an important milestone, we encourage developers to engage with new Web standards long before they actually hit that point. To stay current, Web developers should keep an eye on new evolving standards and read Editors’ Drafts instead of Recommendations. Web developer-targeted resources such as developer.mozilla.org and caniuse.com are also a great way to learn about upcoming standards.

A second important area of focus for Mozilla around HTML5 has been test suites. Test suites can be used by Web developers and Web engine developers alike to verify that Web browsers consistently implement the HTML5 specification. You can check out the latest results at:

http://w3c.github.io/test-results/dom/all.html
http://w3c.github.io/test-results/html/details.html

These automated testing suites for HTML5 play a critical role in ensuring a uniform and consistent Web experience for users.

At Mozilla, we envision a Web which can do anything you can do in a native application. The advancement of HTML5 marks an important step on the road to this vision. We have many exciting things planned for our upcoming 10th anniversary of Firefox (#Fx10), which will continue to move the Web forward as an open ecosystem and platform for innovation.

Stay tuned.



Kartikaya GuptaBuilding a NAS

I've been wanting to build a NAS (network-attached storage) box for a while now, and the ominous creaking noises from the laptop I was previously using as a file server prompted me to finally take action. I wanted to build rather than buy because (a) I wanted more control over the machine and OS, (b) I figured I'd learn something along the way and (c) I thought it might be cheaper. This blog post documents the decisions and mistakes I made and the problems I ran into.

First step was figuring out the level of data redundancy and storage space I wanted. After reading up on the different RAID levels I figured 4 drives with 3 TB each in a RAID5 configuration would suit my needs for the next few years. I don't have a huge amount of data so the ~9TB of usable space sounded fine, and being able to survive single-drive failures sounded sufficient to me. For all critical data I keep a copy on a separate machine as well.
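
For reference, the ~9TB figure comes from RAID5 setting aside one drive's worth of capacity for parity:

\text{usable capacity} = (n - 1) \times \text{drive size} = (4 - 1) \times 3\,\text{TB} = 9\,\text{TB}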

I chose to go with software RAID rather than hardware because I've read horror stories of hardware RAID controllers going obsolete and being unable to find a replacement, rendering the data unreadable. That didn't sound good. With an open-source software RAID controller at least you can get the source code and have a shot at recovering your data if things go bad.

With this in mind I started looking at software options - a bit of searching took me to FreeNAS which sounded exactly like what I wanted. However, after reading through random threads in the user forums it seemed like the FreeNAS people are very focused on using ZFS and hardware setups with ECC RAM. From what I gleaned, using ZFS without ECC RAM is a bad idea, because errors in the RAM can cause ZFS to corrupt your data silently and unrecoverably (and worse, the corruption can propagate). A system that makes bad situations worse didn't sound so good to me.

I could have still gone with ZFS with ECC RAM but from some rudimentary searching it sounded like it would increase the cost significantly, and frankly I didn't see the point. So instead I decided to go with NAS4Free (which actually was the original FreeNAS before iXsystems bought the trademark and forked the code) which allows using a UFS file system in a software RAID5 configuration.

So with the software decisions made, it was time to pick hardware. I used this guide by Sam Kear as a starting point and modified a few things here and there. I ended up with this parts list that I mostly ordered from canadadirect.com. (Aside: I wish I had discovered pcpartpicker.com earlier in the process as it would have saved me a lot of time). They shipped things to me in 5 different packages which arrived on 4 different days using 3 different shipping services. Woo! The parts I didn't get from canadadirect.com I picked up at a local Canada Computers store. Then, last weekend, I put it all together.

It's been a while since I've built a box so I screwed up a few things and had to rewind (twice) to fix them. Took about 3 hours in total for assembly; somebody who knew what they were doing could have done it in less than one. I mostly blame lack of documentation with the chassis since there were a bunch of different screws and it wasn't obvious which ones I had to use for what. They all worked for mounting the motherboard but only one of them was actually correct and using the wrong one meant trouble later.

In terms of the hardware compatibility I think my choices were mostly sound, but there were a few hitches. The case and motherboard both support up to 6 SATA drives (I'm using 4, giving me some room to grow). However, the PSU only came with 4 SATA power connectors which means I'll need to get some adaptors or maybe a different PSU if I need to add drives. The other problem was that the chassis comes with three fans (two small ones at the front, one big one at the back) but there was only one chassis power connector on the motherboard. I plugged the big fan in and so far the machine seems to be staying pretty cool so I'm not too worried. Does seem like a waste to have those extra unused fans though.

Finally, I booted it up using a monitor/keyboard borrowed from another machine, and ran memtest86 to make sure the RAM was good. It was, so I flashed the NAS4Free LiveUSB onto a USB drive and booted it up. Unfortunately after booting into NAS4Free my keyboard stopped working. I had to disable the USB 3.0 stuff in the BIOS to get around that. I don't really care about having USB 3.0 support on this machine so not a big deal. It took me some time to figure out what installation mode I wanted to use NAS4Free in. I decided to do a full install onto a second USB drive and not have a swap partition (figured hosting swap over USB would be slow and probably unnecessary).

So installing that was easy enough, and I was able to boot into the full NAS4Free install and configure it to have a software RAID5 on the four disks. Things generally seemed OK and I started copying stuff over.. and then the box rebooted. It also managed to corrupt my installation somehow, so I had to start over from the LiveUSB stick and re-install. I had saved the config from the first time so it was easy to get it back up again, and once again I started putting data on there. Again it rebooted, although this time it didn't corrupt my installation. This was getting worrying, particularly since the system log files provided no indication as to what went wrong.

My first suspicion was that the RAID wasn't fully initialized and so copying data onto it resulted in badness. The array was "rebuilding" and I was supposed to be able to use it in the meantime, but I figured I might as well wait until it was done. Turns out it was going to be rebuilding for the next ~20 days, because RAID5 has to read/write the entire disk to initialize fully, and in the days of multi-terabyte disks this takes forever. So in retrospect perhaps RAID5 was a poor choice for such large disks.

Anyway in order to debug the rebooting, I looked up the FreeBSD kernel debugging documentation, and that requires having a swap partition that the kernel can dump a crash report to. So I reinstalled and set up a swap partition this time. This seemed to magically fix the rebooting problem entirely, so I suspect the RAID drivers just don't deal well when there's no swap, or something. Not an easy situation to debug if it only happens with no swap partition but you need a swap partition to get a kernel dump.

So, things were good, and I started copying more data over and configuring more stuff and so on. The next problem I ran into was the USB drive to which I had installed NAS4Free started crapping out with read/write errors. This wasn't so great but by this point I'd already reinstalled it about 6 or 7 times, so I reinstalled again onto a different USB stick. The one that was crapping out seems to still work fine in other machines, so I'm not sure what the problem was there. The new one that I used, however, was extremely slow. Things that took seconds on the previous drive took minutes on this one. So I switched again to yet another drive, this time an old 2.5" internal drive that I have mounted in an enclosure through USB.

And finally, after installing the OS at least I've-lost-count-how-many times, I have a NAS that seems stable and appears to work well. To be fair, reinstalling the OS is a pretty painless process and by the end I could do it in less than 10 minutes from sticking in the LiveUSB to a fully-configured working system. Being able to download the config file (which includes not just the NAS config but also user accounts and so on) makes it pretty painless to restore your system to exactly the way it was. The only additional things I had to do were install a few FreeBSD packages and unpack a tarball into my home directory to get some stuff I wanted. At no point was any of the data on the RAID array itself lost or corrupted, so I'm pretty happy about that.

In conclusion, setup was a bit of a pain, mostly due to unclear documentation and flaky USB drives (or drivers) but now that I have it set up it seems to be working well. If I ever have to do it over I might go for something other than RAID5 just because of the long rebuild time but so far it hasn't been an actual problem.

Asa DotzlerMozFest Flame Phones

Dancing Flames (image via Flickr user Capture Queen, used under a CC license)

Even though I wasn’t there, it sure was thrilling to see all the activity around the Flame phones at MozFest.

So, you’ve got a Flame and you’re wondering how you can use this new hardware to help Mozilla make Firefox OS awesome?! Well, here’s what we’d love from you.

First, check your Flame to see what build of Firefox OS it’s running. If you have not flashed it, it’s probably on Firefox OS 1.3 and you’ll need to upgrade it to something contemporary first. If you’re using anything older than the v188 base image, you definitely need to upgrade. To upgrade, visit the Flame page on MDN and follow the instructions to flash a new vendor-provided base image and then flash the latest nightly from Mozilla on top of that.

Once you’re on the latest nightly of Firefox OS, you’re ready to start using the Flame and filing bugs on things that don’t work. You’d think that with about five thousand Flames out there, we’d have reports on everything that’s not working but that’s not the case. Even if the bug seems highly visible, please report it. We’d rather have a couple of duplicate reports than no report at all. If you’re experienced with Bugzilla, please search first *and* help us triage incoming reports so the devs can focus on fixing rather than duping bugs.

In addition to this use-based ad hoc testing, you can participate in the One and Done program or work directly with the Firefox OS QA team on more structured testing.

But that’s not all! Because Firefox OS is built on Web technologies, you don’t have to be a hardcore programmer to fix many of the bugs in the OS or the default system apps like Dialer, Email, and Camera. If you’ve got Web dev skills, please help us squash bugs. A great place to start is the list of bugs with developers assigned to mentor you through the process.

It’s a non-trivial investment that the Mozilla Foundation has made in giving away these Flame reference phones and I’m here to work with you all to help make that effort pay off in terms of bugs reported and fixed. Please let me know if you run into problems or could use my help. Enjoy your Flames!

J. Ryan StinnettDebugging Tabs with Firefox for Android

For quite a while, it has been possible to debug tabs on Firefox for Android devices, but there were many steps involved, including manual port forwarding from the terminal.

As I hinted a few weeks ago, WebIDE would soon support connecting to Firefox for Android via ADB Helper support, and that time is now!

How to Use

You'll need to assemble the following bits and bobs:

  • Firefox 36 (2014-10-25 or later)
  • ADB Helper 0.7.0 or later
  • Firefox for Android 35 or later

Opening WebIDE for the first time should install ADB Helper if you don't already have it, but double-check it is the right version in the add-on manager.

Firefox for Android runtime appears

Inside WebIDE, you'll see an entry for Firefox for Android in the Runtime menu.

Firefox for Android tab list

Once you select the runtime, tabs from Firefox for Android will be available in the (now poorly labelled) apps menu on the left.

Inspecting a tab in WebIDE

Choosing a tab will open up the DevTools toolbox for that tab. You can also toggle the toolbox via the "Pause" icon in the top toolbar.

If you would like to debug Firefox for Android's system-level / chrome code, instead of a specific tab, you can do that with the "Main Process" option.

What's Next

We have even more connection UX improvements on the way, so I hope to have more to share soon!

If there are features you'd like to see added, file bugs or contact the team via various channels.

Christian HeilmannSpeaking at the Trondheim Developer Conference – good show!

TL;DR: The Trondheim Developer Conference 2014 was incredible. Well worth my time and a breath of fresh air in terms of organisation.

Trondheim Developer Conference

I am right now on the plane back from Oslo to London – a good chance to put together a few thoughts on the conference I just spoke at. The Trondheim Developer Conference was – one might be amazed to learn – a conference for developers in Trondheim, Norway. All of the money that is left over after the organisers covered the cost goes to supporting other local events and developer programs. In stark contrast to other not-for-profit events, this one shines with a classy veneer that is hard to find and would normally demand a mid-three-digit ticket price.

This is all the more surprising seeing that Norway is a ridiculously expensive place where I tend not to breathe in too much as I am not sure if they charge for air or not.

Clarion Hotel Trondheim - outside
Clarion Hotel Trondheim - inside

The location of the one day conference was the Clarion Hotel & Congress Trondheim, a high-class location with great connectivity and excellent catering. Before I wax poetic about the event here, let’s just give you a quick list:

  • TDC treats their speakers really well. I had full travel and accommodation coverage with airport pick-ups and public transport bringing me to the venue. I got a very simple list with all the information I needed and there was no back and forth about what I wanted – anything I could think of had already been anticipated. The speaker lounge was functional and easily accessible. The pre-conference speaker dinner was lavish.
  • Everything about the event happened in the same building. This meant it was easy to go back to your room to get things or have undisturbed preparation or phone time. It also meant that attendees didn’t get lost on the way to other venues.
  • Superb catering. Coffee, cookies and fruit available throughout the day.
  • Great lunch organisation that should be copied by others. It wasn’t an affair where you had to queue up for ages trying to get the good bits of a buffet. Instead the food was already on the tables and all you had to do was pick a seat, start a chat and dig in. That way the one hour break was one hour of nourishment and conversation, not pushing and trying to find a spot to eat.
  • Wireless was strong and bountiful. I was able to upload my screencasts and cover the event on social media without a hitch. There was no need to sign up or get vouchers or whatever else is in between us and online bliss – simply a wireless name and a password.
  • Big rooms with great sound and AV setup. The organisers had a big box of cable connectors in case you brought exotic computers. We had enough microphones and the rooms had enough space.
  • Audience feedback was simple. When entering a session, attendees got a roulette chip and when leaving the session they dropped them in provided baskets stating “awesome” or “meh”. There was also an email directly after the event asking people to provide feedback.
  • Non-pushy exhibitors. There was a mix of commercial partners and supported not-for-profit organisations with booths and stands. Each of them had something nice to show (Oculus Rift probably was the overall winner) and none of them had booth babes or sales weasels. All the people I talked to had good info and were not pushy but helpful instead.
  • A clever time table. Whilst I am not a big fan of multi-track conferences, TDC had 5 tracks but limited the talks to 30 minutes. This meant there were 15 minute breaks in between tracks to have a coffee and go to the other room. I loved that. It meant speakers need to cut to the chase faster.
  • Multilingual presentations. Whilst my knowledge of Norwegian amounts to guessing the German-sounding words and wondering why everything is written so differently from Swedish, I think it gave a lot of local presenters a better chance to reach the local audience when they stuck to their mother tongue. The mix of talks was even, so I could always go to the one or two English talks in each time slot. With the talks being short it was no biggie if one slot didn’t have something that excited you.
  • A nice after party with a band and just the right amount of drinks. Make no mistake – alcohol costs an arm and a leg in Norway (and I think the main organiser ended up with a peg leg) but the party was well-behaved with a nice band and lots of space to have chats without having to shout at one another.
  • Good diversity of speakers and audience. There was a healthy mix, and Scandinavian countries are known to be very much about equality.
  • It started and ended with science and blowing things up. I was mesmerised by Selda Ekiz, who started and wrapped up the event by showing some physics experiments of the explosive kind. She is a local celebrity and TV presenter who runs a children’s show explaining physics. Think Mythbusters but with incredible charm and a lot fewer ad breaks. If you have an event, consider getting her – I loved every second.

Selda Ekiz on stage

I was overwhelmed by how much fun and how relaxing the whole event was. There was no rush, no queues, no confusion as to what goes where. If you want a conference to check out next October, TDC is a great choice.

My own contributions to the event were two sessions (as I filled in for one that didn’t work out). The first one was about allowing HTML5 to graduate, or – in other words – not being afraid of using it.

You can watch the screencast of me talking about how HTML5 missed its graduation on YouTube.

The HTML5 graduation slides are on Slideshare.

How HTML5 missed its graduation – #TrondheimDC from Christian Heilmann

The other session was about the need to create offline-capable apps for today’s market and the one that is coming. The marketing of products keeps telling us that we’re always connected, but this couldn’t be further from the truth. It is up to us as developers to condition our users to trust the web to work even when the pesky wireless is acting up again.

You can watch the screencast of the offline talk on YouTube.

The Working connected to create offline slides are on Slideshare.

Working connected to create offline – Trondheim Developer Conference 2014 from Christian Heilmann

I had a blast and I hope to meet many of the people I met at TDC again soon.

Marco ZeheApps, the web, and productivity

Inspired by this public discussion on Asa Dotzler’s Facebook wall, I reflected on my own current use cases of web applications, native mobile apps, and desktop clients. I also thought about my post from 2012 where I asked the question whether web apps are accessible enough to replace desktop clients any time soon.

During my 30 days with Android experiment this summer, I also used Gmail on the web most of the time and hardly used my mail clients for desktop and mobile, except for the Gmail client on Android. The only exception was my Mozilla e-mail which I continued to do in Thunderbird on Windows.

After the experiment ended, I gradually migrated back to using clients of various shapes and sizes on the various platforms I use. And after a few days, I found that the Gmail web client was no longer one of them.

The problem is not one of accessibility in this case, because that has greatly improved over the last two years. So have web apps like Twitter and Facebook, for example. The reasons I am still using dedicated clients for the most part are, first and foremost, these:

  1. Less clutter: All the web apps I mentioned, and others, too, come with a huge overload of clutter that gets in the way of productivity. Granted, the Gmail keyboard shortcuts, and mostly using the web app like a desktop client with NVDA’s virtual mode turned off, mitigate this somewhat, but it still gets in the way far too often.
  2. Latency. I am on a quite fast VDSL 50 MBit/s connection from my landline internet provider. Suffice it to say, this is already quite fast: the download of OS X Yosemite, 5.16 GB, takes under 20 minutes if the internet isn’t too busy. But managing e-mail, loading conversations, switching labels, collecting tweets in the Twitter web app, and browsing Facebook, especially when catching up with the overnight news feed, still take quite some noticeable time to load, refresh, or fetch new stuff. First the new data is pulled from servers, second it is processed in the browser, which has to integrate it into the overloaded web application it already has (see above), and third, all the changes need to be communicated to the screen reader I happen to be using at the time. On a single page load this may not add up to much. But across a news feed, 50 or so e-mail threads, or several fetches of tweets, it adds up. I don’t even want to imagine how this would feel on the much slower connections that others have to cope with on a daily basis!

Yes, some of the above could probably be mitigated by using the mobile web offerings instead. But a) some web sites don’t allow a desktop browser to fetch their mobile site without the desktop browser faking a mobile one, and b) those mobile sites are nowadays often so touch-optimized that keyboard or simulated mouse interaction often fails or is as cumbersome as waiting for the latent loads of the desktop version.

So whether it’s e-mail, Twitter, or Facebook, I found that dedicated clients still do a much better job at allowing me to be productive. The amount of data they seem to fetch is much smaller, or it at least feels that way. The way this new data is integrated feels faster even on last year’s mobile device, and the whole interface is so geared to the task at hand, without any clutter getting in the way, that one simply gets things done much faster over-all.

What many web applications for the desktop have not learned is to give users only what they currently need. For example, as I write this in my WordPress installation’s backend, besides the editor I have all the stuff that allows me to create new articles, pages and categories, go into the WordPress settings, install new plugins etc. I have to navigate past all this to the main section to start editing my article. The quick navigation feature of my screen reader makes that fast, but even the fact that this whole baggage is there to begin with proves the point. I want to write an article. Why offer me all those distractions? Yes, for quick access and quick ways of switching tasks, some would say. But if I write an article, I write an article. Thanks to the WordPress apps for iOS and Android, which, when I write an article, don’t put all the other available options in my face at the same time!

Or take Twitter or Facebook. All the baggage those web apps carry around while one just wants to browse tweets is daunting! My wife recently described to me what the FB web site looks like to her in a browser, and the fact is that the news feed, where the action actually happens, takes up only a roughly estimated 10 or 15 percent of the whole screen estate. All the rest is either ads, or links to all kinds of things that Facebook has to offer besides the news feed. Zillions of groups, recommended friends, apps, games nobody plays, etc., etc., etc.

Same with Twitter. It shoves trending topics, other recommendations, and a huge menu of stuff one would probably only need once a year down one’s throat. OK, desktop screens are big nowadays. But offering so many bells, whistles and other distractions constantly and all over the place cannot seriously be considered a good user experience, can it?

I realize this is purely written from the perspective of a blind guy who has never seen a user interface. I only know them from descriptions by others. But I find myself always applauding the focused, concise, and clean user interfaces much more than those that shove every available option down my throat on first launch. And that goes for all operating systems and platforms I use.

And if the web doesn’t learn to give me better, and in this case that means, more focused user interfaces where I don’t have to dig for the UI of the task I want to accomplish, I will continue to use mobile and desktop clients for e-mail, Twitter and others over the similar web offerings, even when those are technically accessible to my browser and screen reader.

So, to cut a long story short, I think many mainstream web applications are still not ready, at least for me, for productive use, despite their advancements in technical accessibility. The reasons are the usability of these interfaces on the one hand, and the latency of fetching all that stuff over the internet, even on fast connections, on the other.

Pascal FinetteThe Open Source Disruption

Yesterday I gave a talk at Singularity University’s Executive Program on Open Source Disruption - it’s (somewhat) new content I developed; here’s the abstract of my talk:

The Open Source movement has upended the software world: Democratizing access, bringing billion dollar industries to their knees, toppling giants and simultaneously creating vast opportunities for the brave and unconventional. After decades in the making, the Open Source ideology, being kindled by ever cheaper and better technologies, is spreading like wildfire - and has the potential to disrupt many industries.

In his talk, Pascal will take you on a journey from the humble beginnings to the end of software as we knew it. He will make a case for why Open Source is an unstoppable force and present you with strategies and tactics to thrive in this brave new world.

And here’s the deck.

Yunier José Sosa VázquezAdd-on SDK 1.17 Now Available

Add-on SDK: A new version of the tool Mozilla created for developing add-ons has been released and is available from our website.

Download Add-on SDK 1.17.

This time there are no big new features added to the Add-on SDK, since the main goal of this release is to update the cfx command and keep extensions compatible with the newer versions of Firefox (32+).

The biggest change to the Add-on SDK will arrive in the next version, when cfx is replaced by JPM (Jetpack Manager), a Node.js module. According to Mozilla's developers, packaging the dependencies of each add-on with cfx was very complex; JPM is simpler because it drops some of the tasks cfx used to perform.

JPM will also let add-on developers create and use npm modules as dependencies in their add-ons. In this article published on the Developers site you can learn how to work with JPM and the changes you need to make to your add-on.
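
For anyone who has never seen SDK code, here is a minimal sketch of an add-on entry point (main.js); the id, label and icon path are placeholders of my own, and the same code works whether you package it with cfx or jpm:

// A toolbar button that opens mozilla.org in a new tab when clicked.
var buttons = require("sdk/ui/button/action");
var tabs = require("sdk/tabs");

buttons.ActionButton({
  id: "visit-mozilla",        // placeholder id
  label: "Visit Mozilla",     // placeholder label
  icon: "./icon-16.png",      // placeholder icon shipped in the add-on's data/ directory
  onClick: function () {
    tabs.open("https://www.mozilla.org/");
  }
});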

If you are interested in creating add-ons for Firefox, you can visit our Developers site and dig deeper. There you will find presentations, workshops and articles on the subject.

Before downloading Add-on SDK 1.17, remember that you can help improve it by reporting bugs, looking at the code and contributing your own solutions, or simply sharing your impressions of this new version.

Will Kahn-GreeneInput: Removing the frontpage chart

I've been working on Input for a while now. One of the things I've actively disliked was the chart on the front page. This blog post talks about why I loathe it and then what's happening this week.

First, here's the front page dashboard as it is today:

Input front page dashboard (October 2014)

When I started, Input gathered feedback solely on the Firefox desktop web browser. It was a one-product feedback gathering site. Because it was gathering feedback for a single product, the front page dashboard was entirely about that single product. All the feedback talked about that product. The happy/sad chart was about that product. Today, Input gathers feedback for a variety of products.

When I started, it was nice to have a general happy/sad chart on the front page because hardly anyone looked at it, and the people who did understood why the chart slants so negatively; they knew about the heavy negative bias and could view the chart in that light. Today, Input is viewed by a variety of people who have no idea how feedback on Input works or why it's so negatively biased.

When I started, Input didn't expose the data in helpful ways allowing people to build their own charts and dashboards to answer their specific questions. Thus there was a need for a dashboard to expose information from the data Input was gathering. I contend that the front page dashboard did this exceedingly poorly: what do the happy/sad lines actually mean? If they dip, what does that mean? If they spike, what does that mean? There's not enough information in the chart to draw any helpful conclusions. Today, Input has an API allowing anyone to fetch data from Input in JSON format and generate their own dashboards, of which there are several out there.
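
As a purely illustrative sketch of what building on that API can look like (the endpoint URL and response fields below are assumptions on my part, so check the API docs rather than trusting them):

// Fetch recent feedback as JSON and log the happy/sad split.
var url = "https://input.mozilla.org/api/v1/feedback/?format=json";  // assumed endpoint
var request = new XMLHttpRequest();
request.open("GET", url);
request.onload = function () {
  var results = JSON.parse(request.responseText).results || [];
  var happyCount = results.filter(function (item) { return item.happy; }).length;
  console.log(happyCount + " happy out of " + results.length + " responses");
};
request.send();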

When I started, Input received some spam/abuse feedback, but the noise was far outweighed by the signal. Today, we get a ton of spam/abuse feedback. We still have no good way of categorizing spam/abuse as such and removing it from the system. That's something I want to work on more, but haven't had time to address. In the meantime, the front page dashboard chart has a lot of spammy noise in it. Thus the happy/sad lines aren't accurate.

Thus I argue we've far outlived the usefulness of the chart on the front page and it's time for it to go away.

So, what happens now? Bug 1080816 covers removing the front page dashboard chart. It covers some other changes to the front page, but I think I'm going to push those off until later since they're all pretty "up in the air".

If you depend on the front page dashboard chart, toss me an email. Depending on how many people depend on the front page chart and what the precise needs are, maybe we'll look into writing a better one.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1083790] Default version does not take into account is_active
  • [1078314] Missing links and broken unicode characters in some bugmail
  • [1072662] decrease the number of messages a jobqueue worker will process before terminating
  • [1086912] Fix BugUserLastVisit->get
  • [1062940] Please increase bmo’s alias length to match bugzilla 5.0 (40 chars instead of 20)
  • [1082113] The ComponentWatching extension should create a default watch user with a new database installation
  • [1082106] $dbh->bz_add_columns creates a foreign key constraint causing failure in checksetup.pl when it tries to re-add it later
  • [1084052] Only show “Add bounty tracking attachment” links to people who actually might do that (not everyone in core-security)
  • [1075281] bugmail filtering using “field name contains” doesn’t work correctly with flags
  • [1088711] New bugzilla users are unable to user bug templates
  • [1076746] Mentor field is missing in the email when a bug gets created
  • [1087525] fix movecomponents.pl creating duplicate rows in flag*clusions

discuss these changes on mozilla.tools.bmo.



Karl DubostList Of Google Web Compatibility Bugs On Firefox

As of today, there are 706 Firefox Mobile bugs and 205 Firefox Desktop bugs in the Mozilla Web Compatibility activity. These are OPEN bugs for many different companies (not only Google). We could add to that the 237 any-browser bugs already collected on Webcompat.com. Help is welcome.

On these, we have a certain number of bugs related to Google Web properties.

Google and Mozilla

Let's make it very clear: we have an ongoing, open discussion channel with Google about these bugs. For their own sake and privacy I will not name the people at Google who help deal with them, but they do the best they can to get us a resolution, which is not the case with every company. They know who they are, and I want to thank them for it.

That said, there are long-standing bugs where Firefox can't properly access the services developed by Google. The nature of these bugs is diverse. It can be wrong user agent sniffing, -webkit- CSS or JS, or a JS codepath using a very specific feature of Chrome. The most frustrating ones for the Web Compatibility people (and, in turn, for users) are the ones where you can see that there's a bit of CSS here and there breaking the user experience but, in the end, it seems it could work.

The issue is often with the "it seems it could work". We may have the impression at first sight that it will work, and then there is a hidden feature which has not been completely tested.

Also, Firefox is not the only browser with issues with Google services. The Opera browser, even after the switch to Blink, still has issues with some Google services.

List Of Google Bugs For Firefox Browsers

Here is the non-exhaustive list of bugs which are still open on Bugzilla. If you find bugs which are not valid anymore or resolved, please let us know. We do our best to find out what is resolved, but we might miss some sometimes. The more you test, the more it helps all users. With detailed bug reports and analysis of the issues, we have better and more useful communication with Google engineers for eventually fixing the bugs.

You may find similar bugs on webcompat.com. We haven't yet made the full transition.

You can help us.

Otsukare.

Allison NaaktgeborenApplying Privacy Series: The 1st meeting

The 1st Meeting

Product Manager: People, this could be a game changer! Think of the paid content we could open up to non-english speakers in those markets. How fast can we get it into trunk?

Engineer: First we have to figure out what *it* is.

Product Manager: I want users to be able to click a button in the main dropdown menu and translate all their text.

Engineering Manager: Shouldn’t we verify with the user which language they want? Many in the EU speak multiple languages. Also do we want translation per page?

Product Manager:  Worry about translation per page later. Yeah, verify with user is fine as long as we only do it once.

Engineering Manager: It doesn’t quite work like that. If you want translation per page later, we’ll need to architect this so it can support that in the future.

Product Manager: …Fine

Engineer: What about pages that fail translation? What would we display in that case?

Product Manager: Throw an error bar at the top and show the original page. That’ll cover languages the service can’t handle too. Use the standard error template from UX.

Engineering Manager: What device actually does the translation? The phone?

Product Manager: No, make the server do it, bandwidth on phones is precious and spotty. When they start up the phone next, it should download the content already translated to our app.

Engineer: Ok, well if there’s a server involved, we need to talk to the Ops folks.

Engineering Manager: and the DBAs. We’ll also need to find who is the expert on user data handling. We could be handling a lot of that before this is out.

Project Manager: Next UI release is in 6 weeks. I’ll see about scheduling some time with Ops and the database team.

Product Manager: Can you guys pull it off?

Engineer: Depends on the server folks’ schedule.

Who brought up user data safety & privacy concerns in this conversation?

The Engineering Manager.

Robert O'CallahanAre We Fast Yet? Yes We Are!

Spidermonkey has passed V8 on Octane performance on arewefastyet, and is now leading V8 and JSC on Octane, Sunspider and Kraken.

Does this matter? Yes and no. On one hand, it's just a few JS benchmarks, real-world performance is much more complicated, and it's entirely possible that V8 (or even Chakra) could take the lead again in the future. On the other hand, beating your competitors on their own benchmarks is much more impressive than beating your competitors on benchmarks which you co-designed along with your engine to win on, which is the story behind most JS benchmarking to date.

This puts us in a position of strength, so we can say "these benchmarks are not very interesting; let's talk about other benchmarks (e.g. asm.js-related) and language features" without being accused of being sore losers.

Congratulations to the Spidermonkey team; great job!

Gregory SzorcImplications of Using Bugzilla for Firefox Patch Development

Mozilla is very close to rolling out a new code review tool based on Review Board. When I became involved in the project, I viewed it as an opportunity to start from a clean slate and design the ideal code development workflow for the average Firefox developer. When the design of the code review experience was discussed, I would push for decisions that were compatible with my utopian end state.

As part of formulating the ideal workflows and design of the new tool, I needed to investigate why we do things the way we do, whether they are optimal, and whether they are necessary. As part of that, I spent a lot of time thinking about Bugzilla's role in shaping the code that goes into Firefox. This post is a summary of my findings.

The primary goal of this post is to dissect the practices that Bugzilla influences and to prepare the reader for the potential to reassemble the pieces - to change the workflows - in the future, primarily around Mozilla's new code review tool. By showing that Bugzilla has influenced the popularization of what I consider non-optimal practices, it is my hope that readers start to question the existing processes and open up their mind to change.

Since the impetus for this post is the near deployment of Mozilla's new code review tool, many of my points will focus on code review.

Before I go into my findings, I'd like to explicitly state that while many of the things I'm about to say may come across as negativity towards Bugzilla, my intentions are not to put down Bugzilla or the people who maintain it. Yes, there are limitations in Bugzilla. But I don't think it is correct to point fingers and blame Bugzilla or its maintainers for these limitations. I think we got where we are following years of very gradual shifts. I don't think you can blame Bugzilla for the circumstances that led us here. Furthermore, Bugzilla maintainers are quick to admit the faults and limitations of Bugzilla. And, they are adamant about and instrumental in rolling out the new code review tool, which shifts code review out of Bugzilla. Again, my intent is not to put down Bugzilla. So please don't direct ire that way yourself.

So, let's drill down into some of the implications of using Bugzilla.

Difficult to Separate Contexts

The stream of changes on a bug in Bugzilla (including review comments) is a flat, linear list of plain text comments. This works great when the activity of a bug follows a nice, linear, singular topic flow. However, real bug activity does not happen this way. All but the most trivial bugs usually involve multiple points of discussion. You typically have discussion about what the bug is. When a patch comes along, reviewer feedback comes in both high-level and low-level forms. Each item in each group is its own logical discussion thread. When patches land, you typically have points of discussion tracking the state of this patch. Has it been tested, does it need uplift, etc.

Bugzilla has things like keywords, flags, comment tags, and the whiteboard to enable some isolation of these various contexts. However, you still have a flat, linear list of plain text comments that contain the meat of the activity. It can be extremely difficult to follow these many interleaved logical threads.

In the context of code review, lumping all review comments into the same linear list adds overhead and undermines the process of landing the highest-quality patch possible.

Review feedback consists of both high-level and low-level comments. High-level would be things like architecture discussions. Low-level would be comments on the code itself. When these two classes of comments are lumped together in the same text field, I believe it is easy to lose track of the high-level comments and focus on the low-level. After all, you may have a short paragraph of high-level feedback right next to a mountain of low-level comments. Your eyes and brain tend to gravitate towards the larger set of more concrete low-level comments because you sub-consciously want to fix your problems and that large mass of text represents more problems, easier problems to solve than the shorter and often more abstract high-level summary. You want instant gratification and the pile of low-level comments is just too tempting to pass up. We have to train ourselves to temporarily ignore the low-level comments and focus on the high-level feedback. This is very difficult for some people. It is not an ideal disposition. Benjamin Smedberg's recent post on code review indirectly talks about some of this by describing his rational approach of tackling the high-level feedback first.

As review iterations occur, the bug devolves into a mix of comments related to high and low-level comments. It thus becomes harder and harder to track the current high-level state of the feedback, as they must be picked out from the mountain of low-level comments. If you've ever inherited someone else's half-finished bug, you know what I'm talking about.

I believe that Bugzilla's threadless and contextless comment flow disposes us towards focusing on low-level details instead of the high-level. I believe that important high-level discussions aren't occurring at the rate they need and that technical debt increases as a result.

Difficulty Tracking Individual Items of Feedback

Code review feedback consists of multiple items of feedback. Each one is related to the review at hand. But oftentimes each item can be considered independent from others, relevant only to a single line or section of code. Style feedback is one such example.

I find it helps to model code review as a tree. You start with one thing you want to do. That's the root node. You split that thing into multiple commits. That's a new layer on your tree. Finally, each comment on those commits and the comments on those comments represent new layers to the tree. Code review thus consists of many related, but independent branches, all flowing back to the same central concept or goal. There is a one to many relationship at nearly every level of the tree.
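
To make the tree model concrete, here is a tiny sketch of the structure I have in mind; the field names are mine and not anything Bugzilla (or any review tool) actually exposes:

// One logical change (the root), split into commits, each with its own feedback threads.
var review = {
  goal: "Refactor the widget frobnicator",   // hypothetical example change
  commits: [
    {
      summary: "Part 1: extract helper",
      feedback: [
        { kind: "high-level", text: "Should this live in its own module?", replies: [] },
        { kind: "low-level",  text: "Nit: trailing whitespace", replies: [] }
      ]
    },
    {
      summary: "Part 2: use the new helper",
      feedback: []
    }
  ]
};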

Again, Bugzilla lumps all these individual items of feedback into a linear series of flat text blobs. When you are commenting on code, you do get some code context printed out. But everything is plain text.

The result of this is that tracking the progress on individual items of feedback - individual branches in our conceptual tree - is difficult. Code authors must pore through text comments and manually keep an inventory of their progress towards addressing the comments. Some people copy the review comment into another text box or text editor and delete items once they've fixed them locally! And, when it comes time to review the new patch version, reviewers must go through the same exercise in order to verify that all their original points of feedback have been adequately addressed! You've now redundantly duplicated the feedback tracking mechanism among at least two people. That's wasteful in and of itself.

Another consequence of this unstructured feedback tracking mechanism is that points of feedback tend to get lost. On complex reviews, you may be sorting through dozens of individual points of feedback. It is extremely easy to lose track of something. This could have disastrous consequences, such as the accidental creation of a 0day bug in Firefox. OK, that's a worst case scenario. But I know from experience that review comments can and do get lost. This results in new bugs being filed, author and reviewer double checking to see if other comments were not acted upon, and possibly severe bugs with user impacting behavior. In other words, this unstructured tracking of review feedback tends to lessen code quality and is thus a contributor to technical debt.

Fewer, Larger Patches

Bugzilla's user interface encourages the writing of fewer, larger patches. (The opposite would be many, smaller patches - sometimes referred to as micro commits.)

This result is achieved by a user interface that handles multiple patches so poorly that it effectively discourages that approach, driving people to create larger patches.

The stream of changes on a bug (including review comments) is a flat, linear list of plain text comments. This works great when the activity of a bug follows a nice, linear flow. However, reviewing multiple patches doesn't work in a linear model. If you attach multiple patches to a bug, the review comments and their replies for all the patches will be interleaved in the same linear comment list. This flies in the face of the reality that each patch/review is logically its own thread that deserves to be followed on its own. The end result is that it is extremely difficult to track what's going on in each patch's review. Again, we have different contexts - different branches of a tree - all living in the same flat list.

Because conducting review on separate patches is so painful, people are effectively left with two choices: 1) write a single, monolithic patch 2) create a new bug. Both options suck.

Larger, monolithic patches are harder and slower to review. Larger patches require much more cognitive load to review, as the reviewer needs to capture the entire context in order to make a review determination. This takes more time. The increased surface area of the patch also increases the likelihood that the reviewer will find something wrong and will require a re-review. The added complexity of a larger patch also means the chances of a bug creeping in are higher, leading to more bugs being filed and more reviews later. The more review cycles the patch goes through, the greater the chances it will suffer from bit rot and will need updating before it lands, possibly incurring yet more rounds of review. And, since we measure progress in terms of code landing, the delay to get a large patch through many rounds of review makes us feel lethargic and demotivates us. Large patches have intrinsic properties that lead to compounding problems and increased development cost.

As bad as large patches are, they are roughly in the same badness range as the alternative: creating more bugs.

When you create a new bug to hold the context for the review of an individual commit, you are doing a lot of things, very few of them helpful. First, you must create a new bug. There's overhead to do that. You need to type in a summary, set up the bug dependencies, CC the proper people, update the commit message in your patch, upload your patch/attachment to the new bug, mark the attachment on the old bug obsolete, etc. This is arguably tolerable, especially with tools that can automate the steps (although I don't believe there is a single tool that does all of what I mentioned automatically). But the badness of multiple bugs doesn't stop there.

Creating multiple bugs fragments the knowledge and history of your change and diminishes the purpose of a bug. You got in the situation of creating multiple bugs because you were working on a single logical change. It just so happened that you needed/wanted multiple commits/patches/reviews to represent that singular change. That initial change was likely tracked by a single bug. And now, because of Bugzilla's poor user interface around multiple patch reviews, you find yourself creating yet another bug. Now you have two bug numbers - two identifiers that look identical, only varying by their numeric value - referring to the same logical thing. We've started with a single bug number referring to your logical change and created what are effectively sub-issues, but allocated them in the same namespace as normal bugs. We've diminished the importance of the average bug. We've introduced confusion as to where one should go to learn about this single, logical change. Should I go to bug X or bug Y? Sure, you can likely go to one and ultimately find what you were looking for. But that takes more effort.

Creating separate bugs for separate reviews also makes refactoring harder. If you are going the micro commit route, chances are you do a lot of history rewriting. Commits are combined. Commits are split. Commits are reordered. And if those commits are all mapping to individual bugs, you potentially find yourself in a huge mess. Combining commits might mean resolving bugs as duplicates of each other. Splitting commits means creating yet another bug. And let's not forget about managing bug dependencies. Do you set up your dependencies so you have a linear, waterfall dependency corresponding to commit order? That logically makes sense, but it is hard to keep in sync. Or, do you just make all the review bugs depend on a single parent bug? If you do that, how do you communicate the order of the patches to the reviewer? Manually? That's yet more overhead. History rewriting - an operation that modern version control tools like Git and Mercurial have made lightweight and which users love because it doesn't constrain them to pre-defined workflows - thus becomes much more costly. The cost may even be so high that some people forego rewriting completely, trading their effort for some poor reviewer who has to inherit a series of patches that isn't organized as logically as it could be. Like larger patches, this increases the cognitive load required to perform reviews and increases development costs.

As you can see, reviewing multiple, smaller patches with Bugzilla often leads to a horrible user experience. So, we find ourselves writing larger, monolithic patches and living with their numerous deficiencies. At least with monolithic patches we have a predictable outcome for how interaction with Bugzilla will play out!

I have little doubt that large patches (whose existence is influenced by the UI of Bugzilla) slows down the development velocity of Firefox.

Commit Message Formatting

The heavy involvement of Bugzilla in our code development lifecycle has influenced how we write commit messages. Let's start with the obvious example. Here is our standard commit message format for Firefox:

Bug 1234 - Fix some feature foo; r=gps

The bug is right there at the front of the commit message. That prominent placement is effectively saying the bug number is the most important detail about this commit - everything else is ancillary.

Now, I'm sure some of you are saying, but Greg, the short description of the change is obviously more important than the bug number. You are right. But we've allowed ourselves to make the bug and the content therein more important than the commit.

Supporting my theory is the commit message content following the first/summary line. That data is almost always - wait for it - nothing: we generally don't write commit messages that contain more than a single summary line. My repository forensics show that less than 20% of commit messages to Firefox in 2014 contain multiple lines (this excludes merge and backout commits). (We are doing better than 2013 - the rate was less than 15% then).

Our commit messages are basically saying, here's a highly-abbreviated summary of the change and a pointer (a bug number) to where you can find out more. And of course loading the bug typically reveals a mass of interleaved comments on various topics, hardly the high-level summary you were hoping was captured in the commit message.

Before I go on, in case you are on the fence as to the benefit of detailed commit messages, please read Phabricator's recommendations on revision control and writing reviewable code. I think both write-ups are terrific and are excellent templates that apply to nearly everyone, especially a project as large and complex as Firefox.
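
To make that concrete, here is an invented sketch (not a real commit from the tree) of the kind of multi-line commit message that carries its own context:

Bug 1234 - Cache the frobnication table to speed up startup; r=gps

The previous implementation recomputed the frobnication table on every
call, which showed up in profiles as a noticeable chunk of startup time.
This patch builds the table once, caches it, and invalidates the cache
when the relevant pref changes.

The cache is deliberately per-process to avoid introducing IPC traffic;
see the review discussion for the alternatives that were considered.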

Anyway, there are many reasons why we don't capture a detailed, multi-line commit message. For starters, you aren't immediately rewarded for doing it: writing a good commit message doesn't really improve much in the short term (unless someone yells at you for not doing it). This is a generic problem applicable to all organizations and tools. This is a problem that culture must ultimately rectify. But our tools shouldn't reinforce the disposition towards laziness: they should reward best practices.

I don't think Bugzilla and our interactions with it do an adequate job of rewarding good commit message writing. Chances are your mechanism for posting reviews to Bugzilla or posting the publishing of a commit to Bugzilla (pasting the URL in the simple case) brings up a text box for you to type review notes, a patch description, or extra context for the landing. These should be going in the commit message, as they are the type of high-level context and summarizations of choices or actions that people crave when discerning the history of a repository. But because that text box is there, taunting you with its presence, we write content there instead of in the commit message. Even where tools like bzexport exist to upload patches to Bugzilla, potentially nipping this practice in the bud, it still engages in frustrating behavior like reposting the same long commit message on every patch upload, producing unwanted bug spam. Even a tool that is pretty sensibly designed has an implementation detail that undermines a good practice.

Machine Processing of Patches is Difficult

I have a challenge for you: identify all patches currently under consideration for incorporation in the Firefox source tree, run static analysis on them, and tell me if they meet our code style policies.

This should be a solved problem and deployed system at Mozilla. It isn't. Part of the problem is because we're using Bugzilla for conducting review and doing patch management. That may sound counter-intuitive at first: Bugzilla is a centralized service - surely we can poll it to discover patches and then do stuff with those patches. We can. In theory. Things break down very quickly if you try this.

We are uploading patch files to Bugzilla. Patch files are representations of commits that live outside a repository. In order to get the full context - the result of the patch file - you need all the content leading up to that patch file - the repository data. When a naked patch file is uploaded to Bugzilla, you don't always have this context.

For starters, you don't know with certainty which repository the patch belongs to because that isn't part of the standard patch format produced by Mercurial or Git. There are patches for various repositories floating around in Bugzilla. So now you need a way to identify which repository a patch belongs to. It is a solvable problem (aggregate data for all repositories and match patches based on file paths, referenced commits, etc), albeit one Mozilla has not yet solved (but should).
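
As a rough sketch of the kind of heuristic that parenthetical describes (entirely hypothetical code, there is no real service behind it): score each candidate repository by how many of the paths touched in the patch actually exist there.

// Hypothetical heuristic: guess which repository a naked patch belongs to
// by counting how many of its touched file paths exist in each candidate repo.
function guessRepository(patchPaths, repositories) {
  // repositories: array of { name: "mozilla-central", paths: Set of known file paths }
  var best = null;
  repositories.forEach(function (repo) {
    var hits = patchPaths.filter(function (path) {
      return repo.paths.has(path);
    }).length;
    if (best === null || hits > best.hits) {
      best = { name: repo.name, hits: hits };
    }
  });
  return best;
}

// e.g. guessRepository(["dom/svg/SVGRect.cpp"], knownRepositories);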

Assuming you can identify the repository a patch belongs to, you need to know the parent commit so you can apply this patch. Some patches list their parent commits. Others do not. Even those that do may lie about it. Patches in MQ don't update their parent field when they are pushed, only after they are refreshed. You could be testing and uploading a patch with a different parent commit than what's listed in the patch file! Even if you do identify the parent commit, this commit could belong to another patch under consideration that's also on Bugzilla! So now you need to assemble a directed graph with all the patches known from Bugzilla applied. Hopefully they all fit in nicely.

Of course, some patches don't have any metadata at all: they are just naked diffs or are malformed commits produced by tools that e.g. attempt to convert Git commits to Mercurial commits (Git users: you should be using hg-git to produce proper Mercurial commits for Firefox patches).

Because Bugzilla is talking in terms of patch files, we often lose much of the context needed to build nice tools, preventing numerous potential workflow optimizations through automation. There are many things machines could be doing for us (such as looking for coding style violations). Instead, humans are doing this work and costing Mozilla a lot of time and lost developer productivity in the process. (A human costs ~$100/hr. A machine on EC2 is pennies per hour and should do the job with lower latency. In other words, you can operate over 300 machines 24 hours a day for what you may an engineer to work an 8 hour shift.)

Conclusion

I have outlined a few of the side-effects of using Bugzilla as part of our day-to-day development, review, and landing of changes to Firefox.

There are several takeways.

First, one cannot argue the fact that Firefox development is bug(zilla) centric. Nearly every important milestone in the lifecycle of a patch involves Bugzilla in some way. This has its benefits and drawbacks. This article has identified many of the drawbacks. But before you start crying to expunge Bugzilla from the loop completely, consider the benefits, such as a place anyone can go to to add metadata or comments on something. That's huge. There is a larger discussion to be had here. But I don't want to be inviting it quite yet.

A common thread between many of the points above is Bugzilla's unstructured and generic handling of code and metadata attached to it (patches, review comments, and landing information). Patches are attachments, which can be anything under the sun. Review comments are plain text comments with simple author, date, and tag metadata. Landings are also communicated by plain text review comments (at least initially - keywords and flags are used in some scenarios).

By being a generic tool, Bugzilla throws away a lot of the rich metadata that we produce. That data is still technically there in many scenarios. But it becomes extremely difficult if not practically impossible for both humans and machines to access efficiently. We lose important context and feedback by normalizing all this data to Bugzilla. This data loss creates overhead and technical debt. It slows Mozilla down.

Fortunately, the solutions to these problems and shortcomings are conceptually simple (and generally applicable): preserve rich context. In the context of patch distribution, push commits to a repository and tell someone to pull those commits. In the context of code review, create sub-reviews for different commits and allow tracking and easy-to-follow (likely threaded) discussions on found issues. Design workflow to be code first, not tool or bug first. Optimize workflows to minimize people time. Lean heavily on machines to do grunt work. Integrate issue tracking and code review, but not too tightly (loosely coupled, highly cohesive). Let different tools specialize in the handling of different forms of data: let code review handle code review. Let Bugzilla handle issue tracking. Let a landing tool handle tracking the state of landings. Use middleware to make them appear as one logical service if they aren't designed to be one from the start (such as is Mozilla's case with Bugzilla).

Another solution that's generally applicable is to refine and optimize the whole process to land a finished commit. Your product is based on software. So anything that adds overhead or loss of quality in the process of developing that software is fundamentally a product problem and should be treated as such. Any time and brain cycles lost to development friction or bugs that arise from things like inadequate code reviews tools degrade the quality of your product and take away from the user experience. This should be plain to see. Attaching a cost to this to convince the business-minded folks that it is worth addressing is a harder matter. I find management with empathy and shared understanding of what amazing tools can do helps a lot.

If I had to sum up the solution in one sentence, it would be: invest in tools and developer happiness.

I hope to soon publish a post on how Mozilla's new code review tool addresses many of the workflow deficiencies present today. Stay tuned.

Soledad PenadesMozFest 2014, day 2

As I was typing the final paragraphs of my previous post, hundreds of Flame devices were being handed to MozFest attendees who had got involved in sessions the day before.

When I arrived (late, because I felt like a lazy slug), there was a queue in the flashing station, which was, essentially, a table with a bunch of awesome Mozilla employees and volunteers from all over the world, working in shifts to make sure all those people with phones using Firefox OS 1.3 were upgraded to 2.1. I don’t have the exact numbers, but I believe the amount was close to 1000 phones. One thousand phones. BAM. Super amazing work, friends. **HATS OFF**

Potch was also improving the Flame starter guide. It had been renamed to Flame On, so go grab that one if you got a Flame and want to know what you can do now. If you want to contribute to the guide, here’s the code.

I (figuratively) rolled up my sleeves (I was wearing a t-shirt), and joined Potch’s table in their effort to enable ADB+DevTools in the newly unboxed phones, so that the flashing table could jump straight to that part of the process. Not everybody knew about that and they went directly to the other queue, so at some point Marcia went person by person and enabled ADB+DevTools in those phones. Which I found out when I tried to help by making sure everybody had that done… and it turned out that it had already happened. Too late, Sole!

They called us for “the most iconic clipart in Mozilla”, i.e. the group photo. After we posed seriously (“interview picture”), smiling and scary, we went upstairs again to deal with the flow of new Flame owners.

I helped a bunch of people set up WebIDE and explained to them how it could be used to quickly get started in developing apps, install the simulators, try their app in them and on the phones, etc. But (cue dramatic voice) I saw versions of Firefox I hadn’t seen for years, had to reminisce about things I hadn’t done in even longer (installing udev rules) and also did things that looked straight out of a nightmare (like installing an unsigned driver in Windows 8). Basically, getting this up and running is super easy on a Mac, less so on Linux, and quite tedious on Windows. The good news is: we’re working on making the connection part easier!

My favourite part of helping someone set this environment up was then asking, and learning, about how they planned to use it, and what tech is like in their countries. As I said, MozFest has people from all the places, and that gave me the chance to understand how they use our tools. For example, they might just have intermittent internet access which is also metered, BUT they have pretty decent local networks in schools or unis, so it’s feasible to get just one person to download the data (e.g. an updated Firefox) and then everyone else can go to the uni with their laptop to copy all that data. We also had a chance to discuss what sort of apps they are looking to build, and hopefully we will keep in touch so that I can help empower and teach them and then they can spread that knowledge to more people locally! Yay collaboration!

At some point I went out to get some food and get some quiet time. I was drinking water constantly so that was good for my throat but I was feeling little stings of pain occasionally.

On the way back, I grabbed some coffee nearby, and when I entered the college I stumbled upon Rosana, Krupa and Amy, who were having some interesting discussions in the lobby. We left with a great life lesson from Amy: if someone is acting like a jerk, perhaps they have a terrible shitty job.

Upstairs to the 6th floor again, I stumble upon Bobby this time and we run a quick postmortem-so-far: mostly good experience, but I feel there’s too much noise for bigger groups, I get distracted and it’s terrible. I also should not let supertechnical people hijack conversations that scare less tech-savvy people away, and I should also explicitly ask each person for questions, not leave it up to them to ask (because they might be afraid of taking the initiative). I should know better, but I don’t facilitate sessions every day so I’m a bit out of my element. I’ll get better! I let Bobby eat his lunch (while standing), and go back to the MEGABOOTH. It’s still a hive of activity. I help where I can. WebIDE questions, Firefox OS questions, you name it.

I also had a chance to chat with Ioana, Flaki, Bebe and other members of the Romania community. We had interacted via Twitter before but never met! They’re supercool and I’m going to be visiting their country next month so we’re all super excited! Yay!

As the evening approaches the area starts to calm down. At some point we can see the flashing station volunteers again, once the queue is gone. They are still in one piece! I start to think they’re superhuman.

It’s starting to be demo-time again! I move downstairs to the 4th floor where people are installing screens and laptops for their demos, but before I know it someone comes nearby and entices me to go back to the Art Room where they are starting the party already. How can I say no?

I go there, they’ve turned off the lights for better atmosphere and so we can see the projected works in all their glory. It feels like being in the Tate Tanks, i.e. great and definitely atmospheric!

Forrest from NoFlo is there; he’s used Mirobot, a Logo-like robot kit, connected to NoFlo to program it (and I think it used the webcam as input too):

When I come back to the 4th they’re having some announcements and wrap-up speeches, thanking everyone who’s put their efforts into making the festival a success. There’s a mention for pretty much everyone, so our hands hurt! Also, Dees revealed his true inner self:

I realised Bobby had changed to wear even more GOLD. Space wranglers could be identified because they were wearing a sort of golden pashmina, but Bobby took it further. Be afraid, mr. Tom “Gold pants” Dale!

And then on to the demos, there was a table full of locks and I didn’t know what it was about:

until someone explained to me that those locks came in various levels of difficulty and were there to learn how to pick locks! Which I started doing:

Now I cannot stop thinking about cylinders and feeling the mechanism each time I open doors… the harm has been done!

Chris Lord had told me they would be playing at MozFest but I thought I had missed them the night before. No! they played on Sunday! Everyone, please welcome The Vanguards:

And it wasn’t too long until the party was over and it was time to go home! Exhausted, but exhilarated!

I said goodbye to the Mozilla Reps gathering in front of the college, and thanks, and wished them a happy safe journey back home. Not sure why, since they were just going to have dinner, but I was loaded with good vibes and that felt like the right thing to do.

And so that was MozFest 2014 for me. Chaotic as usual, and I hardly had time to see anything outside of the MEGABOOTH. I’m so sorry I missed so many interesting sessions, but I’m glad I helped so many people too, so there’s that!


Kim MoirMozilla pushes - September 2014

Here's September 2014's monthly analysis of the pushes to our Mozilla development trees.
You can load the data as an HTML page or as a json file.


Trends
Surprise! No records were broken this month.

Highlights
12267 pushes
409 pushes/day (average)
Highest number of pushes/day: 646 pushes on September 10, 2014
22.6 pushes/hour (average)

General Remarks
Try accounts for around 36% of pushes and Gaia-Try comprises about 32%. The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 22% of all the pushes.

Records
August 2014 was the month with the most pushes (13,090 pushes)
August 2014 has the highest pushes/day average with 620 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
August 20, 2014 had the highest number of pushes in one day with 690 pushes





Yunier José Sosa VázquezFirefox and Thunderbird channels updated

Updates are available for Firefox and Thunderbird. This includes version 15 of the Adobe Flash Player plugin and the Android versions of Firefox.

Release: Firefox 33.0.1, Thunderbird 31.2.0, Firefox Mobile 33.0

Beta: Firefox 34

Aurora: Firefox 35

Nightly: Firefox 36 (with separate processes thanks to Electrolysis) and Thunderbird 36

Go to Downloads

Tim TaubertDeploying TLS the hard way

  1. How does TLS work?
  2. The certificate
  3. (Perfect) Forward Secrecy
  4. Choosing the right cipher suites
  5. HTTP Strict Transport Security
  6. HSTS Preload List
  7. OCSP Stapling
  8. HTTP Public Key Pinning
  9. Known attacks

Last weekend I finally deployed TLS for timtaubert.de and decided to write up what I learned on the way hoping that it would be useful for anyone doing the same. Instead of only giving you a few buzz words I want to provide background information on how TLS and certain HTTP extensions work and why you should use them or configure TLS in a certain way.

One thing that bugged me was that most posts only describe what to do but not necessarily why to do it. I hope you appreciate me going into a little more detail to end up with the bigger picture of what TLS currently is, so that you will be able to make informed decisions when deploying yourselves.

To follow this post you will need some basic cryptography knowledge. Whenever you do not know or understand a concept, you should probably just head over to Wikipedia and take a few minutes to read up on it, or carry on and re-read the whole thing later.

Disclaimer: I am not a security expert or cryptographer but did my best to research this post thoroughly. Please let me know of any mistakes I might have made and I will correct them as soon as possible.

But didn’t Andy say this is all shit?

I read Andy Wingo’s blog post too and I really liked it. Everything he says in there is true. But what is also true is that TLS with a few add-ons is all we have nowadays, and we had better make the folks working for the NSA earn their money instead of not trying to encrypt traffic at all.

After you have finished reading this page, maybe go back to Andy’s post and read it again. If the details of TLS are still dark matter to you, you might have a better understanding of what he is ranting about than you had before.

So how does TLS work?

Every TLS connection starts with both parties sharing their supported TLS versions and cipher suites. As the next step the server sends its X.509 certificate to the browser.

Checking the server’s certificate

The following certificate checks need to be performed:

  • Does the certificate contain the server’s hostname?
  • Was the certificate issued by a CA that is in my list of trusted CAs?
  • Does the certificate’s signature verify using the CA’s public key?
  • Has the certificate expired already?
  • Was the certificate revoked?

All of these are obviously crucial checks. To query a certificate’s revocation status the browser will use the Online Certificate Status Protocol (OCSP), which I will describe in more detail in a later section.

After the certificate checks are done and the browser ensured it is talking to the right host both sides need to agree on secret keys they will use to communicate with each other.

Key Exchange using RSA

A simple key exchange would be to let the client generate a master secret and encrypt that with the server’s public RSA key given by the certificate. Both client and server would then use that master secret to derive symmetric encryption keys that will be used throughout this TLS session. An attacker could however simply record the handshake and session for later, when breaking the key has become feasible or the machine is susceptible to a vulnerability. They may then use the server’s private key to recover the whole conversation.

Key Exchange using (EC)DHE

When using (Elliptic Curve) Diffie-Hellman as the key exchange mechanism both sides have to collaborate to generate a master secret. They generate DH key pairs (which is a lot cheaper than generating RSA keys) and send their public key to the other party. With the private key and the other party’s public key the shared master secret can be calculated and then again be used to derive session keys. We can provide Forward Secrecy when using ephemeral DH key pairs. See the section below on how to enable it.
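If you want to see that derivation in action, here is a small sketch using nothing but the openssl command line (the file names are made up for illustration). Both pkeyutl invocations print the same digest because each side derives the identical shared secret from its own private key and the peer’s public key:

# generate shared DH parameters (slow) and one key pair per party
openssl genpkey -genparam -algorithm DH \
  -pkeyopt dh_paramgen_prime_len:2048 -out dhp.pem
openssl genpkey -paramfile dhp.pem -out alice.key
openssl genpkey -paramfile dhp.pem -out bob.key

# extract the public halves that would be exchanged over the wire
openssl pkey -in alice.key -pubout -out alice.pub
openssl pkey -in bob.key -pubout -out bob.pub

# both derivations yield the same secret
openssl pkeyutl -derive -inkey alice.key -peerkey bob.pub | openssl dgst -sha256
openssl pkeyutl -derive -inkey bob.key -peerkey alice.pub | openssl dgst -sha256

In a real handshake these key pairs are generated on the fly and thrown away afterwards, which is what makes the exchange ephemeral.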

We could in theory also provide forward secrecy with an RSA key exchange if the server generated an ephemeral RSA key pair, shared its public key, and then waited for the master secret to be sent by the client. As hinted above, RSA key generation is very expensive and does not scale in practice. That is why RSA key exchanges are not a practical option for providing forward secrecy.

After both sides have agreed on session keys the TLS handshake is done and they can finally start to communicate using symmetric encryption algorithms like AES that are much faster than asymmetric algorithms.

The certificate

Now that we understand authenticity is an integral part of TLS we know that in order to serve a site via TLS we first need a certificate. The TLS protocol can encrypt traffic between two parties just fine but the certificate provides the necessary authentication towards visitors.

Without a certificate a visitor could securely talk to either us, the NSA, or a different attacker but they probably want to talk to us. The certificate ensures by cryptographic means that they established a connection to our server.

Selecting a Certificate Authority (CA)

If you want a cheap certificate, have no specific needs, and only a single subdomain (e.g. www) then StartSSL is an easy option. Do of course feel free to take a look at different authorities - their services and prices will vary heavily.

In the chain of trust the CA plays an important role: by verifying that you are the rightful owner of your domain and signing your certificate it will let browsers trust your certificate. The browsers do not want to do all this verification themselves so they defer it to the CAs.

For your certificate you will need an RSA key pair, a public and private key. The public key will be included in your certificate and thus also signed by the CA.

Generating an RSA key and a certificate signing request

The example below shows how you can use OpenSSL on the command line to generate a key for your domain. Simply replace example.com with the domain of your website. example.com.key will be your new RSA key and example.com.csr will be the Certificate Signing Request that your CA needs to generate your certificate.

openssl req -new -newkey rsa:4096 -nodes -sha256 \
  -keyout example.com.key -out example.com.csr

We will use a SHA-256 based signature for integrity as Firefox and Chrome will phase out support for SHA-1 based certificates soon. The RSA keys used to authenticate your website will use a 4096 bit modulus. If you need to handle a lot of traffic or your server has a weak CPU you might want to use 2048 bit. Never go below that as keys smaller than 2048 bit are considered insecure.

Get a signed certificate

Sign up with the CA you chose and depending on how they handle this process you probably will have to first verify that you are the rightful owner of the domain that you claim to possess. StartSSL will do that by sending a token to postmaster@example.com (or similar) and then ask you to confirm the receipt of that token.

Now that you signed up and are the verified owner of example.com you simply submit the example.com.csr file to request the generation of a certificate for your domain. The CA will sign your public key and the other information contained in the CSR with their private key and you can finally download the certificate to example.com.crt.

Upload the .crt and .key files to your web server. Be aware that any intermediate certificate in the CA’s chain must be included in the .crt file as well - you can just cat them together. StartSSL’s free tier has an intermediate Class 1 certificate - make sure to use the SHA-256 version of it. All files should be owned by root and must not be readable by anyone else. Configure your web server to use those and TLS should be running with an out-of-the-box configuration.
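For Nginx, which expects the server certificate first followed by the intermediates, building that combined file could look like the sketch below (the intermediate’s file name is made up here and will depend on your CA):

cat example.com.crt intermediate.pem > example.com.chained.crt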

(Perfect) Forward Secrecy

To properly deploy TLS you will want to provide (Perfect) Forward Secrecy. Without forward secrecy TLS still seems to secure your communication today; it might however not do so if your private key is compromised in the future.

If a powerful adversary (think NSA) records all communication between a visitor and your server, they can decrypt all this traffic years later by stealing your private key or going the “legal” way to obtain it. This can be prevented by using short-lived (ephemeral) keys for key exchanges that the server will throw away after a short period.

Diffie-Hellman key exchanges

Using RSA with your certificate’s private and public keys for key exchanges is off the table as generating a 2048+ bit prime is very expensive. We thus need to switch to ephemeral (Elliptic Curve) Diffie-Hellman cipher suites. For DH you can generate a 2048 bit parameter once; choosing a private key afterwards is cheap.

openssl dhparam -out dhparam.pem 2048

Simply upload dhparam.pem to your server and instruct the web server to use it for Diffie-Hellman key exchanges. When using ECDH the predefined elliptic curve represents this parameter and no further action is needed.

(Nginx)
ssl_dhparam /path/to/ssl/dhparam.pem;

Apache unfortunately does not support custom DH parameters; the parameter is always set to 1024 bit and is not user-configurable. This will hopefully be fixed in future versions.

Session IDs

One of the most important mechanisms to improve TLS performance is Session Resumption. In a full handshake the server sends a Session ID as part of the “hello” message. On a subsequent connection the client can use this session ID and pass it to the server when connecting. Because both the server and the client have saved the last session’s “secret state” under the session ID they can simply resume the TLS session where they left off.

Now you might notice that this could violate forward secrecy as a compromised server might reveal the secret state for all session IDs if the cache is just large enough. The forward secrecy of a connection is thus bounded by how long the session information is retained on the server. Ideally, your server would use a medium-sized in-memory cache that is purged daily.

Apache lets you configure that using the SSLSessionCache directive, and you should use the high-performance cyclic buffer shmcb. Nginx has the ssl_session_cache directive, and you should use a cache of the shared type so it is shared between workers. The right size of those caches depends on the amount of traffic your server handles. You want browsers to resume TLS sessions but also to get rid of old entries roughly daily.
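As a rough sketch, the relevant directives could look like this; the cache sizes and timeouts are illustrative, not recommendations:

(Nginx)
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;

(Apache)
SSLSessionCache shmcb:/var/run/ssl_scache(512000)
SSLSessionCacheTimeout 86400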

Session Tickets

The second mechanism to resume a TLS session is Session Tickets. This extension transmits the server’s secret state to the client, encrypted with a key known only to the server. That ticket key is protecting the TLS connection now and in the future.

This too can violate forward secrecy if the key used to encrypt session tickets is compromised. The ticket (just like the session cache) contains all of the server’s secret state and would allow an attacker to reveal the whole conversation.

Nginx and Apache by default generate a session ticket key at startup and do unfortunately provide no way to rotate it. If your server is running for months without a restart then you will use that same session ticket key for months and breaking into your server could reveal every recorded TLS conversation since the web server was started.

Neither Nginx nor Apache has a sane way to work around this. Nginx might be able to rotate the key by reloading the server config, which is rather easy to implement with a cron job. Make sure to test that this actually works before relying on it though.

Thus if you really want to provide forward secrecy you should disable session tickets using ssl_session_tickets off for Nginx and SSLOpenSSLConfCmd Options -SessionTicket for Apache.

Choosing the right cipher suites

Mozilla’s guide on server side TLS provides a great list of modern cipher suites that needs to be put in your web server’s configuration. The combinations below are unfortunately only supported by modern browsers; for broader client support you might want to consider using the “intermediate” list.

ECDHE-RSA-AES128-GCM-SHA256:   \
ECDHE-ECDSA-AES128-GCM-SHA256: \
ECDHE-RSA-AES256-GCM-SHA384:   \
ECDHE-ECDSA-AES256-GCM-SHA384: \
DHE-RSA-AES128-GCM-SHA256:     \
DHE-DSS-AES128-GCM-SHA256:     \
[...]
!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK

All these cipher suites start with (EC)DHE which means they only support ephemeral Diffie-Hellman key exchanges for forward secrecy. The last line discards non-authenticated key exchanges, null-encryption (cleartext), legacy weak ciphers marked exportable by US law, weak ciphers (3)DES and RC4, weak MD5 signatures, and pre-shared keys.

Note: To ensure that the order of cipher suites is respected you need to set ssl_prefer_server_ciphers on for Nginx or SSLHonorCipherOrder on for Apache.
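Putting that together, a minimal sketch could look like the following, where “[...]” stands for the remainder of the cipher list quoted above:

(Nginx)
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:[...]:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers on;

(Apache)
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:[...]:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
SSLHonorCipherOrder on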

HTTP Strict Transport Security (HSTS)

Now that your server is configured to accept TLS connections you still want to support HTTP connections on port 80 to redirect old links and folks typing example.com in the URL bar to your shiny new HTTPS site.
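For Nginx, such a redirect can be as small as the sketch below (example.com is a stand-in for your domain); Apache can do the same with a Redirect directive or a mod_rewrite rule:

(Nginx)
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}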

At this point however a Man-In-The-Middle (or Woman-In-The-Middle) attack can easily intercept and modify traffic to deliver a forged HTTP version of your site to a visitor. The poor visitor might never know because they did not realize you offer TLS connections now.

To ensure your users are secured when they visit your site the next time, you want to send an HSTS header to enforce strict transport security. By sending this header the browser will not try to establish an HTTP connection next time but will directly connect to your website via TLS.

Strict-Transport-Security:
  max-age=15768000; includeSubDomains; preload

Sending these headers over an HTTPS connection (they will be ignored via HTTP) lets the browser remember that this domain wants strict transport security for the next six months (~15768000 seconds). The includeSubDomains token enforces TLS connections for every subdomain of your domain, and the non-standard preload token will be required for the next section.
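How you emit the header depends on your web server; a minimal sketch for the two servers used as examples throughout this post (Apache needs mod_headers enabled) could be:

(Nginx)
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload";

(Apache)
Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"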

HSTS Preload List

If after deploying TLS the very first connection of a visitor is genuine we are fine. Your server will send the HSTS header over TLS and the visitor’s browser remembers to use TLS in the future. The very first connection and every connection after the HSTS header expires however are still vulnerable to a {M,W}ITM attack.

To prevent this, Firefox and Chrome share an HSTS Preload List that basically includes HSTS headers for all sites that would send that header when visited anyway. So before connecting to a host, Firefox and Chrome check whether that domain is in the list and if so will not even try using an insecure HTTP connection.

Including your page in that list is easy, just submit your domain using the HSTS Preload List submission form. Your HSTS header must be set up correctly and contain the includeSubDomains and preload tokens to be accepted.

OCSP Stapling

OCSP - using an external server provided by the CA to check whether the certificate given by the server was revoked - might sound like a great idea at first. On second thought it actually sounds rather terrible. First, the CA providing the OCSP server suddenly has to be able to handle a lot of requests: every client opening a connection to your server will want to know whether your certificate was revoked before talking to you.

Second, the browser contacting a CA and passing the certificate is an easy way to monitor a user’s browsing behavior. If all CAs worked together they probably could come up with a nice data set of TLS sites that people visit, when and in what order (not that I know of any plans they actually wanted to do that).

Let the server do the work for your visitors

OCSP Stapling is a TLS extension that enables the server to query its certificate’s revocation status at regular intervals in the background and send an OCSP response with the TLS handshake. The stapled response itself cannot be faked as it needs to be signed with the CA’s private key. Enabling OCSP stapling thus improves performance and privacy for your visitors immediately.

You need to create a certificate file that contains your CA’s root certificate prepended by any intermediate certificates that might be in your CA’s chain. StartSSL has an intermediate certificate for Class 1 (the free tier) - make sure to use the one having the SHA-256 signature. Pass the file to Nginx using the ssl_trusted_certificate directive and to Apache using the SSLCACertificateFile directive.
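Besides the trusted-certificate file, a few more directives are needed to actually switch stapling on. A sketch, with illustrative paths, resolver, and cache size; note that Apache wants SSLStaplingCache in the server config, outside the virtual host section:

(Nginx)
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/ssl/ca-chain.pem;
resolver 8.8.8.8;

(Apache)
SSLUseStapling on
SSLStaplingCache shmcb:/var/run/ocsp_stapling(128000)
SSLCACertificateFile /path/to/ssl/ca-chain.pem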

OCSP Must Staple

OCSP however is unfortunately not a silver bullet. If a browser does not know in advance it will receive a stapled response then the attacker might as well redirect HTTPS traffic to their server and block any traffic to the OCSP server (in which case browsers soft-fail). Adam Langley explains all possible attack vectors in great detail.

One solution might be the proposed OCSP Must Staple Extension. This would add another field to the certificate issued by the CA that says a server must provide a stapled OCSP response. The problem here is that the proposal expired and in practice it would take years for CAs to support that.

Another solution would be to implement a header similar to HSTS, that lets the browser remember to require a stapled OCSP response when connecting next time. This however has the same problems on first connection just like HSTS, and we might have to maintain a “OCSP-Must-Staple Preload List”. As of today there is unfortunately no immediate solution in sight.

HTTP Public Key Pinning (HPKP)

Even with all those security checks when receiving the server’s certificate we would still be completely out of luck in case your CA’s private key is compromised or your CA simply fucks up. We can prevent these kinds of attacks with an HTTP extension called Public Key Pinning.

Key pinning is a trust-on-first-use (TOFU) mechanism. The first time a browser connects to a host it lacks the information necessary to perform “pin validation”, so it will not be able to detect and thwart a {M,W}ITM attack. This feature only allows detection of these kinds of attacks after the first connection.

Generating an HPKP header

Creating an HPKP header is easy, all you need to do is to compute the base64-encoded “SPKI fingerprint” of your server’s certificate. An SPKI fingerprint is the output of applying SHA-256 to the public key information contained in your certificate.

openssl req -inform pem -pubkey -noout < example.com.csr |
  openssl pkey -pubin -outform der |
  openssl dgst -sha256 -binary |
  base64

The result of running the above command can be directly used as the pin-sha256 values for the Public-Key-Pins header as shown below:

Public-Key-Pins:
  pin-sha256="GRAH5Ex+kB4cCQi5gMU82urf+6kEgbVtzfCSkw55AGk=";
  pin-sha256="lERGk61FITjzyKHcJ89xpc6aDwtRkOPAU0jdnUqzW2s=";
  max-age=15768000; includeSubDomains

Upon receiving this header the browser knows that it has to store the pins given by the header and discard any certificates whose SPKI fingerprints do not match for the next six months (max-age=15768000). We specified the includeSubDomains token so the browser will verify pins when connecting to any subdomain.

Include the pin of a backup key

It is considered good practice to include at least a second pin, the SPKI fingerprint of a backup RSA key that you can generate exactly as the original one:

openssl req -new -newkey rsa:4096 -nodes -sha256 \
  -keyout example.com.backup.key -out example.com.backup.csr

In case your private key is compromised you might need to revoke your current certificate and request the CA to issue a new one. The old pin however would still be stored in browsers for six months which means they would not be able to connect to your site. By sending two pin-sha256 values the browser will later accept a TLS connection when any of the stored fingerprints match the given certificate.

Known attacks

In the past years (and especially the last year) a few attacks on SSL/TLS were published. Some of those attacks can be worked around on the protocol or crypto library level so that you basically do not have to worry as long as your web server is up to date and the visitor is using a modern browser. A few attacks however need to be thwarted by configuring your server properly.

BEAST (Browser Exploit Against SSL/TLS)

BEAST is an attack that only affects TLSv1.0. Exploiting this vulnerability is possible but rather difficult. You can either disable TLSv1.0 completely - which is certainly the preferred solution although you might neglect folks with old browsers on old operating systems - or you can just not worry. All major browsers have implemented workarounds so that it should not be an issue anymore in practice.

BREACH (Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext)

BREACH is a security exploit against HTTPS when using HTTP compression. BREACH is based on CRIME but unlike CRIME - which can be successfully defended against by turning off TLS compression (which is the default for Nginx and Apache nowadays) - BREACH can only be prevented by turning off HTTP compression. Another method to mitigate this would be to use cross-site request forgery (CSRF) protection or to disable HTTP compression selectively based on headers sent by the application.
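If you decide to turn off HTTP compression entirely, that is a small configuration change. A sketch; Apache setups vary depending on how mod_deflate is wired up, so treat the second part as one possible approach rather than the canonical one:

(Nginx)
gzip off;

(Apache)
SetEnv no-gzip 1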

POODLE (Padding Oracle On Downgraded Legacy Encryption)

POODLE is yet another padding oracle attack on TLS. Luckily it only affects the predecessor of TLS which is SSLv3. The only solution when deploying a new server is to just disable SSLv3 completely. Fortunately, we already excluded SSLv3 in our list of preferred ciphers previously. Firefox 34 will ship with SSLv3 disabled by default, Chrome and others will hopefully follow soon.
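To be explicit about it in your configuration you can whitelist the TLS protocol versions and leave SSLv3 (and SSLv2) out; a sketch:

(Nginx)
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

(Apache)
SSLProtocol all -SSLv2 -SSLv3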

Further reading

Thanks for reading and I am really glad you made it that far! I hope this post did not discourage you from deploying TLS - after all, getting your setup right is the most important thing. And it certainly is better to know what you are getting yourselves into than to leave your visitors unprotected.

If you want to read even more about setting up TLS, the Mozilla Wiki page on Server-Side TLS has more information and proposed web server configurations.

Thanks a lot to Frederik Braun for taking the time to proof-read this post and helping to clarify a few things!

Daniel Stenbergdaniel.haxx.se episode 8

Today I hesitated to make my new weekly video episode. I looked at the viewer numbers and how they have basically dwindled over the last few weeks. I’m not making this video series interesting enough for a very large crowd of people. I’m re-evaluating whether I should do them at all, or if I can do something to spice them up…

… or perhaps just not look at the viewer numbers at all and just do what I think is fun?

I decided I’ll go with the latter for now. After all, I enjoy making these and they usually give me some interesting feedback and discussions even if the numbers are really low. What good is a number anyway?

This week’s episode:

Personal

Firefox

Fun

HTTP/2

TALKS

  • I’m offering two talks for FOSDEM

curl

  • release next Wednesday
  • bug fixing period
  • security advisory is pending

wget

John O'Duinn“Oughta See” by General Fuzz

Well, well, well… I’m very happy to discover that General Fuzz just put out yet another album. Tonight was a great first-listen, and I know this will make a happy addition to tomorrow’s quiet Sunday morning first-coffee. Thanks, James!

Click here to jump over to the music and listen for yourself. On principle, his music is free-to-download, so try it and if you like it, help spread the word.

(ps: if you like this album, check out his *6* other albums, also all available for free download on the same site – donations welcome!)

Gregory SzorcImplications of Using Bugzilla for Firefox Patch Development

Mozilla is very close to rolling out a new code review tool based on Review Board. When I became involved in the project, I viewed it as an opportunity to start from a clean slate and design the ideal code development workflow for the average Firefox developer. When the design of the code review experience was discussed, I would push for decisions that were compatible with my utopian end state.

As part of formulating the ideal workflows and design of the new tool, I needed to investigate why we do things the way we do, whether they are optimal, and whether they are necessary. As part of that, I spent a lot of time thinking about Bugzilla's role in shaping the code that goes into Firefox. This post is a summary of my findings.

The primary goal of this post is to dissect the practices that Bugzilla influences and to prepare the reader for the potential to reassemble the pieces - to change the workflows - in the future, primarily around Mozilla's new code review tool. By showing that Bugzilla has influenced the popularization of what I consider non-optimal practices, it is my hope that readers start to question the existing processes and open up their mind to change.

Since the impetus for this post is the near deployment of Mozilla's new code review tool, many of my points will focus on code review.

Before I go into my findings, I'd like to explicitly state that while many of the things I'm about to say may come across as negativity towards Bugzilla, my intentions are not to put down Bugzilla or the people who maintain it. Yes, there are limitations in Bugzilla. But I don't think it is correct to point fingers and blame Bugzilla or its maintainers for these limitations. I think we got where we are following years of very gradual shifts. I don't think you can blame Bugzilla for the circumstances that led us here. Furthermore, Bugzilla maintainers are quick to admit the faults and limitations of Bugzilla. And, they are adamant about and instrumental in rolling out the new code review tool, which shifts code review out of Bugzilla. Again, my intent is not to put down Bugzilla. So please don't direct ire that way yourself.

So, let's drill down into some of the implications of using Bugzilla.

Difficult to Separate Contexts

The stream of changes on a bug in Bugzilla (including review comments) is a flat, linear list of plain text comments. This works great when the activity of a bug follows a nice, linear, singular topic flow. However, real bug activity does not happen this way. All but the most trivial bugs usually involve multiple points of discussion. You typically have discussion about what the bug is. When a patch comes along, reviewer feedback comes in both high-level and low-level forms. Each item in each group is its own logical discussion thread. When patches land, you typically have points of discussion tracking the state of this patch. Has it been tested, does it need uplift, etc.

Bugzilla has things like keywords, flags, comment tags, and the whiteboard to enable some isolation of these various contexts. However, you still have a flat, linear list of plain text comments that contain the meat of the activity. It can be extremely difficult to follow these many interleaved logical threads.

In the context of code review, lumping all review comments into the same linear list adds overhead and undermines the process of landing the highest-quality patch possible.

Review feedback consists of both high-level and low-level comments. High-level would be things like architecture discussions. Low-level would be comments on the code itself. When these two classes of comments are lumped together in the same text field, I believe it is easy to lose track of the high-level comments and focus on the low-level. After all, you may have a short paragraph of high-level feedback right next to a mountain of low-level comments. Your eyes and brain tend to gravitate towards the larger set of more concrete low-level comments because you sub-consciously want to fix your problems and that large mass of text represents more problems, easier problems to solve than the shorter and often more abstract high-level summary. You want instant gratification and the pile of low-level comments is just too tempting to pass up. We have to train ourselves to temporarily ignore the low-level comments and focus on the high-level feedback. This is very difficult for some people. It is not an ideal disposition. Benjamin Smedberg's recent post on code review indirectly talks about some of this by describing his rational approach of tackling high-level first.

As review iterations occur, the bug devolves into a mix of high-level and low-level comments. It thus becomes harder and harder to track the current high-level state of the feedback, as it must be picked out from the mountain of low-level comments. If you've ever inherited someone else's half-finished bug, you know what I'm talking about.

I believe that Bugzilla's threadless and contextless comment flow disposes us towards focusing on low-level details instead of the high-level. I believe that important high-level discussions aren't occurring at the rate they need and that technical debt increases as a result.

Difficulty Tracking Individual Items of Feedback

Code review feedback consists of multiple items of feedback. Each one is related to the review at hand. But oftentimes each item can be considered independent from others, relevant only to a single line or section of code. Style feedback is one such example.

I find it helps to model code review as a tree. You start with one thing you want to do. That's the root node. You split that thing into multiple commits. That's a new layer on your tree. Finally, each comment on those commits and the comments on those comments represent new layers to the tree. Code review thus consists of many related, but independent branches, all flowing back to the same central concept or goal. There is a one to many relationship at nearly every level of the tree.

Again, Bugzilla lumps all these individual items of feedback into a linear series of flat text blobs. When you are commenting on code, you do get some code context printed out. But everything is plain text.

The result of this is that tracking the progress on individual items of feedback - individual branches in our conceptual tree - is difficult. Code authors must pore through text comments and manually keep an inventory of their progress towards addressing the comments. Some people copy the review comment into another text box or text editor and delete items once they've fixed them locally! And, when it comes time to review the new patch version, reviewers must go through the same exercise in order to verify that all their original points of feedback have been adequately addressed! You've now redundantly duplicated the feedback tracking mechanism among at least two people. That's wasteful in and of itself.

Another consequence of this unstructured feedback tracking mechanism is that points of feedback tend to get lost. On complex reviews, you may be sorting through dozens of individual points of feedback. It is extremely easy to lose track of something. This could have disastrous consequences, such as the accidental creation of a 0day bug in Firefox. OK, that's a worst case scenario. But I know from experience that review comments can and do get lost. This results in new bugs being filed, author and reviewer double checking to see if other comments were not acted upon, and possibly severe bugs with user impacting behavior. In other words, this unstructured tracking of review feedback tends to lessen code quality and is thus a contributor to technical debt.

Fewer, Larger Patches

Bugzilla's user interface encourages the writing of fewer, larger patches. (The opposite would be many, smaller patches - sometimes referred to as micro commits.)

This result is achieved by a user interface that handles multiple patches so poorly that it effectively discourages that approach, driving people to create larger patches.

The stream of changes on a bug (including review comments) is a flat, linear list of plain text comments. This works great when the activity of a bug follows a nice, linear flow. However, reviewing multiple patches doesn't work in a linear model. If you attach multiple patches to a bug, the review comments and their replies for all the patches will be interleaved in the same linear comment list. This flies in the face of the reality that each patch/review is logically its own thread that deserves to be followed on its own. The end result is that it is extremely difficult to track what's going on in each patch's review. Again, we have different contexts - different branches of a tree - all living in the same flat list.

Because conducting review on separate patches is so painful, people are effectively left with two choices: 1) write a single, monolithic patch 2) create a new bug. Both options suck.

Larger, monolithic patches are harder and slower to review. Larger patches require much more cognitive load to review, as the reviewer needs to capture the entire context in order to make a review determination. This takes more time. The increased surface area of the patch also increases the likelihood that the reviewer will find something wrong and will require a re-review. The added complexity of a larger patch also means the chances of a bug creeping in are higher, leading to more bugs being filed and more reviews later. The more review cycles the patch goes through, the greater the chances it will suffer from bit rot and will need updating before it lands, possibly incurring yet more rounds of review. And, since we measure progress in terms of code landing, the delay to get a large patch through many rounds of review makes us feel lethargic and demotivates us. Large patches have intrinsic properties that lead to compounding problems and increased development cost.

As bad as large patches are, they are roughly in the same badness range as the alternative: creating more bugs.

When you create a new bug to hold the context for the review of an individual commit, you are doing a lot of things, very few of them helpful. First, you must create a new bug. There's overhead to do that. You need to type in a summary, set up the bug dependencies, CC the proper people, update the commit message in your patch, upload your patch/attachment to the new bug, mark the attachment on the old bug obsolete, etc. This is arguably tolerable, especially with tools that can automate the steps (although I don't believe there is a single tool that does all of what I mentioned automatically). But the badness of multiple bugs doesn't stop there.

Creating multiple bugs fragments the knowledge and history of your change and diminishes the purpose of a bug. You got in the situation of creating multiple bugs because you were working on a single logical change. It just so happened that you needed/wanted multiple commits/patches/reviews to represent that singular change. That initial change was likely tracked by a single bug. And now, because of Bugzilla's poor user interface around multiple patch reviews, you find yourself creating yet another bug. Now you have two bug numbers - two identifiers that look identical, only varying by their numeric value - referring to the same logical thing. We've started with a single bug number referring to your logical change and created what are effectively sub-issues, but allocated them in the same namespace as normal bugs. We've diminished the importance of the average bug. We've introduced confusion as to where one should go to learn about this single, logical change. Should I go to bug X or bug Y? Sure, you can likely go to one and ultimately find what you were looking for. But that takes more effort.

Creating separate bugs for separate reviews also makes refactoring harder. If you are going the micro commit route, chances are you do a lot of history rewriting. Commits are combined. Commits are split. Commits are reordered. And if those commits are all mapping to individual bugs, you potentially find yourself in a huge mess. Combining commits might mean resolving bugs as duplicates of each other. Splitting commits means creating yet another bug. And let's not forget about managing bug dependencies. Do you set up your dependencies so you have a linear, waterfall dependency corresponding to commit order? That logically makes sense, but it is hard to keep in sync. Or, do you just make all the review bugs depend on a single parent bug? If you do that, how do you communicate the order of the patches to the reviewer? Manually? That's yet more overhead. History rewriting - an operation that modern version control tools like Git and Mercurial have made lightweight, and that users love because it doesn't constrain them to pre-defined workflows - thus becomes much more costly. The cost may even be so high that some people forego rewriting completely, trading their effort for some poor reviewer who has to inherit a series of patches that isn't organized as logically as it could be. Like larger patches, this increases the cognitive load required to perform reviews and increases development costs.

As you can see, reviewing multiple, smaller patches with Bugzilla often leads to a horrible user experience. So, we find ourselves writing larger, monolithic patches and living with their numerous deficiencies. At least with monolithic patches we have a predictable outcome for how interaction with Bugzilla will play out!

I have little doubt that large patches (whose existence is influenced by the UI of Bugzilla) slow down the development velocity of Firefox.

Commit Message Formatting

The heavy involvement of Bugzilla in our code development lifecycle has influenced how we write commit messages. Let's start with the obvious example. Here is our standard commit message format for Firefox:

Bug 1234 - Fix some feature foo; r=gps

The bug is right there at the front of the commit message. That prominent placement is effectively saying the bug number is the most important detail about this commit - everything else is ancillary.

Now, I'm sure some of you are saying, but Greg, the short description of the change is obviously more important than the bug number. You are right. But we've allowed ourselves to make the bug and the content therein more important than the commit.

Supporting my theory is the commit message content following the first/summary line. That data is almost always - wait for it - nothing: we generally don't write commit messages that contain more than a single summary line. My repository forensics show that less than 20% of commit messages to Firefox in 2014 contain multiple lines (this excludes merge and backout commits). (We are doing better than 2013 - the rate was less than 15% then).

Our commit messages are basically saying, here's a highly-abbreviated summary of the change and a pointer (a bug number) to where you can find out more. And of course loading the bug typically reveals a mass of interleaved comments on various topics, hardly the high-level summary you were hoping was captured in the commit message.

Before I go on, in case you are on the fence as to the benefit of detailed commit messages, please read Phabricator's recommendations on revision control and writing reviewable code. I think both write-ups are terrific and are excellent templates that apply to nearly everyone, especially a project as large and complex as Firefox.
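For illustration only - the bug number and reviewer are reused from the example above and every detail below is invented - a commit message that carries its own context might look something like this:

Bug 1234 - Fix some feature foo; r=gps

Feature foo recomputed its state on every call, which caused <symptom>.
This patch caches the computed state and invalidates it when <condition>
changes.

An alternative would have been <other approach>; it was rejected because
<reason>. Follow-up work is tracked in bug <N>.

The exact template matters less than the fact that the what, the why, and the rejected alternatives live in the repository history itself instead of being scattered across bug comments.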

Anyway, there are many reasons why we don't capture a detailed, multi-line commit message. For starters, you aren't immediately rewarded for doing it: writing a good commit message doesn't really improve much in the short term (unless someone yells at you for not doing it). This is a generic problem applicable to all organizations and tools. This is a problem that culture must ultimately rectify. But our tools shouldn't reinforce the disposition towards laziness: they should reward best practices.

I don't believe Bugzilla and our interactions with it do an adequate job rewarding good commit message writing. Chances are your mechanism for posting reviews to Bugzilla or posting the publishing of a commit to Bugzilla (pasting the URL in the simple case) brings up a text box for you to type review notes, a patch description, or extra context for the landing. These should be going in the commit message, as they are the type of high-level context and summarizations of choices or actions that people crave when discerning the history of a repository. But because that text box is there, taunting you with its presence, we write content there instead of in the commit message. Even where tools like bzexport exist to upload patches to Bugzilla, potentially nipping this practice in the bug, it still engages in frustrating behavior like reposting the same long commit message on every patch upload, producing unwanted bug spam. Even a tool that is pretty sensibly designed has an implementation detail that undermines a good practice.

Machine Processing of Patches is Difficult

I have a challenge for you: identify all patches currently under consideration for incorporation in the Firefox source tree, run static analysis on them, and tell me if they meet our code style policies.

This should be a solved problem and deployed system at Mozilla. It isn't. Part of the problem is because we're using Bugzilla for conducting review and doing patch management. That may sound counter-intuitive at first: Bugzilla is a centralized service - surely we can poll it to discover patches and then do stuff with those patches. We can. In theory. Things break down very quickly if you try this.

We are uploading patch files to Bugzilla. Patch files are representations of commits that live outside a repository. In order to get the full context - the result of the patch file - you need all the content leading up to that patch file - the repository data. When a naked patch file is uploaded to Bugzilla, you don't always have this context.

For starters, you don't know with certainty which repository the patch belongs to because that isn't part of the standard patch format produced by Mercurial or Git. There are patches for various repositories floating around in Bugzilla. So now you need a way to identify which repository a patch belongs to. It is a solvable problem (aggregate data for all repositories and match patches based on file paths, referenced commits, etc), albeit one Mozilla has not yet solved (but should).

Assuming you can identify the repository a patch belongs to, you need to know the parent commit so you can apply this patch. Some patches list their parent commits. Others do not. Even those that do may lie about it. Patches in MQ don't update their parent field when they are pushed, only after they are refreshed. You could be testing and uploading a patch with a different parent commit than what's listed in the patch file! Even if you do identify the parent commit, this commit could belong to another patch under consideration that's also on Bugzilla! So now you need to assemble a directed graph with all the patches known from Bugzilla applied. Hopefully they all fit in nicely.

Of course, some patches don't have any metadata at all: they are just naked diffs or are malformed commits produced by tools that e.g. attempt to convert Git commits to Mercurial commits (Git users: you should be using hg-git to produce proper Mercurial commits for Firefox patches).

Because Bugzilla is talking in terms of patch files, we often lose much of the context needed to build nice tools, preventing numerous potential workflow optimizations through automation. There are many things machines could be doing for us (such as looking for coding style violations). Instead, humans are doing this work and costing Mozilla a lot of time and lost developer productivity in the process. (A human costs ~$100/hr. A machine on EC2 is pennies per hour and should do the job with lower latency. In other words, you can operate over 300 machines 24 hours a day for what you pay an engineer to work an 8 hour shift.)

Conclusion

I have outlined a few of the side-effects of using Bugzilla as part of our day-to-day development, review, and landing of changes to Firefox.

There are several takeaways.

First, one cannot argue with the fact that Firefox development is bug(zilla) centric. Nearly every important milestone in the lifecycle of a patch involves Bugzilla in some way. This has its benefits and drawbacks. This article has identified many of the drawbacks. But before you start crying to expunge Bugzilla from the loop completely, consider the benefits, such as a place anyone can go to add metadata or comments on something. That's huge. There is a larger discussion to be had here. But I don't want to invite it quite yet.

A common thread between many of the points above is Bugzilla's unstructured and generic handling of code and metadata attached to it (patches, review comments, and landing information). Patches are attachments, which can be anything under the sun. Review comments are plain text comments with simple author, date, and tag metadata. Landings are also communicated by plain text review comments (at least initially - keywords and flags are used in some scenarios).

By being a generic tool, Bugzilla throws away a lot of the rich metadata that we produce. That data is still technically there in many scenarios. But it becomes extremely difficult if not practically impossible for both humans and machines to access efficiently. We lose important context and feedback by normalizing all this data to Bugzilla. This data loss creates overhead and technical debt. It slows Mozilla down.

Fortunately, the solutions to these problems and shortcomings are conceptually simple (and generally applicable): preserve rich context. In the context of patch distribution, push commits to a repository and tell someone to pull those commits. In the context of code review, create sub-reviews for different commits and allow tracking and easy-to-follow (likely threaded) discussions on found issues. Design workflow to be code first, not tool or bug first. Optimize workflows to minimize people time. Lean heavily on machines to do grunt work. Integrate issue tracking and code review, but not too tightly (loosely coupled, highly cohesive). Let different tools specialize in the handling of different forms of data: let code review handle code review. Let Bugzilla handle issue tracking. Let a landing tool handle tracking the state of landings. Use middleware to make them appear as one logical service if they aren't designed to be one from the start (such as is Mozilla's case with Bugzilla).

Another solution that's generally applicable is to refine and optimize the whole process to land a finished commit. Your product is based on software. So anything that adds overhead or loss of quality in the process of developing that software is fundamentally a product problem and should be treated as such. Any time and brain cycles lost to development friction or bugs that arise from things like inadequate code reviews tools degrade the quality of your product and take away from the user experience. This should be plain to see. Attaching a cost to this to convince the business-minded folks that it is worth addressing is a harder matter. I find management with empathy and shared understanding of what amazing tools can do helps a lot.

If I had to sum up the solution in one sentence, it would be: invest in tools and developer happiness.

I hope to soon publish a post on how Mozilla's new code review tool addresses many of the workflow deficiencies present today. Stay tuned.

Jen Fong-AdwentRVLVVR: The Details

RVLVVR has nothing to do with Meatspace mechanics (except for the use of websockets) nor does it have anything to do with a Meatspace 3.0 version.

Martijn WargersT-Dose event in Eindhoven

Last Saturday and Sunday, I went to the T-Dose event, manning a booth with my good friend Tim Maks van den Broek.

I met a lot of smart and interesting people:
  • Someone with an NFC chip in his hand! I was able to import his contact details into my FirefoxOS phone by keeping it close to his hand. Very funny to see, but it also felt weird.
  • Bram Moolenaar (VIM author! Note to self, never complain about software in front of people you don't know)
  • Some people who are crazily enthusiastic about Perl and, more specifically, Perl 6. They even brought a camel! (ok, not a live one)


Often asked questions during the event

Where can I buy a FirefoxOS phone, and at what price?
The only phone that you can buy online in the Netherlands, that is getting the latest updates, is the Flame device. You can order it on everbuying.com. More information on Mozilla Developer Network. Because the phone will be shipped outside of the EU, import duties will be charged, which are probably somewhere around €40.
Can I put Firefox OS on my Android device?
On MDN, there are a couple of devices mentioned, but I wouldn't know how well that works.
Is there a backup tool to backup on your desktop computer?
As far as I know, there is no backup software for the desktop computer that would be able to make a backup of your Firefox OS phone. I know there is a command line tool from Taiwan QA that allows you to back up and restore your profile.
Does Whatsapp work on it?
There is no official Whatsapp, but there is an app called ConnectA2, that allows you to connect with your Whatsapp contacts. I haven't used it, but Tim Maks told me it works reasonably well.


Kim MoirRelease Engineering in the classroom

The second week of October, I had the pleasure of presenting lectures on release engineering to university students in Montreal as part of the PLOW lectures at École Polytechnique de Montréal.    Most of the students were MSc or PhD students in computer science, with a handful of postdocs and professors in the class as well. The students came from Montreal area universities and many were international students. The PLOW lectures consisted of several invited speakers from various universities and industry spread over three days.

View looking down from the university

Université de Montréal administration building

École Polytechnique building. Each floor is painted a different colour to represent a different layer of the earth. So the ground floor is red, the next orange and finally green.

The first day, Jack Jiang from York University gave a talk about software performance engineering.
The second day, I gave a lecture on release engineering in the morning. The rest of the day we did a lot of labs to configure a Jenkins server to build and run tests on an open source project. Earlier that morning, I had set up m3.large instances for the students on Amazon that they could ssh into and conduct their labs. Along the way, I talked about some release engineering concepts. It was really interesting and I learned a lot from their feedback. Many of the students had not been exposed to release engineering concepts so it was fun to share the information.

Several students came up to me during the breaks and said "So, I'm doing my PhD in release engineering, and I have several questions for you", which was fun. Also, some of the students were making extensive use of code bases from Mozilla or other open source projects, so that was interesting to learn more about. For instance, one research project was looking at the evolution of multi-threading in Mozilla code bases, and another student was conducting Bugzilla comment sentiment analysis. Are angry bug comments correlated with fewer bug fixes? Looking forward to the results of this research!

I ended the day by providing two challenge exercises to the students that they could submit answers to. One exercise was to set up a build pipeline in Jenkins for another open source project. The other challenge was to use the Jenkins REST API to query the Apache project's Jenkins server and present some statistics on their build history. The results were pretty impressive!
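
For readers who want to try something similar, here is a minimal sketch of my own (not a student submission) that hits a Jenkins server's standard /api/json endpoint and computes a trivial statistic. The URL is a placeholder, and a real solution for the challenge would drill further into each job's build history:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

// Only the fields we care about from Jenkins' /api/json answer.
type jenkinsRoot struct {
    Jobs []struct {
        Name  string `json:"name"`
        Color string `json:"color"` // e.g. "blue" (passing) or "red" (failing)
    } `json:"jobs"`
}

func main() {
    // Placeholder URL; point it at the Jenkins instance you want to query.
    resp, err := http.Get("https://builds.example.org/api/json")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var root jenkinsRoot
    if err := json.NewDecoder(resp.Body).Decode(&root); err != nil {
        panic(err)
    }

    // A trivial statistic: how many jobs are in each status colour.
    counts := map[string]int{}
    for _, job := range root.Jobs {
        counts[job.Color]++
    }
    fmt.Println(len(root.Jobs), "jobs:", counts)
}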

My slides are on GitHub and the readme file describes how I set up the Amazon instances so Jenkins and some other required packages were installed beforehand. Please use them and distribute them if you are interested in teaching release engineering in your classroom.

Lessons I learned from this experience:
  • Computer science classes focus on writing software, but not necessarily building it in a team environment. So complex branching strategies are not necessarily a familiar concept to some students. Of course, this depends on the previous work experience of the students and the curriculum at the school they attend. One of the students said to me "This is cool. We write code, but we don't build software".
  • Concepts such as building a pipeline for compilation, correctness/performance/regression testing, packaging and deployment can also be unfamiliar. As I said in the class, the work of the release engineer starts when the rest of the development team thinks they are done :-)
  • When you're giving a lecture and people would point out typos, or ask for clarification I'd always update the repository and ask the students to pull a new version.  I really liked this because my slides were in reveal.js and I didn't have to export a new PDF and redistribute.  Instant bug fixes!
  • Add bonus labs to the material so students who are quick to complete the exercises have more to do while the other students complete the original material.  Your classroom will have people with wildly different experience levels.
The third day there was a lecture by Michel Dagenais of Polytechnique Montréal on tracing heterogeneous cloud instances using a tracing framework for Linux. The Eclipse Trace Compass project also made an appearance in the talk. I always like to see Eclipse projects highlighted. One of his interesting points was that none of the companies that collaborate on this project wanted to sign a bunch of IP agreements so they could collaborate on this project behind closed doors. They all wanted to collaborate via an open source community and source code repository. Another thing he emphasized was that students should make their work available on the web, via GitHub or other repositories, so they have a portfolio of work available. It was fantastic to see him promote the idea of students being involved in open source as a way to help their job prospects when they graduate!

Thank you Foutse and  Bram  for the opportunity to lecture at your university!  It was a great experience!  Also, thanks Mozilla for the opportunity to do this sort of outreach to our larger community on company time!

Also, I have a renewed respect for teachers and professors.  Writing these slides took so much time.  Many long nights for me especially in the days leading up to the class.  Kudos to you all who do teach everyday.

References
The slides are on GitHub and the readme file describes how I setup the Amazon instances for the labs

Kim MoirBeyond the Code 2014: a recap

I started this blog post about a month ago and didn't finish it because well, life is busy.  

I attended Beyond the Code last September 19. I heard about it several months ago on twitter. A one-day conference about celebrating women in computing, in my home town, with a fantastic speaker line up? I signed up immediately. In the opening remarks, we were asked for a show of hands of how many of us were developers, in design, product management, or students, and there was a good representation from all those categories. I was especially impressed to see the number of students in the audience; it was nice to see so many of them taking time out of their busy schedule to attend.

View of the Parliament Buildings and Chateau Laurier from the MacKenzie street bridge over the Rideau Canal
Ottawa Conference Centre, location of Beyond the Code
 
There were seven speakers, three workshop organizers, a lunch time activity, and a panel at the end. The speakers were all women. The speakers were not all white women or all heterosexual women. There were many young women, not all industry veterans :-) like me. To see this level of diversity at a tech conference filled me with joy. Almost every conference I go to is very homogenous in the make up of the speakers and the audience. To see ~200 tech women at a conference and 10% men (thank you for attending :-) was quite a role reversal.

I was completely impressed by the caliber of the speakers. They were simply exceptional.

The conference started out with Kronda Adair giving a talk on Expanding Your Empathy. One of the things that struck me from this talk was that she talked about how everyone lives in a bubble, and privilege keeps us from seeing things that other people experience every day. She gave the example of how privilege is like a browser, and colours how we see the world. For a straight white guy, a web page looks great when he's running the latest Chrome on MacOSx. For a middle class black lesbian, the web page doesn't look as great because it's like she's running IE7. There is less inherent privilege. For a "differently abled trans person of color" the world is like running IE6 in quirks mode. This was a great example. She also gave a shout out to the Ascend Project which she and Lukas Blakk are running in the Mozilla Portland office. Such an amazing initiative.

The next speaker was Bridget Kromhout who gave a talk about Platform Ops in the Public Cloud.
I was really interested in this talk because we do a lot of scaling of our build infrastructure in AWS and wanted to see if she had faced similar challenges. She works at DramaFever, which she described as Netflix for Asian soap operas.  The most interesting things to me were the fact that she used all AWS regions to host their instances, because they wanted to be able to have their users download from a region as geographically close to them as possible.  At Mozilla, we only use a couple of AWS regions, but more instances than Dramafever, so this was an interesting contrast in the services used. In addition, the monitoring infrastructure they use was quite complex.  Her slides are here.

I was going to summarize the rest of the speakers but Melissa Jean Clark did an exceptional job on her blog.  You should read it!

Thank you Shopify for organizing this conference. It was great to meet so many brilliant women in the tech industry! I hope there is an event next year too!

Karl DubostHow to deactivate UA override on Firefox OS

Some Web sites do not send the right version of the content to Firefox OS. Some ill-defined server side and client side scripting does not detect Firefox OS as a mobile device and sends the desktop content instead. To fix that, we sometimes define UA overrides for certain sites.

It may improve the life of users, but it has damaging consequences when it's time for Web developers to test the site they are working on. The device downloads the list from the server side at a regular pace. Luckily enough, you can deactivate it through preferences.

Firefox UA Override Preferences

There are two places in Firefox OS where you may store preferences:

  • /system/b2g/defaults/pref/user.js
  • /data/b2g/mozilla/something.default/prefs.js (where something is a unique id)

To change the UA override preferences, you need to set general.useragent.updates.enabled to true (the default) to enable them, or to false to disable them. If you put the file in /system/, each time you update the system the file and its preferences will be overwritten and replaced by the update. On the other hand, if you put it in /data/, it will be kept with all the data of your profile.

On the command line, using adb, you can manipulate the preferences:

# Prepare: remount /system read-write
set -x
adb shell mount -o rw,remount /system
# Pull a local copy of the preferences
adb pull /system/b2g/defaults/pref/user.js /tmp/user.js
# Work on a copy so the original stays untouched
cp /tmp/user.js /tmp/user.js.tmp
# Check if the preference is already set, and with which value
grep useragent.updates.enabled /tmp/user.js
# If not set, add it (use false instead of true to disable UA overrides)
echo 'pref("general.useragent.updates.enabled", true);' >> /tmp/user.js.tmp
# Push the new preferences to Firefox OS
adb push /tmp/user.js.tmp /system/b2g/defaults/pref/user.js
adb shell mount -o ro,remount /system
# Restart b2g
adb shell stop b2g && adb shell start b2g

We created a script to help for this. If you find bugs, do not hesitate to tell us.

Clarification Notes

Prior to Firefox OS 1.2, we shipped the list of UA overrides on the device itself. The file was updated with system updates. But we noticed that some carriers were not sending the updates as often as we wished. It is an issue, because some sites may have been fixed in the meantime and we might keep triggering an override that is no longer needed. So we decided to serve the list from the server side starting with Firefox OS 1.2. Users can then receive an always up-to-date list of UA overrides. Thanks to André Jaenisch for the question.

Otsukare.

Wil ClouserMigrating off Wordpress

This post is a celebration of finishing a migration off of Wordpress for this site and on to flat files, built by Jekyll from Markdown files. I'm definitely looking forward to writing more Markdown and fewer HTML tags.

90% of the work was done by jekyll-import to roughly pull my wordpress data into Markdown files, and then I spent a late night with Vim macros and sed to massage it the way I wanted it.

If all I wanted to do was have my posts up, I'd be done, but having the option to accept comments is important to me and I wasn't comfortable using a platform like Disqus because I didn't want to force people to use a 3rd party.

Since my posts only average one or two comments I ended up using a slightly modified jekyll-static-comments to simply put a form on the page and email me any comments (effectively, a moderation queue). If it's not spam, it's easy to create a .comment file and it will appear on the site.

My original goal was to host this all on Github but they only allow a short list of plugins and the commenting system isn't on there so I'll stick with my own host for now.

Please let me know if you see anything broken.

Justin DolskeStellar Paparazzi

Last Thursday (23 Oct 2014), North America was treated to a partial solar eclipse. This occurs when the moon passes between the Earth and Sun, casting its shadow onto part of our planet. For observers in the California Bay Area, the moon blocked about 40% of the sun. Partial eclipses are fairly common (2-5 times a year, somewhere on the Earth), but they can still be quite interesting to observe.

The first two eclipses I recall observing were on 11 July 1991 and 10 May 1994. The exact dates are not memorable; they’re just easy to look up as the last eclipses to pass through places I lived! But I do remember trying to observe them with some lackluster-but-easily-available methods of the time. Pinhole projection seems to be most commonly suggested, but I never got good results from it. Using a commercial audio CD (which uses a thin aluminum coating) had worked a bit better for me, but this is highly variable and can be unsafe.

I got more serious about observing in 2012. For the annular solar eclipse and transit of Venus which occurred that May/June, I made an effort to switch to higher-quality methods. My previous blog post goes into detail, but I first tried a pinhead mirror projection, which gave this better-but-not-awesome result:

(In fairness, the equipment fits into a pocket, and it was a last-minute plan to drive 6 hours, round trip, for better viewing.)

For the transit of Venus a few days later — a very rare event that occurs only once every 105 years — I switched to using my telescope for even better quality. You don’t look through it, but instead use it to project a bright image of the sun onto another surface for viewing.

I was excited to catch last week’s eclipse because there was an unusually large sunspot (“AR2192”) that was going to be visible. It’s one of the larger sunspots of the last century, so it seemed like a bit of an historic opportunity to catch it.

This time I took the unusual step of observing from indoors, looking out a window. This basically allows for projecting into a darker area (compared to full sunlight), resulting in better image contrast. Here’s a shot of my basic setup — a Celestron C8 telescope, with a right angle adapter and 30mm eyepiece, projecting the full image of the sun (including eclipse and sunspots) onto the wall of my home:

The image was obviously quite large, and made it easy to examine details of the large sunspot AR2192, as well as a number of smaller sunspots that were present.

I also switched to a 12.5mm eyepiece, allowing for a higher magnification, which made the 2-tone details of the main sunspot even more obvious. The image is a little soft, but not too bad — it’s hard to get sharp contrast at high zoom, and the image was noticeably wavering as a result of thermal convection within the telescope and atmosphere. (Not to mention that a telescope mounted on carpet in a multistory building isn’t the epitome of stability — I had to stand very still or else the image would shake! Not ideal, but workable.)

As with the transit of Venus, it’s fun to compare my picture with that from NASA’s $850-million Solar Dynamics Observatory.

Observing this sunspot wasn’t nearly as exciting as the Carrington Event of 1859, but it was still a beautiful sight to behold. I’m definitely looking forward to the 21 August 2017 eclipse, which should be a fantastic total eclipse visible from a wide swath of the US!


Daniel StenbergStricter HTTP 1.1 framing good bye

I worked on a patch for Firefox bug 237623 to make sure Firefox would use a stricter check for “HTTP 1.1 framing”, checking that Content-Length is correct and that there’s no broken chunked encoding pieces. I was happy to close an over 10 years old bug when the fix landed in June 2014.

The fix landed and has not caused any grief all the way since June through to the actual live release (Nightlies, Aurora, Beta etc). This change finally shipped in Firefox 33. I had more or less already started to forget about it, and then things went south really fast.

The number of broken servers ended up being too massive for us and we had to backpedal. The largest number of problems can be split into these two categories:

  1. Servers that deliver gzipped content and send a Content-Length: for the uncompressed data. This seems to be commonly done with old mod_deflate and mod_fastcgi versions on Apache, but we also saw people using IIS reporting this symptom.
  2. Servers that deliver chunked encoding but skip the final zero-size chunk, so that the stream never really ends (see the illustration below).
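
To make that second category concrete, here is a minimal sketch of what a well-formed chunked response looks like on the wire (with the \r\n line endings shown explicitly; the payload "Mozilla" is 7 bytes):

HTTP/1.1 200 OK
Transfer-Encoding: chunked

7\r\n
Mozilla\r\n
0\r\n
\r\n

The broken servers in category 2 never send that final zero-size chunk (the last two lines), so a strict client has no way to know the response body has actually ended.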

We recognize that not everyone can have the servers fixed – even if all these servers should still be fixed! We now make these HTTP 1.1 framing problems get detected but only cause a problem if a certain pref variable is set (network.http.enforce-framing.http1), and since that is disabled by default they will be silently ignored much like before. The Internet is a more broken and more sad place than I want to accept at times.

We haven’t fully worked out how to also make the download manager (ie the thing that downloads things directly to disk, without showing it in the browser) happy, which was the original reason for bug 237623…

Although the code may now no longer alert anything about HTTP 1.1 framing problems, it will now at least mark the connection not due for re-use which will be a big boost compared to before since these broken framing cases really hurt persistent connections use. The partial transfer return codes for broken SPDY and HTTP/2 transfers remain though and I hope to be able to remain stricter with these newer protocols.

This partial reversion will land ASAP and get merged into patch releases of Firefox 33 and later.

Finally, to top this off. Here’s a picture of an old HTTP 1.1 frame so that you know what we’re talking about.

An old http1.1 frame

Mozilla Release Management TeamFirefox 34 beta2 to beta3

  • 26 changesets
  • 65 files changed
  • 451 insertions
  • 154 deletions

Extension: Occurrences

  • cpp: 14
  • css: 7
  • jsm: 5
  • js: 5
  • xml: 4
  • h: 4
  • java: 3
  • html: 2
  • mm: 1
  • ini: 1

Module: Occurrences

  • toolkit: 10
  • mobile: 8
  • dom: 6
  • browser: 6
  • security: 5
  • layout: 4
  • widget: 2
  • netwerk: 1
  • media: 1
  • gfx: 1
  • content: 1
  • browser: 1

List of changesets:

Mike HommeyBug 1082910, race condition copyhing sdk/bootstrap.js, r=mshal a=lmandel - 8c63e1286d75
Ed LeeBug 1075620 - Switch to GET for fetch to allow caching of links data from redirect. r=ttaubert, a=sledru - da489398c483
Steven MichaudBug 1069658 - The slide-down titlebar in fullscreen mode is transparent on Yosemite. r=mstange a=lmandel - a026594416c7
Martin ThomsonBug 1076983 - Disabling SSL 3.0 with pref, r=keeler a=lmandel - 8c9d5c14b866
Randall BarkerBug 1053426 - Fennec crashes when tab sharing is active. r=jesup, a=lmandel - 4ff961ace0d0
Kearwood (Kip) GilbertBug 1074165 - Prevent out of range scrolling in nsListboxBodyFrame. r=mats, a=lmandel - 9d9abce3b2f2
Randall BarkerBug 1080012 - Fennec no longer able to mirror tabs on chromecast. r=mfinkle, a=lmandel - 25b64ba60455
Gijs KruitboschBug 1077304 - Fix password manager to not fire input events if not changing input field values. r=gavin, a=lmandel - 65f5bf99d815
Wes JohnstonBug 966493 - Mark touchstart and end events as handling user input. r=smaug, a=lmandel - f6c14ee20738
Lucas RochaBug 1058660 - Draw divider at the bottom of about:home's tab strip. r=margaret, a=lmandel - 7d2f3db4567d
Lucas RochaBug 1058660 - Use consistent height in about:home's tab strip. r=margaret, a=lmandel - a73c379cfa5f
Lucas RochaBug 1058660 - Use consistent bg color in about:home's tab strip. r=margaret, a=lmandel - 47ef137f046f
Michael WuBug 1082745 - Avoid reoptimizing optimized SourceSurfaceCairos. r=bas, a=lmandel - 5e3fc9d8a99b
Dão GottwaldBug 1075435 - Adjust toolbar side borders for customization mode. r=gijs, a=lmandel - e57353855abf
Gijs KruitboschBug 1083668 - Don't set color for menubar when lwtheme is in use. r=dao, a=lmandel - 1af716db5215
Gavin SharpBug 1060675 - Only cap local form history results for the search bar if there are remote suggestions. r=MattN, a=lmandel - a963eab53a09
Jed DavisBug 1080165 - Allow setpriority() to fail without crashing in media plugins on Linux. r=kang, a=lmandel - 5c014e511661
Jeff GilbertBug 1083611 - Use UniquePtr and fallible allocations. r=kamidphish, a=lmandel - 42f43b1c896e
Tanvi VyasBug 1084513 - Add a null check on callingDoc before we try and get its principal. r=smaug, a=lmandel - e84f980d638e
Christoph KerschbaumerBug 1073316 - CSP: Use nsRefPtr to store CSPContext in CSPReportSenderRunnable. r=sstamm, a=lmandel - 290442516a98
Jared WeinBug 1085451 - Implement new design for Loop's green call buttons. r=Gijs, a=lmandel - 5aecfcba7559
Gijs KruitboschBug 1082002 - Fix urlbar to stay white. r=dao, a=lmandel - 605c6938c9d3
Birunthan MohanathasBug 960757 - Fix test_bug656379-1.html timeouts. r=ehsan, a=test-only - 27b0655c1385
Martin ThomsonBug 1083058 - Add a pref to control TLS version fallback. r=keeler, a=lsblakk - ae15f14a1db1
Irving ReidBug 1081702 - Check that callback parameters are defined before pushing onto result arrays. r=Mossop, a=lsblakk - 79560a3c511f
Jared WeinBug 1083396 - Update the Hello icon. r=Unfocused, a=lsblakk - a80d4ca56309

Soledad PenadesMozFest 2014 days 0, 1

I’ll try not to let something like last year happen again, and do some quick blogging now!

Day 0: Friday

I went to the facilitators session. Gunner, Michelle and co explained how to a) get ready for the chaos b) seed the chaos that is MozFest.

I was equally amused and scared, and a bit embarrassed. That is good.

Idea being that you have to make new connections and new friends during MozFest. Do not hang with people you already know!

It’s hard to do it because there are so many great friends I haven’t seen in months, and people I hadn’t met in person for the first time, but I try.

We mingle with facilitators and, as an exercise, we have to explain to each other what our session will consist of. I am told that they are surprised I have a technical background, right after I mention “HTTP requests” and “API endpoints”. Very ironic/sad, especially after I wrote this on diversity last week.

I also got a terrible headache and ended up leaving back home before the Science Fair happened. Oh well!

Day 1: Saturday

Chaos unravels.

Our table for WebIDE sessions is taken over by a group of people hanging out. I kindly ask them to make some room as we need space for a session. They sort of leave, and then a bunch of AppMaker people drag the table about 1 meter away from where it was and start a session of their own (??). I was in the middle of explaining WebIDE to someone, but they are OK with the chaos, so we drag ourselves 1 m away too and continue as if nothing happened. This guy is pretty cool and perhaps wants to contribute with templates! We discuss Grunt and Gulp and dependency requirements. It's a pity Nicola is not there yet, but I explain the work he did this summer and what we're working on (node.js + devtools = automated Firefox OS development).

A bit later my session co-facilitators show up in various states of confusion. Nothing unexpected here…

Bobby brings us a big screen so sessions are easier/more obvious and we can explain WebIDE to more than one person at the time. Potch shows his Windows XP wallpaper in all his glory.

Nobody shows up so we go to find lunch. The queue is immense so I give up and go grab “skinny burgers” without buns somewhere else.

Back there Potch proposes a hypothesis for the sake of argument: “Say there are going to be a bunch more people with Flame devices tomorrow. How do we get them started in five minutes?”

We write a script for what we’d say to people, as we reproduce the steps on my fully flashed phone. This is how you activate Developer mode. This is how you connect to the phone, etc.

Potch: “can I take screenshots with WebIDE?”
Sole: “Yes, yes you can!”
Potch: “Awesome!”

Potch takes screenshots for the guide.

People come to the WebIDE table and we show them how it works. They ask us questions, we answer. When we cannot answer, we show them how to file bugs. Please file bugs! We are not omniscient and cannot know what you are missing.

People leave the table. I leave to find some water as my throat is not happy with me. I stumble upon a bunch of people I know, I get delayed, and somehow end up in the art room organised by Kat and Paula, where someone from the Tate explains to me a process for creating remixed art with X-Ray and WebMaker: think of an art movement, and find what it is that categorises that art movement. Then use Google Images to look for those elements on the net and use them in the initial website as replacements or as additions. It seems mechanical, but the slight randomness of whatever Google Images can come up with looks funny. I don't want to do this now and I have to come back to my table, but I get an idea about automating this.

Back at the MEGABOOTH, Bobby says someone was looking for me. I end up speaking to someone from Mozilla whose face looked familiar but I did not know why. I had been to their office twice, that's why!

They have a custom built version of Firefox OS that takes over the WiFi and replaces it with an adhoc mesh network. So they have a bunch of devices on the table who are able to discover nearby devices and establish this network without intermediaries. They’re also working on getting this to be a standard thing—or at least a thing that will be in the operating system, not on a custom build. Pretty cool!

We end up discussing WebRTC, latency, synchronisation of signals for distributed processing, and naive synchronisation signals using a very loud tone if all the devices are in the same place. Fantastic conversation!

I move to the flashing station. A bunch of people are helping to flash Firefox OS phones to the latest version. Somebody even tries his luck with resuscitating a ZTE “open” but it’s hard…

Jan Jongboom shows up. I say hi, he tells me about the latest developments in JanOS, and I feel compelled to high five him! Pro tip: never high five Jan. He’ll destroy your hand!

It’s about time for the speeches. Most important takeaway: this thing you have in your pocket is not a phone or a TV, it’s a computer and you can program it. Be creative!

An announcement is made that people who contributed in an interestingly special way during the sessions and got a glittery star sticker on their badge will be rewarded with a Flame phone, but please only take it if you can/want to help us make it better.

“For the sake of argument” becomes “a solid argument”. I see one of the flashing station volunteers rush in panic, smiling.

Here’s the guide Potch and I devised: Flame-what now?

Time for party, Max Ogden opens his Cat Umbrella. These are the true JS illuminati.

My throat is definitely not happy with me; I go home.


Daniel StenbergPretending port zero is a normal one

Speaking the TCP protocol, we communicate between “ports” in the local and remote ends. Each of these port fields is 16 bits in the protocol header, so they can hold values between 0 and 65535. (IPv4 or IPv6 are the same here.) We usually do HTTP on port 80 and we do HTTPS on port 443 and so on. We can even play around and use them on various other custom ports when we feel like it.

But what about port 0 (zero)? Sure, IANA lists the port as “reserved” for TCP and UDP, but that’s just a rule in a list of ports, not actually a filter implemented by anyone.

In the actual TCP protocol port 0 is nothing special but just another number. Several people have told me “it is not supposed to be used” or that it is otherwise somehow considered bad to use this port over the internet. I don’t really know where this notion comes from more than that IANA listing.

Frank Gevaerts helped me perform some experiments with TCP port zero on Linux.

In the Berkeley sockets API widely used for doing TCP communications, port zero has a bit of a harder situation. Most of the functions and structs treat zero as just another number so there’s virtually no problem as a client to connect to this port using for example curl. See below for a printout from a test shot.

Running a TCP server on port 0 however, is tricky since the bind() function uses a zero in the port number to mean “pick a random one” (I can only assume this was a mistake done eons ago that can’t be changed). For this test, a little iptables trickery was run so that incoming traffic on TCP port 0 would be redirected to port 80 on the server machine, so that we didn’t have to patch any server code.
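
To see that "zero means pick a random port" behaviour of bind() for yourself without touching C, here is a tiny sketch in Go (my own illustration, not part of Frank's experiment):

package main

import (
    "fmt"
    "net"
)

func main() {
    // Asking to listen on port 0 does not give you a port 0 listener:
    // the kernel interprets 0 as "pick any free ephemeral port".
    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        panic(err)
    }
    defer ln.Close()
    // Prints something like 127.0.0.1:49321, never :0.
    fmt.Println("listening on", ln.Addr())
}

That is also why the iptables redirect was needed on the server side: the standard socket API simply offers no way to ask for a real port 0 listener.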

Entering a URL with port number zero to Firefox gets this message displayed:

This address uses a network port which is normally used for purposes other than Web browsing. Firefox has canceled the request for your protection.

… but Chrome accepts it and tries to use it as given.

The only little nit that remains when using curl against port 0 is that it seems glibc’s getpeername() assumes this is an illegal port number and refuses to work. I marked that line in curl’s output in red below just to highlight it for you. The actual source code with this check is here. This failure is not lethal for libcurl, it will just have slightly less info but will still continue to work. I claim this is a glibc bug.

$ curl -v http://10.0.0.1:0 -H "Host: 10.0.0.1"
* Rebuilt URL to: http://10.0.0.1:0/
* Hostname was NOT found in DNS cache
* Trying 10.0.0.1...
* getpeername() failed with errno 107: Transport endpoint is not connected
* Connected to 10.0.0.1 () port 0 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.38.1-DEV
> Accept: */*
> Host: 10.0.0.1
>
< HTTP/1.1 200 OK
< Date: Fri, 24 Oct 2014 09:08:02 GMT
< Server: Apache/2.4.10 (Debian)
< Last-Modified: Fri, 24 Oct 2014 08:48:34 GMT
< Content-Length: 22
< Content-Type: text/html

<html>testpage</html>

Why do this experiment? Just for fun, to see if it worked.

(Discussion and comments on this post is also found at Reddit.)

Christian HeilmannThe things browsers can do – SAE Alumni Conference, Berlin 2014

Two days ago I was in Berlin for a day to present at the SAE alumni Conference in Berlin, Germany. I knew nothing about SAE before I went there except for the ads I see on the Tube in London. I was pretty amazed to see just how big a community the alumni and chapters of this school are. And how proud they are.

SAE Afterparty

My presentation The things browsers can do – go play with the web was a trial-run of a talk I will re-hash a bit at a few more conferences to come.

In essence, the thing I wanted to bring across is that HTML5 has now matured and will soon be a recommendation.

And along the way we seem to have lost the excitement for it. One too many shiny HTML5 demos telling us we need a certain browser to enjoy the web. One more polyfill or library telling us that without this extra overhead HTML5 isn’t ready. One more article telling us just how broken this one-week-old experimental implementation of the standard is. All of this left us tainted. We no longer believed in HTML5 as a viable solution, but saw it as a compilation target instead.

Techno nightmare by @elektrojunge

In this talk I wanted to remind people just how much better browser support for the basic parts of HTML5 and friends is right now, and what you can do with it beyond impressive demos. No whizzbang examples here, but things you can use now. With a bit of effort you can even use them without pestering browsers that don’t support what you want to achieve. It is not about bringing modern functionality to all browsers; it is about giving people things that work.

I recorded a screencast and put it on YouTube

The slides are on Slideshare.

The things browsers can do! SAE Alumni Convention 2014 from Christian Heilmann

All in all I enjoyed the convention and want to thank the organizers for having me and looking after me in an excellent fashion. It was refreshing to meet students who don’t have time to agonize over which of the three task runners released this week to use, and who instead have to deliver something right now and in a working fashion. This makes a difference.

Rizky AriestiyansyahFirefox OS App Days STPI

For Firefox Student Ambassadors at Sekolah Tinggi Perpajakan Indonesia, let’s make applications together. Event Information: Speaker: Rizky Ariestiyansyah (RAL). Target audience: 35 students. There will be free Wifi and please...

The post Firefox OS App Days STPI appeared first on oonlab.

Peter BengtssonGo vs. Python

tl;dr; It's not a competition! I'm just comparing Go and Python. So I can learn Go.

So recently I've been trying to learn Go. It's a modern programming language that started at Google but has very little to do with Google except that some of its core contributors are staff at Google.

The true strength of Go is that it's succinct and minimalistic and fast. It's not a scripting language like Python or Ruby but lots of people write scripts with it. It's growing in popularity with systems people but web developers like me have started to pay attention too.

The best way to learn a language is to do something with it. Build something. I don't disagree with that, but I just felt I needed to cover the basics first, and instead of taking notes I decided to learn by comparing it to something I know well, Python. I did this a zillion years ago when I tried to learn ZPT by comparing it to DTML, which I already knew well.

My free time is very limited so I'm taking things by small careful baby steps. I read through An Introduction to Programming in Go by Caleb Doxsey in a couple of afternoons and then I decided to spend a couple of minutes every day with each chapter and implement something from that book and compare it to how you'd do it in Python.

I also added some slightly fuller examples, like Markdownserver, which was fun because it showed that a simple Go HTTP server that does something can be 10 times faster than the Python equivalent.

What I've learned

  • Go is very unforgiving but I kinda like it. It's like Python but with pyflakes switched on all the time.

  • Go is much more verbose than Python. It just takes so much more lines to say the same thing.

  • Goroutines are awesome. They're a million times easier to grok than Python's myriad of similar solutions (there's a small sketch of goroutines and defer right after this list).

  • In Python, the ability to write to a list and it automatically expanding at will is awesome.

  • Go doesn't have the concept of "truthy" which I already miss. I.e. in Python you can convert a list type to boolean and the language does this automatically by checking if the length of the list is 0.

  • Go gives you very few choices (e.g. there's only one type of loop and it's the for loop) but you often have a choice to pass a copy of an object or to pass a pointer. Those are different things but sometimes I feel like the computer could/should figure it out for me.

  • I love the little defer thing which means I can put "things to do when you're done" right underneath the thing I'm doing. In Python you get these try: ...20 lines... finally: ...now it's over... things.

  • The coding style rules are very different but in Go it's a no brainer because you basically don't have any choices. I like that. You just have to remember to use gofmt.

  • Everything about Go and Go tools follow the strict UNIX pattern to not output anything unless things go bad. I like that.

  • godoc.org is awesome. If you ever wonder how a built in package works you can just type it in after godoc.org like this godoc.org/math for example.

  • You don't have to compile your Go code to run it. You can simply type go run mycode.go and it automatically compiles it and then runs it. And it's super fast.

  • go get can take a url like github.com/russross/blackfriday and just install it. No PyPI equivalent. But it scares me to depend on peoples master branches in GitHub. What if master is very different when I go get something locally compared to when I run go get weeks/months later on the server?
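
To make the defer and goroutine points above a bit more concrete, here is a tiny sketch of my own (not from the book):

package main

import (
    "fmt"
    "os"
    "sync"
)

func main() {
    // defer: the cleanup sits right next to the thing it cleans up after.
    f, err := os.Create("/tmp/hello.txt")
    if err != nil {
        fmt.Println("could not create file:", err)
        return
    }
    defer f.Close() // runs when main returns, however it returns

    fmt.Fprintln(f, "hello from Go")

    // goroutines: start concurrent work with the go keyword and wait
    // for it all to finish with a WaitGroup.
    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            fmt.Println("worker", n, "done")
        }(i)
    }
    wg.Wait()
}

The defer sits right under the os.Create call it cleans up after, and a WaitGroup is all the ceremony the goroutines need.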

Mic BermanWhat's your daily focus practice?

I have a daily and weekly practice to support and nurture myself - one of my core values is discipline and doing what I say. This, for me, is how I show up with integrity, both for myself and for the commitments I make to others. So, I enjoy evolving and living my practice.

Each day I meditate, as close to waking and certainly before my first meeting. I get clear on what is coming up that day and how I want to show up for myself and the people I’m spending time with. I decide how I want to be, for example, is being joyful and listening to my intuition the most important way for today, or curiosity, humour?

 

I use mindful breathing many times in a day - particularly when I’m feeling strong emotions, maybe because I’ve just come from a fierce conversation or a situation that warrants some deep empathy - simply breathing gets me grounded and clear before my next meeting or activity.

 

Exercise - feeling my body, connecting to my physical being and what’s going on for me. Maybe I’m relying too much on a coffee buzz and wanting an energy boost - listening to my cues and taking care throughout the day. How much water have I had? etc. As well as honouring my value around fitness and health.

I also write - my daily journal and always moving a blog post or article forward. And most importantly - mindfulness, being fully present in each activity. Several years ago I broke my right foot and ‘lost’ my ability to multi-task in the healing process. It was a huge gift ultimately - choosing to only do one thing at a time. To have all of my mind and body focused on the thing I am doing or the person I am talking with and nothing else. What a beautiful way to be, to honour those around me and the purpose or agenda of the company I’m working for. Weekly, I enjoy a mindful Friday night dinner with my family and turn off all technology until Saturday night, and on Sundays I reflect on my past week and prepare for my next - what worked, what didn't, what's important, what's not, etc.

Joy to you in finding a practice that works :)

Tarek ZiadéWeb Application Firewall

Web Application Firewall (WAF) applied to HTTP web services is an interesting concept.

It basically consists of extracting from a web app a set of rules that describes how the endpoints should be used. Then a Firewall proxy can enforce those rules on incoming requests.

Let's say you have a search API where you want to validate that:

  • there's an optional before field that has to be a datetime
  • you want to limit the number of calls per user per minute to 10
  • you want to reject with a 405 any call that uses another HTTP method than GET

Such a rule could look like this:

"/search": {
    "GET": {
        "parameters": {
            "before": {
                "validation":"datetime",
                "required": false
            }
        },
        "limits": {
            "rates": [
                {
                    "seconds": 60,
                    "hits": 10,
                    "match": "header:Authorization AND header:User-Agent or remote_addr"
                }
            ]
        }
    }
}

Where the rate limiter will use the Authorization and the User-Agent header to uniquely identify a user, or the remote IP address if those fields are not present.
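
To illustrate what that match rule means in practice, here is a rough sketch of the key derivation plus a naive fixed-window counter. It is written in Go purely for illustration (Videur's reference implementation is the Nginx/Lua setup described below), and names like rateKey and limiter are mine:

package main

import (
    "fmt"
    "net/http"
    "sync"
    "time"
)

// rateKey mirrors the "header:Authorization AND header:User-Agent or remote_addr"
// rule: prefer the two headers, fall back to the client address.
func rateKey(r *http.Request) string {
    auth := r.Header.Get("Authorization")
    ua := r.Header.Get("User-Agent")
    if auth != "" && ua != "" {
        return auth + "|" + ua
    }
    return r.RemoteAddr
}

// limiter is a naive fixed-window counter: at most `hits` requests per `window` per key.
type limiter struct {
    mu     sync.Mutex
    window time.Duration
    hits   int
    counts map[string]int
    reset  time.Time
}

func (l *limiter) allow(key string) bool {
    l.mu.Lock()
    defer l.mu.Unlock()
    now := time.Now()
    if now.After(l.reset) {
        l.counts = map[string]int{}
        l.reset = now.Add(l.window)
    }
    l.counts[key]++
    return l.counts[key] <= l.hits
}

func main() {
    lim := &limiter{window: time.Minute, hits: 10, counts: map[string]int{}, reset: time.Now().Add(time.Minute)}
    http.HandleFunc("/search", func(w http.ResponseWriter, r *http.Request) {
        if r.Method != "GET" {
            // reject any call that uses another HTTP method than GET
            http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
            return
        }
        if !lim.allow(rateKey(r)) {
            // more than 10 calls in the window for this user
            http.Error(w, "rate limit exceeded", 429)
            return
        }
        fmt.Fprintln(w, "ok")
    })
    http.ListenAndServe(":8080", nil)
}

A real proxy would use a sliding window and state shared across workers, but the important bit is the fallback from the two headers to remote_addr.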

Note

We've played around a little bit with request validation in Cornice, where you can programmatically describe schemas to validate incoming requests, and the ultimate goal is to make Cornice generate those rules in a spec file independently from the code.

I've started a new project around this with two colleagues at Mozilla (Julien & Benson), called Videur. We're defining a very basic JSON spec to describe rules on incoming requests:

https://github.com/mozilla/videur/blob/master/spec/VAS.rst

What makes it a very exciting project is that our reference implementation for the proxy is based on NGinx and Lua.

I've written a couple of Lua scripts that get loaded in Nginx, and our Nginx configuration roughly looks like this for any project that has this API spec file:

http {
    server {
        listen 80;
        # URL of the spec file describing the backend's endpoints, limits and rules
        set $spec_url "http://127.0.0.1:8282/api-specs";
        # Videur's Lua entry point: it reads the spec, dynamically builds the
        # proxying and validates every incoming request against the rules
        access_by_lua_file "videur.lua";
    }
}

Instead of manually defining all the proxy rules to point to our app, we're simply pointing the spec file that contains the description of the endpoints and use the lua script to dynamically build all the proxying.

Videur will then make sure incoming requests comply with the rules before passing them to the backend server.

One extra benefit is that Videur will reject any request that's not described in the spec file. This implicit white listing is in itself a good way to avoid improper calls on our stacks.

Last but not least, Lua in Nginx is freaking robust and fast. I am still amazed by the power of this combo. Kudos to Yichun Zhang for the amazing work he's done there.

Videur is being deployed on one project at Mozilla to see how it goes, and if that works well, we'll move forward to more projects and add more features.

And thanks to NginxTest, our Lua scripts are fully tested.

Jess KleinHive Labs at the Mozilla Festival: Building an Ecosystem for Innovation



 


This weekend marks the fifth year anniversary of the Mozilla Festival - and Hive Labs has a ton of fun design-oriented, hands-on activities to get messy with, in person or remotely. We are using the event to explore design questions that are relevant to local communities and Hives and to dabble in building out a community-driven ecosystem for innovation. Here are a few highlights:

Challenges to Enacting and Scaling Connected Learning

This year, the Hive track at MozFest (http://2014.mozillafestival.org/tracks/) is bringing together Hive and "Hive curious" travelers from around the world to incubate solutions to shared challenges in enacting and scaling connected learning. We're working together over the course of the MozFest weekend to collaboratively answer questions that come up again and again in our networks across the globe. One question that Hive Labs is focusing on is: How do we build a community that supports innovation in the education space? 



Action Incubator

We will be hosting a series of activities embedded within the Hive track to think through problems in your Hives and local communities and to brainstorm solutions collectively. We will be leveraging three teaching kits that were made specifically to facilitate this kind of design thinking activity:

  • Firestarter: In this activity, participants will identify opportunities and then brainstorm potential design solutions.
  • User Testing at an Event: Events are a great time to leverage all of the different kinds of voices in a room to get feedback on an in-progress project or half-baked idea. This activity will help you to test ideas and projects and get constructive and actionable feedback.
  • Giving + Getting Actionable Feedback: Getting effective and actionable feedback on your half-baked ideas and projects can be challenging. This activity explores some ways to structure your feedback session.

Art of the Web 

This entire track is dedicated to showcasing and making art using the Web as your medium. Follow the #artoftheweb hashtag on twitter. 


in response to the #mozfest remotee challenge

MozFest Remotee Challenge

Want to join in on all of the Mozilla Festival action even though you aren't physically at the event? This challenge is for you! We have compiled a handful of activities focused on Web Literacy, supporting community-based learning and making, so that you can take part in the conversation and brainstorming at the Mozilla Festival. Go here to start the challenge.

You can follow along all weekend using the #mozfest or #hivebuzz hashtags on Twitter.

Yunier José Sosa VázquezFirefox adds OpenH264 and more security

Almost without noticing, a new version of your favorite browser is already here. This time you will enjoy interesting features such as the open source implementation of the H.264 codec and more security, and Windows users will also get better performance.

Mozilla's mission is to support and defend the open web, which is very hard given users' demand to play content in closed formats, for which Mozilla would have to pay the owners to include the code in Firefox. Progress came little by little, first with the ability to use the codecs installed on the system (Windows) and later, on Linux, if the right Gstreamer plugin was installed. Now support for OpenH264 has been added, and it keeps the browser safe because it runs in a sandbox.

The search experience from the address bar has also been improved. If you try to go to a site that is not real, Firefox checks whether it exists and asks whether you want to go there or run a search with the engine you have chosen. The home page (about:home) and the new tab page (about:newtab) will also show search suggestions while you type in the search fields.

A new Content Security Policy (CSP) implementation makes the browser more secure and better protects the information stored on your computer. In addition, support has been added for connecting to HTTP proxy servers over HTTPS.

Meanwhile, Windows users will receive new changes to Firefox's architecture that bring better performance, since several tasks are now separated from the main process. Later on we will see much more radical changes to the architecture.

Firefox now speaks Azerbaijani, security when restoring a session has been improved, and many changes for developers have been added.

For Android

  • Ability to send videos to Chromecast and Roku devices.
  • Restore the last closed tab.
  • List of recently closed tabs.
  • Close all tabs at once.
  • Added the option to clear your data on exit.
  • Form elements have been updated with a more modern look.
  • Added the Aragonese [an], Frisian [fy-NL], Kazakh [kk] and Khmer [km] locales

If you want to know more, you can read the release notes.

You can get this version from our Downloads section in Spanish and English for Linux, Mac, Windows and Android. Remember that to browse through proxy servers you must set the preference network.negotiate-auth.allow-insecure-ntlm-v1 to true in about:config.

Mozilla Reps CommunityReps Weekly Call – October 23rd 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

reps-sticker

Summary

  • The Open Standard.
  • Firefox 10th anniversary.
  • New council members.
  • Mozfest.
  • FOSDEM.
  • New reporting activities.

Detailed notes

AirMozilla video

https://air.mozilla.org/reps-weekly-20141023/

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Frédéric WangA quick note for Mozillians regarding MathML on Wikipedia

As mentioned some time ago and as recently announced on the MathML and MediaWiki mailing lists, a MathML mode with SVG/PNG fallback is now available on Wikipedia. In order to test it, you need to log in with a Wikipedia account and select the mode in the "Math" section of your preferences.

Zoomed-Windows8-Firefox32-MathML-LatinModern

Some quick notes for Mozillians:

  • Although Mozilla intern Jonathan Wei has done some work on MathML accessibility and there are reports about work in progress to make Firefox work with NVDA / Orca / VoiceOver, we unfortunately still don't have something ready for Gecko browsers. You can instead try the existing solutions for Safari or Internet Explorer (ChromeVox and JAWS 16 beta are supposed to be MathML-aware but fail to read the MathML on Wikipedia at the moment).

  • By default, the following MATH fonts are tried: Cambria Math, Latin Modern Math, STIX Math, Latin Modern Math (Web font). In my opinion, our support for Cambria Math (installed by default on Windows) is still not very good, so I'd recommend to use Latin Modern Math instead, which has the same "Computer Modern" style as the current PNG mode. To do that, go to the "Skin" section of your preferences and just add the rule math { font-family: Latin Modern Math; } to your "Custom CSS". Latin Modern Math is installed with most LaTeX distributions, available from the GUST website and provided by the MathML font add-on.

  • You can actually install various fonts and try to make the size and style of the math font consistent with the surrounding text. Here are some examples:

    /* Asana Math (Palatino style) */
    .mw-body, mtext {
        font-family: Palatino Linotype, URW Palladio L, Asana Math;
    }
    math {
        font-family: Asana Math;
    }
    
    
    /* Cambria (Microsoft Office style) */
    .mw-body, mtext {
        font-family: Cambria;
    }
    math {
        font-family: Cambria Math;
    }
    
    
    /* Latin Modern (Computer Modern style) */
    .mw-body, mtext {
        font-family: Latin Modern Roman;
    }
    math {
        font-family: Latin Modern Math;
    }
    
    
    /* STIX/XITS (Times New Roman style) */
    .mw-body, mtext {
        font-family: XITS, STIX;
    }
    math {
        font-family: XITS Math, STIX Math;
    }
    
    
    /* TeX Gyre Bonum (Bookman style) */
    .mw-body, mtext {
        font-family: TeX Gyre Bonum;
    }
    math {
        font-family: TeX Gyre Bonum Math;
    }
    
    
    /* TeX Gyre Pagella (Palatino style) */
    .mw-body, mtext {
        font-family: TeX Gyre Pagella;
    }
    math {
        font-family: TeX Gyre Pagella Math;
    }
    
    
    /* TeX Gyre Schola (Century Schoolbook style) */
    .mw-body, mtext {
        font-family: TeX Gyre Schola;
    }
    math {
        font-family: TeX Gyre Schola Math;
    }
    
    
    /* TeX Gyre Termes (Times New Roman style) */
    .mw-body, mtext {
        font-family: TeX Gyre Termes;
    }
    math {
        font-family: TeX Gyre Termes Math;
    }
    
  • We still have bugs with missing fonts and font inflation on mobile devices. If you are affected by these bugs, you can force the SVG fallback instead:

    span.mwe-math-mathml-inline, div.mwe-math-mathml-display {
      display: none !important;
    }
    span.mwe-math-mathml-inline + .mwe-math-fallback-image-inline {
      display: inline !important;
    }
    div.mwe-math-mathml-display + .mwe-math-fallback-image-display {
      display: block !important;
    }
    
  • You might want to install some Firefox add-ons for copying MathML/LaTeX, zooming formulas or configuring the math font.

  • Finally, don't forget to report bugs to Bugzilla so that volunteers can continue to improve our MathML support. Thank you!

Yura ZenevichTIL Debugging Gaia with B2G Desktop and WebIDE.

24 Oct 2014 - Toronto, ON

With some great help from Rob Wood from Mozilla Firefox OS Automation team, I finally managed to get Gaia up and running and ready for debugging with B2G Desktop Nightly build and WebIDE.

It is currently impossible to develop / debug Gaia using Firefox for Desktop, which is a pretty big barrier for new contributors who are just starting out with Gaia. It turns out, however, that it is fairly easy to run the Gaia codebase with a B2G Desktop build. Here are the instructions:

Prerequisites:

Instructions:

  • In your Gaia repository, build a Gaia user profile from scratch: make

  • Run the following command to start the B2G Nightly build with the Gaia profile that you just built: /path/to/your/B2G/b2g-bin -profile /path/to/your/gaia/profile -start-debugger-server [PORT_NUMBER]. For example: /Applications/B2G.app/Contents/MacOS/b2g-bin -profile /Users/me/gaia/profile -start-debugger-server 7000

  • Start Firefox for Desktop Nightly build

  • Open WebIDE (Tools -> Web Developer -> WebIDE)

  • Press Select Runtime -> Remote Runtime

  • Replace the port with the one used in step 2 (7000) and click OK

  • A dialog should pop up within the B2G emulator asking to permit remote debugging. Press OK

  • At this point you should have access to all apps bundled with the profile as well as the Main Process via WebIDE

  • When you make changes to Gaia code base, you can run make again to rebuild your profile. Hint: if you are working on a single app, just run APP=my_app make to only rebuild the app.

yzen

Yunier José Sosa VázquezUnity crowns Firefox as the best browser for Unity WebGL content

Unity's cross-platform technology is establishing itself as the best alternative for everyone who wants to enjoy the increasingly popular browser-based games. In fact, the company is currently working to integrate WebGL, a technology that more and more browsers are betting on.

To make its WebGL integration work easier, Unity has developed a benchmark that measures how different browsers and operating systems perform when running its technology. For the series of tests the browsers were put through, the Unity team compared native performance against Firefox 32, Chrome 37 and Safari 8, installed on a MacBook Pro with a 2.6 GHz i7 processor running OS X 10.10.

In the image you can see how, at times, Firefox scores better running the tests than Unity's own native code.

Tests performed

According to the full test results, which compare the performance of the three browsers using WebGL against Unity's native code, Firefox came out the winner in practically every factor analyzed by the benchmark, so it is without a doubt the best browser for Unity WebGL technology.

Total sum of the scores obtained

You can find more information on this topic at Unity3d.com.

Source: Genbeta

Rob CampbellThanks, Mozilla

This’ll be my last post to planet.mozilla.org.

After 8 years and thousands of airmiles, I’ve decided to leave Mozilla. It’s been an incredible ride and I’ve met a lot of great people. I am incredibly grateful for the adventure.

Thanks for the memories.

Benoit GirardGPU Profiling has landed

A quick reminder that one of the biggest benefits of having our own built-in profiler is that individual teams and projects can add their own performance reporting features. The graphics team just landed a feature to measure how much GPU time is consumed when compositing.

I already started using this in bug 1087530 where I used it to measure the improvement from recycling our temporary intermediate surfaces.

Here we can see that the frame had two rendering phases (group opacity test case) totaling 7.42ms of GPU time. After applying the patch from the bug and measuring again I get:

Now, with the surface retained, the rendering GPU time drops to 5.7ms. Measuring the GPU time is important because timing things on the CPU is not accurate.

Currently we still haven’t completed the D3D implementation or hooked it up to WebGL; we will do that as the need arises. To implement this, when profiling, we insert a query object into the GPU pipeline for each rendering phase (framebuffer switches).
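
The implementation that landed lives in the compositor's C++ code, but the same query-object idea can be sketched against WebGL2's standard EXT_disjoint_timer_query_webgl2 extension. This is only an illustration of the technique, not the profiler's actual code, and drawPhase() is a stand-in for one rendering phase:

    const canvas = document.createElement("canvas");
    const gl = canvas.getContext("webgl2");
    const ext = gl && gl.getExtension("EXT_disjoint_timer_query_webgl2");

    function drawPhase(_gl: WebGL2RenderingContext): void {
      // Issue the draw calls for one rendering phase (i.e. one framebuffer switch).
    }

    if (gl && ext) {
      const query = gl.createQuery()!;
      gl.beginQuery(ext.TIME_ELAPSED_EXT, query); // bracket the phase with a timer query
      drawPhase(gl);
      gl.endQuery(ext.TIME_ELAPSED_EXT);

      const poll = () => {
        const available = gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE);
        const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
        if (available && !disjoint) {
          const ns = gl.getQueryParameter(query, gl.QUERY_RESULT); // nanoseconds of GPU time
          console.log(`GPU time for this phase: ${(ns / 1e6).toFixed(2)} ms`);
        } else if (!available) {
          requestAnimationFrame(poll); // the result arrives asynchronously
        }
      };
      requestAnimationFrame(poll);
    }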


Gavin Sharpr=gfritzsche

I’m happy to announce that Georg Fritzsche is now officially a Firefox reviewer.

Georg has been contributing to Firefox for a while now – his contributions started with some great work on Firefox’s Telemetry system, as well as investigating stability issues in plugins and Firefox itself. He played a crucial role in building the telemetry experiments system, and more recently has become familiar with a few key parts of the Firefox front-end code, including our click to play plugin UI and the upcoming “FHR self support” feature. He’s a thorough reviewer with lots of experience with Mozilla code, so don’t hesitate to ask Georg for review if you’re patching Firefox!

Thanks Georg!

Matt BrubeckA little randomness for Hacker News

In systems that rely heavily on “most popular” lists, like Reddit or Hacker News, the rich get richer while the poor stay poor. Since most people only look at the top of the charts, anything that’s not already listed has a much harder time being seen. You need visibility to get ratings, and you need ratings to get visibility.

Aggregators try to address this problem by promoting new items as well as popular ones. But this is hard to do effectively. For example, the “new” page at Hacker News gets only a fraction of the front page’s traffic. Most users want to see the best content, not wade through an unfiltered stream of links. Thus, very little input is available to decide which links get promoted to the front page.

As an experiment, I wrote a userscript that uses the Hacker News API to search for new or low-ranked links and randomly insert just one or two of them into the front page. It’s also available as a bookmarklet for those who can’t or don’t want to install the user script.

Install user script (may require a browser extension)

Randomize HN (drag to bookmark bar, or right-click to bookmark)

This gives readers a chance to see and vote on links that they otherwise wouldn’t, without altering their habits or wading through a ton of unfiltered content. Each user will see just one or two links per visit, but thanks to randomness a much larger number of links will be seen by the overall user population. My belief, though I can’t prove it, is that widespread use of this feature would improve the quality of the selection process.

The script isn’t perfect (search for FIXME in the source code for some known issues), but it works well enough to try out the idea. Unfortunately, the HN API doesn’t give access to all the data I’d like, and sometimes the script won’t find any suitable links to insert. (You can look at your browser’s console to see which items were randomly inserted.) Ideally, this feature would be built into Hacker News—and any other service that recommends “popular” items.
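
For the curious, here is a rough sketch of the core idea. It is not the actual userscript; it assumes the public Firebase-hosted HN API endpoints and leaves out the DOM work of inserting the links into the front page:

    // Pick a couple of random new submissions and log them.
    async function randomNewStories(count: number = 2): Promise<void> {
      const base = "https://hacker-news.firebaseio.com/v0";
      const ids: number[] = await (await fetch(`${base}/newstories.json`)).json();

      for (let i = 0; i < count; i++) {
        // Choose a random ID from the list of newest submissions.
        const id = ids[Math.floor(Math.random() * ids.length)];
        const item = await (await fetch(`${base}/item/${id}.json`)).json();
        if (item && item.type === "story" && !item.dead) {
          console.log(`Randomly surfaced: ${item.title} (${item.url ?? "self post"})`);
        }
      }
    }

    randomNewStories();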

Peter BengtssonlocalForage vs. XHR

tl;dr; Fetching from IndexedDB is about 5-15 times faster than fetching from AJAX.

localForage is a wrapper that makes it easy to work with whatever local storage the browser provides. Different browsers have different implementations. By default, when you use localForage in Firefox, it uses IndexedDB, which is asynchronous, meaning your script doesn't get blocked whilst waiting for data to be retrieved.

A good pattern for a "fat client" (lots of javascript, server primarily speaks JSON) is to download some data by AJAX as JSON and then store that in the browser. Next time you load the page, you first read from the local storage in the browser whilst you wait for fresh JSON from the server. That way you can present data on the screen sooner. (This is how Buggy works; I blogged about it here.)
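
A rough sketch of that pattern with localForage might look like this (the /data.json URL, the "data" key and the render() callback are placeholders for illustration, not anything from Buggy):

    import localforage from "localforage";

    async function loadData(render: (data: unknown) => void): Promise<void> {
      // Kick off the network request immediately...
      const freshPromise = fetch("/data.json").then((r) => r.json());

      // ...and render the locally stored copy while we wait for it.
      // getItem resolves to null if nothing has been stored yet.
      const cached = await localforage.getItem("data");
      if (cached !== null) {
        render(cached);
      }

      // When the fresh JSON arrives, re-render and store it for next time.
      const fresh = await freshPromise;
      render(fresh);
      await localforage.setItem("data", fresh);
    }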

Another similar pattern is that you load everything by AJAX from the server, present it and store it in the local storage. Then, periodically (or just on load), you send the most recent timestamp from the data you've received and the server gives you back everything new and everything that has changed since that timestamp. The advantage of this is that the payload stays small, but the server has to make a custom response for each client, whereas a big fat blob of JSON can be better cached and such. However, oftentimes the data is dependent on your credentials/cookie anyway, so most likely you can't do much caching.

Anyway, whichever pattern you attempt, I thought it'd be interesting to get a feel for how much faster it is to retrieve from the browser's storage compared to doing a plain old AJAX GET request. After all, browsers have seriously optimized for AJAX requests these days, so basically the only thing standing in your way is network latency.

So I wrote a little comparison script that tests this. It's here: http://www.peterbe.com/localforage-vs-xhr/index.html

It retrieves a 225Kb JSON blob from the server and measures how long that took to become an object. Then it does the same with localforage.getItem, runs this 10 times and compares the times. It's obviously not a surprise that the local storage retrieval is faster; what's interesting is how big the difference is.
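
For reference, the core of such a comparison can be very small. Here is a bare-bones sketch (the "blob" key and the URL are made up; the linked page runs the whole thing 10 times):

    import localforage from "localforage";

    async function compareOnce(url: string): Promise<void> {
      let t0 = performance.now();
      const fromServer = await (await fetch(url)).json();
      const ajaxMs = performance.now() - t0;

      t0 = performance.now();
      const fromStorage = await localforage.getItem("blob");
      const storageMs = performance.now() - t0;

      console.log(`AJAX: ${ajaxMs.toFixed(1)} ms (${JSON.stringify(fromServer).length} bytes), ` +
                  `localForage: ${storageMs.toFixed(1)} ms (cached: ${fromStorage !== null})`);
    }

    // Hypothetical URL; substitute whatever JSON endpoint you want to measure.
    compareOnce("/some-big-blob.json");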

What do you think? I'm sure both sides can be optimized, but at this level it feels like quite a realistic scenario.

Mozilla Release Management TeamFirefox 34 beta1 to beta2

  • 60 changesets
  • 135 files changed
  • 2926 insertions
  • 673 deletions

Extension   Occurrences
js          32
cpp         23
cc          17
jsm         9
html        9
css         9
jsx         7
h           7
ini         3
sh          2
patch       2
mm          2
manifest    2
list        2
xul         1
xml         1
txt         1
py          1
mozilla     1
mk          1
json        1
build       1

Module      Occurrences
browser     60
gfx         21
security    12
content     11
toolkit     5
layout      4
widget      3
testing     3
media       3
gfx         3
js          2
xulrunner   1
xpcom       1
netwerk     1
mobile      1
+gfx        1
dom         1

List of changesets:

Justin DolskeBug 1068290 - UI Tour: Add ability to highlight New Private Window icon in chrome. r=MattN a=dolske - aa77bc7b59e3
Justin DolskeBug 1072036 - UI Tour: Add ability to highlight new privacy button. r=mattn a=dolske - d37b92959827
Justin DolskeBug 1071238 - UI Tour: add ability to put a widget in the toolbar. r=mattn a=dolske - 1c96180e6a5b
Blair McBrideBug 1068284 - UI Tour: Add ability to highlight search provider in search menu. r=MattN a=dolske - 9d4b08eecd9a
Joel MaherBug 1083369 - update talos.json to include fixes for mainthreadio whitelist and other goodness. r=dminor a=test-only - 184bc1bea651
Bas SchoutenBug 1074272 - Use exception mode 0 for our D3D11 devices. r=jrmuizel, a=sledru - 46d2991042df
Gijs KruitboschBug 1079869 - Fix closing forget panel by adding a closemenu=none attribute. r=jaws, a=sledru - e6441f98f159
Patrick BrossetBug 1020038 - Disable test browser/devtools/layoutview/test/browser_layoutview_update-in-iframes.js. a=test-only - da7c401c5aa7
Jeff GilbertBug 1079848 - Large allocs should be infallible and handled. r=kamidphish, a=sledru - 03d4ab96c271
Mike HommeyBug 1081031 - Unbust xulrunner mac builds by not exporting all JS symbols (Bug 920731). r=bsmedberg, a=npotb - 5967c4a96835
Mark BannerBug 1081906 - Fix unable to start Firefox due to 'Couldn't load XPCOM'. r=bsmedberg, a=sledru - c212fd07fd32
Georg FritzscheBug 1079312 - Fix invalid log.warning() to log.warn(). r=irving, a=sledru - ae6317e02f72
Mike de BoerBug 1081130 - Fix importing contacts with only a phone number and fetch the correct format. r=abr, a=sledru - 1f7f807b6362
Simon MontaguTest for Bug 1067268. r=jfkthame, a=lmandel - 4f904d9bcff2
Simon MontaguBug 1067268 - Don't mix physical and logical coordinates when calculating width to clear past floats. r=jfkthame, a=lmandel - 29dd7b8ee41f
JW WangBug 1069289 - Take |mAudioEndTime| into account when updating playback position at the end of playback. r=kinetik, a=lmandel - 31fc68be9136
David KeelerBug 1058812 - mozilla::pkix: Add SignatureAlgorithm::unsupported_algorithm to better handle e.g. roots signed with RSA/MD5. r=briansmith, a=lmandel - 2535e75ff9c6
David KeelerBug 1058812 - mozilla::pkix: Test handling unsupported signature algorithms. r=briansmith, a=lmandel - a7b8a4567262
Jonathan KewBug 1074223 - Update OTS to pick up fixes for upstream issues 35, 37. Current rev: c24a839b1c66c4de09e58fabaacb82bf3bd692a4. r=jdaggett, a=lmandel - 6524ec11ce53
Gijs KruitboschBug 1050638 - Should be able to close tab with onbeforeunload warning if clicking close a second time. r=ttaubert, a=lmandel - 98fc091c4706
JW WangBug 760770 - Allow 'progress' and 'suspend' events after 'ended'. r=roc, a=test-only - 915073abfd8b
Benjamin ChenBug 1041362 - Modify testcases because during the oncanplaythrough callback function, the element might not ended but the mediastream is ended. r=roc, a=test-only - c3fa7201e034
Jean-Yves AvenardBug 1079621 - Return error instead of asserting. r=kinetik, a=lmandel - 9be2b1620955
Andrei OpreaBug 1020449 Loop should show caller information on incoming calls. Patch originally by Andrei, updated and polished by Standard8. r=nperriault a=lmandel - 742beda04394
Mark BannerBug 1020449: Fix typo in addressing review comments in Bug 1020449 that caused broken jsx. rs=NiKo a=lmandel - 0033bca3ce22
Mark BannerBug 1029433 When in a Loop call, the title bar should display the remote party's information. r=nperriault a=lmandel - 530ec559a14c
Simone BrunoBug 1058286 - Add in-tree manifests needed for tests. DONTBUILD a=NPOTB - 0b7106ef79d2
Jonathan KewBug 1074809 - For OTS warning (rather than failure) messages, only log the first occurrence of any given message per font. r=jdaggett, a=lmandel - 1875f4aff106
Ed LeeBug 1081157 - "What is this page" link appears on "blank" version of about:newtab. r=ttaubert, a=sledru - c00a4cfe83e9
Matthew GreganBug 1080986 - Check list chunk is large enough to read list ID before reading. r=giles, a=sledru - bb851de524c2
Matthew GreganBug 1079747 - Follow WhatWG's MIMESniff spec for MP4 more closely. r=cpearce, a=sledru - f752e25f4c42
Steven MichaudBug 1084589 - Fix a Yosemite topcrasher. r=gijskruitbosch a=gavin - 3a24d0c65745
Mike de BoerBug 1081061: switch to a different database if a userProfile is active during the first mozLoop.contacts access to always be in sync with the correct state. r=MattN. a=lmandel - 6b4c22bfe385
Mark BannerBug 1081066 Incoming call window stays open forever if the caller closes the window/tab or crashes. r=nperriault a=lmandel - 8c329499cf7d
Matthew NoorenbergheBug 1079656 - Make the Loop Account menu item work after a restart. r=jaws a=lmandel - ada526904539
Mike de BoerBug 1076967: fix Error object data propagation to Loop content pages. r=bholley a=lmandel - 880cfb4ef6f8
Mark BannerBug 1078226 Unexpected Audio Level indicator on audio-only calls for Loop, also disable broken low-quality video warning indicator. r=nperriault a=lmandel - 5ad9f4e96214
Nicolas PerriaultBug 1048162 Part 1 - Add an 'Email Link' button to Loop desktop failed call view. r=Standard8 a=lmandel - f705ffd06218
Mark BannerBug 1081154 - Loop direct calls should attempt to call phone numbers as well as email addresses. r=mikedeboer a=lmandel - 191b3ce44bea
Nicolas PerriaultBug 1048162 Part 2 - Display an error message if fetching an email link fails r=standard8,darrin a=lmandel - 3fc523fcc7da
Mike de BoerBug 1013989: change the label of the Loop button to Hello. r=MattN a=lmandel - 2edc9ed56fa4
Romain GauthierBug 1079811 - A new call won't start if the outgoing call window is opened (showing feedback or retry/cancel). r=Standard8 a=lmandel - a4e22c4da890
Mike de BoerBug 1079941: implement LoopContacts.search to allow searching for contacts by query and use that to find out if a contact who's trying to call you is blocked. r=abr a=lmandel - bda95894a692
Randell JesupBug 1084384: support importing contacts with phone numbers in a different format r=abr a=lmandel - 8c42ccaf8aa1
Mike de BoerBug 1084097: make sure that the Loop button only shows up in the palette when unthrottled. r=Unfocused a=lmandel - 5840764c4312
Randell JesupBug 1070457: downgrade assertion about cubeb audiostreams to a warning r=roc a=lmandel - 9e420243b962
Randell JesupBug 1075640: Don't return 0-length frames for decoding; add comments about loss handling r=ehugg a=lmandel - bb26f4854630
Ethan HuggBug 1075640 - Check for zero length frames in GMP H264 decode r=jesup a=lmandel - 452bc7db811e
Luke WagnerBug 1064668 - OdinMonkey: Only add AsmJSActivation to profiling stack after it is fully initialized. r=djvj, a=lmandel - eb43a2c05eb3
Bas SchoutenBug 1060588 - Use PushClipRect when possible in ClipToRegionInternal. r=jrmuizel, a=lmandel - 6609fd74488b
Gijs KruitboschBug 1077404 - subviewradio elements in panic button panel are elliptical and labels get borders. r=jaws, a=lmandel - 2418d9e17dd0
Markus StangeBug 1081160 - Update window shadows for Yosemite. r=smichaud, a=lmandel - fc032520a26e
Chenxia LiuBug 1079761 - Add 'stop tab mirroring' one level higher in the menu. r=rbarker, a=lmandel - 4018d170ab06
Mike HommeyBug 1082323 - Reject pymake in client.mk. r=mshal, a=sledru - f19a52b7e6ec
Tomasz KołodziejskiBug 1074817 - Allow downgrade of content-prefs.sqlite. r=MattN, a=lmandel - 2235079fe205
Gijs KruitboschBug 1083895 - Favicon should not change if link element isn't in DOM. r=bz, a=lmandel - 73bc0bc9343b
Dragana DamjanovicBug 1081794 - Fixing a test for dns request cancel. On e10s, the dns resolver is sometimes faster than a cancel request. Use a random string to be resolved instead of a fix one. r=sworkman, a=test-only - 6d2e5afd8b75
Nicolas SilvaBug 1083071 - Add some old intel drivers to the blocklist. r=Bas, a=sledru - 53b97e435f10
Nicolas SilvaBug 1083071 - Blacklist device family IntelGMAX4500HD drivers older than 7-19-2011 because of OMTC issues on Windows. r=Bas, a=sledru - b5d97c1c71b7
Ryan VanderMeulenBug 1083071 - Change accidentally-used periods to commas. rs=nical, a=bustage - 8cc403ad710b

Doug BelshawInterim results of the Web Literacy Map 2.0 community survey

Thanks to my colleague Adam Lofting, I’ve been able to crunch some of the numbers from the Web Literacy Map v2.0 community survey. This will remain open until the end of the month, but I thought I’d share some of the results.

Web Literacy Map v2.0 community survey: overview

This is the high-level overview. Respondents are able to indicate the extent to which they agree or disagree with each proposal on a five-point scale. The image above shows the average score as well as the standard deviation. Basically, for the top row the higher the number the better. For the bottom row, low is good.

Web Literacy Map v2.0 community survey: by location

Breaking it down a bit further, there are some interesting things you can pull out of this. Note that the top-most row represents people who completed the survey but chose not to disclose their location. All of the questions are optional.

Things that stand out:

  • There’s strong support for Proposal 4: I believe that concepts such as ‘Mobile’, 'Identity’, and 'Protecting’ should be represented as cross-cutting themes in the Web Literacy Map.
  • There’s also support for Proposal 1: I believe the Web Literacy Map should explicitly reference the Mozilla manifesto and Proposal 5: I believe a 'remix’ button should allow me to remix the Web Literacy Map for my community and context.
  • People aren’t so keen on Proposal 2: I believe the three strands should be renamed 'Reading’, 'Writing’ and 'Participating’. (although, interestingly, in the comments people say they like the term 'Participating’, just not 'Reading’ and 'Writing’ which many said reminded them too much of school!)

This leaves Proposal 3: I believe the Web Literacy Map should look more like a 'map’. This could have been better phrased, as the assumption in the comments seems to be that we can present it either as a grid or as a map. There’s no reason why we can’t do both. In fact, I’d like to see us produce:

Finally, a word on Proposal 5 and remixing. In the comments there’s support for this - but also a hesitation lest it 'dilutes’ the impact of the Web Literacy Map. A number of people suggested using a GitHub-like model where people can 'fork’ the map if necessary. In fact, this is already possible as v1.1 is listed as a repository under the Mozilla account.

I’m looking forward to doing some more analysis of the community survey after it closes!


Comments? Questions? Send them this way: @dajbelshaw or doug@mozillafoundation.org

Benjamin SmedbergHow I Do Code Reviews at Mozilla

Since I received some good feedback about my prior post, How I Hire at Mozilla, I thought I’d try to continue this is a mini-series about how I do other things at Mozilla. Next up is code review.

Even though I have found new module owners for some of the code I own, I still end up doing 8-12 review/feedback cycles per week. Reviews are only as good as the time you spend on them: I approach reviews in a fairly systematic way.

When I load a patch for review, I don’t read it top-to-bottom. I also try to avoid reading the bug report: a code change should be able to explain itself either directly in the code or in the code commit message which is part of the patch. If bugzilla comments are required to understand a patch, those comments should probably be part of the commit message itself. Instead, I try to understand the patch by unwrapping it from the big picture into the small details:

The Commit Message

  • Does the commit message describe accurately what the patch does?
  • Am I the right person to make a decision about this change?
  • Is this the right change to be making?
  • Are there any external specifications for this change? This could include a UX design, or a DOM/HTML specification, or something else.
  • Should there be an external specification for this change?
  • Are there other experts who should be involved in this change? Does this change require a UX design/review, or a security, privacy, legal, localization, accessibility, or addon-compatibility review or notification?

Read the Specification

If there is an external specification that this change should conform to, I will read it or the appropriate sections of it. In the following steps of the review, I try to relate the changes to the specification.

Documentation

If there is in-tree documentation for a feature, it should be kept up to date by patches. Some changes, such as Firefox data collection, must be documented. I encourage anyone writing Mozilla-specific features and APIs to document them primarily with in-tree docs, and not on developer.mozilla.org. In-tree docs are much more likely to remain correct and be updated over time.

API Review

APIs define the interaction between units of Mozilla code. A well-designed API that strikes the right balance between simplicity and power is a key component of software engineering.

In Mozilla code, APIs can come in many forms: IDL, IPDL, .webidl, C++ headers, XBL bindings, and JS can all contain APIs. Sometimes even C++ files can contain an API; for example Mozilla has a mostly-unfortunate pattern of using the global observer service as an API surface between disconnected code.

In the first pass I try to avoid reviewing the implementation of an API. I’m focused on the API itself and its associated doccomments. The design of the system and the interaction between systems should be clear from the API docs. Error handling should be clear. If it’s not perfectly obvious, the threading, asynchronous behavior, or other state-machine aspects of an API should be carefully documented.

During this phase, it is often necessary to read the surrounding code to understand the system. None of our existing tools are very good at this, so I often have several MXR tabs open while reading a patch. Hopefully future review-board integration will make this better!

Brainstorm Design Issues

In my experience, the design review is the hardest phase of a review, the part which requires the most experience and creativity, and provides the most value.

  • How will this change interact with other code in the tree?
  • Are there edge cases or failure cases that need to be addressed in the design?
  • Is it likely that this change will affect performance or security in unexpected ways?

Testing Review

I try to review the tests before I review the implementation.

  • Are there automated unit tests?
  • Are there automated performance tests?
  • Is there appropriate telemetry/field measurement, especially for error conditions?
  • Do the tests cover the specification, if there is one?
  • Do the tests cover error conditions?
  • Do the tests strike the right balance between “unit” testing of small pieces of code versus “integration” tests that ensure a feature works properly end-to-end?

Code Review

The code review is the least interesting part of the review. At this point I’m going through the patch line by line.

  • Make sure that the code uses standard Mozilla coding style. I really desperately want somebody to automate lots of this as part of the patch-submission process. It’s tedious both for me and for the patch author if there are style issues that delay a patch and require another review cycle.
  • Make sure that the number of comments is appropriate for the code in question. New coders/contributors sometimes have a tendency to over-comment things that are obvious just by reading the code. Experienced contributors sometimes make assumptions that are correct based on experience but should be called out more explicitly in the code.
  • Look for appropriate assertions. Assertions are a great form of documentation and runtime checking.
  • Look for appropriate logging. In Mozilla we tend to under-log, and I’d like to push especially new code toward more aggressive logging.
  • Mostly this is scanning the implementation. If there is complexity (such as threading!), I’ll have to slow down a lot and make sure that each access and state change is properly synchronized.

Re-read the Specification

If there is a specification, I’ll briefly re-read it to make sure that it was covered by the code I just finished reading.

Mechanics

Currently, I primarily do reviews in the bugzilla “edit” interface, with the “edit attachment as comment” option. Splinter is confusing and useless to me, and review-board doesn’t seem to be ready for prime-time.

For long or complex reviews, I will sometimes copy and quote the patch in emacs and paste or attach it to bugzilla when I’m finished.

In some cases I will cut off a review after one of the earlier phases: if I have questions about the general approach, the design, or the API surface, I will often try to clarify those questions before proceeding with the rest of the review.

There’s an interesting thread in mozilla.dev.planning about whether it is discouraging to new contributors to mark “review-” on a patch, and whether there are less-painful ways of indicating that a patch needs work without making them feel discouraged. My current practice is to mark r- in all cases where a patch needs to be revised, but to thank contributors for their effort so that they are still appreciated and to be as specific as possible about required changes while avoiding any words that could be perceived as an insult.

If I haven’t worked with a coder (paid or volunteer) in the past, I will typically ask them to submit an updated patch with any changes for re-review. This allows me to make sure that the changes were completed properly and didn’t introduce any new problems. After I’ve gained some experience with them, I will often trust people to make necessary changes and simply mark “r+ with review comments fixed”.

Michael KaplyDisabling Buttons In Preferences

I get asked a lot how to disable certain buttons in preferences like Make Firefox the default browser or the various buttons in the Startup groupbox. Firefox does have a way to disable these buttons, but it's not very obvious. This post will attempt to remedy that.

These buttons are controlled through preferences that have the text "disable_button" in them. Just changing the preference to true isn't enough, though. The preference has to be locked, either via the CCK2 or AutoConfig. What follows is a mapping of all the preferences to their corresponding buttons.

pref.general.disable_button.default_browser
Advanced->General->Make Firefox the default browser
pref.browser.homepage.disable_button.current_page
General->Use Current Pages
pref.browser.homepage.disable_button.bookmark_page
General->Use Bookmark
pref.browser.homepage.disable_button.restore_default
General->Restore to Default
security.disable_button.openCertManager
Advanced->Certificates->View Certificates
security.disable_button.openDeviceManager
Advanced->Certificates->Security Devices
app.update.disable_button.showUpdateHistory
Advanced->Update->Show Update History
pref.privacy.disable_button.cookie_exceptions
Privacy->History->Exceptions
pref.privacy.disable_button.view_cookies
Privacy->History->Show Cookies
pref.privacy.disable_button.view_passwords
Security->Passwords->Saved Passwords
pref.privacy.disable_button.view_passwords_exceptions
Security->Passwords->Exceptions

As a bonus, there's one more preference you can set and lock - pref.downloads.disable_button.edit_actions. It prevents the changing of any actions on the Applications page in preferences.
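
If you go the AutoConfig route, a minimal sketch of the configuration file looks like this (the pref names are taken from the table above; lockPref() is the standard AutoConfig function, and the usual setup pointing general.config.filename at this file is assumed to already be in place; the CCK2 exposes the same settings through its UI):

    // mozilla.cfg -- AutoConfig ignores the first line, so keep this comment here.
    // Lock a few "disable_button" prefs so the matching buttons are disabled.
    lockPref("pref.general.disable_button.default_browser", true);
    lockPref("security.disable_button.openCertManager", true);
    lockPref("pref.downloads.disable_button.edit_actions", true);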

Robert NymanThe editors I’ve been using – which one is your favorite?

The other day when I wrote about Vim and how to get started with it, I got a bit nostalgic about the editors I’ve been using over the years.

Therefore, I thought I’d list the editors I’ve been using over the years. I remember dabbling around with a few and trying to understand them, but this list is made up of editors that I’ve been using extensively:

    Allaire HomeSite
    Ah, good ol’ HomeSite. You never forget your first real editor that you used for your creations. It was later bought by MacroMedia and then, in 2009, it was retired. Its creator, Nick Bradbury, wrote a bit about that in HomeSite Discontinued. I also sometimes used TopStyle, also created by Nick, as a complement to HomeSite – and that one is actually still alive!
    Visual Studio.NET
    I was young and I needed the money.
    TextMate
    After my switch to Mac OS X, I quickly started using TextMate and it was my main editor for a good number of years.
    MacVim
    When I had used TextMate for a long time, a number of developers told me I should really get into Vim, where MacVim seemed like the most suitable alternative. I tried, really hard, with it for about 6 months; learned a lot, but eventually went back to TextMate.
    Sublime Text
    Later, along came Sublime Text, which seemed to have a lot of nice features and active development, while TextMate had been pretty stale for a long time.
    MacVim (again)
    And now, as explained in my recent blog post on Vim, I’m back there again. :-)

I also do like to dabble around with various editors, to see what I like, get another perspective on workflow and general inspiration. One thing I’m toying around with there is Atom from GitHub, and I look forward to testing it more as well.

Which editor are you using?

It would be very interesting and great if you’d like to share in the comments which editor you are using, and why you prefer it! Or with which editor you started your developer career!

Nick CameronThoughts on numeric types

Rust has fixed width integer and floating point types (`u8`, `i32`, `f64`, etc.). It also has pointer width types (`int` and `uint` for signed and unsigned integers, respectively). I want to talk a bit about when to use which type, and comment a little on the ongoing debate around the naming and use of these types.

Choosing a type


Hopefully you know whether you want an integer or a floating point number. From here on in I'll assume you want an integer, since they are the more interesting case. Hopefully you know if you need your integer to be signed or not. Then it gets interesting.

All other things being equal, you want to use the smallest integer you can for performance reasons. Programs run as fast or faster on smaller integers (especially if the larger integer has to be emulated in software, e.g., `u64` on a 32 bit machine). Furthermore, smaller integers will give smaller code.

At this point you need to think about overflow. If you pick a type which is too small for your data, then you will get overflow and usually bugs. You very rarely want overflow. Sometimes you do - if you want arithmetic modulo 2^n where n is 8, 16, 32, or 64, you can use the fixed width type and rely on overflow behaviour. Sometimes you might also want signed overflow for some bit twiddling tricks. But usually you don't want overflow.

If your data could grow to any size, you should use a type which will never overflow, such as Rust's `num::bigint::BigInt`. You might be able to do better performance-wise if you can prove that values might only overflow in certain places and/or you can cope with overflow without 'upgrading' to a wider integer type.

If, on the other hand, you choose a fixed width integer, you are asserting that the value will never exceed that size. For example, if you have an ascii character, you know it won't exceed 8 bits, so you can use `u8` (assuming you're not going to do any arithmetic which might cause overflow).

So far, so obvious. But, what are `int` and `uint` for? These types are pointer width; that means they are the same size as a pointer on the system you are compiling for. When using these types, you are asserting that a value will never grow larger than a pointer (taking into account details about the sign, etc.). This is actually quite a rare situation; the usual case is when indexing into an array, which is itself quite rare in Rust (since we prefer using an iterator).

What you should never do is think "this number is an integer, I'll use `int`". You must always consider the maximum size of the integer and thus the width of the type you'll use.

Language design issues


There are a few questions that keep coming up around numeric types - how to name the types? Which type to use as a default? What should `int`/`uint` mean?

It should be clear from the above that there are only very few situations when using `int`/`uint` is the right choice. So, it is a terrible choice for any kind of default. But what is a good choice? Well first of all, there are two meanings for 'default': the 'go to' type to represent an integer when programming (especially in tutorials and documentation), and the default when a numeric literal does not have a suffix and type inference can't infer a type. The first is a matter of recommendation and style, and the second is built-in to the compiler and language.

In general programming, you should use the right width type, as discussed above. For tutorials and documentation, it is often less clear which width is needed. We have had an unfortunate tendency to reach for `int` here because it is the most familiar and the least scary looking. I think this is wrong. We should probably use a variety of sized types so that newcomers to Rust get acquainted with the fixed width integer types and don't perpetuate the habit of using `int`.

For a long time, Rust had `int` as the default type when the compiler couldn't decide on something better. Then we got rid of the default entirely and made it a type error if no precise numeric type could be inferred. Now we have decided we should have a default again. The problem is that there is no good default. If you aren't being explicit about the width of the value, you are basically waving your hands about overflow and taking an "it'll be fine, whatever" approach, which is bad programming. There is no default choice of integer which is appropriate for that situation (except a growable integer like BigInt, but that is not an appropriate default for a systems language). We could go with `i64` since that is the least worst option we have in terms of overflow (and thus safety). Or we could go with `i32` since that is probably the most performant on current processors, but neither of these options is future-proof. We could use `int`, but this is wrong since it is so rare to be able to reason that you won't overflow when you have an essentially unknown width. Also, on processors with less than 32 bit pointers, it is far too easy to overflow. I suspect there is no good answer. Perhaps the best thing is to pick `i64` because "safety first".

Which brings us to naming. Perhaps `int`/`uint` are not the best names since they suggest they should be the default types to use when they are not. Names such as `index`/`uindex`, `int_ptr`/`u_ptr`, and `size`/`usize` have been suggested. All of these are better in that they suggest the proper use for these types. They are not such nice names though, but perhaps that is OK, since we should (mostly) discourage their use. I'm not really sure where I stand on this; again, I don't really like any of the options, and at the end of the day, naming is hard.

Raniere SilvaThe Status of Math in Open Access

This week, October 20–26, 2014, is Open Access Week (see the announcement), and it will end with the Mozilla Festival, which has an awesome science track. This post has some thoughts about math and open access and these two events.

Leia mais...

Curtis KoenigThanks for all the Fish

I’ve always loved that book, and in fact any of Douglas Adams’ books, as they made me laugh when reading them for the first time. And like the ending of that series, it always seemed a good way to start an ending. This is only the 3rd real job I’ve ever had and they’ve all ended with that as the subject line, so by now you all know where this is going.

The last 3 years 9 months and 21 days have been the best of my adult working life. Mozilla has been more than a job, more than a career. It was a home. The opportunity to apply ones talent in conjunction with values and mission is a gift. It’s a dream state, even on bad days, that I gladly would have remained a slumberer in. The Community of Mozilla is a powerful and wonderful uniqueness that embodies the core of what it means to be Open, and if we ever lose that we’ve lost a precious gem.

I hope to work with many of you again at some future time. If we cross paths somewhere I’d happily lift a libation in remembrance.

With that I shall end with one of my favorite bits of poetry:

Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;

Then took the other, as just as fair,
And having perhaps the better claim
Because it was grassy and wanted wear,
Though as for that the passing there
Had worn them really about the same,

And both that morning equally lay
In leaves no step had trodden black.
Oh, I kept the first for another day!
Yet knowing how way leads on to way
I doubted if I should ever come back.

I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood, and I,
I took the one less traveled by,
And that has made all the difference.

 

/Curtis


Ben KeroMissoula visit, day 1

NOTE: This is a personal post, so if these sort of things do not interest you please feel free to disregard.

Yesterday I drove from Seattle to Missoula to visit my mother and help her sort out her health issues. I left later than usual, but with the drive time reduced by an hour compared to Portland combined with the relatively straight and boring I-90 I made it to Missoula with energy to spare (which also might have been why I was up far later than my arrival time).

The purpose of my visit is to visit my mother, show that I still care about my family, and help her sort out her medical issues (she’s currently going through her third round of cancer). When I arrived last night around 2300 I didn’t get a very good look at her. She had stayed awake past her normal 2000 time to await my arrival and greet me. Last night was relatively uneventful besides my restful sleep. She showed me to the apartment’s single bedroom. Each surface was meticulously cleaned, although none of the multitude of tchotchkes or personal accessories were organized or put away. While using the bathroom I noticed there was a small mop and bucket. Normally I wouldn’t think much of it, but in previous weeks my sister told me that my mother had given herself a panic attack making sure the apartment was spotless for my arrival. Unfortunately my trip over was waylaid for a few weeks, so I hope that she hasn’t been in such a state the whole time.

Last year I sent her an air purifier to remove all the pet dander from the air, and although she couldn’t figure out what it was or its purpose, thankfully she had finally figured it out and was using it. As a result the air was much cleaner and her phlegm-laden smoker’s cough was slightly better than last time I visited.

This morning I woke up later because I also fell asleep quite late into the morning combined with the timezone change. She has a few friends here in Missoula who (as far as I can tell) provide her with some company and a source of gossip and usefulness. When I emerged from my room this morning I found her on the phone with one such friend. I didn’t get much from the conversation, but apparently the daily phone calls are a routine for her. That’s good.

What I really didn’t like though was the television. During the day it was a large part of her unchanging environment. Equipped with a “digital cable” box, this tube never showed anything besides Informative Murder Porn. Likewise, the bookshelves were chock-full of James Patterson books, promising more tripe romance and novelized informative murder porn. Although I fear that it’s rotting her brain I refrained from commenting about it.

After my emergence this morning we finally got a good look at each other. She finally noticed my attempt at growing a beard and I finally noticed her emaciation. Her weight is below 100lb now, and she looks positively skeletal. It’s not a pretty sight. It appears she attempted to dye her hair recently, but even with that effort it’s resisted, instead opting for a grey and wispy appearance. I offered to pick her up a meal with my lunch, but she refused saying that she already had breakfast.

We talked for a while about how she’s been. Besides the new medical situation nothing ever seems to change with her. She remains cloistered in her dark, smoke-smelling apartment with nothing to keep her company save her pets, her informative murder porn, and trashy novels. She hasn’t expressed any dissatisfaction with the situation, so perhaps she enjoys it. I haven’t asked, and haven’t decided if it’s appropriate to ask yet. Perhaps it’s imposing my values on her to think that she would be unhappy with this bountiful life she’s living.

This morning we talked about her medical situation. The gist of the situation is that she believes that she’s been caught up in a catch-22 with a set of doctors, each waiting to hear from another before proceeding to make a diagnosis and start a treatment. I asked her about what she knows (red blood cell count is down), and what kind of treatment she was currently undergoing. The answer was no treatment, so I continued by asking which doctors she’s seen and what their next steps would be (or what they’re waiting on). I got quite a few answers about quite a few doctors, and was unfortunately unable to follow most of it. I asked her to write a list of doctors down, along with what she thinks they’re waiting on and the last time they’ve been in contact. Hopefully when I get home this evening I can help untangle this and get her the treatment she needs. She didn’t seem particularly worried about things, which frustrated me. Her off-the-cuff attitude about it, combined with the television showing an emotionally charged lady with a bloody knife crumpled into a heap on the ground while the police intervened to save the day and arrest the paedophile, caused me to want to ragequit and give her some time to compile the list of doctors. I do not have much hope of returning home this evening to any list.

This afternoon, as usual for my Missoula visits, I’ve holed up at The Break coffee shop. It’s an old standby for spending uninterrupted time on my laptop to get a little work done. It’s also conveniently located far away from that apartment, and near the other goodness of downtown Missoula. The espresso might have been burnt, but the well worn tables, music reminiscent of my high school days in Missoula, and mixed clientèle lead to a very pleasing atmosphere. As I sit at my stained and worn coffee table typing away at my laptop I can see a young man in a cowboy hat and vest conversing with a friend in a denim jacket over a cup of joe, a dreadlocked young lady enjoying an iced coffee alone, equipped with Macbook and textbooks. Various others including a suited businessman, and a few white collar workers taking a lunch break are around. The bright but persistently drizzling conditions outside add to the hearth-like atmosphere of the shop. This place is going to get sick of me before the week is out.

This afternoon I’m going to attempt to do a bit of work and get in contact with my father and sister for their obligatory visits. I hope that I can return to my mother’s place and provide some sort of assistance besides moral support.

I’ll be in Missoula until Thursday evening.

Mic BermanWhat Does Your Life Design Look Like?

I would guess many people would react to this title like: what on earth is she talking about? Design my life - what does that even mean? Many people I coach and work with have not thought about their life’s design - meaning: the focused work you do, is it in support of your values and vision? The people you surround yourself with - do they inspire and motivate you? The physical space you live in - does it give you energy?

 

Your life design includes more than your career. I invite my clients to think beyond what’s on paper. Your career is your role, salary, title, industry, company. I’m asking you to also think of who you will be surrounded with, where it might take you beyond the role right now, how it will honour or not honour your values, and how it will fit with your vision for your life.

 

If we jump mindlessly from one opportunity to the next - even if they are great opportunities on the surface, e.g. more money, a bigger title, etc. - it can cost us in our personal lives in ways we may not consider. Arianna Huffington recently wrote about this in her book Thrive, “Our relentless pursuit of the two traditional metrics of success - money and power - has led to an epidemic of burnout and stress-related illnesses, and an erosion in the quality of our relationships, family life, and, ironically, our careers”. In addition to this sentiment, everyone’s heard repeated stories of those on their deathbed who never regret not having worked more or harder - what they always say is that they wished they’d spent more time with people they loved or doing things that inspire and nurture. These are our regrets. So how do you want to design your life with no regrets in mind?

 

Here are my 3 tips on life design, which I hope inspire you to re-create your designed life:

  1. Know your values and how you want to honour them

  2. Define your vision if not for 5 years at least for the next year - what are the big impressions you want to leave

  3. Who are your stakeholders (the people that matter to you) and how will you honour them

Mic BermanHow are you discerning who and what influences you?

Being discerning about my influences and experiences - I choose the people, experiences and beauty to influence my life. I find people aren’t always choosy about who or what they allow in their lives - they tend to revert to a default based on who they’ve known forever or do so without thought. I am more discerning now - and decide who I will give my time to, when I might read email, pay attention to my phone, and certainly who I will listen to, take advice from and experiences I want.

On Friday nights I’ve begun turning my phone off from dinner until Saturday night and sometimes Sunday morning - it gives me the opportunity to not be distracted by anyone and fully present in how I design my weekend to maximize rejuvenation and reflection.

I have found several friends that are smarter than me in key areas I love to learn about and so I soak up their thoughts, we share our challenges and learn from each other, we push each other to be even better and stronger than we know and acknowledge where we’re at or how we’re feeling without judgement.

For a complete shift in perspective and experience, I love growing our organic market-garden farm. It’s a venture that gives me solace, grounding, and joy on a spiritual and physical plane that is entirely unique - to grow my own food and share this bounty with those that appreciate what it takes is beyond joyful. Particularly when I also then learn what can be done with, e.g., ground cherry tomatoes and chocolate or wild leeks and miso :)

And I choose to include some element of beauty in my daily life and surroundings. That might mean picking a simple bouquet of wild flowers to infuse a team meeting room with the fresh scent of lilacs. Or it could be appreciating fine art in painting or sculpture and the profound and thought-provoking impact the artist intends.

Doug BelshawA 10-point #MozFest survival guide

It’s the Mozilla Festival in London this weekend. It’s sold out, so you’ll have to beg, borrow or steal a ticket! This will be my fourth, and third as a paid contributor (i.e. Mozilla employee).

Here are my tips for getting the most out of it.

1. Attend the whole thing

There’s always the temptation with multi-day events not to go to each of the days. It’s easy to slip off into the city – especially if it’s one you haven’t been to before. However, that would be a real shame as there’s so much to do and see at MozFest. Plus, you really should have booked a few days either side to chill out.

2. Sample everything

Some tracks will grab you more than others. However, with nine floors and multiple sessions happening at the same time, there’s always going to be something to keep you entertained. Feel free to vote with your feet if you’re not getting maximum value from a session – and drop into something you don’t necessarily know a lot about!

3. Drink lots

Not alcohol or coffee – although there’ll be plenty of that on offer! I mean fluids that will rehydrate you. At the Mozilla Summit at the end of last year we were all given rehydration powder along with a Camelbak refillable bottle. This was the perfect combination and I urge you to bring something similar. Pro tip: if you can’t find the powder (it’s harder to come by in the UK) just put a slice of lemon in the bottom of the bottle to keep it tasting fresh all day!

4. Introduce yourself to people

The chances are that you don’t know all 1,600 people who have tickets for MozFest. I know I don’t! You should feel encouraged to go up and introduce yourself to people who look lost, bewildered, or at a loose end. Sample phrases that seem to work well:

  • “Wow, it’s pretty crazy, eh?”
  • “Hi! Which session have you just been to?”
  • “Is this your first MozFest?”

5. Take time out

It’s easy to feel overwhelmed, so feel free to find a corner, put your headphones on and zone out for a while. You’ll see plenty of people doing this – on all floors! Pace yourself – it’s a marathon, not a sprint.

6. Wear comfortable shoes/clothing

There are lifts at the venue but, as you can imagine, with so many people there they get full quickly. As a result there’ll be plenty of walking up and down stairs. Wear your most comfortable pair of shoes and clothing that’ll still look good when you’re a sticky mess. ;-)

7. Expect tech fails

I’ve been to a lot of events and at every single one, whether because of a technical problem or human error, there’s been a tech fail. Expect it! Embrace it. The wifi is pretty good, but mobile phone coverage is poor. Plan accordingly and have a backup option.

8. Ask questions

With so many people coming from so many backgrounds and disciplines, it’s difficult to know the terminology involved. If someone ‘drops a jargon bomb’ then you should call them out on it. If you don’t know what they mean, then the chances are others won’t know either. And if you’re the one doing the explaining, be aware that others may not share your context.

9. Come equipped

Your mileage may vary, but I’d suggest the following:

  • Bag
  • Laptop
  • Mobile phone and/or tablet
  • Pen
  • Notepad
  • Multi-gang extension lead
  • Charging cables
  • Headphones
  • Snacks (e.g. granola bars)
  • Refillable water bottle

I’d suggest a backpack as something over one shoulder might eventually cause pain. You might also want to put a cloth bag inside the bag you’re carrying in case you pick up extra stuff.

10. Build (and network!)

MozFest is a huge opportunity to meet and co-create stuff with exceptionally talented and enthusiastic people. So get involved! Bring your skills and lend a hand in whatever’s being built. If nothing else, you can take photos and help document the festival.

The strapline of MozFest is ‘arrive with an idea, leave with a community’. Unlike some conferences that have subtitles that, frankly, bear no relation to what actually happens, this one is dead on. You’ll want to keep in touch with people, so in addition to the stuff listed above you might want to bring business cards. Far from being a 20th century thing, I’ve found them much more useful than just writing on a scrap of paper or exchanging Twitter usernames.


This isn’t meant to be comprehensive, just my top tips. But I’d be very interested to hear your advice to newbies if you’re a MozFest veteran! Leave a comment below. :-)

Image CC BY-SA Alan Levine

Update: my colleague Kay Thaney has a great list of blighty sights that you should check out too!