Arky: Google Cardboard 360° Photosphere Viewer with A-Frame

In my previous post "Embedding Google Cardboard Camera VR Photosphere with A-Frame", I wrote that some talented programmer would probably create a better solution for embedding Google Cardboard camera photosphere using A-Frame.

I didn't know that Chris Car had already created a sophisticated solution for this problem. You can view it on the A-Frame blog.

You might first have to use the Google Cardboard camera converter tool to prepare your Google Cardboard photosphere.


Robert O'Callahan: Random Thoughts On Rust: crates.io And IDEs

I've always liked the idea of Rust, but to tell the truth until recently I hadn't written much Rust code. Now I've written several thousand lines of Rust code and I have some more informed comments :-). In summary: Rust delivers, with no major surprises, but of course some aspects of Rust are better than I expected, and others worse.

cargo and crates.io

cargo and crates.io are awesome. They're probably no big deal if you've already worked with a platform that has similar infrastructure for distributing and building libraries, but I'm mainly a systems programmer, which until now meant C and C++. (This is one note in a Rust theme: systems programmers can have nice things.) Easy packaging and version management encourages modularising and publishing code. Knowing that publishing to crates.io gives you some protection against language/compiler breakage is also a good incentive.

There's been some debate about whether Rust should have a larger standard library ("batteries included"). IMHO that's unnecessary; my main issue with crates.io is discovery. Anyone can claim any unclaimed name, and it's sometimes not obvious what the "best" library is for any given task. An "official" directory matching common tasks to blessed libraries would go a very long way. I know from browser development that an ever-growing centrally-supported API surface is a huge burden, so I like the idea of keeping the standard library small and keeping library development decentralised and reasonably independent of language/compiler updates. It's really important to be able to stop supporting an unwanted library, letting its existing users carry on using it without imposing a burden on anyone else. However, it seems likely that in the long term crates.io will accumulate a lot of these orphaned crates, which will make searches increasingly difficult until the discovery problem is addressed.


IDEs

So far I've been using Eclipse RustDT. It's better than nothing, and good enough to get my work done, but unfortunately not all that good. I've heard that others are better but none are fantastic yet. It's a bit frustrating because in principle Rust's design enables the creation of extraordinarily good IDE support! Unlike C/C++, Rust projects have a standard, sane build system and module structure. Rust is relatively easy to parse and is a lot simpler than C++. Strong static typing means you can do code completion without arcane heuristics.

A Rust IDE could generate code for you in many situations, e.g.: generate a skeleton match body covering all cases of a sum type; generate a skeleton trait implementation; one-click #[derive] annotation; automatically add use statements and update Cargo.toml; automatically insert conversion trait calls and type coercions; etc. Rust has quite comprehensive style and naming guidelines that an IDE can enforce and assist with. (I don't like a couple of the standard style decisions --- the way rustfmt sometimes breaks very().long().method().call().chains() into one call per line is galling --- but it's much better to have them than a free-for-all.)

Rust is really good at warning about unused cruft up to crate boundaries --- a sane module system at work! --- and one-click support for deleting it all would be great. IDEs should assist with semantic versioning --- letting you know if you've changed stable API but haven't revved the major version. All the usual refactorings are possible, but unlike mainstream languages you can potentially do aggressive code motion without breaking semantics, by leveraging Rust's tight grip on side effects. (More about this in another blog post.)

I guess one Rust feature that makes an IDE's job difficult is type inference. For C and C++ an IDE can get adequate information for code-completion etc by just parsing up to the user's cursor and ignoring the rest of the containing scope (which will often be invalid or non-existent). That approach would not work so well in Rust, because in many cases the types of variables depend on code later in the same function. The IDE will need to deal with partially-known types and try very hard to quickly recover from parse errors so later code in the same function can be parsed. It might be a good idea to track text changes and reuse previous parses of unchanged text instead of trying to reparse it with invalid context. On the other hand, IDEs and type inference have some synergy because the IDE can display inferred types.

Mitchell Baker: Practicing Open: Expanding Participation in Hiring Leadership

Last fall I came across a hiring practice that surprised me. We were hiring for a pretty senior position. When I looked into the interview schedule I realized that we didn’t have a clear process for the candidate to meet a broad cross-section of Mozillians.  We had a good clear process for the candidate to meet peers and people in the candidate’s organization.  But we didn’t have a mechanism to go broader.

This seemed inadequate to me, for two reasons.  First, the more senior the role, the broader a part of Mozilla we expect someone to be able to lead, and the broader a sense of representing the entire organization we expect that person to have.  Our hiring process should reflect this by giving the candidate and a broader set of people a chance to interact.

Second, Mozilla’s core DNA is from the open source world, where one earns leadership by first demonstrating one’s competence to one’s peers. That makes Mozilla a tricky place to be hired as a leader. So many roles don’t have ways to earn leadership through demonstrating competence before being hired. We can’t make this paradox go away. So we should tune our hiring process to do a few things:

  • Give candidates a chance to demonstrate their competence to the same set of people who hope they can lead. The more senior the role, the broader a part of Mozilla we expect someone to be able to lead.
  • Expand the number of people who have a chance to make at least a preliminary assessment of a candidate’s readiness for a role. This isn’t the same as the open source ideal of working with someone for a while. But it is a big difference from never knowing or seeing or being consulted about a candidate. We want to increase the number of people who are engaged in the selection and then helping the newly hired person succeed.

We made a few changes right away, and we’re testing out how broadly these changes might be effective.  Our immediate fix was to organize a broader set of people to talk to the candidate through a panel discussion. We aimed for a diverse group, from role to gender to geography. We don’t yet have a formalized way to do this, and so we can’t yet guarantee that we’re getting a representational group or that other potential criteria are met. However, another open source axiom is that “the perfect is the enemy of the good.” And so we started this with the goal of continual improvement. We’ve used the panel for a number of interviews since then.

We looked at this in more detail during the next senior leadership hire. Jascha Kaykas-Wolff, our Chief Marketing Officer, jumped on board, suggesting we try this out with the Vice President of Marketing Communications role he had open. Over the next few months Jane Finette (executive program manager, Office of the Chair) worked closely with Jascha to design and pilot a program of extending participation in the selection of our next VP of MarComm. Jane will describe that work in the next post. Here, I’ll simply note that the process was well received. Jane is now working on a similar process for the Director level.

Mozilla Cloud Services Blog: Sending VAPID identified WebPush Notifications via Mozilla’s Push Service


The Web Push API provides the ability to deliver real time events (including data) from application servers (app servers) to their client-side counterparts (applications), without any interaction from the user. In other parts of our Push documentation we provide a general reference for the API and a basic usage tutorial. This document addresses the server-side portion in detail, including integrating Push into your server effectively, and how to avoid common issues.

Note: Much of this document presumes that you’re familiar with programming and have done some light work in cryptography. Unfortunately, since this is new technology, there aren’t many libraries available that make sending messages painless and easy. As new libraries come out, we’ll add pointers to them, but for now, we’re going to spend time talking about how to do the encryption so that folks who need it, or want to build those libraries, can understand enough to be productive.

Bear in mind that Push is not meant to replace richer messaging technologies like Google Cloud Messaging (GCM), Apple Push Notification service (APNs), or Microsoft’s Windows Notification System (WNS). Each has its benefits and costs, and it’s up to you as developers or architects to determine which system solves your particular set of problems. Push is simply a low cost, easy means to send data to your application.

Push Summary

The Push system looks like:
[Diagram: the push process flow]

Application — The user facing part of the program that interacts with the browser in order to request a Push Subscription, and receive Subscription Updates.
Application Server — The back-end service that generates Subscription Updates for delivery across the Push Server.
Push — The system responsible for delivery of events from the Application Server to the Application.
Push Server — The server that handles the events and delivers them to the correct Subscriber. Each browser vendor has their own Push Server to handle subscription management. For instance, Mozilla uses autopush.
Subscription — A user request for timely information about a given topic or interest, which involves the creation of an Endpoint to deliver Subscription Updates to. Sometimes also referred to as a “channel”.
Endpoint — A specific URL that can be used to send a Push Message to a specific Subscriber.
Subscriber — The Application that subscribes to Push in order to receive updates, or the user who instructs the Application to subscribe to Push, e.g. by clicking a “Subscribe” button.
Subscription Update — An event sent to Push that results in a Push Message being received from the Push Server.
Push Message — A message sent from the Application Server to the Application, via a Push Server. This message can contain a data payload.

The main parts that are important to Push from a server-side perspective are covered below in detail.

Identifying Yourself

Mozilla goes to great lengths to respect privacy, but sometimes, identifying your feed can be useful.

Mozilla offers the ability for you to identify your feed content, which is done using the Voluntary Application Server Identification for Web Push (VAPID) specification. This is a set of header values you pass with every subscription update. One value is a VAPID key that validates your VAPID claim, and the other is the VAPID claim itself — a set of metadata describing and defining the current subscription and where it has originated from.

VAPID is only useful between your servers and our push servers. If we notice something unusual about your feed, VAPID gives us a way to contact you so that things can go back to running smoothly. In the future, VAPID may also offer additional benefits like reports about your feeds, automated debugging help, or other features.

In short, VAPID is a bit of JSON that contains an email address to contact you, an optional URL that’s meaningful about the subscription, and a timestamp. I’ll talk about the timestamp later, but really, think of VAPID as the sort of information you’d want us to have to help you figure out what went wrong.

It may be that you only send one feed, and just need a way for us to tell you if there’s a problem. It may be that you have several feeds you’re handling for customers of your own, and it’d be useful to know if maybe there’s a problem with one of them.

Generating your VAPID key

The easiest way to do this is to use an existing library for your language. VAPID is a new specification, so not all languages may have existing libraries.
Currently, we’ve collected several libraries for this, and we are very happy to learn about more.

Fortunately, the method to generate a key is fairly easy, so you could implement your own library without too much trouble.

The first requirement is an Elliptic Curve Diffie-Hellman (ECDH) library capable of working with Prime 256v1 (also known as “p256” or similar) keys. For many systems, the OpenSSL package provides this feature; you should check that your version supports ECDH and Prime 256v1. If not, you may need to download, compile and link the library yourself.

At this point you should generate an EC key for your VAPID identification. Please remember that you should NEVER reuse the VAPID key as the data encryption key you’ll need later. To generate an ECDH key using openssl, enter the following command in your Terminal:

openssl ecparam -name prime256v1 -genkey -noout -out vapid_private.pem

This will create an EC private key and write it into vapid_private.pem. It is important to safeguard this private key. While you can always generate a replacement key that will work, Push (or any other service that uses VAPID) will recognize the different key as a completely different user.

You’ll need to send the public key as one of the headers. This can be extracted from the private key with the following terminal command:

openssl ec -in vapid_private.pem -pubout -out vapid_public.pem

Creating your VAPID claim

VAPID uses JWT to contain a set of information (or “claims”) that describe the sender of the data. JWTs (or JSON Web Tokens) are a pair of JSON objects, turned into base64 strings, and signed with the private EC key you just made. A JWT element contains three parts separated by “.”, and may look like:


  1. The first element is a “header” describing the JWT object. This JWT header is always the same — the static string {"typ":"JWT","alg":"ES256"} — which is URL safe base64 encoded to eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9. For VAPID, this string should always be the same value.
  2. The second element is a JSON dictionary containing a set of claims. For our example, we’ll use the following claims:
    "sub": "",
    "exp": "1463001340"

    The claims are as follows:

    1. sub : The “Subscriber” — a mailto link for the administrative contact for this feed. It’s best if this email is not a personal email address, but rather a group email so that if a person leaves an organization, is unavailable for an extended period, or otherwise can’t respond, someone else on the list can. Mozilla will only use this if we notice a problem with your feed and need to contact you.
    2. exp : “Expires” — this is an integer that is the date and time that this VAPID header should remain valid until. It doesn’t reflect how long your VAPID signature key should be valid, just this specific update. Normally this value is fairly short, usually the current UTC time + no more than 24 hours. A long lived “VAPID” header does introduce a potential “replay” attack risk, since the VAPID headers could be reused for a different subscription update with potentially different content.

    Feel free to add additional items to your claims. This info really should be the sort of thing you want to get at 3AM when your server starts acting funny. For instance, you may run many AWS EC2 instances, and one might be acting up. It might be a good idea to include the AMI-ID of that instance (e.g. “aws_id”:”i-5caba953″). You might be acting as a proxy for some other customer, so adding a customer ID could be handy. Just remember that you should respect privacy and should use an ID like “abcd-12345” rather than “Mr. Johnson’s Embarrassing Bodily Function Assistance Service”. Just remember to keep the data fairly short so that there aren’t problems with intermediate services rejecting it because the headers are too big.

    Once you’ve composed your claims, you need to convert them to a JSON formatted string with no padding space between elements, for example:


    Then convert this string to a URL-safe base64-encoded string, with the padding ‘=’ removed. For example, if we were to use python:

       import base64
       import json

       # These are the claims (the "sub" value has been elided here)
       claims = {"sub": "", "exp": "1463001340"}
       # convert the claims to JSON with no padding spaces, then encode to
       # URL-safe base64 and strip the "=" padding
       body = base64.urlsafe_b64encode(
           json.dumps(claims, separators=(",", ":")).encode()).rstrip(b"=")
       print(body)

    would give us


    This is the “body” of the JWT base string.

    The header and the body are separated with a ‘.’ making the JWT base string.


  3. The final element is the signature. This is an ECDSA signature of the JWT base string created using your VAPID private key. This signature is URL safe base64 encoded, “=” padding removed, and again joined to the base string with a ‘.’ delimiter.

    Generating the signature depends on your language and library, but is done by the ecdsa algorithm using your private key. If you’re interested in how it’s done in Python or JavaScript, you can look at the code in the VAPID reference libraries.

    Since your private key will not match the one we’ve generated, the signature you see in the last part of the following example will be different.

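Putting the three elements together, a sketch of building and signing a VAPID JWT might look like the following. It assumes the `cryptography` package (any ES256-capable library works), and generates a throwaway key in place of vapid_private.pem purely for illustration:

```python
import base64
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.utils import decode_dss_signature

def b64url(data):
    # URL-safe base64 with the "=" padding removed
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_vapid_jwt(private_key, claims):
    # The header is always the same static value for VAPID
    header = b64url(json.dumps({"typ": "JWT", "alg": "ES256"},
                               separators=(",", ":")).encode())
    body = b64url(json.dumps(claims, separators=(",", ":")).encode())
    base_string = ("%s.%s" % (header, body)).encode()
    # ES256 signatures are the raw 32-octet r and s values concatenated;
    # `cryptography` emits a DER signature, so unpack it first
    r, s = decode_dss_signature(
        private_key.sign(base_string, ec.ECDSA(hashes.SHA256())))
    sig = b64url(r.to_bytes(32, "big") + s.to_bytes(32, "big"))
    return "%s.%s" % (base_string.decode(), sig)

# In practice, load vapid_private.pem with serialization.load_pem_private_key();
# a throwaway key is generated here so the sketch is self-contained.
key = ec.generate_private_key(ec.SECP256R1())
token = sign_vapid_jwt(key, {"sub": "mailto:admin@example.com",
                             "exp": "1463001340"})
```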

Forming your headers

The VAPID claim you assembled in the previous section needs to be sent along with your Subscription Update as an Authorization header Bearer token — the complete token should look like so:

Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0.uyVNHws2F3k5jamdpsH2RTfhI3M3OncskHnTHnmdo0hr1ZHZFn3dOnA-42YTZ-u8_KYHOOQm8tUm-1qKi39ppA

(Note: the header should not contain line breaks. Those have been added here to aid in readability)

Important! The Authorization header ID will be changing soon from “Bearer” to “WebPush”. Some Push Servers may accept both, but if your request is rejected, you may want to try changing the tag. This, of course, is part of the fun of working with draft specifications.

You’ll also need to send a Crypto-Key header along with your Subscription Update — this includes a p256ecdsa element that takes the VAPID public key as its value — formatted as a URL safe, base64 encoded DER formatted string of the raw public key.

An example follows:
Crypto-Key: p256ecdsa=BA5vkyMXVfaKuehJuecNh30-NiC7mT9gM97Op5d8LiAKzfIezLzCZMwrY7OypBBNwEnusGkdg9F84WqW1j5Ymjk
Note: If you like, you can cheat here and use the content of “vapid_public.pem”. You’ll need to remove the “-----BEGIN PUBLIC KEY-----” and “-----END PUBLIC KEY-----” lines, remove the newline characters, and convert all “+” to “-” and “/” to “_”.
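Following that note, here is a minimal Python sketch of the conversion. This is the naive line-stripping approach the note describes, not a general PEM/DER parser, and the function name is just illustrative:

```python
def pem_to_p256ecdsa(pem_text):
    # Drop the BEGIN/END lines, join the base64 body into one string,
    # then switch to the URL-safe alphabet and strip the "=" padding.
    body = "".join(line for line in pem_text.splitlines()
                   if "PUBLIC KEY" not in line)
    return body.replace("+", "-").replace("/", "_").rstrip("=")
```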

Note: You can validate your work against the VAPID test page — this will tell you if your headers are properly encoded. In addition, the VAPID repo contains libraries for JavaScript and Python to handle this process for you.

We’re happy to consider PRs to add libraries covering additional languages.

Receiving Subscription Information

Your Application will receive an endpoint and key retrieval functions that contain all the info you’ll need to successfully send a Push message. See Using the Push API for details about this. Your application should send this information, along with whatever additional information is required, securely to the Application Server as a JSON object.

Such a post back to the Application Server might look like this:

{"customerid": "123456",
 "subscription": {"endpoint": "…",
                  "keys": {"p256dh": "BOrnIslXrUow2VAzKCUAE4sIbK00daEZCswOcf8m3TF8V…",
                           "auth": "k8JV6sjdbhAi1n3_LDBLvA"}},
 "favoritedrink": "warm milk"}

In this example, the “subscription” field contains the elements returned from a fulfilled PushSubscription. The other elements represent additional data you may wish to exchange.

How you decide to exchange this information is completely up to your organization. You are strongly advised to protect this information. If an unauthorized party gained this information, they could send messages pretending to be you. This can be made more difficult by using a “Restricted Subscription”, where your application passes along your VAPID public key as part of the subscription request. A restricted subscription can only be used if the subscription carries your VAPID information signed with the corresponding VAPID private key. (See the previous section for how to generate VAPID signatures.)

Subscription information is subject to change and should be considered “opaque”. You should consider the data to be a “whole” value and associate it with your user. For instance, attempting to retain only a portion of the endpoint URL may lead to future problems if the endpoint URL structure changes. Key data is also subject to change. The app may receive an update that changes the endpoint URL or key data. This update will need to be reflected back to your server, and your server should use the new subscription information as soon as possible.

Sending a Subscription Update Without Data

Subscription Updates come in two varieties: data free and data bearing. We’ll look at these separately, as they have differing requirements.

Data Free Subscription Updates

Data free updates require no additional App Server processing; however, your Application will have to do additional work in order to act on them. Your application will simply get a “push” message containing no further information, and it may have to connect back to your server to find out what the update is. It is useful to think of Data Free updates like a doorbell — “Something wants your attention.”

To send a Data Free Subscription, you POST to the subscription endpoint. In the following example, we’ll include the VAPID header information. Values have been truncated for presentation readability.

curl -v -X POST \
  -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJhdWQiOiJodHR…"\
  -H "Crypto-Key: p256ecdsa=BA5vkyMXVfaKuehJuecNh30-NiC7mT9gM97Op5d8LiAKzfIezLzC…"\
  -H "TTL: 0"\
  "<subscription endpoint URL>"

This should result in an Application getting a “push” message, with no data associated with it.
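The same header assembly can be sketched in Python. The helper name below is illustrative, and the actual delivery step (shown in the trailing comment) assumes an HTTP client such as the `requests` library:

```python
def data_free_headers(vapid_token, vapid_public_key, ttl=0):
    # The signed VAPID JWT rides in the Authorization header, the VAPID
    # public key in Crypto-Key, and TTL controls how long the push server
    # holds the message for a disconnected user agent.
    return {
        "Authorization": "Bearer " + vapid_token,
        "Crypto-Key": "p256ecdsa=" + vapid_public_key,
        "TTL": str(ttl),
    }

# With e.g. the `requests` library, POST to the subscription endpoint:
#   requests.post(endpoint, headers=data_free_headers(token, public_key))
```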

To see how to store the Push Subscription data and send a Push Message using a simple server, see our Using the Push API article.

Data Bearing Subscription Updates

Data Bearing updates are a lot more interesting, but do require significantly more work. This is because we treat our own servers as potentially hostile and require “end-to-end” encryption of the data. The message you send across the Mozilla Push Server cannot be read. To ensure privacy, however, your application will not receive data that cannot be decoded by the User Agent.

There are libraries available for several languages, and we’re happy to accept or link to more.

The encryption method used by Push is Elliptic Curve Diffie Hellman (ECDH) encryption, which uses a key derived from two pairs of EC keys. If you’re not familiar with encryption, or the brain twisting math that can be involved in this sort of thing, it may be best to wait for an encryption library to become available. Encryption is often complicated to understand, but it can be interesting to see how things work.

Note: If you’re familiar with Python, you may want to just read the code for the http_ece package. If you’d rather read the original specification, that is available. While the code is not commented, it’s reasonably simple to follow.

Data encryption summary

  • Octet — An 8 bit byte of data (between \x00 and \xFF)
  • Subscription data — The subscription data to encode and deliver to the Application.
  • Endpoint — the Push service endpoint URL, received as part of the Subscription data.
  • Receiver key — The p256dh key received as part of the Subscription data.
  • Auth key — The auth key received as part of the Subscription data.
  • Payload — The data to encrypt, which can be any streamable content between 2 and 4096 octets.
  • Salt — 16 octet array of random octets, unique per subscription update.
  • Sender key — A new ECDH key pair, unique per subscription update.

Web Push limits the size of the data you can send to between 2 and 4096 octets. You can send larger data as multiple segments, however that can be very complicated. It’s better to keep segments smaller. Data, whatever the original content may be, is also turned into octets for processing.

Each subscription update requires two unique items — a salt and a sender key. The salt is a 16 octet array of random octets. The sender key is a ECDH key pair generated for this subscription update. It’s important that neither the salt nor sender key be reused for future encrypted data payloads.

The receiver key is the public key from the client’s ECDH pair. It is base64, URL safe encoded and will need to be converted back into an octet array before it can be used. The auth key is a shared “nonce”, or bit of random data like the salt.
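Because that padding has been stripped, the values can’t be fed straight to a base64 decoder. A small helper (the name is just illustrative) can restore the padding first:

```python
import base64

def decode_subscription_key(key_string):
    # Subscription values arrive URL-safe base64 encoded with the "="
    # padding stripped; restore the padding before decoding to octets.
    padding = "=" * ((4 - len(key_string) % 4) % 4)
    return base64.urlsafe_b64decode(key_string + padding)
```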

Emoji Based Diagram

  Subscription Data             Per Update Info          Update
  🎯 Endpoint                   🔑 Private Server Key    📄 Payload
  🔒 Receiver key (‘p256dh’)    🗝 Public Server Key
  💩 Auth key (‘auth’)          📎 Salt
                                🔐 Private Sender Key
                                ✒️ Public Sender Key

Encryption uses a fabricated key and nonce. We’ll discuss how the actual encryption is done later, but for now, let’s just create these items.

Creating the Encryption Key and Nonce

The encryption relies heavily on HKDF (HMAC-based Key Derivation Function) using a SHA-256 hash.

Creating the secret

The first HKDF function you’ll need will generate the common secret (🙊), which is a 32 octet value derived using the auth key (💩) as the salt, run over the string “Content-Encoding: auth\x00”.
So, in emoji =>
🔐 = 🔑(🔒);
🙊 = HKDF(💩, “Content-Encoding: auth\x00”).🏭(🔐)

An example function in Python could look like so:

# Assumes pyelliptic and the HKDF class from the cryptography package:
#   from cryptography.hazmat.primitives import hashes
#   from cryptography.hazmat.primitives.kdf.hkdf import HKDF
# receiver_key must have "=" padding added back before it can be decoded.
# How that's done is an exercise for the reader.
receiver_key = subscription['keys']['p256dh']
server_key = pyelliptic.ECC(curve="prime256v1")
sender_key = server_key.get_ecdh_key(base64.urlsafe_b64decode(receiver_key))

secret = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=auth,
    info=b"Content-Encoding: auth\0").derive(sender_key)

The encryption key and encryption nonce

The next items you’ll need to create are the encryption key and encryption nonce.

An important component of these is the context, which is:

  • The string ‘P-256’
  • Followed by a NULL (“\x00”)
  • Followed by a network ordered, two octet integer of the length of the decoded receiver key
  • Followed by the decoded receiver key
  • Followed by a network ordered, two octet integer of the length of the public half of the sender key
  • Followed by the public half of the sender key.

As an example, if we have a decoded, completely invalid receiver public key of ‘RECEIVER’ and a stand-in sender public key of ‘sender’, then the context would look like:

⚓ = "P-256\x00\x00\x08RECEIVER\x00\x06sender"

The “\x00\x08” is the length of the bogus “RECEIVER” key, likewise the “\x00\x06” is the length of the stand-in “sender” key. For real, 32 octet keys, these values will most likely be “\x00\x20” (32), but it’s always a good idea to measure the actual key rather than use a static value.
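The construction above can be sketched as a small Python function (`make_context` is an illustrative name, not part of any library):

```python
import struct

def make_context(receiver_key, sender_pub_key):
    # Both keys are the raw, decoded octet strings; each is prefixed with
    # its length as a network-ordered (big-endian) two-octet integer.
    return (b"P-256\x00"
            + struct.pack("!H", len(receiver_key)) + receiver_key
            + struct.pack("!H", len(sender_pub_key)) + sender_pub_key)
```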

The context string is used as the base for two more HKDF derived values, one for the encryption key, and one for the encryption nonce. In emoji:

🔓 = HKDF(📎 , “Content-Encoding: aesgcm\x00” + ⚓).🏭(🙊)
🎲 = HKDF(📎 , “Content-Encoding: nonce\x00” + ⚓).🏭(🙊)

In Python, these functions could look like so:

# 🔓
encryption_key = HKDF(
    algorithm=hashes.SHA256(),
    length=16,
    salt=salt,
    info=b"Content-Encoding: aesgcm\x00" + context).derive(secret)

# 🎲
encryption_nonce = HKDF(
    algorithm=hashes.SHA256(),
    length=12,
    salt=salt,
    info=b"Content-Encoding: nonce\x00" + context).derive(secret)

Note that the encryption_key is 16 octets and the encryption_nonce is 12 octets. Also note the null (\x00) character between the “Content-Encoding” string and the context.

At this point, you can start working your way through encrypting the data 📄, using your secret 🙊, encryption_key 🔓, and encryption_nonce 🎲.

Encrypting the Data

The function that does the encryption (encryptor) uses the encryption_key 🔓 to initialize the Advanced Encryption Standard (AES) function, and derives the Galois/Counter Mode (GCM) Initialization Vector (IV) from the encryption_nonce 🎲, plus the data chunk counter. (If you didn’t follow that, don’t worry. There’s a code snippet below that shows how to do it in Python.) For simplicity, we’ll presume your data is less than 4096 octets (4K bytes) and can fit within one chunk. The IV takes the encryption_nonce and XORs the chunk counter against the final 8 octets.

def generate_iv(nonce, counter):
    # get the final 8 octets of the nonce
    (mask,) = struct.unpack("!Q", nonce[4:])
    # the first 4 octets of the nonce, plus the XOR'd counter
    iv = nonce[:4] + struct.pack("!Q", counter ^ mask)
    return iv

The encryptor prefixes a “\x00\x00” to the data chunk, processes it completely, and then concatenates its encryption tag to the end of the completed chunk. The encryption tag is a block of validation data generated by the AES-GCM encryptor. See your language’s documentation for AES encryption for further information.

# Cipher, algorithms, modes and default_backend come from the
# cryptography.hazmat.primitives.ciphers and cryptography.hazmat.backends
# modules.
def encrypt_chunk(chunk, counter, encryption_nonce, encryption_key):
    encryptor = Cipher(algorithms.AES(encryption_key),
                       modes.GCM(generate_iv(encryption_nonce, counter)),
                       backend=default_backend()).encryptor()
    # prefix the two octet padding block, then append the GCM tag
    ciphertext = encryptor.update(b"\x00\x00" + chunk) + encryptor.finalize()
    return ciphertext + encryptor.tag

def encrypt(payload, encryption_nonce, encryption_key):
    result = b""
    counter = 0
    for i in range(0, len(payload) + 2, 4096):
        result += encrypt_chunk(payload[i:i + 4096], counter,
                                encryption_nonce, encryption_key)
        counter += 1
    return result

Sending the Data

Encrypted payloads need several headers in order to be accepted.

The Crypto-Key header is a composite field, meaning that different things can store data here. There are some rules about how things should be stored, but we can simplify and just separate each item with a semicolon “;”. In our case, we’re going to store three things, a “keyid”, “p256ecdsa” and “dh”.

“keyid” is the string “p256dh”. Normally, “keyid” is used to link keys in the Crypto-Key header with the Encryption header. It’s not strictly required, but some push servers may expect it and reject subscription updates that do not include it. The value of “keyid” isn’t important, but it must match between the headers. Again, there are complex rules about these that we’re safely ignoring, so if you want or need to do something complex, you may have to dig into the Encrypted Content Encoding specification a bit.

“p256ecdsa” is the public key used to sign the VAPID header (See [Forming your Headers]). If you don’t want to include the optional VAPID header, you can skip this.

The “dh” value is the public half of the sender key we used to encrypt the data. It’s the same value contained in the context string, so we’ll use the same fake, stand-in value of “sender”, which has been encoded as a base64, URL safe value. For our example, the base64 encoded version of the string ‘sender’ is ‘c2VuZGVy’.

Crypto-Key: p256ecdsa=BA5v…;dh=c2VuZGVy;keyid=p256dh

The Encryption Header contains the salt value we used for encryption, which is a random 16 byte array converted into a base64, URL safe value.

Encryption: keyid=p256dh;salt=cm5kIDE2IGJ5dGUgc2FsdA

The TTL Header is the number of seconds the notification should stay in storage if the remote user agent isn’t actively connected. “0” (Zed/Zero) means that the notification is discarded immediately if the remote user agent is not connected; this is the default. This header must be specified, even if the value is “0”.

TTL: 0

Finally, the Content-Encoding Header specifies that this content is encoded to the aesgcm standard.

Content-Encoding: aesgcm

The encrypted data is set as the Body of the POST request to the endpoint contained in the subscription info. If you have requested that this be a restricted subscription and passed your VAPID public key as part of the request, you must include your VAPID information in the POST.

As an example, in python:

headers = {
    'crypto-key': 'p256ecdsa=BA5v…;dh=c2VuZGVy;keyid=p256dh',
    'content-encoding': 'aesgcm',
    'encryption': 'keyid=p256dh;salt=cm5kIDE2IGJ5dGUgc2FsdA',
    'ttl': 0,
}
# then POST the encrypted payload to the subscription endpoint, e.g. with
# the "requests" library, where `endpoint` and `encrypted_data` come from
# the subscription info and the encryption step above:
#, headers=headers, data=encrypted_data)

A successful POST will return a 201 response. However, if the User Agent cannot decrypt the message, your application will not get a “push” message. This is because the Push Server cannot decrypt the message, so it has no idea whether it is properly encoded. You can check whether this is the case by:

  • Going to about:config in Firefox
  • Setting the dom.push.loglevel pref to debug
  • Opening the Browser Console (located under the Tools > Web Developer > Browser Console menu).

When your message fails to decrypt, you’ll see a message similar to the following:
The Browser Console displaying "The service worker for scope '' encountered an error decrypting a push message", with details and a pointer to more info

You can use values displayed in the Web Push Data Encryption Page to audit the values you’re generating to see if they’re similar. You can also send messages to that test page and see if you get a proper notification pop-up, since all the key values are displayed for your use.

You can find out what errors and error responses we return, and their meanings by consulting our server documentation.

Subscription Updates

Nothing (other than entropy) lasts forever. There may come a point where, for various reasons, you will need to update your user’s subscription endpoint. Whatever the cause, your code should be prepared to handle it.

Your application’s service worker will get an onpushsubscriptionchange event. At this point, the previous endpoint for your user is now invalid and a new endpoint will need to be requested. Basically, you will need to re-invoke the method for requesting a subscription endpoint. The user should not be alerted of this, and a new endpoint will be returned to your app.

Again, how your app identifies the customer, joins the new endpoint to the customer ID, and securely transmits this change request to your server is left as an exercise for the reader. It’s worth noting that the Push server may return an error of 410 with an errno of 103 when the push subscription expires or is otherwise made invalid. (If a push subscription expired several months ago, the server may return a different errno value.)
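On the server side, the decision to retire a stored endpoint can be keyed off that status code. A tiny hedged sketch (the function name is hypothetical; consult the server documentation for the full error table):

```python
def subscription_is_gone(status_code, errno=None):
    """True when the push server says this subscription is no longer valid.

    The server may return 410 with errno 103 when the subscription has
    expired or been invalidated; older expirations may carry a different
    errno, so key the decision off the 410 status itself.
    """
    return status_code == 410

# When this returns True, drop the stored endpoint; the client's
# onpushsubscriptionchange flow will deliver a replacement.
```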


Push Data Encryption can be very challenging, but worthwhile. Stronger encryption means that it is more difficult for someone to impersonate you, or for your data to be read by unintended parties. Eventually, we hope that much of this pain will be buried in libraries that allow you to simply call a function, and as this specification is more widely adopted, it’s fair to expect multiple libraries to become available for every language.

See also:

  1. WebPush Libraries: A set of libraries to help encrypt and send push messages.
  2. VAPID lib for python or javascript can help you understand how to encode VAPID header data.

Christian HeilmannNew blog design – I let the browser do most of the work…

Unless you are reading the RSS feed or the AMP version of this blog, you’ll see that some things changed here. Last week I spent an hour redesigning this blog from scratch. No, I didn’t move to another platform (WordPress does the job for me so far), but I fixed a few issues that annoyed me.

So now this blog is fully responsive, has no dependencies on any CSS frameworks or scripts and should render much nicer on mobile devices where a lot of my readers are.

It all started with my finding Dan Klammer’s Bytesize Icons – the icons now visible in the navigation on top if your screen is wide enough to allow for them. I loved their simplicity and that I could embed them, thus having a visual menu that doesn’t need any extra HTTP overhead. So I copied and pasted, coloured in the lines and that was that.

The next thing that inspired me immensely was the trick of using a font-size of 1em + 1vw on the body of the document to ensure readable text regardless of the resolution. It was one of the goodies in Heydon Pickering’s On writing less damn code post. He attributed this trick to Vasilis, who of course is too nice and told the whole attribution story himself.

Next was creating the menu. For this, I used the power of flexbox and a single media query to ensure that my logo stays but the text links wrap into a few lines next to it. You can play with the code on JSBin.

The full CSS of the blog is now about 340 lines of code and has no dependency on any libraries or frameworks. There is no JavaScript except for ads.

The rest was tweaking some font sizes and colours and adding some enhancements like skip links to jump over the navigation. These are visible when you tab into the document, which seems a good enough solution seeing that we do not have a huge navigation as it is.

Other small fixes:

  • The code display on older posts is now fixed. In the past I used an older plugin that isn’t compatible with the current one. The fix was to write yet another plugin to undo what the old one did and give the code the proper HTML structure.
  • I switched the ad to a responsive one, so there should be no problems with this breaking the layout. Go on, test it out, click it a few hundred times to give it a thorough test.
  • I stopped using fixed image sizes quite a while ago and use 100% as the width. With this new layout I also gave images a max-width to avoid wasted space and massive blurring.
  • For videos I will now start using Embed Responsively so as not to break the layout either.

All in all this was the work of an hour, live in my browser and without any staging. This is a blog, it is here for words, not to do amazing feats of code.

Here are few views of the blog on different devices (courtesy of the Chrome Devtools):

iPad:
Blog on iPad

iPhone 5:
Blog on iPhone 5

iPhone 6:
Blog on iPhone 6

Nexus 5:
Blog on Nexus 5

Hope you like it.

All in all I love working on the web these days. Our CSS toys are incredibly powerful, browsers are much more reliable and the insights you get and tweaks you can do in developer tools are amazing. When I think back to when I did the first layout here in 2006, I probably wouldn’t go through those pains nowadays. Create some good stuff, just do as much as is needed.

QMOFirefox 49 Beta 7 Testday, August 26th

Hello Mozillians,

We are happy to announce that Friday, August 26th, we are organizing Firefox 49 Beta 7 Testday. We will be focusing our testing on WebGL Compatibility and Exploratory Testing. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!

This Week In RustThis Week in Rust 144

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

No crate was selected for this week for lack of votes. Ain't that a pity?

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

167 pull requests were merged in the last two weeks.

New Contributors

  • Alexandre Oliveira
  • Amit Levy
  • clementmiao
  • DarkEld3r
  • Dustin Bensing
  • Erik Uggeldahl
  • Jacob
  • JessRudder
  • Michael Layne
  • Nazım Can Altınova
  • Neil Williams
  • pliniker

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Alex VincentIntroducing es7-membrane: A new ECMAScript 2016 Membrane implementation

I have a new ECMAScript membrane implementation, which I will maintain and use in a professional capacity, and which I’m looking for lots of help with in the form of code reviews and API design advice.

For those of you who don’t remember what a membrane is, Tom van Cutsem wrote about membranes a few years ago, including the first implementations in JavaScript. I recently answered a StackOverflow question on why a membrane might be useful in the first place.

Right now, the membrane supports “perfect” mirroring across object graphs:  as far as I can tell, separate object graphs within the same membrane never see objects or functions from another object graph.

The word “perfect” is in quotes because there are probably bugs, facets I haven’t yet tested for (“What happens if I call Object.freeze() on a proxy from the membrane?”, for example).  There is no support yet for the real uses of proxies, such as hiding properties, exposing new ones, or special handling of primitive values.  That support is forthcoming, as I do expect I will need a membrane in my “Verbosio” project (an experimental XML editor concept, irrelevant to this group) and another for the company I work for.

The good news is the tests pass in Node 6.4, the current Google Chrome release, and Mozilla Firefox 51 (trunk, debug build).  I have not tested any other browser or ECMAScript environment.  I also will be checking in lots of use cases over the next few weeks which will guide future work on the module.

With all that said, I’d love to get some help.  That’s why I moved it to its own GitHub repository.

  • None of this code has been reviewed yet.  My colleagues at work are not qualified to do a code or API review on this.  (This isn’t a knock on them – proxies are a pretty obscure part of the ECMAScript universe…)  I’m looking for some volunteers to do those reviews.
  • I have two or three different options, design-wise, for making a Membrane’s proxies customizable while still obeying the rules of the Membrane.  I’m assuming there’s some supremely competent people from the es-discuss mailing list who could offer advice through the GitHub project’s wiki pages.
  • I’d like also to properly wrap the baseline code as ES6 modules using the import, export statements – but I’m not sure if they’re safe to use in current release browsers or Node.  (I’ve been skimming Dr. Axel Rauschmeyer’s “Exploring ES6” chapter on ES6 modules.)
    • Side note:  try { import /* … */ } catch (e) { /* … */ } seems to be illegal syntax, and I’d really like to know why.  (The error from trunk Firefox suggested import needed to be on the first line, and I had the import block on the second line, after the try statement.)
  • This is my first time publishing a serious open-source project to GitHub, and my absolute first attempt at publishing to NPM:
    • I’m not familiar with Node, nor with “proper” packaging of modules pre-ES6. So my build-and-test systems need a thorough review too.
    • I’m having trouble properly setting up continuous integration. Right now, the build reports as passing but is internally erroring out…
    • Pretty much any of the other GitHub/NPM-specific goodies (a static demo site, wiki pages for discussions, keywords for the npm package, a Tonic test case, etc.) don’t exist yet.
  • Of course, anyone who has interest in membranes is welcome to offer their feedback.

If you’re not able to comment here for some reason, I’ve set up a GitHub wiki page for that purpose.

Air MozillaMozilla Weekly Project Meeting, 22 Aug 2016

Mozilla Weekly Project Meeting The Monday Project Meeting

Hal WinePy Bay 2016 - a First Report

Py Bay 2016 - a First Report

PyBay held their first local Python conference this last weekend (Friday, August 19 through Sunday, August 21). What a great event! I just wanted to get down some first impressions - I hope to do more after the slides and videos are up.

First, the venue and arrangements were spot on. Check the twitter traffic for #PyBay2016 and @Py_Bay and you see numerous comments confirming that. And, I must say the food was worthy of San Francisco - very tasty. And healthy. With the weather cooperating to be nicely sunny around noon, the outdoor seating was appreciated by all who came from far away. The organizers even made it a truly Bay Area experience by arranging for both Yoga and Improv breakouts. The people were great - volunteers, organizers, speakers, and attendees. Props to all.

The technical program was well organized, and I’m really looking forward to the videos for the talks I couldn’t attend. Here are some quick highlights that I hope to backfill.

  • OpenTracing - a talk by Ben Sigelman - big one for distributed systems, it promises a straightforward way to identify critical path issues across a micro-service distributed architecture. Embraced by a number of big companies (Google, Uber), it builds on real world experience with distributed systems.

    Programs just need to add about 5 lines of setup, and one call per ‘traceable action’ (whatever that means for your environment). The output can be directed anywhere - one of the specialized UI’s or traditional log aggregation services.

    There are open source, commercial offerings, and libraries for many languages (Python, Go, etc.) & frameworks (SQLAlchemy, Flask, etc.). As an entry level, you can insert the trace calls and render to existing logging. The framework adds guids to simplify tracking across multiple hosts & processes.

  • Semantic logging - a lightning talk by Mahmoud Hashemi was a brief introduction to the lithoxyl package. The readme contains the selling point. (Especially since I took Raymond Hettinger’s intermediate class on Python Friday, and he convincingly advocated for the ratio of business logic lines to “admin lines” as a metric of good code.)

  • Mahmoud Hashemi also did a full talk on profiling Python performance in enterprise applications, and ways to improve that performance. (And, yes, we write enterprise applications.)

And there was lots more that I’ll try to cover later. And add in some links for the above as they become available.

Karl Dubost[worklog] Edition 032. The sento chimney

From the seventh floor, I see the chimney of a local sento. It's not always on. Whether or not smoke is coming out gives me good visual feedback about the opening hours. It's here. It doesn't have notifications. It doesn't have an ics or atom feed. It's just there. And it's fine as-is. The digital world sometimes seems to create complicated UI and UX over things which just work. They become disruptive but not helpful.

Tune of the week: Leo Ferré - Avec le temps

Webcompat Life

Progress this week:

Today: 2016-08-22T13:46:36.150030
300 open issues
needsinfo       4
needsdiagnosis  86
needscontact    18
contactready    28
sitewait        156

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

Reading List

  • Return of the 10kB Web design contest. Good stuff. I spend a good chunk of my time in the network panel of devtools… And it's horrific.

Follow Your Nose


  • Document how to write tests using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.


Will Kahn-Greenepyvideo last thoughts

What is pyvideo? is an index of Python-related conference and user-group videos on the Internet. Saw a session you liked and want to share it? It's likely you can find it, watch it, and share it with others.

This is my last update. is now in new and better hands and will continue going forward.

Read more… (2 mins to read)

Michael KellyCaching Async Operations via Promises

I was working on a bug in Normandy the other day and remembered a fun little trick for caching asynchronous operations in JavaScript.

The bug in question involved two asynchronous actions happening within a function. First, we made an AJAX request to the server to get an "Action" object. Next, we took an attribute of the action, the implementation_url, and injected a <script> tag into the page with the src attribute set to the URL. The JavaScript being injected would then call a global function and pass it a class function, which was the value we wanted to return.

The bug was that if we called the function multiple times with the same action, the function would make multiple requests to the same URL, even though we really only needed to download data for each Action once. The solution was to cache the responses, but instead of caching the responses directly, I found it was cleaner to cache the Promise returned when making the request instead:

export function fetchAction(recipe) {
  const cache = fetchAction._cache;

  if (!(recipe.action in cache)) {
    cache[recipe.action] = fetch(`/api/v1/action/${recipe.action}/`)
      .then(response => response.json());
  }

  return cache[recipe.action];
}
fetchAction._cache = {};

Another neat trick in the code above is storing the cache as a property on the function itself; it helps avoid polluting the namespace of the module, and also allows callers to clear the cache if they wish to force a re-fetch (although if you actually needed that, it'd be better to add a parameter to the function instead).

After I got this working, I puzzled for a bit on how to achieve something similar for the <script> tag injection. Unlike an AJAX request, the only thing I had to work with was an onload handler for the tag. Eventually I realized that nothing was stopping me from wrapping the <script> tag injection in a Promise and caching it in exactly the same way:

export function loadActionImplementation(action) {
  const cache = loadActionImplementation._cache;

  if (!( in cache)) {
    cache[] = new Promise((resolve, reject) => {
      const script = document.createElement('script');
      script.src = action.implementation_url;
      script.onload = () => {
        if (!( in registeredActions)) {
          reject(new Error(`Could not find action with name ${}.`));
        } else {
          resolve(registeredActions[]);
        }
      };
      document.head.appendChild(script);
    });
  }

  return cache[];
}
loadActionImplementation._cache = {};

From a nitpicking standpoint, I'm not entirely happy with this function:

  • The name isn't really consistent with the "fetch" terminology from the previous function, but I'm not convinced they should use the same verb either.
  • The Promise code could probably live in another function, leaving this one to only concern itself about the caching.
  • I'm pretty sure this does nothing to handle the case of the script failing to load, like a 404.

But these are minor, and the patch got merged, so I guess it's good enough.

Cameron KaiserTenFourFox 45 beta 2: not yet

So far the TenFourFox 45 beta is doing very well. There have been no major performance regressions overall (Amazon Music notwithstanding, which I'll get to presently) and actually overall opinion is that 45 seems much more responsive than 38. On that score, you'll like beta 2 even more when I get it out: I raided some more performance-related Firefox 46 and 47 patches and backported them too, further improving scrolling performance by reducing unnecessary memory allocation and also substantially reducing garbage collection overhead. There are also some minor tweaks to the JavaScript JIT, including a gimme I should have done long ago where we use single-precision FPU instructions instead of double-precision plus a rounding step. This doesn't make a big difference for most scripts, but a few, particularly those with floating point arrays, will benefit from the reduction in code size and slight improvement in speed.

Unfortunately, while this also makes Amazon Music's delay on starting and switching tracks better, it's still noticeably regressed compared to 38. In addition, I'm hitting an assertion with YouTube on certain, though not all, videos over a certain minimum length; in the release builds it just seizes up at that point and refuses to play the video further. Some fiddling in the debugger indicates it might be related to what Chris Trusch was reporting about frameskipping not working right in 45. If I'm really lucky all this is related and I can fix them all by fixing that root problem. None of these are showstoppers but that YouTube assertion is currently my highest priority to fix for the next beta; I have not solved it yet though I may be able to wallpaper it. I'm aiming for beta 2 this coming week.

On the localization front we have French, German and Japanese translations, the latter from the usual group that operates separately from Chris T's work in issue 42. I'd like to get a couple more in the can before our planned release on or about September 13. If you can help, please sign up.

Mozilla Open Design BlogNow we’re talking!

The responses we’ve received to posting the initial design concepts for an updated Mozilla identity have exceeded our expectations. The passion from Mozillians in particular is coming through loud and clear. It’s awesome to witness—and exactly what an open design process should spark.

Some of the comments also suggest that many people have joined this initiative in progress and may not have enough context for why we are engaging in this work.

Since late 2014, we’ve periodically been fielding a global survey to understand how well Internet users recognize and perceive Mozilla and Firefox. The data has shown that while we are known, we’re not understood.

  • Less than 30% of people polled know we do anything other than make Firefox.
  • Many confuse Mozilla with our Firefox browser.
  • Firefox does not register any distinct attributes from our closest competitor, Chrome.

We can address these challenges by strengthening our core with renewed focus on Firefox, prototyping the future with agile Internet of Things experiments and revolutionary platform innovation, and growing our influence by being clear about the Internet issues we stand for. All efforts now underway.

Supporting these efforts and helping clarify what Mozilla stands for requires visual assets that reinforce our purpose and vision. Our current visual toolkit for Mozilla is limited to a rather generic wordmark and small color palette. And so naturally, we’ve all filled that void with our own interpretations of how the brand should look.

Our brand will always need to be expressed across a variety of communities, projects, programs, events, and more. But without a strong foundation of a few clearly identifiable Mozilla assets that connect all of our different experiences, we are at a loss. The current proliferation of competing visual expressions contributes to the confusion that Internet users have about us.

For us to be able to succeed at the work we need to do, we need others to join us. To get others to join us, we need to be easier to find, identify and understand. That’s the net impact we’re seeking from this work.

Doubling down on open.

The design concepts being shared now are initial directions that will continue to be refined and that may spark new derivations. It’s highly unusual for work at this phase of development to be shown publicly and for a forum to exist to provide feedback before things are baked. Since we’re Mozilla, we’re leaning into our open-source ethos of transparency and participation.

It’s also important to remember that we’re showing entire design systems here, not just logos. We’re pressure-testing which of these design directions can be expanded to fit the brilliant variety of programs, projects, events, and more that encompass Mozilla.

If you haven’t had a chance to read earlier blog posts about the formation of the narrative territories, have a look. This design work is built from a foundation of strategic thinking about where Mozilla is headed over the next five years, the makeup of our target audience, and what issues we care about and want to amplify. All done with an extraordinary level of openness.

We’re learning from the comments, especially the constructive ones, and are grateful that people are taking the time to write them. We’ll continue to share work as it evolves. Thanks to everyone who has engaged so far, and here’s to keeping the conversation going.

Gervase MarkhamSomething You Know And… Something You Know

The email said:

To better protect your United MileagePlus® account, later this week, we’ll no longer allow the use of PINs and implement two-factor authentication.

This is’s idea of two-factor authentication: screenshot asking two security questions because my device is unknown

It doesn’t count as proper “Something You Have”, if you can bootstrap any new device into “Something You Have” with some more “Something You Know”.

Air MozillaFoundation Demos August 19 2016

Foundation Demos August 19 2016 Foundation Demos August 19 2016

Air MozillaWebdev Beer and Tell: August 2016

Webdev Beer and Tell: August 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Air MozillaImproving Pytest-HTML: My Outreachy Project

Improving Pytest-HTML: My Outreachy Project Ana Ribero of the Outreachy Summer 2016 program cohort describes her experience in the program and what she did to improve Mozilla's Pytest-HTML QA tools.

Mozilla Addons BlogA Simpler Add-on Review Process

In 2011, we introduced the concept of “preliminary review” on AMO. Developers who wanted to list add-ons that were still being tested or were experimental in nature could opt for a more lenient review with the understanding that they would have reduced visibility. However, having two review levels added unnecessary complexity for the developers submitting add-ons, and the reviewers evaluating them. As such, we have implemented a simpler approach.

Starting on August 22nd, there will be one review level for all add-ons listed on AMO. Developers who want to reduce the visibility of their add-ons will be able to set an “experimental” add-on flag in the AMO developer tools. This flag won’t have any effect on how an add-on is reviewed or updated.

All listed add-on submissions will either get approved or rejected based on the updated review policy. For unlisted add-ons, we’re also unifying the policies into a single set of criteria. They will still be automatically signed and post-reviewed at our discretion.

We believe this will make it easier to submit, manage, and review add-ons on AMO. Review waiting times have been consistently good this year, and we don’t expect this change to have a significant impact on this. It should also make it easier to work on AMO code, setting up a simpler codebase for future improvements.  We hope this makes the lives of our developers and reviewers easier, and we thank you for your continued support.

Gervase MarkhamAuditing the Trump Campaign

When we opened our web form to allow people to make suggestions for open source projects that might benefit from a Secure Open Source audit, some joker submitted an entry as follows:

  • Project Name: Donald J. Trump for President
  • Project Website:
  • Project Description: Make America great again
  • What is the maintenance status of the project? Look at the polls, we are winning!
  • Has the project ever been audited before? Its under audit all the time, every year I get audited. Isn’t that unfair? My business friends never get audited.

Ha, ha. But it turns out it might have been a good idea to take the submission more seriously…

If you know of an open source project (as opposed to a presidential campaign) which meets our criteria and might benefit from a security audit, let us know.

Paul RougetServo homebrew nightly builds

Servo binaries available via Homebrew


$ brew install servo/servo/servo-bin
$ servo -w # See `servo --help`

Update (every day):

$ brew update && brew upgrade servo-bin

Switch to older version (earliest version being 2016.08.19):

$ brew switch servo-bin YYYY.MM.DD

File issues specific to the Homebrew package here, and Servo issues here.

This package comes without browserhtml.

Daniel StenbergRemoving the PowerShell curl alias?

PowerShell is a spiced up command line shell made by Microsoft. According to some people, it is a really useful and good shell alternative.

Already a long time ago, we got bug reports from confused users who couldn’t use curl from their PowerShell prompts, and it didn’t take long until we figured out that Microsoft had added aliases for both curl and wget. The aliases make the shell invoke its own command, called “Invoke-WebRequest”, whenever curl or wget is entered; Invoke-WebRequest is PowerShell’s own version of a command line tool for fiddling with URLs.

Invoke-WebRequest is of course nowhere near similar to either curl or wget, and it doesn’t support any of their command line options. The aliases really don’t help users: no user who wants the actual curl or wget is helped by them, and users who don’t know about the real curl and wget won’t use them. They were and remain pointless. But they’ve been a thorn in my side ever since: I know that they are there, confusing users every now and then (not me personally, since I’m not really a Windows guy).

Fast forward to modern days: Microsoft released PowerShell as open source on GitHub yesterday. Without much further ado, I filed a pull request asking for the aliases to be removed. It is a minuscule, 4-line patch. It took way longer to git clone the repo than to make the actual patch and submit the pull request!

It took 34 minutes for them to close the pull request:

“Those aliases have existed for multiple releases, so removing them would be a breaking change.”

To be honest, I didn’t expect them to merge it easily. I figure they added those aliases for a reason back in the day and it seems unlikely that I as an outsider would just make them change that decision just like this out of the blue.

But the story didn’t end there. Obviously more Microsoft people gave the PR some attention and more comments were added. Like this:

“You bring up a great point. We added a number of aliases for Unix commands but if someone has installed those commands on Windows, those aliases screw them up.

We need to fix this.”

So, maybe it will trigger a change anyway? The story is ongoing…

Mike HoyeCulture Shock

I’ve been meaning to get around to posting this for… maybe fifteen years now? Twenty? At least I can get it off my desk now.

As usual, it’s safe to assume that I’m not talking about only one thing here.

I got this document about navigating culture shock from an old family friend, an RCMP negotiator now long retired. I understand it was originally prepared for Canada’s Department of External Affairs, now Global Affairs Canada. As the story made it to me, the first duty posting of all new RCMP recruits used to (and may still?) be to a detachment stationed outside their home province, where the predominant language spoken wasn’t their first, and this was one of the training documents intended to prepare recruits and their families for that transition.

It was old when I got it 20 years ago, a photocopy of a mimeograph of something typeset on a Selectric years before; even then, the RCMP and External Affairs had been collecting information about the performance of new hires in high-stress positions in new environments for a long time. There are some obviously dated bits – “writing letters back home” isn’t really a thing anymore in the stamped-envelope sense they mean and “incurring high telephone bills”, well. Kids these days, they don’t even know, etcetera. But to a casual search the broad strokes of it are still valuable, and still supported by recent data.

Traditionally, the stages of cross-cultural adjustment have been viewed as a U curve. What this means is, that the first months in a new culture are generally exciting – this is sometimes referred to as the “honeymoon” or “tourist” phase. Inevitably, however, the excitement wears off and coping with the new environment becomes depressing, burdensome, anxiety provoking (everything seems to become a problem: housing, neighbors, schooling, health care, shopping, transportation, communication, etc.) – this is the down part of the U curve and is precisely the period of so-called “culture shock”. Gradually (usually anywhere from 6 months to a year) an individual learns to cope by becoming involved with, and accepted by, the local people. Culture shock is over and we are back, feeling good about ourselves and the local culture.

Spoiler alert: It doesn’t always work out that way. But if you know what to expect, and what you’re looking for, you can recognize when things are going wrong and do something about it. That’s the key point, really: this slow rollercoaster you’re on isn’t some sign of weakness or personal failure. It’s an absolutely typical human experience, and like a lot of experiences, being able to point to it and give it a name also gives you some agency over it you may not have thought you had.

I have more to say about this – a lot more – but for now here you go: “Adjusting To A New Environment”, date of publication unknown, author unknown (likely Canada’s Department of External Affairs.) It was a great help to me once upon a time, and maybe it will be for you.

Air MozillaIntern Presentations 2016, 18 Aug 2016

Intern Presentations 2016 Group 5 of Mozilla's 2016 Interns presenting what they worked on this summer. Click the Chapters Tab for a topic list. Nathanael Alcock- MV Dimitar...

Support.Mozilla.OrgWhat’s Up with SUMO – 18th August

Hello, SUMO Nation!

It’s good to be back and know you’re reading these words :-) A lot more happening this week (have you heard about Activate Mozilla?), so go through the updates if you have not attended all our meetings – and do let us know if there’s anything else you want to see in the blog posts – in the comments!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 24th of August!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



Support Forum

  • The SUMO Firefox 48 Release Report is open for feedback: please add your links, tweets, bugs, threads and anything else that you would like to have highlighted in the report.
  • More details about the audit for the Support Forum:
    • The audit will only be happening this week and next
    • It will determine the forum contents that will be kept and/or refreshed
    • The Get Involved page will be rewritten and designed as it goes through a “Think-Feel-Do” exercise.
    • Please take a few minutes this week to read through the document and make a comment or edit.
    • One of the main questions is “what are the things that we cannot live without in the new forum?” – if you have an answer, write more in the thread!
  • Join Rachel in the SUMO Vidyo room on Friday between noon and 14:00 PST for answering forum threads and general hanging out!

Knowledge Base & L10n


  • Firefox for Android
    • Version 49 will not have many features, but will include bug and security fixes.
  • Firefox for iOS
    • Version 49 will not have many features, but will include bug and security fixes.

… and that’s it for now, fellow Mozillians! We hope you’re looking forward to a great weekend and we hope to see you soon – online or offline! Keep rocking the helpful web!

Chris H-CThe Future of Programming

Here’s a talk I watched some months ago, and could’ve sworn I’d written a blogpost about. Ah well, here it is:

Bret Victor – The Future of Programming from Bret Victor on Vimeo.

It’s worth the 30min of your attention if you have interest in programming or computer history (which you should have an interest in if you are a developer). But here it is in sketch:

The year is 1973 (well, it’s 2013, but the speaker pretends it is 1973), and the future of programming is bright. Instead of programming in procedures typed sequentially in text files, we are at the cusp of directly manipulating data with goals and constraints that are solved concurrently in spatial representations.

The speaker (Bret Victor) highlights recent developments in the programming of automated computing machines, and uses it to suggest the inevitability of a very different future than we currently live and work in.

It highlights how much was ignored in my world-class post-secondary CS education. It highlights how much is lost by hiding research behind paywalled journals. It highlights how many times I’ve had to rewrite the wheel when, more than a decade before I was born, people were prototyping hoverboards.

It makes me laugh. It makes me sad. It makes me mad.

…that’s enough of that. Time to get back to the wheel factory.


Air MozillaConnected Devices Weekly Program Update, 18 Aug 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Air MozillaReps weekly, 18 Aug 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Niko Matsakis'Tootsie Pop' Followup

A little while back, I wrote up a tentative proposal I called the Tootsie Pop model for unsafe code. It’s safe to say that this model was not universally popular. =) There was quite a long and fruitful discussion on discuss. I wanted to write a quick post summarizing my main take-away from that discussion and to talk a bit about the plans to push the unsafe discussion forward.

The importance of the unchecked-get use case

For me, the most important lesson was the importance of the unchecked get use case. Here the idea is that you have some (safe) code which is indexing into a vector:

fn foo() {
    let vec: Vec<i32> = vec![...];
    vec[i] // bounds-checked indexing
}
You have found (by profiling, of course) that this code is kind of slow, and you have determined that the bounds-check caused by indexing is a contributing factor. You can’t rewrite the code to use iterators, and you are quite confident that the index will always be in-bounds, so you decide to dip your toe into unsafe by calling get_unchecked:

fn foo() {
    let vec: Vec<i32> = vec![...];
    unsafe { vec.get_unchecked(i) }
}

Now, under the precise model that I proposed, this means that the entire containing module is considered to be within an unsafe abstraction boundary, and hence the compiler will be more conservative when optimizing, and as a result the function may actually run slower when you skip the bounds check than faster. (A very similar example is invoking str::from_utf8_unchecked, which skips over the utf-8 validation check.)

Many people were not happy about this side-effect, and I can totally understand why. After all, this code isn’t mucking about with funny pointers or screwy aliasing – the unsafe block is a kind of drop-in replacement for what was there before, so it seems odd for it to have this effect.

Where to go from here

Since posting the last blog post, we’ve started a longer-term process for settling and exploring a lot of these interesting questions about the proper use of unsafe. At this point, we’re still in the data gathering phase. The idea here is to collect and categorize interesting examples of unsafe code. I’d prefer at this point not to be making decisions per se about what is legal or not – although in some cases something may be quite unambiguous – but rather just try to get a good corpus with which we can evaluate different proposals.

While I haven’t given up on the Tootsie Pop model, I’m also not convinced it’s the best approach. But whatever we do, I still believe we should strive for something that is safe and predictable by default – something where the rules can be summarized on a postcard, at least if you don’t care about getting every last bit of optimization. But, as the unchecked-get example makes clear, it is important that we also enable people to obtain full optimization, possibly with some amount of opt-in. I’m just not yet sure what’s the right setup to balance the various factors.

As I wrote in my last post, I think that we have to expect that whatever guidelines we establish, they will have only a limited effect on the kind of code that people write. So if we want Rust code to be reliable in practice, we have to strive for rules that permit the things that people actually do: and the best model we have for that is the extant code. This is not to say we have to achieve total backwards compatibility with any piece of unsafe code we find in the wild, but if we find we are invalidating a common pattern, it can be a warning sign.

Daniel StenbergHTTP/2 connection coalescing

Section 9.1.1 in RFC7540 explains how HTTP/2 clients can reuse connections. This is my lengthy way of explaining how this works in reality.

Many connections in HTTP/1

With HTTP/1.1, browsers typically use 6 connections per origin (host name + port). They do this to overcome the problems in HTTP/1 and how it uses TCP – each connection spends a fair amount of time waiting. Each connection is also slow at start and therefore limited in how much data it can send and receive quickly; with each additional connection, you multiply that data amount. This makes the browser get more data faster than it could over just one connection.

6 connections

Add sharding

Web sites with many objects also regularly invent new host names to trigger browsers to use even more connections – a practice known as “sharding”: 6 connections for each name. So if you instead make your site use 4 host names you suddenly get 4 x 6 = 24 connections instead. Usually all those host names resolve to the same IP address, or the same set of IP addresses, in the end anyway. In reality, some sites use many more than just 4 host names.

24 connections

The sad reality is that a very large percentage of connections used for HTTP/1.1 are only ever used for a single HTTP request, and a very large share of the connections made for HTTP/1 are so short-lived they actually never leave the slow start period before they’re killed off again. Not really ideal.

One connection in HTTP/2

With the introduction of HTTP/2, the HTTP clients of the world are going toward using a single TCP connection for each origin. The idea being that one connection is better in packet loss scenarios, it makes priorities/dependencies work, and reusing that single connection for many more requests will be a net gain. And as you remember, HTTP/2 allows many logical streams in parallel over that single connection, so the single connection doesn’t limit what the browsers can ask for.


The sites that created all those additional host names to make the HTTP/1 browsers use many connections now work against the HTTP/2 browsers’ desire to decrease the number of connections to a single one. Sites don’t want to switch back to using a single host name because that would be a significant architectural change and there is still a fair number of HTTP/1-only browsers in use.

Enter “connection coalescing”, or “unsharding” as we sometimes like to call it. You won’t find either term used in RFC7540, as it merely describes this concept in terms of connection reuse.

Connection coalescing means that the browser tries to determine which of the remote hosts it can reach over the same TCP connection. The different browsers have slightly different heuristics here and some don’t do it at all, but let me try to explain how they work – as far as I know and at this point in time.

Coalescing by example

Let’s say that this cool imaginary site “example.com” has two name entries in DNS: A.example.com and B.example.com. When resolving those names over DNS, the client gets a list of IP addresses back for each name. A list that may very well contain a mix of IPv4 and IPv6 addresses. One list for each name.

You must also remember that HTTP/2 is also only ever used over HTTPS by browsers, so for each origin speaking HTTP/2 there’s also a corresponding server certificate with a list of names or a wildcard pattern for which that server is authorized to respond for.

In our example we start out by connecting the browser to host A. Let’s say resolving A returns the IPs 192.168.0.1 and 192.168.0.2 from DNS, so the browser goes on and connects to the first of those addresses, the one ending with “1”. The browser gets the server cert back in the TLS handshake and as a result of that, it also gets a list of host names the server can deal with: A.example.com and B.example.com (it could also be a wildcard like “*.example.com”).

If the browser then wants to connect to B, it’ll resolve that host name too to a list of IPs. Let’s say 192.168.0.2 and 192.168.0.3 here.

Host A: 192.168.0.1 and 192.168.0.2
Host B: 192.168.0.2 and 192.168.0.3

Now hold it. Here it comes.

The Firefox way

Host A has two addresses, host B has two addresses. The lists of addresses are not the same, but there is an overlap – both lists contain 192.168.0.2. And host A has already stated that it is authoritative for B as well. In this situation, Firefox will not make a second connect to host B. It will reuse the connection to host A and ask for host B’s content over that single shared connection. This is the most aggressive coalescing method in use.

one connection

The Chrome way

Chrome features a slightly less aggressive coalescing. In the example above, when the browser has connected to 192.168.0.1 for the first host name, Chrome will require that the IPs for host B contain that specific IP for it to reuse that connection. Since the returned IPs for host B are 192.168.0.2 and 192.168.0.3, the list clearly doesn’t contain 192.168.0.1, so Chrome will create a new connection to host B.

Chrome will reuse the connection to host A if resolving host B returns a list that contains the specific IP of the connection host A is already using.

The Edge and Safari ways

They don’t do coalescing at all, so each host name will get its own single connection. Better than the 6 connections from HTTP/1 but for very sharded sites that means a lot of connections even in the HTTP/2 case.

curl also doesn’t coalesce anything (yet).

Surprises and a way to mitigate them

Given some comments in the Firefox bugzilla, the aggressive coalescing sometimes causes some surprises. Especially when you have for example one IPv6-only host A and a second host B with both IPv4 and IPv6 addresses. Asking for data on host A can then still use IPv4 when it reuses a connection to B (assuming that host A covers host B in its cert).

In the rare case where a server gets a resource request for an authority (or scheme) it can’t serve, there’s a dedicated error code 421 in HTTP/2 that it can respond with, and the browser can then go back and retry that request on another connection.

Starts out with 6 anyway

Before the browser knows that the server speaks HTTP/2, it may fire up 6 connection attempts so that it is prepared to get the remote site at full speed. Once it figures out that it doesn’t need all those connections, it will kill off the unnecessary unused ones and over time trickle down to one. Of course, on subsequent connections to the same origin the client may have the version information cached so that it doesn’t have to start off presuming HTTP/1.

Air Mozilla360/VR Meet-Up

360/VR Meet-Up We explore the potential of this evolving medium and update you on the best new tools and workflows, covering: -Preproduction: VR pre-visualization, Budgeting, VR Storytelling...

Mitchell BakerPracticing “Open” at Mozilla

Mozilla works to bring openness and opportunity for all into the Internet and online life.  We seek to reflect these values in how we operate.  At our founding it was easy to understand what this meant in our workflow — developers worked with open code and project management through bugzilla.  This was complemented with an open workflow through the social media of the day — mailing lists and the chat or “messenger” element, known as Internet Relay Chat (“irc”).  The tools themselves were also open-source and the classic “virtuous circle” promoting openness was pretty clear.

Today the setting is different.  We were wildly successful with the idea of engineers working in open systems.  Today open source code and shared repositories are mainstream, and in many areas open practices are expected and the default. On the other hand, the newer communication and workflow tools vary in their openness, with some particularly open and some closed proprietary code.  Access and access control is a constant variable.  In addition, at Mozilla we’ve added a bunch of new types of activities beyond engineering, we’ve increased the number of employees dramatically and we’re a bit behind on figuring out what practicing open in this setting means.

I’ve decided to dedicate time to this and look at ways to make sure our goals of building open practices into Mozilla are updated and more fully developed.  This is one of the areas of focus I mentioned in an earlier post describing where I spend my time and energy.  

So far we have three early stage pilots underway sponsored by the Office of the Chair:

  • Opening up the leadership recruitment process to more people
  • Designing inclusive decision-making practices
  • Fostering meaningful conversations and exchange of knowledge across the organization, with a particular focus on bettering communications between Mozillians and leadership.

Follow-up posts will have more info about each of these projects.  In general the goal of these experiments is to identify working models that can be adapted by others across Mozilla. And beyond that, to assist other Mozillians figure out new ways to “practice open” at Mozilla.

The Rust Programming Language BlogAnnouncing Rust 1.11

The Rust team is happy to announce the latest version of Rust, 1.11. Rust is a systems programming language focused on safety, speed, and concurrency.

As always, you can install Rust 1.11 from the appropriate page on our website, and check out the detailed release notes for 1.11 on GitHub. 1109 patches were landed in this release.

What’s in 1.11 stable

Much of the work that went into 1.11 was with regards to compiler internals that are not yet stable. We’re excited about features like MIR becoming the default and the beginnings of incremental compilation, and the 1.11 release has laid the groundwork.

As for user-facing changes, last release, we talked about the new cdylib crate type.

The existing dylib dynamic library format will now be used solely for writing a dynamic library to be used within a Rust project, while cdylibs will be used when compiling Rust code as a dynamic library to be embedded in another language. With the 1.10 release, cdylibs are supported by the compiler, but not yet in Cargo. This format was defined in RFC 1510.

Well, in Rust 1.11, support for cdylibs has landed in Cargo! By adding this to your Cargo.toml:

[lib]
crate-type = ["cdylib"]

You’ll get one built.

In the standard library, the default hashing function was changed, from SipHash 2-4 to SipHash 1-3. We have been thinking about this for a long time, as far back as the original decision to go with 2-4:

we proposed SipHash-2-4 as a (strong) PRF/MAC, and so far no attack whatsoever has been found, although many competent people tried to break it. However, fewer rounds may be sufficient and I would be very surprised if SipHash-1-3 introduced weaknesses for hash tables.

See the detailed release notes for more.

Library stabilizations

See the detailed release notes for more.

Cargo features

See the detailed release notes for more.

Contributors to 1.11

We had 126 individuals contribute to 1.11. Thank you so much!

  • Aaklo Xu
  • Aaronepower
  • Aleksey Kladov
  • Alexander Polyakov
  • Alexander Stocko
  • Alex Burka
  • Alex Crichton
  • Alex Ozdemir
  • Alfie John
  • Amanieu d’Antras
  • Andrea Canciani
  • Andrew Brinker
  • Andrew Paseltiner
  • Andrey Tonkih
  • Andy Russell
  • Ariel Ben-Yehuda
  • bors
  • Brian Anderson
  • Carlo Teubner
  • Carol (Nichols || Goulding)
  • CensoredUsername
  • cgswords
  • cheercroaker
  • Chris Krycho
  • Chris Tomlinson
  • Corey Farwell
  • Cristian Oliveira
  • Daan Sprenkels
  • Daniel Firth
  • diwic
  • Eduard Burtescu
  • Eduard-Mihai Burtescu
  • Emilio Cobos Álvarez
  • Erick Tryzelaar
  • Esteban Küber
  • Fabian Vogt
  • Felix S. Klock II
  • flo-l
  • Florian Berger
  • Frank McSherry
  • Georg Brandl
  • ggomez
  • Gleb Kozyrev
  • Guillaume Gomez
  • Hendrik Sollich
  • Horace Abenga
  • Huon Wilson
  • Ivan Shapovalov
  • Jack O’Connor
  • Jacob Clark
  • Jake Goulding
  • Jakob Demler
  • James Alan Preiss
  • James Lucas
  • James Miller
  • Jamey Sharp
  • Jeffrey Seyfried
  • Joachim Viide
  • John Ericson
  • Jonas Schievink
  • Jonathan L
  • Jonathan Price
  • Jonathan Turner
  • Joseph Dunne
  • Josh Stone
  • Jupp Müller
  • Kamal Marhubi
  • kennytm
  • Léo Testard
  • Liigo Zhuang
  • Loïc Damien
  • Luqman Aden
  • Manish Goregaokar
  • Mark Côté
  • marudor
  • Masood Malekghassemi
  • Mathieu De Coster
  • Matt Kraai
  • Mátyás Mustoha
  • M Farkas-Dyck
  • Michael Necio
  • Michael Rosenberg
  • Michael Woerister
  • Mike Hommey
  • Mitsunori Komatsu
  • Morten H. Solvang
  • Ms2ger
  • Nathan Moos
  • Nick Cameron
  • Nick Hamann
  • Nikhil Shagrithaya
  • Niko Matsakis
  • Oliver Middleton
  • Oliver Schneider
  • Paul Jarrett
  • Pavel Pravosud
  • Peter Atashian
  • Peter Landoll
  • petevine
  • Reeze Xia
  • Scott A Carr
  • Sean McArthur
  • Sebastian Thiel
  • Seo Sanghyeon
  • Simonas Kazlauskas
  • Srinivas Reddy Thatiparthy
  • Stefan Schindler
  • Steve Klabnik
  • Steven Allen
  • Steven Burns
  • Tamir Bahar
  • Tatsuya Kawano
  • Ted Mielczarek
  • Tim Neumann
  • Tobias Bucher
  • Tshepang Lekhonkhobe
  • Ty Coghlan
  • Ulrik Sverdrup
  • Vadim Petrochenkov
  • Vincent Esche
  • Wangshan Lu
  • Will Crichton
  • Without Boats
  • Wojciech Nawrocki
  • Zack M. Davis
  • 吴冉波

William LachanceHerding Automation Infrastructure

For every commit to Firefox, we run a battery of builds and automated tests on the resulting source tree to make sure that the result still works and meets our correctness and performance quality criteria. This is expensive: every new push to our repository implies hundreds of hours of machine time. However, this type of quality control is essential to ensure that the product that we’re shipping to users is something that we can be proud of.

But what about evaluating the quality of the product which does the building and testing? Who does that? And by what criteria would we say that our automation system is good or bad? Up to now, our procedures for this have been rather embarrassingly ad hoc. With some exceptions (such as OrangeFactor), our QA process amounts to motivated engineers doing a one-off analysis of a particular piece of the system, filing a few bugs, then forgetting about it. Occasionally someone will propose turning build and test automation for a specific platform on or off.

I’d like to suggest that the time has come to take a more systemic approach to this class of problem. We spend a lot of money on people and machines to maintain this infrastructure, and I think we need a more disciplined approach to make sure that we are getting good value for that investment.

As a starting point, I feel like we need to pay closer attention to the following characteristics of our automation:

  • End-to-end times from push submission to full completion of all build and test jobs: if this gets too long, it makes the lives of all sorts of people painful — tree closures become longer when they happen (because it takes longer to either notice bustage or find out that it’s fixed), developers have to wait longer for try pushes (making them more likely to just push directly to an integration branch, causing the former problem…)
  • Number of machine hours consumed by the different types of test jobs: our resources are large (relatively speaking), but not unlimited. We need proper accounting of where we’re spending money and time. In some cases, resources used to perform a task that we don’t care that much about could be redeployed towards an underresourced task that we do care about. A good example of this was linux32 talos (performance tests) last year: when the question was raised of why we were doing performance testing on this specific platform (in addition to Linux64), no one could come up with a great justification. So we turned the tests off and reconfigured the machines to do Windows performance tests (where we were suffering from a severe lack of capacity).

Over the past week, I’ve been prototyping a project I’ve been calling “Infraherder” which uses the data inside Treeherder’s job database to try to answer these questions (and maybe some others that I haven’t thought of yet). You can see a hacky version of it on my github fork.

Why implement this in Treeherder, you might ask? Two reasons. First, Treeherder already stores the job data in a historical archive that’s easy to query (using SQL). Using this directly makes sense over creating a new data store. Second, Treeherder provides a useful set of front-end components with which to build a UI to visualize this information. I actually did my initial prototyping inside an ipython notebook, but it quickly became obvious that for my results to be useful to others at Mozilla we needed some kind of real dashboard that people could dig into.

On the Treeherder team at Mozilla, we’ve found the New Relic software to be invaluable for diagnosing and fixing quality and performance problems for Treeherder itself, so I took some inspiration from it (unfortunately the problem space of our automation is not quite the same as that of a web application, so we can’t just use New Relic directly).

There are currently two views in the prototype, a “last finished” view and a “total” view. I’ll describe each of them in turn.

Last finished

This view shows the counts of which scheduled automation jobs were the “last” to finish. The hypothesis is that jobs that are frequently last indicate blockers to developer productivity, as they are the “long pole” in being able to determine if a push is good or bad.

Right away from this view, you can see the mochitest devtools 9 test is often the last to finish on try, with Windows 7 mochitest debug a close second. Assuming that the reasons for this are not resource starvation (they don’t appear to be), we could probably get results into the hands of developers and sheriffs faster if we split these jobs into two separate ones. I filed bugs 1294489 and 1294706 to address these issues.

Total Time

This view just shows which jobs are taking up the most machine hours.

Probably unsurprisingly, it seems like it’s Android test jobs that are taking up most of the time here: these tests are running on multiple layers of emulation (AWS instances to emulate Linux hardware, then the already slow QEMU-based Android simulator) so are not expected to have fast runtime. I wonder if it might not be worth considering running these tests on faster instances and/or bare metal machines.

Linux32 debug tests seem to be another large consumer of resources. Market conditions make turning these tests off altogether a non-starter (see bug 1255890), but how much value do we really derive from running the debug version of linux32 through automation (given that we’re already doing the same for 64-bit Linux)?

Request for comments

I’ve created an RFC for this project on Google Docs, as a sort of test case for a new process we’re thinking of using in Engineering Productivity for these sorts of projects. If you have any questions or comments, I’d love to hear them! My perspective on this vast problem space is limited, so I’m sure there are things that I’m missing.

Mike HoyeThe Future Of The Planet

I’m not sure who said it first, but I’ve heard a number of people say that RSS solved too many problems to be allowed to live.

I’ve recently become the module owner of Planet Mozilla, a venerable communication hub and feed aggregator here at Mozilla. Real talk here: I’m not likely to get another chance in my life to put “seize control of planet” on my list of quarterly deliverables, much less cross it off in less than a month. Take that, high school guidance counselor and your “must try harder”!

I warned my boss that I’d be milking that joke until sometime early 2017.

On a somewhat more serious note: We have to decide what we’re going to do with this thing.

Planet Mozilla is a bastion of what’s by now the Old Web – Anil Dash talks about it in more detail here, the suite of loosely connected tools and services that made the 1.0 Web what it was. The hallmarks of that era – distributed systems sharing information streams, decentralized and mutually supportive without codependency – date to a time when the economics of software, hardware, connectivity and storage were very different. I’ve written a lot more about that here, if you’re interested, but that doesn’t speak to where we are now.

Please note that when I talk about “Mozilla’s needs” below, I don’t mean the company that makes Firefox or the non-profit Foundation. I mean the mission and the people in our global community of communities who stand up for it.

I think the following things are true, good things:

  • People still use Planet heavily, sometimes even to the point of “rely on”. Some teams and community efforts definitely rely heavily on subplanets.
  • There isn’t a better place to get a sense of the scope of Mozilla as a global, cultural organization. The range and diversity of articles on Planet is big and weird and amazing.
  • The organizational and site structure of Planet speaks well of Mozilla and Mozilla’s values in being open, accessible and participatory.
  • Planet is an amplifier giving participants and communities an enormous reach and audience they wouldn’t otherwise have to share stories that range from technical and mission-focused to human and deeply personal.

These things are also true, but not all that good:

  • It’s difficult to say what or who Planet is for right now. I don’t have and may not be able to get reliable usage metrics.
  • The egalitarian nature of feeds is a mixed blessing: On one hand, Planet as a forum gives our smallest and most remote communities the same platform as our executive leadership. On the other hand, headlines ranging from “Servo now self-aware” and “Mozilla to purchase Alaska” to “I like turnips” are all equal citizens of Planet, sorted only by time of arrival.
  • Looking at Planet via the Web is not a great experience; if you’re not using a reader even to skim, you’re getting a dated user experience and missing a lot. The mobile Web experience is nonexistent.
  • The Planet software is, by any reasonable standards, a contraption. A long-running and proven contraption, for sure, but definitely a contraption.

Maintaining Planet isn’t particularly expensive. But it’s also not free, particularly in terms of opportunity costs and user-time spent. I think it’s worth asking what we want Planet to accomplish, whether Planet is the right tool for that, and what we should do next.

I’ve got a few ideas about what “next” might look like; I think there are four broad categories.

  1. Do nothing. Maybe reskin the site, move the backing repo from Subversion to Github (currently planned) but otherwise leave Planet as is.
  2. Improve Planet as a Planet, i.e: as a feed aggregator and communication hub.
  3. Replace Planet with something better suited to Mozilla’s needs.
  4. Replace Planet with nothing.

I’m partial to the “Improve Planet as a Planet” option, but I’m spending a lot of time thinking about the others. Not (or at least not only) because I’m lazy, but because I still think Planet matters. Whatever we choose to do here should be a use of time and effort that leaves Mozilla and the Web better off than they are today, and better off than if we’d spent that time and effort somewhere else.

I don’t think Planet is everything Planet could be. I have some ideas, but also don’t think anyone has a sense of what Planet is to its community, or what Mozilla needs Planet to be or become.

I think we need to figure that out together.

Hi, Internet. What is Planet to you? Do you use it regularly? Do you rely on it? What do you need from Planet, and what would you like Planet to become, if anything?

These comments are open and there’s a thread open at the Mozilla Community discourse instance where you can talk about this, and you can always email me directly if you like.


* – Mozilla is not to my knowledge going to purchase Alaska. I mean, maybe we are and I’ve tipped our hand? I don’t get invited to those meetings but it seems unlikely. Is Alaska even for sale? Turnips are OK, I guess.

Air Mozilla: The Joy of Coding - Episode 68

mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Open Design Blog: Now for the fun part.

On our open design journey together, we’ve arrived at an inflection point. Today our effort—equal parts open crit, performance art piece, and sociology experiment—takes its logical next step, moving from words to visuals. A roomful of reviewers lean forward in their chairs, ready to weigh in on what we’ve done so far. Or so we hope.

We’re ready. The work with our agency partner, johnson banks, has great breadth and substantial depth for first-round concepts (possibly owing to our rocket-fast timeline). Our initial response to the work has, we hope, helped make it stronger and more nuanced. We’ve jumped off this cliff together, holding hands and bracing for the splash.

Each of the seven concepts we’re sharing today leads with and emphasizes a particular facet of the Mozilla story. From paying homage to our paleotechnic origins to rendering us as part of an ever-expanding digital ecosystem, from highlighting our global community ethos to giving us a lift from the quotidian elevator open button, the concepts express ideas about Mozilla in clever and unexpected ways.

There are no duds in the mix. The hard part will be deciding among them, and this is a good problem to have.

We have our opinions about these paths forward, our early favorites among the field. But for now we’re going to sit quietly and listen to what the voices from the concentric rings of our community—Mozillians, Mozilla fans, designers, technologists, and beyond—have to say in response about them.

Tag, you’re it.

Here’s what we’d like you to do, if you’re up for it. Have a look at the seven options and tell us what you think. To make comments about an individual direction and to see its full system, click on its image below.

Which of these initial visual expressions best captures what Mozilla means to you? Which will best help us tell our story to a youthful, values-driven audience? Which brings to life the Mozilla personality: Gutsy, Independent, Buoyant, For Good?

If you want to drill down a level, also consider which design idea:

  • Would resonate best around the world?
  • Has the potential to show off modern digital technology?
  • Is most scalable to a variety of Mozilla products, programs, and messages?
  • Would stand the test of time (well…let’s say 5-10 years)?
  • Would make people take notice and rethink Mozilla?

This is how we’ve been evaluating each concept internally over the past week or so. It’s the framework we’ll use as we share the work for qualitative and quantitative feedback from our key audiences.

How you deliver your feedback is up to you: writing comments on the blog, uploading a sketch or a mark-up, shooting a carpool karaoke video….bring it on. We’ll be taking feedback on this phase of work for roughly the next two weeks.

If you’re new to this blog, a few reminders about what we’re not doing. We are not crowdsourcing the final design, nor will there be voting. We are not asking designers to work on spec. We welcome all feedback but make no promise to act on it all (even if such a thing were possible).

From here, we’ll reduce these seven concepts to three, which we’ll refine further based partially on feedback from people like you, partially on what our design instincts tell us, and very much on what we need our brand identity to communicate to the world. These three concepts will go through a round of consumer testing and live critique in mid-September, and we’ll share the results here. We’re on track to have a final direction by the end of September.

We trust that openness will prevail over secrecy and that we’ll all learn something in the end. Thanks for tagging along.


Mozilla Open Design Blog: Design Route G: Flik Flak

This final design route developed in parallel to route A, as we searched for animalistic solutions, but built characters out of consistent isometric shapes. The more we experimented, the more we realised we could construct a character that also spelt out the word Mozilla.

This design direction also stems from the narrative pathway “Mavericks, United.”

Mavericks, United.

The Internet belongs to mavericks and independent spirits. It’s the sum total of millions of people working towards something greater than themselves. We believe the independent spirit that founded the Internet is vital to its future. But being independent doesn’t mean being alone. We bring together free thinkers, makers and doers from around the world. We create the tools, platforms, conversations, and momentum to make great things happen. We’re not waiting for the future of the Internet to be decided by others. It’s ours to invent.


Mozilla Open Design Blog: Design Route F: The Impossible M

We wanted to show the collaborative aspect of the maker spirit in a simple typographic mark. Inspired by both computer graphics and optical illusions, an ‘impossible’ design developed that also revealed a cohesive design approach across all applications.

This design direction flows from the narrative theme “Mavericks United.”

Mavericks United

The Internet belongs to mavericks and independent spirits. It’s the sum total of millions of people working towards something greater than themselves. We believe the independent spirit that founded the Internet is vital to its future. But being independent doesn’t mean being alone. We bring together free thinkers, makers and doers from around the world. We create the tools, platforms, conversations, and momentum to make great things happen. We’re not waiting for the future of the Internet to be decided by others. It’s ours to invent.


Mozilla Open Design Blog: Design Route E: Wireframe World

Is there a way to hint at the enormity of the internet, yet place Mozilla within that digital ecosystem? This route developed out of experiments with 3D grids and the realisation that a simple ‘M’ could form the heart of an entire system.

This design direction also flows from the narrative theme “With you from the start.”

With you from the start.

Mozilla was, is, and always will be on the side of those who want a better, freer, more open Internet. In the early days, we were among those helping to embed principles of openness and accessibility into the web’s DNA. Now those principles matter more than ever. We need an Internet that works wonders for the many, not just the few. We need to stand by the founding ideals of the Internet, and carry them forward into new products, platforms, conversations, and great ideas. We’ve been with you from the start. And we’re just getting started.

Click the image below to see an animation of how a user might interact with Wireframe World to create unending patterns:


Mozilla Open Design Blog: Design Route D: Protocol

If we want to show that Mozilla is at the core of the internet, and has been for a long time, how do we show that it’s a fundamental building block of what we know, see and use every day? Perhaps the answer is staring us in the face, at the top of every browser…

This design direction stems from the narrative theme called With You from the Start.

With you from the start.

Mozilla was, is, and always will be on the side of those who want a better, freer, more open Internet. In the early days, we were among those helping to embed principles of openness and accessibility into the web’s DNA. Now those principles matter more than ever. We need an Internet that works wonders for the many, not just the few. We need to stand by the founding ideals of the Internet, and carry them forward into new products, platforms, conversations, and great ideas. We’ve been with you from the start. And we’re just getting started.

Click the first image below to see how this logo might animate:


Mozilla Open Design Blog: Design Route C: The Open Button

Mozilla stands for an Internet that’s open to all on an equal basis – but most people don’t realise that certain forces may divide it and close it off. How could we communicate ‘open’, quickly and simply? Could we find a current symbol or pictogram of ‘open’ and adapt it to our needs? There is one, and it’s around us almost every day…

This design direction stems from the narrative theme called Choose Open.

Choose Open

The future of the Internet can be open, or closed. We choose open. We choose an internet that is equal and accessible by default. Open to ideas, open to collaboration, open to everyone. But it isn’t a choice we can make alone. An open web is something we all have to choose together. And it involves many other choices. The tools we use. The products we support. The way we behave online. Those choices can be complex, but the guiding principle should always be simple. Choose open.

Click the image below to see how this logo might animate:


Mozilla Open Design Blog: Design Route B: The Connector

Typographic experiments with the ‘Mozilla’ name led to this route – where the letters are intertwined around each other to create two interrelated marks, inspired by circuitry and tribal patterns.

This design direction stems from the narrative called Mozilla. For the Internet of People.

Mozilla. For the Internet of People

Mozilla believes that the Internet should work for people – and the best way to achieve that is to give people the power to shape the Internet. At its best, the Internet is humanity’s greatest invention. It has the ability to connect human minds and free human potential on a scale never seen before. But we need to keep it open, always. We need to distribute power widely, not divide it narrowly. We need to build bridges, not walls. The future of the Internet is amazing, as long as it remains the Internet of People.

Click the first image below to see how this logo might animate:


Mozilla Open Design Blog: Design Route A: The Eye

Even though Mozilla’s old Shepard Fairey-designed dinosaur head logo is only used internally, not externally, there’s still a lot of love in the community for all things ‘Dino’. And there’s no escaping that the name of the company ends with “zilla.” What if we could find a way to use just part of a reptile in a dynamic new design?

This design stems from the narrative pathway known as The Good Fight.

The Good Fight

Sometimes you have to fight for what you believe in.
Mozilla believes in an open, equal, accessible Internet – for everyone.
One that makes us active creators, not passive receivers.
One that works for the benefit of the many, not the few.
We’re ready to take a stand, link arms with others who share our view of the future, and provide tools and opportunities for those who need them.
You can wish for a better web, and a better world.
Or you can get involved and make it happen.

Click on the first image below to see how the logo might animate:







QMO: Firefox 49 Beta 3 Testday Results

Hello Mozillians!

As you may already know, last Friday – August 12th – we held a new Testday event, for Firefox 49 Beta 3.

Thank you all for helping us make Mozilla a better place – Logicoma, Julie Myers, Moin Shaikh, Ilse Macías, Iryna Thompson.

From Bangladesh: Rezaul Huque Nayeem, Raihan Ali, Md. Rahimul Islam, Rabiul Hossain Bablu, Hossain Al Ikram, Azmina Akter Papeya, Saddam Hossain, Sufi Ahmed Hamim, Fahim, Maruf Rahman, Hossain Ahmed Sadi, Tariqul Islam Chowdhury, Sajal Ahmed, Md.Majedul islam, Amir Hossain Rhidoy, Toki Yasir, Jobayer Ahmed Mickey, Sayed Ibn Masud, kazi Ashraf hossain, Sahab Ibn Mamun, Kazi Nuzhat Tasnem, Sourov Arko, Sauradeep Dutta, Samad Talukder, Kazi Sakib Ahmad, Sajedul Islam, Forhad hossain, Syed Nayeem Roman, Md. Faysal Alam Riyad, Tanvir Rahman, Oly Roy, Akash, Fatin Shahazad.

From India: Paarttipaabhalaji, Surentharan, Bhuvana Meenakshi.K, Nagaraj V, Md Shahbaz Alam, prasanthp96, Selva Makilan, Jayesh Ram, Dhinesh Kumar M, B.AISHWARYA, Ashly Rose, Kamlesh Vilpura, Pavithra.

A big thank you goes out to all our active moderators too!


Keep an eye on QMO for upcoming events! 😉

Anthony Hughes: Visualizing Crash Data in Bugzilla

Since joining the Platform Graphics team as a QA engineer several months ago I’ve dabbled in visualizing Graphics crash data using the Socorro supersearch API and the MetricsGraphics.js visualization library.

After I gained a better understanding of the API and MG.js, I set up a GitHub repo as a sandbox to play around with visualizing different types of crash data. Some of these experiments include a tool to visualize top crash signatures for vendor/device/driver combinations, a tool to compare crash rates between Firefox and Fennec, a tool to track crashes from the graphics startup test, and a tool to track crashes with a WebGL context.

Top crash dashboard for our most common device/driver combination (Intel HD 4000)
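Dashboards like this are driven by plain HTTP queries against Socorro. As a rough, hypothetical sketch of how such a query URL might be built (the endpoint follows the public crash-stats supersearch API; the exact parameters the tools described here use are an assumption):

```javascript
// Hypothetical sketch: build a Socorro supersearch query URL for a crash
// signature. Parameter names follow the public crash-stats API; the real
// dashboards may query differently.
function supersearchUrl(signature, sinceDate) {
  const params = new URLSearchParams({
    signature: '=' + signature,   // '=' prefix asks for an exact match
    date: '>=' + sinceDate,       // only crashes on or after this date
    _facets: 'date'               // aggregate report counts per day
  });
  return 'https://crash-stats.mozilla.org/api/SuperSearch/?' + params;
}

console.log(supersearchUrl('OOM | small', '2016-08-01'));
```

The JSON response can then be fed straight into a charting library such as MetricsGraphics.js.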

Fast forward to June,  I had the opportunity to present some of this work at the Mozilla All-hands in London. As a result of this presentation I had a fruitful conversation with Benoit Girard, fellow engineer on the Graphics team. We talked about integrating my visualization tool with Bugzilla by way of a Bugzilla Tweaks add-on; this would both improve the functionality of Bugzilla and improve awareness of my tool. To my surprise this was actually pretty easy and I had a working prototype within 24 hours.

Since then I’ve iterated a few times, fixing some bugs based on reviews from the AMO Editors team. With version 0.3 I am satisfied enough to publicize it as an experimental add-on.

Bugzilla Socorro Lens (working title) appends a small snippet into the Crash Signatures field of bug reports, visualizing 365 days worth of aggregate crash data for the signatures in the bug.  With BSL installed it becomes more immediately evident when a crash started being reported, if/when it was fixed, how the crash is trending, or if the crash is spiking; all without having to manually search Socorro.

Socorro snippet integration on Bugzilla

Of course if you want to see the data in Socorro you can. Simply click a data-point on the visualization and a new tab will be opened to Socorro showing the crash reports for that date. This is particularly useful when you want to see what may be driving a spike.

At the moment BSL is an experimental add-on. I share it with you today to see if it’s useful and collect feedback. If you encounter a bug or have a feature request I invite you to submit an issue on my github repo. Since this project is a learning experience for me, as much as it is a productivity exercise, I am not accepting pull requests at this time. I welcome your feedback and look forward to improving my coding skills by resolving your issues.

You can get the add-on from

[Update] Nicholas Nethercote informed me of an issue where the chart won’t display if you have the “Experimental Interface” enabled in Bugzilla. I have filed an issue in my github repo and will take a look at this soon. In the meantime, you’ll have to use the default Bugzilla interface to make use of this add-on. Sorry for the inconvenience.

Air Mozilla: 2016 Intern Presentations

Group 4 of the interns will be presenting on what they worked on this summer. Andrew Comminos - TOR, Benton Case - PDX, Josephine Kao - SF, Steven...

This Week In Rust: This Week in Rust 143

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

This week's Crate of the Week is Raph Levien's font-rs, yet another pure Rust font renderer, which is incomplete, but very fast. Thanks StefanoD for the suggestion.

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

135 pull requests were merged in the last two weeks.

New Contributors

  • Andrii Dmytrenko
  • Cameron Hart
  • Cengiz Can
  • Chiu-Hsiang Hsu
  • Christophe Vu-Brugier
  • Clement Miao
  • crypto-universe
  • Felix Rath
  • hank-der-hafenarbeiter
  • José manuel Barroso Galindo
  • Krzysztof Garczynski
  • Luke Hinds
  • Marco A L Barbosa
  • Mark-Simulacrum
  • Matthew Piziak
  • Michael Gattozzi
  • Patrick McCann
  • Pietro Albini
  • ShyamSundarB
  • srdja
  • Stephen Lazaro

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

The best way to learn Rust is to just try! and see what works (or is this to just see what works? now?)!

llogiq on /r/rust.

Thanks to UtherII for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Jen Kagan: to-do lists for big problems => small pieces

one of the most useful skills i’m learning this summer is the ability to take seemingly seamless big problems and chisel them into smaller chunks.

the most recent example of this was adding support for min-vid from google’s main page. as i’m writing about in more depth shortly, min-vid uses the urlcontext and selectorcontext to parse a link to an mp4 to send to the min-vid player. but google LIES about its hrefs! for shame! this means that if you “inspect element” on a google search result, you’ll see a bunch of crap that is not a direct link to the resource you want. so i had to spend some time looking through all the gunk to find the link i wanted.

Screen Shot 2016-08-15 at 9.31.20 PM

when i looked at the actual href in its entirety, i noticed something interesting:


do you see it? the youtube link is in there, surrounded by a bunch of %2F‘s and %3D‘s. initially, i thought this was some kind of weird google cipher and that i needed to write a bunch of vars to convert these strange strings to the punctuation marks my link-parsing function expected. i wrote a regular expression to get rid of everything before https, then started the converting. it looked something like this:

 label: contextMenuLabel,
 context: cm.URLContext(['*']),
 contentScript: contextMenuContentScript,
 onMessage: function(url) {
   const regex = /url=(https[^;]*)/;
   const match = regex.exec(url)[1];
   // each replace has to chain off the previous result,
   // otherwise the earlier conversions are thrown away
   const getColons = match.replace(/%3A/g, ':');
   const getSlashes = getColons.replace(/%2F/g, '/');
   const getQuestion = getSlashes.replace(/%3F/g, '?');
   const getEqual = getQuestion.replace(/%3D/g, '=');

   launchVideo({url: getEqual,
               domain: 'tbdstring',
               getUrlFn: getTbdUrl});
 }

at this point, i made myself a little to-do list. to-do lists make my life easier because i have a short attention span and am highly prone to rabbit holes, but am also really impatient and like to feel like i’m actually accomplishing things. the ability to cross tiny things off my list keeps me engaged and makes it much more likely that i’ll actually finish a thing i start. so. the list:

Screen Shot 2016-08-15 at 9.50.13 PM

thankfully, after cursing my fate at having to deconstruct and reconstruct such a ridiculous string, i found out about a thing called URI encoding. those weird symbols are not google-specific, and there are special functions to deal with them. decodeURIComponent took care of my first two to-do items. indexOf took care of my third. adding forward slashes to all my other selectorcontexts, to distinguish between the encoded hrefs on google and the un-encoded hrefs on other sites, took care of my last to-do.
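as a quick illustration, decodeURIComponent undoes all of those escapes in one call (the video url below is a made-up example, not the actual google href):

```javascript
// decodeURIComponent turns percent-escapes like %3A and %2F back into the
// characters they stand for -- no hand-rolled replace() chain needed.
const encoded = 'https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DdQw4w9WgXcQ';
const decoded = decodeURIComponent(encoded);
console.log(decoded); // https://www.youtube.com/watch?v=dQw4w9WgXcQ
```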


  label: contextMenuLabel,
  context: [
    //TODO?: create a variable with all supported hrefs and reference it in this SelectorContext
    cm.SelectorContext('[href*=""], [href*=""], [href*=""], [href*=""]')
  ],
  contentScript: contextMenuContentScript,
  onMessage: function(url) {
    const regex = /url=(https[^;]*)/;
    const match = regex.exec(url)[1];
    const decoded = decodeURIComponent(match).split('&usg')[0];
    let getUrlFn;
    let domain;
    if (decoded.indexOf('' || '') > -1) {
      getUrlFn = getYouTubeUrl;
      domain = '';
    } else if (decoded.indexOf('') > -1) {
      getUrlFn = getVimeoUrl;
      domain = '';
    } else if (decoded.indexOf('') > -1) {
      getUrlFn = getVineUrl;
      domain = '';
    }
    if (domain && getUrlFn) {
      launchVideo({url: decoded,
        domain: domain,
        getUrlFn: getUrlFn});
    } else console.error('Decoding failed');
  }

i am 1000% positive i would not have completed this task without a to-do list. thanks to mentor jared for teaching me how!

Dave Townsend: A new owner for the add-ons manager

I’ve been acting as the owner for the add-ons manager for the past little while, and while I have always cared a lot about the add-ons space, it is time to formally pass the torch. So I was pleased that Rob Helmer was willing to take it over from me.

Rob has been doing some exceptional work on making system add-ons (used as part of the go faster project) more robust and easier for Mozilla to use. He’s also been thinking a lot about improvements we can make to the add-ons manager code to make it more friendly to approach.

As my last act I’m updating the suggested reviewers in bugzilla to be him, Andrew Swan (who in his own right has been doing exceptional work on the add-ons manager) and me as a last resort. Please congratulate them and direct any questions you may have about the add-ons manager towards Rob.

Mitchell Baker: Increasing Information Flow at Mozilla

Information flow between leaders and individual contributors is critical to an effective organization. The ability to better understand the needs of the organization, to gather input across different domains, and to get other perspectives before we make a decision or manage a change helps create a clueful and informed organisation.

This quarter we are piloting a number of untypical discussion sessions between leaders and individuals across Mozilla, whereby leaders will engage with participants who are not usually in their domain. There are hypotheses we’d like to test.  One is that cross-team, multiple-level discussion and information flow will: prevent us from being blind-sided, increase our shared understanding, and empower people to participate and lead in productive ways.  A second hypothesis is that there is an appetite for this type of discussion and some templates and structure would make it easier for people to know how to approach it.

We have 9 leaders who have agreed to host a discussion session this quarter, and we’re currently in the process of inviting participants from across the organization. Currently, there are 4 types of discussions we’ve identified that could take place; there are likely more:

  • Pulse (“Taking the Pulse”) – allow a leader to quickly test an idea and/or get insights from the wider community about the current state of Mozilla, or their domain area.
  • Ideation – to generate insights from a targeted and diverse group of participants.
  • Decision – to ask for feedback regarding a decision from a broad group of people beyond the typical domain to ensure they are not blind-sided, and to provide key diverse input.
  • Change Management – creates a shared understanding for a decision already made.

If these sessions prove useful, we may create a toolkit for leadership on how to run dispersed discussion sessions and gather input from across Mozilla, and in addition a toolkit for individual contributors for understanding and contributing to important topics across Mozilla.

We’ll plan to share more updates next month.

Mozilla Addons Blog: “Restart Required” Badge on AMO

When add-ons were first introduced as a way to personalize Firefox, they required a restart of Firefox upon installation. Then came “restartless” extensions, which made the experience of installing an add-on much smoother. Every iteration of extensions APIs since then has similarly supported restartless add-ons, up to WebExtensions.

To indicate that an add-on was restartless, we added “No Restart” badges next to them on (AMO). This helped people see which add-ons would be smoother to install, and encouraged developers to implement them for their own add-ons. However, two things happened recently that prompted us to reverse this badge. Now, rather than using a “No Restart” badge to indicate that an add-on is restartless, we will use a “Restart Required” badge to indicate that an add-on requires a restart.

One reason for this change is because we reached a tipping point: now that restartless add-ons are more common, and the number of WebExtensions add-ons is increasing, there are now more extensions that do not require a restart than those that do.

Another reason is that we encountered an unexpected issue with the recent introduction of multiprocess Firefox. In Firefox 48, multiprocess capability was only enabled for people with no add-ons installed. If you are one of these people and you now install an add-on, you’ll be asked to restart Firefox even if the add-on is restartless. This forced restart will only occur over the next few versions as multiprocess Firefox is gradually rolled out. This is not because of the add-on, but because Firefox needs to turn multiprocess off in order to satisfy the temporary rule that only people without add-ons installed have multiprocess Firefox enabled. So a “No Restart” badge may be confusing to people.

Restartless add-ons becoming the norm is a great milestone and a huge improvement in the add-on experience, and one we couldn’t have reached without all our add-on developers—thank you!

Christian Heilmann: Better keyboard navigation with progressive enhancement


When building interfaces, it is important to also consider those who can only use a keyboard to use your products. This is a basic accessibility need, and in most cases it isn’t hard to allow for basic keyboard access. It means first and foremost using keyboard-accessible elements for interaction:

  • anchors with a valid href attribute if you want the user to go somewhere
  • buttons when you want to execute your own code and stay in the document

You can make almost everything keyboard accessible using the roving tab index technique, but why bother when there are HTML elements that can do the same?

Making it visual

Using the right elements isn’t quite enough though; you also need to make it obvious where a keyboard user is in a collection of elements. Browsers do this by putting an outline around active elements. Whilst dead useful, this has always been a thorn in the side of people who want to control the whole visual display of any interaction. You can remove this visual aid by setting the CSS outline property to none, which is a big accessibility issue unless you also provide an alternative.

By using the most obvious HTML elements for the job and some CSS to ensure that not only hover but also focus states are defined, we can make it easy for our users to navigate a list of items by tabbing through them. Shift-Tab allows you to go backwards. You can try it here, and the HTML is pretty straightforward.


example how to tab through a list of buttons

Using a list gives our elements a hierarchy and a way to navigate with assistive technology that a normal browser doesn’t have. It also gives us a lot of HTML elements to apply styling to. With a few styles, we can turn this into a grid, using less vertical space and allowing for more content in a small space.

ul, li {
  margin: 0;
  padding: 0;
  list-style: none;
}
button {
  border: none;
  display: block;
  background: goldenrod;
  color: white;
  width: 90%;
  height: 30px;
  margin: 5%;
  transform: scale(0.8);
  transition: 300ms;
}
button:hover, button:focus {
  transform: scale(1);
  outline: none;
  background: powderblue;
  color: #333;
}
li {
  float: left;
}
/* grid magic by @heydonworks */
li {
  width: calc(100% / 4);
}
li:nth-child(4n+1):nth-last-child(1) {
  width: 100%;
}
li:nth-child(4n+1):nth-last-child(1) ~ li {
  width: 100%;
}
li:nth-child(4n+1):nth-last-child(2) {
  width: 50%;
}
li:nth-child(4n+1):nth-last-child(2) ~ li {
  width: 50%;
}
li:nth-child(4n+1):nth-last-child(3) {
  width: calc(100% / 4);
}
li:nth-child(4n+1):nth-last-child(3) ~ li {
  width: calc(100% / 4);
}

The result looks pretty fancy and it is very obvious where we are in our journey through the list.

tabbing through a grid item by item

Enhancing the keyboard access – providing shortcuts

However, if I am in a grid, wouldn’t it be better if I could move in two directions with my keyboard?

Using a bit of JavaScript for progressive enhancement, we get this effect and can navigate the grid either with the cursor keys or by using WASD:

navigating inside a grid of elements using the cursor keys going up, down, left and right

It is important to remember here that this is an enhancement. Our list is still fully accessible by tabbing, and should JavaScript fail for any of the dozens of reasons it can, we lose a bit of convenience instead of having no interface at all.

I’ve packaged this up in a small, open source, vanilla, dependency-free JavaScript library called gridnav, and you can get it on GitHub. All you need to do is call the script and give it a selector to reach your list of elements.

<ul id="links" data-amount="5" data-element="a">
  <li><a href="#">1</a></li>
  <li><a href="#">2</a></li>
  …
  <li><a href="#">25</a></li>
</ul>
<script src="gridnav.js"></script>
<script>
  var linklist = new Gridnav('#links');
</script>

You define the number of elements in each row and the keyboard-accessible element as data attributes on the list element. These are optional, but they make the script faster and less error-prone. There’s an extensive README explaining how to use the script.
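As a sketch of what that could look like inside such a script (this is not gridnav’s actual code; the function and parameter names here are made up), the optional data-amount value can be resolved with a fallback to measuring widths, as the inline example later in this post does:

```javascript
// Hypothetical helper: resolve the number of items per row from an
// optional data-amount attribute value, falling back to measuring
// the list width against the first item's width.
function resolveAmount(dataAmount, listWidth, itemWidth) {
  return dataAmount !== null
    ? parseInt(dataAmount, 10)           // explicit attribute wins
    : Math.floor(listWidth / itemWidth); // otherwise measure
}

console.log(resolveAmount('5', 800, 200));  // explicit: 5
console.log(resolveAmount(null, 800, 200)); // measured: 4
```

Reading the attribute up front avoids repeated layout measurements, which is why providing it makes the script faster.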

How does it work?

When I started to ponder how to do this, I started like any developer does: trying to tackle the most complex way. I thought I needed to navigate the DOM a lot using parent nodes and siblings with lots of comparing of positioning and using getBoundingClientRect.

Then I took a step back and realised that it doesn’t matter how we display the list. In the end, it is just a list and we need to navigate this one. And we don’t even need to navigate the DOM, as all we do is go from one element in a collection of buttons or anchors to another. All we need to do is to:

  1. Find the element we are on (ev.target gives us that).
  2. Get the key that was pressed
  3. Depending on the key, move to the next or previous element, or skip a few elements to get to the next row

Like this (you can try it out here):

moving in the grid is the same as moving along an axis

The amount of elements we need to skip is defined by the amount of elements in a row. Going up is going n elements backwards and going down is n elements forwards in the collection.

diagram of navigation in the grid

The full code is pretty short if you use some tricks:

  var list = document.querySelector('ul');
  var items = list.querySelectorAll('button');
  var amount = Math.floor(
        list.offsetWidth /
        list.firstElementChild.offsetWidth
      );
  var codes = {
    38: -amount,
    40: amount,
    39: 1,
    37: -1
  };
  for (var i = 0; i < items.length; i++) {
    items[i].index = i;
  }
  function handlekeys(ev) {
    var keycode = ev.keyCode;
    if (codes[keycode]) {
      var t = ev.target;
      if (t.index !== undefined) {
        if (items[t.index + codes[keycode]]) {
          items[t.index + codes[keycode]].focus();
        }
      }
    }
  }
  list.addEventListener('keyup', handlekeys);

What’s going on here?

We get a handle to the list and cache all the keyboard accessible elements to navigate through

  var list = document.querySelector('ul');
  var items = list.querySelectorAll('button');

We calculate the number of elements to skip when going up and down by dividing the width of the list element by the width of the first child element that is an HTML element (in this case, the LI).

  var amount = Math.floor(
        list.offsetWidth /
        list.firstElementChild.offsetWidth
      );

Instead of creating a switch statement or lots of if statements for keyboard handling, I prefer to define a lookup table. In this case, it is called codes. The key code for up is 38, 40 is down, 39 is right and 37 is left. If we now get codes[37], for example, we get -1, which is the number of elements to move in the list.

  var codes = {
    38: -amount,
    40: amount,
    39: 1,
    37: -1
  };

We can use ev.target to get which button was pressed in the list, but we don’t know where in the list it is. To avoid having to loop through the list on each keystroke, it makes more sense to loop through all the buttons once and store their position in an index property on the button itself.

  for (var i = 0; i < items.length; i++) {
    items[i].index = i;
  }

The handlekeys() function does the rest. We read the code of the key pressed and compare it with the codes lookup table. This also means we only react to arrow keys in our function. We then get the current element the key was pressed on and check if it has an index property. If it has one, we check if an element exists in the collection in the direction we want to move. We do this by adding the index of the current element to the value returned from the lookup table. If the element exists, we focus on it.

  function handlekeys(ev) {
    var keycode = ev.keyCode;
    if (codes[keycode]) {
      var t = ev.target;
      if (t.index !== undefined) {
        if (items[t.index + codes[keycode]]) {
          items[t.index + codes[keycode]].focus();
        }
      }
    }
  }

We apply a keyup event listener to the list and we’re done :)

  list.addEventListener('keyup', handlekeys);

If you feel like following this along live, here’s a quick video tutorial of me explaining all the bits and bobs.

The video has a small bug in the final code, as I am not comparing the index property to undefined, which means the keyboard functionality doesn’t work on the first item (as 0 is falsy).
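To see why that matters, here is a tiny standalone illustration (not the video’s code) of the falsy-zero pitfall:

```javascript
// Index 0 is falsy in JavaScript, so a plain truthiness check
// would skip the very first item in the list.
var index = 0; // the first button stores index 0

var buggyCheck = Boolean(index);        // false: first item skipped
var correctCheck = index !== undefined; // true: 0 is a valid index

console.log(buggyCheck, correctCheck); // false true
```

This is why the article’s final code compares against undefined instead of relying on truthiness.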

Ian BickingA Product Journal: Oops We Made A Scraper

I’m blogging about the development of a new product in Mozilla, look here for my other posts in this series

A while back we got our first contributor to PageShot, who contributed a feature he wanted for Outernet – the ability to use PageShot to create readable and packaged versions of websites for distribution. Outernet is a neat project: they are building satellite capacity to distribute content anywhere in the world. But it’s purely one-way, so any content you send has to be complete. And PageShot tries pretty hard to identify and normalize all that content.

Lately I spent a week with the Activity Stream team, and got to thinking about the development process around recommendations. I’d like to be able to take my entire history and actually get the content, and see what I can learn from that.

And there’s this feature in PageShot to do just that! You can install the add-on and enable the pref to make the browser into a server:

about:addons prefs

After that you can get the shot data from a page with a simple command:

$ url=
$ server=http://localhost:10082
$ curl "${server}/data/?url=${url}&allowUnknownAttributes=true&delayAfterLoad=1000" > data.json

allowUnknownAttributes preserves attributes like data-* attributes that you might find useful in your processing. delayAfterLoad gives the milliseconds to wait, usually for the page to “settle”.

A fun part of this is that because it’s in a regular browser it will automatically pick up your profile and scrape the page as you, and you’ll literally see a new tab open for a second and then close. Install an ad blocker or anything else and its changes will also be applied.

The thing you get back will be a big JSON object:

{
  "bodyAttrs": ["name", "value"],
  "headAttrs": [],
  "htmlAttrs": [],
  "head": "html string",
  "body": "html string",
  "resources": {
    "uuid": {
      "url": "..."
    }
  }
}

There’s other stuff in there too (e.g., Open Graph properties), but this is what you need to reconstruct the page itself. It has a few nice features:

  1. The head and body are well formed; they are actually serialized from the DOM, not related to the HTTP response.
  2. All embedded resources (mostly images) are identified in the resources mapping. The URLs in the page itself are replaced with those UUIDs, so you can put them back with simple string substitutions, or you can rewrite the links easily.
  3. Actual links (<a href>) should all be absolute.
  4. It will try to tell you if the page is private (though it’s just a heuristic).
  5. If you want, it’ll include a screenshot of the full page as a data: URL (use &thumbnailWidth=px to choose how wide).
  6. CSS will be inlined in a <style> tag, perhaps reducing the complexity of the page for you.
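As a sketch of point 2 (this is not PageShot’s code, and the UUID and URL below are invented for illustration), the substitution can be as simple as:

```javascript
// Replace each resource UUID in the captured body with its URL.
// The shot object below is made-up sample data in the shape the
// post describes.
function inlineResources(body, resources) {
  return Object.keys(resources).reduce(function (html, uuid) {
    // plain string substitution, as suggested above
    return html.split(uuid).join(resources[uuid].url);
  }, body);
}

var shot = {
  body: '<img src="11c4cb85-step">',
  resources: { '11c4cb85-step': { url: '' } }
};

console.log(inlineResources(shot.body, shot.resources));
// → <img src="">
```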

Notably scripts and hidden elements will not be included (because PageShot was written to share visible content and not to scrape content).

Anyway, fun to realize the tool can address some hidden and unintentional use cases.

The Servo BlogThis Week In Servo 75

In the last week, we landed 108 PRs in the Servo organization’s repositories.

Thanks to the community for their patience while our continuous integration services were in a more manual mode as we adapted to some changes from Travis CI that complicated our autolander. Things should be fine now - please reach out in #servo if you see anything!

We are delighted to announce that long-time contributor Michael Howell (notriddle) has been made a reviewer! Thanks for all of your contributions and congratulations on your new role.

Planning and Status

Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page.

This week’s status updates are here.

Notable Additions

  • shinglyu fixed auto positioning on absolute flows
  • glennw implemented a slew of initial optimizations for the new WebRender
  • nox upgraded roughly half the Rust ecosystem to a new version of Euclid
  • notriddle added a flag to dump computed style values
  • ms2ger updated Glutin from upstream
  • paul updated browserhtml
  • vvuk continued his tear through the ecosystem, fixing everything to build cleanly on Windows with MSVC
  • simonsapin implemented ToCss for selector types
  • larsberg migrated our CI to check Travis status indirectly via GitHub
  • wafflespeanut added support for word-spacing for geckolib
  • anholt improved our WebGL support on Linux
  • msreckovic corrected inner radii for borders in WebRender
  • UK992 improved tidy’s license validation code
  • emilio fixed issues related to the client point with fixed positioned stacking contexts
  • paul added a Homebrew package for another path to the macOS nightly build
  • emilio redesigned the style sharing API
  • jennalee implemented the Request API
  • splav fixed a bug with the layout of inline pseudo elements

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


None this week.

Karl Dubost[worklog] Edition 031. Heat wave and cicadas falling

Cicadas are an interesting insect. They spend most of their life as nymphs under the ground, for between 2 and 5 years, though some species live up to 17 years before coming out, only to die six weeks later. What does that tell us about all the hidden work we put in during a lifetime that blooms and shines for only a couple of hours?

Tune of the week: Ella Fitzgerald - Summertime (1968)

Webcompat Life

Progress this week:

Today: 2016-08-15T08:58:55.633182
298 open issues
needsinfo       4
needsdiagnosis  80
needscontact    17
contactready    29
sitewait        158

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • Gecko and Blink/WebKit have different default CSS border-width values for the input element. It breaks a site. If we decide to fix it in Gecko, do we break other sites relying on this default value?
  • Performance issues on a heavy map Web site, though I'm not sure it is really a Web compatibility issue. It looks like more of something related to Gecko.
  • Another issue related to layout with a content inside a form. I need to dig a bit more.
  • no tap on priceline
  • Use chrome only for transcribeme
  • When using element.removeEventListener('event', callback), never forget the second argument, because omitting it fails in Firefox, though that seems to work in Chrome.
  • mask, background and mask-image difference creates immaterial design. The most important now being to really find what is the source of the issue. dev

  • Is invalid always the right keyword for closing an issue? From our side (project owner) it is invalid because it is not in the scope of the project, or there isn't enough details to reproduce. But from the user's perspective who had genuinely an issue (whatever the issue is), it can be felt as a strong rejection along the lines of "We don't care about you". Maybe we should find a better way of closing issues when they are out of scope.

Reading List

  • More a quote of the day, but spot on: > Wondering how long it will take for publishers to realize it’s Medium that desperately needs them and not the other way around.
  • And another one from Adam. And I really wish we could do that in a cool way! > <adam_s> Almost at our 3000th bug on The lucky reporter who hits 3000 wins a broken light bulb

Follow Your Nose


  • Document how to write tests on using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.


Nick DesaulniersObject Files and Symbols

What was supposed to be one blog post about memory segmentation turned into what will be a series of posts. As the first in the series, we cover the extreme basics of object files and symbols. In follow up posts, I plan to talk about static libraries, dynamic libraries, dynamic linkage, memory segments, and finally memory usage accounting. I also cover command line tools for working with these notions, both in Linux and OSX.

A quick review of the compilation+execution pipeline (for terminology):

  1. Lexing produces tokens
  2. Parsing produces an abstract syntax tree
  3. Analysis produces a code flow graph
  4. Optimization produces a reduced code flow graph
  5. Code gen produces object code
  6. Linkage produces a complete executable
  7. Loader instructs the OS how to start running the executable

This series will focus on part #6.

Let’s say you have some amazing C/C++ code, but for separation of concerns, you want to start moving it out into separate source files. Whereas previously in one file you had:

// main.c
#include <stdio.h>

void helper () {
  puts("helper");
}

int main () {
  helper();
}

You now have two source files and maybe a header:

// main.c
#include "helper.h"

int main () {
  helper();
}

// helper.h
void helper();

// helper.c
#include <stdio.h>
#include "helper.h"

void helper () {
  puts("helper");
}

In the single source version, we would have compiled and linked that with clang main.c and had an executable file. In the multiple source version, we first compile our source files to object files, then link them all together. That can be done separately:

$ clang -c helper.c     # produces helper.o
$ clang -c main.c       # produces main.o
$ clang main.o helper.o # produces a.out

We can also do the compilation and linkage in one step:

$ clang helper.c main.c # produces a.out

Nothing special thus far; C/C++ 101. In the first case of separate compilation and linkage steps, we were left with intermediate object files (.o). What exactly are these?

Object files are almost full executables. They contain machine code, but that code still requires a relocation step. They also contain metadata about the addresses of their variables and functions (called symbols) in an associative data structure called a symbol table. The addresses may not be the final addresses of the symbols in the final executable. They also contain some information for the loader, and probably some other stuff.

Remember that if we fail to specify the helper object file, we’ll get an undefined symbol error.

$ clang main.c
Undefined symbols for architecture x86_64:
  "_helper", referenced from:
      _main in main-459dde.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

The problem is that main.o refers to some symbol called helper, but on its own doesn’t contain any more information about it. Let’s say we want to know what symbols an object file contains, or expects to find elsewhere. Let’s introduce our first tool, nm. nm will print the name list or symbol table for a given object or executable file. On OSX, symbols are prefixed with an underscore.

$ nm helper.o
0000000000000000 T _helper
                 U _puts

$ nm main.o
                 U _helper
0000000000000000 T _main

$ nm a.out
0000000100000f50 T _helper
0000000100000f70 T _main
                 U _puts

Let’s dissect what’s going on here. The output (as understood by man 1 nm) is a space separated list of address, type, and symbol name. We can see that the addresses are placeholders in object files, and final in executables. The name should make sense; it’s the name of the function or variable. While I’d love to get in depth on the various symbol types and talk about sections, I don’t think I could do as great a job as Peter Van Der Linden in his book “Expert C Programming: Deep C Secrets.”

For our case, we just care about whether the symbol in a given object file is defined or not. The type U (undefined) means that this symbol is referenced or used in this object code/executable, but its value wasn’t defined here. When we compiled main.c alone, it should now make sense why we got the undefined symbol error for helper. main.o contains a symbol for main, and references helper. helper.o contains a symbol for helper, and references puts. The final executable contains symbols for main and helper and a reference to puts.

You might be wondering where puts comes from then, and why didn’t we get an undefined symbol error for puts like we did earlier for helper. The answer is the C runtime. libc is implicitly dynamically linked to all executables created by the C compiler. We’ll cover dynamic linkage in a later post in this series.

When the linker performs relocation on the object files, combining them into a final executable, it goes through placeholders of addresses and fills them in. We did this manually in our post on JIT compilers.

While nm gave us a look into our symbol table, two other tools I use frequently are objdump on Linux and otool on OSX. Both of these provide disassembled assembly instructions and their addresses. Note how the symbols for functions get translated into labels of the disassembled functions, and that their address points to the first instruction in that label. Since I’ve shown objdump numerous times in previous posts, here’s otool.

$ otool -tV helper.o
(__TEXT,__text) section
0000000000000000    pushq    %rbp
0000000000000001    movq    %rsp, %rbp
0000000000000004    subq    $0x10, %rsp
0000000000000008    leaq    0xe(%rip), %rdi         ## literal pool for: "helper"
000000000000000f    callq    _puts
0000000000000014    movl    %eax, -0x4(%rbp)
0000000000000017    addq    $0x10, %rsp
000000000000001b    popq    %rbp
000000000000001c    retq
$ otool -tV main.o
(__TEXT,__text) section
0000000000000000    pushq    %rbp
0000000000000001    movq    %rsp, %rbp
0000000000000004    movb    $0x0, %al
0000000000000006    callq    _helper
000000000000000b    xorl    %eax, %eax
000000000000000d    popq    %rbp
000000000000000e    retq
$ otool -tV a.out
(__TEXT,__text) section
0000000100000f50    pushq    %rbp
0000000100000f51    movq    %rsp, %rbp
0000000100000f54    subq    $0x10, %rsp
0000000100000f58    leaq    0x43(%rip), %rdi        ## literal pool for: "helper"
0000000100000f5f    callq    0x100000f80             ## symbol stub for: _puts
0000000100000f64    movl    %eax, -0x4(%rbp)
0000000100000f67    addq    $0x10, %rsp
0000000100000f6b    popq    %rbp
0000000100000f6c    retq
0000000100000f6d    nop
0000000100000f6e    nop
0000000100000f6f    nop
0000000100000f70    pushq    %rbp
0000000100000f71    movq    %rsp, %rbp
0000000100000f74    movb    $0x0, %al
0000000100000f76    callq    _helper
0000000100000f7b    xorl    %eax, %eax
0000000100000f7d    popq    %rbp
0000000100000f7e    retq

readelf -s <object file> will give us a list of symbols on Linux. ELF is the file format used by the loader on Linux, while OSX uses Mach-O. Thus readelf and otool, respectively.

Also note that for static linkage, symbols need to be unique*, as they refer to memory locations to either read/write to in the case of variables or locations to jump to in the case of functions.

$ cat double_define.c
void a () {}
void a () {}
int main () {}
$ clang double_define.c
double_define.c:2:6: error: redefinition of 'a'
void a () {}
double_define.c:1:6: note: previous definition is here
void a () {}
1 error generated.

*: there’s a notion of weak symbols, and some special things for dynamic libraries we’ll see in a follow up post.

Languages like C++ that support function overloading (functions with the same name but different arguments, namespaces, or classes) must mangle their function names to make them unique.

Code like:

namespace util {
  class Widget {
      void doSomething (bool save);
      void doSomething (int n);
  };
}

Will produce symbols like:

$ clang class.cpp -std=c++11
$ nm a.out
0000000100000f70 T __ZN4util6Widget11doSomethingEb
0000000100000f60 T __ZN4util6Widget11doSomethingEi

Note: GNU nm on Linux distros will have a --demangle option:

$ nm --demangle a.out
00000000004006d0 T util::Widget::doSomething(bool)
00000000004006a0 T util::Widget::doSomething(int)

On OSX, we can pipe nm into c++filt:

$ nm a.out | c++filt
0000000100000f70 T util::Widget::doSomething(bool)
0000000100000f60 T util::Widget::doSomething(int)

Finally, if you don’t have an object file, but instead a backtrace that needs demangling, you can either invoke c++filt manually or use

Rust also mangles its function names. For FFI or interface with C functions, other languages usually have to look for or expose symbols in a manner suited to C, the lowest common denominator. C++ has extern "C" blocks and Rust has extern blocks.

We can use strip to remove symbols from a binary. This can slim down a binary at the cost of making stack traces unreadable. If you’re following along at home, try comparing the output from your disassembler and nm before and after running strip on the executable. Luckily, you can’t strip the symbols out of object files, otherwise they’d be useless as you’d no longer be able to link them.

If we compile with the -g flag, we can create a different kind of symbol: debug symbols. Depending on your compiler and host OS, you’ll get another file you can run through nm to see an entry per symbol. You’ll get more info by using dwarfdump on this file. Debug symbols retain source information such as filename and line number for all symbols.

This post should have been a simple refresher of some of the basics of working with C code. Finding symbols to be placed into a final executable and relocating addresses are the main job of the linker, and will be the main theme of the posts in this series. Keep your eyes out for more in this series on memory segmentation.

Mozilla Addons BlogWebExtensions Taking Root

Stencil and its 700,000+ royalty-free images are now available for Firefox users, thanks to WebExtensions.

From enhanced security for users to cross-browser interoperability and long-term compatibility with Firefox—including compatibility with multiprocess Firefox—there are many reasons why WebExtensions are becoming the future of add-on development.

So it’s awesome to see so many developers already embracing WebExtensions. To date, there are more than 700 listed on AMO. In celebration of their efforts to modernize their add-ons, I wanted to share a few interesting ones I recently stumbled upon…

musicfm has an impressively vast and free music library, plus an intuitive layout for simple browsing. However, I’m more of a SoundCloud music consumer myself, so I was intrigued to find SCDL SoundCloud Downloader, which is built for downloading not just music files, but related artwork and other meta information.

The popular Chrome add-on Stencil is now available for Firefox, thanks to WebExtensions. It’s a diverse creativity tool that allows you to combine text and imagery in all sorts of imaginative ways.

musicfm offers unlimited free music and the ability to create your own playlists and online stations.

I’m enjoying Dark Purple YouTube Theme. I think video resolution reads better against a dark background.

Keepa is one of the finest Amazon price trackers out there that also supports various international versions of the online bazaar (UK, Germany, Japan, plus many others).

Googley Eyes elegantly informs you which sites you visit send information about you to Google.

Search Engine Ad Remover is a perfectly titled extension. But arguably even better than removing ads is replacing them with cat pics.

Thanks for your continued support as we push ahead with a new model of extension development. If you need help porting your add-on to WebExtensions, check out the resources we’ve compiled. If you’re interested in writing your first add-on with WebExtensions, here’s how to get started.

Jennie Rose HalperinHello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Mozilla Open Design BlogAnd then there were five

It’s been a little quiet around here for a week or so because we’ve had our heads down agreeing the final verbal themes for the Mozilla brand project.

Six weeks ago we shared our initial seven themes, and then took hundreds of comments from the Mozillians at the All Hands conference in London. We gathered some useful comments online, plus some invaluable feedback internally – especially about upping the positivity and doing more with the whole principle of ‘open’.

So now we’re ready to share our final five thematic options.

Taking a stand

Right from the off there’s been a desire to play to Mozilla’s non-profit strengths, amplify its voice and turn up the volume. Previously this overall direction was more about what Mozilla were fighting against – we’ve turned this to be more about what Mozilla are fighting for.

Theme 1: The Good Fight
Sometimes you have to fight for what you believe in. Mozilla believes in an open, equal, accessible Internet – for everyone. One that makes us active creators, not passive receivers. One that works for the benefit of the many, not the few. We’re ready to take a stand, link arms with others who share our view of the future, and provide tools and opportunities for those who need them. You can wish for a better web, and a better world. Or you can get involved and make it happen.

Positive impact on humanity

Another realization after our first stage of work was that we’d become too mired in geek-speak and weren’t successfully explaining how a healthy Internet helps everyone. So theme two concentrates on people, not machines.

Theme 2: For the Internet of People
Mozilla believes that the Internet should work for people – and the best way to achieve that is to give people the power to shape the Internet. At its best, the Internet is humanity’s greatest invention. It has the ability to connect human minds and free human potential on a scale never seen before. But we need to keep it open, always. We need to distribute power widely, not divide it narrowly. We need to build bridges, not walls. The future of the Internet is amazing, as long as it remains the Internet of People.

About ‘Open’

We had a sneaking suspicion that we’d buried the whole debate about ‘open’ too deep in the first tranche of work. So the third theme addresses this head on.

Theme 3: Choose open
The future of the internet can be open, or closed. We choose open. We choose an internet that is equal and accessible by default. Open to ideas, open to collaboration, open to everyone. But it isn’t a choice we can make alone. An open web is something we all have to choose together. And it involves many other choices. The tools we use. The products we support. The way we behave online. Those choices can be complex, but the guiding principle should always be simple. Choose open.

The pioneers

In our discussions at the Mozilla London All Hands meeting, and since, we’ve been talking about Mozilla’s pioneering role in the early development of the Internet, and ever since. So theme four looks hard at this (and brings back some of that grit as well, just for good measure).

Theme 4: With you from the start
Mozilla was, is, and always will be on the side of those who want a better, freer, more open Internet. In the early days, we were among those helping to embed principles of openness and accessibility into the web’s DNA. Now those principles matter more than ever. We need an Internet that works wonders for the many, not just the few. We need to stand by the founding ideals of the Internet, and carry them forward into new products, platforms, conversations, and great ideas. We’ve been with you from the start. And we’re just getting started.

The maker community

Last, but definitely not least, we wanted a clearer idea that summed up the dynamic online community worldwide that Mozilla represents. Now that we’ve met hundreds of them in person, we wanted something that verbally did them justice.

Theme 5: Mavericks, united
The Internet belongs to mavericks and independent spirits. It’s the sum total of millions of people working towards something greater than themselves. We believe the independent spirit that founded the Internet is vital to its future. But being independent doesn’t mean being alone. We bring together free thinkers, makers and doers from around the world. We create the tools, platforms, conversations, and momentum to make great things happen. We’re not waiting for the future of the Internet to be decided by others. It’s ours to invent.

What’s next? Well, in parallel to agreeing all of the above, we’ve started the design routes that go with each one. With a bit of luck, we’ll be ready to share our first thoughts on that very soon.

ArkyEmbedding Google Cardboard Camera VR Photosphere with A-Frame

Early this year I started looking into the VR (Virtual Reality) web applications. Web browsers now natively support VR applications using WebVR JavaScript API. We can now design virtual worlds using markup language and connect them to devices such as Oculus Rift and Leap motion controller using just a web browser.

To hit the ground running with WebVR, I started an experiment to capture Hackerspace Phnom Penh using the Google Cardboard camera app and display it using the A-Frame framework. The Google Cardboard camera photosphere is not supported by A-Frame, but the positive responses to my query encouraged me to try a hack using the A-Frame Panorama component.

And it works, almost. I had to tweak the scale setting a bit to get the perspective right, but it does work. The ideal solution is to create a custom A-Frame component, but I leave that to more skilled people.

The markup needed for this demo is simple; you can achieve this with one line:

     <a-sky src="img/hackerspace.vr.jpg" radius="2400" scale="2 1 2"> </a-sky>

You can see the demo on Youtube or visit this webpage in a compatible web browser.

Mozilla Reps CommunityRepsNext – Improvements overview

Over the past months we have worked extensively on the future of the Reps program – called RepsNext. In several working groups we developed proposals to improve the Reps program, keeping up with Mozilla’s and our Reps’ needs. Following the RepsNext Introduction Video, this blog post provides a broad overview of the various focus areas and invites further conversation.

RepsNext – The Visual Structure

Here is a visual overview of the RepsNext structure:

RepsNext structure as a flow chart, also explained in the next few paragraphs

With RepsNext there will be three different tracks to be specialized in:

  • Functional Goals
  • Leadership
  • Resources

The Functional Goals track is still work-in-progress, so we cannot provide a lot of information yet. We believe this will be a group of Reps who are heavily engaged in Mozilla’s functional areas.

Reps from the Leadership track support other Mozillians and communities through their broad knowledge. Reps in this track will regularly exchange information among themselves, creating alignment among the various functional goals in the Reps program.

For all resource requests there is a dedicated Resources track, which specializes in increasing the program’s impact. The Review team, which is part of this track, is responsible for reviewing budget requests.

Finally, every Rep will have a coach who has strong leadership skills and can provide guidance on Reps’ personal development.

Through this structure, Reps from all specialization tracks can work together towards the overall Reps and Participation goals, each Rep contributing with their particular strengths to advance Mozilla’s mission.

What are we going to improve in the Reps program?

Let’s compare the current state of the Reps program with the proposed improvements.

  • Alignment with Mozilla
    • Current: There are no formal alignment processes with the Mozilla organization.
    • Future: The Reps program is aligned with the Participation team’s OKRs. Council members participate in important planning and strategy meetings.
  • Budget Request Reviews
    • Current: All Reps can submit budget requests, leading to a lot of ping pong when reviewing those.
    • Future: Reps can specialize on “Resources” and file requests aligned for impact. This leads to faster reviews.
  • Reps Activities
    • Current: Reps are mostly focused on running events in their communities.
    • Future: Reps will be able to specialize in a certain topic (Resources, Leadership, Functional areas).
  • Mentoring
    • Current: Mentors are busy with budget request reviews.
    • Future: Mentors will be focusing on personal development, with no need to do budget reviews anymore (but they can be part of the Resources track).
  • Leadership
    • Current: Leadership has been part of Reps since its very beginning, but it was not formally nurtured very well.
    • Future: With the Leadership track we enable Reps’ personal development and develop their leadership potential, expanding their impact on their fellow Mozillians.

Moving forward

We plan to go into more detail for each of the above mentioned areas in future blog posts. In order to prioritize and invest our (volunteering) energy in the most impactful way, we need your help: Which of the above areas are you most interested in? Where do you want to hear more in the next blog post? Which concerns do you have? What do you find intriguing?

Please let us know in Discourse and we aim to come up with an article answering to all your questions in a timely manner.

Support.Mozilla.OrgWhat’s Up with SUMO – 11th August

Hello, SUMO Nation!

How have you been? We missed you! Some of you have gone on holidays and already came back (to the inaudible – but huge – relief of the hundreds of users who ask questions in the forums and the millions of visitors who read the Knowledge Base). Let’s move on to the updates, shall we?

Welcome, new contributors!

  • … who seem to be enjoying summer away from computers… The way they should! So, no major greeting party for anyone this week, since you’ve been fairly quiet… But, if you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 17th of August!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



Support Forum

Knowledge Base & L10n


  • for Desktop
    • …or the Desktop side…

…what a quiet ending to this post, I hope you did not fall asleep. Then again, a siesta on a hot summer day is the best thing ever, trust me :-). Keep rocking (quietly, at least in the summer) the helpful web!

Air MozillaConnected Devices Weekly Program Update, 11 Aug 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Brian KingActivate Mozilla

Today we are launching Activate Mozilla, a campaign where you can find the focus initiatives that support the current organizational goals, learn how to start participating, and mobilize your community around clearly defined activities.

Activate Mozilla Site


One of the main asks from our community recently has been the need for more clarity on the most important things to do to support Mozilla right now, and how to participate. With this site we have a place to answer this question that is always up to date.

We are launching with three activities, focused on Rust, Web Compatibility, and Test Pilot. Within each activity we explain why it is important and what goals we are trying to reach, and provide step-by-step instructions on how to do the activity. We want you to #mozactivate by mobilizing your community, so don’t forget to share, share, share!
This is a joint effort between the Participation team and other teams at Mozilla. We’ll be adding more activities to the campaign over time.


This summer, Sam and I are exploring Servo’s capabilities and building cool demos to showcase them (and most especially WebRender, its engine “that aims to draw web content like a modern game engine”).

We could say there are two ways of experimenting. In one, you go and try out things as you think of them; in the other, you look at what works first and build a catalogue of resources that you can use to experiment with. An imperfect metaphor: going into an art store, picking a tool, trying something, then trying something else with another tool as you see fit (if the store allowed this), versus establishing which tools you can use before going to the store to buy them and build your experiment.

WebRender is very good at CSS, but we didn’t know for sure what worked. We kept having “ideas” but stumbled upon walls of lack of implementation, or we would forget that X or Y didn’t work, and then we’d try to use that feature again and found that it was still not working.

To avoid that, we built two demo/tests that repeatedly render the same type of element and a different feature applied to each one: CSS transformations and CSS filters.

CSS transformations test

This way we can quickly determine what works, and we can also compare the behaviour between browsers, which is really neat when things look “weird”. And each time we want to build a new demo, we can look at the tests and go “ah, this didn’t work, but maybe I could use that other thing”.

Our tests use two types of element for now: a DIV with the unofficial but de facto Servo logo (a doge inside a cog wheel), and an IFRAME with an image as well. We chose those elements for two reasons: the DIV is a sort of minimum building block, and IFRAMEs tend to be a little “difficult”, since they are their own document and so on… they raise the rendering bar, so to speak 😉
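As a rough illustration, the feature-matrix idea could be sketched like this (hypothetical code, not the actual test source; the transform list and class names are invented for the example):

```javascript
// Sketch of a feature-matrix test: render the same element repeatedly,
// applying one CSS transform per copy, so a feature the engine doesn't
// support shows up as a single visibly broken cell.
const TRANSFORMS = [
  'none',
  'rotate(30deg)',
  'scale(1.5)',
  'skewX(20deg)',
  'translate(10px, 10px)',
  'rotate3d(1, 1, 0, 45deg)',
];

// Pure helper: pair each feature value with a stable id, so the grid
// contents can be checked without a DOM.
function buildCases(features) {
  return features.map((value, i) => ({ id: 'case-' + i, value }));
}

// Browser-only part: one labelled cell per transform.
function renderGrid(container) {
  for (const { id, value } of buildCases(TRANSFORMS)) {
    const cell = document.createElement('div');
    cell.id = id;
    cell.className = 'cell';
    cell.textContent = value;       // label the feature under test
    cell.style.transform = value;   // the feature itself
    container.appendChild(cell);
  }
}
```

The same structure works for the filters test by swapping `style.transform` for `style.filter` and a list of filter values.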

All together, the tests looked so poignantly funny to me, I couldn’t but share them with the rest of the world:

and Eddy Bruel said something that inspired us to build another test… how many doges can your computer render before it slows down?

I love challenges, so it didn’t take me much to build the dogemania test.

It can bring my MacBook Retina to its knees very quickly, while people using desktop computers were like “so what’s the fuss? I get 9000 doges with no sweat.”
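A stress test along those lines could be sketched as follows (hypothetical code, not the actual dogemania source; the image path, placement formula, and FPS threshold are invented for illustration):

```javascript
// Sketch of a dogemania-style stress test: keep adding images until the
// measured frame rate drops below a threshold, then report the count.

// Pure helper: deterministic pseudo-random placement per index, so the
// layout logic can be checked without a DOM.
function makeDogeStyle(i, viewportW, viewportH) {
  const x = (i * 97) % viewportW;
  const y = (i * 61) % viewportH;
  const angle = (i * 37) % 360;
  return 'position:absolute;left:' + x + 'px;top:' + y + 'px;' +
         'transform:rotate(' + angle + 'deg)';
}

// Browser-only driver: add one doge per frame while the frame rate
// stays above minFps.
function spawnDogesUntilSlow(container, minFps) {
  let count = 0;
  let last = performance.now();
  function tick(now) {
    const fps = 1000 / (now - last);
    last = now;
    if (fps >= minFps) {
      const img = document.createElement('img');
      img.src = 'doge.png';  // invented asset name
      img.style.cssText = makeDogeStyle(count++, innerWidth, innerHeight);
      container.appendChild(img);
      requestAnimationFrame(tick);
    } else {
      console.log('slowed down after ' + count + ' doges');
    }
  }
  requestAnimationFrame(tick);
}
```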

It was also very funny when people sent me their screenshots of trying the test on their office screen systems:

Servo engineers were quite amused and excited! That was cool! We also seemed to have surfaced new bugs which is always exciting. And look at the title of the bug: Doges disappear unless they are large and perfectly upright. Beautiful!

To add to the excitement, this morning I found this message on my idling IRC window:

7:22 PM <jack>
7:22 PM <jack> we all love it so much i thought it needed it's own domain :)

So you can now send everyone to 🎉

Thanks, Jack!


Cameron KaiserAnd now for something completely different: what you missed by not going to Vintage Computer Festival West XI

With the exception of this man,
you all missed out by not coming to the reincarnated Vintage Computer Festival West. You had a chance to meet me and my beloved Tomy Tutor (which you can emulate on a Power Mac or Intel Mac). You had a chance to see other inferior exhibits. You didn't. So let me now make you feel bad about what you missed out on.

You could have seen the other great exhibits at the Computer History Museum,

the complete Tomy Tutor family, hosted by my lovely wife and me
that corrupted young minds from a very early age
(for which I won a special award for Most Complete and 2nd place overall in the Microcomputer category),
seen a really big MOS 6502 recreation,
played Maze War,
solved differential equations the old fashioned way,
realized that in Soviet Russia computer uses you,
given yourself terrible eyestrain,
used a very credible MacPaint clone on a Tandy Color Computer,
messed with Steve Chamberlain's mind,
played with an unusual Un*x workstation powered by the similarly unusual (and architecturally troublesome) Intel i860,
marveled at the very first Amiga 1000 (serial #1),
attempted to steal an Apple I,
snuck onto Infinite Loop on the way back south,
and got hustled off by Apple security for plastering "Ready for PowerPC upgrade" stickers all over the MacBooks in the company store.

But you didn't. And now it's too late.*

*Well, you could always come next year.

William LachancePerfherder Quarter of Contribution Summer 2016: Results

Following in the footsteps of Mike Ling’s amazing work on Perfherder in 2015 (he’s gone on to do a GSoC project), I got two amazing contributors to continue working on the project for a few weeks this summer as part of our quarter of contribution program: Shruti Jasoria and Roy Chiang.

Shruti started by adding a feature to the treeherder/perfherder backend (ability to enable or disable a new performance framework on a tentative basis), then went on to make all sorts of improvements to the Treeherder / Perfherder frontend, fixing bugs in the performance sheriffing frontend, updating code to use more modern standards (including a gigantic patch to enable a bunch of eslint rules and fix the corresponding problems).

Roy worked all over the codebase, starting with some simple frontend fixes to Treeherder, moving on to fix a large number of nits in Perfherder’s alerts view. My personal favorite is the fact that we now paginate the list of alerts inside this view, which makes navigation waaaaay back into history possible:

alert pagination

You can see a summary of their work at these links:

Thank you Shruti and Roy! You’ve helped to make sure Firefox (and Servo!) performance remains top-notch.

Air MozillaThe Joy of Coding - Episode 67

The Joy of Coding - Episode 67 mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaPrometheus and Grafana Presentation - Speaker Ben Kochie

Prometheus and Grafana Presentation - Speaker Ben Kochie Ben Kochie Site Reliability Engineer and Prometheus maintainer at SoundCloud will be presenting basics of Prometheus Monitoring and Grafana reporting.