Julia Vallera: Ada Lovelace Day Curriculum Design Workshop at Libre Learn Lab

This blog post was co-authored by Zannah Marsh and Julia Vallera

October 11 was Ada Lovelace Day, an annual celebration of the contributions of women to the fields of Science, Technology, Engineering, and Mathematics (also known as STEM). Born in 1815, Lovelace was given a rigorous education by her mathematician mother, and went on to devise a method for programming the Analytical Engine, a conceptual model for the first-ever general purpose computer. Lovelace is known as the first computer programmer. This year, Ada Lovelace Day presented the perfect opportunity for Mozilla to engage community members in something cool and inspiring around the contributions (past and future) of women and girls in STEM. Zannah from the Mozilla Science Lab (MSL) and Julia from the Mozilla Clubs program decided to team up to run a women-in-STEM-themed session at Libre Learn Lab, a two-day summit for people who create, use and implement freely licensed resources for K-12 education. We jumped at this chance to collaborate to make something fun and new that would be useful for both of our programs, and the broader Mozilla community.

At MSL and Mozilla Clubs, we’ve been experimenting with creating “train-the-trainer” materials, resources that are packed with all the info needed to run a workshop on a given topic (for example, this resource on Open Data for academic researchers). There are 200+ clubs around the world meeting, making, and learning together… and many are eager for new curriculum and activities. In both programs and across Mozilla, we’re committed to bringing learning around the open web and all the amazing work it enables (from mathematics to advocacy) to as wide an audience as possible, especially to populations that have traditionally been excluded, like women and girls. Mozilla Learning has been running online, hour-long curriculum workshops on a monthly basis, in which users discuss a topic and get to hack on curriculum together, and had planned a special Ada Lovelace Day edition. We resolved to make an Ada Lovelace Day in-person event that would link together our “train-the-trainer” model and online curriculum creation initiatives, and help meet the need for new material for Clubs… all while highlighting the issue of inclusion on the open web.

Developing the workshop

After kicking around a few ideas for our Libre Learn Lab session, we settled on an intensive collaborative curriculum development workshop to guide participants to create their own materials inspired by Ada Lovelace Day and the contributions of women and girls to STEM. We drafted the workshop plan, tested it by working through each step, and then used insights from prototyping to make tweaks. After incorporating suggestions from key stakeholders we arrived at the final product.

What we came up with is a workshop experience that gets participants from zero to a draft by prototyping curriculum in about one and a half hours. In this workshop, we made a particular effort to:

  • Encourage good, intentional collaboration by getting participants to brainstorm and agree on guidelines for working together
  • Get users to work creatively right away, and encourage them to work on a topic they find fascinating and exciting
  • Introduce the idea of design for a specific audience (AKA user-centered design) early on, and keep returning to that audience (their needs, motivations, challenges) throughout the design process
  • Create a well-structured process of idea generation, sharing, and refining (along with a matrix to organize content) to get participants past decision making on process that can often hinder creative collaboration

If you’d like to know more you can take a look at the workshop plan, carefully documented on GitHub in a way that should make it easily reusable and remixable by anyone.

Running the workshop


On October 8 we put our plans into action at Libre Learn Lab. The conference (only in its second year) had a small turnout of highly qualified participants with valuable experience in the field of education and open practices. Everyone who came to our workshop was connected to curriculum development in some way — teachers, program managers, and directors of educational organizations. After introducing the workshop theme and agenda with a short slide deck, we brainstormed new ideas and worked in groups to refine or expand our ideas and prototype new curriculum. At the end of the session, we asked participants to fill out a short survey on their experience.

The workshop development and implementation process so far has resulted in new lessons on understanding how climate affects living things and on women inventors throughout history. These are available in the GitHub repository for public use — and keep an eye on this, as we’ll be adding more lessons soon. Every workshop participant was eager to develop their materials further and use them with audiences ASAP. Thanks to Megan Black, Felix Alvarado, Victor Zuniga, and Don Davis for creating curriculum!

Wrap up and learnings

We got useful feedback from participants that will help make future evolutions of this workshop stronger. From our survey results we learned that participants loved the opportunity to collaborate, get hands-on experience and connect with Mozilla. They also liked having the matrix and sample cards as prompts. Suggested improvements included a desire for more curriculum examples, and the need for more time for prototyping. As facilitators, we’ll look for ways to encourage participants to move around the room and mix with other groups. We will look at improving our slides as an activity guide with clearer instructions. We’d like to find better ways for latecomers to jump in and find more ways to engage participants with different learning styles (for example more visual learners). We also learned that with ten or more participants it is best to have three or more facilitators in this type of intensive workshop.

We hope to find a time to run another session of the workshop in the Open Learning Circle in our Demystify the Web Space at this weekend’s MozFest — keep an eye on the #mozfest hashtag on Twitter for an announcement, or reach out to us if you’d like to join.

Air Mozilla: Mozilla Weekly Project Meeting, 24 Oct 2016

The Monday Project Meeting

Niko Matsakis: Supporting blanket impls in specialization

In my previous post, I talked about how we can separate out specialization into two distinct concepts: reuse and override. Doing so makes sense because the conditions that make reuse possible are more stringent than those that make override possible. In this post, I want to extend this idea to talk about a new rule for specialization that allows overriding in more cases. These rules are a big enabler for specialization, allowing it to accommodate many use cases that we couldn’t handle before. In particular, they enable us to add blanket impls like impl<T: Copy> Clone for T in a backwards compatible fashion, though only under certain conditions.

Revised algorithm

The key idea in this blog post is to change the rules for when some impl I specializes another impl J. Instead of basing the rules on subsets of types, I propose a two-tiered rule. Let me outline it first and then I will go into more detail afterwards.

  1. First, impls with more specific types specialize other impls (ignoring where clauses altogether).
    • So, for example, if impl I is impl<T: Clone> Clone for Option<T>, and impl J is impl<U: Copy> Clone for U, then I will be used in preference to J, at least for those types where they intersect (e.g., Option<i32>). This is because Option<T> is more specific than U.
    • For types where they do not intersect (e.g., i32 or Option<String>), only one impl applies anyway.
    • Note that the where clauses like T: Clone and U: Copy don’t matter at all for this test.
  2. However, reuse is only allowed if the full subset conditions are met.
    • So, in our example, impl I is not a full subset of impl J, because of types like Option<String>. This means that impl I could not reuse items from impl J (and hence that all items in impl J must be declared default).
  3. If the impls types are equally generic, then impls with more specific where clauses specialize other impls.
    • So, for example, if impl I is impl<T: Debug> Parse for T and impl J is impl<T> Parse for T, then impl I is used in preference to impl J where possible. In particular, types that implement Debug will prefer impl I.

Another way to express the rule is to say that impls can specialize one another in two ways:

  • if the types matched by one impl are a subset of the other, ignoring where clauses altogether;
  • otherwise, if the types matched by the two impls are the same, then if the where clauses of one impl are more selective.

Interestingly, and I’ll go into this a bit more later, this rule is not necessarily an alternative to the intersection impls I discussed at first. In fact, the two can be used together, and complement each other quite well.

Some examples

Let’s revisit some of the examples we’ve been working through and see how the rule would apply. The first three examples illustrate the first three clauses. Then I’ll show some other interesting examples that highlight various other facets and interactions of the rules.

Blanket impl of Clone for Copy types

First, we started out considering the case of trying to add a blanket impl of Clone for all Copy types:

impl<T: Copy> Clone for T {
  default fn clone(&self) -> Self {
    *self
  }
}

We were concerned before that there are existing impls of Clone that will partially overlap with this new blanket impl, but which will not be full subsets of it, and which would therefore not be considered specializations. For example, an impl for the Option type:

impl<T: Clone> Clone for Option<T> {
  fn clone(&self) -> Self {
    self.as_ref().map(|c| c.clone())
  }
}

Under these rules, this is no problem: the Option impl will take precedence over the blanket impl, because its types are more specific.

Note the interesting tie-in with the orphan rules here. When we add blanket impls, we have to worry about backwards compatibility in one of two ways:

  • existing impls will now fail coherence checks that used to pass;
  • some code that used to use an existing impl will silently change to using the blanket impl instead.

Naturally, the biggest concern is about impls in other crates, since those impls are not visible to us. Interestingly, the orphan rules require that those impls in other crates must be using some local type in their signature. Thus I believe the orphan rules ensure that existing impls in other crates will take precedence over our new blanket impl – that is, we are guaranteed that they are considered legal specializations, and hence will pass coherence, and moreover that the existing impl is used in preference over the blanket one.

Dump trait: Reuse requires full subset

In previous blog post I gave an example of a Dump trait that had a blanket impl for Debug things:

trait Dump {
    fn display(&self);
    fn debug(&self);
}

impl<T> Dump for T // impl A
    where T: Debug,
{
    default fn display(&self) {
        ...
    }

    default fn debug(&self) {
        println!("{:?}", self);
    }
}
The idea was that some other crate might want to specialize Dump just to change how display works, perhaps trying something like this:

struct Widget<T> { ... }

impl<T: Debug> Debug for Widget<T> { ... }

// impl B (note that it is defined for all `T`, not `T: Debug`):
impl<T> Dump for Widget<T> {
    fn display(&self) {
        ...
    }
}
Here, impl B only defines the display() item from the trait because it intends to reuse the existing debug() method from impl A. However, this poses a problem: impl A only applies when Widget<T>: Debug, which may be true but is not always true. In particular, impl B is defined for any Widget<T>.

Under the rules I gave, this is an error. Here we have a scenario where impl B does specialize impl A (because its types are more specific), but impl B is not a full subset of impl A, and therefore it cannot reuse items from impl A. It must provide a full definition for all items in the trait (this also implies that every item in impl A must be declared as default, as is the case here).

Note that either of these two alternatives for impl B would be fine:

// Alternative impl B.1: provides all items
impl<T> Dump for Widget<T> {
    fn display(&self) {...}
    fn debug(&self) {...}
}

// Alternative impl B.2: full subset
impl<T: Debug> Dump for Widget<T> {
    fn display(&self) {...}
}

There is some interaction with backwards compatibility here. If the impl of Dump for Widget were added before impl A, then it necessarily would have defined all items (as in impl B.1), and hence there would be no error when impl A is added later.

Using where clauses to detect Debug

You may have noticed that if you do an index into a map and the key is not found, the error message is kind of lackluster:

use std::collections::HashMap;

fn main() {
    let mut map = HashMap::new();
    map.insert("a", "b");
    println!("{}", map["c"]);
    // Error: thread 'main' panicked at 'no entry found for key', ../src/libcore/option.rs:700
}

In particular, it doesn’t tell you what key you were looking for! I would have liked to see ‘no entry found for c’. Well, the reason for this is that the map code doesn’t require that the key type K have a Debug impl. That’s good, but it’d be nice if we could get a better error if a Debug impl happens to exist.

We might do so by using specialization. Let’s imagine defining a trait that can be used to panic when a key is not found. Thus when a map fails to find a key, it invokes key.not_found():

trait KeyNotFound {
    fn not_found(&self) -> !;
}

impl<T> KeyNotFound for T { // impl A
    default fn not_found(&self) -> ! {
        panic!("no entry found for key")
    }
}

Now we could provide a specialized impl that kicks in when Debug is available:

impl<T: Debug> KeyNotFound for T { // impl B
    fn not_found(&self) -> ! {
        panic!("no entry found for key `{:?}`", self)
    }
}

Note that the types for impl B are not more specific than impl A, unless you consider the where clauses. That is, they are both defined for any type T. It is only when we consider the where clauses that we see that impl B can in fact be judged more specific than A. This is the third clause in my rules (it also works with specialization today).
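To see how the pieces fit together, here is a compilable sketch of a lookup path invoking the KeyNotFound hook. Only the Debug-based impl is shown, since having both overlapping impls requires the unstable specialization feature; the lookup function and its signature are purely illustrative, not actual standard library API.

```rust
use std::fmt::Debug;

trait KeyNotFound {
    fn not_found(&self) -> !;
}

// Stand-in for impl B: panic with the key included in the message.
impl<T: Debug> KeyNotFound for T {
    fn not_found(&self) -> ! {
        panic!("no entry found for key `{:?}`", self)
    }
}

// Hypothetical map lookup that calls the hook when a key is missing.
fn lookup<'a, K: Eq + KeyNotFound, V>(entries: &'a [(K, V)], key: &K) -> &'a V {
    for (k, v) in entries {
        if k == key {
            return v;
        }
    }
    key.not_found()
}
```

Since not_found() returns !, the diverging call coerces to the function’s return type, so the happy path needs no Option wrapping.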

Fourth example: AsRef

One longstanding ergonomic problem in the standard library has been that we could not add all of the impls of the AsRef trait that we wanted. T: AsRef<U> is a trait that says an `&T` reference can be converted into an `&U` reference. It is particularly useful for types that support slicing, like String: AsRef<str> – this states that an &String can be sliced into an &str reference.

There are a number of blanket impls for AsRef that one might expect:

  • Naturally one might expect that T: AsRef<T> would always hold. That just says that an &T reference can be converted into another &T reference (duh) – which is sometimes called being reflexive.
  • One might also expect that AsRef would be compatible with deref coercions. That is, if I can convert an &U reference to an &V reference, then I can also convert an &&U reference to an &V reference.

Unfortunately, if you try to combine both of those two cases, the current coherence rules reject it (I’m going to ignore lifetime parameters here for simplicity):

impl<T> AsRef<T> for T { } // impl A

impl<U, V> AsRef<V> for &U
    where U: AsRef<V> { }  // impl B

It’s clear that these two impls, at least potentially, overlap. In particular, a trait reference like &Foo: AsRef<&Foo> could be satisfied by either one (assuming that Foo: AsRef<&Foo>, which is probably not true in practice, but could be implemented by some type Foo in theory).

At the same time, it’s clear that neither represents a subset of the other, even if we ignore where clauses. Just consider these examples:

  • String: AsRef<String> (matches impl A, but not impl B)
  • &String: AsRef<String> (matches impl B, but not impl A)

However, we’ll see that we can satisfy this example if we incorporate intersection impls; we’ll cover this later.

Detailed explanation: drilling into subset of types

OK, that was the high-level summary, let’s start getting a bit more into the details. In this section, I want to discuss how to implement this new rule. I’m going to assume you’ve read and understood the Algorithmic formulation section of the specialization RFC, which describes how to implement the subset check (if not, go ahead and do so, it’s quite readable – nice job aturon!).

Implementing the rules today basically consists of two distinct tests, applied in succession. RFC 1210 describes how, given two impls I and J, we can define an ordering Subset(I, J) that indicates I matches a subset of the types of J (the RFC calls it I <= J). The current rules then say that I specializes J if Subset(I, J) holds but Subset(J, I) does not.

To decide if Subset(I, J) holds, we apply two tests (both of which must pass):

  • Type(I, J): For any way of instantiating I.vars, there is some way of instantiating J.vars such that the Self type and trait type parameters match up.
    • Here I.vars refers to the generic parameters of impl I
    • The actual technique here is to skolemize I.vars and then attempt unification. If unification succeeds, then Type(I, J) holds.
  • WhereClause(I, J): For the instantiation of I.vars used in Type(I, J), if you assume I.wc holds, you can prove J.wc.
    • Here I.wc refers to the where clauses of impl I.
    • The actual technique here is to consider I.wc as true, and attempt to prove J.wc using the standard trait machinery.

The algorithm to test whether an impl I can specialize an impl J is this:

  • Specializes(I, J):
    • If Type(I, J) holds:
      • If Type(J, I) does not hold:
        • true
      • Otherwise, if WhereClause(I, J) holds:
        • If WhereClause(J, I) does not hold:
          • true
        • else:
          • false
      • Otherwise:
        • false
    • Otherwise:
      • false

You could also write this as Specializes(I, J) is:

Type(I, J) && (!Type(J, I) || WhereClause(I, J) && !WhereClause(J, I))
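To make the shape of that predicate concrete, here is a small self-contained sketch of Specializes as a pure boolean function over the results of the two sub-tests; in the compiler, Type and WhereClause would of course be computed by unification and trait solving, which is elided here.

```rust
// Sketch only: the four booleans stand in for the results of
// Type(I, J), Type(J, I), WhereClause(I, J), and WhereClause(J, I).
fn specializes(type_ij: bool, type_ji: bool, wc_ij: bool, wc_ji: bool) -> bool {
    type_ij && (!type_ji || (wc_ij && !wc_ji))
}
```

For the Clone example, the Option<T> impl has strictly more specific types (Type holds one way but not the other), so it specializes the blanket impl regardless of where clauses; for the two Parse impls the types are equally generic, and the where clauses break the tie.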

Unlike before, we also need a separate test to check whether reuse is legal. Reuse is legal if Subset(I, J) holds.

You can view the Specializes(I, J) test as being based on a partial order, where the <= predicate is the lexicographic combination of two other partial orders, Type(I, J) and WhereClause(I, J). This implies that it is transitive.

Combining with intersection impls

It’s interesting to note that this rule can also be combined with the rule for intersection impls. The idea of intersection impls is really somewhat orthogonal to what exact test is being used to decide which impl specializes another. Essentially, whereas without intersection impls we say: two impls can overlap so long as one of them specializes the other, we would now add the additional possibility that two impls can overlap so long as some other impl specializes both of them.

This is helpful for realizing some other patterns that we wanted to get out of specialization but which, until now, we could not.

Example: AsRef

We saw earlier that this new rule doesn’t allow us to add the reflexive AsRef impl that we wanted to add. However, using an intersection impl, we can make progress. We can basically add a third impl:

impl<T> AsRef<T> for T { } // impl A

impl<U, V> AsRef<V> for &U
    where U: AsRef<V> { }  // impl B

impl<W> AsRef<&W> for &W { ... } // impl C

Impl C is a specialization of both of the others, since every type it can match can also be matched by the others. So this would be accepted, since impl A and impl B overlap but have a common specializer.

(As an aside, you might also expect a generic transitivity impl, like impl<T,U,V> AsRef<V> for T where T: AsRef<U>. I haven’t thought much about whether such an impl would work with the specialization rules; I’m pretty sure that we’d have to improve the trait matcher implementation in any case to make it work, as I think right now it would quickly overflow.)

Example: Overlapping blanket impls for Dump

Let’s see another, more conventional example where an intersection impl might be useful. We’ll return to our Dump trait. If you recall, it had a blanket impl that implemented Dump for any type T where T: Debug:

trait Dump {
    fn display(&self);
    fn debug(&self);
}

impl<T> Dump for T // impl A
    where T: Debug,
{
    default fn display(&self) {
        ...
    }

    default fn debug(&self) {
        println!("{:?}", self);
    }
}

But we might also want another blanket impl for types where T: Display:

impl<T> Dump for T // impl B
    where T: Display,
{
    default fn display(&self) {
        println!("{}", self);
    }

    default fn debug(&self) {
        ...
    }
}
Now we have a problem. Impl A and B clearly potentially overlap, but (a) neither is more specific in terms of its types (both apply to any type T, so Type(A, B) and Type(B, A) will both hold) and (b) neither is more specific in terms of its where-clauses: one applies to types that implement Debug, and one applies to types that implement Display, but clearly types can implement both.

With intersection impls we could resolve this error by providing a third impl for types T where T: Debug + Display:

impl<T> Dump for T // impl C
    where T: Debug + Display,
{
    default fn display(&self) {
        println!("{}", self);
    }

    default fn debug(&self) {
        println!("{:?}", self);
    }
}

Orphan rules, blanket impls, and negative reasoning

Traditionally, we have said that it is considered backwards compatible (in terms of semver) to add impls for traits, with the exception of blanket impls that apply to all T, even if T is guarded by some traits (like the impls we saw for Dump in the previous section). This is because if I add an impl like impl<T: Debug> Dump for T where none existed before, some other crate may already have an impl like impl Dump for MyType, and then if MyType: Debug, we would have an overlap conflict, and hence that downstream crate would not compile (see RFC 1023 for more information on these rules).

This new proposed specialization rule has the potential to change that balance. In fact, at first you might think that adding a blanket impl would always be legal, as long as all of its members are declared default. After all, any pre-existing impl from another crate must, because of the orphan rules, have more specific types, and will thus take precedence over the blanket impl (moreover, since there was nothing for this impl to inherit from before, it must already be providing all of its own items). So something like impl Dump for MyType would still be legal, right?

But there is actually still a risk from blanket impls around negative reasoning. To see what I mean, let’s continue with a simplified variant of the Dump example from the previous section which doesn’t use intersection impls. So imagine that we have the Dump trait and the following impls:

// crate `dump`
trait Dump { }
impl<T: Display> Dump for T { .. }
impl<T: Debug + Display> Dump for T { .. }

So, these are pre-existing impls. Now, imagine that in the standard library, we decided to add a kind of fallback impl of Debug that says any type which implements `Display` automatically implements `Debug`:

impl<T: Display> Debug for T {
  fn fmt(&self, fmt: &mut Formatter) -> Result<(), Error> {
    Display::fmt(self, fmt)
  }
}

Interestingly, this impl creates a problem for the crate dump! Before, its two impls were well-ordered; one applied to types that implement Display, and one applied to types that implement both Debug and Display. But with this new impl, all types that implement Display also implement Debug, so this distinction is meaningless.

But wait, you cry! That impl looks awfully similar to our motivating example from the very first post! Remember that this all started because we wanted to implement Clone for all Copy types:

impl<T: Copy> Clone for T { .. }

So is that actually illegal?

It turns out that there is a crucial difference between these two. It does not lie in the impls, but rather in the traits. In particular, the Copy trait is a subtrait of Clone – that is, anything which is copyable must also be cloneable. But Display and Debug have no relationship; in fact, the blanket impl interconverting between them is effectively imposing an undeclared subtrait relationship Display: Debug. After all, if some type T implements Display, we are now guaranteed that it also implements Debug.

So this suggests that the new rule for semver compatibility is that one can add blanket impls after the fact, but only if a subtrait relationship already existed.

As an aside, this – along with the similar example raised by withoutboats and reddit user oconnor663 – strongly suggests to me that traits need to predeclare strong relationships, like subtraits but also mutual exclusion if we ever support that, at the point when they are created. I know withoutboats has some interesting thoughts in this direction. =)

However, another possibility that aturon raised is to use a more syntactic criteria for when something is more specialized – in that case, Debug+Display would be considered more specialized than Display, even if in reality they are equivalent. This may wind up being easier to understand – and more flexible – even if it is less smart.


Conclusion

This post lays out an alternative specialization predicate that I believe helps to overcome a lot of the shortcomings of the current subset rule. The rule is fairly simple to describe: impls with more specific types get precedence. If the types of two impls are equally generic, then the impl with more specific where-clauses gets precedence. I claim this rule is intuitive in practice; perhaps more intuitive than the current rule.

This predicate allows for a number of scenarios that the current specialization rule excludes, but which we wanted initially. The ones I have considered mostly fall into the category of adding an impl of a supertrait in terms of a subtrait backwards compatibly:

  • impl<T: Copy> Clone for T { ... }
  • impl<T: Eq> PartialEq for T { ... }
  • impl<T: Ord> PartialOrd for T { ... }

If we combine with intersection impls, we can also accommodate the AsRef impl, and also get better support for having overlapping blanket impls. I’d be interested to hear about other cases where the coherence rules were limiting that may be affected by specialization, so we can see how they fare.

One sour note has to do with negative reasoning. Specialization based on where clauses (orthogonally from the changes proposed in this post, in fact) introduces a kind of negative reasoning that is not currently subject to the rules in RFC 1023. This implies that crates cannot add blanket impls with impunity. In particular, introducing subtrait relationships can still cause problems, which affects a number of suggested bridge cases:

  • impl<R, T: Add<R> + Clone> AddAssign<R> for T
    • anything that has Add and Clone is now AddAssign
  • impl<T: Display> Debug for T
    • anything that is Display is now Debug

There may be some room to revise the specialization rules to address this, by tweaking the WhereClause(I, J) test to be more conservative, or to be more syntactical in nature. This will require some further experimentation and tinkering.


Please leave comments in this internals thread.

Hannes Verschore: Request for hardware

Do you have a netbook (from around 2011) with an AMD processor? Please check whether it has a Bobcat processor (C-30, C-50, C-60, C-70, E-240, E-300, E-350, E-450). If you have one and are willing to help us by giving VPN/SSH access, please contact me (hverschore [at] mozilla.com).

Improving stability and decreasing the crash rate is an ongoing effort for all our teams in Mozilla. That is also true for the JS team. We have fuzzers abusing our JS engine, we review each other’s code in order to find bugs, we have static analyzers looking at our code, we have best practices, and we look at crash-stats trying to fix the underlying bugs. Lately we have identified a source of crashes in our JIT engine on specific hardware, but we haven’t been able to find a solution yet.

Our understanding of the bug is quite limited, but we know it is related to the generated code. We have tried to introduce some workarounds to fix this issue, but none have worked so far, and the turnaround is quite slow: we have to devise a possible workaround, release it to Nightly, and wait for crash-stats to see if it fixed the issue.

That is the reason for our call for hardware. We don’t have the hardware ourselves, and having access to the correct hardware would let us test potential fixes much more quickly until we find a solution. It would help us a lot.

This is the first time our team has tried to leverage our community in order to find specific hardware, and I hope it works out. We have a backup plan, but we are hoping that somebody reading this can make our lives a little bit easier. We would appreciate it a lot if everybody could check whether they still have a laptop/netbook with a Bobcat AMD processor (C-30, C-50, C-60, C-70, E-240, E-300, E-350, E-450). For example, this processor was used in AMD variants of the Asus Eee PC. If you do, please contact me at (hverschore [at] mozilla.com) in order to discuss a way to access the laptop for a limited time.

François Marier: Tweaking Referrers For Privacy in Firefox

The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective.

Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.


Referrer Basics

In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation.

First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will however include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config.

Secondly, using the new Referrer Policy specification web developers can override the default behaviour for their pages, including on a per-element basis. This can be used both to increase or reduce the amount of information present in the referrer.

Legitimate Uses

Because the Referer header has been around for so long, a number of techniques rely on it.

Armed with the Referer information, analytics tools can figure out:

  • where website traffic comes from, and
  • how users are navigating the site.

Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject that form submission if the request originated from a different website.

It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (i.e. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).
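As an illustration, a server-side Referer check along these lines might look like the following sketch; the function name and the exact matching policy are assumptions for the example, not any specific framework’s API.

```rust
// Accept a state-changing request only when the Referer is our own
// origin, or a URL under it. Requests with no Referer are rejected
// here; real deployments have to decide how strict to be about that.
fn referer_allowed(referer: Option<&str>, own_origin: &str) -> bool {
    match referer {
        Some(r) => r == own_origin || r.starts_with(&format!("{}/", own_origin)),
        None => false,
    }
}
```

Note that matching on the origin followed by a slash, rather than a bare prefix, avoids being fooled by domains like example.com.evil.com.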

Problems with the Referrer

Unfortunately, this header also creates significant privacy and security concerns.

The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way.

These leaks can also expose private, personally identifiable information when it is part of the query string. One of the most high-profile examples was the accidental leakage of user searches by healthcare.gov.

Solutions for Firefox Users

While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers.

In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to:

  • 0 to never send the header
  • 1 to send the header only when clicking on links and similar elements
  • 2 (default) to send the header on all requests (e.g. images, links, etc.)

It's also possible to put a limit on the maximum amount of information that the header will contain by setting the network.http.referer.trimmingPolicy to:

  • 0 (default) to send the full URL
  • 1 to send the URL without its query string
  • 2 to only send the scheme, host and port

or using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to only restrict the contents of referrers attached to cross-origin requests.

Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming policies allow.
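To make the trimming levels concrete, here is a sketch that mimics what the three trimmingPolicy values do to a referrer value (based on the descriptions above, not on Firefox's actual code):

```python
from urllib.parse import urlsplit

def trim_referrer(url, policy):
    """Approximate what network.http.referer.trimmingPolicy does to the
    header value (a sketch based on the descriptions above)."""
    parts = urlsplit(url)
    if policy == 0:                      # full URL
        return url
    if policy == 1:                      # drop the query string
        return f"{parts.scheme}://{parts.netloc}{parts.path}"
    if policy == 2:                      # scheme, host and port only
        return f"{parts.scheme}://{parts.netloc}/"
    raise ValueError("unknown policy")

url = "https://example.com/account?session=s3cret"
print(trim_referrer(url, 0))  # https://example.com/account?session=s3cret
print(trim_referrer(url, 1))  # https://example.com/account
print(trim_referrer(url, 2))  # https://example.com/
```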

Another approach is to disable the Referer when doing cross-origin requests (from one site to another). The network.http.referer.XOriginPolicy preference can be set to:

  • 0 (default) to send the referrer in all cases
  • 1 to send a referrer only when the base domains are the same
  • 2 to send a referrer only when the full hostnames match


If you try to remove all referrers (i.e. network.http.sendRefererHeader = 0), you will most likely run into problems on a number of sites, for example:

The first two have been worked around successfully by setting network.http.referer.spoofSource to true, an advanced setting which always sends the destination URL as the referrer, thereby not leaking anything about the original page.

Unfortunately, the last two are examples of the kind of breakage that can only be fixed through a whitelist (an approach supported by the smart referer add-on) or by temporarily using a different browser profile.

My Recommended Settings

As with my cookie recommendations, I recommend strengthening your referrer settings, but not disabling (or spoofing) the referrer entirely.

While spoofing does solve many of the breakage problems mentioned above, it also effectively disables the anti-CSRF protections that some sites may rely on and that have tangible user benefits. A better approach is to limit the amount of information that leaks through cross-origin requests.

If you are willing to live with some amount of breakage, you can simply restrict referrers to the same site by setting:

network.http.referer.XOriginPolicy = 2

or to sites which belong to the same organization (i.e. the same base domain, one level below the public suffix) using:

network.http.referer.XOriginPolicy = 1

This prevents leaks to third-parties while giving websites all of the information that they can already see in their own server logs.

On the other hand, if you prefer a weaker but more compatible solution, you can trim cross-origin referrers down to just the scheme, hostname and port:

network.http.referer.XOriginTrimmingPolicy = 2

I have not yet found user-visible breakage using this last configuration. Let me know if you find any!
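For reference, the user-level prefs discussed in this post can be collected into a user.js fragment (the pref names are the real ones discussed above; which lines you enable depends on your tolerance for breakage):

```js
// Referrer hardening via user.js — pref names as discussed above.
user_pref("privacy.trackingprotection.enabled", true);      // block known trackers
user_pref("network.http.referer.XOriginTrimmingPolicy", 2); // cross-origin: scheme, host and port only
// Stricter alternatives, at the cost of some breakage:
// user_pref("network.http.referer.XOriginPolicy", 1);      // same base domain only
// user_pref("network.http.referer.XOriginPolicy", 2);      // same host only
```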

Smokey ArdissonThoughts on the Mac OS X upgrade cycle

Michael Tsai recently linked to Ricardo Mori’s lament on the unfashionable state of the Mac, quoting the following passage:

Having a mandatory new version of Mac OS X every year is not necessarily the best way to show you’re still caring, Apple. This self-imposed yearly update cycle makes less and less sense as time goes by. Mac OS X is a mature operating system and should be treated as such. The focus should be on making Mac OS X even more robust and reliable, so that Mac users can update to the next version with the same relative peace of mind as when a new iOS version comes out.

I wonder how much the mandatory yearly version cycle is due to the various iOS integration features—which, other than the assorted “bugs introduced by rewriting stuff that ‘just worked,’” seem to be the main changes in every Mac OS X (er, macOS, previously OS X) version of late.

Are these integration features so wide-ranging that they touch every part of the OS and really need an entire new version to ship safely, or are they localized enough that they could safely be released in a point update? Of course, even if they are safe to release in an update, it’s still probably easier on Apple’s part to state “To use this feature, your Mac must be running macOS 10.18 or newer, and your iOS device must be running iOS 16 or newer” instead of “To use this feature, your Mac must be running macOS 10.15.5 or newer, and your iOS device must be running iOS 16 or newer” when advising users on the availability of the feature.

At this point, as Mori mentioned, Mac OS X is a mature, stable product, and Apple doesn’t even have to sell it per se anymore (although for various reasons, they certainly want people to continue to upgrade). So even if we do have to be subjected to yearly Mac OS X releases to keep iOS integration features coming/working, it seems like the best strategy is to keep the scope of those OS releases small (iOS integration, new Safari/WebKit, a few smaller things here and there) and rock-solid (don’t rewrite stuff that works fine, fix lots of bugs that persist). I think a smaller, more scoped release also lessens the “upgrade burnout” effect—there’s less fear and teeth-gnashing over things that will be broken and never fixed each year, but there’s still room for surprise and delight in small areas, including fixing persistent bugs that people have lived with for upgrade after upgrade. (Regressions suck. Regressions that are not fixed, release after release, are an indication that your development/release process sucks or your attention to your users’ needs sucks. Neither is a very good omen.) And when there is something else new and big, perhaps it has been in development and QA for a couple of cycles so that it ships to the user solid and fully-baked.

I think the need not to have to “sell” the OS presents Apple with a unique opportunity that I can imagine some vendors would kill to have—the ability to improve the quality of the software—and thus the user experience—by focusing on the areas that need attention (whatever they may be, new features, improvements, old bugs) without having to cram in a bunch of new tentpole items to entice users to purchase the new version. Even in terms of driving adoption, lots of people will upgrade for the various iOS integration features alone, and with a few features and improved quality overall, the adoption rate could end up being very similar. Though there’s the myth that developers are only happy when they get to write new code and new features (thus the plague of rewrite-itis), I know from working on Camino that I—and, more importantly, most of our actual developers1—got enormous pleasure and satisfaction from fixing bugs in our features, especially thorny and persistent bugs. I would find it difficult to believe that Apple doesn’t have a lot of similar-tempered developers working for it, so keeping them happy without cranking out tons of brand-new code shouldn’t be overly difficult.

I just wish Apple would seize this opportunity. If we are going to continue to be saddled with yearly Mac OS X releases (for whatever reason), please, Apple, make them smaller, tighter, more solid releases that delight us in how pain-free and bug-free they are.


1 Whenever anyone would confuse me for a real developer after I’d answered some questions, my reply was “I’m not a developer; I only play one on IRC.”2 ↩︎
2 A play on the famous television commercial disclaimer, “I’m not a doctor; I only play one on TV,” attributed variously, perhaps first to Robert Young, television’s Marcus Welby, M.D. from 1969-1976.3 ↩︎
3 The nested footnotes are a tribute to former Mozilla build/release engineer J. Paul Reed (“preed” on IRC), who was quite fond of them. ↩︎

Daniel StenbergAnother wget reference was Bourne

wget-is-not-a-crimeBack in 2013, it came to light that Wget was used to copy the files Private Manning was convicted of having leaked. Around that time, the EFF made and distributed stickers saying “wget is not a crime”.

Weirdly enough, it was hard to find a high resolution version of that image today but I’m showing you a version of it on the right side here.

In the 2016 movie Jason Bourne, Swedish actress Alicia Vikander is seen working on her laptop at around 1:16:30 into the movie and there’s a single visible sticker on that laptop. Yeps, it is for sure the same EFF sticker. There’s even a very brief glimpse of the top of the red EFF dot below the “crime” word.


Also recall the wget occurrence in The Social Network.

Yunier José Sosa VázquezUpdate for Firefox 49

Today Mozilla published a new update for its browser, this time version 49.0.2.

This release fixes small problems that some users have been experiencing, so we recommend updating.

You can get it from our Downloads section for Linux, Mac, Windows and Android, in Spanish and English.

Air MozillaWebdev Beer and Tell: October 2016

Webdev Beer and Tell: October 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

QMOFirefox 51.0a2 Aurora Testday, October 28th

Hello Mozillians,

We are happy to let you know that on Friday, October 28th, we are organizing the Firefox 51.0a2 Aurora Testday. We’ll be focusing our testing on the following features: Zoom indicator and Downloads dropmarker.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Hal WineUsing Auto Increment Fields to Your Advantage


I just found, and read, Clément Delafargue’s post “Why Auto Increment Is A Terrible Idea” (via @CoreRamiro). I agree that an opaque primary key is very nice and clean from an information architecture viewpoint.

However, in practice, a serial (or monotonically increasing) key can be handy to have around. I was reminded of this during a recent situation where we (app developers & ops) needed to be highly confident that a replica was consistent before performing a failover. (None of us had access to the back end to see what the DB thought the replication lag was.)
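A sketch of the kind of consistency probe a serial key makes possible, using an in-memory SQLite pair as a stand-in for a primary and its replica (table and column names are invented for the example):

```python
import sqlite3

def max_id(conn):
    """Highest auto-increment id seen so far — a cheap consistency probe
    that an opaque (e.g. UUID) primary key would not give you."""
    return conn.execute("SELECT COALESCE(MAX(id), 0) FROM events").fetchone()[0]

# Illustrative stand-ins for a primary and its replica:
primary = sqlite3.connect(":memory:")
replica = sqlite3.connect(":memory:")
for db in (primary, replica):
    db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")

primary.execute("INSERT INTO events (payload) VALUES ('a')")
primary.execute("INSERT INTO events (payload) VALUES ('b')")
replica.execute("INSERT INTO events (payload) VALUES ('a')")  # replica lagging by one row

lag = max_id(primary) - max_id(replica)
print(lag)  # 1 -> not yet safe to fail over
```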


Christian HeilmannDecoded Chats – second edition featuring Monica Dinculescu on Web Components

At SmashingConf Freiburg this year I was lucky enough to find some time to sit down with Monica Dinculescu (@notwaldorf) and chat with her about Web Components, extending the web, JavaScript dependency and how to be a lazy but dedicated developer. I’m sorry about the sound of the recording and some of the harsher cuts, but we were interrupted by tourists trying to see the great building we were in, who couldn’t read the signs saying it was closed for the day.

You can see the video and get the audio recording of our chat over at the Decoded blog:

Monica saying hi

I played a bit of devil’s advocate interviewing Monica as she has a lot of great opinions and the information to back up her point of view. It was very enjoyable seeing the current state of the web through the eyes of someone talented who just joined the party. It is far too easy for those who have been around for a long time to get stuck in a rut of trying not to break up with the past or considering everything broken as we’ve seen too much damage over the years. Not so Monica. She is very much of the opinion that we can trust developers to do the right thing and that by giving them tools to analyse their work the web of tomorrow will be great.

I’m happy that there are people like her in our market. It is good to pass the torch to those with a lot of dedication rather than those who are happy to use whatever works.

Support.Mozilla.OrgWhat’s Up with SUMO – 20th October

Hello, SUMO Nation!

We had a bit of a break, but we’re back! First, there was the meeting in Toronto with the Lithium team about the migration (which is coming along nicely), and then I took a short holiday. I missed you all, it’s great to be back, time to see what’s up in the world of SUMO!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings

  • LATEST ONE: 19th of October – you can read the notes here and see the video at AirMozilla.
  • NEXT ONE: happening on the 26th of October!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.




Support Forum

Knowledge Base & L10n

  • We are 3 weeks before the next release / 1 week after the current release. What does that mean? (Reminder: we are following the process/schedule outlined here).
    • Joni will finalize next release content by the end of this week; no work for localizers for the next release yet
    • All existing content is open for editing and localization as usual; please focus on localizing the most recent / popular content
  • Migration: please check this spreadsheet to see which locales are going to be migrated in the first wave
    • Locale packages that will be migrated are marked as “match” and “needed” in the spreadsheet
    • Other locales will be stored as an archive at sumo-archive.mozilla.org – and will be added whenever there are contributors ready to keep working on them
    • We are also waiting for confirmation about the mechanics of l10n, we may be launching the first version without an l10n system built in – but all the localized content and UI will be there in all the locales listed in the spreadsheet above
  • Remember the MozPizza L10n Hackathon in Brazil? Take a look here!


  • for iOS
    • No news, keep biting the apple ;-)

…Whew, that’s it for now, then! I hope you could catch up with everything… I’m still digging through my post-holiday inbox ;-) Take care, stay safe, and keep rocking the helpful web! WE <3 YOU ALL!

Cameron KaiserWe need more desktop processor branches

Ars Technica is reporting an interesting attack that uses a side-channel exploit in the Intel Haswell branch target buffer, or BTB (kindly ignore all the political crap Ars has been posting lately; I'll probably not read any more articles of theirs until after the election). The idea is to break through ASLR, or address space layout randomization, to find pieces of code one can string together or directly attack for nefarious purposes. ASLR defeats a certain class of attacks that rely on the exact address of code in memory. With ASLR, an attacker can no longer count on code being in a constant location.

Intel processors since at least the Pentium use a relatively simple BTB to aid these computations when finding the target of a branch instruction. The buffer is essentially a dictionary with virtual addresses of recent branch instructions mapping to their predicted target: if the branch is taken, the chip has the new actual address right away, and time is saved. To save space and complexity, most processors that implement a BTB only do so for part of the address (or they hash the address), which reduces the overhead of maintaining the BTB but also means some addresses will map to the same index into the BTB and cause a collision. If the addresses collide, the processor will recover, but it will take more cycles to do so. This is the key to the side-channel attack.
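The collision condition can be illustrated with the simplified indexing model described here, where only the low bits of the address are used (real BTB index/hash functions are undocumented and vary by microarchitecture; the addresses below are hypothetical):

```python
def btb_index(addr, bits=30):
    """Index a branch address into the BTB using only its low bits —
    the simplified model described above; real hash functions vary."""
    return addr & ((1 << bits) - 1)

kernel_branch = 0xFFFF_FFFF_8100_4A20   # hypothetical kernel branch address
spy_branch    = 0x0000_7F3A_0100_4A20   # user-space branch with the same low 30 bits

print(hex(btb_index(kernel_branch)))
print(btb_index(kernel_branch) == btb_index(spy_branch))  # True: the entries collide
```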

(For the record, the G3 and the G4 use a BTIC instead, or a branch target instruction cache, where the table actually keeps two of the target instructions so it can be executing them while the rest of the branch target loads. The G4/7450 ("G4e") extends the BTIC to four instructions. This scheme is highly beneficial because these cached instructions essentially extend the processor's general purpose caches with needed instructions that are less likely to be evicted, but is more complex to manage. It is probably for this reason the BTIC was dropped in the G5 since the idea doesn't work well with the G5's instruction dispatch groups; the G5 uses a three-level hybrid predictor which is unlike either of these schemes. Most PowerPC implementations also have a return address stack for optimizing the blr instruction. With all of these unusual features Power ISA processors may be vulnerable to a similar timing attack but certainly not in the same way and probably not as predictably, especially on the G5 and later designs.)

To get around ASLR, an attacker needs to find out where the code block of interest actually got moved to in memory. Certain attributes make kernel ASLR (KASLR) an easier nut to crack. For performance reasons, usually only part of the kernel address is randomized, in open-source operating systems this randomization scheme is often known, and the kernel is always loaded fully into physical memory and doesn't get swapped out. While the location it is loaded to is also randomized, the kernel is mapped into the address space of all processes, so if you can find its address in any process you've also found it in every process. Haswell makes this even easier because all of the bits the Linux kernel randomizes are covered by the low 30 bits of the virtual address Haswell uses in the BTB index, which covers the entire kernel address range and means any kernel branch address can be determined exactly. The attacker finds branch instructions in the kernel code that service a particular system call (for example, by disassembling it), computes all the possible locations that branch could be at (feasible because of the smaller search space), creates a "spy" function with a branch instruction positioned to map to the same BTB index and force a collision, executes the system call, and then executes the spy function. If the spy process (which times itself) determines its branch took longer than an average branch, it logs a hit, and the delta between ordinary execution and a BTB collision is unambiguously high (see Figure 7 in the paper). Now that you have the address of that code block branch, you can deduce the address of the entire kernel code block (because it's generally in the same page of memory due to the typical granularity of the randomization scheme), and try to get at it or abuse it. The entire process can take just milliseconds on a current CPU.

The kernel is often specifically hardened against such attacks, however, and there are more tempting targets though they need more work. If you want to attack a user process (particularly one running as root, since that will have privileges you can subvert), you have to get your "spy" on the same virtual core as the victim process or otherwise they won't share a BTB -- in the case of the kernel, the system call always executes on the same virtual core via context switch, but that's not the case here. This requires manipulating the OS' process scheduler or running lots of spy processes, which slows the attack but is still feasible. Also, since you won't have a kernel system call to execute, you have to get the victim to do a particular task with a branch instruction, and that task needs to be something repeatable. Once this is done, however, the basic notion is the same. Even though only a limited number of ASLR bits can be recovered this way (remember that in Haswell's case, bit 30 and above are not used in the BTB, and full Linux ASLR uses bits 12 to 40, unlike the kernel), you can dramatically narrow the search space to the point where brute-force guessing may be possible. The whole process is certainly much more streamlined than earlier ASLR attacks which relied on fragile things like cache timing.

As it happens, software mitigations can blunt or possibly even completely eradicate this exploit. Brute-force guessing addresses in the kernel usually leads to a crash, so anything that forces the attacker to guess the address of a victim routine in the kernel will likely cause the exploit to fail catastrophically. Get a couple of those random address bits outside the 30 bits Haswell uses in the BTB table index and bingo, a relatively simple fix. One could also make ASLR more granular to occur at the function, basic block or even single instruction level rather than merely randomizing the starting address of segments within the address space, though this is much more complicated. However, hardware is needed to close the gap completely. A proper hardware solution would be to either use most or all of the virtual address in the BTB to reduce the possibility of a collision, and/or to add a random salt to whatever indexing or hashing function is used for BTB entries that varies from process to process so a collision becomes less predictable. Either needs a change from Intel.
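The salting idea can be sketched with a keyed hash standing in for the BTB index function (purely illustrative: this is a proposed mitigation, not Intel's actual design, and the addresses are hypothetical):

```python
import hashlib

def salted_btb_index(addr, salt, bits=30):
    """With a per-process salt mixed into the index hash, an attacker can no
    longer compute offline which of its own branch addresses aliases a
    victim's. Illustration only — real hardware would use something far
    cheaper than a cryptographic hash."""
    digest = hashlib.blake2b(addr.to_bytes(8, "little"), key=salt, digest_size=8)
    return int.from_bytes(digest.digest(), "little") & ((1 << bits) - 1)

kernel_branch = 0xFFFF_FFFF_8100_4A20   # hypothetical addresses sharing low 30 bits
spy_branch    = 0x0000_7F3A_0100_4A20

# Plain low-bit indexing collides; the keyed hash (almost certainly) does not:
print((kernel_branch & ((1 << 30) - 1)) == (spy_branch & ((1 << 30) - 1)))  # True
print(salted_btb_index(kernel_branch, b"per-process-salt") ==
      salted_btb_index(spy_branch, b"per-process-salt"))
```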

This little fable should serve to remind us that monocultures are bad. This exploit in question is viable and potentially ugly but can be mitigated. That's not the point: the point is that the attack, particularly upon the kernel, is made more feasible by particular details of how Haswell chips handle branching. When everything gets funneled through the same design and engineering optics and ends up with the same implementation, if someone comes up with a simple, weapons-grade exploit for a flaw in that implementation that software can't mask, we're all hosed. This is another reason why we need an auditable, powerful alternative to x86/x86_64 on the desktop. And there's only one system in that class right now.

Okay, okay, I'll stop banging you over the head with this stuff. I've got a couple more bugs under investigation that will be fixed in 45.5.0, and if you're having the issue where TenFourFox is not remembering your search engine of choice, please post your country and operating system here.

Air MozillaConnected Devices Weekly Program Update, 20 Oct 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Air MozillaReps Weekly Meeting Oct. 20, 2016

Reps Weekly Meeting Oct. 20, 2016 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Reps CommunityRep of the Month – September 2016

Please join us in congratulating Mijanur Rahman Rayhan, Rep of the Month for September 2016!

Mijanur is a Mozilla Rep and Tech Speaker from Sylhet, Bangladesh. With his diverse knowledge he organized hackathons around Connected Devices and held a Web Compatibility event to find differences in different browsers.


Mijanur has proven himself a very active Mozillian through his many activities and his work with different communities. With patience and consistency in pursuing his goals, he is always ready and prepared. He also showed his commitment to the Reps program and his proactive spirit in the last elections by running as a nominee for the Cohort position on the Reps Council.

Be sure to follow his activities as he continues the Activate series with a Rust workshop, Dive Into Rust events, Firefox Test Pilot MozCoffees, a Web Compatibility Sprint, and a Privacy and Security seminar with the Bangladesh Police!

Gervase MarkhamNo Default Passwords

One of the big problems with IoT devices is default passwords – here’s the list coded into the malware that attacked Brian Krebs. But without a default password, you have to make each device unique and then give the randomly-generated password to the user, perhaps by putting it on a sticky label. Again, my IoT vision post suggests a better solution. If the device’s public key and a password are in an RFID tag on it, and you just swipe that over your hub, the hub can find and connect securely to the device over SSL, and then authenticate itself to the device (using the password) as the user’s real hub, with zero configuration on the part of the user. And all of this works without the need for any UI or printed label which needs to be localized. Better usability, better security, better for the internet.
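The pairing flow described here can be sketched as follows (every name is invented, and a real hub would run this over an actual TLS connection rather than in-process callbacks):

```python
import hashlib
import hmac

def pair_device(tag_pubkey, tag_password, presented_pubkey, send_secret):
    """Sketch of the zero-config pairing described above: the hub trusts the
    device only if the key it presents over TLS matches the key read from the
    RFID tag, then proves itself to the device with the tag's password."""
    expected = hashlib.sha256(tag_pubkey).hexdigest()
    presented = hashlib.sha256(presented_pubkey).hexdigest()
    if not hmac.compare_digest(expected, presented):
        return "abort: device key does not match tag"
    send_secret(tag_password)   # hub authenticates itself to the device
    return "paired"

sent = []
print(pair_device(b"device-key", "s3cret", b"device-key", sent.append))    # paired
print(sent)                                                                # ['s3cret']
print(pair_device(b"device-key", "s3cret", b"attacker-key", sent.append))  # abort: device key does not match tag
```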

Gervase MarkhamSomeone Thought This Was A Good Idea

You know that problem where you want to label a coffee pot, but you just don’t have the right label? Technology to the rescue!


Of course, new technology does come with some disadvantages compared to the old, as well as its many advantages:


And pinch-to-zoom on the picture viewer (because that’s what it uses) does mean you can play some slightly mean tricks on people looking for their caffeine fix:


And how do you define what label the tablet displays? Easy:


Seriously, can any reader give me one single advantage this system has over a paper label?

Daniel PocockChoosing smartcards, readers and hardware for the Outreachy project

One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image.

Interns, and anybody who decides to start using the project (it is already functional for command line users) need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Choice of smart card

For standard PGP use, the OpenPGP card provides a good choice.

For X.509 use cases, such as VPN access, there are a range of choices. I recently obtained one of the SmartCard HSM cards, Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader

The technical factors to consider are most easily explained by comparing the three options:

On disk (no smart card):
  • Software: free/open
  • Key extraction: possible
  • Passphrase compromise attack vectors: hardware or software keyloggers, phishing, user error (unsophisticated attackers)
  • Other factors: no extra hardware required

Smartcard reader without PIN-pad:
  • Software: mostly free/open
  • Key extraction: not generally possible
  • Passphrase compromise attack vectors: hardware or software keyloggers, phishing, user error (unsophisticated attackers)
  • Other factors: small, USB key form-factor

Smartcard reader with PIN-pad:
  • Software: mostly free/open, proprietary firmware in reader
  • Key extraction: not generally possible
  • Passphrase compromise attack vectors: exploiting firmware bugs over USB (only sophisticated attackers)
  • Other factors: largest form factor

Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.

Choice of computer to run the clean room environment

There are a wide array of devices to choose from. Here are some principles that come to mind:

  • Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
  • Even better if there is no wired networking either
  • Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
  • Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
  • No hard disks required
  • Having built-in SD card readers or the ability to add them easily

SD cards and SD card readers

The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept.

It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time.

For convenience, it would be desirable to use a multi-card reader:

although the software experience will be much the same if lots of individual card readers or USB flash drives are used.

Other devices

One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.

Can you help with ideas or donations?

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Mozilla Open Design BlogNearly there

We’ve spent the past two weeks asking people around the world to think about our four refined design directions for the Mozilla brand identity. The results are in and the data may surprise you.

If you’re just joining this process, you can get oriented here and here. Our objective is to refresh our Mozilla logo and related visual assets that support our mission and make it easier for people who don’t know us to get to know us.

A reminder of the factors we’re taking into account in this phase. Data is our friend, but it is only one of several aspects to consider. In addition to the three quantitative surveys—of Mozillians, developers, and our target consumer audience—qualitative and strategic factors play an equal role. These include comments on this blog, constructive conversations with Mozillians, our 5-year strategic plan for Mozilla, and principles of good brand design.

Here is what we showed, along with a motion study, for each direction:






We asked survey respondents to rate these design directions against seven brand attributes. Five of them—Innovative, Activist, Trustworthy, Inclusive/Welcoming, Opinionated—are qualities we’d like Mozilla to be known for in the future. The other two—Unique, Appealing—are qualities required for any new brand identity to be successful.

Mozillians and developers meld minds.

Members of our Mozilla community and the developers surveyed through MDN (the Mozilla Developer Network) overwhelmingly ranked Protocol 2.0 as the best match to our brand attributes. For over 700 developers and 450 Mozillians, Protocol scored highest across 6 of 7 measures. People with a solid understanding of Mozilla feel that a design embedded with the language of the internet reinforces our history and legacy as an Internet pioneer. The link’s role in connecting people to online know-how, opportunity and knowledge is worth preserving and fighting for.


But consumers think differently.

We surveyed people making up our target audience, 400 each in the U.S., U.K., Germany, France, India, Brazil, and Mexico. They are 18- to 34-year-old active citizens who make brand choices based on values, are more tech-savvy than average, and do first-hand research before making decisions (among other factors).

We asked them first to rank order the brand attributes most important for a non-profit organization “focused on empowering people and building technology products to keep the internet healthy, open and accessible for everyone.” They selected Trustworthy and Welcoming as their top attributes. And then we also asked them to evaluate each of the four brand identity design systems against each of the seven brand attributes. For this audience, the design system that best fit these attributes was Burst.


Why would this consumer audience choose Burst? Since this wasn’t a qualitative survey, we don’t know for sure, but we surmise that the colorful design, rounded forms, and suggestion of interconnectedness felt appropriate for an unfamiliar nonprofit. It looks like a logo.


Also of note, Burst’s strategic narrative focused on what an open, healthy Internet feels and acts like, while the strategic narratives for the other design systems led with Mozilla’s role in the world. This is a signal that our targeted consumer audience, while they might not be familiar with Mozilla, may share our vision of what the Internet could and should be.

Why didn’t they rank Protocol more highly across the chosen attributes? We can make an educated guess that these consumers found it one dimensional by comparison, and they may have missed the meaning of the :// embedded in the wordmark.


Although Dino 2.0 and Flame had their fans, neither of these design directions sufficiently communicated our desired brand attributes, as evidenced by the quantitative survey results as well as by conversations with Mozillians and others in the design community. By exploring them, we learned a lot about how to describe and show certain facets of what Mozilla offers to the world. But we will not be pursuing either direction.

Where we go from here.

Both Protocol and Burst have merits and challenges. Protocol is distinctly Mozilla, clearly about the Internet, and it reinforces our mission that the web stay healthy, accessible, and open. But as consumer testing confirmed, it lacks warmth, humor, and humanity. From a design perspective, the visual system surrounding it is too limited.

By comparison, Burst feels fresh, modern, and colorful, and it has great potential in its 3D digital expression. It represents the Internet as a place of endless, exciting connections and possibilities, an idea reinforced by the strategic narrative. Remove the word “Mozilla,” though, and are there enough cues to suggest that it belongs to us?

Our path forward is to take the strongest aspects of Burst—its greater warmth and dimensionality, its modern feel—and apply them to Protocol. Not to Frankenstein the two together, but to design a new, final direction that builds from both. We believe we can make Protocol more relatable to a non-technical audience, and build out the visual language surrounding it to make it both harder working and more multidimensional.

Long live the link.

What do we say to Protocol’s critics who have voiced concern that Mozilla is hitching itself to an Internet language in decline? We’re doubling down on our belief in the original intent of the Internet—that people should have the ability to explore, discover and connect in an unfiltered, unfettered, unbiased environment. Our mission is dedicated to keeping that possibility alive and well.

For those who are familiar with the Protocol prompt, using the language of the Internet in our brand identity signals our resolve. For the unfamiliar, Protocol will offer an opportunity to start a conversation about who we are and what we believe. The language of the Internet will continue to be as important to building its future as it was in establishing its origin.

We’ll have initial concepts for a new, dare-we-say final design within a few weeks. To move forward, first we’ll be taking a step back. We’ll explore different graphic styles, fonts, colors, motion, and surrounding elements, making use of the design network established by our agency partner johnson banks. In the meantime, tell us what you think.

The Rust Programming Language BlogAnnouncing Rust 1.12.1

The Rust team is happy to announce the latest version of Rust, 1.12.1. Rust is a systems programming language with a focus on reliability, performance, and concurrency.

As always, you can install Rust 1.12.1 from the appropriate page on our website, or install via rustup with rustup update stable.

What’s in 1.12.1 stable

Wait… one-point-twelve-point… one?

In the release announcement for 1.12 a few weeks ago, we said:

The release of 1.12 might be one of the most significant Rust releases since 1.0.

It was true. One of the biggest changes was turning on a large compiler refactoring, MIR, which re-architects the internals of the compiler. The overall process went like this:

  • Initial MIR support landed in nightlies back in Rust 1.6.
  • While work was being done, a flag, --enable-orbit, was added so that people working on the compiler could try it out.
  • Back in October, we would always attempt to build MIR, even though it was not being used.
  • A flag was added, -Z orbit, to allow users on nightly to try and use MIR rather than the traditional compilation step (‘trans’).
  • After substantial testing over months and months, for Rust 1.12, we enabled MIR by default.
  • In Rust 1.13, MIR will be the only option.

A change of this magnitude is huge, and important. So it’s also important to do it right, and do it carefully. This is why this process took so long; we regularly tested the compiler against every crate on crates.io, we asked people to try out -Z orbit on their private code, and after six weeks of beta, no significant problems appeared. So we made the decision to keep it on by default in 1.12.

But large changes still have an element of risk, even though we tried to reduce that risk as much as possible. And so, after release, 1.12 saw a fair number of regressions that we hadn’t detected in our testing. Not all of them are directly MIR related, but when you change the compiler internals so much, it’s bound to ripple outward through everything.

Why make a point release?

Now, given that we have a six-week release cycle, and we’re halfway towards Rust 1.13, you may wonder why we’re choosing to cut a patch version of Rust 1.12 rather than telling users to just wait for the next release. We have previously said something like “point releases should only happen in extreme situations, such as a security vulnerability in the standard library.”

The Rust team cares deeply about the stability of Rust, and about our users’ experience with it. We could have told you all to wait, but we want you to know how seriously we take this stuff. We think it’s worth it to demonstrate our commitment to you by putting in the work of making a point release in this situation.

Furthermore, given that this is not security related, it’s a good time to practice actually cutting a point release. We’ve never done it before, and the release process is semi-automated but still not completely so. Having a point release in the world will also shake out any bugs in dealing with point releases in other tooling as well, like rustup. Making sure that this all goes smoothly and getting some practice going through the motions will be useful if we ever need to cut some sort of emergency point release due to a security advisory or anything else.

This is the first Rust point release since Rust 0.3.1, all the way back in 2012, and marks 72 weeks since Rust 1.0, when we established our six week release cadence along with a commitment to aggressive stability guarantees. While we’re disappointed that 1.12 had these regressions, we’re really proud of Rust’s stability and will continue expanding our efforts to ensure that it’s a platform you can rely on. We want Rust to be the most reliable programming platform in the world.

A note about testing on beta

One thing that you, as a user of Rust, can do to help us fix these issues sooner: test your code against the beta channel! Every beta release is a release candidate for the next stable release, so for the cost of an extra build in CI, you can help us know if there’s going to be some sort of problem before it hits a stable release! It’s really easy. For example, on Travis, you can use this as your .travis.yml:

language: rust
rust:
  - stable
  - beta

And you’ll test against both. Furthermore, if you’d like to make it so that any beta failure doesn’t fail your own build, do this:

matrix:
  allow_failures:
    - rust: beta

The beta build may go red, but your build will stay green.
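Putting the two snippets together (in Travis CI’s config, allow_failures lives under the matrix key), a complete .travis.yml along these lines should do the trick:

```yaml
# Build against both stable and beta Rust;
# a beta failure won't turn your build red.
language: rust
rust:
  - stable
  - beta
matrix:
  allow_failures:
    - rust: beta
```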

Most other CI systems, such as AppVeyor, should support something similar. Check the documentation for your specific continuous integration product for full details.

Full details

There were nine issues fixed in 1.12.1, and all of those fixes have been backported to 1.13 beta as well.

In addition, there were four more regressions that we decided not to include in 1.12.1 for various reasons, but we’ll be working on fixing those as soon as possible as well.

You can see the full diff from 1.12.0 to 1.12.1 here.

Support.Mozilla.OrgFirefox 49 Support Release Report

This report aims to capture and explain what happened during and after the launch of Firefox 49 on multiple support fronts: Knowledge Base and localization, 1:1 social and forum support, and trending issues and reported bugs. It also celebrates and recognizes the tremendous work the SUMO community is putting in to make sure our users experience a happy release. We have lots of ways to contribute, from Support to Social to PR; the ways you can help shape our communications program and tell the world about Mozilla are endless. For more information: [https://goo.gl/NwxLJF]

Knowledge Base and Localization

Article Voted “helpful” (English/US only) Global views Comments from dissatisfied users
Desktop (Sept. 20 – Oct. 12)
https://support.mozilla.org/en-US/kb/hello-status 76-80% 93871 “No explanation of why it was removed.”
https://support.mozilla.org/en-US/kb/firefox-reader-view-clutter-free-web-pages 61-76% 8625 none
https://support.mozilla.org/en-US/kb/html5-audio-and-video-firefox 36-71% 11756 “Didn’t address Firefox not playing YouTube tutorials”
https://support.mozilla.org/en-US/kb/your-hardware-no-longer-supported 70-75% 5147 “Please continue to support Firefox for Pentium III. It is not that hard to do.”

“What about those who can’t afford to upgrade their processors?”

Android (Sept. 20 – Oct. 12)
https://support.mozilla.org/en-US/kb/whats-new-firefox-android 68% 292 none


Article Top 10 locale coverage Top 20 locale coverage
Desktop (Sept. 20 – Oct. 12)
https://support.mozilla.org/en-US/kb/hello-status 100% 86%
https://support.mozilla.org/en-US/kb/firefox-reader-view-clutter-free-web-pages 100% 81%
https://support.mozilla.org/en-US/kb/html5-audio-and-video-firefox 100% 81%
https://support.mozilla.org/en-US/kb/your-hardware-no-longer-supported 100% 81%
Android (Sept. 20 – Oct. 12)
https://support.mozilla.org/en-US/kb/whats-new-firefox-android 100% 71%


Support Forum Threads


Great teamwork between some top contributors


Bugs Created from Forum threads – SUMO Community
  • [Bug 1305436] Firefox 49 won’t start after installation
  • [Bug 1304848] Users report Firefox is no longer launching after the 49 update with a mozglue.dll missing error instead
  • (Contributed to) [Bug 1304360] Firefox49 showing graphics artifacts with HWA enabled

Army Of Awesome

(by Stefan Costen -Costenslayer)

My thanks go out to all contributors for their help in supporting everyone, from users dealing with crashes (which can be difficult and annoying) to people thanking us. All of your hard work has been noticed and is much appreciated.

Thanks as well to Amit Roy (twitter: amitroy2779) for helping users every day.

Social Support Highlights

Brought to you by Sprinklr

Total active contributors in program ~16

Top 12 Contributors
Name Engagements
Noah 103
Magdno 69
Daniela 28
Andrew 25
Geraldo 10
Cynthia 10
Marcelo 4
Jhonatas 2
Thiago 2
Joa Paulo 1

Number of Replies:


Trending issues

Inbound, what people are clicking and asking about:


Outbound top engagement:


Thank yous from users who received SUMO help

Support Forums:

Thanks to jscher for explaining how Windows and Firefox handle different video file types. Thank you post

Thank you to Noah from a user on Social, link here

Tune in next time in three weeks for Firefox 50!

Air MozillaSingularity University

Singularity University Mozilla Executive Chair Mitchell Baker's address at Singularity University's 2016 Closing Ceremony.

Air MozillaIEEE Global Connect

IEEE Global Connect Mozilla Executive Chair Mitchell Baker's address at IEEE Global Connect

Eric ShepherdFinding stuff: My favorite Firefox search keywords

One of the most underappreciated features of Firefox’s URL bar and its bookmark system is its support for custom keyword searches. These let you create special bookmarks: you type a keyword followed by other text, and that text is inserted into a URL uniquely identified by the keyword; then that URL gets loaded. This lets you type, for example, “quote aapl” to get a stock quote on Apple Inc.

You can check out the article I linked to previously (and here, as well, for good measure) for details on how to actually create and use keyword searches. I’m not going to go into details on that here. What I am going to do is share a few keyword searches I’ve configured that I find incredibly useful as a programmer and as a writer on MDN.
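Under the hood, the mechanism is simple: whatever you type after the keyword is substituted for %s in the bookmarked URL. A rough sketch of that substitution (illustrative Python, not Firefox’s actual code, using a couple of the bookmarks below as examples):

```python
from urllib.parse import quote

# Keyword bookmarks: keyword -> URL template with a %s placeholder.
bookmarks = {
    "bug": "https://bugzilla.mozilla.org/show_bug.cgi?id=%s",
    "mdn": "https://developer.mozilla.org/en-US/search?q=%s",
}

def expand(typed):
    """Split off the keyword and substitute the rest into the URL."""
    keyword, _, rest = typed.partition(" ")
    return bookmarks[keyword].replace("%s", quote(rest))

print(expand("bug 1305436"))
# https://bugzilla.mozilla.org/show_bug.cgi?id=1305436
```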

For web development

Here are the search keywords I use the most as a web developer.

Keyword Description URL
if Opens an API reference page on MDN given an interface name. https://developer.mozilla.org/en-US/docs/Web/API/%s
elem Opens an HTML element’s reference page on MDN. https://developer.mozilla.org/en-US/docs/Web/HTML/Element/%s
css Opens a CSS reference page on MDN. https://developer.mozilla.org/en-US/docs/Web/CSS/%s
fx Opens the release notes for a given version of Firefox, given its version number. https://developer.mozilla.org/en-US/Firefox/Releases/%s
mdn Searches MDN for the given term(s) using the default filters, which generally limit the search to include only pages most useful to Web developers. https://developer.mozilla.org/en-US/search?q=%s
mdnall Searches MDN for the given term(s) with no filters in place. https://developer.mozilla.org/en-US/search?q=%s&none=none

For documentation work

When I’m writing docs, I actually use the above keywords a lot. But I have a few more that I get a lot of use out of, too.

Keyword Description URL
bug Opens the specified bug in Mozilla’s Bugzilla instance, given a bug number. https://bugzilla.mozilla.org/show_bug.cgi?id=%s
bs Searches Bugzilla for the specified term(s). https://bugzilla.mozilla.org/buglist.cgi?quicksearch=%s
dxr Searches the Mozilla source code on DXR for the given term(s). https://dxr.mozilla.org/mozilla-central/search?q=%s
file Looks for files whose name contains the specified text in the Mozilla source tree on DXR. https://dxr.mozilla.org/mozilla-central/search?q=path%3A%s
ident Looks for definitions of the specified identifier (such as a method or class name) in the Mozilla code on DXR. https://dxr.mozilla.org/mozilla-central/search?q=id%3A%s
func Searches for the definition of function(s)/method(s) with the specified name, using DXR. https://dxr.mozilla.org/mozilla-central/search?q=function%3A%s
t Opens the specified MDN KumaScript macro page, given the template/macro name. https://developer.mozilla.org/en-US/docs/Template:%s
wikimo Searches wiki.mozilla.org for the specified term(s). https://wiki.mozilla.org/index.php?search=%s

Obviously, DXR is a font of fantastic information, and I suggest clicking the “Operators” button at the right end of the search bar there to see a list of the available filters; building search keywords for many of these filters can make your life vastly easier, depending on your specific needs and work habits!

Air MozillaMozFest Volunteer Health & Safety Briefing

MozFest Volunteer Health & Safety Briefing Excerpt from 2016 MozFest Volunteer Briefing on 19th October for Health and Safety

Air MozillaThe Joy of Coding - Episode 76

The Joy of Coding - Episode 76 mconley livehacks on real Firefox bugs while thinking aloud.

Yunier José Sosa VázquezNew Firefox version arrives with improved video playback and much more

Last Tuesday, September 20, Mozilla released a new version of its browser, and we immediately shared its new features and download links with you. We apologize to everyone for any inconvenience this may have caused.

What’s new

The password manager has been updated to allow HTTPS pages to use stored HTTP credentials. This is one more way to support Let’s Encrypt and help users transition to a more secure web.

Reader Mode has gained several features that improve reading and listening: controls to adjust the text width and the line spacing, and a narration feature that has the browser read the page content aloud; without a doubt, features that will improve the experience for visually impaired users.

Reader Mode now includes additional controls and read-aloud narration

The HTML5 audio and video player now supports playing files at different speeds (0.5x, Normal, 1.25x, 1.5x, 2x) and looping them indefinitely. Along the same lines, video playback performance was improved for users on systems that support SSSE3 instructions without hardware acceleration.

Firefox Hello, the video-call and chat communication system, has been removed due to low usage. Nevertheless, Mozilla will continue to develop and improve WebRTC.

End of support for OS X 10.6, 10.7 and 10.8, and for Windows systems running processors without SSE2.

For developers

  • Added the Cause column to the Network Monitor, showing what triggered each network request.
  • Introduced the Web Speech synthesis API.

For Android

  • Added an offline page viewing mode, letting you view some pages even without Internet access.
  • Added a tour of fundamental features such as Reader Mode and Sync to the First Run page.
  • Introduced the Spanish (Chile) (es-CL) and Norwegian (nn-NO) localizations.
  • The look and behavior of tabs has been updated:
    • Old tabs are now hidden when the restore tabs option is set to “Always restore”.
    • Scroll position and zoom level are now remembered for open tabs.
    • Media controls have been updated to avoid sounds playing from multiple tabs at the same time.
    • Visual improvements when displaying favicons.

Other news

  • Improvements to the about:memory page for reporting memory dedicated to fonts.
  • Re-enabled the default for font shaping via Graphite2.
  • Improved performance on Windows and OS X systems without hardware acceleration.
  • Several security fixes.

If you’d rather see the full list of changes, you can head over to the release notes (in English).

You can get this version from our Downloads section in Spanish and English for Android, Linux, Mac and Windows. If you liked it, please share this news with your friends on social networks. Don’t hesitate to leave us a comment.

Gervase MarkhamSecurity Updates Not Needed

As Brian Krebs is discovering, a large number of internet-connected devices with bad security can really ruin your day. Therefore, a lot of energy is being spent thinking about how to solve the security problems of the Internet of Things. Most of it is focussed on how we can make sure that these devices get regular security updates, and how to align the incentives to achieve that. And it’s difficult, because cheap IoT devices are cheap, and manufacturers make more money building the next thing than fixing the previous one.

Perhaps, instead of trying to make water flow uphill, we should be taking a different approach. How can we design these devices such that they don’t need any security updates for their lifetime?

One option would be to make them perfect first time. Yeah, right.

Another option would be the one from my blog post, An IoT Vision. In that post, I outlined a world where IoT devices’ access to the Internet is always mediated through a hub. This has several advantages, including the ability to inspect all traffic and the ability to write open source drivers to control the hardware. But one additional outworking of this design decision is that the devices are not Internet-addressable, and cannot send packets directly to the Internet on their own account. If that’s so, it’s much harder to compromise them and much harder to do anything evil with them if you do. At least, evil things affecting the rest of the net. And if that’s not sufficient, the hub itself can be patched to forbid patterns of access necessary for attacks.

Can we fix IoT security not by making devices secure, but by hiding them from attacks?

Gervase MarkhamWoSign and StartCom

One of my roles at Mozilla is that I’m part of the Root Program team, which manages the list of trusted Certificate Authorities (CAs) in Firefox and Thunderbird. And, because we run our program in an open and transparent manner, other entities often adopt our trusted list.

In that connection, I’ve recently been the lead investigator into the activities of a Certificate Authority (CA) called WoSign, and a connected CA called StartCom, who have been acting in ways contrary to those expected of a trusted CA. The whole experience has been really interesting, but I’ve not seen a good moment to blog about it. Now that a decision has been taken on how to move forward, it seems like a good time.

The story started in late August, when Google notified Mozilla about some issues with how WoSign was conducting its operations, including various forms of what seemed to be certificate misissuance. We wrote up the three most serious of those for public discussion. WoSign issued a response to that document.

Further issues were pointed out in discussion, and via the private investigations of various people. That led to a longer, curated issues list and much more public discussion. WoSign, in turn produced a more comprehensive response document, and a “final statement” later.

One or two of the issues on the list turned out to be not their fault, a few more were minor, but several were major – and their attempts to explain them often only led to more issues, or to a clearer understanding of quite how wrong things had gone. On at least one particular issue, the question of whether they were deliberately back-dating certificates using an obsolete cryptographic algorithm (called “SHA-1”) to get around browser blocks on it, we were pretty sure that WoSign was lying.

Around that time, we privately discovered a couple of certificates which had been mis-issued by the CA StartCom but with WoSign fingerprints all over the “style”. Up to this point, the focus had been on WoSign, and StartCom was only involved because WoSign bought them and didn’t disclose it as they should have done. I started putting together the narrative. The result of those further investigations was a 13-page report which conclusively proved that WoSign had been intentionally back-dating certificates to avoid browser-based restrictions on SHA-1 cert issuance.

The report proposed a course of action including a year’s dis-trust for both CAs. At that point, Qihoo 360 (the Chinese megacorporation which is the parent of WoSign and StartCom) requested a meeting with Mozilla, which was held in Mozilla’s London office, and attended by two representatives of Qihoo, and one each from StartCom and WoSign. At that meeting, WoSign’s CEO admitted to intentionally back-dating SHA-1 certificates, as our investigation had discovered. The representatives of Qihoo 360 wanted to know whether it would be possible to disentangle StartCom from WoSign and then treat it separately. Mozilla representatives gave advice on the route which might most likely achieve this, but said that any plan would be subject to public discussion.

WoSign then produced another updated report which included their admissions, and which outlined a plan to split StartCom out from under WoSign and change the management, which was then repeated by StartCom in their remediation plan. However, based on the public discussion, the Mozilla CA Certificates module owner Kathleen Wilson decided that it was appropriate to mostly treat StartCom and WoSign together, although StartCom has an opportunity for quicker restitution than WoSign.

And that’s where we are now :-) StartCom and WoSign will no longer be trusted in Mozilla’s root store for certs issued after 21st October (although it may take some time to implement that decision).

Christian HeilmannDecoded Chats – first edition live on the Decoded Blog

Over the last few weeks I was busy recording interviews with different exciting people of the web. Now I am happy to announce that the first edition of Decoded Chats is live on the new Decoded Blog.

Decoded Chats - Chris interviewing Rob Conery

In this first edition, I’m interviewing Rob Conery about his “Imposter Handbook”. We cover the issues of teaching development, how to deal with a constantly changing work environment and how to tackle diversity and integration.

We’ve got eight more interviews ready and more lined up. Amongst the people I talked to are Sarah Drasner, Monica Dinculescu, Ada-Rose Edwards, Una Kravets and Chris Wilson. The format of Decoded Chats is pretty open: interviews ranging from 15 minutes to 50 minutes about current topics on the web, trends and ideas with the people who came up with them.

Some are recorded in a studio (when I am in Seattle), others are Skype calls and yet others are off-the-cuff recordings at conferences.

Do you know anyone you’d like me to interview? Drop me a line on Twitter @codepo8 and I’ll see what I can do :)

Aki Sasakiscriptworker 0.8.1 and 0.7.1

Tl;dr: I just shipped scriptworker 0.8.1 (changelog) (github) (pypi) and scriptworker 0.7.1 (changelog) (github) (pypi)
These are patch releases, and are currently the only versions of scriptworker that work.

scriptworker 0.8.1

The json, embedded in the Azure XML, now contains a new property, hintId. Ideally this wouldn't have broken anything, but I was using that json dict as kwargs, rather than explicitly passing taskId and runId. This means that older versions of scriptworker no longer successfully poll for tasks.
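The failure mode is easy to reproduce in plain Python; a minimal sketch (hypothetical function and field values, not scriptworker’s actual code):

```python
# A handler that only knows about taskId and runId, called with the
# whole JSON dict as **kwargs. A new key in the dict breaks the call.
def claim_task(taskId, runId):
    return (taskId, runId)

# What the queue now sends: the same fields, plus an unexpected one.
payload = {"taskId": "abc123", "runId": 0, "hintId": "xyz"}

try:
    claim_task(**payload)
except TypeError as exc:
    print(exc)  # unexpected keyword argument 'hintId'

# Passing the fields explicitly tolerates new properties:
print(claim_task(payload["taskId"], payload["runId"]))
```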

This is now fixed in scriptworker 0.8.1.

scriptworker 0.7.1

Scriptworker 0.8.0 made some non-backwards-compatible changes to its config format, and there may be more such changes in the near future. To simplify things for other people working on scriptworker, I suggested they stay on 0.7.0 for the time being if they wanted to avoid the churn.

To allow for this, I created a 0.7.x branch and released 0.7.1 off of it. Currently, 0.8.1 and 0.7.1 are the only two versions of scriptworker that will successfully poll Azure for tasks.


Mike RatcliffeRunning ESLint in Atom for Mozilla Development

Due to some recent changes in the way that we use ESLint to check our coding style, linting Mozilla source code in Atom has been broken for a month or two.

I have recently spent some time working on Atom's linter-eslint plugin making it possible to bring all of that linting goodness back to life!

From the root of the project type:

./mach eslint --setup

Install the linter-eslint package v8.0.0 or above. Then go to the package settings and enable the following options:

Eslint Settings

Once done, you should see errors and warnings as shown in the screenshot below:

Eslint in the Atom Editor

Air MozillaMozFest 2016 Brown Bag

MozFest 2016 Brown Bag MozFest 2016 Brown Bag - October 18th, 2016 - 16:00 London

Mozilla Security BlogPhasing Out SHA-1 on the Public Web

An algorithm we’ve depended on for most of the life of the Internet — SHA-1 — is aging, due to both mathematical and technological advances. Digital signatures incorporating the SHA-1 algorithm may soon be forgeable by sufficiently-motivated and resourceful entities.

Via our and others’ work in the CA/Browser Forum, following our deprecation plan announced last year and per recommendations by NIST, issuance of SHA-1 certificates mostly halted for the web last January, with new certificates moving to more secure algorithms. Since May 2016, the use of SHA-1 on the web fell from 3.5% to 0.8% as measured by Firefox Telemetry.

In early 2017, Firefox will show an overridable “Untrusted Connection” error whenever a SHA-1 certificate is encountered that chains up to a root certificate included in Mozilla’s CA Certificate Program. SHA-1 certificates that chain up to a manually-imported root certificate, as specified by the user, will continue to be supported by default; this will continue allowing certain enterprise root use cases, though we strongly encourage everyone to migrate away from SHA-1 as quickly as possible.
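For anyone who wants to check which signature algorithm a given certificate uses, openssl will print it. A quick illustration using a throwaway self-signed SHA-1 certificate (the file paths are arbitrary):

```shell
# Create a throwaway self-signed certificate signed with SHA-1,
# then inspect which signature algorithm it carries.
openssl req -x509 -newkey rsa:2048 -sha1 -nodes \
  -keyout /tmp/sha1-key.pem -out /tmp/sha1-cert.pem \
  -days 1 -subj "/CN=sha1.example.test"
openssl x509 -in /tmp/sha1-cert.pem -noout -text \
  | grep "Signature Algorithm"
```

The same `openssl x509 -noout -text` invocation works on any certificate you retrieve from a live server, which makes it easy to audit what you are still serving.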

This policy has been included as an option in Firefox 51, and we plan to gradually ramp up its usage. Firefox 51 is currently in Developer Edition, and is scheduled for release in January 2017. We intend to enable this deprecation of SHA-1 SSL certificates for a subset of Beta users during the beta phase for 51 (beginning November 7) to evaluate the impact of the policy on real-world usage. As we gain confidence, we’ll increase the number of participating Beta users. Once Firefox 51 is released in January, we plan to proceed the same way, starting with a subset of users and eventually disabling support for SHA-1 certificates from publicly-trusted certificate authorities in early 2017.

Questions about SHA-1 based certificates should be directed to the mozilla.dev.security.policy forum.

Christian Heilmanncrossfit.js

Also on Medium, in case you want to comment.

Rey Bango telling you to do it

When I first heard about Crossfit, I thought it to be an excellent idea. I still do, to be fair:

  • Short, very focused and intense workouts instead of time consuming exercise schedules
  • No need for expensive and complex equipment; it is basically running and lifting heavy things
  • A lot of the workouts use your own body weight instead of extra equipment
  • A strong focus on good nutrition. Remove the stuff that is fattening and concentrate on what’s good for you

In essence, it sounded like the counterpoint to overly complex and expensive workouts we did before. You didn’t need expensive equipment. Some bars, ropes and tyres will do. There was also no need for a personal trainer, tailor-made outfits and queuing up for machines to be ready for you at the gym.

Fast forward a few years and you’ll see that we made Crossfit almost a running joke. You have overly loud Crossfit bros crashing weights in the gym, grunting and shouting and telling each other to “feel the burn” and “when you haven’t thrown up you haven’t worked out hard enough”. You have all kinds of products branded Crossfit and even special food to aid your Crossfit workouts.

Thanks, commercialism and marketing. You made something simple and easy annoying and elitist again. There was no need for that.

One thing about Crossfit is that it can be dangerous. Without good supervision by friends it is pretty easy to seriously injure yourself. It is about moderation, not about competition.

I feel the same thing happened to JavaScript and it annoys me. JavaScript used to be an add-on to what we did on the web. It gave extra functionality and made it easier for our end users to finish the tasks they came for. It was a language to learn, not a lifestyle to subscribe to.

Nowadays JavaScript is everything. Client side use is only a small part of it. We use it to power servers, run tasks, define build processes and create fat client software. And everybody has an opinionated way to use it and is quick to tell others off for “not being professional” if they don’t subscribe to it. The brogrammer way of life rears its ugly head.

Let’s think of JavaScript like Crossfit was meant to be. Lean, healthy exercise going back to what’s good for you:

  • Use your body weight – on the client, if something can be done with HTML, let’s do it with HTML. When we create HTML with JavaScript, let’s create what makes sense, not lots of DIVs.
  • Do the heavy lifting – JavaScript is great to make complex tasks easier. Use it to create simpler interfaces with fewer reloads. Change user input that was valid but not in the right format. Use task runners to automate annoying work. However, if you realise that the task is a nice to have and not a need, remove it instead. Use worker threads to do heavy computation without clobbering the main UI.
  • Watch what you consume – keep dependencies to a minimum and make sure that what you depend on is reliable, safe to use and update-able.
  • Run a lot – performance is the most important part. Keep your solutions fast and lean.
  • Stick to simple equipment – it is not about how many “professional tools” we use. It is about keeping it easy for people to start working out.
  • Watch your calories – we have a tendency to use too much on the web. Libraries, polyfills, frameworks. Many of these make our lives easier but weigh heavy on our end users. It’s important to understand that our end users don’t have our equipment. Test your products on a cheap Android on a flaky connection, remove what isn’t needed and make it better for everyone.
  • Eat good things – browsers are evergreen and upgrade continuously these days. There are a lot of great features to use to make your products better. Visit “Can I use” early and often and play with new things that replace old cruft.
  • Don’t be a code bro – nobody is impressed with louts that constantly tell people off for not being as fit as they are. Be a code health advocate and help people get into shape instead.

JavaScript is much bigger these days than a language to learn in a day. That doesn’t mean, however, that every new developer needs to know the whole stack to be a useful contributor. Let’s keep it simple and fun.

QMOFirefox 50 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – October 14th – we held a new Testday event, for Firefox 50 Beta 7.

Thank you all for helping us make Mozilla a better place – Onek Jude, Sadamu Samuel, Moin Shaikh, Suramya, ss22ever22 and Ilse Macías.

From Bangladesh: Maruf Rahman, Md.Rahimul Islam, Sayed Ibn Masud, Abdullah Al Jaber Hridoy, Zayed News, Md Arafatul Islam, Raihan Ali, Md.Majedul islam, Tariqul Islam Chowdhury, Shahrin Firdaus, Md. Nafis Fuad, Sayed Mahmud, Maruf Hasan Hridoy, Md. Almas Hossain, Anmona Mamun Monisha, Aminul Islam Alvi, Rezwana Islam Ria, Niaz Bhuiyan Asif, Nazmul Hassan, Roy Ayers, Farhadur Raja Fahim, Sauradeep Dutta, Sajedul Islam, মাহফুজা হুমায়রা মোহনা.

A big thank you goes out to all our active moderators too!


  • there were 4 verified bugs
  • all the tests performed on Flash 23 were marked as PASS, and 1 new possible issue was found on the New Awesome Bar feature that needs to be investigated

Keep an eye on QMO for upcoming events!

Nicholas NethercoteHow to speed up the Rust compiler

Rust is a great language, and Mozilla plans to use it extensively in Firefox. However, the Rust compiler (rustc) is quite slow and compile times are a pain point for many Rust users. Recently I’ve been working on improving that. This post covers how I’ve done this, and should be of interest to anybody else who wants to help speed up the Rust compiler. Although I’ve done all this work on Linux it should be mostly applicable to other platforms as well.

Getting the code

The first step is to get the rustc code. First, I fork the main Rust repository on GitHub. Then I make two local clones: a base clone that I won’t modify, which serves as a stable comparison point (rust0), and a second clone where I make my modifications (rust1). I use commands something like this:

for r in rust0 rust1 ; do
  cd ~/moz
  git clone https://github.com/$user/rust $r
  cd $r
  git remote add upstream https://github.com/rust-lang/rust
  git remote set-url origin git@github.com:$user/rust
done

Building the Rust compiler

Within the two repositories, I first configure:

./configure --enable-optimize --enable-debuginfo

I configure with optimizations enabled because that matches release versions of rustc. And I configure with debug info enabled so that I get good information from profilers.

Then I build:

RUSTFLAGS='' make -j8

[Update: I previously had -Ccodegen-units=8 in RUSTFLAGS because it speeds up compile times. But Lars Bergstrom informed me that it can slow down the resulting program significantly. I measured and he was right — the resulting rustc was about 5–10% slower. So I’ve stopped using it now.]

That does a full build, which does the following:

  • Downloads a stage0 compiler, which will be used to build the stage1 local compiler.
  • Builds LLVM, which will become part of the local compilers.
  • Builds the stage1 compiler with the stage0 compiler.
  • Builds the stage2 compiler with the stage1 compiler.

It can be mind-bending to grok all the stages, especially with regards to how libraries work. (One notable example: the stage1 compiler uses the system allocator, but the stage2 compiler uses jemalloc.) I’ve found that the stage1 and stage2 compilers have similar performance. Therefore, I mostly measure the stage1 compiler because it’s much faster to just build the stage1 compiler, which I do with the following command.

RUSTFLAGS='' make -j8 rustc-stage1

Building the compiler takes a while, which isn’t surprising. What is more surprising is that rebuilding the compiler after a small change also takes a while. That’s because a lot of code gets recompiled after any change. There are two reasons for this.

  • Rust’s unit of compilation is the crate. Each crate can consist of multiple files. If you modify a crate, the whole crate must be rebuilt. This isn’t surprising.
  • rustc’s dependency checking is very coarse. If you modify a crate, every other crate that depends on it will also be rebuilt, no matter how trivial the modification. This surprised me greatly. For example, any modification to the parser (which is in a crate called libsyntax) causes multiple other crates to be recompiled, a process which takes 6 minutes on my fast desktop machine. Almost any change to the compiler will result in a rebuild that takes at least 2 or 3 minutes.

Incremental compilation should greatly improve the dependency situation, but it’s still in an experimental state and I haven’t tried it yet.

To run all the tests I do this (after a full build):

ulimit -c 0 && make check

The check aborts if you don’t set the ulimit, because the tests produce lots of core files and you don’t want them to swamp your disk.

The build system is complex, with lots of options. This command gives a nice overview of some common invocations:

make tips

Basic profiling

The next step is to do some basic profiling. I like to be careful about which rustc I am invoking at any time, especially if there’s a system-wide version installed, so I avoid relying on PATH and instead define some environment variables like this:

export RUSTC01="$HOME/moz/rust0/x86_64-unknown-linux-gnu/stage1/bin/rustc"
export RUSTC02="$HOME/moz/rust0/x86_64-unknown-linux-gnu/stage2/bin/rustc"
export RUSTC11="$HOME/moz/rust1/x86_64-unknown-linux-gnu/stage1/bin/rustc"
export RUSTC12="$HOME/moz/rust1/x86_64-unknown-linux-gnu/stage2/bin/rustc"

In the examples that follow I will use $RUSTC01 as the version of rustc that I invoke.

rustc has the ability to produce some basic stats about the time and memory used by each compiler pass. It is enabled with the -Ztime-passes flag. If you are invoking rustc directly you’d do it like this:

$RUSTC01 -Ztime-passes a.rs

If you are building with Cargo you can instead do this:

RUSTC=$RUSTC01 cargo rustc -- -Ztime-passes

The RUSTC= part tells Cargo you want to use a non-default rustc, and the part after the -- is flags that will be passed to rustc when it builds the final crate. (A bit weird, but useful.)

Here is some sample output from -Ztime-passes:

time: 0.056; rss: 49MB parsing
time: 0.000; rss: 49MB recursion limit
time: 0.000; rss: 49MB crate injection
time: 0.000; rss: 49MB plugin loading
time: 0.000; rss: 49MB plugin registration
time: 0.103; rss: 87MB expansion
time: 0.000; rss: 87MB maybe building test harness
time: 0.002; rss: 87MB maybe creating a macro crate
time: 0.000; rss: 87MB checking for inline asm in case the target doesn't support it
time: 0.005; rss: 87MB complete gated feature checking
time: 0.008; rss: 87MB early lint checks
time: 0.003; rss: 87MB AST validation
time: 0.026; rss: 90MB name resolution
time: 0.019; rss: 103MB lowering ast -> hir
time: 0.004; rss: 105MB indexing hir
time: 0.003; rss: 105MB attribute checking
time: 0.003; rss: 105MB language item collection
time: 0.004; rss: 105MB lifetime resolution
time: 0.000; rss: 105MB looking for entry point
time: 0.000; rss: 105MB looking for plugin registrar
time: 0.015; rss: 109MB region resolution
time: 0.002; rss: 109MB loop checking
time: 0.002; rss: 109MB static item recursion checking
time: 0.060; rss: 109MB compute_incremental_hashes_map
time: 0.000; rss: 109MB load_dep_graph
time: 0.021; rss: 109MB type collecting
time: 0.000; rss: 109MB variance inference
time: 0.038; rss: 113MB coherence checking
time: 0.126; rss: 114MB wf checking
time: 0.219; rss: 118MB item-types checking
time: 1.158; rss: 125MB item-bodies checking
time: 0.000; rss: 125MB drop-impl checking
time: 0.092; rss: 127MB const checking
time: 0.015; rss: 127MB privacy checking
time: 0.002; rss: 127MB stability index
time: 0.011; rss: 127MB intrinsic checking
time: 0.007; rss: 127MB effect checking
time: 0.027; rss: 127MB match checking
time: 0.014; rss: 127MB liveness checking
time: 0.082; rss: 127MB rvalue checking
time: 0.145; rss: 161MB MIR dump
 time: 0.015; rss: 161MB SimplifyCfg
 time: 0.033; rss: 161MB QualifyAndPromoteConstants
 time: 0.034; rss: 161MB TypeckMir
 time: 0.001; rss: 161MB SimplifyBranches
 time: 0.006; rss: 161MB SimplifyCfg
time: 0.089; rss: 161MB MIR passes
time: 0.202; rss: 161MB borrow checking
time: 0.005; rss: 161MB reachability checking
time: 0.012; rss: 161MB death checking
time: 0.014; rss: 162MB stability checking
time: 0.000; rss: 162MB unused lib feature checking
time: 0.101; rss: 162MB lint checking
time: 0.000; rss: 162MB resolving dependency formats
 time: 0.001; rss: 162MB NoLandingPads
 time: 0.007; rss: 162MB SimplifyCfg
 time: 0.017; rss: 162MB EraseRegions
 time: 0.004; rss: 162MB AddCallGuards
 time: 0.126; rss: 164MB ElaborateDrops
 time: 0.001; rss: 164MB NoLandingPads
 time: 0.012; rss: 164MB SimplifyCfg
 time: 0.008; rss: 164MB InstCombine
 time: 0.003; rss: 164MB Deaggregator
 time: 0.001; rss: 164MB CopyPropagation
 time: 0.003; rss: 164MB AddCallGuards
 time: 0.001; rss: 164MB PreTrans
time: 0.182; rss: 164MB Prepare MIR codegen passes
 time: 0.081; rss: 167MB write metadata
 time: 0.590; rss: 177MB translation item collection
 time: 0.034; rss: 180MB codegen unit partitioning
 time: 0.032; rss: 300MB internalize symbols
time: 3.491; rss: 300MB translation
time: 0.000; rss: 300MB assert dep graph
time: 0.000; rss: 300MB serialize dep graph
 time: 0.216; rss: 292MB llvm function passes [0]
 time: 0.103; rss: 292MB llvm module passes [0]
 time: 4.497; rss: 308MB codegen passes [0]
 time: 0.004; rss: 308MB codegen passes [0]
time: 5.185; rss: 308MB LLVM passes
time: 0.000; rss: 308MB serialize work products
time: 0.257; rss: 297MB linking

As far as I can tell, the indented passes are sub-passes, and the parent pass is the first non-indented pass afterwards.

More serious profiling

The -Ztime-passes flag gives a good overview, but you really need a profiling tool that gives finer-grained information to get far. I’ve done most of my profiling with two Valgrind tools, Cachegrind and DHAT. I invoke Cachegrind like this:

valgrind \
 --tool=cachegrind --cache-sim=no --branch-sim=yes \
 --cachegrind-out-file=$OUTFILE $RUSTC01 ...

where $OUTFILE specifies an output filename. I find the instruction counts measured by Cachegrind to be highly useful; the branch simulation results are occasionally useful, and the cache simulation results are almost never useful.

The Cachegrind output looks like this:

22,153,170,953 PROGRAM TOTALS

         Ir file:function
923,519,467 /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:_int_malloc
879,700,120 /home/njn/moz/rust0/src/rt/miniz.c:tdefl_compress
629,196,933 /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:_int_free
394,687,991 ???:???
379,869,259 /home/njn/moz/rust0/src/libserialize/leb128.rs:serialize::leb128::read_unsigned_leb128
376,921,973 /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:malloc
263,083,755 /build/glibc-GKVZIf/glibc-2.23/string/::/sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S:__memcpy_avx_unaligned
257,219,281 /home/njn/moz/rust0/src/libserialize/opaque.rs:<serialize::opaque::Decoder<'a> as serialize::serialize::Decoder>::read_usize
217,838,379 /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:free
217,006,132 /home/njn/moz/rust0/src/librustc_back/sha2.rs:rustc_back::sha2::Engine256State::process_block
211,098,567 ???:llvm::SelectionDAG::Combine(llvm::CombineLevel, llvm::AAResults&, llvm::CodeGenOpt::Level)
185,630,213 /home/njn/moz/rust0/src/libcore/hash/sip.rs:<rustc_incremental::calculate_svh::hasher::IchHasher as core::hash::Hasher>::write
171,360,754 /home/njn/moz/rust0/src/librustc_data_structures/fnv.rs:<rustc::ty::subst::Substs<'tcx> as core::hash::Hash>::hash
150,026,054 ???:llvm::SelectionDAGISel::SelectCodeCommon(llvm::SDNode*, unsigned char const*, unsigned int)

Here “Ir” is short for “I-cache reads”, which corresponds to the number of instructions executed. Cachegrind also gives line-by-line annotations of the source code.

The Cachegrind results indicate that malloc and free are usually the two hottest functions in the compiler. So I also use DHAT, which is a malloc profiler that tells you exactly where all your malloc calls are coming from.  I invoke DHAT like this:

/home/njn/grind/ws3/vg-in-place \
 --tool=exp-dhat --show-top-n=1000 --num-callers=4 \
 --sort-by=tot-blocks-allocd $RUSTC01 ... 2> $OUTFILE

I sometimes also use --sort-by=tot-bytes-allocd. DHAT’s output looks like this:

==16425== -------------------- 1 of 1000 --------------------
==16425== max-live: 30,240 in 378 blocks
==16425== tot-alloc: 20,866,160 in 260,827 blocks (avg size 80.00)
==16425== deaths: 260,827, at avg age 113,438 (0.00% of prog lifetime)
==16425== acc-ratios: 0.74 rd, 1.00 wr (15,498,021 b-read, 20,866,160 b-written)
==16425== at 0x4C2BFA6: malloc (vg_replace_malloc.c:299)
==16425== by 0x5AD392B: <syntax::ptr::P<T> as serialize::serialize::Decodable>::decode (heap.rs:59)
==16425== by 0x5AD4456: <core::iter::Map<I, F> as core::iter::iterator::Iterator>::next (serialize.rs:201)
==16425== by 0x5AE2A52: rustc_metadata::decoder::<impl rustc_metadata::cstore::CrateMetadata>::get_attributes (vec.rs:1556)
==16425== -------------------- 2 of 1000 --------------------
==16425== max-live: 1,360 in 17 blocks
==16425== tot-alloc: 10,378,160 in 129,727 blocks (avg size 80.00)
==16425== deaths: 129,727, at avg age 11,622 (0.00% of prog lifetime)
==16425== acc-ratios: 0.47 rd, 0.92 wr (4,929,626 b-read, 9,599,798 b-written)
==16425== at 0x4C2BFA6: malloc (vg_replace_malloc.c:299)
==16425== by 0x881136A: <syntax::ptr::P<T> as core::clone::Clone>::clone (heap.rs:59)
==16425== by 0x88233A7: syntax::ext::tt::macro_parser::parse (vec.rs:1105)
==16425== by 0x8812E66: syntax::tokenstream::TokenTree::parse (tokenstream.rs:230)

The “deaths” value here indicates the total number of calls to malloc for each call stack, which is usually the metric of most interest. The “acc-ratios” value can also be interesting, especially if the “rd” value is 0.00, because that indicates the allocated blocks are never read. (See below for examples of problems that I found this way.)

For both profilers I also pipe $OUTFILE through eddyb’s rustfilt.sh script, which demangles ugly Rust symbols into something much nicer, like this:

<serialize::opaque::Decoder<'a> as serialize::serialize::Decoder>::read_usize

For programs that use Cargo, sometimes it’s useful to know the exact rustc invocations that Cargo uses. Find out with either of these commands:

RUSTC=$RUSTC01 cargo build -v
RUSTC=$RUSTC01 cargo rustc -v

I also have done a decent amount of ad hoc println profiling, where I insert println! calls in hot parts of the code and then I use a script to post-process them. This can be very useful when I want to know exactly how many times particular code paths are hit.
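As a sketch of that pattern (the marker string, function names, and tallying logic here are invented for illustration, not the actual script): print a distinctive marker on the hot path, then count the markers afterwards.

```rust
use std::collections::HashMap;

// Pretend this is a hot function instrumented with an ad hoc marker
// line; in real use the marker would go straight to stderr.
fn hot_path(input: &str, log: &mut Vec<String>) {
    log.push(format!("XXX hot_path: {}", input.len()));
}

// The post-processing step: count how many times each marker fired.
fn tally(log: &[String]) -> HashMap<&str, usize> {
    let mut counts = HashMap::new();
    for line in log {
        if let Some(marker) = line.split(':').next() {
            *counts.entry(marker).or_insert(0) += 1;
        }
    }
    counts
}

fn main() {
    let mut log = Vec::new();
    for s in &["a", "bb", "ccc"] {
        hot_path(s, &mut log);
    }
    let counts = tally(&log);
    assert_eq!(counts["XXX hot_path"], 3);
    println!("{:?}", counts);
}
```

In practice the markers go to stderr and the tallying is a separate script; folding both into one program just keeps the example self-contained.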

I’ve also tried perf. It works, but I’ve never established much of a rapport with it. YMMV. In general, any profiler that works with C or C++ code should also work with Rust code.

Finding suitable benchmarks

Once you know how you’re going to profile you need some good workloads. You could use the compiler itself, but it’s big and complicated and reasoning about the various stages can be confusing, so I have avoided that myself.

Instead, I have focused entirely on rustc-benchmarks, a pre-existing rustc benchmark suite. It contains 13 benchmarks of various sizes. It has been used to track rustc’s performance at perf.rust-lang.org for some time, but it wasn’t easy to use locally until I wrote a script for that purpose. I invoke it something like this:

./compare.py \
  /home/njn/moz/rust0/x86_64-unknown-linux-gnu/stage1/bin/rustc \
  /home/njn/moz/rust1/x86_64-unknown-linux-gnu/stage1/bin/rustc
It compares the two given compilers, doing debug builds, on the benchmarks. See the next section for example output. If you want to run a subset of the benchmarks you can specify them as additional arguments.

Each benchmark in rustc-benchmarks has a makefile with three targets. See the README for details on these targets, which can be helpful.


Here are the results if I compare the following two versions of rustc with compare.py.

  • The commit just before my first commit (on September 12).
  • A commit from October 13.
futures-rs-test  5.028s vs  4.433s --> 1.134x faster (variance: 1.020x, 1.030x)
helloworld       0.283s vs  0.235s --> 1.202x faster (variance: 1.012x, 1.025x)
html5ever-2016-  6.293s vs  5.652s --> 1.113x faster (variance: 1.011x, 1.008x)
hyper.0.5.0      6.182s vs  5.039s --> 1.227x faster (variance: 1.002x, 1.018x)
inflate-0.1.0    5.168s vs  4.935s --> 1.047x faster (variance: 1.001x, 1.002x)
issue-32062-equ  0.457s vs  0.347s --> 1.316x faster (variance: 1.010x, 1.007x)
issue-32278-big  2.046s vs  1.706s --> 1.199x faster (variance: 1.003x, 1.007x)
jld-day15-parse  1.793s vs  1.538s --> 1.166x faster (variance: 1.059x, 1.020x)
piston-image-0. 13.871s vs 11.885s --> 1.167x faster (variance: 1.005x, 1.005x)
regex.0.1.30     2.937s vs  2.516s --> 1.167x faster (variance: 1.010x, 1.002x)
rust-encoding-0  2.414s vs  2.078s --> 1.162x faster (variance: 1.006x, 1.005x)
syntex-0.42.2   36.526s vs 32.373s --> 1.128x faster (variance: 1.003x, 1.004x)
syntex-0.42.2-i 21.500s vs 17.916s --> 1.200x faster (variance: 1.007x, 1.013x)

Not all of the improvement is due to my changes, but I have managed a few nice wins, including the following.

#36592: There is an arena allocator called TypedArena. rustc creates many of these, mostly short-lived. On creation, each arena would allocate a 4096 byte chunk, in preparation for the first arena allocation request. But DHAT’s output showed me that the vast majority of arenas never received such a request! So I made TypedArena lazy — the first chunk is now only allocated when necessary. This reduced the number of calls to malloc greatly, which sped up compilation of several rustc-benchmarks by 2–6%.
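The general trick is easy to reproduce. Here is a toy sketch of a lazily allocating arena (illustrative only, not rustc’s actual TypedArena): the backing chunk is created on the first allocation request, so arenas that are never used never touch malloc.

```rust
// A toy lazily-allocating arena: the backing chunk is only created on
// the first `alloc` call, so arenas that are never used cost nothing.
struct LazyArena {
    chunk: Option<Vec<u8>>, // None until the first allocation request
}

impl LazyArena {
    fn new() -> LazyArena {
        LazyArena { chunk: None } // no allocation here
    }

    fn alloc(&mut self, bytes: &[u8]) {
        // Allocate the 4096-byte chunk only when actually needed.
        let chunk = self.chunk.get_or_insert_with(|| Vec::with_capacity(4096));
        chunk.extend_from_slice(bytes);
    }

    fn is_allocated(&self) -> bool {
        self.chunk.is_some()
    }
}

fn main() {
    let unused = LazyArena::new();
    let mut used = LazyArena::new();
    used.alloc(b"hello");
    assert!(!unused.is_allocated()); // never touched, never allocated
    assert!(used.is_allocated());
}
```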

#36734: This one was similar. Rust’s HashMap implementation is lazy — it doesn’t allocate any memory for elements until the first one is inserted. This is a good thing because it’s surprisingly common in large programs to create HashMaps that are never used. However, Rust’s HashSet implementation (which is just a layer on top of the HashMap) didn’t have this property, and guess what? rustc also creates large numbers of HashSets that are never used. (Again, DHAT’s output made this obvious.) So I fixed that, which sped up compilation of several rustc-benchmarks by 1–4%. Even better, because this change is to Rust’s stdlib, rather than rustc itself, it will speed up any program that creates HashSets without using them.
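You can observe this laziness directly in current Rust, since the documented behaviour of HashSet::new (like HashMap::new) is to reserve nothing until the first insert:

```rust
use std::collections::HashSet;

fn main() {
    // A freshly created HashSet reserves no buckets at all...
    let empty: HashSet<u32> = HashSet::new();
    assert_eq!(empty.capacity(), 0);

    // ...memory is only reserved once the first element goes in.
    let mut used: HashSet<u32> = HashSet::new();
    used.insert(1);
    assert!(used.capacity() > 0);
}
```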

#36917: This one involved avoiding some useless data structure manipulation when a particular table was empty. Again, DHAT pointed out a table that was created but never read, which was the clue I needed to identify this improvement. This sped up two benchmarks by 16% and a couple of others by 3–5%.

#37064: This one changed a hot function in serialization code to return a Cow<str> instead of a String, which avoided a lot of allocations.
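The Cow<str> pattern is worth knowing in general: return a borrowed &str in the common case, and only allocate a String when a change is actually needed. A hedged sketch (the function and the escaping rule are invented for illustration, not the actual serialization code):

```rust
use std::borrow::Cow;

// Returns the input unchanged (borrowed, zero allocation) unless it
// actually needs escaping, in which case a new String is built.
fn maybe_escape(s: &str) -> Cow<'_, str> {
    if s.contains('"') {
        Cow::Owned(s.replace('"', "\\\""))
    } else {
        Cow::Borrowed(s) // common case: no allocation at all
    }
}

fn main() {
    assert!(matches!(maybe_escape("plain"), Cow::Borrowed(_)));
    assert_eq!(maybe_escape("say \"hi\""), "say \\\"hi\\\"");
}
```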

Future work

Profiles indicate that the following parts of the compiler account for a lot of its runtime.

  • malloc and free are still the two hottest functions in most benchmarks. Avoiding heap allocations can be a win.
  • Compression is used for crate metadata and LLVM bitcode. (This shows up in profiles under a function called tdefl_compress.)  There is an issue open about this.
  • Hash table operations are hot. A lot of this comes from the interning of various values during type checking; see the CtxtInterners type for details.
  • Crate metadata decoding is also costly.
  • LLVM execution is a big chunk, especially when doing optimized builds. So far I have treated LLVM as a black box and haven’t tried to change it, at least partly because I don’t know how to build it with debug info, which is necessary to get source files and line numbers in profiles.
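Interning, the source of much of that hash-table traffic, boils down to keeping one canonical copy of each value and handing out cheap shared references to it. A toy sketch (illustrative only; rustc’s CtxtInterners interns arena-allocated type data, not strings):

```rust
use std::collections::HashMap;
use std::rc::Rc;

// A toy string interner: equal strings share a single Rc allocation,
// so callers can compare handles by pointer instead of by contents.
struct Interner {
    map: HashMap<String, Rc<str>>,
}

impl Interner {
    fn new() -> Interner {
        Interner { map: HashMap::new() }
    }

    fn intern(&mut self, s: &str) -> Rc<str> {
        if let Some(rc) = self.map.get(s) {
            return Rc::clone(rc); // already interned: reuse it
        }
        let rc: Rc<str> = Rc::from(s);
        self.map.insert(s.to_string(), Rc::clone(&rc));
        rc
    }
}

fn main() {
    let mut interner = Interner::new();
    let a = interner.intern("i32");
    let b = interner.intern("i32");
    // Both handles point at the same allocation.
    assert!(Rc::ptr_eq(&a, &b));
}
```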

A lot of programs have broadly similar profiles, but occasionally you get an odd one that stresses a different part of the compiler. For example, in rustc-benchmarks, inflate-0.1.0 is dominated by operations involving the (delightfully named) ObligationsForest (see #36993), and html5ever-2016-08-25 is dominated by what I think is macro processing. So it’s worth profiling the compiler on new codebases.

Caveat lector

I’m still a newcomer to Rust development. Although I’ve had lots of help on the #rustc IRC channel — big thanks to eddyb and simulacrum in particular — there may be things I am doing wrong or sub-optimally. Nonetheless, I hope this is a useful starting point for newcomers who want to speed up the Rust compiler.

This Week In RustThis Week in Rust 152

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Blog Posts

News & Project Updates

Other Weeklies from Rust Community

Crate of the Week

This week's Crate of the Week is xargo - for effortless cross compilation of Rust programs to custom bare-metal targets like ARM Cortex-M. It recently reached version 0.2.0 and you can read the announcement here.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

106 pull requests were merged in the last week.

New Contributors

  • Danny Hua
  • Fabian Frei
  • Mikko Rantanen
  • Nabeel Omer

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

FCP issues:

Other issues getting a lot of discussion:

No PRs this week.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Friends of the Forest

Our community likes to recognize people who have made outstanding contributions to the Rust Project, its ecosystem, and its community. These people are 'friends of the forest'.

This week's friends of the forest are:

I'd like to nominate bluss for his work on scientific programming in Rust. ndarray is a monumental project but in addition to that he has worked (really) hard to share that knowledge among others and provided easy-to-use libraries like matrixmultiply. Without bluss' assistance rulinalg would be in a far worse state.

I'd like to nominate Yehuda Katz, the lord of package managers.

Submit your Friends-of-the-Forest nominations for next week!

Quote of the Week

<dRk> that gives a new array of errors, guess that's a good thing
<misdreavus> you passed one layer of tests, and hit the next layer :P
<misdreavus> rustc is like onions
<dRk> it makes you cry?

— From #rust-beginners.

Thanks to Quiet Misdreavus for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Daniel Stenbergcurl up in Nuremberg!

I’m very happy to announce that the curl project is about to run our first ever curl meeting and developers conference.

March 18-19, Nuremberg Germany

Everyone interested in curl, libcurl and related matters is invited to participate. We only ask of you to register and pay the small fee. The fee will be used for food and more at the event.

You’ll find the full and detailed description of the event and the specific location in the curl wiki.

The agenda for the weekend is purposely kept loose to allow for flexibility and unconference-style adding things and topics while there. You will thus have the chance to present what you like and affect what others present. Do tell us what you’d like to talk about or hear others talk about! The sign-up for the event isn’t open yet, as we first need to work out some more details.

We have a dedicated mailing list for discussing the meeting, called curl-meet, so please consider yourself invited to join in there as well!

Thanks a lot to SUSE for hosting!

Feel free to help us make a cool logo for the event!


(The 19th birthday of curl is suitably enough the day after, on March 20.)

Firefox NightlyThese Weeks in Firefox: Issue 3

The Firefox Desktop team met yet again last Tuesday to share updates. Here are some fresh updates that we think you might find interesting:


Contributor(s) of the Week

Project Updates

Context Graph

Electrolysis (e10s)

Platform UI

Privacy / Security


Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Firefox NightlyBetter default bookmarks for Nightly

Because software defaults matter, we have just changed the default bookmarks for the Nightly channel to be more useful to power-users deeply interested in day to day progress of Firefox and potentially willing to help Mozilla improve their browser through bug and crash reports, shared telemetry data and technical feedback.

Users on the Nightly channel had the same bookmarks as users on the release channel. These bookmarks target end-users with limited technical knowledge and link to Mozilla sites that provide end-user support and add-ons, or propose a tour of Firefox features. Not very compelling for a tech-savvy audience that installed pre-alpha software!

As of last week, new Nightly users or existing Nightly users creating a new profile have a different set of bookmarks that are more likely to meet their interest in the technical side of Mozilla and contributing to Firefox as an alpha tester. Here is what the default bookmarks are:

New Nightly Bookmarks

There are links to this blog of course, to Planet Mozilla, to the Mozilla Developer Network, to the Nightly Testers Tools add-on, to about:crashes and to the IRC #nightly channel in case you find a bug and would like to talk to other Nightly users about it and of course a link to Bugzilla. The Firefox tour link was also replaced by a link to the contribute page on mozilla.org.

It’s a minor change to the profile data, as we don’t want to make Nightly a different product from Firefox, but I hope it’s another small step toward empowering our more technical user base to help Mozilla build the most stable and reliable browser for hundreds of millions of people!

Giorgos LogiotatidisSystemd Unit to activate loopback devices before LVM

In a Debian server I'm using LVM to create a single logical volume from multiple different volumes. One of the volumes is a loop-back device which refers to a file in another filesystem.

The loop-back device needs to be activated before the LVM service starts, or the latter will fail due to missing volumes. To do so, a special systemd unit needs to be created which will not have the default unit dependencies and will get executed before the lvm2-activation-early service.

Systemd sets a number of dependencies for all units by default, to bring the system into a usable state before starting most of the units. This behavior is controlled by the DefaultDependencies flag. Leaving DefaultDependencies at its default true value creates a dependency loop, which systemd will forcefully break to finish booting the system. Obviously, this non-deterministic flow can result in a different execution order than desired, which in turn will fail the LVM volume activation.

Setting DefaultDependencies to false will disable all but the essential dependencies and will allow our unit to execute in time. The systemd manual confirms that we can set the option to false:

Generally, only services involved with early boot or late shutdown should set this option to false.

The second step is to make the unit execute before lvm2-activation-early. This is simply achieved by setting Before=lvm2-activation-early.service.

The third and last step is to set the command to execute. In my case it's /sbin/losetup /dev/loop0 /volume.img, as I want to create /dev/loop0 from the file /volume.img. Set the process type to oneshot so systemd waits for the process to exit before it starts follow-up units. Again from the systemd manual:

Behavior of oneshot is similar to simple; however, it is expected that the process has to exit before systemd starts follow-up units.

Place the unit file in /etc/systemd/system and in the next reboot the loop-back device should be available to LVM.

Here's the final unit file:

[Unit]
Description=Activate loop device
DefaultDependencies=no
Before=lvm2-activation-early.service

[Service]
Type=oneshot
ExecStart=/sbin/losetup /dev/loop0 /volume.img

[Install]
WantedBy=local-fs.target


See also: - Anthony's excellent LVM Loopback How-To

Firefox NightlyDevTools now display white space text nodes in the DOM inspector

Web developers don’t write all their code in just one line of text. They use white space between their HTML elements because it makes markup more readable: spaces, returns, tabs.

In most instances, this white space seems to have no effect and no visual output, but the truth is that when a browser parses HTML it will automatically generate anonymous text nodes for any text that is not contained in another node. This includes white space (which is, after all, a type of text).

If these auto-generated text nodes are inline level, browsers will give them a non-zero width and height, and you will find strange gaps between elements, even if you haven’t set any margin or padding on them.

This behaviour can be hard to debug, but Firefox DevTools can now display these whitespace nodes, so you can quickly spot where the gaps in your markup come from, and fix the issues.


Whitespace debugging in DevTools in action

The demo shows two examples with slightly different markup to highlight the differences both in browser rendering and what DevTools are showing.

The first example has one img per line, so the markup is readable, but the browser renders gaps between the images:

<img src="..." />
<img src="..." />

The second example has all the img tags in one line, which makes the markup unreadable, but it also doesn’t have gaps in the output:

<img src="..." /><img src="..." />

If you inspect the nodes in the first example, you’ll find a new whitespace indicator that denotes the text nodes created by the browser for the white space in the code. No more guessing! You can even delete the node from the inspector, and see if that removes the mysterious gaps you might have in your website.
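You can audit these nodes from script as well. A minimal sketch (the .gallery selector is only a placeholder; the DOM walk is guarded so the predicate can be reused anywhere):

```javascript
// A "whitespace node" is a text node (nodeType 3) with no visible characters.
function isWhitespaceNode(node) {
  return node.nodeType === 3 && !/\S/.test(node.textContent);
}

// In a page, walk a container and log the offenders:
if (typeof document !== 'undefined') {
  const walker = document.createTreeWalker(
    document.querySelector('.gallery'), // placeholder container
    NodeFilter.SHOW_TEXT
  );
  while (walker.nextNode()) {
    if (isWhitespaceNode(walker.currentNode)) {
      console.log('whitespace node between elements:', walker.currentNode);
    }
  }
}
```

Removing such a node with node.remove() has the same effect as deleting it in the inspector.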

The Servo BlogThese Weeks In Servo 81

In the last two weeks, we landed 171 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online and now includes the Q4 plans and tentative outline of some ideas for 2017. Please check it out and provide feedback!

This week’s status updates are here.

Notable Additions

  • bholley added benchmark support to mach’s ability to run unit tests
  • frewsxcv implemented the value property on <select>
  • pcwalton improved the rendering of etsy.com by fixing percentages in top and bottom
  • joewalker added support for font-kerning in Stylo
  • ms2ger implemented blob URL support in the fetch stack
  • scottrinh hid some canvas-related interfaces from workers
  • pcwalton improved reddit.com by avoiding vertical alignment of absolutely positioned children in table rows
  • namsoocho added font-variant-position for Stylo
  • mmatyas fixed Android and ARM compilation issues in WebRender
  • pcwalton improved google.com by avoiding incorrect block element position modifications
  • heycam factored out a UrlOrNone type to avoid some duplication in property bindings code
  • manishearth vendored bindings for Gecko’s nsString
  • awesomeannirudh implemented the -moz-text-align-last property
  • mrobinson added a custom debug formatter for ClippingRegion
  • manishearth implemented column-count for Stylo
  • anholt added the WebGL uniformMatrix*fv methods
  • UK992 made our build environment warn if it finds the MinGW Python, which breaks Windows MinGW builds
  • nox updated Rust
  • waffles added image-rendering support for Stylo
  • glennw fixed routing of touch events to the correct iframe
  • jdub added some bindings generation builder functions
  • larsberg picked up the last fix to get Servo on MSVC working
  • glennw added fine-grained GPU profiling to WebRender
  • canaltinova implemented some missing gradient types for Stylo
  • pcwalton implemented vertical-align: middle and fixed some vertical-align issues
  • splav added initial support for the root SVG element
  • glennw added transform support for text runs in WebRender
  • nox switched many crates to serde_derive, avoiding a fragile nightly dependency in our ecosystem
  • wafflespeanut added font-stretch support to Stylo
  • aneeshusa fixed the working directory for CI steps auto-populated from the in-tree rules
  • dati91 added mock WebBluetooth device support, in order to implement the WebBluetooth Test API
  • aneeshusa fixed a potential GitHub token leak in our documentation build
  • pcwalton fixed placement of inline hypothetical boxes for absolutely positioned elements, which fixes the Rust docs site
  • SimonSapin changed PropertyDeclarationBlock to use parking_lot::RwLock
  • shinglyu restored the layout trace viewer to aid in debugging layout
  • KiChjang implemented CSS transition DOM events
  • nox added intermediary, Rust-only WebIDL interfaces that replaced lots of unnecessary code duplication
  • mathieuh improved web compatibility by matching the new specification changes related to XMLHttpRequest events
  • emilio improved web compatibility by adding more conformance checks to various WebGL APIs
  • mortimergoro implemented several missing WebGL APIs
  • g-k created tests verifying the behaviour of browser cookie implementations

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


Canaltinova implemented parsing for many gradients so that they can be used in Firefox via Stylo and also provided comparisons:

Radial gradient support in Stylo

Robert O'CallahanIronic World Standards Day

Apparently World Standards Day is on October 14. Except in the USA it's celebrated on October 27 and in Canada on October 5.

Are they trying to be ironic?

Cameron KaiserIt's Talos time (plus: 45.5.0 beta 2 now with more AltiVec IDCT)

It's Talos time. You can now plunk down your money for an open, auditable, non-x86 workstation-class computer that doesn't suck. It's PowerPC. It's modern. It's beefy. It's awesome.

Let's not mince words, however: it's also not cheap, and you're gonna plunk down a lot if you want this machine. The board runs $4100 and that's without the CPU, which is pledged for separately though you can group them in the same order (this is a little clunky and I don't know why Raptor did it this way). To be sure, I think we all suspected this would be the case but now it's clear the initial prices were underestimates. Although some car repairs and other things have diminished my budget (I was originally going to get two of these), I still ponied up for a board and for one of the 190W octocore POWER8 CPUs, since this appears to be the sweetspot for those of us planning to use it as a workstation (remember each core has eight threads via SMT for a grand total of 64, and this part has the fastest turbo clock speed at 3.857GHz). That ran me $5340. I think after the RAM, disks, video card, chassis and PSU I'll probably be all in for around $7000.

Too steep? I don't blame you, but you can still help by donating to the project, enabling those of us who can afford to jump in first to smooth the way for you. Frankly, this is the first machine I consider a meaningful successor to the Quad G5 (the AmigaOne series isn't quite there yet). Non-x86 doesn't have the economies of scale of your typical soulless Chipzilla craptop or beige box, but if we can collectively help Raptor get this project off the ground you'll finally have an option for your next big machine when you need something free, open and unchained -- and there are a lot of chains in modern PCs that you don't control. You can donate as little as $10 and get this party started, or donate $250 and get to play with one remotely for a few months. Call it a rental if you like. No, I don't get a piece of this, I don't have stock in Raptor and I don't owe them a favour. I simply want this project to succeed. And if you're reading this blog, odds are you want that too.

The campaign ends December 15. Donate, buy, whatever. Let's do this.

My plans are, even though I confess I'll be running it little-endian (since unfortunately I don't think we have much choice nowadays), to make it as much a true successor to the last Power Mac as possible. Yes, I'll be sinking time into a JIT for it, which should fully support asm.js to truly run those monster applications we're seeing more and more of, porting over our AltiVec code with an endian shift (since the POWER8 has VMX), and working on a viable and fast way of running legacy Power Mac software on it, either through KVM or QEMU or whatever turns out to be the best option. If this baby gets off the ground, you have my promise that doing so will be my first priority, because this is what I wanted the project for in the first place. We have a chance to resurrect the Power Mac, folks, and in a form that truly kicks ass. Don't waste the opportunity.

Now, having said all that, I do think Raptor has made a couple tactical errors. Neither are fatal, but neither are small.

First, there needs to be an intermediate pledge level between the bare board and the $18,000 (!!!!) Warren Buffett edition. I have no doubt the $18,000 machine will be the Cadillac of this line, but like Cadillacs, there isn't $18,000 worth of parts in it (maybe, maybe, $10K), and this project already has a bad case of sticker shock without slapping people around with that particular dead fish. Raptor needs to slot something in the middle that isn't quite as wtf-inducing and I'll bet they'll be appealing to those people willing to spend a little more to get a fully configured box. (I might have been one of those people, but I won't have the chance now.)

Second, the pledge threshold of $3.7 million is not ludicrous when you consider what has to happen to manufacture these things, but it sure seems that way. Given that this can only be considered a boutique system at this stage, it's going to take a lot of punters like yours truly to cross that point, which is why your donations even if you're not willing to buy right now are critical to get this thing jumpstarted. I don't know Raptor's finances, but they gave themselves a rather high hurdle here and I hope it doesn't doom the whole damn thing.

On the other hand, doesn't look like Apple's going to be updating the Mac Pro any time soon, so if you're in the market ...

On to 45.5.0 beta 2 (downloads, hashes). The two major changes in this version are that I did some marginal reduction in the overhead of graphics primitives calls, and completed converting to AltiVec all of the VP9 inverse discrete cosine and Hadamard transforms. Feel free to read all 152K of it, patterned largely off the SSE2 version but still mostly written by hand; I also fixed the convolver on G4 systems and made it faster too. This is probably where the computer spends the most time while decoding frames. I can do some more by starting on the intraframe predictors, but that will probably not yield speed-ups as dramatic. My totally unscientific testing is yielding these recommendations for specific machines:

1.0GHz iMac G4 (note: not technically supported, but a useful comparison): maximum watchable resolution 144p VP9
1.33GHz iBook G4, reduced performance: same
1.33GHz iBook G4, highest performance: good at 144p VP9, max at 240p VP9, but VP8 is better
1.67GHz DLSD PowerBook G4: ditto, VP8 better here too
2.5GHz Quad G5, reduced performance: good at 240p VP9, max at 360p VP9
2.5GHz Quad G5, highest performance: good at 360p VP9, max at 480p VP9

I'd welcome your own assessments, but since VP8 (i.e., MediaSource Extensions off) is "good enough" on the G5 and actually currently better on the G4, I've changed my mind again and I'll continue to ship with MSE turned off so that it still works as people expect. However, they'll still be able to toggle the option in our pref panel, which also was fixed to allow toggling PDF.js (that was a stupid bug caused by missing a change I forgot to pull forward into the released build). When VP9 is clearly better on all supported configurations then we'll reexamine this.

No issues have been reported regarding little-endian JavaScript typed arrays or our overall new hybrid endian strategy, or with the minimp3 platform decoder, so both of those features are go. Download and try it.

Have you donated yet?

Mozilla Addons BlogAdd-ons Update – 2016/10

Here’s the state of the add-ons world this month.

The Review Queues

In the past month, 1,755 listed add-on submissions were reviewed:

  • 1,438 (82%) were reviewed in fewer than 5 days.
  • 119 (7%) were reviewed between 5 and 10 days.
  • 198 (11%) were reviewed after more than 10 days.

There are 223 listed add-ons awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.


The compatibility blog post for Firefox 50 is up, and the bulk validation was run recently. The compatibility blog post for Firefox 51 was published yesterday. It’s worth pointing out that the Firefox 50 cycle will be twice as long as usual, so 51 won’t be released until January 24th, 2017.

Multiprocess Firefox is now enabled for users without add-ons, and add-ons will be gradually phased in, so make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.


We would like to thank Atique Ahmed Ziad, Surya Prashanth, freaktechnik, shubheksha, bjdixon, zombie, berraknil, Krizzu, rackstar17, paenglab, and Trishul Goel (long list!) for their recent contributions to the add-ons world. You can read more about their work in our recognition page.

Daniel Stenberga single byte write opened a root execution exploit

Thursday, September 22nd 2016. An email popped up in my inbox.

Subject: ares_create_query OOB write

As one of the maintainers of the c-ares project I’m receiving mails for suspected security problems in c-ares and this was such a one. In this case, the email with said subject came from an individual who had reported a ChromeOS exploit to Google.

It turned out that this particular c-ares flaw was one important step in a sequence of necessary procedures that when followed could let the user execute code on ChromeOS from JavaScript – as the root user. I suspect that is pretty much the worst possible exploit of ChromeOS that can be done. I presume the reporter will get a fair amount of bug bounty reward for this.

The setup and explanation on how this was accomplished is very complicated and I am deeply impressed by how this was figured out, tracked down and eventually exploited in a repeatable fashion. But bear with me. Here comes a very simplified explanation on how a single byte buffer overwrite with a fixed value could end up aiding running exploit code as root.

The main Google bug for this problem is still not open since they still have pending mitigations to perform, but since the c-ares issue has been fixed I’ve been told that it is fine to talk about this publicly.

c-ares writes a 1 outside its buffer

c-ares has a function called ares_create_query. It was added in 1.10 (released in May 2013) as an updated version of the older function ares_mkquery. This detail is mostly interesting because Google uses a version of c-ares older than 1.10, so in their case the flaw is in the old function. These are the two functions that contain the problem we’re discussing today. The bug used to be in the ares_mkquery function but was moved over to ares_create_query a few years ago (and the new function got an additional argument). The code was mostly unchanged in the move, so the bug was simply carried over. It was actually already present in the original ares project that I forked and created c-ares from, back in October 2003. It just took this long for someone to figure it out and report it!

I won’t bore you with exactly what these functions do, but we can stick to the simple fact that they take a name string as input, allocate a memory area for the outgoing packet with DNS protocol data and return that newly allocated memory area and its length.

Due to a logic mistake in the function, you could trick the function to allocate a too short buffer by passing in a string with an escaped trailing dot. An input string like “one.two.three\.” would then cause the allocated memory area to be one byte too small and the last byte would be written outside of the allocated memory area. A buffer overflow if you want. The single byte written outside of the memory area is most commonly a 1 due to how the DNS protocol data is laid out in that packet.

This flaw was given the name CVE-2016-5180 and was fixed and announced to the world at the end of September 2016, when c-ares 1.12.0 shipped. The actual commit that fixed it is here.

What to do with a 1?

Ok, so a function can be made to write a single byte to the value of 1 outside of its allocated buffer. How do you turn that into your advantage?

The Red Hat security team deemed this problem to be of “Moderate security impact”, so they clearly do not think you can do a lot of harm with it. But behold, with the right amount of imagination and luck you certainly can!

Back to ChromeOS we go.

First, we need to know that ChromeOS runs an internal HTTP proxy which is very liberal in what it accepts – this is the software that uses c-ares. This proxy is a key component that the attacker needed to tickle really badly. By figuring out how to send a correctly crafted request to the proxy, the attacker could make it pass the right string to c-ares and write a 1 outside its heap buffer.

ChromeOS uses dlmalloc for managing the heap memory. Each time the program allocates memory, it gets back a pointer to the requested memory region, and dlmalloc puts a small header of its own just before that region for its own purposes. If you ask for N bytes with malloc, dlmalloc will use (header size + N) and return the pointer to the N bytes the application asked for. Like this:


With a series of cleverly crafted HTTP requests of various sizes to the proxy, the attacker managed to create a hole of freed memory in which he could then reliably make the c-ares allocation end up. He knew exactly how the ChromeOS dlmalloc system and its best-fit allocator work, how big the c-ares malloc would be, and thus where the overwritten 1 would end up. When the byte 1 is written past the end of that memory, it is written into the header of the next memory chunk handled by dlmalloc:


The specific byte of that following dlmalloc header that gets overwritten is used for flags and the lowest bits of the size of that allocated chunk of memory.

Writing 1 to that byte clears 2 flags, sets one flag and clears the lowest bits of the chunk size. The important flag it sets is called prev_inuse and is used by dlmalloc to tell if it can merge adjacent areas on free. (so, if the value 1 simply had been a 2 instead, this flaw could not have been exploited this way!)
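In bit terms, it looks roughly like this (a sketch; the “before” byte is invented for illustration, and only bit 0’s name matters here):

```javascript
// dlmalloc keeps flag bits in the low bits of a chunk's size field.
const PREV_INUSE = 0x1;   // bit 0: the "prev_inuse" flag described above
const OTHER_FLAGS = 0x6;  // bits 1-2: the two flags that get cleared

// Hypothetical value of the targeted header byte before the overflow:
const before = 0x38 | OTHER_FLAGS;  // some low size bits plus both other flags set
const after = 0x01;                 // the single byte c-ares writes out of bounds

const prevInuseSet = (after & PREV_INUSE) !== 0;        // true: one flag now set
const otherFlagsCleared = (after & OTHER_FLAGS) === 0;  // true: two flags cleared
const lowSizeBitsCleared = (after & 0xf8) === 0;        // true: low size bits zeroed
```

Exactly as described: had the overflowed value been a 2 instead of a 1, bit 0 (prev_inuse) would have stayed clear and this path would have been closed.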

When the overflowed c-ares buffer is then freed again, dlmalloc gets fooled into consolidating it with the subsequent buffer in memory (since that bit had been toggled), and thus the larger piece of assumed-to-be-free memory is partly still in use. Open for manipulation!


Using that memory buffer mess

This freed memory area, whose end part was actually still in use, opened up the playing field for more “fun”. With another creative HTTP request, that memory block would be allocated again and new data stored into it.

The attacker managed to insert the right data at the far end of that data block (the part still used by another component of the program), mostly because the proxy pretty much allowed anything to get crammed into the request. He managed to put his own code in there to execute, and after a few more steps he could run whatever he wanted as root. Well, the user would have to get tricked into running a particular piece of JavaScript, but still…

I cannot even imagine how long it must have taken to build this exploit, and how much work and sweat was spent. The report I read on this was 37 very detailed pages, and it was one of the best things I’ve read in a long while! When this goes public in the future, I hope at least parts of that description will become available to you as well.

A lesson to take away from this?

No matter how limited or harmless a flaw may appear at a first glance, it can serve a malicious purpose and serve as one little step in a long chain of events to attack a system. And there are skilled people out there, ready to figure out all the necessary steps.

Christian HeilmannWe need JavaScript to fix the web

TL;DR: JavaScript is too great an opportunity to build accessible, easy-to-use and flexible solutions for the web to not use it. It fills the gaps years of backwards-compatibility focus created. It helps with the problems of the now and the future that HTML and CSS alone can’t cover reliably. We shouldn’t blindly rely on it – we should own the responsibility to work around its flaky nature and reliability issues.

Patchy wiring

Right now, there is a lot of noise in our world about JavaScript, Progressive Enhancement and reliance on technology and processes. I’m in the middle of that. I have quite a few interviews with stakeholders in the pipeline and I’m working on some talks on the subject.

A lot of the chatter that’s happening right now seems to be circular:

  • Somebody makes a blanket statement about the state of the web and technologies to rely on
  • This ruffles the feathers of a few others. They point out the danger of these technologies and that it violates best practices
  • The original writer accuses the people reacting of having limited and anachronistic views
  • Stakeholders of frameworks that promise to make the life of developers a breeze chime in and point out that their way is the only true solution
  • 62 blog posts and 5212 tweets later we find consensus and all is good again

Except, it isn’t. The web is in a terrible state: the average web site is slow, punishes our computers with slow-running code and shows no grace in trying to keep users interacting with it. Ads spy on us, scripts inject malware, and it is taxing to find content in a mess of modals and overly complex interfaces.

It is high time we, the community that survived the first browser wars and made standards-driven development our goal, start facing facts. We failed to update our message of a usable and maintainable web to be relevant to the current market and a new generation of developers.

We once fixed the web and paved the way forward

Our “best development practices” stem from a time when we had bad browsers on desktop computers. We had OK connectivity – it wasn’t fast, but it was at least reliable. We also created documents and enhanced them to become applications or get some interactivity later.

The holy trinity was and is:

  • Semantic HTML for structure
  • CSS for styling
  • JavaScript for some extra behaviour

That’s the message we like to tell. It is also largely a product of fiction. We defined these as best practices and followed them as much as we could, but a huge part of the web back then was done with WYSIWYG editors, CMS-driven or built with server-side frameworks. As such, the HTML was a mess, styles were an afterthought and JavaScript riddled with browser-sniffing or lots of document.write nasties. We pretended this wasn’t a problem and people who really care would never stoop down to creating terrible things like that.

There is no “the good old standards based web”. It was always a professional, craftsmanship view and ideal we tried to create. Fact is, we always hacked around issues with short-term solutions.

We had a clear goal and enemy – one we lost now

Browsers back then were not standards-aware and the browser wars raged. Having a different functionality than other browsers was a market advantage. This was bad for developers as we had to repeat all of our work for different browsers. The big skill was to know which browser messed up in which way. This was our clear target: to replace terrible web sites that only worked in IE6. The ones that were not maintainable unless you also had access to the CMS code or whatever language the framework was written in. We wanted to undo this mess by explaining what web standards are good for.

HTML describes a small use case

HTML describes linked documents with a few interactive elements. As quality oriented developers we started with an HTML document and we got a kick out of structuring it sensibly, adding just the right amount of CSS, adding some JavaScript to make it more interactive and release the thing. This was cool and is still very much possible. However, with the web being a main-stream medium these days, it isn’t quite how people work. We got used to things working in browsers differently, many of these patterns requiring JavaScript.

We got used to a higher level of interactivity as browser makers spend a lot of time ensuring compatibility with one another. We also have a few very obviously winning browsers and developers favouring them. Browsers are all great and open to feedback, and there is no war among browser makers any longer. It is important not to block out users of older browsers, but there is no point in catering to them with extra work. Yes, you can go on the freeway in a car with broken indicators and no lights, but you are a danger to yourself and others. This is what surfing with an old Internet Explorer is now.

The upgrade of HTML wasn’t as smooth as we make it out to be

HTML and CSS are gorgeous, beautiful in their simplicity, and ensure that nobody on the web gets left out. Both technologies are very forgiving, allowing publishers of web content to make mistakes without making their readers suffer. This truth doesn’t change and – together with the beautiful simplicity that is a link – makes the web what it is. It is, however, not good enough for today’s expectations of end users. It doesn’t matter that it is sturdy and can’t break if it is boring or awkward to use.

Fact is that a lot of the amazing things of HTML aren’t as rosy when you look closer. One thing that opened my eyes was Monica Dinculescu’s talk “I love you input, but you’re letting me down”.

In it, Monica criticises the architecture and the implementation of the HTML input element. I was annoyed when I first heard her hint at this at Google I/O. Surely she was wrong: INPUT is a beautiful thing. If you use input type range, end users of modern browsers get a slider, and older browsers a text box. Nobody is left out, and it just gets better with better browsers. This is progressive enhancement at its best: built into the platform.

Except that the implementation of slider, number, URL and many of the other new input types that came with HTML5 is terrible. Monica shows some very obvious flaws in the most modern browsers that will not get fixed. Mostly, because nobody complained about them as developers use JavaScript solutions instead. So, yes, older browsers get a text box that works. But newer browsers get interfaces that disappoint or even make a simple task like entering a number impossible.
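That fallback behaviour is real, though, and it is exactly what JavaScript solutions probe for before stepping in: an unsupported type silently reverts to “text” when read back. A minimal sketch (the optional doc argument is only there so the logic can run outside a browser):

```javascript
// An unsupported input type silently falls back to "text",
// so setting it and reading it back reveals browser support.
function supportsInputType(type, doc = typeof document !== 'undefined' ? document : null) {
  if (!doc) return false;               // no DOM available at all
  const input = doc.createElement('input');
  input.setAttribute('type', type);
  return input.type === type;           // still "text"? then it's unsupported
}
```

In a page, supportsInputType('range') tells you whether to trust the native slider or load a scripted replacement.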

This is an inconvenient fact we need to own as something that needs to change. There is a lot of false information stating that functionality defined in the HTML5 standard can be used reliably and there is no need for JavaScript. This is simply not true, and, to a large degree, based on browser implementation. But, as Monica explains in detail, a lot of it is vaguely defined in the standard, was badly implemented in the beginning and now can’t be fixed in browsers as it would break a lot of live uses on the web. Every browser maker has to deal with this issue – we have a lot of terrible code on the web that still needs to work even if we’d love to fix the implementations.

There are other myths that keep cropping up. Adding ARIA to your HTML, for example, doesn’t automatically make your solutions accessible. Other than a few simple features like descriptions, any ARIA enhancement needs JavaScript to reach assistive technology.
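A concrete example of that dependency is a disclosure widget: the aria-expanded attribute describes state, but only script keeps it truthful (a sketch; the function takes the elements as arguments, so the wiring to real nodes is up to the page):

```javascript
// ARIA attributes only describe state; JavaScript has to update them.
// `button` and `menu` are DOM elements (or anything with the same attribute API).
function toggleDisclosure(button, menu) {
  const expanded = button.getAttribute('aria-expanded') === 'true';
  button.setAttribute('aria-expanded', String(!expanded));
  menu.hidden = expanded; // hide when collapsing, show when expanding
  return !expanded;       // the new state
}
```

In a page you would wire this up with button.addEventListener('click', () => toggleDisclosure(button, menu)); without that listener, assistive technology is told nothing.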

The sturdy baseline we make HTML out to be has become much more fragile with the complexity we added when we moved with HTML5 from documents to apps. That doesn’t mean you should discard all semantic HTML and create everything with JavaScript. Nor does it mean that you can rely on HTML to magically solve usability and access issues for you.

The web as a platform isn’t fascinating to new developers – it’s a given

Developers who start building for the web right now never knew what being offline on a Desktop feels like. They never got stuck with a JavaScript-dependent solution blocking them out. They’ve seen a fair share of JavaScript abuse, and they do encounter broken sites. But in most cases a reload fixes that. There is hardly any “your browser isn’t good enough” happening any more.

For better or worse, the success of the web became its curse. We were the cool “new media”. However, these days, we’re not cool. We’re like plumbing. When you’re in a fairly well-off country, you turn on the tap and water comes out. You don’t care about the plumbing or where it comes from. It is just there. Much like the web.

This is what we wanted, now we need to understand that our fascination with everything web is not shared by people who never knew a world without it.

It is time to move on and let the new generation of developers deal with the problems of now, instead of us waving a finger and demanding a work ethic that was always an ideal scenario and only ever a small part of how the market worked.

The market is a rat race

Technology is the only properly growing market out there, and we are part of that. This means a lot of pressure is on this market to continuously grow. It means that we need to be seen as constantly innovating. Whether that makes sense or is necessary is irrelevant. We just need to be bigger, faster and more – all the time. Yes, this isn’t sustainable, but it makes money and that is – sadly enough – still the main goal of our world.

This means that when we talk about development, people are much more likely to listen to the framework that offers “quick apps without much code”. Developers are also more likely to get excited about anything that offers to build “small, independent solutions that can be mixed and matched to build huge applications that scale”.

Building wheels from small reusable parts

Developers of the web these days aren’t asked to architect a clean, accessible and very well structured web document. If we are fair to ourselves, we were never asked to do this. We did it because we cared, and to make our lives easier. Standards-driven development was there to protect us from reinventing the wheel. However, these days, reinventing the wheel is exactly what is expected of a cool new company and its developers. We live in a world of components. We are continuously asked to build something fast that others can reuse in any product. That way you can have 10 developers work on a product in parallel, and you can remove and add functionality as it is needed.

This is how we got solutions like “Object Oriented CSS”. The Cascade is a beautiful part of CSS that allows you to write a few lines that get applied to thousands of documents without you needing to repeat your definitions. But the cascade works against the concept of small, reusable components that don’t inherit any look and feel from their parent container. Our use case changed, and the standards didn’t deliver the functionality we needed.

We’re mobile, and our best practices aren’t

Our world has changed drastically. We now live in a world where the desktop is not as important; everything points to the next users of the web being on mobiles. This doesn’t mean that desktops are irrelevant, or that our solutions for that form factor are bad.

It means that our “best practices” don’t solve the current issues. Best practices aren’t defined up front; they are found by trial and error. This is how we got where we are now, with crazy CSS hacks like resets, browser filters (the mid pass filter using voice CSS, anyone?) and many other ideas that are now considered terrible.

We need to find solutions for bad or no connectivity on fairly capable browsers on interfaces that aren’t keyboard driven. We need solutions for small screens and interactivity ready for big fingers or voice control.

Yes, in the most hardcore scenarios you can’t rely on client-side scripting or a lot of storage space. Proxy browsers and the battery- or data-saving settings of some browsers interfere with what we can do on the client. On the whole, though, what we deal with is a pretty capable computer (surely much better than the ones we had when we defined our best web practices) on a small-screen device with dubious, unreliable or even non-existent connectivity.

Enter the new, old hero: JavaScript

Now, JavaScript gives us a lot of great features to cater for this world. We can use ServiceWorker to offer offline functionality, we can store content in a reliable manner on the device by using IndexedDB instead of relying on a browser cache and we have the DOM and events to enhance HTML interfaces to become touch enabled without delays. More importantly, we have a way to demand functionality on the fly and test if it was successful before applying it. We have a very mighty “if” statement that allows us to react to success and failure cases. Developer tools give us full insight into what happened, including slow performance. Yes, this is a much more complex starting point than writing an HTML document and adding some CSS. But this is why we are paid as professionals. Everyone is invited to contribute to the web. Professionals who expect to be paid for creating web content need to do more.
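The “mighty if” described above can be sketched in a few lines. This is a minimal illustration, not code from the post; the function name and callbacks are hypothetical, and the `global` parameter stands in for the browser’s `window` object so the logic can be exercised anywhere:

```javascript
// A minimal sketch of capability testing: check for a feature before
// relying on it, and fall back cleanly when it is missing.
// `global` stands in for the browser's window object; `onSupported` and
// `onFallback` are whatever enhancement/fallback paths your app defines.
function registerOffline(global, onSupported, onFallback) {
  var nav = global.navigator || {};
  if ('serviceWorker' in nav && global.indexedDB) {
    // Success case: offline support and reliable storage are available.
    return onSupported();
  }
  // Failure case: the page still works, just without the enhancement.
  return onFallback();
}
```

In a browser you would call it as `registerOffline(window, enableOffline, keepServerRendered)`; the point is that both branches are explicit, so failure is a handled state rather than a broken page.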

We can create interfaces that are great to use, intuitive, and accessible if we use a mix of semantic HTML, CSS, and JavaScript. Instead we keep harping on about using HTML for everything, CSS if we can, and JavaScript if we must. Many “CSS only” solutions have dubious accessibility and expect a lot of knowledge of CSS quirks from the maintainer. Enough already. Great solutions are not limited to one web technology, demanding expert care. They are a mix of all of them, catered to the needs of the users.

We need to take on the responsibility of JavaScript

If we see JavaScript as a given, and constantly remind ourselves of its unforgiving nature, we have a new baseline for building amazing solutions for the current and future web. The main thing we have to understand is that it is our responsibility to make our products load fast, work smoothly, and be accessible in whatever environment they are used. We cannot rely on the sturdiness of the web of old to take that responsibility away from us. We need to own it and move the web forward.

This doesn’t mean we need to blindly use JavaScript and frameworks to deliver our products. There is space for many different solutions and approaches. It does mean, however, that we shouldn’t limit ourselves to what made sense over a decade ago. New form factors need new thinking, and I for one trust those rebelling against the best practices of old to find good solutions, much as we defined our best practices based on the stalwart and short-term solutions of those before us.

JavaScript abuse is rampant. It is the main reason for security issues on the web and for the terrible performance of the average web site. We shove functionality and ads in the face of end users in the hope of keeping them. Instead we should use browser and hardware functionality to deliver a great experience. With JavaScript I can react to all kinds of events and properties of the computer my product is consumed on. Without it, I need to hope that functionality exists. Only by owning the fact that JavaScript is a given can we start making the clogged-up web better. It is time to clean up the current web instead of demanding the “good old web” that never really existed.
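Reacting to the environment instead of hoping for it can look like the hypothetical sketch below. It is not from the post; `watchConnectivity` and `target` are made-up names, with `target` standing in for the browser’s `window` (any EventTarget that fires `online`/`offline` events) so the logic stays testable outside a browser:

```javascript
// Hypothetical sketch: instead of assuming connectivity, listen for it.
// `target` stands in for window; `onChange` receives true when the
// connection returns and false when it drops, so the app can queue work
// while offline and flush it later.
function watchConnectivity(target, onChange) {
  function handler(event) {
    onChange(event.type === 'online');
  }
  target.addEventListener('online', handler);
  target.addEventListener('offline', handler);
  return handler; // returned so callers can remove the listener later
}
```

In a browser this would be `watchConnectivity(window, updateSyncQueue)`; the design choice is that connectivity changes become explicit application state rather than silent failures.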

Photo credit: John Loo via Visual hunt / CC BY

Wil ClouserTest Pilot 2016 Q4 OKRs

The Test Pilot 2016 Q4 OKRs are published. Primarily we'll be focused on continued growth of users (our overall 2016 goal). We deprioritized localization last quarter and over-rotated on publishing experiments by launching four when we were only aiming for one. This quarter we'll turn that knob back down (we're aiming for two new experiments) and get localization done.

We also failed to graduate any experiments last quarter -- arguably the most important part of our entire process since it includes drawing conclusions and publishing our results. This quarter we'll graduate three experiments from Test Pilot, publish our findings so we can improve Firefox, and clear out space in Test Pilot for the next big ideas.

Mozilla Addons BlogAdd-on Compatibility for Firefox 51

Firefox 51 will be released on January 24th. Note that the scheduled release on December 13th is a point release, not a major release, hence the much longer cycle. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 51 for Developers, so you should also give it a look.


XPCOM and Modules


  • Embedded WebExtensions. You can now embed a WebExtension into any restartless add-on, allowing you to gradually migrate your code to the new platform, or transition any data you store to a format that works with WebExtensions.
  • WebExtension Experiments. This is a mechanism that allows us (and you!) to prototype new WebExtensions APIs.

Let me know in the comments if there’s anything missing or incorrect on this list. If your add-on breaks on Firefox 51, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in a few weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 50.

The Mozilla BlogBringing the Power of the Internet to the Next Billion and Beyond

Announcing Mozilla’s Equal Rating Innovation Challenge, a $250,000 contest including expert mentorship to spark new ways to connect everyone to the Internet.

At Mozilla, we believe the Internet is most powerful when anyone – regardless of gender, income, or geography – can participate equally. However, the digital divide remains a clear and persistent reality. Today more than 4 billion people are still not online, according to the World Economic Forum. That is greater than 55% of the global population. Some, who live in poor or rural areas, lack the infrastructure: fast wired and wireless connectivity reaches only 30% of rural areas. Other people don’t connect because they don’t believe there is enough relevant digital content in their language. Women are also less likely to access and use the Internet: only 37% of women access the Internet, versus 59% of men, according to surveys by the World Wide Web Foundation.

Access alone, however, is not sufficient. Pre-selected content and walled gardens powered by specific providers subvert the participatory and democratic nature of the Internet that makes it such a powerful platform. Mitchell Baker coined the term equal rating in a 2015 blog post. Mozilla successfully took part in shaping pro-net neutrality legislation in the US, Europe and India. Today, Mozilla’s Open Innovation Team wants to inject practical, action-oriented, new thinking into these efforts.

This is why we are very excited to launch our global Equal Rating Innovation Challenge. This challenge is designed to spur innovations for bringing the members of the Next Billion online. The Equal Rating Innovation Challenge is focused on identifying creative new solutions to connect the unconnected. These solutions may range from consumer products and novel mobile services to new business models and infrastructure proposals. Mozilla will award US$250,000 in funding and provide expert mentorship to bring these solutions to the market.

We seek to engage entrepreneurs, designers, researchers, and innovators all over the world to propose creative, engaging and scalable ideas that cultivate digital literacy and provide affordable access to the full diversity of the open Internet. In particular, we welcome proposals that build on local knowledge and expertise. Our aim is to entertain applications from all over the globe.

The US$250,000 in prize monies will be split in three categories:

  • Best Overall (key metric: scalability)
  • Best Overall Runner-up
  • Most Novel Solution (key metric: experimental with potential high reward)

This level of funding may be everything a team needs to go to market with a consumer product, or it may provide enough support to unlock further funding for an infrastructure project.

The official submission period will run from 1 November to 6 January. All submissions will be judged by a group of external experts by mid January. The selected semifinalists will receive mentorship for their projects before they demo their ideas in early March. The winners will be announced at the end of March 2017.

Submission Process

We have also launched www.equalrating.com, a website offering educational content and background information to support the challenge. On the site, you will find the 3 key frameworks that may be useful for building understanding of the different aspects of this topic. You can read important statistics that humanize this issue, and see how connectivity influences gender dynamics, education, economics, and a myriad of other social issues. The reports section provides further depth to the different positions of the current debate. In the coming weeks, we will also stream a series of webinars to further inform potential applicants about the challenge details. We hope these webinars also provide opportunities for dialogue and questions.

Connecting the unconnected is one of the greatest challenges of our time. No one organization or effort can tackle it alone. Spread the word. Submit your ideas to build innovative and scalable ways to bring Internet access to the Next Billion – and the other billions, as well. Please join us in addressing this grand challenge.

Further information: www.equalrating.com
Contact: equalrating@mozilla.com

Mozilla Open Innovation TeamBringing the Power of the Internet to the Next Billion and Beyond

Bringing the Power of the Internet to the Next Billion and Beyond was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Aki Sasakiscriptworker 0.8.0

Tl;dr: I just shipped scriptworker 0.8.0 (changelog) (RTD) (github) (pypi).
This is a non-backwards-compatible release.


By design, taskcluster workers are very flexible and user-input-driven. This allows us to put CI task logic in-tree, which means developers can modify that logic as part of a try push or a code commit. This allows for a smoother, self-serve CI workflow that can ride the trains like any other change.

However, a secure release workflow requires certain tasks to be less permissive and more auditable. If the logic behind code signing or pushing updates to our users is purely in-tree, and the related checks and balances are also in-tree, the possibility of a malicious or accidental change being pushed live increases.

Enter scriptworker. Scriptworker is a limited-purpose taskcluster worker type: each instance can only perform one type of task, and validates its restricted inputs before launching any task logic. The scriptworker instances are maintained by Release Engineering, rather than the Taskcluster team. This separates roles between teams, which limits damage should any one user's credentials become compromised.

scriptworker 0.8.0

The past several releases have included changes involving the chain of trust. Scriptworker 0.8.0 is the first release that enables GPG key management and chain-of-trust signing.

An upcoming scriptworker release will enable upstream chain of trust validation. Once enabled, scriptworker will fail fast on any task or graph that doesn't pass the validation tests.

Air MozillaConnected Devices Weekly Program Update, 13 Oct 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Air MozillaReps Weekly Meeting Oct. 13, 2016

Reps Weekly Meeting Oct. 13, 2016 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Air MozillaMozilla Learning, Science Community Call

Mozilla Learning, Science Community Call The presence and participation of women in STEM is on the rise thanks to the efforts of many across the globe, but still has many...

Chris McDonaldDescribing and Debugging Change Over Time in a System

So, Aria Stewart tweeted two questions and a statement the other day:

I wanted to discuss this idea as it pertains to debugging and the strategies I’ve been employing more often lately. But let’s examine the topic at face value first: describing change over time in a system. The abstract “system” is where a lot of the depth in this question comes from. It could be the computer you’re on, the code base you work in, data moving through your application, or an organization of people. Many things form a system of some sort, and time acts upon them all. I’m going to choose data moving through a system as the primary topic, but also talk about code bases over time.

Another part of the question that keeps it quite open is the lack of “why”, “what”, or “how” in it. This means we could discuss why the data needs to be transformed in various ways, why we added a feature or change some code, why an organization is investing in writing software at all. We could talk about what change at each step in a data pipeline, what changes have happened in a given commit, or what goals were accomplished each month by some folks. Or, the topic could be how a compiler changes the data as it passes through, how a programmer sets about making changes to a code base, or how an organization made its decisions to go in the directions it did. All quite valid and this is but a fraction of the depth in this simple question.

Let’s talk about systems a bit. At work, we have a number of services talking via a messaging system and a relational database. The current buzz phrase for this is “microservices”, but we also called them “service-oriented architectures” in the past. At my previous job, I worked in a much smaller system that had many components for gathering data, as well as a few components for processing that data and sending it back to the servers. Both of these systems share common attributes that most other systems must also cope with: events that provide data to be processed happen in functionally random order, and that data is fed into processors which then stage data to be consumed by other parts of the system.

When problems arise in systems like these, it can be difficult to tell which piece is causing the disruption. The point where the data changes from healthy to problematic may be a few steps removed from the layer where the problem is detected. Sometimes the data is also just good enough to cause only subtle problems. At the start of the investigation, all you might know is that something bad happened in the past. It is especially at these points that we need a description of the change that should happen to our data over time, with as much detail as possible.

The more snarky among us will point out that the source code is what is running, so why would you need some other description? The problem often isn’t that a given developer can’t understand code as they read it, though that may be the case. Rather, I find the problem is that code is meant to handle so many different cases and scenarios that the exact slice I care about is hard to track. Luckily, our brains build up mental models that we can use to traverse our code, eliminating blocks of code because we intuitively “know” the problem couldn’t be in them, because we have an idea of how our code should work. Unfortunately, it is often in the mental-model part where problems arise. The same tricks that let us read faster, and then miss errors in our own writing, are what cause problems when understanding why a system is working in some way we didn’t expect.

Mental models are often incomplete, due to using libraries, having multiple developers on a project, and the ravages of time clawing away at our memory. In some cases the mental model is just wrong. You may have intended to make a change but forgot to actually do it, maybe you read some documentation in a different way than intended, or possibly you made a mistake while writing the code, such as a copy/paste error or an off-by-one in a loop. The source of the flaw doesn’t really matter when we’re hunting a bug, though. The goal is to find the flaw in both the code and the mental model so it can be corrected; then we can try to identify why the model got out of whack in the first place.

Can we describe change in a system over time? Probably, to some reasonable degree of accuracy, but likely not completely. How does all of this tie into debugging? The strategy I’ve been practicing when I hit these situations is geared around the idea that my mental model and the code are not in agreement. I shut off anything that might interrupt deep focus, such as my monitors and phone, then gather a stack of paper and a pen. I write down the reproduction steps, at whatever level of detail they were given to me, to use as a guide in the next step.

I then write out every single step along the path that the data will take as it appears in my mental model, preferably in order. This often means a number of arrows as I put in steps I forgot. Because I know the shape of the data and the reproduction steps, I can make assumptions like “we have an active connection to the database.” Assumptions are okay at this point; I’m just building up a vertical slice of the system and how it affects a single type of data. Once I’ve gotten a complete list of events on the path, I start the coding part. I go through and add log lines that line up with the list I made, or improve them when I see there is already some logging at a point, running the code periodically to make sure my new code hasn’t caused any issues and that my mental model still holds true.
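A minimal sketch of that checkpoint-logging idea might look like the following. This is my own hypothetical illustration, not code from the post; `checkpoint`, `trail`, and the example pipeline steps are invented names:

```javascript
// Hypothetical helper: each entry on the hand-written event list gets a
// matching checkpoint() call in the code. Comparing the recorded trail
// against the paper list shows where code and mental model diverge.
const trail = [];

function checkpoint(step, data) {
  trail.push({ step: step, data: data });
  // In a real system this would go to your logger instead of an array.
  return data; // pass-through, so checkpoints slot into existing pipelines
}

// Example vertical slice: the step names mirror lines on the paper list.
function process(input) {
  const parsed = checkpoint('parsed input', input.trim());
  const staged = checkpoint('staged for consumer', parsed.toUpperCase());
  return staged;
}
```

After a run, reading `trail` top to bottom against the paper list makes any missing or out-of-order step stand out immediately.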

The goal of this exercise isn’t necessarily to bring the code base into alignment with my mental model, because my mental model may be wrong for a reason. Since there is a bug, though, rarely am I just fixing my mental model, unless of course I discover the root cause and have to say “working as intended.” As I go through, I make notes on my paper mental model where things vary; often forgotten steps make their way in now. Eventually I find some step that doesn’t match up. At that point I probably know how to solve the bug, but usually I keep going, correcting the bug in the code while continuing to analyze the system against my mental model.

I always keep going until I exhaust the steps in my mental model for a few reasons. First, since there was at least one major flaw in my mental model, there could be more, especially if that first one was obscuring other faults. Second, this is an opportunity to update my mental model with plenty of work already like writing the list and building any tools that were needed to capture the events. Last, the sort of logging and tools I build for validating my mental model, are often useful in the future when doing more debugging, so completing the path can make me better prepared for next time.

If you found this interesting, give this strategy a whirl. If you are wondering what level of detail I include in my event lists: commonly I’ll fill 1–3 pages with one event per line, and some lines scratched out or with arrows drawn in the middle. Usually this documentation becomes obsolete very fast, because it is nearly as detailed as the code and covers only a thin vertical slice for very specific data, not the generalized case. I don’t try to save it or format it for other folks’ consumption. They are just notes for me.

I think this strategy is a step toward fulfilling the statement portion of Aria’s tweet: “Practice this.” One of the people you need to be concerned with the most when trying to describe change in a system is yourself. Because if you can’t describe it to yourself, how are you ever going to describe it to others?

Emily DunhamCreating and deleting a Git branch named -D

Creating and deleting a Git branch named -D

git branch -D deletes a Git branch. Yet someone on IRC asked, “I accidentaly got a git branch named -D. How do I delete it?”. I took this as a personal challenge to create and nuke a -D branch myself, to explore this edge case of one of my favorite tools.

Making a branch with an illegal name

You create a branch in Git by typing git branch branchname. If you type git branch -D, the -D will be passed as an argument to the program by your shell, because your shell knows that all things starting with - are arguments.

You can tell your shell “I just mean a literal -, not an argument” by escaping it, like git branch \-D. But Git sees what we’re up to, and won’t let that fly. It complains fatal: '-D' is not a valid branch name. So even when we get the string -D into Git, the porcelain spits it right back out at us.

But since this is Unix and Everything’s A File(TM), I can create a branch with a perfectly fine name to get through the porcelain and then change it later. If I was at the Git wizardry level of Emily Xie I could just write the files into .git without the intermediate step of watching the porcelain do it first, but I’m not quite that good yet.

So, let’s make a branch with a perfectly fine name in a clean repo, then swap things around under the hood:

$ mkdir dont
$ cd dont
$ git init
$ git commit --allow-empty -am "initial commit"
[master (root-commit) da1f6b6] initial commit
$ git branch
* master
$ git checkout -b dashdee
Switched to a new branch 'dashdee'
$ git branch
* dashdee
$ grep -ri dashdee .git/
.git/HEAD:ref: refs/heads/dashdee
.git/logs/HEAD:da1f6b67446e83a456c4aeaeef1e256a8531640e da1f6b67446e83a456c4aeaeef1e256a8531640e E. Dunham <github@edunham.net> 1476402564 -0700    checkout: moving from master to dashdee
$ find -name dashdee

OK, so we’ve got this dashdee branch. Time to give it the name we’ve wanted all along:

$ find .git -type f -print0 | xargs -0 sed -i 's/dashdee/\-D/g'
$ mv .git/refs/heads/dashdee .git/refs/heads/-D
$ mv .git/logs/refs/heads/dashdee .git/logs/refs/heads/-D

Look what you’ve done...

Is this what you wanted?:

$ git branch
* -D

You are really on a branch named -D now. You have snuck around the guardrails, though they were there for a reason:

$ git commit --allow-empty -am "noooo"
[-D 18dac23] noooo

Try to make it go away

$ git branch -D -D
fatal: branch name required

It won’t give up that easily! You can’t escape:

$ git branch -D \-D
fatal: branch name required
$ git branch -D '-D'
fatal: branch name required
$ git branch -D '\-D'
error: branch '\-D' not found.

Notice the two categories of issue we’re hitting: in the first two examples, the shell was eating our branch name and not letting it through to Git. In the third case, we threw in so many escapes that Bash passed a string other than -D through to Git.

As an aside, I’m using Bash for this. Other shells might be differently quirky:

$ echo $0
$ bash --version
GNU bash, version 4.3.46(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Succeed at making it go away

Bash lets me nuke the branch with:

$ git branch -D ./-D
Deleted branch ./-D (was broken).

$ git branch

However, if your shell is not so easily duped into passing a string starting with - into a program, you can also fix things by manually removing the file that the branch -D command would have removed for you:

$ rm .git/refs/heads/-D
$ git branch

Clean up

$ cd ..
$ rm -rf dont

Please don’t do this kind of silly thing to any repo you care about. It’s just cruel.

Mark CôtéMozReview UI refactoring

In Q3 the MozReview team started to focus on tackling various usability issues. We started off with a targeted effort on the “Finish Review” dialog, which was not only visually unappealing but difficult to use. The talented David Walsh compressed the nearly full-screen dialog into a dropdown expanded from the draft banner, and he changed button names to clarify their purpose. We have some ideas for further improvements as well.

David has now embarked on a larger mission: reworking the main review-request UI to improve clarity and discoverability. He came up with some initial designs and discussed them with a few MozReview users, and here’s the result of that conversation:

This design provides some immediate benefits, and it sets us up for some future improvements. Here are the thoughts behind the changes:

The commits table, which was one of the first things we added to stock Review Board, was never in the right place. All the surrounding text and controls reflect just the commit you are looking at right now. Moving the table to a separate panel above the commit metadata is a better, and hopefully more intuitive, representation of the hierarchical relationship between commit series and individual commit.

The second obvious change is that the commit table is now collapsed to show only the commit you are currently looking at, along with its position (e.g. “commit 3 of 5”) and navigation links to previous and next commits. This places the emphasis on the selected commit, while still conveying the fact that it is part of a series of commits. (Even if that series is actually only one commit, it is still important to show that MozReview is designed to operate on series.) To address feedback from people who like always seeing the entire series, it will be possible to expand the table and set that as a preference.

The commit title is still redundant, but removing it from the second panel left the rest of the information there looking rather abandoned and confusing. I’m not sure if there is a good fix for this.

The last functional change is the addition of a “Quick r+” button. This fixes the annoying process of having to select “Finish Review”, set the dropdown to “r+”, and then publish. It also removes the need for the somewhat redundant and confusing “Finish Review” button, since for anything other than an r+ a reviewer will most likely want to leave one or more comments explaining their action. The “Quick r+” button will probably be added after the other changes are deployed, in part because we’re not completely satisfied with its look and position.

The other changes are cosmetic, but they make various data and controls look much slicker while also being more compact.

We are also noodling around with a further enhancement:

This is a banner containing data about the current commit, which will appear when the user scrolls past the commits table. It provides a constant reminder of the current commit, and we may put in a way to skip up to the commits table and/or navigate between commits. We may also fold the draft/“Finish Review” banner into this as well, although we’re still working out what that would look like. In any case, this should help avoid unnecessary scrolling while also presenting a “you are here” signpost.

As I mentioned, these changes are part of an on-going effort to improve general usability. This refactoring gets us into position to tackle more issues:

  • Since the commits table will be clearly separated from the commit metadata, we can move the controls that affect the whole series (e.g. autoland) up to the first panel, and leave controls that affect only the current commit (right now, only “Finish Review”/“Quick r+”) with the second panel. Again this should make things more intuitive.

  • Similarly, this gives us a better mechanism for moving the remaining controls that exist only on the parent review request (“Review Summary”/“Complete Diff”) onto the individual commit review requests, alongside the other series controls. This in turn means that we’ll be able to do away with the parent review request, or at least make some radical changes to it.

MozReview usage is slowly ticking upwards, as more and more Mozillians are seeing the value of splitting their work up into a series of small, atomic commits; appreciating the smooth flow of pushing commits up for review; and especially digging the autoland functionality. We’re now hard at work to make the whole experience delightful.

About:CommunityMaker Party 2016: Stand Up for a Better Internet

Cross post from: The Mozilla Blog.

Mozilla’s annual celebration of making online is challenging outdated copyright law in the EU. Here’s how you can participate.

It’s that time of year: Maker Party.

Each year, Mozilla hosts a global celebration to inspire learning and making online. Individuals from around the world are invited. It’s an opportunity for artists to connect with educators; for activists to trade ideas with coders; and for entrepreneurs to chat with makers.

This year, we’re coming together with that same spirit, and also with a mission: To challenge outdated copyright laws in the European Union. EU copyright laws are at odds with learning and making online. Their restrictive nature undermines creativity, imagination, and free expression across the continent. Mozilla’s Denelle Dixon-Thayer wrote about the details in her recent blog post.

By educating and inspiring more people to take action, we can update EU copyright law for the 21st century.

Over the past few months, everyday internet users have signed our petition and watched our videos to push for copyright reform. Now, we’re sharing copyright reform activities for your very own Maker Party.

Want to join in? Maker Party officially kicks off today.

Here are activities for your own Maker Party:

Be a #cczero Hero

In addition to all the amazing live events you can host or attend, we wanted to create a way for our global digital community to participate.

We’re planning a global contribute-a-thon to unite Mozillians around the world and grow the number of images in the public domain. We want to showcase what the open internet movement is capable of. And we’re making a statement when we do it: Public domain content helps the open internet thrive.

Check out our #cczero hero event page and instructions on contributing. You should be the owner of the copyright in the work. It can be fun, serious, artistic — whatever you’d like. Get started.

For more information on how to submit your work to the public domain or to Creative Commons, click here.


Post Crimes

Mozilla has created an app to highlight the outdated nature of some of the EU’s copyright laws, like the absurdity that photos of public landmarks can be unlawful. Try the Post Crimes web app: Take a selfie in front of the Eiffel Tower’s night-time light display, or the Little Mermaid in Denmark.

Then, send your selfie as a postcard to your Member of the European Parliament (MEP). Show European policymakers how outdated copyright laws are, and encourage them to forge reform. Get started.

Meme School

It’s absurd, but it’s true: Making memes may be technically illegal in some parts of the EU. Why? Exceptions for parody or quotation are not uniformly required by the present Copyright Directive.

Help Mozilla stand up for creativity, wit, and whimsy through memes! In this Maker Party activity, you and your friends will learn and discuss how complicated copyright law can be. Get started.


We can’t wait to see what you create this Maker Party. When you participate, you’re standing up for copyright reform. You’re also standing up for innovation, creativity, and opportunity online.

Air MozillaThe Joy of Coding - Episode 75

mconley livehacks on real Firefox bugs while thinking aloud.

Air Mozilla[Monthly Speaker Series] Metadata is the new data… and why that matters, with Harlo Holmes.

Today's proliferation of mobile devices and platforms such as Google and Facebook has exacerbated an extensive, prolific sharing about users and their behaviors in ways...

Armen ZambranoUsability improvements for Firefox automation initiative - Status update #7

In this update we will look at the progress made in the last two weeks.

A reminder that this quarter’s main focus is on:
  • Debugging tests on interactive workers (only Linux on TaskCluster)
  • Improve end to end times on Try (Thunder Try project)

For all bugs and priorities, check out the project management page.

Status update:
Debugging tests on interactive workers

Accomplished recently:
  • No new progress

Upcoming:
  • Android xpcshell
  • Blog/newsgroup post

Thunder Try - Improve end to end times on try

Project #1 - Artifact builds on automation

Accomplished recently:
  • The following platforms are now supported: linux, linux64, macosx64, win32, win64
  • An option was added to download symbols for our compiled artifacts during the artifact build

Upcoming:
  • Debug artifact builds on try. (Right now --artifact always results in an opt artifact build.)
  • Android artifact builds on try, thanks to nalexander.

Project #2 - S3 Cloud Compiler Cache

Some of the issues found last quarter for this project were around NSS, which was also in need of replacing. The project was put on hold until the NSS work was completed; we’re going to resume it in Q4.

Project #3 - Metrics

Accomplished recently:

  • Figure out what to do with these small populations:
    • Ignore them - too small to be statistically significant
    • Aggregate them - All the rarely run suites can be pushed into a “Other” category
    • Show some other statistic:  Maybe median is better?
    • Show median of past day, and 90% for the week:  That can show the longer trend, and short term situation, for better overall feel.
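As a rough sketch of the statistics being weighed here, Python’s statistics module can compute both figures; the suite timings below are invented purely for illustration:

```python
import statistics

# Hypothetical end-to-end times in minutes for one suite on try;
# all numbers here are invented for illustration.
times_today = [42, 38, 55, 47, 120, 41]                       # past day
times_week = times_today + [44, 39, 60, 52, 45, 300, 43, 40]  # past week

# Median of the past day: robust to the occasional slow outlier,
# so it reflects the short-term situation.
daily_median = statistics.median(times_today)

# 90th percentile over the week: captures the longer-term tail,
# including rare but very slow runs.
weekly_p90 = statistics.quantiles(times_week, n=10)[-1]

print(f"median (day): {daily_median} min, p90 (week): {weekly_p90} min")
```

Reporting the two side by side gives a feel for both the typical run and how bad the tail gets over a longer window.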

Project #4 - Build automation improvements
Accomplished recently:
  • Bug 1306167 - Updated build machines to use SSDs; Linux PGO builds now take half the time.


Project #5 - Run Web platform tests from the source checkout
Nothing to add in this edition.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Doug Belshaw5 steps to creating a sustainable digital literacies curriculum

Steps by Jake Hills via Unsplash

The following is based on my doctoral thesis, my experience as Web Literacy Lead at the Mozilla Foundation, and the work that I’ve done as an independent consultant, identifying, developing, and credentialing digital skills and literacies.

To go into more depth on this topic, check out my book, The Essential Elements of Digital Literacies.

 1. Take people on the journey with you

The quotation below, illustrated by Bryan Mathers, is an African proverb that I’ve learned to be true.

African proverb

The easiest thing to do, especially if you’re short of time, is to take a definition - or even a whole curriculum / scheme of work - and use it off-the-shelf. This rarely works, for a couple of reasons.

First, every context is different. Everything can look great, but the devil really is in the details of translating even very practical resources into your particular situation.

Second, because the people within your organisation or programme haven’t been part of the definition, they’re not invested in it. Why should they do something that’s been imposed upon them?

 2. Focus on identity

I’m a fan of Helen Beetham’s work. The diagram below is from a collaboration with Rhona Sharpe, which illustrates an important point: any digital literacies curriculum should scaffold towards digital identity.

Beetham & Sharpe (2009)

These days, we assume access (perhaps incorrectly?) and focus on skills and practices. What we need from any digital literacies curriculum is a way to develop learners’ identities.

There are obvious ways to do this - for example encourage students to create their own, independent, presence on the web. However, it’s also important to note that identities are multi-faceted, and so any digital literacies curriculum should encourage learners to develop identities in multiple places on the web. Interacting in various online communities involves different methods of expression.

 3. Cast the net wide

We all face immediate pressures and skillsets we need to develop. Nevertheless, when developing digital literacies, the mindsets behind these skillsets are equally important.

In my doctoral thesis and subsequent book I outlined eight ‘essential elements’ of digital literacies from the literature.


Whether you’re creating a course within a formal educational institution, attempting to improve the digital literacies of your colleagues in a corporate setting, or putting together an after-school programme for youth, the above skillsets and mindsets are equally applicable.

It’s all too easy to focus on surface-level skillsets without addressing underlying mindsets. Any curriculum should develop both, hand-in-hand. As for what the above elements mean, why not co-create the definitions with (representatives of) your target audience?

 4. Focus on creation, not compliance

The stimulus for a new digital literacies curriculum can often be the recognition of an existing lack of skills. This often leads to a deficit model when it comes to developing the learning activities involved in the curriculum. In other words, the course undertaken by learners becomes just about them reaching a pre-defined standard, rather than developing their digital identity.

Oliver Sacks quotation

As Amy Burvall points out through the quotation in her image above, to create is to perceive the world in a different way.

If you’re developing a digital literacies curriculum and have the ‘big stick’ of compliance hanging over you, then there are ways in which you can have your carrot (and eat it, too!) By encouraging learners to create artefacts and connections as part of the learning activities, not only do you have something to show the success of your programme, but you are also helping them become self-directed learners.

When individuals can point to something they have created that resides online, they move from ‘elegant consumption’ to digital creation. This can be tremendously empowering.

 5. Ensure meaningful credentialing

Until recently, the most learners could expect from having completed a course on a particular subject was a flimsy paper certificate, or perhaps a PDF of questionable value and validity.

Meta badge

All that has changed thanks to the power of the web, and Open Badges in particular. As you can discover in the Open Badges 101 course I put together with Bryan Mathers, there are many and varied ways in which you can scaffold learning.

Whether through complex game mechanics or more simple pathways, badges and microcredentialing work all the way from recognising that someone signed up for a course, through to completing it. In fact, some courses never finish, which means a never-ending way to show progression!
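Under the hood, a badge is just a small, verifiable JSON ‘assertion’ linking a learner to a badge definition, which is what lets badges recognise anything from signing up to completing a course. A minimal sketch in Python follows; every name, URL, and identifier here is a placeholder, not a real issuer:

```python
import json

# A minimal Open Badges-style assertion: the JSON blob that ties a
# recipient to a badge definition. All values below are placeholders;
# a real assertion would be hosted at its own verify URL.
assertion = {
    "uid": "abc123",
    "recipient": {
        "type": "email",
        "hashed": False,
        "identity": "learner@example.com",
    },
    "badge": "https://example.com/badges/digital-literacies.json",
    "verify": {
        "type": "hosted",
        "url": "https://example.com/assertions/abc123.json",
    },
    "issuedOn": "2016-10-14",
}

print(json.dumps(assertion, indent=2))
```

Because the assertion is hosted and machine-readable, anyone can fetch the verify URL and confirm the credential independently of the platform that issued it.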

 Final word (and a bonus step!)

The value of any digital literacies curriculum depends both on the depth to which it covers skillsets and mindsets, and on its currency. The online zeitgeist is fast-paced and ever-changing. As a result, learning activities are likely to need regular updating.


Good practice when creating a curriculum for digital literacies, therefore, is to version your work. Ensure that people know when it was created, and the number of the latest iteration. This also makes it easier when creating digital credentials that align with it.

If you take nothing else away from this post, learn this: experiment. Be as inclusive as possible, bringing people along with you. Ask people what they think. Try new things and jettison what doesn’t work. Ensure that what you do has ‘exchange value’ for your learners. Celebrate developments in their mindsets as well as their skillsets!

Questions? Comments? I’m @dajbelshaw on Twitter, or you can email me: hello@dynamicskillset.com


Air MozillaTechWomen Emerging Leader Presentations (2016)

As part of the TechWomen program, Mozilla has had the fortunate opportunity to host five Emerging Leaders over the past month. Estelle Ndedi (Cameroon), Chioma...