Karl Dubost: [worklog] Edition 037. Autumn is here. The fall of -webkit-

Two Japanese national holidays during the week. And there goes the week. Tune of the Week: Anderson .Paak - Silicon Valley

Webcompat Life

Progress this week:

Today: 2016-09-26T09:24:48.064519
336 open issues
----------------------
needsinfo       13
needsdiagnosis  110
needscontact    9
contactready    27
sitewait        161
----------------------

You are welcome to participate

  • Monday day off in Japan: Respect for the Aged Day.
  • Thursday day off in Japan: Autumn Equinox.

Firefox 49 has been released, and an important piece of the cake is now delivered to every user. You can get some context on why some (not all) -webkit- prefixes landed in Gecko and the impact on Web standards.

We have a team meeting soon in Taipei.

The W3C was meeting this week in Lisbon. Specifically about testing.

I did a bit of testing on prefetch links and how they appear in devtools.

Webcompat issues

(a selection of some of the bugs worked on this week).

  • Little by little we are adding to our list of issues about CSS zoom. Firefox is the only browser that does not support this non-standard property. It originated in IE (Trident), was imported into WebKit (Safari), then kept alive in Blink (Chrome, Opera), and finally made its way into Edge. Sadness.

Webcompat.com development

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Mic Berman

How will you Lead? From a talk at MarsDD in Toronto April 2016

Niko Matsakis: Intersection Impls

As some of you are probably aware, on the nightly Rust builds, we currently offer a feature called specialization, which was defined in RFC 1210. The idea of specialization is to improve Rust’s existing coherence rules to allow for overlap between impls, so long as one of the overlapping impls can be considered more specific. Specialization is hotly desired because it can enable powerful optimizations, but also because it is an important component for modeling object-oriented designs.

The current specialization design, while powerful, is also limited in a few ways. I am going to work on a series of articles that explore some of those limitations as well as possible solutions.

This particular post serves two purposes: it describes the running example I want to consider, and it describes one possible solution: intersection impls (more commonly called lattice impls). We’ll see that intersection impls are a powerful feature, but they don’t completely solve the problem I am aiming to solve, and they also introduce other complications. My conclusion is that they may be a part of the final solution, but are not sufficient on their own.

Running example: interconverting between Copy and Clone

I’m going to structure my posts around a detailed look at the Copy and Clone traits, and in particular about how we could use specialization to bridge between the two. These two traits are used in Rust to define how values can be duplicated. The idea is roughly like this:

  • A type is Copy if it can be copied from one place to another just by copying bytes (i.e., with memcpy). This basically means types that consist purely of scalar values (e.g., u32, [u32; 4], etc.).
  • The Clone trait expands upon Copy to include all types that can be copied at all, even if doing so requires executing custom code or allocating memory (for example, a String or Vec<u32>).

These two traits are clearly related. In fact, Clone is a supertrait of Copy, which means that every type that is copyable must also be cloneable.
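
In the standard library this relationship is written directly into the trait declaration. A simplified sketch of the actual definition (the real one carries a few extra attributes) looks like this:

// `Clone` is a supertrait of `Copy`: anything that is `Copy`
// must also be `Clone`.
pub trait Copy: Clone {
    // no methods; `Copy` is purely a marker
}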

For better or worse, supertraits in Rust work a bit differently than superclasses from OO languages. In particular, the two traits are still independent from one another. This means that if you want to declare a type to be Copy, you must also supply a Clone impl. Most of the time, we do that with a #[derive] annotation, which auto-generates the impls for you:

#[derive(Copy, Clone, ...)]
struct Point {
    x: u32,
    y: u32,
}

That derive annotation will expand out to two impls looking roughly like this:

struct Point {
    x: u32,
    y: u32,
}

impl Copy for Point {
    // Copy has no methods; it can also be seen as a "marker"
    // that indicates that a cloneable type can also be
    // memcopy'd.
}

impl Clone for Point {
    fn clone(&self) -> Point {
        *self // this will just do a memcpy
    }
}

The second impl (the one implementing the Clone trait) seems a bit odd. After all, that impl is written for Point, but in principle it could be used for any Copy type. It would be nice if we could add a blanket impl that converts from Copy to Clone and that applies to all Copy types:

// Hypothetical addition to the standard library:
impl<T:Copy> Clone for T {
    fn clone(&self) -> T {
        *self
    }
}

If we had such an impl, then there would be no need for Point above to implement Clone explicitly, since it implements Copy, and the blanket impl can be used to supply the Clone impl. (In other words, you could just write #[derive(Copy)].) As you have probably surmised, though, it’s not that simple. Adding a blanket impl like this has a few complications we’d have to overcome first. This is still true with the specialization system described in RFC 1210.

There are a number of examples where these kinds of blanket impls might be useful. Some examples: implementing PartialOrd in terms of Ord, implementing PartialEq in terms of Eq, and implementing Debug in terms of Display.
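
For instance, the Ord-to-PartialOrd bridge might look like the sketch below. This impl does not exist in the standard library today, and it would run into exactly the same coherence problems discussed in the rest of this post:

use std::cmp::Ordering;

// Hypothetical blanket impl, analogous to the Copy/Clone one above:
impl<T: Ord> PartialOrd for T {
    fn partial_cmp(&self, other: &T) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}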

Coherence and backwards compatibility

Hi! I’m the language feature coherence! You may remember me from previous essays like Little Orphan Impls or RFC 1023.

Let’s take a step back and just think about the language as it is now, without specialization. With today’s Rust, adding a blanket impl<T:Copy> Clone for T would be massively backwards incompatible. This is because of the coherence rules, which aim to prevent there from being more than one impl of a given trait applicable to any type (or, for generic traits, set of types).

So, if we tried to add the blanket impl now, without specialization, it would mean that every type annotated with #[derive(Copy, Clone)] would stop compiling, because we would now have two Clone impls: the one generated by derive and the blanket impl we are adding. Obviously not feasible.
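
To make that concrete, here is a sketch of the conflict; the blanket impl is hypothetical, and with it in place the compiler would report conflicting implementations of Clone for Point:

// Hypothetical blanket impl added to the standard library:
impl<T: Copy> Clone for T {
    fn clone(&self) -> T { *self }
}

// Meanwhile, in an existing downstream crate:
#[derive(Copy, Clone)] // error: conflicting implementations of trait `Clone`
struct Point {
    x: u32,
    y: u32,
}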

Why didn’t we add this blanket impl already then?

You might then wonder why we didn’t add this blanket impl converting from Copy to Clone in the wild west days, when we broke every existing Rust crate on a regular basis. We certainly considered it. The answer is that, if you have such an impl, the coherence rules mean that it would not work well with generic types.

To see what problems arise, consider the type Option:

#[derive(Copy, Clone)]
enum Option<T> {
    Some(T),
    None,
}

You can see that Option<T> derives Copy and Clone. But because Option is generic over T, those impls have a slightly different look to them once we expand them out:

impl<T:Copy> Copy for Option<T> { }

impl<T:Clone> Clone for Option<T> {
    fn clone(&self) -> Option<T> {
        match *self {
            Some(ref v) => Some(v.clone()),
            None => None,
        }
    }
}

Before, the Clone impl for Point was just *self. But for Option<T>, we have to do something more complicated, which actually calls clone on the contained value (in the case of a Some). To see why, imagine a type like Option<Rc<u32>> – this is clearly cloneable, but it is not Copy. So the impl is rewritten so that it only assumes that T: Clone, not T: Copy.

The problem is that types like Option<T> are sometimes Copy and sometimes not. So if we had the blanket impl that converts all Copy types to Clone, and we also have the impl above that implements Clone for Option<T> if T: Clone, then we can easily wind up in a situation where there are two applicable impls. For example, consider Option<u32>: it is Copy, and hence we could use the blanket impl that just returns *self. But it also fits the Clone-based impl I showed above. This is a coherence violation, because now the compiler has to pick which impl to use. Obviously, in the case of the trait Clone, it shouldn’t matter too much which one it chooses, since they both have the same effect, but the compiler doesn’t know that.

Enter specialization

OK, all of that prior discussion was assuming the Rust of today. So what if we adopted the existing specialization RFC? After all, its whole purpose is to improve coherence so that it is possible to have multiple impls of a trait for the same type, so long as one of those implementations is more specific. Maybe that applies here?

In fact, the RFC as written today does not. The reason is that the RFC defines rules that say an impl A is more specific than another impl B if impl A applies to a strict subset of the types which impl B applies to. Let’s consider some arbitrary trait Foo. Imagine that we have an impl of Foo that applies to any Option<T>:

impl<T> Foo for Option<T> { .. }

This rule would then allow a second impl specific to Option<i32>; this impl would specialize the more generic one:

impl Foo for Option<i32> { .. }

Here, the second impl is more specific than the first, because while the first impl can be used for Option<i32>, it can also be used for lots of other types, like Option<u32>, Option<i64>, etc. So that means that these two impls would be accepted under RFC #1210. If the compiler ever had to choose between them, it would prefer the impl that is specific to Option<i32> over the generic one that works for all T.
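
For completeness, here is a minimal sketch of what such a pair looks like with the nightly feature enabled (the describe method is made up for illustration; the Foo trait is the one from the example above): under RFC 1210, the item in the less specific impl must be marked default so that a more specific impl is allowed to override it.

#![feature(specialization)] // nightly-only

trait Foo {
    fn describe(&self) -> &'static str;
}

impl<T> Foo for Option<T> {
    // `default` marks this method as overridable by a more specific impl.
    default fn describe(&self) -> &'static str {
        "some Option<T>"
    }
}

impl Foo for Option<i32> {
    // Specializes the impl above; used whenever the type is exactly Option<i32>.
    fn describe(&self) -> &'static str {
        "an Option<i32>"
    }
}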

But if we try to apply that rule to our two Clone impls, we run into a problem. First, we have the blanket impl:

impl<T:Copy> Clone for T { .. }

and then we have an impl tailored to Option<T> where T: Clone:

impl<T:Clone> Clone for Option<T> { .. }

Now, you might think that the second impl is more specific than the blanket impl. After all, the blanket impl can be used for any type, whereas the second impl can only be used for Option<T>. Unfortunately, this isn’t quite right. After all, the blanket impl cannot be used for any type T: it can only be used for Copy types. And we already saw that there are lots of types for which the second impl can be used where the first impl is inapplicable. In other words, neither impl covers a subset of the other – rather, they cover two distinct, but overlapping, sets of types.

To see what I mean, let’s look at some examples:

| Type              | Blanket impl | `Option` impl |
| ----              | ------------ | ------------- |
| i32               | APPLIES      | inapplicable  |
| Box<i32>          | inapplicable | inapplicable  |
| Option<i32>       | APPLIES      | APPLIES       |
| Option<Box<i32>>  | inapplicable | APPLIES       |

Note in particular the first and fourth rows. The first row shows that the blanket impl is not a subset of the Option impl. The last row shows that the Option impl is not a subset of the blanket impl either. That means that these two impls would be rejected by RFC #1210 and hence adding a blanket impl now would still be a breaking change. Boo!

To see the problem from another angle, consider this Venn diagram, which indicates, for every impl, the set of types that it matches. As you can see, there is overlap between our two impls, but neither is a strict subset of the other:

+-----------------------------------------+
|[impl<T:Copy> Clone for T]               |
|                                         |
| Example: i32                            |
| +---------------------------------------+-----+
| |                                       |     |
| | Example: Option<i32>                  |     |
| |                                       |     |
+-+---------------------------------------+     |
  |                                             |
  |   Example: Option<Box<i32>>                 |
  |                                             |
  |          [impl<T:Clone> Clone for Option<T>]|
  +---------------------------------------------+

Enter intersection impls

One of the first ideas proposed for solving this is the so-called lattice specialization rule, which I will call intersection impls, since I think that captures the spirit better. The intuition is pretty simple: if you have two impls that have a partial intersection, but which don’t strictly subset one another, then you can add a third impl that covers precisely that intersection, and hence which subsets both of them. So now, for any type, there is always a most specific impl to choose. To get the idea, it may help to consider this ASCII Art Venn diagram. Note the difference from above: there is now an impl (indicated with = lines and . shading) covering precisely the intersection of the other two.

+-----------------------------------------+
|[impl<T:Copy> Clone for T]               |
|                                         |
| Example: i32                            |
| +=======================================+-----+
| |[impl<T:Copy> Clone for Option<T>].....|     |
| |.......................................|     |
| |.Example: Option<i32>..................|     |
| |.......................................|     |
+-+=======================================+     |
  |                                             |
  |   Example: Option<Box<i32>>                 |
  |                                             |
  |          [impl<T:Clone> Clone for Option<T>]|
  +---------------------------------------------+

Intersection impls have some nice properties. For one thing, it’s a kind of minimal extension of the existing rule. In particular, if you are just looking at any two impls, the rules for deciding which is more specific are unchanged: the only difference when adding in intersection impls is that coherence permits overlap when it otherwise wouldn’t.

They also give us a good opportunity to recover some optimization. Consider the two impls in this case: the blanket impl that applies to any T: Copy simply copies some bytes around, which is very fast. The impl that is tailored to Option<T>, however, does more work: it matches on the option and then recursively calls clone. This work is necessary if T: Copy does not hold, but otherwise it’s wasted work. With an intersection impl, we can recover the full performance:

// intersection impl:
impl<T:Copy> Clone for Option<T> {
    fn clone(&self) -> Option<T> {
        *self // since T: Copy, we can do this here
    }
}

A note on compiler messages

I’m about to pivot and discuss the shortcomings of intersection impls. But before I do so, I want to talk a bit about the compiler messages here. I think that the core idea of specialization – that you want to pick the impl that applies to the most specific set of types – is fairly intuitive. But working it out in practice can be kind of confusing, especially at first. So whenever we propose any extension, we have to think carefully about the error messages that might result.

In this particular case, I think that we could give a rather nice error message. Imagine that the user had written these two impls:

impl<T: Copy> Clone for T { // impl A
    fn clone(&self) -> T { ... }
}

impl<T: Clone> Clone for Option<T> { // impl B
    fn clone(&self) -> Option<T> { ... }
}

As we’ve seen, these two impls overlap but neither specializes the other. One might imagine an error message that says as much, and which also suggests the intersection impl that must be added:

error: two impls overlap, but neither specializes the other
  |
2 | impl<T: Copy> Clone for T {...}
  | ----
  |
4 | impl<T: Clone> Clone for Option<T> {...}
  |
  | note: both impls apply to a type like `Option<T>` where `T: Copy`;
  |       to specify the behavior in this case, add the following intersection impl:
  |       `impl<T: Copy> Clone for Option<T>`

Note the message at the end. The wording could no doubt be improved, but the key point is that we should be able to tell you exactly what impl is still needed.

Intersection impls do not solve the cross-crate problem

Unfortunately, intersection impls don’t give us the backwards compatibility that we want, at least not by themselves. The problem is, if we add the blanket impl, we also have to add the intersection impl. Within the same crate, this might be ok. But if this means that downstream crates have to add an intersection impl too, that’s a big problem.

Intersection impls may force you to predict the future

There is one other problem with intersection impls that arises in cross-crate situations, which nrc described on the tracking issue: sometimes there is a theoretical intersection between impls, but that intersection is empty in practice, and hence you may not be able to write the code you wanted to write. Let me give you an example. This problem doesn’t show up with the Copy/Clone trait, so we’ll switch briefly to another example.

Imagine that we are adding a RichDisplay trait to our project. This is much like the existing Display trait, except that it can support richer formatting like ANSI codes or a GUI. For convenience, we want any type that implements Display to also implement RichDisplay (but without any fancy formatting). So we add a trait and blanket impl like this one (let’s call it impl A):

trait RichDisplay { /* elided */ }
impl<D: Display> RichDisplay for D { /* elided */ } // impl A

Now, imagine that we are also using some other crate widget that contains various types, including Widget<T>. This Widget<T> type does not implement Display. But we would like to be able to render a widget, so we implement RichDisplay for this Widget<T> type. Even though we didn’t define Widget<T>, we can implement a trait for it because we defined the trait:

impl<T: RichDisplay> RichDisplay for Widget<T> { ... } // impl B

Well, now we have a problem! You see, according to the rules from RFC 1023, impls A and B are considered to potentially overlap, and hence we will get an error. This might surprise you: after all, impl A only applies to types that implement Display, and we said that Widget<T> does not. The problem has to do with semver: because Widget<T> was defined in another crate, it is outside of our control. In this case, the other crate is allowed to implement Display for Widget<T> at some later time, and that should not be a breaking change. But imagine that this other crate added an impl like this one (which we can call impl C):

impl<T: Display> Display for Widget<T> { ... } // impl C

Such an impl would cause impls A and B to overlap. Therefore, coherence considers these to be overlapping – however, specialization does not consider impl B to be a specialization of impl A, because, at the moment, there is no subset relationship between them. So there is a kind of catch-22 here: because the impl may exist in the future, we can’t consider the two impls disjoint, but because it doesn’t exist right now, we can’t consider them to be specializations.

Clearly, intersection impls don’t help to address this issue, as the set of intersecting types is empty. You might imagine having some alternative extension to coherence that permits impl B, on the logic that if impl C were added in the future, that’d be fine, because impl B would then be a specialization of impl A.

This logic is pretty dubious, though! For example, impl C might have been written another way (we’ll call this alternative version of impl C impl C2):

impl<T: WidgetDisplay> Display for Widget<T> { ... } // impl C2
//   ^^^^^^^^^^^^^^^^ changed this bound

Note that instead of working for any T: Display, there is now some other trait T: WidgetDisplay in use. Let’s say it’s only implemented for optional 32-bit integers right now (for some reason or another):

trait WidgetDisplay { ... }
impl WidgetDisplay for Option<i32> { ... }

So now if we had impls A, B, and C2, we would have a different problem. Now impls A and B would overlap for Widget<Option<i32>>, but they would not overlap for Widget<String>. The reason here is that Option<i32>: WidgetDisplay, and hence impl A applies. But String: RichDisplay (because String: Display) and hence impl B applies. Now we are back in the territory where intersection impls come into play. So, again, if we had impls A, B, and C2, one could imagine writing an intersection impl to cover this situation:

impl<T: RichDisplay + WidgetDisplay> RichDisplay for Widget<T> { ... } // impl D

But, of course, impl C2 has yet to be written, so we can’t really write this intersection impl now, in advance. We have to wait until the conflict arises before we can write it.

You may have noticed that I was careful to specify that both the Display trait and Widget type were defined outside of the current crate. This is because RFC 1023 permits the use of negative reasoning if either the trait or the type is under local control. That is, if the RichDisplay and the Widget type were defined in the same crate, then impls A and B could co-exist, because we are allowed to rely on the fact that Widget does not implement Display. The idea here is that the only way that Widget could implement Display is if I modify the crate where Widget is defined, and once I am modifying things, I can also make any other repairs (such as adding an intersection impl) that are necessary.

Conclusion

Today we looked at a particular potential use for specialization: adding a blanket impl that implements Clone for any Copy type. We saw that the current subset-only logic for specialization isn’t enough to permit adding such an impl. We then looked at one proposed fix for this, intersection impls (often called lattice impls).

Intersection impls are appealing because they increase expressiveness while keeping the general feel of the subset-only logic. They also have an explicit nature that appeals to me, at least in principle. That is, if you have two impls that partially overlap, the compiler doesn’t select which one should win: instead, you write an impl to cover precisely that intersection, and hence specify it yourself. Of course, that explicit nature can also be verbose and irritating sometimes, particularly since you will often want the intersection impl to behave the same as one of the other two (rather than doing some third, different thing).

Moreover, the explicit nature of intersection impls causes problems across crates:

  • they don’t allow you to add a blanket impl in a backwards compatible fashion;
  • they interact poorly with semver, and specifically the limitations on negative logic imposed by RFC 1023.

My conclusion then is that intersection impls may well be part of the solution we want, but we will need additional mechanisms. Stay tuned for additional posts.

A note on comments

As is my wont, I am going to close this post for comments. If you would like to leave a comment, please go to this thread on Rust’s internals forum instead.

Cameron Kaiser: TenFourFox 45.5.0b1 available: now with little-endian (integer) typed arrays, AltiVec VP9, improved MP3 support and a petulant rant

The TenFourFox 45.5.0 beta (yes, it says it's 45.4.0, I didn't want to rev the version number yet) is now available for testing (downloads, hashes). This blog post will serve as the current "release notes" since we have until November 8 for the next release and I haven't decided everything I'll put in it, so while I continue to do more work I figured I'd give you something to play with. Here's what's new so far, roughly in order of importance.

First, minimp3 has been converted to a platform decoder. Simply doing that fixed a number of other bugs which were probably related to how we chunked frames, such as Google Translate voice clips getting truncated and problems with some types of MP3 live streams; now we use Mozilla's built-in frame parser instead and in this capacity minimp3 acts mostly as a disembodied codec. The new implementation works well with Google Translate, Soundcloud, Shoutcast and most of the other things I tried. (See, now there's a good use for that Mac mini G4 gathering dust on your shelf: install TenFourFox and set it up for remote screensharing access, and use it as a headless Internet radio -- I'm sitting here listening to National Public Radio over Shoutcast in a foxbox as I write this. Space-saving, environmentally responsible computer recycling! Yes, I know I'm full of great ideas. Yes. You're welcome.)

Interestingly, or perhaps frustratingly, although it somewhat improved Amazon Music (by making duration and startup more reliable) the issue with tracks not advancing still persisted for tracks under a certain critical length, which is dependent on machine speed. (The test case here was all the little five or six second Fingertips tracks from They Might Be Giants' Apollo 18, which also happens to be one of my favourite albums, and is kind of wrecked by this problem.) My best guess is that Amazon Music's JavaScript player interface ends up on a different, possibly asynchronous code path in 45 than 38 due to a different browser feature profile, and if the track runs out somehow it doesn't get the end-of-stream event in time. Since machine speed was a factor, I just amped up JavaScript to enter the Baseline JIT very quickly. That still doesn't fix it completely and Apollo 18 is still messed up, but it gets the critical track length down to around 10 or 15 seconds on this Quad G5 in Reduced mode and now most non-pathological playlists will work fine. I'll keep messing with it.

In addition, this release carries the first pass at AltiVec decoding for VP9. It has some of the inverse discrete cosine and one of the inverse Hadamard transforms vectorized, and I also wrote vector code for two of the convolutions but they malfunction on the iMac G4 and it seems faster without them because a lot of these routines work on unaligned data. Overall, our code really outshines the SSE2 versions I based them on if I do say so myself. We can collapse a number of shuffles and merges into a single vector permute, and the AltiVec multiply-sum instruction can take an additional constant for use as a bias, allowing us to skip an add step (the SSE2 version must do the multiply-sum and then add the bias rounding constant in separate operations; this code occurs quite a bit). Only some of the smaller transforms are converted so far because the big ones are really intimidating. I'm able to model most of these operations on my old Core 2 Duo Mac mini, so I can do a step-by-step conversion in a relatively straightforward fashion, but it's agonizingly slow going with these bigger ones. I'm also not going to attempt any of the encoding-specific routines, so if Google wants this code they'll have to import it themselves.

G3 owners, even though I don't support video on your systems, you get a little boost too because I've also cut out the loopfilter entirely. This improves everybody's performance and the mostly minor degradation in quality just isn't bad enough to be worth the CPU time required to clean it up. With this initial work the Quad is able to play many 360p streams at decent frame rates in Reduced mode and in Highest Performance mode even some 480p ones. The 1GHz iMac G4, which I don't technically support for video as it is below the 1.25GHz cutoff, reliably plays 144p and even some easy-to-decode (pillarboxed 4:3, mostly, since it has lots of "nothing" areas) 240p. This is at least as good as our AltiVec VP8 performance and as I grind through some of the really heavyweight transforms it should get even better.

To turn this on, go to our new TenFourFox preference pane (TenFourFox > Preferences... and click TenFourFox) and make sure MediaSource is enabled, then visit YouTube. You should have more quality settings now and I recommend turning annotations off as well. Pausing the video while the rest of the page loads is always a good idea as well as before changing your quality setting; just click once anywhere on the video itself and wait for it to stop. You can evaluate it on my scientifically validated set of abuses of grammar (and spelling), 1970s carousel tape decks, gestures we make at Gmail other than the middle finger and really weird MTV interstitials. However, because without further configuration Google will "auto-"control the stream bitrate and it makes that decision based on network speed rather than dropped frames, I'm leaving the "slower" appellation because frankly it will be, at least by default. Nevertheless, please advise if you think MSE should be the default in the next version or if you think more baking is necessary, though the pref will be user-exposed regardless.

But the biggest and most far-reaching change is, as promised, little-endian typed arrays (the "LE" portion of the IonPower-NVLE project). The rationale for this change is that, largely due to the proliferation of asm.js code and the little-endian Emscripten systems that generate it, there will be more and more code our big-endian machines can't run properly being casually imported into sites. We saw this with images on Facebook, and later with WhatsApp Web, and also with MEGA.nz, and others, and so on, and so forth. asm.js isn't merely the domain of tech demos and high-end ported game engines anymore.

The change is intentionally very focused and very specific. Only typed array access is converted to little-endian, and only integer typed array access at that: DataView objects, the underlying ArrayBuffers and regular untyped arrays in particular remain native. When a multibyte integer (16-bit halfword or 32-bit word) is written out to a typed array in IonPower-LE, it is transparently byteswapped from big-endian to little-endian and stored in that format. When it is read back in, it is byteswapped back to big-endian. Thus, the intrinsic big-endianness of the engine hasn't changed -- jsvals and doubles are still tag followed by payload, and integers and single-precision floats are still MSB at the lowest address -- only the way it deals with an integer typed array. Since asm.js uses a big typed array buffer essentially as a heap, this is sufficient to present at least a notional illusion of little-endianness as the asm.js script accesses that buffer as long as those accesses are integer.

I mentioned that floats (neither single-precision nor doubles) are not byteswapped, and there's an important reason for that. At the interpreter level, the virtual machine's typed array load and store methods are passed through the GNU gcc built-in to swap the byte order back and forth (which, at least for 32 bits, generates pretty efficient code). At the Baseline JIT level, the IonMonkey MacroAssembler is modified to call special methods that generate the swapped loads and stores in IonPower, but it wasn't nearly that simple for the full Ion JIT itself because both unboxed scalar values (which need to stay big-endian because they're native) and typed array elements (which need to be byte-swapped) go through the same code path. After I spent a couple days struggling with this, Jan de Mooij suggested I modify the MIR for loading and storing scalar values to mark it if the operation actually accesses a typed array. I added that to the IonBuilder and now Ion compiled code works too.

All of these integer accesses have almost no penalty: there's a little bit of additional overhead on the interpreter, but Baseline and Ion simply substitute the already-built-in PowerPC byteswapped load and store instructions (lwbrx, stwbrx, lhbrx, sthbrx, etc.) that we already employ for irregexp for these accesses, and as a result we incur virtually no extra runtime overhead at all. Although the PowerPC specification warns that byte-swapped instructions may have additional latency on some implementations, no PPC chip ever used in a Power Mac falls in that category, and they aren't "cracked" on G5 either. The pseudo-little endian mode that exists on G3/G4 systems but not on G5 is separate from these assembly language instructions, which work on all PowerPCs including the G5 going all the way back to the original 601.

Floating point values, on the other hand, are a different story. There are no instructions to directly store a single or double precision value in a byteswapped fashion, and since there are also no direct general purpose register-floating point register moves, the float has to be spilled to memory and picked up by a GPR (or two, if it's a double) and then swapped at that point to complete the operation. To get it back requires reversing the process, along with the GPR (or two) getting spilled this time to repopulate the double or float after the swap is done. All that would have significantly penalized float arrays and we have enough performance problems without that, so single and double precision floating point values remain big-endian.

Fortunately, most of the little snippets of asm.js floating around (that aren't entire Emscriptenized blobs: more about that in a moment) seem perfectly happy with this hybrid approach, presumably because they're oriented towards performance and thus integer operations. MEGA.nz seems to load now, at least what I can test of it, and WhatsApp Web now correctly generates the QR code to allow your phone to sync (just in time for you to stop using WhatsApp and switch to Signal because Mark Zuckerbrat has sold you to his pimps here too).

But what about bigger things? Well ...

Yup. That's DOSBOX emulating MECC's classic Oregon Trail (from the Internet Archive's MS-DOS Game Library), converted to asm.js with Emscripten and running inside TenFourFox. Go on and try that in 45.4. It doesn't work; it just throws an exception and screeches to a halt.

To be sure, it doesn't fully work in this release of 45.5 either. But some of the games do: try playing Oregon Trail yourself, or Where in the World is Carmen Sandiego or even the original, old school in its MODE 40 splendour, Те́трис (that's Tetris, comrade). Even Commander Keen Goodbye Galaxy! runs, though not even the Quad can make it reasonably playable. In particular the first two probably will run on nearly any Power Mac since they're not particularly dependent on timing (I was playing Oregon Trail on my iMac G4 last night), though you should expect it may take anywhere from 20 seconds to a minute to actually boot the game (depending on your CPU) and I'd just mute the tab since not even the Quad G5 at full tilt can generate convincing audio. But IonPower-LE will now run them, and they run pretty well, considering.

Does that seem impractical? Okay then: how about something vaguely useful ... like ... Linux?

This is, of course, Fabrice Bellard's famous jslinux emulator, and yes, IonPower now runs this too. Please don't expect much out of it if you're not on a high-end G5; even the Quad at full tilt took about 80 seconds elapsed time to get to a root prompt. But it really works and it's useable.

Getting into ridiculous territory was running Linux on OpenRISC:

This is the jor1k emulator and it's only for the highest end G5 systems, folks. Set it to 5fps to have any chance of booting it in less than five minutes. But again -- the point is not that the dog walks well; it's that it walks at all.

vi freaks like me will also get a kick out of vim.js. Or, if you miss Classic apps, now TenFourFox can be your System 7 (mouse sync is a little too slow here but it boots):

Now for the bad news: notice that I said things don't fully work. With em-dosbox, the Emscriptenoberated DOSBOX, notice that I only said some games run in TenFourFox, not most, not even many. Wolfenstein 3D, for example, gets as far as the main menu and starting a new game, and then bugs out with a "Reboot requested" message which seems to originate from the emulated BIOS. (It works fine on my MacBook Air, and I did get it to run under PCE.js, albeit glacially.) Catacombs 3D just sits there, trying to load a level and never finishing. Most of the other games don't even get that far and a few don't start at all.

I also tried a Windows 95 emulator (also DOSBOX, apparently), which got part way into the boot sequence and then threw a JavaScript exception "SimulateInfiniteLoop"; the Internet Archive's arcade games under MAME which starts up and then exhausts recursion and aborts (this seems like it should be fixable or tunable, but I haven't explored this further so far); and of course programs requiring WebGL will never, ever run on TenFourFox.

Debugging Emscripten goo output is quite difficult and usually causes tumours in lab rats, but several possible explanations come to mind (none of them mutually exclusive). One could be that the code actually does depend on the byte ordering of floats and doubles as well as integers, as do some of the Mozilla JIT conformance tests. However, that's not ever going to change because it requires making everything else suck for that kind of edge case to work. Another potential explanation is that the intrinsic big-endianness of the engine is causing things to fail somewhere else, such as they managed to get things inadvertently written in such a way that the resulting data was byteswapped an asymmetric number of times or some other such violation of assumptions. Another one is that the execution time is just too damn long and the code doesn't account for that possibility. Finally, there might simply be a bug in what I wrote, but I'm not aware of any similar hybrid endian engine like this one and thus I've really got nothing to compare it to.

In any case, the little-endian typed array conversion definitely fixes the stuff that needed to get fixed and opens up some future possibilities for web applications we can also run like an Intel Mac can. The real question is whether asm.js compilation (OdinMonkey, as opposed to IonPower) pays off on PowerPC now that the memory model is apparently good enough at least for most things. It would definitely run faster than IonPower, possibly several times faster, but the performance delta would not be as massive as IonPower versus the interpreter (about a factor of 40 difference), the compilation step might bring lesser systems to their knees, and it would require some significant additional engineering to get it off the ground (read: a lot more work for me to do). Given that most of our systems are not going to run these big massive applications well even with execution time cut in half or even 2/3rds (and some of them don't work correctly as it is), it might seem a real case of diminishing returns to make that investment of effort. I'll just have to see how many free cycles I have and how involved the effort is likely to be. For right now, IonPower can run them and that's the important thing.

Finally, the petulant rant. I am a fairly avid reader of Thom Holwerda's OSNews because it reports on a lot of marginal and unusual platforms and computing news that most of the regular outlets eschew. The articles are in general very interesting, including this heads-up on booting the last official GameCube game (and since the CPU in the Nintendo GameCube is a G3 derivative, that's even relevant on this blog). However, I'm going to take issue with one part of his otherwise thought-provoking discussion on the new Apple A10 processor and the alleged impending death of Mac OS X (now macOS), where he says, "I didn't refer to Apple's PowerPC days for nothing. Back then, Apple knew it was using processors with terrible performance and energy requirements, but still had to somehow convince the masses that PowerPC was better faster stronger than x86; claims which Apple itself exposed — overnight — as flat-out lies when the company switched to Intel."

Besides my issue with what he links in that last sentence as proof, which actually doesn't establish Apple had been lying (it's actually a Low End Mac piece contemporary with the Intelcalypse asking if they were), this is an incredibly facile oversimplification. Before the usual suspects hop on the comments with their usual suspecty things, let's just go ahead for the sake of argument and say everything its detractors said about the G5 and the late generation G4 systems are true, i.e., they're hot, underpowered and overhungry. (I contest the overhungry part in particular for the late laptop G4 systems, by the way. My 2005 iBook G4 to this day still gets around five hours on a charge if I'm aggressive and careful about my usage. For a 2005 system that's damn good, especially since Apple said six for the same model I own but only 4.5 for the 2008 MacBooks. At least here you're comparing Reality Distortion Field to Reality Distortion Field, and besides, all the performance/watt in the world doesn't do you a whole hell of a lot of good if your machine's out of puff.)

So let's go ahead and just take all that as given for discussion purposes. My beef with that comment is it conveniently ignores every other PowerPC chip before the Intel transition just to make the point. For example, PC Magazine back in the day noted that a 400MHz Yosemite G3 outperformed a contemporary 450MHz Pentium II on most of their tests (read it for yourself, April 20, 1999, page 53). The G3, which doesn't have SIMD of any kind, even beat the P2 running MMX code. For that matter, a 350MHz 604e was over twice as fast at integer performance than a 300MHz P2. I point all of this out not (necessarily) to go opening old wounds but to remind those ignorant of computing history that there was a time in "Apple's PowerPC days" when even the architecture's detractors will admit it was at least competitive. That time clearly wasn't when the rot later set in, but he certainly doesn't make that distinction.

To be sure, was this the point of his article? Not really, since he was more addressing ARM rather than PowerPC, but it is sort of. Thom asserts in his exchange with Grüber Alles that Apple and those within the RDF cherrypick benchmarks to favour what suits them, which is absolutely true and I just did it myself, but Apple isn't any different than anyone else in that regard (put away the "tu quoque" please) and Apple did this as much in the Power Mac days to sell widgets as they do now in the iOS ones. For that matter, Thom himself backtracks near the end and says, "there is one reason why benchmarks of Apple's latest mobile processors are quite interesting: Apple's inevitable upcoming laptop and desktop switchover to its own processors." For the record I see this as highly unlikely due to the Intel Mac's frequent use as a client virtual machine host, though it's interesting to speculate. But the rise of the A-series is hardly comparable with Apple's PowerPC days at all, at least not as a monolithic unit. If he had compared the benchmark situation with when the PowerPC roadmap was running out of gas in the 2004-5 timeframe, by which time even boosters like yours truly would have conceded the gap was widening but Apple relentlessly ginned up evidence otherwise, I think I'd have grudgingly concurred. And maybe that's actually what he meant. However, what he wrote lumps everything from the 601 to the 970MP into a single throwaway comment, is baffling from someone who also uses and admires Mac OS 9 (as I do), and dilutes his core argument. Something like that I'd expect from the breezy mainstream computer media types. Thom, however, should know better.

(On a related note, Ars Technica was a lot better when they were more tech and less politics.)

Next up: updates to our custom gdb debugger and a maintenance update for TenFourFoxBox. Stay tuned and in the meantime try it and see if you like it. Post your comments, and, once you've played a few videos or six, what you think the default should be for 45.5 (regular VP8 video or MSE/VP9).

Mitchell Baker: Living with Diverse Perspectives

Diversity and Inclusion is more than having people of different demographics in a group.  It is also about having the resulting diversity of perspectives included in the decision-making and action of the group in a fundamental way.

I’ve had this experience lately, and it demonstrated to me both why it can be hard and why it’s so important.  I’ve been working on a project where I’m the individual contributor doing the bulk of the work. This isn’t because there’s a big problem or conflict; instead it’s something I feel needs my personal touch. Once the project is complete, I’m happy to describe it with specifics. For now, I’ll describe it generally.

There’s a decision to be made.  I connected with the person I most wanted to be comfortable with the idea to make sure it sounded good.  I checked with our outside attorney just in case there was something I should know.  I checked with the group of people who are most closely affected and would lead the decision and implementation if we proceed. I received lots of positive response.

Then one last person checked in with me from my first level of vetting and spoke up.  He’s sorry for the delay, etc., but has concerns.  He wants us to explore a bunch of different options before deciding if we’ll go forward at all, and if so how.

At first I had that sinking feeling of “Oh bother, look at this.  I am so sure we should do this and now there’s all this extra work and time and maybe change. Ugh!”  I got up and walked around a bit and did a few things that put me in a positive frame of mind.  Then I realized — we had added this person to the group for two reasons.  One, he’s awesome — both creative and effective. Second, he has a different perspective.  We say we value that different perspective. We often seek out his opinion precisely because of that perspective.

This is the first time his perspective has pushed me to do more, or to do something differently, or perhaps even prevented me from doing something that I think I want to do.  So this is the first time the different perspective is doing more than reinforcing what seemed right to me.

That led me to think “OK, got to love those different perspectives” a little ruefully.  But as I’ve been thinking about it I’ve come to internalize the value and to appreciate this perspective.  I expect the end result will be more deeply thought out than I had planned.  And it will take me longer to get there.  But the end result will have investigated some key assumptions I started with.  It will be better thought out, and better able to respond to challenges. It will be stronger.

I still can’t say I’m looking forward to the extra work.  But I am looking forward to a decision that has a much stronger foundation.  And I’m looking forward to the extra learning I’ll be doing, which I believe will bring ongoing value beyond this particular project.

I want to build Mozilla into an example of what a trustworthy organization looks like.  I also want to build Mozilla so that it reflects experience from our global community and isn’t living in a geographic or demographic bubble.  Having great people be part of a diverse Mozilla is part of that.  Creating a welcoming environment that promotes the expression and positive reaction to different perspectives is also key.  As we learn more and more about how to do this we will strengthen the ways we express our values in action and strengthen our overall effectiveness.

Mozilla WebDev Community: Beer and Tell – September 2016

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

emceeaich: Gopher Tessel

First up was emceeaich, who shared Gopher Tessel, a project for running a Gopher server (Gopher being an Internet protocol that was popular before the World Wide Web) on a Tessel. Tessel is a small circuit board that runs Node.js projects; Gopher Tessel reads sensors (such as the temperature sensor) connected to the board, and exposes their values via Gopher. It can also control lights connected to the board.

groovecoder: Crypto: 500 BC – Present

Next was groovecoder, who shared a preview of a talk about cryptography throughout history. The talk is based on “The Code Book” by Simon Singh. Notable moments and techniques mentioned include:

  • 499 BCE: Histiaeus of Miletus shaves the heads of messengers, tattoos messages on their scalps, and sends them after their hair has grown back to hide the message.
  • ~100 AD: Milk of tithymalus plant is used as invisible ink, activated by heat.
  • ~700 BCE: Scytale
  • 49 BC: Caesar cipher
  • 1553 AD: Vigenère cipher

bensternthal: Home Monitoring & Weather Tracking

bensternthal was up next, and he shared his work building a dashboard with weather and temperature information from his house. Ben built several Node.js-based applications that collect data from his home weather station, from his Nest thermostat, and from Weather Underground and send all the data to an InfluxDB store. The dashboard itself uses Grafana to plot the data, and all of these servers are run using Docker.

The repositories for the Node.js applications and the Docker configuration are available on GitHub:

craigcook: ByeHolly

Next was craigcook, who shared a virtual yearbook page that he made as a farewell tribute to former teammate Holly Habstritt-Gaal, who recently took a job at another company. The page shows several photos that are clipped at the edges to look curved like an old television screen. This is done in CSS using clip-path with an SVG-based path for clipping. The SVG used is also defined using proportional units, which allows it to warp and distort correctly for different image sizes, as seen in the variety of images it is applied to on the page.

peterbe: react-buggy

peterbe told us about react-buggy, a client for viewing Github issues implemented in React. It is a rewrite of buggy, a similar client peterbe wrote for Bugzilla bugs. Issues are persisted in Lovefield (a wrapper for IndexedDB) so that the app can function offline. The client also uses elasticlunr.js to provide full-text search on issue titles and comments.

shobson: tic-tac-toe

Last up was shobson, who shared a small Tic-Tac-Toe game on the viewsourceconf.org offline page that is shown when the site is in offline mode and you attempt to view a page that is not available offline.


If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Air Mozilla: Participation Q3 Demos

Watch the Participation Team share the work from the last quarter in the Demos.

Emily Dunham: Setting a Freenode channel's taxonomy info

Some recent flooding in a Freenode channel sent me on a quest to discover whether the network’s services were capable of setting a custom message rate limit for each channel. As far as I can tell, they are not.

However, the problem caused me to re-read the ChanServ help section:

/msg chanserv help
- ***** ChanServ Help *****
- ...
- Other commands: ACCESS, AKICK, CLEAR, COUNT, DEOP, DEVOICE,
-                 DROP, GETKEY, HELP, INFO, QUIET, STATUS,
-                 SYNC, TAXONOMY, TEMPLATE, TOPIC, TOPICAPPEND,
-                 TOPICPREPEND, TOPICSWAP, UNQUIET, VOICE,
-                 WHY
- ***** End of Help *****

Taxonomy is a cool word. Let’s see what taxonomy means in the context of IRC:

/msg chanserv help taxonomy
- ***** ChanServ Help *****
- Help for TAXONOMY:
-
- The taxonomy command lists metadata information associated
- with registered channels.
-
- Examples:
-     /msg ChanServ TAXONOMY #atheme
- ***** End of Help *****

Follow its example:

/msg chanserv taxonomy #atheme
- Taxonomy for #atheme:
- url                       : http://atheme.github.io/
- ОХЯЕБУ                    : лололол
- End of #atheme taxonomy.

That’s neat; we can elicit a URL and some field with a Cyrillic and apparently custom name. But how do we put metadata into a Freenode channel’s taxonomy section? Google has no useful hits (hence this blog post), but further digging into ChanServ’s manual does help:

/msg chanserv help set

- ***** ChanServ Help *****
- Help for SET:
-
- SET allows you to set various control flags
- for channels that change the way certain
- operations are performed on them.
-
- The following subcommands are available:
- EMAIL           Sets the channel e-mail address.
- ...
- PROPERTY        Manipulates channel metadata.
- ...
- URL             Sets the channel URL.
- ...
- For more specific help use /msg ChanServ HELP SET command.
- ***** End of Help *****

Set arbitrary metadata with /msg chanserv set #channel property key value

The commands /msg chanserv set #channel email a@b.com and /msg chanserv set #channel property email a@b.com appear to function identically, with the former being a convenient wrapper around the latter.

So that’s how #atheme got their fancy Cyrillic taxonomy: Someone with the appropriate permissions issued the command /msg chanserv set #atheme property ОХЯЕБУ лололол.

Behaviors of channel properties

I’ve attempted to deduce the rules governing custom metadata items, because I couldn’t find them documented anywhere.

  1. Issuing a set property command with a property name but no value deletes the property, removing it from the taxonomy.
  2. A property is overwritten each time someone with the appropriate permissions issues a /set command with a matching property name (more on the matching in a moment). The property name and value are stored with the same capitalization as the command issued.
  3. The algorithm which decides whether to overwrite an existing property or create a new one is not case sensitive. So if you set ##test email test@example.com and then set ##test EMAIL foo, the final taxonomy will show no field called email and one field called EMAIL with the value foo.
  4. When displayed, taxonomy items are sorted first in alphabetical order (case insensitively), then by length. For instance, properties with the names a, AA, and aAa would appear in that order, because the initial alphabetization is case-insensitive.
  5. Attempting to place [mIRC color codes](http://www.mirc.com/colors.html) in the property name results in the error “Parameters are too long. Aborting.” However, placing color codes in the value of a custom property works just fine.

Other uses

As a final note, you can also do basically the same thing with Freenode’s NickServ, to set custom information about your nickname instead of about a channel.

Support.Mozilla.Org: What’s Up with SUMO – 22nd September

Hello, SUMO Nation!

How are you doing? Have you seen the First Inaugural Firefox Census already? Have you filled it out? Help us figure out what kind of people use Firefox! You can get to it right after you read through our latest & greatest news below.

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings

  • LATEST ONE: 21st of September – you can read the notes here and see the video at AirMozilla.
  • NEXT ONE: happening on the 28th of September!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Platform

  • PLATFORM REMINDER! The Platform Meetings are BACK! If you missed the previous ones, you can find the notes in this document. (here’s the channel you can subscribe to). We really recommend going for the document and videos if you want to make sure we’re covering everything as we go.
  • A few important and key points to make regarding the migration:
    • We are trying to keep as many features from Kitsune as possible. Some processes might change. We do not know yet how they will look.
    • Any and all training documentation that you may be accessing is generic – both for what you can accomplish with the platform and the way roles and users are called within the training. They do not have much to do with the way Mozilla or SUMO operate on a daily basis. We will use these to design our own experience – “translate” them into something more Mozilla, so to speak.
    • All the important information that we have has been shared with you, one way or another.
    • The timelines and schedule might change depending on what happens.
  • We started discussions about Ranks and Roles after the migration – join in! More topics will start popping up in the forums for discussion, but they will all be gathered in the first post of the main migration thread.
  • If you are interested in test-driving the new platform now, please contact Madalina.
    • IMPORTANT: the whole place is a work in progress, and a ton of the final content, assets, and configurations (e.g. layout pieces) are missing.
  • QUESTIONS? CONCERNS? Please take a look at this migration document and use this migration thread to put questions/comments about it for everyone to share and discuss. As much as possible, please try to keep the migration discussion and questions limited to those two places – we don’t want to chase ten different threads in too many different places.

Social

Support Forum

  • Once again, and with gusto – SUUUUMO DAAAAAY! Go for it!
  • Final reminder: If you are using email notifications to know what posts to return to, jscher2000 has a great tip (and tool) for you. Check it out here!

Knowledge Base & L10n

Firefox

  • for Android
    • Version 49 is out! Now you can enjoy the following:

      • caching selected pages (e.g. mozilla.org) for offline retrieval
      • usual platform and bug fixes
  • for Desktop
    • Version 49 is out! Enjoy the following:

      • text-to-speech in Reader mode (using your OS voice modules)
      • ending support for older Mac OS versions
      • ending support for older CPUs
      • ending support for Firefox Hello
      • usual platform and bug fixes
  • for iOS
    • No news from under the apple tree this time!

By the way – it’s the first day of autumn, officially! I don’t know about you, but I am looking forward to mushroom hunting, longer nights, and a bit of rain here and there (as long as it stops at some point). What is your take on autumn? Tell us in the comments!

Cheers and see you around – keep rocking the helpful web!

Mozilla WebDev CommunityExtravaganza – September 2016

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Survey Gizmo Integration with Google Analytics

First up was shobson, who talked about a survey feature on MDN that prompts users to leave feedback about how MDN helped them complete a task. The survey is hosted by SurveyGizmo, and custom JavaScript included on the survey reports the user’s answers back to Google Analytics. This allows us to filter on the feedback from users to answer questions like, “What sections of the site are not helping users complete their tasks?”.

View Source Offline Mode

shobson also mentioned the View Source website, which is now offline-capable thanks to Service Workers. The pages are now cached if you’ve ever visited them, and the images on the site have offline fallbacks if you attempt to view them with no internet connection.

SHIELD Content Signing

Next up was mythmon, who shared the news that Normandy, the backend service for SHIELD, now signs the data that it sends to Firefox using the Autograph service. The signature is included with responses via the Content-Signature header. This signing will allow Firefox to only execute SHIELD recipes that have been approved by Mozilla.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

Neo

Eli was up next, and he shared Neo, a tool for setting up new React-based projects with zero configuration. It installs and configures many useful dependencies, including Webpack, Babel, Redux, ESLint, Bootstrap, and more! Neo can be installed as a command used to initialize new projects or as a dependency added to existing projects, and acts as a single dependency that pulls in all the different libraries you’ll need.

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Standu.ps Reboot

Last up was pmac, who shared a note about how he and willkg are rewriting the standu.ps service using Django, and are switching the rewrite to use GitHub authentication instead of Persona. They have a staging server set up and expect to have news next month about the availability of the new service.

Standu.ps is a service used by several teams at Mozilla for posting status updates as they work, and includes an IRC bot for quick posting of updates.


If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Air MozillaReps weekly, 22 Sep 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Firefox NightlyThese Weeks in Firefox: Issue 1

Every two weeks, engineering teams working on Firefox Desktop get together and update each other on things that they’re working on. These meetings are public. Details on how to join, as well as meeting notes, are available here.

We feel that the bleeding edge development state captured in those meeting notes might be interesting to our Nightly blog audience. To that end, we’re taking a page out of the Rust and Servo playbook, and offering you handpicked updates about what’s going on at the forefront of Firefox development!

Expect these every two weeks or so.

Thanks for using Nightly, and keep on rocking the free web!

Highlights

Contributor(s) of the Week

  • The team has nominated Adam (adamgj.wong), who has helped clean-up some of our Telemetry APIs. Great work, Adam!

Project Updates

 

Add-ons

  • andym wants to remind everybody that the Add-ons team is still triaging and fixing SDK bugs (like this one, for example).

Electrolysis (e10s)

Core Engineering

  • ksteuber rewrote the Snappy Symbolication Server (mainly used for the Gecko Profiler for Windows builds) and this will be deployed soon.
  • felipe is in the process of designing experiment mechanisms for testing different behaviours for Flash (allowing some, denying some, click-to-play some, based on heuristics).

Platform UI and other Platform Audibles

Quality of Experience

Sync / Firefox Accounts

Uncategorized

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Air MozillaPrivacy Lab - September 2016 - EU Privacy Panel

Privacy Lab - September 2016 - EU Privacy Panel Want to learn more about EU Privacy? Join us for a lively panel discussion of EU Privacy, including GDPR, Privacy Shield, Brexit and more. After...

About:CommunityOne Mozilla Clubs


In 2015, The Mozilla Foundation launched the Mozilla Clubs program to bring people together locally to teach, protect and build the open web in an engaging and collaborative way. Within a year it grew to include 240+ Clubs in 100+ cities globally, and now is growing to reach new communities around the world.

Today we are excited to share a new focus for Mozilla Clubs taking place on a University or College Campus (Campus Clubs). Mozilla Campus Clubs blend the passion and student focus of the former Firefox Student Ambassador program and Take Back The Web Campaign with the existing structure of  Mozilla Clubs to create a unified model for participation on campuses!

Mozilla Campus Clubs take advantage of the unique learning environments of Universities and Colleges to bring groups of students together to teach, build and protect the open web. It builds upon the Mozilla Club framework to provide targeted support to those on campus through its:

  1. Structure:  Campus Clubs include an Executive Team in addition to the Club Captain position, who help develop programs and run activities specific to the 3 impact areas (teach, build, protect).
  2. Training & Support: Like all Mozilla Clubs, Regional Coordinators and Club Captains receive training and mentorship throughout their clubs journey. However the nature of the training and support for Campus Clubs is specific to helping students navigate the challenges of setting up and running a club in the campus context.
  3. Activities: Campus Club activities are structured around 3 impact areas (teach, build, protect). Club Captains in a University or College can find suggested activities (some specific to students) on the website here.

These clubs will be connected to the larger Mozilla Club network to share resources, curriculum, mentorship and support with others around the world. In 2017 you’ll see additional unification in terms of a joint application process for all Regional Coordinators and a unified web presence.

This is an exciting time for us to unite our network of passionate contributors and create new opportunities for collaboration, learning, and growth within our Mozillian communities. We also see the potential of this unification to allow for greater impact across Mozilla’s global programs, projects and initiatives.

If you’re currently involved in Mozilla Clubs and/or the FSA program, here are some important things to know:

  • The Firefox Student Ambassador Program is now Mozilla Campus Clubs: After many months of hard work and careful planning the Firefox Ambassador Program (FSA) has officially transitioned to Mozilla Clubs as of Monday September 19th, 2016. For full details about the Firefox Student Ambassador transition check out this guide here.
  • Firefox Club Captains will now be Mozilla Club Captains: Firefox Club Captains who already have a club, a structure, and a community set up on a university/college should register your club here to be partnered with a Regional Coordinator and have access to new resources and opportunities, more details are here.
  • Current Mozilla Clubs will stay the same: Any Mozilla Club that already exists will stay the same. If they happen to be on a university or college campus Clubs may choose to register as a Campus Club, but are not required to do so.
  • There is a new application for Regional Coordinators (RC’s): Anyone interested in taking on more responsibility within the Clubs program can apply here.  Regional Coordinators mentor Club Captains that are geographically close to them. Regional Coordinators support all Club Captains in their region whether they are on campus or elsewhere.
  • University or College students who want to start a Club at their University or College may apply here. Students who primarily want to lead a club on a campus for/with other university/college students will apply to start a Campus Club.
  • People who want to start a club for any type of learner apply here. Anyone who wants to start a club that is open to all kinds of learners (not limited to specifically University students) may apply to start a Club here.

Individuals who are leading Mozilla Clubs commit to running regular (at least monthly) gatherings, participate in community calls, and contribute resources and learning materials to the community. They are part of a network of leaders and doers who support and challenge each other. By increasing knowledge and skills in local communities Club leaders ensure that the internet is a global public resource, open and accessible to all.

This is the beginning of a long term collaboration for the Mozilla Clubs Program. We are excited to continue to build momentum for Mozilla’s mission through new structures and supports that will help engage more people with a passion for the open web.

Air MozillaThe Joy of Coding - Episode 72

The Joy of Coding - Episode 72 mconley livehacks on real Firefox bugs while thinking aloud.

Julia ValleraIntroducing Mozilla Campus Clubs


In 2015, The Mozilla Foundation launched the Mozilla Clubs program to bring people together locally to teach, protect and build the open web in an engaging and collaborative way. Within a year it grew to include 240+ Clubs in 100+ cities globally, and now is growing to reach new communities around the world.

Today we are excited to share a new focus for Mozilla Clubs taking place on a University or College Campus (Campus Clubs). Mozilla Campus Clubs blend the passion and student focus of the former Firefox Student Ambassador program and Take Back The Web Campaign with the existing structure of  Mozilla Clubs to create a unified model for participation on campuses!

Mozilla Campus Clubs take advantage of the unique learning environments of Universities and Colleges to bring groups of students together to teach, build and protect the open web. It builds upon the Mozilla Club framework to provide targeted support to those on campus through its:

  1. Structure:  Campus Clubs include an Executive Team in addition to the Club Captain position, who help develop programs and run activities specific to the 3 impact areas (teach, build, protect).
  2. Specific Training & Support: Like all Mozilla Clubs, Regional Coordinators and Club Captains receive training and mentorship throughout their clubs journey. However the nature of the training and support for Campus Clubs is specific to helping students navigate the challenges of setting up and running a club in the campus context.
  3. Activities: Campus Club activities are structured around 3 impact areas (teach, build, protect). Club Captains in a University or College can find suggested activities (some specific to students) on the website here.

These clubs will be connected to the larger Mozilla Club network to share resources, curriculum, mentorship and support with others around the world. In 2017 you’ll see additional unification in terms of a joint application process for all Club leaders and a unified web presence.

This is an exciting time for us to unite our network of passionate contributors and create new opportunities for collaboration, learning, and growth within our Mozillian communities. We also see the potential of this unification to allow for greater impact across Mozilla’s global programs, projects and initiatives.

If you’re currently involved in Mozilla Clubs and/or the FSA program, here are some important things to know:

  • The Firefox Student Ambassador Program is now Mozilla Campus Clubs: After many months of hard work and careful planning the Firefox Ambassador Program (FSA) has officially transitioned to Mozilla Clubs as of Monday September 19th, 2016. For full details about the Firefox Student Ambassador transition check out this guide here.
  • Firefox Club Captains will now be Mozilla Club Captains: Firefox Club Captains who already have a club, a structure, and a community set up on a university/college should register your club here to be partnered with a Regional Coordinator and have access to new resources and opportunities, more details are here.
  • Current Mozilla Clubs will stay the same: Any Mozilla Club that already exists will stay the same. If they happen to be on a university or college campus Clubs may choose to register as a Campus Club, but are not required to do so.
  • There is a new application for Regional Coordinators (RC’s): Anyone interested in taking on more responsibility within the Clubs program can apply here.  Regional Coordinators mentor Club Captains that are geographically close to them. Regional Coordinators support all Club Captains in their region whether they are on campus or elsewhere.
  • University or College students who want to start a Club at their University or College may apply here. Students who primarily want to lead a club on a campus for/with other university/college students will apply to start a Campus Club.
  • People who want to start a club for any type of learner apply here. Anyone who wants to start a club that is open to all kinds of learners (not limited to specifically University students) may apply on the Mozilla Club website.

Individuals who are leading Mozilla Clubs commit to running regular (at least monthly) gatherings, participate in community calls, and contribute resources and learning materials to the community. They are part of a network of leaders and doers who support and challenge each other. By increasing knowledge and skills in local communities Club leaders ensure that the internet is a global public resource, open and accessible to all.

This is the beginning of a long term collaboration for the Mozilla Clubs Program. We are excited to continue to build momentum for Mozilla’s mission through new structures and supports that will help engage more people with a passion for the open web.

Andy McKaySystem Add-ons

System add-ons are a new kind of add-on in Firefox, you might also know them as Go Faster add-ons.

These are interesting add-ons, they allow Firefox developers to ship code faster by writing the code in an add-on and then allow that to be developed and shipped independently of the main Firefox code.

Mostly these are not using WebExtensions, and there are some questions about whether they should. I've been thinking about this one for a while and here are my thoughts at the moment - they aren't more than thoughts at this time.

System add-ons are really "internal" pieces of code that would otherwise be shipped in mozilla-central, blessed by the module owner and generally approved. They are maintained by someone who is actively involved in their code (usually but not always a Mozilla employee). They have gone through security and privacy reviews. They are tested against Firefox code in the test infrastructure on each release. They sometimes do things that no other add-on should be allowed to do.

This is all in contrast to third party add-ons that you'll find on AMO. When you look through all the reasoning behind WebExtensions, you'll find that a lot of the reasons involve things like "hard to maintain", "security problems" and so on. Please see my earlier posts for more on this. I would say that these reasons don't apply to system add-ons.

So do system add-ons need to be WebExtensions? Maybe they don't. In fact I think if we try and push them into being WebExtensions we'll create a scenario where WebExtensions become the blocker.

System add-ons will want to do things that existing APIs don't cover, so new APIs will need to be added to WebExtensions. Some of the things that system add-ons will want to do are things that third party add-ons should not be allowed to do. Then we need to add in another permissions layer to say some add-on developers can use those APIs and others can't.

Already there is a distinction between what can and cannot land in Firefox and that's made by the module owners and people who work on Firefox.

If a system add-on does something unusual, do you end up in a scenario where you write a WebExtension API that only one add-on uses? The maintenance burden of creating an API for one part of Firefox that only one add-on can use doesn't seem worth it. It also makes WebExtensions the blocker and slows system add-on development down.

It feels to me like there isn't a compelling reason to require system add-ons to be WebExtensions. Instead we should encourage them to be WebExtensions where it makes sense and let them be regular bootstrapped add-ons otherwise.

George WrightAn Introduction to Shmem/IPC in Gecko

We use shared memory (shmem) pretty extensively in the graphics stack in Gecko. Unfortunately, there isn’t a huge amount of documentation regarding how the shmem mechanisms in Gecko work and how they are managed, which I will attempt to address in this post.

Firstly, it’s important to understand how IPC in Gecko works. Gecko uses a language called IPDL to define IPC protocols. This is effectively a description language which formally defines the format of the messages that are passed between IPC actors. The IPDL code is then compiled into C++ code by our IPDL compiler, and we can then use the generated classes in Gecko’s C++ code to do IPC related things. IPDL class names start with a P to indicate that they are IPDL protocol definitions.

IPDL has a built-in shmem type, simply called mozilla::ipc::Shmem. This holds a weak reference to a SharedMemory object, and code in Gecko operates on this. SharedMemory is the underlying platform-specific implementation of shared memory and facilitates the shmem subsystem by implementing the platform-specific API calls to allocate and deallocate shared memory regions, and obtain their handles for use in the different processes. Of particular interest is that on OS X we use the Mach virtual memory system, which uses a Mach port as the handle for the allocated memory regions.

mozilla::ipc::Shmem objects are fully managed by IPDL, and there are two different types: normal Shmem objects, and unsafe Shmem objects. Normal Shmem objects are mostly intended to be used by IPC actors to send large data chunks between themselves as this is more efficient than saturating the IPC channel. They have strict ownership policies which are enforced by IPDL; when the Shmem object is sent across IPC, the sender relinquishes ownership and IPDL restricts the sender’s access rights so that it can neither read nor write to the memory, whilst the receiver gains these rights. These Shmem objects are created/destroyed in C++ by calling PFoo::AllocShmem() and PFoo::DeallocShmem(), where PFoo is the Foo IPDL interface being used. One major caveat of these “safe” shmem regions is that they are not thread safe, so be careful when using them on multiple threads in the same process!

Unsafe Shmem objects are basically a free-for-all in terms of access rights. Both sender and receiver can always read/write to the allocated memory and careful control must be taken to ensure that race conditions are avoided between the processes trying to access the shmem regions. In graphics, we use these unsafe shmem regions extensively, but use locking vigorously to ensure correct access patterns. Unsafe Shmem objects are created by calling PFoo::AllocUnsafeShmem(), but are still destroyed in the same manner as normal Shmem objects by simply calling PFoo::DeallocShmem().

With the work currently ongoing to move our compositor to a separate GPU process, there are some limitations with our current shmem situation. Notably, a SharedMemory object is effectively owned by an IPDL channel, and when the channel goes away, the SharedMemory object backing the Shmem object is deallocated. This poses a problem as we use shmem regions to back our textures, and when/if the GPU process dies, it’d be great to keep the existing textures and simply recreate the process and IPC channel, then continue on like normal. David Anderson is currently exploring a solution to this problem, which will likely be to hold a strong reference to the SharedMemory region in the Shmem object, thus ensuring that the SharedMemory object doesn’t get destroyed underneath us so long as we’re using it in Gecko.

Justin CrawfordDebugging WebExtension Popups

Note: In the time since I last posted here I have been doing a bit more hands-on web development [for example, on the View Source website]. Naturally this has led me to learn new things. I have learned a few things that may be new to others, too. I’ll drop those here when I run across them.


I have been looking for a practical way to learn about WebExtensions, the new browser add-on API in Firefox. This API is powerful for a couple reasons: It allows add-on developers to build add-ons that work across browsers, and it’s nicer to work with than the prior Firefox add-on API (for example, it watches code and reloads changes without restarting the browser).

So I found a WebExtensions add-on to hack on, which I’ll probably talk about in a later post. The add-on has a chrome component, which is to say it includes changes to the browser UI. Firefox browser chrome is just HTML/CSS/JavaScript, which is great. But it took me a little while to figure out how to debug it.

The tools for doing this are all fairly recent. The WebExtension documentation on MDN is fresh from the oven, and the capabilities shown below were missing just a few months ago.

Here’s how to get started debugging WebExtensions in the browser:

First, enable the Browser Toolbox. This is a special instance of Firefox developer tools that can inspect and debug the browser’s chrome. Cool, eh? Here’s how to make it even cooler:

  • Set up a custom Firefox profile with the Toolbox enabled, so you don’t have to enable it every time you fire up your development environment. Consider just using the DevPrefs add-on, which toggles a variety of preferences (including Toolbox) to optimize the browser for add-on development.
  • Once you have a profile with DevPrefs installed, you can launch it with your WebExtension like so: ./node_modules/.bin/web-ext run --source-dir=src --firefox-binary {path to firefox binary} --firefox-profile {name of custom profile} (See the WebExtensions command reference for more information.)

Next, with the instance of Firefox that appears when you run the above command, go to the Tools -> Web Developer -> Browser Toolbox menu. A window should appear that looks just like a standard Firefox developer tools window. But this window is imbued with the amazing ability to debug the browser itself. Try it: Use the inspector to look at the back button!

browser_toolbox

In that window you’ll see a couple small icons near the top right. One looks like a waffle. This button makes the popup sticky — just like a good waffle. This is quite helpful, since otherwise the popup will disappear the minute you try to inspect, debug, or modify it using the Browser Toolbox.

popup_sticky

Next to the waffle is a button with a downward arrow on it. This button lets you select which content to debug — so, for example, you could select the HTML of your popup. When you have a sticky popup selected, you can inspect and hack on its HTML and CSS just like you would any other web content.

content_selector

This information is now documented in great detail on MDN. Check it out!

Will Kahn-GreeneStandup v2: system test

What is Standup?

Standup is a system for capturing standup-style posts from individuals making it easier to see what's going on for teams and projects. It has an associated IRC bot standups for posting messages from IRC.

Join us for a Standup v2 system test!

Paul and I did a ground-up rewrite of the Standup web-app to transition from Persona to GitHub auth, release us from the shackles of the old architecture and usher in a new era for Standup and its users.

We're done with the most minimal of minimal viable products. It's missing some features that the current Standup has mostly around team management, but otherwise it's the same-ish down to the lavish shade of purple in the header that Rehan graced the site with so long ago.

If you're a Standup user, we need your help testing Standup v2 on the -stage environment before Thursday, September 22nd, 2016!

We've thrown together a GitHub issue to (ab)use as a forum for test results and working out what needs to get fixed before we push Standup v2 to production. It's got instructions that should cover everything you need to know.

Why you would want to help:

  1. You get to see Standup v2 before it rolls out and point out anything that's missing that affects you.

  2. You get a chance to discover parts of Standup you may not have known about previously.

  3. This is a chance for you to lend a hand on this community project that helps you which we're all working on in our free time.

  4. Once we get Standup v2 up, there are a bunch of things we can do with Standup that will make it more useful. Freddy is itching to fix IRC-related issues and wants https support [1]. I want to implement user API tokens, a cli and search. Paul wants to have better weekly team reports and project pages.

    There are others listed in the issue tracker and some that we never wrote down.

    We need to get over the Standup v2 hurdle first.

Why you wouldn't want to help:

  1. You're on PTO.

    Stop reading--enjoy that PTO!

  2. It's the end of the quarter and you're swamped.

    Sounds like you're short on time. Spare a minute and do something in the Short on time, but want to help anyhow? section.

  3. You're looking to stop using Standup.

    I'd love to know what you're planning to switch to. If we can meet peoples' needs with some other service, that's more free time for me and Paul.

  4. Some fourth thing I lack the imagination to think of.

    If you have some other blocker to helping, toss me an email.

Hooray for the impending Standup v2!

[1] This is in progress--we're just waiting for a cert.

David LawrenceHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1275568] bottom of page ‘duplicate’ button focuses top of page duplicate field
  • [1283930] Add Makefile.PL & local/lib/perl5 support to bmo/master
  • [1278398] Enable “Due Date” field for all websites, web services, infrastructure(webops, netops, etc), infosec bugs (all components)
  • [1213791] “suggested reviewers” menu overflows horizontally from visible area if reviewers have long name.
  • [1297522] changes to legal form
  • [1302835] Enable ‘Rank’ field for Tech Evangelism Product
  • [1267347] Editing the Dev-Events Form to be current
  • [1303659] Bug.comments (/rest/bug/<id>/comment) should return the count value in the results

discuss these changes on mozilla.tools.bmo.


Andy McKayAutonomous cars are not the answer

I'm frustrated by the suggestion that self-driving cars are the answer to congestion, gridlock and a bunch of other things.

There are a bunch of people you have to move from location X to Y, maybe going via location Z. Location X, Y and Z will vary.

The assumption from the self driving car lobby is that this will increase the effectiveness of the transportation system. First some problems:

  • Use of a car transportation system is dependent upon a solid tax base to support the system which is inherently inefficient in so many ways. It is expensive to maintain for low density environments. Conversely, it is almost impossible to maintain for high density environments.

  • The car transportation system has no limit on capacity so as the efficiency of the transportation system increases, so will demand and density.

Secondly, one assumption seems to be wrong:

  • That X and Y are usually some parts of the suburbs and that people are commuting from X to Y because they are trying to maintain a standard of living. As a result everything else must change to support that. That seems to be terribly wrong.

And so, what troubles me is:

  • The location of X and Y. Can X (people's homes) be moved closer to Y (where people work)? Can Y be made redundant (e.g.: tele-commuting)? Will anyone have a Y in the near future (job automation, robots etc)?

  • The mode of transportation. Does another option other than cars exist? How about bikes? Walking? Even public transportation? Why do we have to depend upon cars? Is this really the best we can do?

  • If a lot of people change to another transportation system, what's to stop a large number of people moving to cars and causing gridlock again? Isn't that what history has shown us will happen?

  • Why do we keep supporting the most inefficient and expensive form of transport ever?

As an engineer I am frustrated that the choices here seem to be: self-driving cars or not self-driving cars. The real question is, why do we need cars at all?

This Week In RustThis Week in Rust 148

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

RustConf Experiences

New Crates & Project Updates

Crate of the Week

This week's crate of the week is (in the best TWiR tradition, shamelessly self-promoted) mysql-proxy, a flexible, lightweight and scalable proxy for MySQL databases. Thanks to andygrove for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

98 pull requests were merged in the last two weeks.

New Contributors

  • Caleb Jones
  • dangcheng
  • Eugene Bulkin
  • knight42
  • Liigo

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • RFC 1696: mem::discriminant(). Add a function that extracts the discriminant from an enum variant as a comparable, hashable, printable, but (for now) opaque and unorderable type.
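
A rough usage sketch of what the newly approved function enables, once implemented (not taken from the RFC; the Message enum and its payloads are invented for illustration, and the function was not yet available in stable Rust at the time):

use std::mem;

// A toy enum; the variant names and payloads are made up for this example.
enum Message {
    Move { x: i32, y: i32 },
    Write(String),
}

fn main() {
    let a = Message::Move { x: 1, y: 2 };
    let b = Message::Move { x: 7, y: 9 };
    let c = Message::Write(String::from("hi"));

    // Same variant, different field values: the discriminants compare equal.
    assert_eq!(mem::discriminant(&a), mem::discriminant(&b));
    // Different variants: the discriminants differ.
    assert_ne!(mem::discriminant(&a), mem::discriminant(&c));
}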

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

(Full disclosure: we removed QotW for this issue because selected QotW was deemed inappropriate and against the core values of Rust community. Here is the relevant discussion on reddit. If you are curious, you can find the quote in git history).

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

The Mozilla BlogLatest Firefox Expands Multi-Process Support and Delivers New Features for Desktop and Android

With the change of the season, we’ve worked hard to release a new version of Firefox that delivers the best possible experience across desktop and Android.

Expanding Multiprocess Support

Last month, we began rolling out the most significant update in our history, adding multiprocess capabilities to Firefox on desktop, which means Firefox is more responsive and less likely to freeze. In fact, our initial tests show a 400% improvement in overall responsiveness.

Our first phase of the rollout included users without add-ons. In this release, we’re expanding support for a small initial set of compatible add-ons as we move toward a multiprocess experience for all Firefox users in 2017.

Desktop Improvement to Reader Mode

This update also brings two improvements to Reader Mode. This feature strips away clutter like buttons, ads and background images, and changes the page’s text size, contrast and layout for better readability. Now we’re adding the option for the text to be read aloud, which means Reader Mode will narrate your favorite articles, allowing you to listen and browse freely without any interruptions.

We also expanded the ability to customize in Reader Mode so you can adjust the text and fonts, as well as the voice. Additionally, if you’re a night owl like some of us, you can read in the dark by changing the theme from light to dark.

Offline Page Viewing on Android

On Android, we’re now making it possible to access some previously viewed pages when you’re offline or have an unstable connection. This means you can interact with much of your previously viewed content when you don’t have a connection. The feature works with many pages, though it is dependent on your specific device specs. Give it a try by opening Firefox while your phone is in airplane mode.

We’re continuing to work on updates and new features that make your Firefox experience even better. Download the latest Firefox for desktop and Android and let us know what you think.

Daniel GlazmanW3C

I have always said that standardization at the W3C is blood on the walls in a hushed atmosphere. I will not change one iota of that statement. But the W3C is also the story of an industry from its very first hours, and of honest friendships built in the explosion of a new era. Tonight, on the sidelines of the W3C Technical Plenary Meeting in Lisbon, I had an unforgettable dinner with my old pals Yves and Olivier, whom I have known and appreciated for oh so many years. A delicious, friendly and funny moment around a fabulous meal in a dream of a little Lisbon eatery. Bursts of laughter, confidences, a great evening; in short, a real moment of happiness. Thanks to them for this wonderful evening, and to Olivier for the address, outstanding in every way. I will sign up again whenever you want, guys, and it is an honor to have you as friends :-)

Chris CooperRelEng & RelOps Weekly highlights - September 19, 2016

Welcome back to our *cough*weekly*cough* updates!

Modernize infrastructure:

Amy and Alin decommissioned all but 20 of our OS X 10.6 test machines, and those last few will go away when we perform the next ESR release. The next ESR release corresponds to Firefox 52, and is scheduled for March next year.

Improve Release Pipeline:

Ben finally completed his work on Scheduled Changes in Balrog. With it, we can pre-schedule changes to Rules, which will help minimize the potential for human error when we ship, and make it unnecessary for RelEng to be around just to hit a button.

Lots of other good Balrog work has happened recently too, which is detailed in Ben’s blog post.

Improve CI Pipeline:

Windows TaskCluster builders were split into level-1 (try) and level-3 (m-i, m-c, etc) worker types with sccache buckets secured by level.

Windows 10 AMI generators were added to automation in preparation for Windows 10 testing on TaskCluster. We’ve been looking to switch from testing on Windows 8 to Windows 10, as Windows 8 usage continues to decline. The move to TaskCluster seems like a natural breakpoint to make that switch.

Dustin's massive patch set to enable in-tree config of the various build kinds landed last week. This was no small feat. Kudos to him for the testing and review stamina it took to get that done. Those of us working to migrate nightly builds to TaskCluster are now updating – and simplifying – our task graphs to leverage his work.

Operational:

After work to fix some bugs and make it reliable, Mark re-enabled the cron job that generates our Windows 7 AWS AMIs each night.

Now that many of the Windows 7 tests are being run in AWS, Amy and Q reallocated 20 machines from Windows 7 testing to XP testing to help with the load. We are reallocating 111 additional machines from Windows 7 to XP and Windows 8 in the upcoming week.

Amy created a template for postmortems and created a folder where all of Platform Operations can consolidate their postmortem documents.

Jake and Kendall took swift action on the TCP challenge ACK side-channel attack vulnerability. This has been fixed with a sysctl workaround on all Linux hosts and instances.

Jake pushed a new version of the mig-agent client which was deployed across all Linux and OS X platforms.

Hal enabled the new GitHub feature that requires two-factor authentication for several Mozilla organizations on GitHub.

Release:

Rail has automated re-generating our SHA-1 signed Windows installers for Firefox, which are served to users on old versions of Windows (XP, Vista). This means that users on those platforms will no longer need to update through a SHA-1 signed, watershed release (we were using Firefox 43 for this) before updating to the most recent version. This will save XP/Vista users some time and bandwidth by creating a one-step update process for them to get the latest Firefox.

See you next *cough*week*cough*!

David Rajchenbach TellerThis blog has moved

You can find my new blog on github. Still rough around the edges, but I’m planning to improve this as I go.


Firefox NightlyGetting Firefox Nightly to stick to Ubuntu’s Unity Dock

I installed Ubuntu 16.04.1 this week and decided to try out Unity, the default window manager. After I installed Nightly I assumed it would be simple to get the icon to stay in the dock, but Unity seemed confused about Nightly vs the built-in Firefox (I assume because the executables have the same name).

It took some doing to get Nightly to stick to the Dock with its own icon. I retraced my steps and wrote them down below.

My goal was to be able to run a couple versions of Firefox with several profiles. I thought the easiest way to accomplish that would be to add a new icon for each version+profile combination and a single left click on the icon would run the profile I want.

After some research, I think the Unity way is to have a single icon for each version of Firefox, and then add Actions to it so you can right click on the icon and launch a specific profile from there.

Installing Nightly

If you don’t have Nightly yet, download Nightly (these steps should work fine with Aurora or Beta also). Open a terminal:

$ mkdir /opt/firefox
$ tar -xvjf ~/Downloads/firefox-51.0a1.en-US.linux-x86_64.tar.bz2 -C /opt

You may need to chown some directories to get that in /opt which is fine. At the end of the day, make sure your regular user can write to the directory or else you won’t be able to install Nightly’s updates.

Adding the icon to the dock

Then create a file in your home directory named nightly.desktop and paste this into it:

[Desktop Entry]
Version=1.0
Name=Nightly
Comment=Browse the World Wide Web
Icon=/opt/firefox/browser/icons/mozicon128.png
Exec=/opt/firefox/firefox %u
Terminal=false
Type=Application
Categories=Network;WebBrowser;
Actions=Default;Mozilla;ProfileManager;

[Desktop Action Default]
Name=Default Profile
Exec=/opt/firefox/firefox --no-remote -P minefield-default

[Desktop Action Mozilla]
Name=Mozilla Profile
Exec=/opt/firefox/firefox --no-remote -P minefield-mozilla

[Desktop Action ProfileManager]
Name=Profile Manager
Exec=/opt/firefox/firefox --no-remote --profile-manager

Adjust anything that looks like it should change; the main callout is that each Exec line should have the name of the profile you want to use (in the above file mine are called minefield-default and minefield-mozilla). If you have more profiles just make more copies of that section and name them appropriately.

If you think you’ve got it, run this command:

$ desktop-file-validate nightly.desktop

No output? Great — it passed the validator. Now install it:

$ desktop-file-install --dir=.local/share/applications nightly.desktop

Two notes on this command:

  • If you leave off --dir it will write to /usr/share/applications/ and affect all users of the computer. You’ll probably need to sudo the command if you want that.
  • Something is weird with the parsing. Originally I passed in --dir=~/.local/… and it literally made a directory named ~ in my home directory, so, if the menu isn’t updating, double check the file is getting copied to the right spot.

Some people report having to run unity again to get the change to appear, but it showed up for me. Now left-clicking runs Nightly and right-clicking opens a menu asking me which profile I want to use.

Screenshot of menu

Right-click menu for Nightly in Unity’s Dock.

Modifying the Firefox Launcher

I also wanted to launch profiles off the regular Firefox icon in the same way.

The easiest way to do that is to copy the built-in one from /usr/share/applications/firefox.desktop and modify it to suit you. Conveniently, Unity will override a system-wide .desktop file if you have one with the same name in your local directory so installing it with the same commands as you did for Nightly will work fine.

Postscript

I should probably add a disclaimer that I’ve used Unity for all of two days and there may be a smoother way to do this. I saw a couple of 3rd-party programs that will generate .desktop files but I didn’t want to install more things I’d rarely use. Please leave a comment if I’m way off on these instructions! 🙂

This post originally appeared on Wil’s blog.

Henrik SkupinMoving home folder to another encrypted volume on OS X

Over the last weekend I was reinstalling my older MacBook Pro (late 2011 model) again after replacing its hard drive with a fresh and modern 512GB SSD from Crucial. That change was really necessary given that simple file operations took about a minute, and every system tool claimed that the HDD was fine.

So after installing Mavericks I moved my home folder to another partition to make it easier to reinstall OS X later. But as it turned out it is not that easy, especially given that OS X doesn’t yet support mounting other encrypted partitions beside the system partition during start-up. If you have only a single user, you will be locked out after the home dir move and a reboot. That’s what I experienced. To fix such a situation, put OS X back into the “post install” state and create a new administrator account via single-user mode. With this account you can at least sign in again, and after unlocking the other encrypted partition you will have access to your original account again.

Having to first log in via an account whose data is still hosted on the system partition is not a workable solution for me. So I kept looking for a solution that would let me unlock the second encrypted partition during startup. After some searching I finally found a tool which actually lets me do this. It’s called Unlock and can be found on Github. To make it work it installs a LaunchDaemon which retrieves the encryption password via the System keychain, and unlocks the partition during start-up. To be on the safe side I compiled the code myself with Xcode and installed it with some small modifications to the install script (I may want to contribute those modifications back into the repository for sure :).

In case you have similar needs, I hope this post will help you avoid the hassles I experienced.

Mozilla Privacy BlogImproving Government Disclosure of Security Vulnerabilities

Last week, we wrote about the shared responsibility of protecting Internet security. Today, we want to dive deeper into this issue and focus on one very important obligation governments have: proper disclosure of security vulnerabilities.

Software vulnerabilities are at the root of so much of today’s cyber insecurity. The revelations of recent attacks on the DNC, the state electoral systems, the iPhone, and more, have all stemmed from software vulnerabilities. Security vulnerabilities can be created inadvertently by the original developers, or they can be developed or discovered by third parties. Sometimes governments acquire, develop, or discover vulnerabilities and use them in hacking operations (“lawful hacking”). Either way, once governments become aware of a security vulnerability, they have a responsibility to consider how and when (not whether) to disclose the vulnerability to the affected company so that developer can fix the problem and protect their users. We need to work with governments on how they handle vulnerabilities to ensure they are responsible partners in making this a reality today.

In the U.S., the government’s process for reviewing and coordinating the disclosure of vulnerabilities that it learns about or creates is called the Vulnerabilities Equities Process (VEP). The VEP was established in 2010, but not operationalized until the Heartbleed vulnerability in 2014 that reportedly affected two thirds of the Internet. At that time, White House Cybersecurity Coordinator Michael Daniel wrote in a blog post that the Obama Administration has a presumption in favor of disclosing vulnerabilities. But, policy by blog post is not particularly binding on the government, and as Daniel even admits, “there are no hard and fast rules” to govern the VEP.

It has now been two years since Heartbleed and the U.S. government’s blog post, but we haven’t seen improvement in the way that vulnerabilities disclosure is being handled. Just one example is the alleged hack of the NSA by the Shadow Brokers, which resulted in the public release of NSA “cyberweapons”, including “zero day” vulnerabilities that the government knew about and apparently had been exploiting for years. Companies like Cisco and Fortinet whose products were affected by these zero day vulnerabilities had just that, zero days to develop fixes to protect users before the vulnerabilities were possibly exploited by hackers.

The government may have legitimate intelligence or law enforcement reasons for delaying disclosure of vulnerabilities (for example, to enable lawful hacking), but these same vulnerabilities can endanger the security of billions of people. These two interests must be balanced, and recent incidents demonstrate just how easily stockpiling vulnerabilities can go awry without proper policies and procedures in place.

Cybersecurity is a shared responsibility, and that means we all must do our part – technology companies, users, and governments. The U.S. government could go a long way in doing its part by putting transparent and accountable policies in place to ensure it is handling vulnerabilities appropriately and disclosing them to affected companies. We aren’t seeing this happen today. Still, with some reforms, the VEP can be a strong mechanism for ensuring the government is striking the right balance.

More specifically, we recommend five important reforms to the VEP:

  • All security vulnerabilities should go through the VEP and there should be public timelines for reviewing decisions to delay disclosure.
  • All relevant federal agencies involved in the VEP must work together to evaluate a standard set of criteria to ensure all relevant risks and interests are considered.
  • Independent oversight and transparency into the processes and procedures of the VEP must be created.
  • The VEP Executive Secretariat should live within the Department of Homeland Security because they have built up significant expertise, infrastructure, and trust through existing coordinated vulnerability disclosure programs (for example, US CERT).
  • The VEP should be codified in law to ensure compliance and permanence.

These changes would improve the state of cybersecurity today.

We’ll dig into the details of each of these recommendations in a blog post series from the Mozilla Policy team over the coming weeks – stay tuned for that.

Today, you can watch Heather West, Mozilla Senior Policy Manager, discuss this issue at the New America Open Technology Institute event on the topic of “How Should We Govern Government Hacking?” The event can be viewed here.

Chris McDonaldi-can-management Weekly Update 1

A couple weeks ago I started writing a game and i-can-management is the directory I made for the project so that’ll be the codename for now. I’m going to write these updates to journal the process of making this game. As I’m going through this process alone, you’ll see all aspects of the game development process as I go through them. That means some weeks may be art heavy, while others game rules, or maybe engine refactoring. I also want to give a glance how I’m feeling about the project and rules I make for myself.

Speaking of rules, those are going to be a central theme on how I actually keep this project moving forward.

  • Optimize only when necessary. This seems obvious, but folks define necessary differently. 60 frames per second with 750×750 tiles on the screen is my current benchmark for whether I need to optimize. I’ll be adding numbers for load times and other aspects once they grow beyond a size that feels comfortable.
  • Abstractions are expensive, use them sparingly. This is something I learned from a Jonathan Blow talk I mention in my previous post. Abstractions can increase or remove flexibility. On one hand reusing components may allow more rapid iteration. On the other hand it may take considerable effort to make systems communicate that weren’t designed to pass messages. I’m making it clear in each effort whether I’m in exploration mode so I work mostly with just 1 function, or if I’m in architect mode where I’m trying to make the next feature a little easier to implement. This may mean 1000 line functions and lots of global like use for a while until I understand how the data will be used. Or it may mean abstracting a concept like the camera to a struct because the data is always used together.
  • Try the easier to implement answer before trying the better answer. I have two goals with this. First, it means I get to start trying stuff faster so I know if I want to pursue it or if I’m kinda off on the idea. Maybe this first implementation will show some other subsystem needs features first so I decide to delay the more correct answer. So in short: quicker to test and expose unexpected requirements. The other goal is to explore building games in a more holistic way. Knowing a quick and dirty way to implement something may help when trying to get an idea thrown together really quick. Then knowing how to evolve that code into a better long term solution means next games or ideas that cross pollinate are faster to compose because the underlying concepts are better known.

The last couple weeks have been an exploration of OpenGL via glium, the library I’m using to access OpenGL from Rust as well as abstract away the window creation. I’d only ever run the example before this dive into building a game. From what I remember of doing this in C++, the abstraction it provides for window creation and interaction (using the glutin library) is pretty great. I was able to create a window of whatever size, hook up keyboard and mouse events, and render to the screen pretty fast after going through the tutorial in the glium book.

This brings me to one of the first frustrating points in this project. So many things are focused on 3d these days that finding resources for 2d rendering is harder. If you find them, they are for old versions of OpenGL or use libraries to handle much of the tile rendering. I was hoping to find an article like “I built a 2d tile engine that is pretty fast and these are the techniques I used!” but no such luck. OpenGL guides go immediately into 3d space after getting past basic polygons. But it just means I get to explore more which is probably a good thing.

I already had a deterministic map generator built to use as the source of the tiles on the screen. So, I copied and pasted some of the matrices from the glium book and then tweaked the numbers I was using for my tiles until they showed up on the screen and looked ok. From here I was pretty stoked. I mean, if I have 25×40 tiles on the screen, what more could someone ask for? I didn’t know how to make the triangle strips work well for the tiles to be drawn all at once, so I drew each tile to the screen separately, calculating everything on every frame.

I started to add numbers here and there to see how to adjust the camera in different directions. I didn’t understand the math I was working with yet so I was mostly treating it like a black box and I would add or multiply numbers and recompile to see if it did anything. I quickly realized I needed it to be more dynamic so I added detection for the mouse scrolling. Since I’m on my macbook most of the time I’m doing development I can scroll vertically as well as horizontally, making a natural panning feeling.

I noticed that my rendering had a few quirks, and I didn’t understand any of the math that was being used, so I went seeking more sources of information on how these transforms work. At first I was directed to the OpenGL transformations page which set me on the right path, including a primer on the linear algebra I needed. Unfortunately, it quickly turned toward 3d graphics and I didn’t quite understand how to apply it to my use case. In looking for more resources I found Solarium Programmers’ OpenGL 101 page which took some more time with orthographic projections, which is what I wanted for my 2d game.

Over a few sessions I rewrote all the math to use a coordinate system I understood. This was greatly satisfying, but if I hadn't started by ignoring the math, I wouldn't have had a testbed to check whether I actually understood it. A good lesson to remember: if you can ignore a detail for a bit and keep going, prioritize getting something working, then transform it into something you understand more thoroughly.
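
For anyone following along, here is a rough sketch of the kind of 2d orthographic projection the OpenGL 101 article covers, written as plain Rust with no OpenGL dependency. The column-major layout is the one GLSL mat4 uniforms normally expect; treat the exact numbers and names as illustrative, not as this project's code.

/// Map the pixel rectangle (left..right, bottom..top) onto OpenGL clip
/// space, -1..1 on each axis, with a fixed -1..1 depth range for 2d.
fn ortho_2d(left: f32, right: f32, bottom: f32, top: f32) -> [[f32; 4]; 4] {
    let (near, far) = (-1.0_f32, 1.0_f32);
    [
        [2.0 / (right - left), 0.0, 0.0, 0.0],
        [0.0, 2.0 / (top - bottom), 0.0, 0.0],
        [0.0, 0.0, -2.0 / (far - near), 0.0],
        [
            -(right + left) / (right - left),
            -(top + bottom) / (top - bottom),
            -(far + near) / (far - near),
            1.0,
        ],
    ]
}

fn main() {
    // A 640×400 pixel view: 40×25 tiles at 16 px each map straight to clip space.
    let projection = ortho_2d(0.0, 640.0, 0.0, 400.0);
    println!("{:?}", projection);
}

With a matrix like this, tile positions can stay in plain pixel coordinates, which is a big part of what makes the math easier to reason about.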

I have more I learned in the last week, but this post is getting quite long. I hope to write a post this week about changing from drawing individual tiles to using a single triangle strip for the whole map.

In the coming week my goal is to have mouse clicks interacting with the map. This involves figuring out which tile the mouse has clicked, which I've learned isn't trivial. In parallel I'll be developing the first set of tiles using Pyxel Edit and hopefully integrating them into the game. Then my map will become richer than just some flat colored tiles.
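
Since that code isn't written yet, here is only a sketch of the simplest case, assuming an axis-aligned camera that pans by a pixel offset, square tiles, and a mouse position already in the same coordinate space as the map; zoom or rotation would require inverting the full view/projection transform instead.

/// Convert a mouse position in pixels to a tile coordinate, assuming the
/// camera only pans. `camera_offset` is how far the view has been panned.
fn tile_under_mouse(
    mouse: (f32, f32),
    camera_offset: (f32, f32),
    tile_size: f32,
    map_size: (i32, i32),
) -> Option<(i32, i32)> {
    // Undo the camera pan to get world-space pixels, then divide by tile size.
    let tile_x = ((mouse.0 + camera_offset.0) / tile_size).floor() as i32;
    let tile_y = ((mouse.1 + camera_offset.1) / tile_size).floor() as i32;

    // Ignore clicks that land outside the map.
    if tile_x < 0 || tile_y < 0 || tile_x >= map_size.0 || tile_y >= map_size.1 {
        None
    } else {
        Some((tile_x, tile_y))
    }
}

fn main() {
    // A click at (100, 40) with the camera panned 32 px right, on 16 px tiles.
    let hit = tile_under_mouse((100.0, 40.0), (32.0, 0.0), 16.0, (750, 750));
    assert_eq!(hit, Some((8, 2)));
}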

Here is a screenshot of the game so far, for posterity's sake. It shows 750×750 tiles with a deterministic weighted distribution between grass, water, and dirt.


The Servo BlogThis Week In Servo 78

In the last week, we landed 68 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page.

This week’s status updates are here.

Special thanks to canaltinova for their work on implementing the matrix transition algorithms for CSS3 transform animation. This allows (both 2D and 3D) rotate(), perspective() and matrix() functions to be interpolated, as well as interpolations between arbitrary transformations, though the last bit is yet to be implemented. In the process of implementation, we had to deal with many spec bugs, as well as implementation bugs in other browsers, which complicated things immensely – it’s very hard to tell if your code has a mistake or if the spec itself is wrong in complicated algorithms like these. Great work, canaltinova!

Notable Additions

  • glennw added support for scrollbars
  • canaltinova implemented the matrix decomposition/interpolation algorithm
  • nox landed a rustup to the 9/14 rustc nightly
  • ejpbruel added a websocket server for use in the remote debugging protocol
  • creativcoder implemented the postMessage() API for ServiceWorkers
  • ConnorGBrewster made Servo recycle session entries when reloading
  • mrobinson added support for transforming rounded rectangles
  • glennw improved webrender startup times by making shaders compile lazily
  • canaltinova fixed a bug where we don’t normalize the axis of rotate() CSS transforms
  • peterjoel added the DOMMatrix and DOMMatrixReadOnly interfaces
  • Ms2ger corrected an unsound optimization in event dispatch
  • tizianasellitto made DOMTokenList iterable
  • aneeshusa excised SubpageId from the codebase, using PipelineId instead
  • gilbertw1 made the HTTP authentication cache use origins instead of full URLs
  • jmr0 fixed the event suppression logic for pages that have navigated
  • zakorgy updated some WebBluetooth APIs to match new specification changes

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

Some screencasts of matrix interpolation at work:

This one shows all the basic transformations together (running a tweaked version of this page). The 3d rotate, perspective, and matrix transformations were enabled by the recent change.

Karl Dubost[worklog] Edition 036 - Administrative week

Busy week without much things done for bugs. W3C is heading to Lisbon for the TPAC, so tune of the week: Amalia Rodrigues. I'll be there in spirit.

Webcompat Life

Progress this week:

326 open issues
----------------------
needsinfo       12
needsdiagnosis  106
needscontact    8
contactready    28
sitewait        158
----------------------

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • yet another appearance: none implemented in Blink. This time for meter.

Webcompat.com development

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Andy McKayTFSA Check

The TFSA is a savings account for Canadians that was introduced in 2009.

As a quick check I wanted to see how much or how little my TFSA had changed against what it should be. That meant a double check of how much room I had in the TFSA each year. So this is a quick calculation of the theoretical case: that you are able to invest the maximum amount each year, at the beginning of the year, and get a 5% return (after fees) on it.

Year   Maximum      Total invested   Compounded
2009   $5,000.00    $5,000.00        $5,250.00
2010   $5,000.00    $10,000.00       $10,762.50
2011   $5,000.00    $15,000.00       $16,550.63
2012   $5,000.00    $20,000.00       $22,628.16
2013   $5,500.00    $25,500.00       $29,534.56
2014   $5,500.00    $31,000.00       $36,786.29
2015   $10,000.00   $41,000.00       $49,125.61
2016   $5,500.00    $46,500.00       $57,356.89

Which always raises the question for me of what a reasonable rate to calculate with is these days. It always used to be 10%, but that's very hard to get these days. Since 2006 the annualized return on the S&P 500 is 5.158%, for example. Perhaps 5% is too conservative a number.
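
As a sanity check, the table above is just a repeated (balance + contribution) × 1.05 step each year. A small sketch that reproduces the numbers, with the 5% rate as the obvious knob to turn:

fn main() {
    // TFSA contribution limits per year, 2009-2016.
    let limits = [
        (2009, 5_000.0_f64),
        (2010, 5_000.0),
        (2011, 5_000.0),
        (2012, 5_000.0),
        (2013, 5_500.0),
        (2014, 5_500.0),
        (2015, 10_000.0),
        (2016, 5_500.0),
    ];

    let rate = 0.05; // assumed annual return after fees
    let mut invested = 0.0_f64;
    let mut balance = 0.0_f64;

    println!("Year   Maximum      Total invested   Compounded");
    for (year, limit) in limits {
        // Contribute at the start of the year, then grow the whole balance.
        invested += limit;
        balance = (balance + limit) * (1.0 + rate);
        println!("{year}   ${limit:>9.2}   ${invested:>12.2}   ${balance:>10.2}");
    }
}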

Matěj CeplOpenWeatherMapProvider for CyanogenMod 13

I don’t understand. CyanogenMod 13 introduced a new Weather widget and lock screen support. Great! Unfortunately, the widget requires specific providers for weather services, and CM does not ship any in the default installation. There is a Weather Underground provider, which works, but the only other provider I found (the Yahoo! Weather provider) does not work on my CM installation without Google Play.

I would much prefer an OpenWeatherMap provider. CyanogenMod has a GitHub repository for one, but there is no APK anywhere (and certainly not one on F-Droid). Fortunately, I found a blog post which describes how simple it is to build the APK from that code. Unfortunately, the author did not provide an APK on his site. I am not sure whether there is some catch, but here is mine.

Chris McDonaldChanging Optimization Targets

Alternate Title: How I changed my mental model to be a more effective game developer and human.

Back in February 2016, I started my journey as a professional game developer. I joined Sparkypants to work on the backend for Dropzone. This was about 7 months ago at the time of writing. I didn’t enter the game development world in the standard ways. I wasn’t at one of the various schools with game dev programs, I didn’t intern at a studio, and I haven’t spent much of my personal development time building my own indie games. I had, on the other hand, spent years building backend services, writing dev tools, competing in AI competitions, and building a slew of half-finished open source projects. In short, I was not a game developer when I started.

My stark contrast in background works to my advantage in many parts of my job. Most of our engineers haven’t worked on backend services and haven’t needed to scale that sort of infrastructure. My lead and friend Johannes has been instrumental in many of my successes so far at the company. He has a background in backend development as well as game development and has often been a translator and guide for me as I learn what being a game developer means.

At first, I assumed my contrast would work itself out naturally and I’d just become a game developer by osmosis. If I am surrounded by folks doing this and I’m actively developing a game, I will become a game developer. But that presupposes success, which was only coming to me in limited amounts. The other conclusion would be leaving game development because I wasn’t compatible with it, something I’m unwilling to accept at this time.

I shared my concerns about not fitting the culture at Sparkypants with Johannes, as well as some productivity worries. I’ve learned over the years that if I’m feeling problems like this, my boss may be as well. Johannes, with his typically wonderful, encouraging personality, reminded me that there are large aspects of my personality that fit in with the culture; maybe just my development style and conflict resolution needed work. Among other advice, he recommended this talk by Jonathan Blow to show me a mental model that is closer to how many of the other developers operate.

That talk by Jonathan Blow spends a fair amount of its time on the topic of optimization. Whether it is using data-oriented techniques to make data series processing faster, or drawing in a specific way to make the graphics card use less memory, or any number of topics, optimization comes up in nearly every game development talk or article at some point. His point, though, was that we often spend too much time optimizing the wrong things. If you’ve been in computer science for a bit you’ve inevitably heard at least a fragment of the following quote from Donald Knuth; if not, you’re in for a treat, this is a good one:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

The "premature optimization is the root of all evil" line is the part most folks quote, implying the rest. I had heard this, quoted it, and used it as justification for doing or not doing things many times in the past. But I’d also forgotten it; I’d apply it when it was convenient for me, but not generally to my software development. Blow starts with the traditional case of overthinking algorithms and code in general, which most people bring up when they speak on premature optimization. Then he followed on with the idea that selecting data structures is a form of optimization. That follow-on was a segue to the point that any time you are thinking about a problem, you should keep in mind whether it is the most important or urgent problem for you to think about.

At the end of the day, your job as a game developer is not to optimize for speed or correctness, but to optimize for fun. This means trying a lot of ideas and throwing many of them out. If you spent a lot of time optimizing a feature for a million users and only some folks in the company use it before you decide to remove it, you’ve wasted a lot of effort. Maybe not completely, since you’ve probably learned during the process, but that effort could have been put into other features or parts of the system that may actually need attention. This shift in thinking has me letting go of details in more cases, spending less time on projects, and focusing on “functional” over “correct and scalable.”

The day after watching that talk and discussing it with Johannes, I attended RustConf and saw a series of amazing talks on Rust and programming in general. Of particular note for changing my mental model was Julia Evans’ closing keynote about learning systems programming with Rust. There were so many things that struck me during that talk, but I’ll just focus on the couple that were most relevant.

First and foremost was the humility in the talk. Julia’s self-described experience level was “intermediate developer” while having about as many years of experience as I have, and I considered myself a more “senior developer.” At many points over the last couple years I’ve wrestled with this, considering myself senior and then seeing evidence that I’m not. As a more confident person, it is an easy trap for me to fall into. I’m in my first year as a game developer; regardless of other experience, I’m a junior game developer at best.

Starting to internalize this humility has resulted in fighting my coworkers less when they bring up topics that I think I have enough knowledge to weigh in on. The more experienced folks at work have decades of building games behind them. I’m not saying my input to these discussions is worthless, I still have a lot to contribute, but I’ve been able to check my ego at the door more easily and collaborate through topics instead of being contrary.

The humility in the talk makes another major concept from it, life long learning, take on a new light. I’ve always been striving for more knowledge in the computer science space, so life long learning isn’t new to me, but like the optimization discussion above there is more nuance to be discovered. Having humility when trying to learn makes the experience so much richer for all parties. Teachers being humble will not over explain a topic and recognize that their way is not the only way. Learners being humble will be more receptive to ideas that don’t fit their current mental model and seek more information about them.

This post has become quite long, so I’ll try to wrap things up and use further blog posts to explore these ideas with more concrete examples. Writing this has been a mechanism for me to understand some of this change in myself as well as help others who may end up in similar shoes.

If this blog post were a tweet, I think it’d be summarized into “Pay attention to the important things, check your ego at the door, and keep learning.” which I’m sure would get me some retweets and stars or hearts or whatever. And if someone else said it, I’d go “of course, yeah folks mess this up all the time!” But, there is so much more nuance in those ideas. I now realize I’m just a very junior game developer with some other sometimes relevant experience, I’ve so much to learn from my peers and am extremely excited to do so.

If you have additional resources that you think I or others who read this would find valuable, please comment below or send me a tweet.


Mozilla Security BlogUpdate on add-on pinning vulnerability

Earlier this week, security researchers published reports that Firefox and Tor Browser were vulnerable to “man-in-the-middle” (MITM) attacks under special circumstances. Firefox automatically updates installed add-ons over an HTTPS connection. As a backup protection measure against mis-issued certificates, we also “pin” Mozilla’s web site certificates, so that even if an attacker manages to get an unauthorized certificate for our update site, they will not be able to tamper with add-on updates.

Due to flaws in the process we used to update “Preloaded Public Key Pinning” in our releases, the pinning for add-on updates became ineffective for Firefox release 48 starting September 10, 2016 and ESR 45.3.0 on September 3, 2016. As of those dates, an attacker who was able to get a mis-issued certificate for a Mozilla Web site could cause any user on a network they controlled to receive malicious updates for add-ons they had installed.

Users who have not installed any add-ons are not affected. However, Tor Browser contains add-ons and therefore all Tor Browser users are potentially vulnerable. We are not presently aware of any evidence that such malicious certificates exist in the wild and obtaining one would require hacking or compelling a Certificate Authority. However, this might still be a concern for Tor users who are trying to stay safe from state-sponsored attacks. The Tor Project released a security update to their browser early on Friday; Mozilla is releasing a fix for Firefox on Tuesday, September 20.

To help users who have not updated Firefox recently, we have also enabled Public Key Pinning Extension for HTTP (HPKP) on the add-on update servers. Firefox will refresh its pins during its daily add-on update check and users will be protected from attack after that point.

Air MozillaWebdev Beer and Tell: September 2016

Webdev Beer and Tell: September 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Wladimir PalantMore Last Pass security vulnerabilities

With Easy Passwords I develop a product which could be considered a Last Pass competitor. In this particular case however, my interest was sparked by the reports of two Last Pass security vulnerabilities (1, 2) which were published recently. It’s a fascinating case study given that Last Pass is considered security software and as such should be hardened against attacks.

I decided to dig into Last Pass 4.1.21 (the latest version for Firefox at that point) in order to see what their developer team did wrong. The reported issues sounded like there might be structural problems behind them. However, the first surprise was the way Last Pass is made available to users: on Addons.Mozilla.Org you only get the outdated Last Pass 3 as the stable version; the current Last Pass 4 is offered on the development channel, and Last Pass actively encourages users to switch to it.

My starting point was the already reported vulnerabilities and the approach that the Last Pass developers took to address them. In the process I discovered two similar vulnerabilities and a third one with even more disastrous consequences. All issues have been reported to Last Pass and are resolved as of Last Pass 4.1.26.

Password autocomplete

Having your password manager fill in passwords automatically is very convenient but not exactly secure. Awareness of the issue goes back to at least 2006, when a report sparked a heavy discussion about the Firefox password manager. The typical attack scenario involves an (unfortunately very common) cross-site scripting (XSS) vulnerability on the targeted website, which allows an attacker to inject JavaScript code into the website. That code creates a login form and then reads out the password filled in by the password manager — all in the background and almost invisible to the user. This is why the Firefox password manager requires user interaction these days (entering the first letter of the password) before filling in your password.

Last Pass on the other hand supports filling in passwords without any user interaction whatsoever, even though that feature doesn’t seem to be enabled by default. But that’s not even the main issue: as Mathias Karlsson realized, the code recognizing which website you are on is deeply flawed. So you don’t need to control a website to steal passwords for it; you can make Last Pass think that your website malicious.com is actually twitter.com and have it fill in your Twitter password. This is possible because Last Pass uses a huge regular expression to parse URLs, and this part of it is particularly problematic:

(?:(([^:@]*):?([^:@]*))?@)?

This is meant to match the username/password part before the hostname, but it will actually skip anything up to a @ character in the URL. So if that @ character is in the path part of the URL, then the regular expression will happily consider the real hostname part of the username and interpret anything following the @ character as the hostname — oops. Luckily, Last Pass had already recognized that issue even before Karlsson’s findings. Their solution? Add one more regular expression and replace all @ characters following the hostname with %40. Why not change the regular expression so that it won’t match slashes? Beats me.

The bug that Karlsson found was that this band-aid code only replaced the last @ character but not any previous ones (greedy regular expression). In response, Last Pass added more hack-foo to ensure that other @ characters are replaced as well — not by fixing the bug (using a non-greedy regular expression) but by making the code run multiple times. My bug report then pointed out that this code still wasn’t working correctly for data: URLs or URLs like http://foo@twitter.com:123@example.com/. While it’s not obvious whether these issues are still exploitable, this piece of code is just too important to have such bugs.

Of course, improving regular expressions isn’t really the solution here. Last Pass just shouldn’t do its own thing when parsing URLs; it should let the browser do it instead. This would completely eliminate the potential for Last Pass and the browser disagreeing on the hostname of the current page. Modern browsers offer the URL object for that; old ones still allow achieving the same effect by creating a link element. And guess what? In their fix Last Pass is finally doing the right thing. But rather than just sticking with the result returned by the URL object, they compare it to the output of their regular expression. Guess they are really attached to that one…
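
The fix here is about the browser's URL object in JavaScript, but the underlying point carries over to any language: hand the URL to a real parser instead of a home-grown regular expression. As a rough illustration only (not Last Pass code), here is the same check sketched in Rust with the url crate, which follows the WHATWG rules and treats everything before the last @ in the authority as credentials:

// Cargo.toml (illustrative): url = "2"
use url::Url;

fn main() {
    // One of the tricky URLs from the bug report.
    let tricky = "http://foo@twitter.com:123@example.com/";

    // A spec-compliant parser resolves the ambiguity the way browsers do:
    // everything up to the last '@' in the authority is userinfo, so the
    // host is example.com, not twitter.com.
    let parsed = Url::parse(tricky).expect("should parse");
    assert_eq!(parsed.host_str(), Some("example.com"));
    println!("host = {:?}", parsed.host_str());
}

Any code that instead scans for the first @ (or patches stray @ signs with %40 after the fact) is exactly the kind of code that can be tricked into disagreeing with the browser about which site it is on.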

Communication channels

I didn’t know the details of the other report when I looked at the source code, I only knew that it somehow managed to interfere with extension’s internal communication. But how is that even possible? All browsers provide secure APIs that allow different extension parts to communicate with each other, without any websites listening in or interfering. To my surprise, Last Pass doesn’t limit itself to these communication channels and relies on window.postMessage() quite heavily in addition. The trouble with this API is: anybody could be sending messages, so receivers should always verify the origin of the message. As Tavis Ormandy discovered, this is exactly what Last Pass failed to do.

In the code that I saw, origin checks had already been added to most message receivers. However, I discovered another communication mechanism: any website could add a form with an id="lpwebsiteeventform" attribute. Submitting this form triggered special actions in Last Pass and could even produce a response; for example, the getversion action would retrieve details about the Last Pass version. There are also plenty of actions which sound less harmless, such as those related to setting up and verifying multifactor authentication.

For my proof of concept I went with actions that were easier to call, however. There was a get_browser_history_tlds action, for example, which would retrieve a list of websites from your browsing history. And there were setuuid and getuuid actions which allowed saving an identifier in the Last Pass preferences that could not be removed by regular means (unlike cookies).

Last Pass resolved this issue by restricting this communication channel to the lastpass.com and lastpass.eu domains. So now these are the only websites that can read out your browsing history. What do they need it for? Beats me.

Full compromise

When looking into other interactions with websites, I noticed this piece of code (reduced to the relevant parts):

var src = window.frameElement.getAttribute("lpsrc");
if (src && 0 < src.indexOf("lpblankiframeoverlay.local"))
  window.location.href = g_url_prefix + "overlay.html" + src.substring(src.indexOf("?"));

This is how Last Pass injects its user interface into websites on Firefox: since content scripts don’t have the necessary privileges to load extension pages into frames, they create a frame with an attribute like lpsrc="http://lpblankiframeoverlay.local/?parameters". Later, the code above (which has the necessary privileges) looks at the frame and loads the extension page with the correct parameters.

Of course, a website can create this frame as well. And it can use a value for lpsrc that doesn’t contain question marks, which will make the code above add the entire attribute value to the URL. This allows the website to load any Last Pass page, not just overlay.html. Doesn’t seem to be a big deal but there is a reason why websites aren’t allowed to load extension pages: these pages often won’t expect this situation and might do something stupid.

And tabDialog.html indeed does something stupid. This page still has a message handler reacting to messages sent via window.postMessage() without checking the origin. And not only that, the command it expects is “call” — it will call an arbitrary function with the parameters supplied in the message. Which function did I choose? Why, eval() of course! Game over: arbitrary websites can inject JavaScript code into the extension context, and they can then do anything that the Last Pass user interface is capable of.

Conclusions

The security issues discovered in Last Pass are not an isolated incident. The base concept of the extension seems sound; for example, the approach they use to derive the encryption key and to encrypt your data before sending it to the server is secure as far as I can tell. The weak point, however, is the Last Pass browser extension, which necessarily deals with decrypted data. This extension currently violates best practices, which opens up unnecessary attack surfaces; the reported security vulnerabilities are a consequence of that. Then again, if Tavis Ormandy is right, then Last Pass is in good company.

About:CommunityFirefox 49 new contributors

With the release of Firefox 49, we are pleased to welcome the 48 developers who contributed their first code change to Firefox in this release, 39 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Kat BraybrookePhD Fieldwork, Day 1: A Researcher in Residence at the Tate (^.^)

Today is a big day in the web-wolf glitter land that is my PhD. After a crazy and wonderful year of reading, discussing, traveling and (over!)thinking everything there is to think about spaces for digital making, cultural institutions, methodologies and open creative practices, I start the pilot stage of my doctoral fieldwork at the Tate Britain, situated within the ever-colourful Taylor Digital Studio as its new Researcher-in-Residence. I have consent forms ready from the University of Sussex, about a thousand web browser windows open on the Tate’s computers, a T4 file full of community photos and about 20 pages of to-dos - and yet, it feels no small task to get started.

As a part of the study I’m undertaking with the Tate and other cultural institutions in the UK, my aim in hanging out, messing around and geeking out at these spaces, in addition to implementing more formal qualitative methods like participant observation and interviews, is to engage with the situated knowledges and actor-network theories of Donna Haraway, Bruno Latour, Doreen Massey and other great thinkers as a user myself - not as an “objective” researcher, seemingly removed from the environments I am actually an active part of. This is because when we make things in a space like the Tate’s Digital Studio together, we all become connected - whether we happen to be a computer, a workshop participant, a gallery curator or an observer. We all help make these sites what they are. Without these interactions, a makerspace is just a conglomeration of infrastructures, plaster and walls alone, without meaning or identity.

This project’s data collection starts, perhaps fittingly, with making. From September to December, I’ll be spending my Fridays in the Studio, collaborating with other PhD and researcher groups and with the Tate’s excellent Digital Learning team members, while playing the role of both researcher and designer through a few different hands-on projects. The first intervention I’ll be working on is SPACEHACKER, an evolving artwork that I’ll be launching as part of MozEx, an exhibit curated by the Tate and the V&A as part of this year’s Mozilla Festival in London. SPACEHACKER implements critical and speculative design models by asking participants to sketch out their imaginaries of a digital space that members of their community would feel welcome at. For those interested in getting involved (I’d love collaborators on this project!), I’ll be presenting it along with a few other (mega talented!) MozEx artists in this public pop-up at the Tate on October 10th-11th.

Community making at a workshop entitled “Wandering Ruins” in 2014.

While learning about speculative user imaginations of digital spatiality through this artwork, I’ll also be working with Tate teams as a design practitioner, building a digital mosaic website on the ever-excellent Tumblr platform that highlights the many workshops and community happenings that have occurred at the Digital Studio since it opened in November 2013. Through these hands-on methods and more traditional qualitative observation and interviews, I hope to help build a public-minded narrative of the myriad experiences of users at this site, and the groups who have helped bring them to life - while building an understanding of the politics and institutional apparatuses of access, ownership and power that may also weave themselves around these interactions. My goal with this multi-level approach is to keep the research process as open, iterative and participatory as possible - which makes blog posts like this (and the discussions they bring about) just as important to me as formal academic journal articles and conference presentations. It just wouldn’t feel right to not share this work openly with the communities of makers, thinkers, curators and users who are actually involved in it!

As you can probably tell from my e-tone, I’m massively excited to be getting started on fieldwork this autumn with the Tate and other institutions who are quickly becoming pioneers of digital participation by mixing spaces for making with cultural spaces. I will keep sharing work here as it evolves, and in the meantime I’ll be heading to Johannesburg for a few days to see whether there are similar-minded initiatives that merge culture with digital learning and making in their communities (have any ideas? Please let me know!), and I’ll also be talking about these ideas and others on a panel entitled “Who is the digital revolution for?” in Brighton at the end of this month for those who are in town. Summer may be winding down and the days of sun getting shorter, but it’s shaping up to be an exciting autumn - and I’m already looking forward to meeting, making and thinking these ideas over with many of you throughout it!

Daniel StenbergA sea of curl stickers

To spread the word, to show off the logo, to share the love, to boost the brand, and to allow us to fill up our own and our friends’ laptop covers, I ordered a set of curl stickers to hand out to friends and fans whenever I meet any going forward. They arrived today, and I thought I’d give you a look. (You can still purchase your own set of curl stickers from unixstickers.com)

The sticker is 74 x 26 mm at its max.

curl stickers en masse

a bunch of curl stickers

Wil ClouserGetting Firefox Nightly to stick to Ubuntu's Unity Dock

I installed Ubuntu 16.04.1 this week and decided to try out Unity, the default window manager. After I installed Nightly I assumed it would be simple to get the icon to stay in the dock, but Unity seemed confused about Nightly vs the built-in Firefox (I assume because the executables have the same name).

It took some doing to get Nightly to stick to the Dock with its own icon. I retraced my steps and wrote them down below.

My goal was to be able to run a couple versions of Firefox with several profiles. I thought the easiest way to accomplish that would be to add a new icon for each version+profile combination and a single left click on the icon would run the profile I want.

After some research, I think the Unity way is to have a single icon for each version of Firefox, and then add Actions to it so you can right click on the icon and launch a specific profile from there.

Installing Nightly

If you don't have Nightly yet, download it (these steps should work fine with Aurora or Beta also). Open a terminal:

$ mkdir /opt/firefox
$ tar -xvjf ~/Downloads/firefox-51.0a1.en-US.linux-x86_64.tar.bz2 -C /opt

You may need to chown some directories to get that into /opt, which is fine. At the end of the day, make sure your regular user can write to the directory or else you won't be able to install Nightly's updates.

Adding the icon to the dock

Then create a file in your home directory named nightly.desktop and paste this into it:

[Desktop Entry]
Version=1.0
Name=Nightly
Comment=Browse the World Wide Web
Icon=/opt/firefox/browser/icons/mozicon128.png
Exec=/opt/firefox/firefox %u
Terminal=false
Type=Application
Categories=Network;WebBrowser;
Actions=Default;Mozilla;ProfileManager;

[Desktop Action Default]
Name=Default Profile
Exec=/opt/firefox/firefox --no-remote -P minefield-default

[Desktop Action Mozilla]
Name=Mozilla Profile
Exec=/opt/firefox/firefox --no-remote -P minefield-mozilla

[Desktop Action ProfileManager]
Name=Profile Manager
Exec=/opt/firefox/firefox --no-remote --profile-manager

Adjust anything that looks like it should change, the main callout being that the Exec lines should contain the names of the profiles you want to use (in the above file mine are called minefield-default and minefield-mozilla). If you have more profiles, just make more copies of that section and name them appropriately.

If you think you've got it, run this command:

  $ desktop-file-validate nightly.desktop

No output? Great -- it passed the validator. Now install it:

  $ desktop-file-install --dir=.local/share/applications nightly.desktop

Two notes on this command:

  1. If you leave off --dir it will write to /usr/share/applications/ and affect all users of the computer. You'll probably need to sudo the command if you want that.
  2. Something is weird with the parsing. Originally I passed in --dir=~/.local/... and it literally made a directory named ~ in my home directory, so, if the menu isn't updating, double check the file is getting copied to the right spot.

Some people report having to run unity again to get the change to appear, but it showed up for me. Now left-clicking runs Nightly and right-clicking opens a menu asking me which profile I want to use.

Modifying the Firefox Launcher

I also wanted to launch profiles off the regular Firefox icon in the same way.

The easiest way to do that is to copy the built-in one from /usr/share/applications/firefox.desktop and modify it to suit you. Conveniently, Unity will override a system-wide .desktop file if you have one with the same name in your local directory so installing it with the same commands as you did for Nightly will work fine.

Postscript

I should probably add a disclaimer that I've used Unity for all of two days and there may be a smoother way to do this. I saw a couple of 3rd-party programs that will generate .desktop files but I didn't want to install more things I'd rarely use. Please leave a comment if I'm way off on these instructions! :)

Mozilla Open Design BlogProgress in the making

Since posting the seven initial design directions for the Mozilla brand identity three weeks ago, we’ve continued to shape the work. Guided by where Mozilla is headed strategically, principles of good design, and the feedback we’ve received through this open process, today we release four design contenders. These will continue to be refined over the course of the next two weeks, then put through global consumer testing. We expect a brand identity recommendation to emerge in October.

If you’re just joining this process, you can get oriented here and here. We’re  grateful that this process has sparked such interest among Mozillians, those who care about Mozilla, and the global design community—dozens of articles, hundreds of tweets, thousands of comments, and perhaps tens of thousands of words of feedback. As believers in transparency at Mozilla, we consider this a success.

Thanks to all of you who have added your voice to the conversation. Your many constructive comments and suggestions have helped us chart a path forward. Some of you will find that your favored design directions have been let go in the pursuit of something better. We hope you’ll find a design here that you feel best represents Mozilla today and tomorrow.

Some that we’ve left behind.
Of our original seven, four have fallen by the wayside, one has remained intact and two others have directly led to new ideas. We have let go The Open Button, which upon further study we found lacked a clear connection to Mozilla’s purpose, and Flik Flak, which had its fervent supporters but was either too complex, or too similar to other things, depending on your point of view.

For many, The Impossible M was an early favorite, but we discovered that it was just too close to other design treatments already in the public domain. The Connector stayed in the running for some time, but was eventually overtaken by new ideas (and always slightly suffered from being a bit too complex).

What we resolved to do next.
Working in tandem with our London agency partner johnson banks and making the most of our time zone difference nearly around the clock, we agreed to redirect efforts toward these design goals:

  • Focusing first on the core marks, particularly on their visual simplicity, before figuring out how they extend into design systems.
  • Exploring the dinosaur. From the blog feedback, it was clear that we had permission to link more directly back to the former dinosaur logo. Aside from The Eye, what other paleo elements might we explore?
  • Celebrating the Internet. Rather than seeking ways to render the Internet in three dimensions (as Wireframe World and Flik-Flak had bravely attempted to do), might be influenced by the random beauty of the Internet works and how people use it?
  • Refining and simplifying the two routes, Protocol and Wireframe World, that showed the most promise in the first round.
How the work links to the core narratives
At this stage of the project, we’re down to four overarching narratives, three from the original set and a new one:

The Pioneers: This is still a strong and resonant territory, and one that works well with at least one of the final four.
Taking a stand: This positioning emerged directly from our earliest discussions and is still very strong.
The maker spirit: We’ve seen from the first round that the community of Mozillians is vocal and engaged, and is key to the organization going forward.
The Health of the Internet: This is a new idea that posits Mozilla is a guardian and promoter of the Internet’s overall well-being.

The Final Four
Below is our continued work in progress on the four refined identity directions that we’ll take into testing with our key audiences. Please click on the image to be taken to a page revealing the related design system, and make individual comments there. If you wish to compare and contrast these designs, please do so in the comments section below.
Route One: Protocol
Route 2: Burst
Route 3: Flame
Route 4: Dino 2.0

So there you have it: four final directions. Let us know what you think!

Mozilla Open Design BlogRoute Four: Dino 2.0

Comments from the first round on ‘The Eye’ route confirmed a suspicion that the typography might suggest something we didn’t intend, so first we looked at ways to make this creature more approachable.

Then, we took a step back and looked again for ways to hint at a ‘zilla (whilst not being too specific) and create the basis of a wide-ranging design scheme. After weeks of experiments and simplification, Dino 2.0 emerged.

Essentially this Dino is just a red chevron and some white type – but somehow that one raised eye can watch and blink in a very unique way. And those jaws can merrily chomp when needs be.

We see this Dino as someone who can straddle two narratives – one that can stand up, be counted, shout, bark and bite when needed – yet act as a figurehead for Mozilla’s maker community across the globe. We’re developing a kind of ‘agit’ toolkit for this route, with crude hand-drawn industrial typefaces, and a suitably red, white and black colour scheme.

We’ve discovered Dino 2.0 can successfully change its spots for communities and countries too.

Mozilla Open Design BlogRoute Three: Burst

This route has stemmed from two trains of thought – firstly a new narrative direction where we have been actively considering Mozilla’s role in recording and advancing the health of the Internet. Visually we’ve been investigating data-led ideas and classic internet imagery, whilst not forgetting some of the thinking of ‘Wireframe World’ from the first round.

As we looked harder at data sources we realised that five was a key number: Mozilla is collecting data around five key measurements of Internet health as we type (and you read), and there are five nodes in a capital ‘M’. So we combined the two thoughts.

This creates a very beautiful, almost fragile idea that we know has great potential in online and animated forms. It also lends itself well to a set of interlinked images for Mozilla’s many initiatives.

Mozilla Open Design BlogRoute Two: Flame

This is a completely new direction. Whilst rooted in the ethos of the ‘Pioneer’ thinking it also crosses over into the ‘Taking a stand’ narratives. We started to think that a flame could be a powerful symbol of Mozilla’s determination to remain the beacon for an open, accessible and equal internet for all, and something that a community gathers around for warmth.

As we started to experiment with flame symbolism, one simple design won through, which simply merges an ‘M’ and a flame. After some experimentation, we think a dot/pixellated form for this idea works nicely. It animates well, and might even be dynamically generated with lightweight code.

Here are some key slides showing applications out to sub-brands, community designs and merchandise.

Mozilla Open Design BlogRoute One: Protocol 2.0

Protocol is a strong contender from the first round of ideas that we’ve continued to work on and improve. By putting the internet http:// protocol directly into the word – Moz://a – it creates a type-able word mark, and by doing so alludes to Mozilla’s role at the core of the Internet (and hence the ‘Pioneers’ positioning).

We’ve been strengthening the typography from the first round and looking at ways to expand the graphic language out across a typographic and pictogram language. We’ve also enhanced the blue to reflect the palette of the early Web. Here’s an early exploration:

We’re also experimenting with a thought that some of the characters in the mark could swap in and out, randomly, pulling font characters or emoticons from a local computer or the web itself. A freaky thought, but it could be great.

Support.Mozilla.OrgWhat’s Up with SUMO – 15th September

Hello, SUMO Nation!

We had a bit of a delay with the release of the 49th version of Firefox this week… but for good reasons! The release is coming next week – but our latest news are coming right here, right now. Dig in!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings

  • LATEST ONE: 14th of September- you can read the notes here and see the video at AirMozilla.
  • NEXT ONE: happening on the 21st of September!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Platform

  • PLATFORM REMINDER! The Platform Meetings are BACK! If you missed the previous ones, you can find the notes in this document. (here’s the channel you can subscribe to).
    • We have a first working version of the {for} implementation on the staging site for the Lithium migration – thanks to Tyson from the Lithium team.
    • Some of the admins will be meeting with members of the Lithium team in two weeks to work face-to-face on the migration.
    • More questions from John99 and answers from our team – do check the document linked above for more details.
    • If you are interested in test-driving the new platform now, please contact Madalina.
      • IMPORTANT: the whole place is a work in progress, and a ton of the final content, assets, and configurations (e.g. layout pieces) are missing.
  • QUESTIONS? CONCERNS? Please take a look at this migration document and use this migration thread to put questions/comments about it for everyone to share and discuss. As much as possible, please try to keep the migration discussion and questions limited to those two places – we don’t want to chase ten different threads in too many different places.

Social

Support Forum

  • SUMO Day coming up next week! (As mentioned above).
  • The Norton startup crash for version 49 is still waiting for a fix from Symantec – if that doesn’t happen, expect a few questions in the forums about that.
  • A vulnerability was found in the Flash player last week – if you’re using it, please update it as soon as you can to the latest version!
  • Reminder: If you are using email notifications to know what posts to return to, jscher2000 has a great tip (and tool) for you. Check it out here!

Knowledge Base & L10n

  • We are (still) 1 week before next release / 5 weeks after current release. What does that mean? (Reminder: we are following the process/schedule outlined here)

    • Only Joni or other admins can introduce and/or approve potential last minute changes of next release content; only Joni or other admins can set new content to RFL; localizers should focus on this content.
  • We have some extra time, so please remember to localize the main articles for the upcoming release:
    • https://support.mozilla.org/kb/hello-status/translate
    • https://support.mozilla.org/kb/firefox-reader-view-clutter-free-web-pages/translate
    • https://support.mozilla.org/kb/html5-audio-and-video-firefox/translate
    • https://support.mozilla.org/kb/your-hardware-no-longer-supported/translate

Firefox

  • for Android
    • To repeat what you heard last week (because it’s still true!): version 49 is coming next week. Highlights include:

      • caching selected pages (e.g. mozilla.org) for offline retrieval
      • usual platform and bug fixes
  • for Desktop
    • You’ve heard it before, you’ll hear it again: version 49 is coming next week – read more about it in the release thread (thank you, Philipp!). Highlights include:
      • text-to-speech in Reader mode
      • ending support for older Mac OS versions
      • ending support for older CPUs
      • ending support for Firefox Hello
      • usual platform and bug fixes
  • for iOS
    • …I hear there’s a new iPhone in town, but it’s far from being a jack of all trades ;-)

OK, I admit it, I’m not very good at making hardware jokes. I’m sorry! I guess you’ll have to find better jokes somewhere on the internet – do you have any interesting places that provide you with fun online? Tell us in the comments – and see you all next week!

Mozilla Localization (L10N)Localization Hackathon in Kuala Lumpur

The last weekend of August saw the largest localization hackathon event the l10n-drivers have ever organized. Thirty-four community contributors representing 12 languages from 13 East and Southeast Asian countries journeyed to Kuala Lumpur, Malaysia on Friday, August 26. Jeff, Flod, Gary Kwong and I arrived in time for the welcome dinner with most of the community members. The restaurant, LOKL Coffee, was ready for a menu makeover and took the opportunity to use this Mozilla event to do just that. A professional photographer spent much of the evening with us snapping photos.

We started off Saturday morning with Spectrogram, where l10n contributors moved from one side of the room to another to illustrate whether they agreed or disagreed with a statement. Statements help us understand each community’s preferences to address localization requests. An example: There are too many translation/localization tasks for me to keep up; I want to work on 2000 strings sliced up in 1 year, twice, 6 weeks, 4 weeks, weekly, every other day, daily.

Jeff, the newly appointed localization manager, updated everyone on the l10n organization change, the coming attraction of l20n development, Pontoon as one of the centralized l10n tools, and the ultimate goal of having a single l10n dashboard for the communities and l10n project managers.

Flod briefed everyone on the end of Firefox OS and the new initiatives with Connected Devices. He focused primarily on Firefox. He discussed the 6-week rapid release cycle, or cadence. He also covered the five versions of Firefox: Aurora, Nightly, Beta, Release, and ESR. He described the change to a single source repository, allowing strings to move to production sooner. Firefox for iOS and Android were also presented. It was welcome news that localized products can be shipped through automatic sign-off, without the community’s involvement.

I talked about the importance of developing a style guide for each of the languages represented. This helps with onboarding newcomers, keeps all contributors consistent, and sets the style and tone for each of the Mozilla products. I also briefly touched upon the difference between brand names and product names. I suggested taking this gathering as an opportunity to work on these.

For the rest of the weekend, our communities worked through the goals they had set for themselves. Many requested to move their locales to Pontoon, causing a temporary stall in sync. Others completed quite a few projects, making significant advances on the dashboard charts. Even more decided to tackle the style guides, referencing the template and leveraging information from established outlets. When the weekend was over, nine communities reported having some kind of draft version, or had modified and updated an existing one. Other accomplishments included identifying roles and responsibilities; making plans for meetups for the rest of the year; tool training; improving translation quality by finding critical errors; updating glossaries; and completing some high-priority projects.

The weekend was not all work; it was also filled with cultural activities. Our Saturday dinner at Songket Restaurant was followed by almost an hour of Malaysian cultural dances from across the country, showcasing the diverse cultures that make up Malaysia. Many community members were invited onto the stage to participate. It was a fun evening filled with laughter. Our Sunday dinner was arranged inside Pasar Seni, or the Central Market, a market dating back to 1888. It is now filled with shops and restaurants, giving all visitors a chance to take home some souvenirs and fond memories. Many of us visited the nearby Petaling Street, sampling tropical fruits, including durian, in all shapes and forms.

Putting together the largest l10n hackathon ever is a big achievement, and lots of credit goes to our local support. A big thanks to our Malaysian community, led by Syafiq, who were our eyes and ears on the ground from day one: planning, selecting the venue, and advising us on restaurants, lodging, transportation and cultural events. Not only did we accomplish what we set out to do, we did it safely, we all had fun, and we made more friends. Also a shout-out to Nasrun, our resident photographer, for documenting the weekend through his lens. And a thank you to everyone for sharing a very special and productive weekend with fellow Mozillians! See you next time at another hackathon!

Air MozillaReps weekly, 15 Sep 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Firefox NightlyGet a Nightly that speaks your language!

Over the last months, Kohei Yoshino and myself worked on improving the discoverability of Firefox Nightly desktop builds and in particular localized builds.

Until recently, the only way to get Firefox Nightly for desktop was to download it in English from nightly.mozilla.org or, if you wanted a build in, say, Japanese, Arabic or French, to look for the right FTP sub-folder on ftp.mozilla.org. Nightly.mozilla.org is itself a static HTML page based on a script that periodically scrapes the FTP site for builds.

Of course, as a result, about 90% of our Nightly users use an en-US build. The few thousand users on a localized build are Mozilla localizers and long-term contributors who knew where to search for them. We were clearly limiting ourselves to a subset of the population that could be using Firefox Nightly, which is not a good thing when you want to actually grow the number of Nightly users so as to get more feedback (direct and anonymous). The situation was so frustrating to some of our community members that they created their own download pages for Nightly in their language over time.

Then why didn’t we have a proper download page on www.mozilla.org as we do for all Firefox channels (release, beta, dev edition)? Why were nightly builds available only from a separate sub-domain? One of the reasons was technical: www.mozilla.org uses data provided by the Release Management team through a public JSON API called product-details, and that API didn’t provide any information about Nightly. Actually, this API was a typical legacy piece of code that had been doing the job well for a decade but was meant to be rewritten and replaced by another tool year after year. Until the switch to a new compatible API and the addition of Nightly data there, mozilla.org just didn’t know anything about Nightly.
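
For the curious, product-details is just static JSON served over HTTPS, so any client can read the same data that mozilla.org consumes. A minimal sketch in Rust, assuming the https://product-details.mozilla.org/1.0/firefox_versions.json endpoint and the reqwest and serde_json crates (this is not code from the website itself, and the key names are assumptions based on the published JSON):

// Cargo.toml (illustrative):
// reqwest = { version = "0.11", features = ["blocking", "json"] }
// serde_json = "1"
use serde_json::Value;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The same data the django-product-details library mirrors for mozilla.org.
    let url = "https://product-details.mozilla.org/1.0/firefox_versions.json";
    let versions: Value = reqwest::blocking::get(url)?.json()?;

    // Assumed keys; print whatever the service actually returns for them.
    for key in ["FIREFOX_NIGHTLY", "LATEST_FIREFOX_VERSION"] {
        println!("{key}: {}", versions.get(key).unwrap_or(&Value::Null));
    }
    Ok(())
}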

So first we had to actually end the work on the migration from the old to the new API and mozilla.org switched to this new API in August.

Now that the data about Desktop Nightly was available to mozilla.org (and to any django-based Mozilla site that includes the django-product-details library), the front-end work could be started by Kohei so as to create a page that would list all platforms and localized builds and that would reuse as much code and existing assets as possible. And here is the result at mozilla.org/en-US/firefox/nightly/all:

Nightly multilocale download page

We are working on getting that page linked from the nightly site in bug 1302728. That’s only a start, of course; we still have a lot of work to do to make it easier for advanced testers to find our pre-release builds. But now, when somebody asks you for a comprehensive list of Desktop Nightly builds, you’ll know what address to share!

Many thanks to Kohei and the mozilla.org Webdev team for their help on making that happen!

Jared Wein<select> dropdown update 2

Mike and I met with the “select dropdown” team today and discussed where they’re at and the work that they can focus on for the next week. We are discussing a possible “hack-weekend” October 1st and 2nd at Michigan State.

Freddy got XCode up and running and can debug processes at the single-process/multi-process fork. Jared and Miguel are having issues getting their debugger to work. Freddy will be helping Jared and Miguel with their setup to compare what is different, with the fallback being Jared and Miguel using LLDB as their debugger.

In the past week, the team has also been spending time working on presentations and project plans.

For C++ code, their initial plan was to fake the single process into running through the multi-process code path by removing the checks for whether the code is running in a content process and whether a special e10s desktop preference is enabled.

As a quick technical dive: When a select dropdown is clicked, we determine that we’re running in a content process, and we fire an event (“mozshowdropdown”). A content script running in the content process listens for the “mozshowdropdown” event and opens the popup.
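
To make that flow a little more concrete, here is a loose sketch of what such a content-side listener could look like. It is illustrative only, not the actual Firefox frame-script source; the message name and the sendAsyncMessage global are assumptions.

```typescript
// Illustrative sketch only (not the real Firefox code). Assumes a
// frame-script-like environment where sendAsyncMessage() forwards data to
// the parent process, which would then open the dropdown popup.
declare function sendAsyncMessage(name: string, data: unknown): void; // assumed frame-script global

addEventListener("mozshowdropdown", (event: Event) => {
  const select = event.target as HTMLSelectElement;
  // Forward just enough information for the parent side to render the popup.
  sendAsyncMessage("SelectDropdown:Show", { // hypothetical message name
    options: Array.from(select.options).map(option => option.label),
    selectedIndex: select.selectedIndex,
  });
});
```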

That plan won’t work, though, because the content script mentioned above won’t be loaded, and thus there won’t be an event listener. The content script is currently only loaded through the remote-browser.xml binding; it would also have to be loaded through the non-remote browser binding (browser.xml), along with the various message listeners and event listeners.

While working on this, it would be a good idea to move the code from select-child.js to browser-content.js, since we don’t really need a separate file for select items and browser-content.js is loaded in single-process Firefox.

As for styling changes, the students were able to use the Browser Toolbox to change web content through forms.css and see how things like input textboxes could get different default colors. To change the styling of a select dropdown, the students will experiment in browser.css. In the end they’ll want to make sure that the styling lives under the /themes directory, and likely within /themes/shared/.

One of the students asked if we had ideas about specific algorithms or design-patterns that the search-within-the-dropdown implementation should use. We pointed them at the Browser Console’s filtering ability and asked that the students follow the implementation there.

That wraps up our notes from this week’s meeting. We’ll be meeting regularly for the next 10-ish weeks as the students make progress on their work.


Tagged: capstone, firefox, msu, planet-mozilla

Mozilla Addons BlogAdd-ons Update – 2016/09

Here’s what’s going on in the add-ons world this month. I’m changing the cadence (down from every 3 weeks) to better align with other work and spend less time writing these.

The Review Queues

In the past month, 1,891 listed add-on submissions were reviewed:

  • 1519 (80%) were reviewed in fewer than 5 days.
  • 132 (7%) were reviewed between 5 and 10 days.
  • 240 (13%) were reviewed after more than 10 days.

There are 159 listed add-ons awaiting review.

You can read about the improvements we’ve made in the review queues here.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.

Preliminary Review Removed

As we announced before, we simplified the review process by removing preliminary review, making an add-on review a more straightforward pass/fail decision.

All add-ons on AMO have been migrated to the new system, so add-ons that were preliminarily reviewed before are now fully reviewed, but with the experimental flag on by default. We will send a notification email after we iron out some minor bugs that came up after the migration.

Compatibility

The compatibility blog post for Firefox 50 is up, and the bulk validation will be run in a couple of weeks.

Multiprocess Firefox is now enabled for users without add-ons, and add-ons will be gradually phased in, so make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank Atique Ahmed Ziad, Surya Prashanth, weaksauce, zombie, jorgk, and Trishul Goel for their recent contributions to the add-ons world. You can read more about their work on our recognition page.

Mozilla Localization (L10N)This is what the power of the open Web looks like

One of the main goals of Pontoon is lowering barriers to entry. Especially for end users (mainly localizers), but also for contributors to the codebase, since many of our localizers have a developer background.

I’m happy to acknowledge that in the last 30 days there has been more activity from volunteer contributors in Pontoon development than ever before! Let’s have a closer look at what they have been working on:

Last month’s Pontoon contributors

Michal Vašíček
Michal came up with the idea to highlight matches in original and translated strings when searching in the sidebar. He created a patch, but couldn’t finish it due to his school duties. It was taken over by Jarek, who earlier played a great role in reviewing the original patch by Michal.

Being only 14 years old, Michal is the youngest Pontoon contributor!

Jarek Śmiejczak (jotes)
Since he became an active Pontoon contributor over a year ago, Jarek has evolved into not just a great developer but also a fantastic mentor, helping onboard new contributors and review their work. One way or another, he’s been involved with all the bugs and features listed in this blog post.

Of course that doesn’t mean he has stopped contributing code. On the contrary, he just completed Firefox Accounts-based authentication support, which will soon replace Persona. And he’s already busy working on bringing terminology support to Pontoon, too.

Victor Bychek
A lot of our users have been complaining about their email addresses being exposed publicly in the Pontoon UI and URLs, even though that complies with the Commit Access Requirements. Thanks to Victor, those days are over: as long as you set a display name, we no longer reveal email addresses on top contributor pages or in the filter-by-user selector.

Victor is a pleasure to work with and is already busy with his next task, which will allow you to apply multiple filters at the same time.

Stoyan Dimitrov
As the new leader of the Bulgarian localization team, Stoyan takes his duties very seriously. He started by creating a vector version of the Pontoon logo and changing the copy.

Later on he created a Firefox Add-On called Pontoon Enhanced, which can add new features to Pontoon before they are deployed or even implemented in the application. It’s basically a Test Pilot for Pontoon.

Michal Stanke
As an agile bug reporter, Michal has been one of the most valuable early adopters of Pontoon. Now he has decided to take a step further.

He set up his own local Pontoon instance, fixed a few developer documentation bugs along the way, and provided a patch that fixes one of the bugs he reported. It allows us to properly detect placeables of the form %(thisIsVariable)s.

Get involved!
I consider myself lucky to be working with this great team. It is particularly valuable to see contributions coming from people who actually use the product. This is what the power of the open Web looks like!

You too can shape the future of Pontoon by filing a bug or starting to work on one of the mentored ones. The barriers to entry are low! 🙂

Ben HearsumWhat's New with Balrog - September 14th, 2016

The pace of Balrog development has been increasing since the beginning of 2016. We're now at a point where we push new code to production nearly every week, and I'd like to start highlighting all the great work that's being done. The past week was particularly busy, so let's get into it!

The most exciting thing to me is Meet Mangukiya's work on adding support for substituting some runtime information when showing deprecation notices. This is Meet's first contribution to Balrog, and he's done a great job! This work will allow us to send users to better pages when we've deprecated support for their platform or OS.

Njira has continued to chip away at UI papercuts, fixing display issues on history pages, addressing bad word wrapping on some pages, and reworking some dialogs to make better use of space and minimize scrolling.

A few weeks ago Johan suggested that it might be time to get rid of the submodule we use for UI and integrate it with the primary repository. This week he's done so, and it's already improved the workflow for developers.

For my part, I got the final parts to my Scheduled Changes work landed - a project that has been in the works since January. With it, we can pre-schedule changes to Rules, which will help minimize the potential for human error when we ship, and make it unnecessary for RelEng to be around just to hit a button. I also fixed a regression (that I introduced) that made it impossible to grant full admin access, oops!

Scheduled Changes UI

I also want to give a big thank you to Benson Wong for his help and expertise in getting the Balrog Agent deployed - it was a key piece in the Scheduled Changes work, and went pretty smoothly!

Air MozillaThe Joy of Coding - Episode 71

The Joy of Coding - Episode 71 mconley livehacks on real Firefox bugs while thinking aloud.

Myk MelezThe Once And Future GeckoView

GeckoView is an Android library for embedding Gecko into an Android app. Mark Finkle introduced it via GeckoView: Embedding Gecko in your Android Application back in 2013, and a variety of Fennec hackers have contributed to it, including Nick Alexander, who described a Maven repository for GeckoView in 2014. It’s also been reused, at least experimentally, by Joe Bowser to implement MozillaView – GeckoView Proof of Concept.

But GeckoView development hasn’t been a priority, and parts of it have bitrotted. It has also remained intertwined with Fennec, which makes it more complicated to reuse in another Android app. And the core WebView class in Android (along with the cross-platform implementation in Crosswalk) already addresses a variety of web rendering use cases for Android app developers, which complicates GeckoView’s value proposition.

Nevertheless, it may have an advantage for the subset of native Android apps that want to provide a consistent experience across the fragmented Android install base or take advantage of the features Gecko provides, like WebRTC, WebVR, and WebAssembly. More research (and perhaps some experimentation) will be needed to determine to what extent that’s true. But if “there’s gold in them thar hills,” then I want to mine it.

So Nick recently determined what it would take to completely separate GeckoView from Fennec, and he filed a bunch of bugs on the work. I then filed meta-bug 1291362 — standalone Gradle-based GeckoView library to track those bugs along with the rest of the work required to build and distribute a standalone Gradle-based GeckoView library reusable by other Android apps. Nick, Jim Chen, and Randall Barker have already made some progress on that project.

It’s still early days, and I’m still pursuing the project’s prioritization (say that ten times fast). So I can’t yet predict when we’ll complete that work. But I’m excited to see the work underway, and I look forward to reporting on its progress!

Julia ValleraShare your expertise with Mozilla Clubs around the world!

Club guides are an important part of the Mozilla Clubs program, as they provide direction and assistance for specific challenges that clubs may face during day-to-day operation. We are looking for volunteers to help us author new guides and resources that will be shared globally. By learning more about the process and structure of our guides, we hope that you’ll collaborate on a Mozilla Club guide soon!

Background

In early 2015, Mozilla Clubs staff began publishing a series of guides that provide Club leaders with helpful tips and resources they need to maintain their Clubs. Soon after, community members began assisting with, collaborating on and authoring these guides alongside staff.

Guides are created in response to inquiries from Club Captains and Regional Coordinators around challenges they face within their clubs. Some challenges are common across Clubs and others are specific to one Club. In either case, the Mozilla Clubs team tries to create guides that assist in overcoming those challenges. Once a guide is published, it is listed as a resource on the Mozilla Clubs’ website.

Club guide created by Simeon Oriko, Carolina Tejada Alvarez, Kristina Gorr and the Mozilla Clubs team

Screenshot of a guide co-authored by Simeon Oriko, Carolina Tejada Alvarez, Kristina Gorr and the Mozilla Clubs team

How are club guides used around the world?

At Mozilla Clubs, there is a growing list of guides and resources that help Club participants maintain Club activity around the world. These guides are in multiple languages and cover topics related to teaching the web, sustaining communities, growing partnerships, fostering collaborations and more.

Guides should be used and adapted as needed. Club leaders are free to choose which guides they use and don’t use. The information included in each guide is drawn from experienced community leaders that are willing to share their expertise. Guides will continue to evolve and we welcome suggestions for how to improve them. The source code, template and content are free and available on Github.

Here are a few examples of how guides have been used:

  • A new Club Captain is wondering how to teach their Club learners about open practices so they read the “Teaching Web Literacy in the Open” guide for facilitation tips and activity ideas.
  • A librarian is interested in starting a Club in a library and uses the “Hosting a Mozilla Club in your Library” guide for tips and event ideas.
  • A Club wants to make a website, so they use the “Creating a website” guide to learn how to secure a domain, choose a web host and use a free website builder.

What is the process for creating club guides?

The process for creating guides is evolving and sometimes varies on a case-by-case basis. In general, it goes something like this:

Step 1: Club leaders make suggestions for new guides on our discussion forum.

Step 2: Mozilla Club staff respond to the suggestion and share any existing resources.

Step 3: If there are no existing resources, the suggestion is added to a list of upcoming guides.

Step 4: Staff seek experts from the community to contribute or help author the guide (in some cases this could be the person who made the suggestion).

Step 5: Once we find an expert (internal to Mozilla or a volunteer from the community) who is interested in collaborating, the guide is drafted in as little or as much time as needed.

We currently have 50+ guides and resources available and look forward to seeing that number grow. If you have ideas for guides and/or would like to contribute to them, please let us know here.

The Mozilla BlogCommission Proposal to Reform Copyright is Inadequate

The draft directive released today thoroughly misses the goal to deliver a modern reform that would unlock creativity and innovation in the Single Market.

Today the EU Commission released their proposal for a reformed copyright framework. What has emerged from Brussels is disheartening. The proposal is more of a regression than the reform we need to support European businesses and Internet users.

To date, over 30,000 citizens have signed our petition urging the Commission to update EU copyright law for the 21st century. The Commission’s proposal needs substantial improvement.  We collectively call on the EU institutions to address the many deficits in the text released today in subsequent iterations of this political process.

The proposal fails to bring copyright in line with the 21st century

The proposal does little to address much-needed exceptions to copyright law. It provides some exceptions for education and preservation of cultural heritage. Still, a new exception for text and data mining (TDM), which would advance EU competitiveness and research, is limited to public interest research institutions (Article 3). This limitation could ultimately restrict, rather than accelerate, TDM to unlock research and innovation across sectors throughout Europe.

These exceptions are far from sufficient. There are no exceptions for panorama, parody, or remixing. We also regret that provisions which would add needed flexibility to the copyright system — such as a UGC (user-generated content) exception and a flexible user clause like an open norm, fair dealing or fair use — have not been included. Without robust exceptions, and provisions that bring flexibility and a future-proof element, copyright law will continue to chill innovation and experimentation.

Pursuing the ‘snippet tax’ on the EU level will undermine competition, access to knowledge

The proposal calls for ancillary copyright protection, or a ‘snippet tax’. Ancillary copyright would allow online publishers to copyright ‘press publications’, which is broadly defined to cover works that have the purpose of providing “information related to news or other topics and published in any media under the initiative, editorial responsibility and control of a service provider” (Article 2(4)). This content would remain under copyright for 20 years after its publication — an eternity online. This establishment of a new exclusive right would limit the free flow of knowledge, cripple competition, and hinder start-ups and small- and medium-sized businesses. It could, for example, require bloggers linking out to other sites to pay new and unnecessary fees for the right to direct additional traffic to existing sites, even though having the snippet would benefit both sides.

Ancillary copyright has already failed miserably in both Germany and Spain. Including such an expansive exclusive right at the EU level is puzzling.

The proposal establishes barriers to entry for startups, coders, and creators

Finally, the proposal calls for an increase in intermediaries’ liability. Streaming services like YouTube, Spotify, and Vimeo, or any ISPs that “provide to the public access to large amounts of works or other subject-matter uploaded by their users” (Article 13(1)), will be obliged to broker agreements with rightsholders for the use of, and the protection of, their works. Such measures could include the use of “effective content recognition technologies”, which imply universal monitoring and strict filtering technologies that identify and/or remove copyrighted content. This is technically challenging — and, more importantly, it would disrupt the very foundations that make many online activities possible in the EU, for example by putting user-generated content in the crosshairs of copyright takedowns. Only the largest companies would be able to afford the complex software required to comply if these measures are deemed obligatory, resulting in a further entrenchment of the power of large platforms at the expense of EU startups and free expression online.

These proposals, if adopted as they are, would deal a blow to EU startups, to independent coders, creators, and artists, and to the health of the internet as a driver for economic growth and innovation. The Parliament certainly has its work cut out for it. We reiterate the call from 24 organisations in a joint letter expressing many of these concerns and urge the European Commission to publish the results of the Related rights and Panorama exception public consultation.

We look forward to working toward a copyright reform that takes account of the range of stakeholders who are affected by copyright law. And we will continue to advocate for an EU copyright reform that accelerates innovation and creativity in the Digital Single Market.

Luis VillaCopyleft and data: databases as poor subject

tl;dr: Open licensing works when you strike a healthy balance between obligations and reuse. Data, and how it is used, is different from software in ways that change that balance, making reasonable compromises in software (like attribution) suddenly become insanely difficult barriers.

In my last post, I wrote about how database law is a poor platform to build a global public copyleft license on top of. Of course, whether you can have copyleft in data only matters if copyleft in data is a good idea. When we compare software (where copyleft has worked reasonably well) to databases, we’ll see that databases are different in ways that make even “minor” obligations like attribution much more onerous.

Card Puncher from the 1920 US Census.

How works are combined

In software copyleft, the most common scenarios to evaluate are merging two large programs, or copying one small file into a much larger program. In this scenario, understanding how licenses work together is fairly straightforward: you have two licenses. If they can work together, great; if they can’t, then you don’t go forward, or, if it matters enough, you change the license on your own work to make it work.

In contrast, data is often combined in three ways that are significantly different than software:

  • Scale: Instead of a handful of projects, data is often combined from hundreds of sources, so doing a license-conflict analysis to check whether any of those sources have conflicting obligations (like copyleft) is impractical. Peter Desmet did a great job of analyzing this in the context of an international bio-science dataset, which has 11,000+ data sources.
  • Boundaries: There are some cases where hundreds of pieces of software are combined (like operating systems and modern web services) but they have “natural” places to draw a boundary around the scope of the copyleft. Examples of this include the kernel-userspace boundary (useful when dealing with the GPL and Linux kernel), APIs (useful when dealing with the LGPL), or software-as-a-service (where no software is “distributed” in the classic sense at all). As a result, no one has to do much analysis of how those pieces fit together. In contrast, no natural “lines” have emerged around databases, so either you have copyleft that eats the entire combined dataset, or you have no copyleft. ODbL attempts to manage this with the concept of “independent” databases and produced works, but after this recent case I’m not sure even those tenuous attempts hold as a legal matter anymore.
  • Authorship: When you combine a handful of pieces of software, most of the time you also control the licensing of at least one of those pieces of software, and you can adjust the licensing of that piece as needed. (Widely-used exceptions to this rule, like OpenSSL, tend to be rare.) In other words, if you’re writing a Linux kernel driver, or a WordPress theme, you can choose the license to make sure it complies. Not necessarily the case in data combinations: if you’re making use of large public data sets, you’re often combining many other data sources where you aren’t the author. So if some of them have conflicting license obligations, you’re stuck.

How attribution is managed

Attribution in large software projects is painful enough that lawyers have written a lot on it, and open-source operating systems vendors have built somewhat elaborate systems to manage it. This isn’t just a problem for copyleft: it is also a problem for the supposedly easy case of attribution-only licenses.

Now, again, instead of dozens of authors, often employed by the same copyright-owner, imagine hundreds or thousands. And instead of combining these pieces in basically the same way each time you build the software, imagine that every time you run a different query, you have to provide different attribution data (because the relevant slices of data may have different sources or authors). That’s data!

The least-bad “solution” here is to (1) tag every field (not just data source) with licensing information, and (2) have data-reading software create new, accurate attribution information every time a new view into the data is created. (I actually know of at least one company that does this internally!) This is not impossible, but it is a big burden on data software developers, who must now include a lawyer in their product design team. Most of them will just go ahead and violate the licenses instead, pass the burden on to their users to figure out what the heck is going on, or both.
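
As a purely illustrative sketch (none of these names come from the post), those two steps might look something like this: license metadata carried on every field, plus a helper that rebuilds the attribution list for whatever slice of data a query actually returned.

```typescript
// Illustrative only: per-field license tags (step 1) and per-view
// attribution generation (step 2). All names here are invented.
interface Field {
  value: string;
  source: string;  // who authored this particular field
  license: string; // e.g. "CC-BY-4.0"
}

type Row = { [column: string]: Field };

// Rebuild the attribution list for the rows a query returned, since
// different slices of the data may have different sources and licenses.
function attributionsFor(view: Row[]): string[] {
  const seen = new Set<string>();
  for (const row of view) {
    for (const field of Object.values(row)) {
      seen.add(`${field.source} (${field.license})`);
    }
  }
  return Array.from(seen).sort();
}
```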

Who creates data

Most software is either under a very standard and well-understood open source license, or is produced by a single entity (or often even a single person!) that retains copyright and can adjust that license based on their needs. So if you find a piece of software that you’d like to use, you can either (1) just read their standard FOSS license, or (2) call them up and ask them to change it. (They might not change it, but at least they can if they want to.) This helps make copyleft problems manageable: if you find a true incompatibility, you can often ask the source of the problem to fix it, or fix it yourself (by changing the license on your software).

Data sources typically can’t solve problems by relicensing, because many of the most important data sources are not authored by a single company or single author. In particular:

  • Governments: Lots of data is produced by governments, where licensing changes can literally require an act of the legislature. So if you do anything that goes against their license, or two different governments release data under conflicting licenses, you can’t just call up their lawyers and ask for a change.
  • Community collaborations: The biggest open software relicensing that’s ever been done (Mozilla) required getting permission from a few thousand people. Successful online collaboration projects can have 1-2 orders of magnitude more contributors than that, making relicensing hard. Wikidata solved this the right way: by going with CC0.

What is the bottom line?

Copyleft (and, to a lesser extent, attribution licenses) works when the obligations placed on a user are in balance with the benefits those users receive. If they aren’t in balance, the materials don’t get used. Ultimately, if the data does not get used, our egos feel good (we released this!) but no one benefits, and regardless of the license, no one gets attributed and no new material is released. Unfortunately, even minor requirements like attribution can throw the balance out of whack. So if we genuinely want to benefit the world with our data, we probably need to let it go.

So what to do?

So if data is legally hard to build a license for, and the nature of data makes copyleft (or even attribution!) hard, what to do? I’ll go into that in my next post.

Tantek Çelik#XOXOFest 2016: Ten Overviews & Personal Perspectives

I braindumped my rough, incomplete, and barely personal impressions from XOXO 2016 last night: #XOXOfest 2016: Independent Creatives Inspired, Shared, Connected. I encourage you to read the following well-written XOXO overview posts and personal perspectives. In rough order of publication (or when I read them):

(Maybe open Ben Darlow’s XOXO 2016 Flickr Set to provide some visual context while you read these posts.)

  1. Casey Newton (The Verge): In praise of the internet's best festival, which is going away (posted before mine, but I deliberately didn’t read it til after I wrote my own first XOXO 2016 post).
  2. Sasha Laundy: xoxo from XOXO
  3. Nabil “Nadreck” Maynard: XOXO, XOXO
  4. Matt Haughey: Starving artists / Memories of XOXO 2016
  5. Courtney Patubo Kranzke: XOXO Festival Thoughts
  6. Zoe Landon: Hugs and Kisses / A Year of XOXO
  7. Clint Bush: Andy & Andy: The XOXO legacy
  8. Erin Mickelson: XOXO
  9. Dylan Wilbanks: Eight short-ish thoughts about XOXO 2016
  10. Doug Hanke: Obligatory XOXO retrospective

There’s plenty of common themes across these posts, and lots I can personally relate to. For now I’ll leave you with just the list, no additional commentary. Go read these and see how they make you feel about XOXO. If you had the privilege of participating in XOXO this year, consider posting your thoughts as well.

Mozilla Addons BlogWebExtensions and parity with Chrome

A core strength of Firefox is its extensibility. You can do more to customize your browsing experience with add-ons than in any other browser. It’s important to us, and our move to WebExtensions doesn’t change that. One of the first goals of implementing WebExtensions, however, is reaching parity with Chrome’s extension APIs.

Parity allows developers to write add-ons that work in browsers that support the same core APIs with minimum fuss. It doesn’t mean the APIs are identical, and I wanted to clarify the reasons why there are implementation differences between browsers.

Different browsers

Firefox and Chrome are different browsers, so some APIs from Chrome do not translate directly.

One example is tab highlighting. Chrome has this API because it has the concept of highlighted tabs, which Firefox does not. So for browser.tabs.onHighlighted, we fire the event on the active tab instead, as documented on MDN. It’s not the same functionality as Chrome’s, but that behavior makes the most sense for Firefox.
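
For reference, the listener itself has the same shape in both browsers; what differs is when it fires. A minimal sketch, with the `browser` global declared loosely so the snippet stands alone:

```typescript
// Minimal sketch: the listener looks the same in Chrome and Firefox; per the
// post, Firefox fires it for the active tab rather than for highlighted tabs.
declare const browser: any; // WebExtensions global, typed loosely for brevity

browser.tabs.onHighlighted.addListener(
  (highlightInfo: { windowId: number; tabIds: number[] }) => {
    console.log(`window ${highlightInfo.windowId} highlighted tabs:`, highlightInfo.tabIds);
  }
);
```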

Another, more complicated, example is private browsing mode. The equivalent in Chrome is called incognito mode, and extensions can support one of several modes: spanning, split or not_allowed. Currently we throw an error if we see a manifest that is not spanning, since that is the only mode Firefox currently supports. We do this to alert extension authors testing their extension that it won’t operate the way they expect.

Less popular APIs

Some APIs are more popular than others. With limited people and time we’ve had to focus on the APIs that we thought were the most important. At the beginning of this year we downloaded 10,000 publicly available versions of extensions off the Chrome store and examined the APIs called in those extensions. It’s not a perfect sample, but it gave us a good idea.

What we found was that there are some really popular APIs, like tabs, windows, and runtime, and there are some APIs that are less popular. One example is fontSettings.get, which is used in 7 out of the 10,000 (0.07%) add-ons. Compare that to tabs.create, which is used in 4,125 out of 10,000 (41.25%) add-ons.

We haven’t prioritized the development of the least-used APIs, but as always we welcome contributions from our community. To contribute to WebExtensions, check out our contribution page.

Deprecated APIs

There are some really popular APIs in extensions that are deprecated. It doesn’t make sense for us to implement APIs that are already deprecated and are going to be removed. In these cases, developers will need to update their extensions to use the new APIs. When they do, they will work in the supported browsers.

Some examples are in the extension API, which are mostly replaced by the runtime API. For example, use runtime.sendMessage instead of extension.sendMessage; use runtime.onMessage instead of extension.onRequest and so on.
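
A small before/after sketch of that migration (with the `browser` global declared loosely; the `chrome.*` namespace works the same way):

```typescript
// Sketch: moving from the deprecated extension.* messaging APIs to runtime.*.
declare const browser: any; // WebExtensions global, typed loosely for brevity

// Before (deprecated): browser.extension.sendMessage(...) / extension.onRequest
// After:
browser.runtime.sendMessage({ greeting: "ping" });

browser.runtime.onMessage.addListener((message: any, sender: any) => {
  if (message && message.greeting === "ping") {
    console.log("received ping from", sender.id);
  }
});
```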

W3C

WebExtensions APIs will never completely mirror Chrome’s extension APIs, for the reasons outlined above. We are, however, already reaching a point where the majority of Chrome extensions work in Firefox.

To make writing extensions for multiple browsers as easy as possible, Mozilla has been participating in a W3C community group for extension compatibility. Also participating in that group are representatives of Opera and Microsoft. We’ll be sending a representative to TPAC this month to take part in discussions about this community group so that we can work towards a common browser standard for browser extensions.

Update: please check the MDN page on incompatibilities.

Armen ZambranoIncreasing test coverage

Last quarter I spent some time increasing mozci's test coverage. Here are some notes I took to help me remember in the future how to do it.

Here's some of what I did:
  • Read Python's page about increasing test coverage
    • I wanted to learn what core Python recommends
    • They recommend using coverage.py
  • Quick start with coverage.py
    • "coverage run --source=mozci -m py.test test" to gather data
    • "coverage html" to generate an html report
    • "/path/to/firefox firefox htmlcov/index.html" to see the report
  • NOTE: We have coverage reports from automation in coveralls.io
    • If you find code that needs to be ignored, read this.
      • Use "# pragma: no cover" in specific lines
      • You can also create rules of exclusion
    • Once you get closer to 100% you might want to consider increasing branch coverage instead of line coverage
    • Once you pick a module to increase coverage for:
      • Keep making changes, re-running "coverage run" and "coverage html" as you go.
      • Reload the html page to see the new results

After some work on this, I realized that my preferred place to improve tests is the simplest unit tests. I say this because integration tests require proper work and thought about how to test them properly, rather than *just* increasing coverage for the sake of it.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

    The Mozilla BlogCybersecurity is a Shared Responsibility

    There have been far too many “incidents” recently that demonstrate the Internet is not as secure as it needs to be. Just in the past few weeks, we’ve seen countless headlines about online security breaches. From the alleged hack of the National Security Agency’s “cyberweapons” to the hack of the Democratic National Committee emails, and even recent iPhone security vulnerabilities, these stories reinforce how crucial it is to focus on security.

    Internet security is like a long chain and each link needs to be tested and re-tested to ensure its strength. When the chain is broken, bad things happen: a website that holds user credentials (e.g., email addresses and passwords) is compromised because of weak security; user credentials are stolen; and, those stolen credentials are then used to attack other websites to gain access to even more valuable information about the user.

    One weak link can break the chain of security and put Internet users at risk. The chain only remains strong if technology companies, governments, and users work together to keep the Internet as safe as it can be.

    Technology companies must focus on security.

    Technology companies need to develop proactive, pro-user cybersecurity technology solutions.

    We must invest in creating a secure platform. That means supporting things like adopting and standardizing secure protocols, building features that improve security, and empowering users with education and better tools for their security.

    At Mozilla, we have security features like phishing and malware protection built into Firefox. We started one of the first Bug Bounty programs in 2004 because we want to be informed about any vulnerabilities found in our software so we can fix them quickly. We also support the security of the broader open source ecosystem (not just Mozilla developed products). We launched the Secure Open Source (SOS) Fund as part of the Mozilla Open Source Support program to support security audits and the development of patches for widely used open source technologies.

    Still, there is always room for improvement. The recent headlines show that the threat to user safety online is real, and it’s increasing. We can all do better, and do more.

    Governments must work with technology companies.  

    Cybersecurity is a shared responsibility and governments need to do their part. Governments need to help by supporting security solutions that no individual company can tackle, instead of advancing policies that just create weak links in the chain.

    Encryption, something we rely on to keep people’s information secure online everyday, is under attack by governments because of concerns that it inadvertently protects the bad guys. Some governments have proposed actions that weaken encryption, like in the case between Apple and the FBI earlier this year. But encryption is not optional – and creating backdoors for governments, even for investigations, compromises the security of all Internet users.

    The Obama Administration just appointed the first Federal Chief Information Security officer as part of the Cybersecurity National Action Plan. I’m looking forward to seeing how this role and other efforts underway can help government and technology companies work better together, especially in the area of security vulnerabilities. Right now, there’s not a clear process for how governments disclose security vulnerabilities they discover to affected companies.

    While lawful hacking by a government might offer a way to catch the bad guys, stockpiling vulnerabilities for long periods of time can further weaken that security chain. For example, the recent alleged attack and auction of the NSA’s “cyberweapons” resulted in the public release of code, files, and “zero day” vulnerabilities that gave companies like Cisco and Fortinet just that: zero days to develop fixes before they were possibly exploited by hackers. There aren’t transparent and accountable policies in place that ensure the government is handling vulnerabilities appropriately and disclosing them to affected companies. We need to make this a priority to protect user security online.

    Users can take easy and simple steps to strengthen the security chain.   

    Governments and companies can’t do this without you. Users should always update their software to benefit from new security features and fixes, create strong passwords to guard their private information, and use available resources to become educated digital citizens. These steps don’t just protect people who care about their own security; they help create a more secure system and go a long way toward making it harder to break the chain.

    Working together is the only way to protect the security of the Internet for the billions of people online. We’re dedicated to this as part of our mission and we will continue our work to advance these issues.

    Mozilla Release Management TeamFirefox 49 delayed

    The original 2016 Firefox release schedule had the release of Firefox 49 shipping on September 13, 2016. During our release qualification period for Firefox 49, we discovered a bug in the release that causes some desktop and Android users to see a slow script dialog more often than we deem acceptable. In order to allow time to address this issue, we have rescheduled the release of Firefox 49 to September 20, 2016.

    In order to accommodate this change, we will shorten the following development cycle by a week. No other scheduled release dates are impacted by this change.

    In parallel, Firefox ESR 45.4.0 is also delayed by a week.

    Tantek Çelik#XOXOfest 2016: Independent Creatives Inspired, Shared, Connected

    Inspired, once again. This was the fifth XOXO Conference & Festival (my fourth, having missed last year).

    There’s too much about XOXO 2016 to fit into one "XOXO 2016" blog post. So much that there’s no way I’d finish if I tried.

    Outdoors on the last day of XOXO Festival 2016

    4-ish days of:

    Independent creatives giving moving, inspiring, vulnerable talks, showing their films with subsequent Q&A, performing live podcast shows (with audience participation!).

    Games, board games, video games, VR demos. And then everything person-to-person interactive. All the running into friends from past XOXOs (or dConstructs, or classic SXSWi), meetups putting IRL faces to Slack aliases.

    Friends connecting friends, making new friends, instantly bonding over particular creative passions, Slack channel inside jokes, rare future optimists, or morning rooftop yoga under a cloud-spotted blue sky.

    The walks between SE Portland venues. The wildly varying daily temperatures, sunny days hotter than predicted highs, cool windy nights colder than predicted lows. The attempts to be kind and minimally intrusive to local homeless.

    More conversations about challenging and vulnerable topics than small talk. Relating on shared losses. Tears. Hugs, lots of hugs.

    Something different happens when you put that many independent creatives in the same place, and curate & iterate for five years. New connections, between people, between ideas, the energy and exhaustion from both. A sense of a safer place.

    I have so many learnings from all of the above, and so many emergent patterns swimming in my head, that I’m having trouble sifting and untangling them. Strengths of creative partners and partnerships. Uncountable struggles. The disconnects between attention, popularity, money. The hope, support, and understanding instead of judgment.

    I'm hoping to write at least a few single-ish topic posts just to get something(s) posted before the energies fade and memories start to blur.

    This Week In RustThis Week in Rust 147

    Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

    This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

    Updates from Rust Community

    News & Blog Posts

    New Crates & Project Updates

    Crate of the Week

    This week's crate of the week is tokio, a high-level asynchronous IO library based on futures. Thanks to notriddle for the suggestion.

    Submit your suggestions and votes for next week!

    Call for Participation

    Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

    Some of these tasks may also have mentors available, visit the task page for more information.

    If you are a Rust project owner and are looking for contributors, please submit tasks here.

    Updates from Rust Core

    84 pull requests were merged in the last two weeks.

    New Contributors

    • Cobrand
    • Jake Goldsborough
    • John Firebaugh
    • Justin LeFebvre
    • Kylo Ginsberg
    • Nicholas Nethercote
    • orbea
    • Richard Janis Goldschmidt
    • Ulrich Weigand

    Approved RFCs

    Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

    Final Comment Period

    Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

    New RFCs

    Upcoming Events

    If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

    fn work(on: RustProject) -> Money

    Tweet us at @ThisWeekInRust to get your job offers listed here!

    Quote of the Week

    No quote was selected for QotW.

    Submit your quotes for next week!

    This Week in Rust is edited by: nasa42, llogiq, and brson.