Air MozillaWebdev Extravaganza: September 2015

Webdev Extravaganza: September 2015. Once a month, web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

The Mozilla BlogForging an Alliance for Royalty-Free Video

Things are moving fast for royalty-free video codecs. A month ago, the IETF NETVC Working Group had its first meeting and two weeks ago Cisco announced Thor. Today, we’re taking the next big step in this industry-wide effort with the formation of the Alliance for Open Media. Its founding members represent some of the biggest names in online video, such as Netflix, Amazon, and YouTube, multiple browser vendors including Mozilla, Microsoft, and Google, and key technology providers like Cisco and Intel. The Alliance has come together to share technology and run the kind of patent analysis necessary to build a next-generation royalty-free video codec.

Mozilla has long championed royalty-free codecs. The Web was built on innovation without asking permission, and patent licensing regimes are incompatible with some of the Web’s most successful business models. That’s why we already support great codecs like VP8, VP9, and Opus in Firefox. But the Web doesn’t stand still and neither do we. As resolutions and framerates increase, the need for more advanced codecs with ever-better compression ratios will only grow. We started our own Daala project and formed NETVC to meet those needs, and we’ve seen explosive interest in the result. We believe that Daala, Cisco’s Thor, and Google’s VP10 combine to form an excellent basis for a truly world-class royalty-free codec.

In order to allow us to move quickly, the alliance is structured as a Joint Development Foundation project. These are an ideal complement to a larger, open standards organization like the IETF: One of the biggest challenges in developing open standards in a field like video codecs is figuring out how to review the patents. The Alliance provides a venue for us to share the legal legwork without having to worry about it being used against us down the road. That distributes the load, allows us to innovate faster and cheaper, and gives everyone more confidence that we are really producing a royalty-free codec.

The Alliance will operate under W3C patent rules and release code under an Apache 2.0 license. This means all Alliance participants are waiving royalties both for the codec implementation and for any patents on the codec itself. The initial members are just a start. We invite anyone with an interest in video, online or off, to join us.

For further information please visit www.aomedia.org or view the press release.

Air MozillaMartes mozilleros

Martes mozilleros. A biweekly meeting to talk about the state of Mozilla, the community, and its projects.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1197073] add support for 2fa using totp (eg. google authenticator)
  • [1199136] security bug group moves for new core-security-release group
  • [1199941] inactive sessions should expire faster (a week)

today’s push lands initial support for two-factor authentication on BMO. we currently support time-based one-time passwords (totp) with protection around just logging in. 2fa protection will be extended to protect other actions in the upcoming weeks.

visit the ‘two-factor authentication’ section under your user preferences to enable 2fa.

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

QMOFirefox 41 Beta 7 Testday, September 4th

I’m writing to let you know that this Friday, September 4th, we’ll be hosting the Firefox 41.0 Beta 7 Testday. The main focus of this event will be Flash on 64-bit Firefox builds and plug-in testing. Detailed participation instructions are available in this etherpad.

No previous testing experience is required so feel free to join us on the #qa IRC channel and our moderators will make sure you’ve got everything you need to get started.

Hope to see you all on Friday! Let’s make Firefox better together!

Daniel StenbergBlog refresh

Dear reader,

If you’ve visited my blog in the past and you’re seeing this, you should have noticed a pretty significant change in appearance that happened here the other day.

When I kicked off my blog here on the site back in August 2007 and moved my blogging from advogato to self-hosting, I installed WordPress and I’ve been happy with it from a usability standpoint ever since. I crafted a look based on an existing theme and left it at that.

Over time, WordPress has had its hefty share of security problems, and I’ve suffered from them myself a couple of times, ending up patching it manually more than once. At one point, when I decided to bite the bullet and upgrade to the latest version, the upgrade no longer worked, so I postponed it for later.

Time passed, I tried again without success and then more time passed.

I finally fixed the issues I had with upgrading. With some manual fiddling I managed to upgrade to the latest WordPress, and in the process my old theme was flagged as broken/incompatible, so I threw it out and started fresh with a new theme. The new one is based on one of the simple default themes WordPress ships for free. I’ve mostly just made it slightly wider and tweaked the looks somewhat. I don’t need fancy. Hopefully I’ll be able to keep up with WordPress better this time.

Additionally, I added a captcha that forces users to solve an easy math problem before submitting anything to the blog, to help me fight spam, and perhaps even more to solve a problem I had with spambots creating new users. Yesterday I removed over 3300 users who never had a post accepted.

Enjoy.  Now back to our regular programming!

Seif LotfyCounting flows (Semi-evaluation of CMS, CML and PMC)

Assume we have a stream of events coming in one at a time, and we need to count the frequency of the different types of events in the stream.

In other words: We are receiving fruits one at a time in no given order, and at any given time we need to be able to answer how many of a specific fruit we have received.

The most naive implementation is a dictionary mapping each event to its count; it is the most accurate approach and is suitable for streams with a limited number of event types.

Let us assume a unique item consists of 15 bytes and has a dedicated uint32 (4 bytes) counter assigned to it.

At 10 million unique items we end up using 190 MB, which is a bit much, but on the plus side it's as accurate as it gets.
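
To make the memory math concrete, here is a minimal sketch of that naive counter. It is written in Rust purely for illustration; the function name and event type are my own choices, not the post's:

use std::collections::HashMap;

// Naive exact counting: one u32 counter per unique event.
// With ~15-byte keys plus a 4-byte counter each, 10 million unique
// events need on the order of 190 MB for the raw entries alone.
fn count_exact<I: Iterator<Item = String>>(stream: I) -> HashMap<String, u32> {
    let mut counts: HashMap<String, u32> = HashMap::new();
    for event in stream {
        *counts.entry(event).or_insert(0) += 1;
    }
    counts
}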

But what if we don't have the 190 MB? Or what if we have to keep track of several streams?

Maybe saving to a DB? Well, when querying the DB upon request, we would run something along the lines of:

SELECT count(event) FROM events WHERE event = ?

The more items we add, the more resource intensive the query becomes.

Thankfully, solutions come in the form of probabilistic data structures (sketches).

I won't get into details but to solve this problem I semi-evaluated the following data structures:

  • Count-Min sketch (CMS) [2]
  • Count-Min-Log sketch (CML) [1][2]
  • Probabilistic Multiplicity Counting sketch (PMC) [1]

Test details:

For each sketch I added new flows, each with a linearly growing number of events: the first flow got 1 event inserted, the second flow got 2 events inserted, all the way up to the 10,000th flow with 10,000 events inserted.

flow 1: 1 event  
flow 2: 2 events  
...
flow 10000: 10000 events  

All three data structures were configured to have a size of 217KB (exactly 1739712 bits).

A couple dozen runs yielded the following results (based on my unoptimized code esp. for PMC and CML)

CMS: 07s for 50005000 insertions (fill rate: 31%)
CML: 42s for 50005000 insertions (fill rate: 09%)
PMC: 18s for 50005000 insertions (fill rate: 54%)

CMS with ɛ: 0.0001, δ: 0.99 (code)

Observe the biased estimation of CMS: CMS will never underestimate. In our case, looking at the top border of the diagram, we can see that there was a lot of overestimation.
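
To make the "never underestimates" property concrete, here is a rough, illustrative Count-Min sketch. This is my own minimal sketch in Rust with arbitrary hashing and sizes; the benchmarks in this post were written in Go, and real implementations differ:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Minimal Count-Min sketch: `depth` rows of `width` counters each.
struct CountMin {
    depth: usize,
    width: usize,
    counters: Vec<u32>, // depth * width counters, row-major
}

impl CountMin {
    fn new(depth: usize, width: usize) -> CountMin {
        CountMin { depth, width, counters: vec![0; depth * width] }
    }

    // Derive a per-row hash function by mixing the row index into the hash.
    fn index(&self, item: &str, row: usize) -> usize {
        let mut h = DefaultHasher::new();
        (row as u64).hash(&mut h);
        item.hash(&mut h);
        row * self.width + (h.finish() as usize % self.width)
    }

    // Each insertion increments one counter per row.
    fn add(&mut self, item: &str) {
        for row in 0..self.depth {
            let i = self.index(item, row);
            self.counters[i] = self.counters[i].saturating_add(1);
        }
    }

    // The estimate is the minimum over all rows, so collisions can only
    // inflate the result: CMS overestimates but never underestimates.
    fn estimate(&self, item: &str) -> u32 {
        (0..self.depth)
            .map(|row| self.counters[self.index(item, row)])
            .min()
            .unwrap_or(0)
    }
}

Because every query takes the minimum across rows, hash collisions can only push a count upwards, which is exactly the overestimation visible along the top border of the plot.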

CML with ɛ: 0.000025, δ: 0.99 (16-bit counters) (code)

Just like CMS, CML is biased and will never underestimate. However, unlike CMS, the top border of the diagram is less noisy. Yet accuracy seems to decrease for the high-count flows.

PMC with (256x32) virtual matrices (code)

Unlike the previous two sketches, PMC is unbiased, so underestimations exist. The error of the estimated flow count also grows with the actual flow count (linearly bigger errors). The drawback here is that PMC fills up very quickly, which means that at some point everything will be overestimated. It is recommended to know the maximum number of different flows beforehand.

Bringing it all together

So what do you think? If you are familiar with these algorithms or can propose a different benchmarking scenario, please comment; I might be able to work on it on a weekend. The code was all written in Go. Feel free to suggest optimizations or fix any bugs you find (links above the respective plots).

QMOAn open letter about Mozilla QA

Dear people of the web,

As some of you may already be aware, Mozilla has experienced a lot of change over the years. Most teams and projects within Mozilla have felt this change in some way, either directly or indirectly. The QA Team is no exception.

As a microcosm of the Mozilla Project, with people involved in many disparate projects, QA has changed course many times. To many of you, these changes may have passed by unnoticed. Perhaps you noticed something was different about QA but were not able to understand how or why things had changed. Perhaps it was a feeling that some of us seemed more distant, or that things just felt different.

This may come as a surprise to some, but there is no longer a single, unified QA team at Mozilla. After going through a few re-organizations, we are spread across the organization, embedded with — and reporting to — various product teams.

Those teams have benefited from having a dedicated QA person on staff full time. However, with so few of us to go around, many teams find themselves without any QA. In this state, we’ve lost the distinguished central QA organization that once was, and in doing so we’ve lost a central QA voice.

As a result of these changes and a sense of perpetual reorganization, we have reached a tipping point. We’ve lost some very talented and passionate people. Change in itself isn’t a bad thing; the loss of cohesion is. It is time to break this pattern, regain our cohesion, and regain our focus on the community.

The core group of QA community members, paid and volunteer, will soon be getting together to formulate a mission statement. We’ll do this with a series of one-on-one conversations between core individuals who are interested in architecting a new QA community. This will serve as the guiding light of our journey toward a more optimistic future together.

In recognition of those who might feel excluded from this process, we want to assure you that there will be opportunity to contribute very early on. Conducting these one-on-ones is just the first step in a very long journey. We plan to bring everyone along who wants to be here, but this process requires great care and it will take time. If you’d like to help us build the future, please get in touch with us.

Please read our wiki page to find out more about what we’re doing and where we’re going.

 

Sincerely,

Anthony Hughes and Matt Brandt

Matt ThompsonWhat we’re working on

What we got done:

What we’re doing next:

Next steps on the Mozilla Learning plan

Stuff we’re working on for the Sep 8 community call:

  • Leadership: focusing on a) audience tiers / ladder of engagement, and b) an analysis of our current offerings.
  • Advocacy: fleshing out our potential core activities / tactics. Plus mapping user journeys against them.
  • Impact: synthesizing the feedback we got on impact statements and milestones from the last community call.

Oh Hey — we’re launching the new Thimble!

https://thimble.mozilla.org

Manish GoregaokarDesigning a GC in Rust

For a while I’ve been working on a garbage collector for Rust with Michael Layzell. I thought this would be a good time to talk of our design and progress so far.

Motivation

“Wait”, you ask, “why does Rust need a garbage collector”? Rust is supposed to work without a GC, that’s one of its main selling points!

True. Rust does work pretty well without a GC. It’s managed to do without one so far, and we still have all sorts of well-written crates out there (none of which use a GC).

But Rust is not just about low-cost memory safety. It’s also about choosing your costs and guarantees. Box<T> and stack allocation are not always sufficient, sometimes one needs to reach for something like Rc<T> (reference counting). But even Rc is not perfect; it can’t handle cycles between pointers. There are solutions to that issue like using Weak<T>, but that only works in limited cases (when you know what the points-to graph looks like at compile time), and isn’t very ergonomic.

Cases where one needs to maintain a complicated, dynamic graph are where a GC becomes useful. Similarly, if one is writing an interpreter for a GCd language, having a GC in Rust would simplify things a lot.

Not to say that one should pervasively use a GC in Rust. Similar to Rc<T>, it’s best to use regular ownership-based memory management as much as possible, and sprinkle Rc/Gc in places where your code needs it.

Previous designs

This isn’t the first GC in Rust. Automatic memory management has existed before in various forms, but all were limited.

Besides the ones listed below, Nick Fitzgerald’s cycle collector based on this paper exists and is something that you should look into if you’re interested. There’s also an RFC by Peter Liniker which sketches out a design for an immutable GC.

Core Rust GC(s)

Rust itself had a garbage collector until a bit more than a year ago. These “managed pointers” (@T) were part of the language. They were removed later with a plan to make GC a library feature.

I believe these were basically reference counted (cycle collected?) pointers with some language integration, but I’m not sure.

Nowadays, the only form of automatic memory management in Rust is via Rc and Arc, which are non-atomic and atomic reference-counted pointers respectively. In other words, they keep track of the number of shared references via a reference count (incremented when the pointer is cloned, decremented when destructors run). If the reference count reaches zero, the contents are cleaned up.

This is a pretty useful abstraction, however, as mentioned above, it doesn’t let you create cycles without leaking them.

Spidermonkey

You can read more about Servo’s Spidermonkey bindings in this blog post (somewhat outdated, but still relevant)

In Servo we use bindings to the Spidermonkey Javascript engine. Since Javascript is a garbage collected language, the Rust representations of Javascript objects are also garbage collected.

Of course, this sort of GC isn’t really useful for generic use since it comes bundled with a JS runtime. However, the Rust side of the GC is of a design that could be used in an independent library.

The Rust side of the Spidermonkey GC is done through a bunch of smart pointers, and a trait called JSTraceable. JSTraceable is a trait which can “trace” recursively down some data, finding and marking all GC-managed objects inside it. This is autoderived using Rust’s plugin infrastructure, so a simple #[jstraceable] annotation will generate trace hooks for the struct it is on.

Now, we have various smart pointers. The first is JS<T>. This is opaque, but can be held by other GC-managed structs. To use this on the stack, this must be explicitly rooted, via .root(). This produces a Root<T>, which can be dereferenced to get the inner object. When the Root is created, the contained object is listed in a collection of “roots” in a global. A root indicates that the value is being used on the stack somewhere, and the GC starts tracing usage from these roots. When the Root<T> is destroyed, the root is removed.

The problem with this is that JS<T> doesn’t work on the stack. There is no way for the GC to know that we are holding on to JS<T> on the stack. So, if I copy a JS<T> to the stack, remove all references to it from objects in the GC heap, and trigger a collection, the JS<T> will still be around on the stack after collection since the GC can’t trace to it. If I attempt to root it, I may get a panic or a segfault depending on the implementation.

To protect against this, we have a bunch of lints. The relevant one here protects against JS<T> from being carried around on the stack; but like most lints, it’s not perfect.

To summarize: Spidermonkey gives us a good GC. However using it for a generic Rust program is ill advised. Additionally, Servo’s wrappers around the GC are cheap, but need lints for safety. While it would probably be possible to write safer wrappers for general usage, it’s pretty impractical to carry around a JS runtime when you don’t need one.

However, Spidermonkey’s GC did inspire me to think more into the matter.

Brainstorming a design

For quite a while I’d had various ideas about GCs. Most were simplifications of Servo’s wrappers (there’s some complexity brought in there by Spidermonkey that’s not necessary for a general GC). Most were tracing/rooting with mark-and-sweep collection. All of them used lints. Being rather busy, I didn’t really work on it past that, but planned to work on it if I could find someone to work with.

One day, Michael pinged me on IRC and asked me about GCs. Lots of people knew that I was interested in writing a GC for Rust, and one of them directed him to me when he expressed a similar interest.

So we started discussing GCs. We settled on a tracing mark-and-sweep GC. In other words, the GC runs regular “sweeps” where it first “traces” the usage of all objects and marks them and their children as used, and then sweeps up all unused objects.

This model on its own has a flaw. It doesn’t know about GC pointers held on the stack as local variables (“stack roots”). There are multiple methods for solving this. We’ve already seen one above in the Spidermonkey design – maintain two types of pointers (one for the stack, one for the heap), and try very hard using static analysis to ensure that they don’t cross over.

A common model (used by GCs like Boehm, called “conservative GCs”) is to do something called “stack scanning”. In such a system, the GC goes down the stack looking for things which may perhaps be GC pointers. Generally the GC allocates objects in known regions of the memory, so a GC pointer is any value on the stack which belongs to one of these regions.

Of course, this makes garbage collection rather inefficient, and will miss cases like Box<Gc<T>> where the GCd pointer is accessible, but through a non-GC pointer.

We decided rather early on that we didn’t want a GC based on lints or stack scanning. Both are rather suboptimal solutions in my opinion, and very hard to make sound[1]. We were also hoping that Rust’s type system and ownership semantics could help us in designing a good, safe API.

So, we needed a way to keep track of roots, and we needed a way to trace objects.

Tracing

The latter part was easy. We wrote a compiler plugin (well, we stole Servo’s tracing plugin which I’d written earlier) which autoderives an implementation of the Trace trait on any given struct or enum, using the same internal infrastructure that #[derive(PartialEq)] and the rest use. So, with just the following code, it’s easy to make a struct or enum gc-friendly:

#[derive(Trace)]
struct Foo {
    x: u8,
    y: Bar,
}

#[derive(Trace)]
enum Bar {
    Baz(u8), Quux
}

For a foo of type Foo, foo.trace() will expand to calls of foo.x.trace() and foo.y.trace(). bar.trace() will check which variant it is and call trace() on the u8 inside if it’s a Baz. For most structs this turns out to be a no-op and is often optimized away by inlining, but if a struct contains a Gc<T>, the special implementation of Trace for Gc<T> will “mark” the traceability of the Gc<T>. Types without Trace implemented cannot be used in types implementing Trace or in a Gc, which is enforced with a T: Trace bound on Gc<T>.
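
To make that concrete, here is a rough, hand-written sketch of what the derived implementations could expand to. The trait shown is a minimal stand-in of my own; rust-gc’s actual Trace trait has more methods and different details:

// Illustrative stand-in only: not rust-gc's real Trace trait.
trait Trace {
    fn trace(&self);
}

impl Trace for u8 {
    fn trace(&self) {} // primitives contain no Gc pointers: no-op
}

// Roughly what #[derive(Trace)] would generate for Foo and Bar above:
impl Trace for Foo {
    fn trace(&self) {
        self.x.trace();
        self.y.trace();
    }
}

impl Trace for Bar {
    fn trace(&self) {
        match *self {
            Bar::Baz(ref n) => n.trace(),
            Bar::Quux => {}
        }
    }
}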

So, we have a way of walking the fields of a given object and finding inner Gc<T>s. Splendid. This lets us write the mark&sweep phase easily: Take the list of known reachable Gc<T>s, walk their contents until you find more Gc<T>s (marking all you find), and clean up any which aren’t reachable.

Rooting

Of course, now we have to solve the problem of keeping track of the known reachable Gc<T>s, i.e. the roots. This is a hard problem to solve without language support, and I hope that eventually we might be able to get the language hooks necessary to solve it. LLVM has support for tracking GC things on the stack, and some day we may be able to leverage that in Rust.

As noted above, Spidermonkey’s solution was to have non-rooted (non-dereferencable) heap pointers, which can be explicitly converted to rooted pointers and then read.

We went the other way. All Gc<T> pointers, when created, are considered “rooted”. The instance of Gc<T> has a “rooted” bit set to true, and the underlying shared box (GcBox, though this is not a public interface) has its “root count” set to one.

When this Gc<T> is cloned, an identical Gc<T> (with rooted bit set to true) is returned, and the underlying root count is incremented. Cloning a Gc does not perform a deep copy.

let a = Gc::new(20); // a.root = true, (*a.ptr).roots = 1, (*a.ptr).data = 20

// ptr points to the underlying box, which contains the data as well as
// GC metadata like the root count. `Gc::new()` will allocate this box

let b = a.clone(); // b.root = true, (*a.ptr).roots++, b.ptr = a.ptr

This is rather similar to how Rc works; the difference is that Rc has no root field, and its equivalent of the roots counter is called a “reference count”.

For regular local sharing, it is recommended to just use a borrowed reference to the inner variable (borrowing works fine with rust-gc!) since there is no cost to creating this reference.

When a GC thing is put inside another GC thing, the first thing no longer can remain a root. This is handled by “unrooting” the first GC thing:

struct Foo {
    bar: u32,
    baz: Gc<u32>,
}

let a = Gc::new(20); // why anyone would want to GC an integer I'll never know
                     // but I'll stick with this example since it's simple

let b = Gc::new(Foo {bar: 1, baz: a});
// a.root = false, (*a.ptr).roots--
// b initialized similar to previous example

// `a` was moved into `b`, so now `a` cannot be accessed directly here
// other than through `b`, and `a` is no longer a root.
// To avoid moving a, passing `a.clone()` to `b` will work

Of course, we need a way to traverse the object passed to the Gc<T>, in this case Foo, and look for any contained Gc<T>s to unroot. Sound familiar? This needs the same mechanism that trace() needed! We add struct-walking root() and unroot() methods to the Trace trait which are auto-derived exactly the same way, and continue. (We don’t need root() right now, but we will need it later on.)
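
In terms of the illustrative trait sketched earlier, that just means two more struct-walking methods, derived the same way as trace() (again, the real rust-gc trait differs in its details):

// Still an illustrative stand-in, not rust-gc's actual definition.
trait Trace {
    fn trace(&self);  // mark contained Gc<T>s during collection
    fn root(&self);   // increment the root count of contained Gc<T>s
    fn unroot(&self); // decrement the root count of contained Gc<T>s
}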

Now, during collection, we can just traverse the list of GcBoxes and use the ones with a nonzero root count as roots for our mark traversal.
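
A very rough sketch of that collection pass follows. Every name and field here is invented for illustration (children are modelled as indices instead of traced pointers), so this is not rust-gc’s actual GcBox or collection code:

// Illustrative stand-in for the heap of GC boxes.
struct GcBox {
    roots: usize,         // nonzero => some rooted Gc<T> points here
    marked: bool,         // set during the mark phase
    children: Vec<usize>, // indices of boxes reachable from this one
}

fn collect(heap: &mut Vec<GcBox>) {
    // Mark phase: start from every box with a nonzero root count and
    // mark everything transitively reachable from it.
    let mut stack: Vec<usize> = (0..heap.len())
        .filter(|&i| heap[i].roots > 0)
        .collect();
    while let Some(i) = stack.pop() {
        if !heap[i].marked {
            heap[i].marked = true;
            stack.extend(heap[i].children.iter().cloned());
        }
    }
    // Sweep phase: drop everything left unmarked, then reset the marks.
    // (Index-based retention only works in this toy model; a real
    // collector frees the unreachable boxes instead.)
    heap.retain(|b| b.marked);
    for b in heap.iter_mut() {
        b.marked = false;
    }
}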

So far, so good. We have a pretty sound design for a GC that works … for immutable data.

Mutability

Like Rc<T>, Gc<T> is by default immutable. Rust abhors aliasable mutability, even in single threaded contexts, and both these smart pointers allow aliasing.

Mutation poses a problem for our GC, beyond the regular problems of aliasable mutability: It’s possible to move rooted things into heap objects and vice versa:

let x = Gc::new(20);

let y = Gc::new(None);

*y = Some(x); // uh oh, x is still considered rooted!

// and the reverse!

let y = Gc::new(Some(Gc::new(20)));

let x = y.take(); // x was never rooted!
// `take()` moves the `Some(Gc<u32>)` out of `y`, replaces it with `None`       

Since Gc<T> doesn’t implement DerefMut, none of this is possible — one cannot mutate the inner data. This is one of the places where Rust’s ownership/mutability system works out awesomely in our favor.

Of course, an immutable GC isn’t very useful. We can’t even create cycles in an immutable GC, so why would anyone need this in the first place[2]?

So of course, we needed to make it somehow mutable. People using Rc<T> solve this problem by using RefCell<T>, which maintains something similar to the borrow semantics at runtime and is internally mutable. RefCell<T> itself can’t be used by us since it doesn’t guard against the problem illustrated above (and hence won’t implement Trace, but a similar cell type would work).

So we created GcCell<T>. This behaves just like RefCell<T>, except that it will root() before beginning a mutable borrow, and unroot() before ending it (well, only if it itself is not rooted, which is tracked by an internal field similar to Gc<T>). Now, everything is safe:

#[derive(Trace)]
struct Foo {
    a: u8,
    b: GcCell<Gc<u8>>,
}

let x = Gc::new(20);

let y = Gc::new(Foo {a: 10, b: GcCell::new(Gc::new(30))});
{
    *y.b.borrow_mut() = x; // the `Gc(30)` from `y.b` was rooted by this call
                           // but since we don't actually use it here,
                           // the destructor gets rid of it.
                           // We could use swap() to retain access to it.
    // ...
    // x unrooted
}


// and the reverse case works too:

let y = Gc::new(GcCell::new(Some(Gc::new(20))));

let x = y.borrow_mut().take(); // the inner `Some(Gc(20))` gets rooted by `borrow_mut()`
                               // before `x` can access it

So now, mutation works too! We have a working garbage collector!

Open problems

Destructors

I believe this can be solved without lints, but it may require some upcoming features of Rust to be implemented first (like specialization).

In essence, destructors implemented on a value inside Gc<T> can be unsafe. This will only happen if they try to access values within a Gc<T> — if they do, they may come across a box that has already been collected, or they may lengthen the lifetime of a box scheduled to be collected.

The basic solution to this is to use “finalizers” instead of destructors. Finalizers, like in Java, are not guaranteed to run. However, we may need further drop hooks or trait specialization to make an airtight interface for this. I don’t have a concrete design for this yet, though.

Concurrency

Our model mostly just works in a concurrent situation (with thread safety tweaks, of course); in fact it’s possible to make it so that the concurrent GC will not “stop the world” unless someone tries to do a write to a GcCell. We have an experimental concurrent GC in this pull request. We still need to figure out how to make interop between both GCs safe, though we may just end up making them such that an object using one GC cannot be fed to an object using the other.

Performance

So far we haven’t really focused on performance; we’ve concentrated on ensuring safety. Our collection triggering algorithm, for example, was horribly inefficient. We had planned on improving it, and the wonderful Huon has since fixed it.

Similarly, we haven’t yet optimized storage. We have some ideas which we may work on later. (If you want to help, contributions welcome!)

Cross-crate deriving

Currently, an object deriving Trace should have Traceable children. This isn’t always possible when members from another crate (which does not depend on rust-gc) are involved. At the moment, we allow an #[unsafe_ignore_trace] annotation on fields of such types, which excludes them from being traced (if that crate doesn’t transitively depend on rust-gc, its members cannot contain GC things anyway, unless generics are involved). It should be possible to detect whether or not this is safe, and/or to autoderive Trace using the opt-in builtin traits framework (which needs specialization to work), but at the moment we don’t do anything other than expose that annotation.

Stdlib support for a global Trace trait that everyone derives would be awesome.

Conclusion

Designing a GC was a wonderful experience! I didn’t get to write much code (I was busy and Michael was able to implement most of it overnight because he’s totally awesome), but the long design discussions followed by trying to figure out holes in the GC design in every idle moment of the day were quite enjoyable. GCs are very hard to get right, but it’s very satisfying when you come up with a design that works! I’m also quite happy at how well Rust helped in making a safe interface.

I encourage everyone to try it out and/or find holes in our design. Contributions of all kind welcome, we’d especially love performance improvements and testcases.

Discuss: HN, Reddit


  1. I’m very skeptical that it’s possible to make either of these completely sound without writing lints which effectively rewrite a large chunk of the compiler

  2. There is a case to be made for an immutable GC which allows some form of deferred initialization of GC fields, however.

David HumphreyIntroducing a New Thimble and Bramble

Introduction

This week we're shipping something really cool with Mozilla, and I wanted to pause and tell you about what it is, and how it works.

The tl;dr is that we took the Mozilla Foundation's existing web code editor, Thimble, and rewrote it to use Bramble, our forked version of the Brackets editor, which runs in modern web browsers. You can try it now at https://thimble.mozilla.org/

If you're the type who prefers animated pictures to words, I made you a bunch over on the wiki, showing what a few of the features look like in action. You can also check out Luke's great intro video.

If you're the type who likes words, the rest of this is for you.

Why?

I started working on this project two years ago. While at MozFest 2013 I wrote about an idea I had for a new concept app that merged Thimble and Brackets; at the time I called it Nimble.

I was interested in merging these two apps for a number of reasons. First, I wanted to eliminate the "ceiling" users had when using Thimble, wherein they would graduate beyond its abilities, and be forced to use other tools. In my view, Thimble should be able to grow and expand along with a learner's abilities, and a teacher's needs.

Second, people were asking for lots of new features in Thimble, and I knew from experience that the best code is code you don't have to write. I wanted to leverage the hard work of an existing community that was already focused on building a great web coding platform. Writing a coding environment is a huge challenge, and our team wasn't equipped to take it on by ourselves. Thankfully the Brackets project had already solved this.

On Brackets

Brackets was an easy codebase to get started on, and the community was encouraging and willing to help us with patches, reviews, and questions (I'm especially thankful for @randyedmunds and @busykai).

Brackets is written in an AMD module system, and uses requirejs, react, CodeMirror, LESS, jQuery, Bootstrap, lodash, acorn, tern, etc. One of the things I've loved most about working with the Brackets source is that it uses so much of the best of the open web. Its ~1.3 million lines of code offer APIs for things like:

  • code hinting, static analysis, and linting
  • language parsing and tokenizing (html, css, js, xml, less)
  • file system operations
  • editors
  • live DOM diff'ing and in-browser preview
  • swappable servers
  • layout, widgets, and dialogs
  • localization, theming, and preferences
  • extension loading at runtime, with hundreds already written

In short, Brackets isn't an editor so much as a rich platform for coding and designing front-end web pages and apps. Brackets' killer feature is its ability to render a live preview of what's in your editor, including dynamic updates as you type, often without needing to save. The preview even has an awareness of changes to linked files (e.g., external stylesheets and scripts).

Another thing I loved was that Brackets wasn't trying to solve code editing in general: they had a very clear mandate that favoured web development, and front-end web development in particular. HTML, CSS, and JavaScript get elevated status in Brackets, and don't have to fight with every other language for features.

All of these philosophies and features melded perfectly with our goal of making a great learning and teaching tool for web programming.

But what about X?

Obviously there are a ton of code editing tools available. If we start with desktop editors, there are a lot to choose from; but they all suffer from the same problem: you have to download 10s of megs of installer, and then you have to install them, along with a web server, in order to preview your work. Consider what's involved in installing each of these (on OS X):

Thimble, on the other hand, is ~1M (877K for Bramble, the rest for the front-end app). We worked extremely hard to get Brackets (38.5M if you install it) down to something that fits in the size of an average web page. If we changed how Brackets loads more significantly, we could get it smaller yet, but we've chosen to keep existing extensions working. The best part is that there is no install: the level of commitment for a user is the URL.

In addition to desktop editors, there are plenty of popular online options, too:

The list goes on. They are all great, and I use and recommend them all. Each of these tools has a particular focus, and none of them does exactly what the new Thimble does; specifically, none of them tries to deal with trees of files and folders. We don't need to do what these other tools do, because they already do it well. Instead, we focused on making it possible for users to create a rich and realistic environment for working with arbitrary web site/app structures without needing to install and run a web server.

From localhost to nohost

I've always been inspired by @jswalden's httpd.js. It was written back before there was node.js, back in a time when it wasn't yet common knowledge that you could do anything in JS. The very first time I saw it I knew that I wanted to find some excuse to make a web server in the browser. With nohost, our in-browser web server, we've done it.

In order to run in a browser, Bramble has to be more than just a code editor; it also has to include a bunch of stuff that would normally be provided by the Brackets Shell (similar to Electron.io) and node.js. This means providing a:

  • web server
  • web browser
  • filesystem

and glue to connect those three. Brackets uses Chrome's remote debugging protocol and node.js to talk between the editor, browser, and server. This works well, but ties it directly to Chrome.

At first I wasn't sure how we'd deal with this. But then an experimental implementation of the Brackets LiveDevelopment code landed, which switched away from using Chrome and the remote dev tools protocol to any browser and a WebSocket. Then, in the middle of the docs, we found an offhand comment that someone could probably rewrite it to use an iframe and postMessage...a fantastic idea! So we did.

Making it possible for an arbitrary web site to work in a browser-based environment is a little like Firefox's Save Page... feature. You can't just deal with the HTML alone--you also have to get all the linked assets.

Consider an example web page:

<!DOCTYPE html>  
<html>  
  <head>
   <meta charset="utf-8">
    <title>Example Page</title>
    <link rel="stylesheet"
          href="styles/style.css">
  </head>
  <body>
    <img src="images/cat.png">
    <script src="script.js"></script>
    <script>
      // Call function f in script.js
      f();
    </script>
  </body>
</html>  

In this basic web page we have three external resources referenced by URL. The browser needs to be able to request styles/style.css, images/cat.png, and script.js in order to fully render this page. And we're not done yet.

The stylesheet might also reference other stylesheets using @import, or might use other images (e.g., background-image: url(...)).

It gets worse. The script might need to XHR a JSON file from the server in order to do whatever f() requires.

Bramble tries hard to deal with these situations through a combination of static and dynamic rewriting of the URLs. Eventually, if/when all browsers ship it, we could do a lot of this with ServiceWorkers. Until then, we made do with what we already have cross browser.

First, Bramble's nohost server recursively rewrites the HTML, and its linked resources, in order to find relative filesystem paths (images/cat.png) and replace them with Blobs and URL objects that point to cached memory resources read out of the browser filesystem.

Parsing HTML with regex is a non-starter. Luckily browsers have a full parser built in, DOMParser. Once we have an in-memory DOM instead of an HTML text string, we can accurately querySelectorAll to find things that might contain URLs (img, link, video, iframe, etc., avoiding a tags due to circular references) and swap those for generated Blob URLs from the filesystem. When we're done, we can extract rewritten HTML text from our live in-memory DOM via documentElement.outerHTML, obtaining something like this:

<!DOCTYPE html>  
<html>  
  <head>
   <meta charset="utf-8">
    <title>Example Page</title>
    <link rel="stylesheet"
          href="blob:https%3A//mozillathimblelivepreview.net/346526f5-3c14-4073-b667-997324a5bfa9">
  </head>
  <body>
    <img src="blob:https%3A//mozillathimblelivepreview.net/ab090911-9ec1-499c-a9fc-7fce180704f7">
    <script src="blob:https%3A//mozillathimblelivepreview.net/264a3524-5316-47e5-a835-451e78247678"></script>
    <script>
      // Call function f in script.js
      f();
    </script>
  </body>
</html>  

All external resources now use URLs to cached memory resources. This HTML can then be itself turned into a Blob and URL object, and used as the src for our iframe browser (this works everywhere except IE, where you have to document.write the HTML, but can use Blob URLs for everything else).

For CSS we do use regex, looking for url(...) and other places where URLs can lurk. Thankfully there aren't a lot, and it's just a matter of reading the necessary resources from disk, caching to a Blob URL, and replacing the filesystem paths for URLs, before generating a CSS Blob URL that can be used in the HTML.

Despite what everyone tells you about the DOM being slow, the process is really fast. And because we own the filesystem layer, whenever the editor does something like a writeFile(), we can pre-generate a URL for the resource, and maintain a cache of such URLs keyed on filesystem paths for when we need to get them again in the future during a rewrite step. Using this cache we are able to live refresh the browser quite often without causing any noticeable slowdown on the main thread.

As an aside, it would be so nice if we could move the whole thing to a worker and be able to send an HTML string, and get back a URL. Workers can already access IndexedDB, so we could read from the filesystem there, too. This would mean having access to DOMParser (even if we can't touch the main DOM from a worker, being able to parse HTML is still incredibly useful for rewriting, diff'ing, etc).

Finally, we do dynamic substitutions of relative paths for generated Blob URLs at runtime by hijacking XMLHttpRequest and using our postMessage link from the iframe to the editor in order to return response data for a given filename.

And it all works! Sure, there's lots of things we won't ever be able to cope with, from synchronous XHR to various types of DOM manipulation by scripts that reference URLs as strings. But for the general case, it works remarkably well. Try downloading and dragging a zipped web site template from http://html5up.net/ into the editor. Bramble doesn't claim to be able to replace a full, local development environment for every use case; however, it makes it unnecessary in most common cases. It's amazing what the modern web can do via storage, file, drag-and-drop, parser, and worker APIs.

Origin Sandboxing

I talk about Thimble and Bramble as different things, and they are, especially at runtime. Bramble is an embeddable widget with an iframe API, and Thimble hosts it and provides some UI for common operations.

I've put a simple demo of the Bramble API online for people to try (source is here). Bramble uses, but doesn't own, its filesystem; nor does it have any notion of where the files came from or where they are going. It also doesn't have opinions about how the filesystem should be laid out.

This is all done intentionally so that we can isolate the editor and preview from the hosting app, running each on a different domain. We want users to be able to write arbitrary code, execute and store it; but we don't want to mix code for the hosting app and the editor/preview. The hosting app needs to decide on a filesystem layout, get and write the files, and then "boot" Bramble.

I've written previously about how we use MessageChannel to remotely host an IndexedDB backed filesystem in a remote window running on another domain: Thimble owns the filesystem and database and responds to proxied requests to do things via postMessage.

In the case of Thimble, we store data in a Heroku app using postgres on the server. Thimble listens for filesystem events, and then queues and executes file update requests over the network to sync the data upstream. Published projects are written to S3, and we then serve them on a secure domain. Because users can upload files to their filesystem in the editor, it makes it easier to transition to an https:// only web.

When the user starts Thimble, we request a project as a gzipped tarball from the publishing server, then unpack it in a Worker and recreate the filesystem locally. Bramble then "mounts" this local folder and begins working with the local files and folders, with no knowledge of the servers (all data is autosaved, and survives refreshes).

Conclusion

Now that we've got the major pieces in place, I'm interested to see what people will do with both Thimble and Bramble. Because we're in a full browser vs. an "almost-browser" shell, we have access to all the latest toys (for example, WebRTC and the camera). Down the road we could use this for some amazing pair programming setups, so learners and mentors could work with each other directly over the web on the same project.

We can also do interesting things with different storage providers. It would be just as easy to have Bramble talk to Github, Dropbox, or some other cloud storage provider. We intentionally kept Thimble and Bramble separate in order to allow different directions in the future.

Then there's all the possibilities that custom extensions opens up (did I mention that Bramble has dynamic extension loading? because it does!). I'd love to see us use bundles of extensions to enable different sorts of learning activities, student levels, and instructional modes. I'm also really excited to see what kind of new curriculum people will build using all of this.

In the meantime, please try things out, file bugs, chat with us on irc #thimble on moznet and have fun making something cool with just your browser. Even better, teach someone how to do it.

Let me close by giving a big shout out to the amazing students (current and former) who hacked on this with me. You should hire them: Gideon Thomas, Kieran Sedgwick, Kenny Nguyen, Jordan Theriault, Andrew Benner, Klever Loza Vega, Ali Al Dallal, Yoav Gurevich, as well as the following top notch Mozilla folks, who have been amazing to us: Hannah Kane, Luke Pacholski, Pomax, Cassie McDaniel, Ashley Williams, Jon Buckley, and others.

Air MozillaMozilla Weekly Project Meeting

Mozilla Weekly Project Meeting: The Monday Project Meeting

This Week In RustThis Week in Rust 94

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

From the Blogosphere

New Releases & Project Updates

  • Rust web framework comparison. A comparison of some web frameworks written in Rust.
  • eco. A tool for reasoning about breaking changes in Rust ecosystems.
  • mioco. Scalable, asynchronous IO coroutine-based handling for Rust (aka MIO COroutines).
  • cargo-vendor. Cargo subcommand to vendor crates.io dependencies.
  • regex-dfa. Proof of concept for fast regexes in Rust (a regex -> DFA compiler).
  • rust-gc. Simple tracing (mark and sweep) garbage collector for Rust.
  • torch. Generate CPU FlameGraphs based on DWARF Debug Info.
  • libwebpki. Web PKI Certificate Validation in Rust.
  • sokoban-rs. An implementation of Sokoban in Rust.
  • urdict. Command line client for Urban Dictionary.
  • power-assert-rs. Power Assert in Rust. Provides better assertion message.
  • rust-passert. Pretty/Power assertion macros for Rust.
  • colerr. Wrap a given process and colorize its standard error output.
  • minesweeper-rs. Simple minesweeper in Rust.

What's cooking on nightly?

104 pull requests were merged in the last week.

New Contributors

  • Adam Crume
  • Artem Shitov
  • Cesar Eduardo Barros
  • Daan Rijks
  • Jake Shadle
  • Matěj Grabovský
  • Michael Choate
  • Nikolay Kondratyev
  • Overmind JIANG
  • Tim JIANG

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week. Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

"And God said, Noah you must transport these animals across a large body of water... but they are not Send. And Noah replied, I shall build a great Arc!"durka42 on #rust

Thanks to tomprogrammer for the tip. Submit your quotes for next week!

Wladimir PalantWhy you probably want to disable jQuery.parseHTML even though you don't call it

TL;DR: jQuery.parseHTML is a security hazard and will be called implicitly in a number of obvious and not so obvious situations.

Why should you care?

Hey, jQuery is great! It’s so great that Stack Overflow users will recommend it no matter what your question is. And now they have two problems. Just kidding, they will have the incredible power of jQuery:

$("#list").append('<li title="' + item.info + '">' + item.name + '</li>');

The above is locating a list in the document, creating a new list item with dynamic content and adding it to the list — all that in a single line that will still stay below the 80 columns limit. And we didn’t even lose readability in the process.

Life is great until some fool comes along and mumbles “security” (yeah, that’s me). Can you tell whether the code above is safe to be used in a web application? Right, it depends on the context. Passing HTML code to jQuery.append will use the infamous innerHTML property implicitly. If you aren’t careful with the HTML code you are passing there, this line might easily turn into a Cross-Site Scripting (XSS) vulnerability.

Does item.name or item.info contain data from untrusted sources? Answering that question might be complicated. You need to trace the data back to its source, decide who should be trusted (admin user? localizer?) and make sure you didn’t forget any code paths. And even if you do all that, some other developer (or maybe even yourself a few months from now) might come along and add another code path where item.name is no longer trusted. Do you want to bet on this person realizing that they are making an entirely different piece of code insecure?

It’s generally better to give jQuery structured data and avoid taking any chances. The secure equivalent of the code above would be:

$("#list").append($("<li>", {title: item.info}).text(item.name));

Not quite as elegant any more but now jQuery will take care of producing a correct HTML structure and you don’t need to worry about that.

Wait, there is more!

There is one remarkable thing about jQuery APIs: each function can take all kinds of parameters. For example, the .append() function we used above can take a DOM element, a CSS selector, HTML code or a function returning any of the above. This keeps function names short, and you only need to remember one function name instead of four.

The side effect is however: even if you are not giving jQuery any HTML code, you still have to keep in mind that the function could accept HTML code. Consider the following code for example:

$(tagname + " > .temporary").remove();

This will look for elements of class temporary within a given tag and remove them, right? Except that the content of tagname better be trusted here. What will happen if an attacker manages to set the value of tagname to "<img src='dummy' onerror='alert(/xss/)'>"? You probably guessed it, the “selector” will be interpreted as HTML code and will execute arbitrary JavaScript code.

There are more than a dozen jQuery functions that will happily accept both selectors and HTML code. Starting with jQuery 1.9.0, security issues here got somewhat less likely: the string has to start with < in order to be interpreted as HTML code. Older versions will accept anything as HTML code as long as it doesn’t contain #, and the versions before jQuery 1.6.1 didn’t even have that restriction.

To sum up: you had better use jQuery 1.9.0 or above, otherwise your dynamically generated selector might easily end up being interpreted as an HTML string. And even with recent jQuery versions you should be careful with dynamic selectors; the first part of the selector should always be a static string to avoid security issues.

Defusing jQuery

With almost all of the core jQuery functionality potentially problematic, evaluating security of jQuery-based code is tricky. Ideally, one would simply disable unsafe functionality so that parsing HTML code by accident would no longer be possible. Unfortunately, there doesn’t seem to be a supported way yet. The approach I describe here seems to work in the current jQuery versions (jQuery 1.11.3 and jQuery 2.1.4) but might not prevent all potential issues in older or future jQuery releases. Use at your own risk! Oh, and feel free to nag jQuery developers into providing supported functionality for this.

There is a comment in the source code indicating that jQuery.parseHTML function being missing is an expected situation. However, removing this function doesn’t resolve all the issues, and it disables safe functionality as well. Removing jQuery.buildFragment on the other hand doesn’t seem to have any downsides:

delete jQuery.buildFragment;

// Safe element creation still works
$('<img>', {src: "dummy"});

// Explicitly assigning or loading HTML code for an element works
$(document.body).html('<img src="dummy">');
$(document.body).load(url);

// These will throw an exception however
$('<img src="dummy">');
$(document.body).append('<img src="dummy">');
$.parseHTML('<img src="dummy">');

Of course, you have to adjust all your code first before you disable this part of the jQuery functionality. And even then you might have jQuery plugins that will stop working with this change. There are some code paths in the jQuery UI library for example that rely on parsing non-trivial HTML code. So this approach might not work for you.

But how do I create larger DOM structures?

The example creating a single list item is nice of course but what if you have to create some complicated structure? Doing this via dozens of nested function calls is impractical and will result in unreadable code.

One approach would be placing this structure in your HTML document, albeit hidden. Then you would need to merely clone it and fill in the data:

<style type="text/css">
  #entryTemplate { display: none; }
</style>

<div id="entryTemplate">
  <div class="title"></div>
  <div class="description"></div>
</div>

<script>
  var template = $("#entryTemplate");
  var entry = template.clone().removeAttr("id");
  entry.find(".title").text(item.title);
  entry.find(".description").text(item.description);
  $(document.body).append(entry);
</script>

Other templating approaches for JavaScript exist as well of course. It doesn’t matter which one you use as long as you don’t generate HTML code on the fly.

Christian HeilmannQuickie: Fading in a newly created element using CSS

Update: I got an email from James at VIDesignz who found another solution to this problem using the :empty selector. I added it at the end of the article.

As part of our JSFoo workshop today I was asked to look into an issue a team had: you cannot apply a CSS transition to a newly created element when you change its CSS settings in JavaScript. As I was dealing with professionals, they created a simple JSFiddle to show the problem:

As you can see, just changing the property (in this case the opacity) is not enough to trigger a transition. There are a few solutions to this shown in the Fiddle, too, like forcing a reflow, which of course could be a terrible idea.

I played with this and found the solution to be to not change the properties in JavaScript (which is kind of dirty anyways) but leave it all to CSS instead. The biggest part of the solution is not to use a transition but an animation instead and trigger it by applying a class to the newly created element right after adding it to the DOM:

Update: As mentioned by Kyle Simpson on Twitter, there is a way to do the same with a transition, but you need to wrap the creation and applying the class into requestAnimationFrame calls which in turn means some polyfilling:

Update2 You can also use the :empty selector in CSS to achieve the same when you add the new element as a child:

Aaron KlotzOn WebExtensions

There has been enough that has been said over the past week about WebExtensions that I wasn’t sure if I wanted to write this post. As usual, I can’t seem to help myself. Note the usual disclaimer that this is my personal opinion. Further note that I have no involvement with WebExtensions at this time, so I write this from the point of view of an observer.

API? What API?

I shall begin with the proposition that the legacy, non-jetpack environment for addons is not an API. As ridiculous as some readers might consider this to be, please humour me for a moment.

Let us go back to the acronym, “API.” Application Programming Interface. While the usage of the term “API” seems to have expanded over the years to encompass just about any type of interface whatsoever, I’d like to explore the first letter of that acronym: Application.

An Application Programming Interface is a specific type of interface that is exposed for the purposes of building applications. It typically provides a formal abstraction layer that isolates applications from the implementation details behind the lower tier(s) in the software stack. In the case of web browsers, I suggest that there are two distinct types of applications: web content, and extensions.

There is obviously a very well defined API for web content. On the other hand, I would argue that Gecko’s legacy addon environment is not an API at all! From the point of view of an extension, there is no abstraction, limited formality, and not necessarily an intention to be used by applications.

An extension is imported into Firefox with full privileges and can access whatever it wants. Does it have access to interfaces? Yes, but are those interfaces intended for applications? Some are, but many are not. The environment that Gecko currently provides for legacy addons is analogous to an operating system running every single application in kernel mode. Is that powerful? Absolutely! Is that the best thing to do for maintainability and robustness? Absolutely not!

Somewhere a line needs to be drawn to demarcate this abstraction layer and improve Gecko developers’ ability to make improvements under the hood. Last week’s announcement was an invitation to addon developers to help shape that future. Please participate and please do so constructively!

WebExtensions are not Chrome Extensions

When I first heard rumors about WebExtensions in Whistler, my source made it very clear to me that the WebExtensions initiative is not about making Chrome extensions run in Firefox. In fact, I am quite disappointed with some of the press coverage that seems to completely miss this point.

Yes, WebExtensions will be implementing some APIs to be source compatible with Chrome. That makes it easier to port a Chrome extension, but porting will still be necessary. I like the Venn Diagram concept that the WebExtensions FAQ uses: Some Chrome APIs will not be available in WebExtensions. On the other hand, WebExtensions will be providing APIs above and beyond the Chrome API set that will maintain Firefox’s legacy of extensibility.

Please try not to think of this project as Mozilla taking functionality away. In general I think it is safe to think of this as an opportunity to move that same functionality to a mechanism that is more formal and abstract.

François MarierLetting someone ssh into your laptop using Pagekite

In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop and set up a pagekite frontend on my Linode server and a pagekite backend on my laptop.

Frontend setup

Setting up my Linode server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward.

First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following to both /etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules:

-A INPUT -p tcp --dport 10022 -j ACCEPT

Then I created a new CNAME for my server in DNS:

pagekite.fmarier.org.   3600    IN  CNAME   fmarier.org.

With that in place, I started the pagekite frontend using this command:

pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1

Backend setup

After installing the pagekite and openssh-server packages on my laptop and creating a new user account:

adduser roc

I used this command to connect my laptop to the pagekite frontend:

pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1

Client setup

Finally, my colleague needed to add the following entry to ~/.ssh/config:

Host pagekite.fmarier.org
  CheckHostIP no
  ProxyCommand /bin/nc -X connect -x %h:10022 %h %p

and install the netcat-openbsd package since other versions of netcat don't work.

On Fedora, we used netcat-openbsd-1.89 successfully, but this newer package may also work.

He was then able to ssh into my laptop via ssh roc@pagekite.fmarier.org.

Making settings permanent

I was quite happy setting things up temporarily on the command-line, but it's also possible to persist these settings and to make both the pagekite frontend and backend start up automatically at boot. See the documentation for how to do this on Debian and Fedora.

Cameron Kaiser38.2.1 is available

TenFourFox 38.2.1 is available (release notes, hashes, downloads). Due to the fact this is a chemspill and we're already delayed, it will become live by this evening. Other than the Mozilla fixes, issue 306 is also repaired. Further work on 38's MP3 support is being deferred until the replacement hard disk arrives (should be early next week).

Don't forget to test 38.2.1 with incremental GC disabled. See the previous post. Enjoy our new sexy wiki, too. Sexy. Yes.

Emma IrwinParticipation Leadership Framework 0.1

In the last heartbeat, as part of our Q3 goals for leadership development, I interviewed a diverse set of people across Mozilla, asking what they think the skills, knowledge and attitudes of effective Participation Leadership at Mozilla are. Two things really stood out during this process. The first was how many people (staff, contributors and alumni) are truly, truly dedicated to the success of each other and Mozilla’s mission, which was really inspiring and helped inform the quality of this Framework. The second was how many opportunities and resources for leadership development already exist (or are being created) that, if bundled together with more specifically targeted curriculum and focused outcomes, will provide powerful learning-by-participating experiences.

This Heartbeat iterated on themes that emerged during those interviews. I thank those who provided feedback on Discourse and in GitHub, all of which brought us to this first 0.1 version.


Foundations of Participation Leadership are the core skills, knowledge and attitudes that lead to success on both personal goals and goals for Participation at Mozilla.

Building Blocks of Participation Leadership are units of learning that together provide a whole vision for leadership, but individually build the skills, attitudes and knowledge that inform specific learning outcomes as needed.

Examples of skills, attitudes and knowledge for each:

Personal Leadership

  • Accountability
  • Decision Making
  • Introspective, Authentic Leadership
  • “My Leadership Identity at Mozilla”

Essential Mozilla

  • Mozilla’s Timeline & History
  • Advocacy
  • Mozilla’s Mission
  • “Why Mozilla, Why Now, Why Me?”

Building for Action and Impact

  • Community Building
  • Agile for Participation Projects
  • Designing with Participation Hooks & Triggers
  • Building Bridges to Mozilla

Empowering Teams and People

  • Uncovering Unconscious Bias
  • Mentoring & Finding Mentorship
  • Teach & Facilitate Like Mozilla
  • Distributed Leadership

Working Open

  • Open Practices
  • Writing in the Open
  • Sharing & Licensing
  • Activism in the Open

Developing Specialization

  • Creating Automated Tests for Firefox OS
  • Inviting Skilled Participation to Fennec
  • Web Literacy Leadership

We would love your comments, suggestions and ideas on where we are so far. In the next heartbeat we’ll begin building and running workshops with these as a guide, and further iterating towards 1.0.

 


Image Credit: Lead Type by jm3

Stuart ColvillePython: testing beyond exceptions

While recently working on some code, I was writing some basic tests that checked for an exception being raised.

The tests looked like this:

def test_user_identification_is_valid_option(self):
    with self.assertRaises(ValueError):
        example_seller(products=[{
            'id': 'hai',
            'description': 'a description',
            'recurrence': None,
            'user_identification': True,
        }])


def test_user_identification_is_required(self):
    with self.assertRaises(ValueError):
        example_seller(products=[{
            'id': 'hai',
            'description': 'a description',
            'recurrence': None,
        }])

I realised that it was really easy to be tricked by other parts of the code that might raise the same exception. For example, in this case the code that checks for required fields raises a ValueError separately.

In JavaScript, using chai's assertions, assert.throws takes a function that should throw an exception; the second argument is the exception you're expecting, and the third argument can be a regex to match the string passed to the exception.
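For reference, a small sketch of that chai pattern (the function and messages here are made up for illustration):

var assert = require('chai').assert;

// a stand-in for the code under test
function createSeller(product) {
  if (!('user_identification' in product)) {
    throw new Error('Missing user_identification');
  }
}

assert.throws(
  function () { createSeller({ id: 'hai' }); },
  Error,                // the expected error type
  /Missing user_ident/  // a regex matched against the error message
);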

I wanted to do the same thing in Python.

assertRaisesRegexp

In Python (2.7+) assertRaisesRegexp does exactly what we want so we can fix our tests to be more explicit like so:

def test_user_identification_is_valid_option(self):
    with self.assertRaisesRegexp(ValueError, 'user_identification must'):
        example_seller(products=[{
            'id': 'hai',
            'description': 'a description',
            'recurrence': None,
            'user_identification': True,
        }])


def test_user_identification_is_required(self):
    with self.assertRaisesRegexp(ValueError, 'Missing user_ident'):
        example_seller(products=[{
            'id': 'hai',
            'description': 'a description',
            'recurrence': None,
        }])

The second argument can be either a regular expression object or a string containing a regular expression suitable for use by re.search().

With this in place we can be sure our tests are asserting on the actual part of the code that raises the exception, not just asserting that any old ValueError is good enough.

Mozilla Addons BlogAMO T-shirt Update

Just want to give a quick update on the snazzy t-shirts designed by Erick León Bolinaga. They were finally done printing this week, and are headed to a fulfillment center for shipping. We expect them to begin shipping by the end of next week.

Thanks for your patience!

Air MozillaWebmaker Demos August 28 2015

Webmaker Demos August 28 2015 Webmaker Demos August 28 2015

Dan MinorMozReview Montréal Work Week


Under the watchful gaze (and gentle cooing) of the pigeons, the MozReview developers gathered in Montréal for a work week. My main goal for the week was to make substantial progress towards Autoland to Inbound, my primary project for this quarter, maybe even a deployment of an initial iteration to the development server.

While we didn’t get quite that far, we did make a lot of progress on a lot of fronts, including finally getting Bugzilla API key support deployed. This is the first work week I’ve done where we just stayed and worked together in an Airbnb apartment rather than getting hotel rooms and making use of shared space in a Mozilla office. I really enjoyed this; it was a nice casual work environment and we got a lot of focused work done.

Some things I worked on this week, in varying degrees of completion:

  • Bug 1198086 This adds an endpoint for making Autoland to “non-Try” tree requests which will allow us to build the UI for Autoland to Inbound. A while back I fixed Bug 1183295 which added support for non-Try destinations in the Autoland service itself. This means, outside of bug fixes, the backend support for Autoland to Inbound is implemented and we can focus on the UI.

  • Bug 1196263 is my other main project this quarter. We want to add a library which enables people to write their own static analysis bots that run against MozReview. This is based on work that GPS did in the winter to create a static analysis bot for Python. We still need to rework some of the messages we’re sending out on Pulse when a review is published; at the moment we’ll end up re-reviewing unchanged commits and spamming lots of comments. This was a problem with the original Python bot and needs to be fixed before bots can be enabled.

  • Bug 1168486 involves creating a “Custom Hosting Service” for review repositories. This will let us maintain metadata about things like whether or not a repo has an associated Try repository so we can disable triggering try runs on reviews where this doesn’t make sense.

  • Bug 1123139 is a small UI fix to remove unnecessary information from the Description field. We’ve decided to reserve the Description field for displaying the Mercurial commit message which will hopefully encourage people to write more descriptive messages for their changes. This will also move the “pull down these commits” hint to the Information section on the right of the page. Like most small UI fixes, this consumed an embarrassing amount of time. I’ve come to realize that no matter how many bad UIs I leave under my pillow at night, the UI fairy will not come and fix them, so I’ll just have to get better at this sort of thing.

Patrick ClokeBoston Python: Twisted async networking framework

Yesterday, Stephen DiCato and I gave a talk for Boston Python titled: Twisted async networking framework. It was an introductory to intermediate level talk about using the Twisted networking framework, based on our experiences at Percipient Networks.

The talk, available on our GitHub (PDF), covered a few basic topics:

  1. What is asynchronous programming?
  2. What is Twisted?
  3. When/why to use Twisted?
  4. What is the event loop (reactor)?
  5. What are Deferreds and how do you use them?
  6. What are protocols (and related objects) and how do you use them?

Additionally there was a ‘bonus’ section: Using Twisted to build systems & services.

We used an example of a very simple chat server (NetCatChat: where the official client is netcat) to demonstrate these principles. All of our (working!) demo code is included in the repository.

There was a great turnout (almost 100 people showed up) and I greatly enjoyed the experience. Thanks to everyone who came, the sponsors for the night, Boston Python for setting this up, and Stephen for co-presenting! Please let us know if you have any questions or comments.

Rubén MartínBringing better support to regional communities

During this third quarter, one of the main goals for the Participation team at Mozilla is to better support Reps and Regional communities.

We want to focus our efforts this quarter on 10 countries, so we can be more efficient with the resources we have and be able to:

  • Tailor country profiles and a community health dashboard.
  • Develop a mid-term plan with at least three communities.
  • Systematize a coaching framework with volunteers.

As part of the Reps/Regional group I’m currently involved in these efforts, focusing in three European countries: Germany, France and UK.

During the past and following weeks I’ll be meeting volunteers from these communities to get to know more about them and to figure out where to get the information that will help develop the country profiles and the community dashboard, an important initiative for getting a clear overview of our community status.

Also, I’m working with the awesome German community to meet and work together on a plan to align and improve the community over the next 6 months.

On top of all the previous things, we are starting a set of 1:1 meetings with key volunteers inside these communities to bring coaching and support in a more personal way, understanding everyone’s views and learning the best ways to help people’s skills and motivation.

Finally, I’m working to improve the Reps/Regional team’s accountability and workflow productivity, exploring better ways to manage our work as a team and working with the Reps Council to put together a Rep program profile doc to better understand the current status and what should be changed or improved.

You can learn more about the Participation team’s Q3 goals and key results, as well as individual team members’ goals, in this public document, and follow our daily work on our GitHub page.

Emily DunhamApache Licenses

Apache Licenses

At the bottom of the Apache 2.0 License file, there’s an appendix:

APPENDIX: How to apply the Apache License to your work.

...

Copyright [yyyy] [name of copyright owner]

...

Does that look like an invitation to fill in the blanks to you? It sure does to me, and has for others in the Rust community as well.

Today I was doing some licensing housekeeping and made the same embarrassing mistake.

This is a PSA to double-check whether those inviting blanks are part of the appendix before filling them out in Apache license texts.

Christian HeilmannES6 for now: Template strings

ES6 is the future of JavaScript and it is already here. It is a finished specification, and it brings a lot of features a language requires to stay competitive with the needs of the web of now. Not everything in ES6 is for you and in this little series of posts I will show features that are very handy and already usable.

If you look at JavaScript code I’ve written you will find that I always use single quotes to define strings instead of double quotes. JavaScript is OK with either, the following two examples do exactly the same thing:

var animal = "cow";
 
var animal = 'cow';

The reason why I prefer single quotes is that, first of all, it makes it easier to assemble HTML strings with properly quoted attributes that way:

// with single quotes, there's no need to 
// escape the quotes around the class value
var but = '<button class="big">Save</button>';
 
// this is a syntax error:
var but = "<button class="big">Save</button>";
 
// this works:
var but = "<button class=\"big\">Save</button>";

The only time you need to escape now is when you use a single quote in your HTML, which should be a very rare occasion. The only thing I can think of is inline JavaScript or CSS, which means you are very likely doing something shady or desperate to your markup. Even in your texts, you are probably better off not using a single quote but the typographically more pleasing ’.

Aside: Of course, HTML is forgiving enough to omit the quotes or to use single quotes around an attribute, but I prefer to create readable markup for humans rather than relying on the forgiveness of a parser. We made the HTML5 parser forgiving because people wrote terrible markup in the past, not as an excuse to keep doing so.

I’ve suffered enough in the DHTML days of document.write to create a document inside a frameset in a new popup window and other abominations to not want to use the escape character ever again. At times, we needed triple ones, and that was even before we had colour coding in our editors. It was a mess.

Expression substitution in strings?

Another reason why I prefer single quotes is that I wrote a lot of PHP in my time for very large web sites where performance mattered a lot. In PHP, there is a difference between single and double quotes. Single quoted strings don’t have any substitution in them; double quoted ones do. Back in the days of PHP 3 and 4 that meant using single quotes was much faster, as the parser doesn’t have to go through the string to substitute values. Here is an example of what that means:

<?php
  $animal = 'cow';
  $sound = 'moo';
 
  echo 'The animal is $animal and its sound is $sound';
  // => The animal is $animal and its sound is $sound
 
  echo "The animal is $animal and its sound is $sound";
  // => The animal is cow and its sound is moo
?>

JavaScript didn’t have this substitution, which is why we had to concatenate strings to achieve the same result. This is pretty unwieldy, as you need to jump in and out of quotes all the time.

var animal = 'cow';
var sound = 'moo';
 
alert('The animal is ' + animal + ' and its sound is ' +
 sound);
// => "The animal is cow and its sound is moo"

Multi line mess

This gets really messy with longer and more complex strings and especially when we assemble a lot of HTML. And, most likely you will sooner or later end up with your linting tool complaining about trailing whitespace after a + at the end of a line. This is based on the issue that JavaScript has no multi-line strings:

 
// this doesn't work
var list = '<ul>
  <li>Buy Milk</li>
  <li>Be kind to Pandas</li>
  <li>Forget about Dre</li>
</ul>';
 
// This does, but urgh… 
var list = '<ul>\
  <li>Buy Milk</li>\
  <li>Be kind to Pandas</li>\
  <li>Forget about Dre</li>\
</ul>';
 
// This is the most common way, and urgh, too…
var list = '<ul>' +
'  <li>Buy Milk</li>' +
'  <li>Be kind to Pandas</li>' +
'  <li>Forget about Dre</li>' +
'</ul>';

Client side templating solutions

In order to work around the mess that is string handling and concatenation in JavaScript, we did what we always do – we write a library. There are many HTML templating libraries, with Mustache.js probably having been the seminal one. All of these follow their own – non-standardised – syntax and work in that frame of mind. It’s a bit like saying that you write your content in markdown and then realising that there are many different ideas of what “markdown” means.

Enter template strings

With the advent of ES6 and its standardisation we can now rejoice, as JavaScript has a new kid on the block when it comes to handling strings: Template Strings. The support for template strings in current browsers is encouraging: Chrome 44+, Firefox 38+, Microsoft Edge and WebKit are all on board. Safari, sadly enough, is not, but it’ll get there.

The genius of template strings is that they use a new string delimiter, which isn’t in use in either HTML or normal text: the backtick (`).

Using this one we now have string expression substitution in JavaScript:

var animal = 'cow';
var sound = 'moo';
 
alert(`The animal is ${animal} and its sound is ${sound}`);
// => "The animal is cow and its sound is moo"

The ${} construct can take any JavaScript expression that returns a value. You can, for example, do calculations or access properties of an object:

var out = `ten times two totally is ${ 10 * 2 }`;
// => "ten times two totally is 20"
 
var animal = {
  name: 'cow',
  ilk: 'bovine',
  front: 'moo',
  back: 'milk',
}
alert(`
  The ${animal.name} is of the 
  ${animal.ilk} ilk, 
  one end is for the ${animal.front}, 
  the other for the ${animal.back}
`);
// => 
/*
  The cow is of the 
  bovine ilk, 
  one end is for the moo, 
  the other for the milk
*/

That last example also shows you that multi line strings are not an issue at all any longer.

Tagged templates

Another thing you can do with template strings is prepend them with a tag, which is the name of a function that is called and gets the string as a parameter. For example, you could encode the resulting string for URLs without having to resort to the horridly named encodeURIComponent all the time.

function urlify (str) {
  return encodeURIComponent(str);
}
 
urlify `http://beedogs.com`;
// => "http%3A%2F%2Fbeedogs.com"
urlify `woah$£$%£^$"`;
// => "woah%24%C2%A3%24%25%C2%A3%5E%24%22"
 
// nesting also works:
 
var str = `foo ${urlify `&&`} bar`;
// => "foo %26%26 bar"

This works, but relies on implicit array-to-string coercion. The parameter sent to the function is not a string, but an array of strings and values. If used the way I show here, it gets converted to a string for convenience, but the correct way is to access the array members directly.

Retrieving strings and values from a template string

Inside the tag function you can not only get the full string but also its parts.

function tag (strings, values) {
  console.log(strings);
  console.log(values);
  console.log(strings[1]);
}
 
tag `you ${3+4} it`;
/* =>
 
Array [ "you ", " it" ]
7
it
 
*/

There is also an array of the raw strings provided to you, which means that you get all the characters in the string, including control characters. Say, for example, you add a line break with \n: you will get the resulting whitespace in the string, but the literal \n characters in the raw strings:

function tag (strings, values) {
  console.log(strings);
  console.log(values);
  console.log(strings[1]);
  console.log(strings.raw[1]);
}
 
tag `you ${3+4} \nit`;
/* =>
 
Array [ "you ", "  it" ]
7
 
it
 \nit
*/

Conclusion

Template strings are one of those nifty little wins in ES6 that can be used right now. If you have to support older browsers, you can of course transpile your ES6 to ES5; you can also do a feature test for template string support using a library like featuretests.io or with the following code:

var templatestrings = false;
try {
  new Function( "`{2+2}`" );
  templatestrings = true;
} catch (err) {
  templatestrings = false;
} 
 
if (templatestrings) {
	// …
}

More articles on template strings:

Cameron Kaiser38.2.1 delayed due to hardware failure

TenFourFox 38.2.1 was supposed to be released to you today but the hard disk used for compiling it blew up sometime yesterday and I've been recovering data from the drive and the last backup instead. The G5 version was built before the disk died, and does check out, but the other three builds haven't been yet. Let this be a reminder that DiskWarrior can fix a lot of things but not hardware failure (and people complaining of random faults in TenFourFox, please check your hardware first -- the symptom here was random freezes because the electronics kept dropping the drive off the SATA bus unexpectedly), so Data Rescue is busy getting the recoverable pieces off it and the rest I can restore from the file server. Both tools belong in your Power Mac bug-out bag and both still support PowerPC, so please support those vendors who still support us. It should be repaired enough to resume builds hopefully late tonight but I don't have an estimated time of release (hopefully no later than Sunday). It includes two Mozilla fixes and will also include a tweak for TenFourFox issue 306.

In the meantime, a fair bit of the wiki has been updated and rewritten for Github. I am also exploring an idea from bug 1160228 by disabling incremental garbage collection entirely. This was a bad idea on 31 where incremental GC was better than nothing, but now that we have generational garbage collection and the nursery is regularly swept, the residual tenured heap seems small enough to make periodic full GCs more efficient. On a tier 1 platform the overhead of lots of incremental cycles may well be below the noise floor, but on the pathological profile in the bug even a relatively modern system had a noticeable difference disabling incremental GC. On this G5 occasionally I get a pause in the browser for 20 seconds or so, but that happens very infrequently, and otherwise now that the browser doesn't have to schedule partial passes it seems much sprightlier and stays so longer. The iBook G4 saw an even greater benefit. Please note that this has not been tested well with multiple compartments or windows, so your mileage may vary, but with that said please see what you think: in about:config set javascript.options.mem.gc_incremental to false and restart the browser to flush everything out. If people generally find this superior, it may become the default in 38.3.

Gervase MarkhamTop 50 DOS Problems Solved: Sorting Directory Listings

Q: Could you tell me if it’s possible to make the DIR command list files in alphabetical order?

A: Earlier versions of DOS didn’t allow this but there’s a way round it. MS-DOS 5 gives you an /ON switch to use with DIR, for instance:

DIR *.TXT /ON /P

would list all the files with names ending in .TXT, pause the listing every screenful (/P) and sort the names into alphabetical order (/ON).

Users of earlier DOS programs can shove the output from DIR through a utility program that sorts the listing before printing it on the screen. That utility is SORT.EXE, supplied with DOS. … [So:]

DIR | SORT

diverts the output from DIR into SORT, which sorts the directory listing and sends it to the screen. Put this in a batch file called SDIR.BAT and you will have a sorted directory command called SDIR.

I guess earlier versions of DIR followed the Unix philosophy of “do one thing”…

Daniel StenbergContent over HTTP/2

Roughly a week ago, on August 19, cdn77.com announced that they are the “first CDN to publicly offer HTTP/2 support for all customers, without ‘beta’ limitations”. They followed up just hours later with a demo site showing off how HTTP/2 might perform side by side with an HTTP/1.1 example. And yes, the big competitor CDNs are not yet offering HTTP/2 support, it seems.

Their demo site initially got criticized for not being realistic and for making HTTP/2 look way better than it would in a real-life scenario, and it was subsequently updated fairly quickly. It is useful to compare it with the similarly designed, previously existing demo sites hosted by Akamai and the Go project.

cdn77’s offering is built on nginx’s alpha patch for HTTP/2 that was announced just two weeks ago. I believe nginx’s full release is still planned to happen by the end of this year.

I’ve talked with cdn77’s Jakub Straka and their lead developer Honza about their HTTP/2 efforts, and since I suspect there are a few others in my audience who’re also similarly curious I’m offering this interview-style posting here, intertwined with my own comments and thoughts. It is not just a big ad for this company, but since they’re early players on this field I figure their view and comments on this are worth reading!

I’ve been in touch with more than one person who has expressed surprise and awe over the fact that they’re running this early patch for nginx in production. So I had to ask them about that. Below, their comments are all prefixed with CDN77 and shown using italics.

nginx

CDN77: “Yes, we are running the alpha patch, which is basically a slightly modified SPDY. In the past we have been in touch with the Nginx team and exchanged tips and ideas, the same way we plan to work on the alpha patch in the future.

We’re actually pretty careful when deploying new and potentially unstable packages into production. We have separate servers for http2 and we are monitoring them 24/7 for any exceptions. We also have dedicated developers who debug any issues we are facing with these machines. We would never jeopardize the stability of our current network.

I’m not an expert on either server-side HTTP/2 or nginx in particular, but I think I read somewhere that the nginx HTTP/2 patch removes the SPDY support in favor of the new protocol.

CDN77: “You are right. HTTP/2 patch rewrites SPDY into the HTTP/2, so the SPDY is no longer supported after applying the patch. Since we have HTTP/2 running on separate servers, we still have SPDY support on the rest of the network.”

Did the team at cdn77 at all consider using something else than nginx for HTTP/2, like the promising newcomer h2o?

CDN77: “Not at all. Nginx is a clear choice for us. Its architecture and modularity is awesome. It is also very reliable and it has a pretty long history.

On scale

Can you share some of the biggest hurdles you had to overcome to deploy HTTP/2 on this scale with nginx?

CDN77: “Since nobody has tried the patch in such a scale ever before, we had to make sure it will hold even under pressure and needed to create a load heavy testing environment. We used servers from our partner company 10gbps.io and their 10G uplinks to create intensive ghost traffic. Also, it was important to make sure that supporting tools and applications are HTTP/2 ready – not all of them were. We needed to adjust the way we monitor and control servers in few cases.

There are a few bugs in Nginx that appear mainly in association with the longer-lived connections. They cause issues with the application layer and consume more resources. To be able to accommodate HTTP/2 and still keep necessary network redundancies, we needed to upgrade our network significantly.

I read this as an indication that the nginx patch isn’t perfected just yet rather than signifying that http2 is special. Perhaps also that http2 connections might use a larger footprint in nginx than good old http1 connections do.

Jakub mentioned they see average “performance savings” in the order of 20 to 60 percent depending on sites and contents with the switch to h2, but their traffic amounts haven’t been that large yet:

CDN77: “So far, only a fraction of the traffic is running via HTTP/2, but that is understandable since we launched the support few days ago. On the first day, only about 0.45% of the traffic was HTTP/2 and a big part of this was our own demo site. Over the weekend, we saw impressive adoption rate and the total HTTP/2 traffic accounts for more than 0.8% now, all that with the portion of our own traffic in this dropping dramatically. We expect to be pushing around 1.2% – 1.5% of total traffic over HTTP/2 till the end of this week.

Understandably, it is ramping up. Still, Firefox telemetry is showing at least 10% of the total traffic over HTTP/2 already.

Future of HTTPS and HTTP/2?

While I’m talking to a CDN operator, I figured I should poll their view on HTTPS going forward! Will the fact that all browsers only support h2 over HTTPS push more of your customers and your traffic in general over to HTTPS, you think?

CDN77: “This is not easy to predict. There is encryption overhead, but HTTP/2 comes with header compression and it is binary. So at this single point, the advantages and disadvantages zero out. Also, the use of HTTPS is rising rapidly even on the older protocol, so we don’t consider this an issue at all.

In general, from a CDN perspective and as someone who just deployed this on a fairly large scale, what’s your general perception of what http2 means going forward?

CDN77: “We believe that this is a huge step forward in how we distribute content online and as a CDN company, we are especially excited as it concerns the very core of our business. From the new features, we have great expectations about cache invalidation that is being discussed right now.

Thanks to Jakub, Honza and Tomáš of cdn77 for providing answers and info. We live in exciting times.

Air Mozilla"Contextual Identity: Supporting multiple identities on the web" Presented by Steve Englehardt

"Contextual Identity: Supporting multiple identities on the web" Presented by Steve Englehardt Steven Englehardt - Contextual Identity

Air Mozilla"JavaScript of the Future: Asynchronous functions in ES7" Presented by Mariusz Kierski

"JavaScript of the Future: Asynchronous functions in ES7" Presented by Mariusz Kierski Mariusz Kierski - JavaScript of the future: Asynchronous functions in ES7

Air Mozilla"It's All About That Automation!" Presented by Koki Yoshida

"It's All About That Automation!" Presented by Koki Yoshida Koki Yoshida - It's all about that automation!

Air Mozilla"Building vs. Shipping Software" Presented by Karim Benhmida

"Building vs. Shipping Software" Presented by Karim Benhmida Karim Benhmida - Building vs shipping software

Air Mozilla"Life is Hard" Presented by Jonathan Almeida

"Life is Hard" Presented by Jonathan Almeida Jonathan Almeida - Life is Hard

Air Mozilla"Saving the World from Bad Experience" Presented by Jatin Chhikara

"Saving the World from Bad Experience" Presented by Jatin Chhikara Jatin Chhikara - Saving the world from bad experience

Air MozillaIntern Presentations

Intern Presentations Bernardo Rittmeyer Jatin Chhikara Steven Englehardt Gabriel Luong Karim Benhmida Edouard Oger Jonathan Almeida Huon Wilson Mariusz Kierski Koki Yoshida

Air Mozilla"SIMD in Rust" Presented by Huon Wilson

"SIMD in Rust" Presented by Huon Wilson Huon Wilson - SIMD in Rust

Air Mozilla"I Promise CATs" Presented by Gabriel Luong

"I Promise CATs" Presented by Gabriel Luong Gabriel Luong - I Promise CATs!

Air Mozilla"Revamping the Sync Experience" Presented by Edouard Oger

"Revamping the Sync Experience" Presented by Edouard Oger Edouard Oger - Revamping the Sync Experience

Air Mozilla"Firefox Helps You Log In" Presented by Bernardo Rittmeyer

"Firefox Helps You Log In" Presented by Bernardo Rittmeyer Firefox Helps You Log In: Seamless password management for your daily browsing.

Mozilla WebDev CommunityBeer and Tell – August 2015

Once a month, web developers from across the Mozilla Project get together to spend an hour of overtime to honor our white-collar brethren at Amazon. As we watch our productivity fall, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

openjck: Discord

openjck was up first and shared Discord, a Github webhook that scans pull requests for CSS compatibility issues. When it finds an issue, it leaves a comment on the offending line with a short description and which browsers are affected. The check is powered by doiuse, and projects can add a .doiuse file (using browserslist syntax) that specifies which browser versions they want to be tested against. Discord currently checks CSS and Stylus files.

The Discord team (mrmakeit, groovecoder, davidwalsh, and openjck) is looking for sites to test Discord out. Work on the site is currently suspended (which is why it’s a side project; the team may work on it in their free time) so that feedback can be gathered to determine where the site should go next. If you’re interested in trying out Discord, let groovecoder know!

peterbe: Activity and Fanout.io

Next up was peterbe, with an update to Activity. The site now uses Fanout.io and a message queue to improve how activity items are fetched from GitHub and other sources. The site queues up jobs to fetch data from the Github API, and as the jobs complete, they send their results to Fanout. Fanout’s JavaScript library maintains an open WebSocket with their service, and when Fanout receives the data from the completed jobs, it notifies the client of the new data, which gets written to localStorage and updates the React state. This allows Activity to remain structured as an offline-ready application while still receiving seamless updates if the user has an internet connection.
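As a rough sketch of that client-side flow, with a plain WebSocket standing in for Fanout’s library (the endpoint and component names are made up):

var socket = new WebSocket('wss://example.fanoutcdn.com/activity');

socket.onmessage = function (event) {
  var items = JSON.parse(event.data);                      // results of a completed fetch job
  localStorage.setItem('activity', JSON.stringify(items)); // keep the offline-ready cache fresh
  activityApp.setState({ items: items });                  // update the React state to re-render
};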


There’s a donation jar near the exit; for just $25 you can pay for an hour of time for an Amazon engineer to spend with their family. Checks may be made payable to No Questions Asked Laundry, LLC.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Matt ThompsonImpact


Video recording of the Aug 26 Mozilla Learning community call

For the Mozilla Learning plan right now, we’re focused on impact. What impact will our advocacy and leadership work have in the world over the next three years? How do we state that in a way that’s memorable, manageable, measurable and motivational?

How do other orgs do it? As a way to think big and step back, we asked participants in Tuesday’s community call to give examples of organizations or projects that inspire them right now. Here’s our list.

Who inspires you?

  • Free Code Camp — learn to code by helping non-profit organizations (Amira)
  • 18F — kicking ass when it comes to bringing open source to government (Kaitlin)
  • The Inter-Agency Network for Education in Emergencies — cool network and community of practice for 15,000 people teaching in refugee camps and other emergency settings around the world (Surman)
  • The Engine Room — small and scrappy, but doing amazing work with teaching open tools for social change (Michelle)
  • GDS — because they somehow manage to work like MoFo, even though they are part of Government (Adam)
  • Keyboardio — open source mechanical keyboard with a wonderful backlight, shipped with a screwdriver so that you can tinker around and reprogram.  (Shreyas)

  • Born Accessible — thinking about web content as “born accessible.”(Emma)
  • WikiSpeed — a non-profit that’s building open source, energy-efficient cars in 17 countries,  with no org chart or management structure (@OpenMatt)
  • NESTA — engaged in some interesting thought leadership that relates well to our work (Sam)

  • Ocean Cleanup — addressing “The Great Pacific Garbage Patch” with business / philanthropy / sponsorship / science / data / youth vision all coming together to stem it (Rebecca)
  • Conservation International — I’m digging their current campaign: “Nature doesn’t need people, people need nature” (Paul)
  • Mercy for Animals — they take a big, often controversial topic and make it approachable — and they have a massive, engaged volunteer force (Lindsey)
  • Truth and Reconciliation Commission Canada (Simona)
  • Generation Squeeze — taking on the impossible task of advocating for worklife balance, childcare and affordable housing on a living wage (ErikaD)

  • NYT documentary of bieber + skrillex + diplo –  Love the focus on storytelling and combo of graphics / animation. (Cassie)
  • model view culture — cranky and continuous analytic deconstructions of intersections between technology, inclusion, diversity with anger and no apologies and a paper journal that arrives on a regular basis. (@leahatplay)
  • Colors magazine — open contribution (Jordan)
  • the Unilever rapper campaign — because it was a long-stale pollution problem that was revitalized with creativity (Andrea)

  • Hollaback — uses online tools to work with young people and confront street harassment (Sara)
  • Craigslist — because their success is based on the assumption that most people are good. (David)
  • Dark Mountain — thinking through how WebLit does / does not survive in the anthropocene. (Chad)
  • NPR – They strike a successful balance between mass appeal and education. (Simon)


takeaways?

The above examples are…

  1. Crisp. Our group was able to communicate the story for each of these projects — in their own words, off the top of their head, in a single sentence. That means the mission is telegraphic, simple and sticky.
  2. Viral. Each of these organizations has succeeded in creating an influential, mini-evangelist to spread their story for them: you!
  3. Edgy.  Many of these examples have a bit of punk rock or social justice grit. They’re not wearing a bow tie.
  4. Diverse. There’s a broad range of stuff here, not just the usual tech / ed tech suspects. This is a party you’d want to be at.
  5. Real. There’s no jargon or planning language in any of the descriptions people provided — the language is authentic and human, because no one’s trying too hard. It’s just natural and unscripted.

Can we get to this same level of natural, edgy crispness for MoFo and our core strategies? Would others put us on a list like this? Food for thought.

Air MozillaGerman speaking community bi-weekly meeting

German speaking community bi-weekly meeting https://wiki.mozilla.org/De/Meetings

Andrew HalberstadtLooking beyond Try Syntax

Today marks the 5 year anniversary of try syntax. For the uninitiated, try syntax is a string that you put into your commit message which a parser then uses to determine the set of builds and tests to run on your try push. A common try syntax might look like this:

try: -b o -p linux -u mochitest -t none

Since inception, it has been a core part of the Mozilla development workflow. For many years it has served us well, and even today it serves us passably. But it is almost time for try syntax to don the wooden overcoat, and this post will explain why.

A brief history on try syntax

In the old days, pushing to try involved a web interface called sendchange.cgi. Pushing is probably the wrong word to use, as at no point did the process involve version control. Instead, patches were uploaded to the web service, which in turn invoked a buildbot sendchange with all the required arguments. Like today, the try server was often overloaded, sometimes taking over 4 hours for results to come back. Unlike today, there was no way to pick and choose which builds and tests you wanted; every try push ran the full set.

The obvious solution was to create a mechanism for people to do that. It was while brainstorming this problem that ted, bhearsum and jorendorff came up with the idea of encoding this information in the commit message. Try syntax was first implemented by lsblakk in bug 473184 and landed on August 27th, 2010. It was a simple time; the list of valid builders could fit into a single 30 line config file; Fennec still hadn't picked up full steam; and B2G wasn't even a figment of anyone's wildest imagination.

It's probably not a surprise to anyone that as time went on, things got more complicated. As more build types, platforms and test jobs were added, the try syntax got harder to memorize. To help deal with this, lsblakk created the trychooser syntax builder just a few months later. In 2011, pbiggar created the trychooser mercurial extension (which was later forked and improved by sfink). These tools were (and still are) the canonical way to build a try syntax string. Little has changed since then, with the exception of the mach try command that chmanchester implemented around June 2015.

One step forward, two steps back

Since around 2013, the number of platforms and test configurations has grown at an unprecedented rate. So much so that the various trychooser tools have been perpetually out of date. Any time someone got around to adding a new job to the tools, two other jobs had already sprung up in its place. Another problem caused by this rapid growth was that try syntax became finicky. There were a lot of edge cases, exceptions to the rule and arbitrary aliases. Often jobs would mysteriously not show up when they should, or mysteriously show up when they shouldn't.

Both of those problems were exacerbated by the fact that the actual try parser code has never had a definite owner. Since it was first created, there have never been more than 11 commits in a year. There have been only two commits to date in 2015.

Two key insights

At this point, there are two things that are worth calling out:

  1. Generating try strings from memory is getting harder and harder, and for many cases is nigh impossible. We rely more and more on tools like trychooser.
  2. Try syntax is sort of like an API that these tools are built on top of.

What this means is that primary generators of try syntax have shifted from humans to tools. A command line encoded in a commit message is convenient if you're a human generating the syntax manually. But as far as tooling goes, try syntax is one god awful API. Not only do the tools need to figure out the magic strings, they need to interact with version control, create an empty commit and push it to a remote repository.

There is also tooling on the other side of the seesaw, things that process the try syntax post push. We've already seen buildbot's try parser but taskcluster has a separate try parser as well. This means that your try push has different behaviour, depending on whether the jobs are scheduled in buildbot or taskcluster. There are other one-off tools that do some try syntax parsing as well, including but not limited to try tools in mozharness, the try re-trigger bot and the AWSY dashboard. These tools are all forced to share and parse the same try syntax string, so they have to be careful not to step on each other's toes.

The takeaway here is that for tools, a string encoded as a commit message is quite limiting and a lot less convenient than say, calling a function in a library.

Despair not, young Padawan

So far we've seen how try syntax is finicky, how the tools that use it are often outdated and how it fails as an API. But what is the alternative? Fortunately, over the course of 2015 a lot of progress has been made on projects that, for the first time, give us a viable alternative to try syntax.

First and foremost, is mozci. Mozci, created by armenzg and adusca, is a tool that hooks into the build api (with early support for taskcluster as well). It can do things like schedule builds and tests against any arbitrary pushes, and is being used on the backend for tools like adusca's try-extender with integration directly into treeherder planned.

Another project that improves the situation is taskcluster itself. With taskcluster, job configuration and scheduling all lives in tree. Thanks to bhearsum's buildbot bridge, we can even use taskcluster to schedule jobs that still live in buildbot. There's an opportunity here to leverage these new tools in conjunction with mozci to gain complete and total control over how jobs are scheduled on try.

Finally I'd like to call out mach try once more. It is more than a thin wrapper around try syntax that handles your push for you. It actually lets you control how the harness gets run within a job. For now this is limited to test paths and tags, but there is a lot of potential to do some cool things here. One of the current limiting factors is the unexpressiveness of the try syntax API. Hopefully this won't be a problem too much longer. Oh yeah, and mach try also works with git.

A glimpse into the crystal ball

So we have several different projects all coming together at once. The hard part is figuring out how they all tie in together. What do we want to tackle first? How might the future look? I want to be clear that none of this is imminent. This is a look into what might be, not what will be.

There are two places we mainly care about scheduling jobs on try.

First imagine you push your change to try. You open up treeherder, except no jobs are scheduled. Instead you see every possible job in a distinct greyed out colour. Scheduling what you want is as simple as clicking the desired job icons. Hold on a sec, you don't have to imagine it. Adusca already has a prototype of what this might look like. Being able to schedule your try jobs this way has a huge benefit: you don't need to mentally correlate job symbols to job names. It's as easy as point and click.

Second, is pushing a predefined set of jobs to try from the command line, similar to how things work now. It's often handy to have the try command for a specific job set in your shell history and it's a pain to open up treeherder for a simple push that you've memorized and run dozens of times. There are a few improvements we can do here:

  • We can move the curses ui feature of the hg trychooser extension into mach try.
  • We can use mozci to automatically keep the known list of jobs up to date. This is useful for things like generating the curses ui on the fly, validation and tab completion.
  • We can use mozci + taskcluster + buildbot bridge to provide a much more expressive API for scheduling jobs. For example, you could easily push a T-style try run.
  • We can expand some of the functionality in mach try for controlling how the harnesses are run, for example we could use it to enable some of the debugging features of the harness while investigating test failures.

Finally for those who are stuck in their ways, it should still be possible to have a "classic try syntax" front-end to the new mozci backend. As large as this change sounds, it could be mostly transparent to the user. While I'm certainly not a fan of the current try syntax, there's no reason to begrudge the people who are.

Closing words

Try syntax has served us well for 5 long years. But it's almost time to move on to something better. Soon a lot of new avenues will be open and tools will be created that none of us have thought of yet. I'd like to thank all of the people mentioned in this post for their contributions in this area and I'm very excited for what the future holds.

The future is bright, and change is for the better.

Air MozillaWeb QA Weekly Meeting

Web QA Weekly Meeting This is our weekly gathering of Mozilla'a Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air MozillaReps weekly

Reps weekly Weekly Mozilla Reps call

Mike TaylorDynamically updating <meta viewport> in the year 2015.

18 months after writing the netaward-winning article Dynamically updating <meta viewport> in the year 2014, I wrote some patches for Firefox for Android to make it possible to update a page's existing meta[name=viewport] element's content attribute and have the viewport be updated accordingly.

So when version 43 ships (at some point in 2015), code like this will work in more places than it did in 2014:

if (screen.width < 760) {
    var viewport = document.querySelector("meta[name=viewport]");
    viewport.setAttribute('content', 'width=768');
}
if (screen.width > 760) {
    var viewport = document.querySelector("meta[name=viewport]");
    viewport.setAttribute('content', 'width=1024');
}

I'll just go ahead and accept the 2015 netaward now, thanks for the votes everyone, wowowow.

Ian BickingConway’s Corollary

Conway’s Law states:

organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations

I’ve always read this as an accusation: we are doomed to recreate the structure of our organizations in the structure of software projects. And further: projects cannot become their True Selves, cannot realize the most superior design, unless the organization is itself practically structureless. That only without the constraints of structure can the engineer make the truly correct choices. Michelangelo sculpted from marble, a smooth and uniform stone, not from an aggregate, where any hit with the chisel might reveal only the chaotic structure and fault lines of the rock and not his vision.

But most software is built, not revealed. I’m starting to believe that Conway’s observation is a corollary, not so clearly cause-and-effect. Maybe we should work with it, not struggle against it. (With age I’ve lost the passion for pointless struggle.) It’s not that developers can’t imagine a design that goes contrary to the organizational structure, it’s that they can’t ship those designs. What we’re seeing is natural selection. And when, through force of will, such a design is shipped, whether it survives and is maintained depends on whether the organization changed in the process, whether a structure was created to support that design.

A second skepticism: must a particular construction and modularity of code be paramount? Code is malleable, and its modularity is for the purpose of humans. Most of what we do disappears anyway when the machine takes over – functions are inlined, types erased, the pieces become linked, and the machine doesn’t care one whit about everything we’ve done to make the software comprehensible. Modularity is to serve our purposes. And sometimes organization structure serves a purpose; we change it to meet goals, and we shouldn’t assume the people who change it are just busybodies. But those changes are often aspirational, and so those changes are setting ourselves up for conflict as the new structure probably does not mirror the software design.

If the parts of an organization (e.g. teams, departments, or subdivisions) do not closely reflect the essential parts of the product, or if the relationship between organizations do not reflect the relationships between product parts, then the project will be in trouble… Therefore: Make sure the organization is compatible with the product architecture – Coplien and Harrison

So change the architecture! There’s more than one way to resolve these tensions.

A last speculation: as described in the Second System Effect we see teams rearchitect systems with excessive modularity and abstraction. Maybe because they remember all these conflicts, they remember all the times organizational structure and product motivations didn’t match architecture. The team makes an incorrect response by creating an architecture that can simultaneously embody all imagined organizational structures, a granularity that embodies not just current organizational tensions but also organizational boundaries that have come and gone. But the value is only in predicting future changes in structure, and only then if you are lucky.

Maybe we should look at Conway’s Law as a prescription: projects should only have hard boundaries where there are organizational boundaries. Soft boundaries and definitions still exist everywhere: just like we give local variables meaningful names (even though outside the function no one can tell the difference), we might also create abstractions and modularity that serve immediate and concrete purposes. But they should only be built for the moment and the task at hand. Extra effort should be applied to being ready to refactor in the future, not predicting and embodying those predictions in present modularity. Perhaps this is another rephrasing of Agile and YAGNI. Code is a liability, agency over that code is an asset.

Air MozillaBay Area Rust Meetup August 2015

Bay Area Rust Meetup August 2015 The SF Rust Meetup for August.

Nicholas NethercoteWhat does the OS X Activity Monitor’s “Energy Impact” actually measure?

[Update: this post has been updated with significant new information. Look to the end.]

Activity Monitor is a tool in Mac OS X that shows a variety of real-time process measurements. It is well-known and its “Energy Impact” measure (which was added in Mac OS X 10.9) is often consulted by users to compare the power consumption of different programs. Apple support documentation specifically recommends it for troubleshooting battery life problems, as do countless articles on the web.

However, despite its prominence, the exact meaning of the “Energy Impact” measure is unclear. In this blog post I use a combination of code inspection, measurements, and educated guesses to hypothesize how it is computed in Mac OS X 10.9 and 10.10.

What is known about “Energy Impact”?

The following screenshot shows the Activity Monitor’s “Energy” tab.

There are no units given for “Energy Impact” or “Avg Energy Impact”.

The Activity Monitor documentation says the following.

Energy Impact: A relative measure of the current energy consumption of the app. Lower numbers are better.

Avg Energy Impact: The average energy impact for the past 8 hours or since the Mac started up, whichever is shorter.

That is vague. Other Apple documentation says the following.

The Energy tab of Activity Monitor displays the Energy Impact of each open app based on a number of factors including CPU usage, network traffic, disk activity and more. The higher the number, the more impact an app has on battery power.

More detail, but still vague. Enough so that various other people have wondered what it means. The most precise description I have found says the following.

If my recollection of the developer presentation slide on App Nap is correct, they are an abstract unit Apple created to represent several factors related to energy usage meant to compare programs relatively.

I don’t believe you can directly relate them to one simple unit, because they are from an arbitrary formula of multiple factors.

[…] To get the units they look at CPU usage, interrupts, and wakeups… track those using counters and apply that to the energy column as a relative measure of an app.

This sounds plausible, and we will soon see that it appears to be close to the truth.

A detour: top

First, a necessary detour. top is a program that is similar to Activity Monitor, but it runs from the command-line. Like Activity Monitor, top performs periodic measurements of many different things, including several that are relevant to power consumption: CPU usage, wakeups, and a “power” measure. To see all these together, invoke it as follows.

top -stats pid,command,cpu,idlew,power -o power -d

(A non-default invocation is necessary because the wakeups and power columns aren’t shown by default unless you have an extremely wide screen.)

It will show real-time data, updated once per second, like the following.

PID            COMMAND                  %CPU         IDLEW        POWER
50300          firefox                  12.9         278          26.6
76256          plugin-container         3.4          159          11.3
151            coreaudiod               0.9          68           4.3
76505          top                      1.5          1            1.6 
76354          Activity Monitor         1.0          0            1.0

The PID, COMMAND and %CPU columns are self-explanatory.

The IDLEW column is the number of package idle exit wakeups. These occur when the processor package (containing the cores, GPU, caches, etc.) transitions from a low-power idle state to the active state. This happens when the OS schedules a process to run due to some kind of event. Common causes of wakeups include scheduled timers going off and blocked I/O system calls receiving data.

What about the POWER column? top is open source, so its meaning can be determined conclusively by reading the powerscore_insert_cell function in the source code. (The POWER measure was added to top in OS X 10.9.0 and the code has remained unchanged all the way through to OS X 10.10.2, which is the most recent version for which the code is available.)

The following is a summary of what the code does, and it’s easier to understand if the %CPU and POWER computations are shown side-by-side.

|elapsed_us| is the length of the sample period
|used_us| is the time this process was running during the sample period

  %CPU = (used_us * 100.0) / elapsed_us

  POWER = if is_a_kernel_process()
            0
          else
            ((used_us + IDLEW * 500) * 100.0) / elapsed_us
          

The %CPU computation is as expected.

The POWER computation is a function of CPU and IDLEW. It’s basically the same as %CPU but with a “tax” of 500 microseconds for each wakeup and an exception for kernel processes. The value of this function can easily exceed 100 — e.g. a program with zero CPU usage and 3,000 wakeups per second will have a POWER score of 150 — so it is not a percentage. In fact, POWER is a unitless measure because it is a semi-arbitrary combination of two measures with incompatible units.
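To make the arithmetic concrete, here is a minimal sketch of that formula (written as a small TypeScript function, not top’s actual C code).

// A sketch of top's POWER formula as summarized above (not Apple's code).
// usedUs: CPU time used by the process during the sample period, in microseconds.
// idleWakeups: package idle exit wakeups during the sample period.
// elapsedUs: length of the sample period, in microseconds.
function topPower(usedUs: number, idleWakeups: number, elapsedUs: number,
                  isKernelProcess = false): number {
  if (isKernelProcess) {
    return 0; // kernel processes are always reported as 0
  }
  // Each wakeup is "taxed" as 500 microseconds of CPU time.
  return ((usedUs + idleWakeups * 500) * 100.0) / elapsedUs;
}

// The example from the text: zero CPU usage and 3,000 wakeups over a
// one-second (1,000,000 microsecond) sample gives a POWER score of 150.
console.log(topPower(0, 3000, 1000000)); // 150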

Back to Activity Monitor and “Energy Impact”

MacBook Pro running Mac OS X 10.9.5

First, I did some measurements with a MacBook Pro with an i7-4960HQ processor running Mac OS X 10.9.5.

I did extensive testing with a range of programs: ones that trigger 100% CPU usage; ones that trigger controllable numbers of idle wakeups; ones that stress the memory system heavily; ones that perform frequent disk operations; and ones that perform frequent network operations.

In every case, Activity Monitor’s “Energy Impact” was the same as top‘s POWER measure. Every indication is that the two are computed identically on this machine.

For example, consider the data in the following table. The data was gathered with a small test program that fires a timer N times per second; other than extreme cases (see below) each timer firing causes an idle platform wakeup.

-----------------------------------------------------------------------------
Hz     CPU ms/s   Intr        Pkg Idle   Pkg Power  Act.Mon. top
-----------------------------------------------------------------------------
     2     0.14        2.00       1.80     2.30W     0.1    0.1
   100     4.52      100.13      95.14     3.29W       5      5
   500     9.26      499.66     483.87     3.50W      25     25
  1000    19.89     1000.15     978.77     5.23W      50     50
  5000    17.87     4993.10    4907.54    14.50W     240    240
 10000    32.63     9976.38    9194.70    17.61W     485    480
 20000    66.66    19970.95   17849.55    21.81W     910    910
 30000    99.62    28332.79   25899.13    23.89W    1300   1300
 40000   132.08    37255.47   33070.19    24.43W    1610   1650
 50000   160.79    46170.83   42665.61    27.31W    2100   2100
 60000   281.19    58871.47   32062.39    29.92W    1600   1650
 70000   276.43    67023.00   14782.03    31.86W     780    750
 80000   304.16    81624.60     258.22    35.72W      43     45
 90000   333.20    90100.26     153.13    37.93W      40     42
100000   363.94    98789.49      44.18    39.31W      38     38

The table shows a variety of measurements for this program for different values of N. Columns 2–5 are from powermetrics, and show CPU usage, interrupt frequency, package idle wakeup frequency, and package power, respectively. Column 6 is Activity Monitor’s “Energy Impact”, and column 7 is top’s POWER measurement. Columns 6 and 7 (which are approximate measurements) are identical, modulo small variations due to the noisiness of these measurements.

MacBook Air running Mac OS X 10.10.4

I also tested a MacBook Air with an i5-4250U processor running Mac OS X 10.10.4. The results were substantially different.

-----------------------------------------------------------------------------
Hz     CPU ms/s   Intr        Pkg Idle   Pkg Power Act.Mon. top
-----------------------------------------------------------------------------
     2     0.21        2.00       2.00     0.63W   0.0     0.1
   100     6.75       99.29      96.69     0.81W   2.4     5.2
   500    22.52      499.40     475.04     1.15W   10       25
  1000    44.07      998.93     960.59     1.67W   21       48
  3000   109.71     3001.05    2917.54     3.80W   60      145
  5000    65.02     4996.13    4781.43     3.79W   90      230
  7500   107.53     7483.57    7083.90     4.31W   140     350
 10000   144.00     9981.25    9381.06     4.37W   190     460

The results from top are very similar to those from the other machine. But Activity Monitor’s “Energy Impact” no longer matches top’s POWER measure. As a result it is much harder to say with confidence what “Energy Impact” represents on this machine. I tried tweaking the previous formula so that the idle wakeup “tax” drops from 500 microseconds to 180 or 200 microseconds, and that gives results that appear to be in the ballpark but don’t match exactly. I’m not certain that Activity Monitor takes all its measurements at the same time as top, which could account for some of the discrepancy. It’s also quite possible that other inputs have been added to the function that computes “Energy Impact”.

What about “Avg Energy Impact”?

What about the “Avg Energy Impact”? It seems reasonable to assume it is computed in the same way as “Energy Impact”, but averaged over a longer period. In fact, we already know that period from the Apple documentation that says it is the “average energy impact for the past 8 hours or since the Mac started up, whichever is shorter.”

Indeed, when the Energy tab of Activity Monitor is first opened, the “Avg Energy Impact” column is empty and the title bar says “Activity Monitor (Processing…)”. After a few seconds the “Avg Energy Impact” column is populated with values and the title bar changes to “Activity Monitor (Applications in last 8 hours)”. If you have top open during those 5–10 seconds you can see that systemstats is running and using a lot of CPU, and so presumably the measurements are obtained from it.

systemstats is a program that runs all the time and periodically measures, among other things, CPU usage and idle wakeups for each running process (visible in the “Processes” section of its output.) I’ve done further tests that indicate that the “Avg Energy Impact” is almost certainly computed using the same formula as “Energy Impact”. The difference is that the measurements are from the past 8 hours of wake time — i.e. if a laptop is closed for several hours and then reopened, those hours are not included in the calculation — as opposed to the 1, 2 or 5 seconds of wake time used for “Energy Impact”.

battery status menu

Even more prominent than Activity Monitor is OS X’s battery status menu. When you click on the battery icon in the OS X menu bar you get a drop-down menu which includes a list of “Apps Using Significant Energy”.

Screenshot of the OS X battery status menu

How is this determined? When you open this menu for the first time in a while it says “Collecting Power Usage Information” for a few seconds, and if you have top open during that time you see that, once again, systemstats is running and using a lot of CPU. Furthermore, if you click on an application name in the menu Activity Monitor will be opened and that application’s entry will be highlighted. Based on these facts it seems reasonable to assume that “Energy Impact” is again being used to determine which applications show up in the battery status menu.

I did some more tests (on my MacBook Pro running 10.9.5) and it appears that once an energy-intensive application is started it takes about 20 or 30 seconds for it to show up in the battery status menu. And once the application stops using high amounts of energy I’ve seen it take between 4 and 10 minutes to disappear. The exception is if the application is closed, in which case it disappears immediately.

Finally, I tried to determine the significance threshold. It appears that a program with an “Energy Impact” of roughly 20 or more will eventually show up as significant, and programs that have much higher “Energy Impact” values tend to show up more quickly.

All of these battery status menu observations are difficult to make reliably and so should be treated with caution. They may also be different in OS X 10.10. It is clear, however, that the window used by the battery status menu is measured in seconds or minutes, which is much less than the 8 hour window used for “Avg Energy Impact”.

An aside: systemstats is always running on OS X. The particular invocation used for the long-running instance — the one used by both Activity Monitor and the battery status menu — takes the undocumented --xpc flag. When I tried running it with that flag I got an error message saying “This mode should only be invoked by launchd”. So it’s hard to know how often it’s making measurements. The output from vanilla command-line invocations indicates it’s about every 10 minutes.

But it’s worth noting that systemstats has a -J option which causes the CPU usage and wakeups for child processes to be attributed to their parents. It seems likely that the --xpc option triggers the same behaviour because the Activity Monitor does not show “Avg Energy Impact” for child processes (as can be seen in the screenshot above for the login, bash and vim processes that are children of the Terminal process). This hypothesis also matches up with the battery status menu, which never shows child processes. One consequence of this is that if you ssh into a Mac and run a power-intensive program from the command line it will not show up in Activity Monitor’s energy tab or the battery status menu, because it’s not attributable to a top-level process such as Terminal! Such processes will show up in top and in Activity Monitor’s CPU tab, however.

How good a measure is “Energy Impact”?

We’ve now seen that “Energy Impact” is used widely throughout OS X. How good a measure is it?

The best way to measure power consumption is to actually measure power consumption. One way to do this is to use an ammeter, but this is difficult. Another way is to measure how long it takes for the battery to drain, which is easier but slow and requires steady workloads. Alternatively, recent Intel hardware provides high-quality estimates of processor and memory power consumption that are relatively easy to obtain.

These approaches all have the virtue of measuring or estimating actual power consumption (i.e. Watts). But the big problem is that they are machine-wide measures that cannot be used on a per-process basis. This is why Activity Monitor uses several proxy measures — ones that correlate with power consumption — which can be measured on a per-process basis. “Energy Impact” is a hybrid of at least two different proxy measures: CPU usage and wakeup frequency.

The main problem with this is that “Energy Impact” is an exaggerated measure. Look at the first table above, with data from the 10.9.5 machine. The variation in the “Pkg Power” column — which shows the package power from the above-mentioned Intel hardware estimates — is vastly smaller than the variation in the “Energy Impact” measurements. For example, going from 1,000 to 10,000 wakeups per second increases the package power by 3.4x, but the “Energy Impact” increases by 9.7x, and the skew gets even worse at higher wakeup frequencies. “Energy Impact” clearly weights wakeups too heavily. (In the second table, with data from the 10.10.4 machine, the weight given to wakeups is less, but still too high.)

Also, in the first table “Energy Impact” actually decreases when the timer frequency gets high enough. Presumably this is because the timer interval is so short that the OS has trouble putting the package into an idle power state. This leads to the absurd result that firing a timer at 1,000 Hz has about the same “Energy Impact” value as firing one at 100,000 Hz, when the package power of the latter is about 7.5x higher.

Having said all that, it’s understandable why Apple uses formulations of this kind for “Energy Impact”.

  • CPU usage and wakeup frequency are probably the two most important factors affecting a process’s power consumption, and they are factors that can be measured on a per-process basis.
  • Having a single measure makes things easy for users; evaluating the relative importance of multiple measures is more difficult.
  • The exception for kernel processes (which always have an “Energy Impact” of 0) avoids OS X itself being blamed for high power consumption. This makes a certain amount of sense — it’s not like users can close the kernel — while also being somewhat misleading.

If I were in charge of Apple’s Activity Monitor product, I’d do two things.

  1. I would compute a new formula for “Energy Impact”. I would measure the CPU usage, wakeup frequency (and any other inputs) and actual power consumption for a range of real-world programs, on a range of different Apple machines. From this data, hopefully a reasonably accurate model could be constructed. It wouldn’t be perfect, and it wouldn’t need to be perfect, but it should be possible to come up with something that reflects actual power consumption better than the existing formulations. Once formulated, I would then test the new version against synthetic microbenchmarks, like the ones I used above, to see how it holds up. Given the choice between accurately modelling real-world applications and accurately modelling synthetic microbenchmarks, I would definitely favour the former.
  2. I would publicly document the formula that is used so that developers can actually tell how their applications are being evaluated, and can optimize for that measure. You may think “but then developers will be optimizing for a synthetic measure rather than a real one” and you’d be right. That’s an inevitable consequence of giving a synthetic measure such prominence, and all the more reason for improving it.

Conclusion

“Energy Impact” is a flawed measure of an application’s power consumption. Nonetheless, it’s what many people use at this moment to evaluate the power consumption of OS X applications, so it’s worth understanding. And if you are an OS X application developer who wants to reduce the “Energy Impact” of your application, it’s clear that it’s best to focus first on reducing wakeup frequency, and then on reducing CPU usage.

Because Activity Monitor is closed source, I don’t know if I’ve characterized “Energy Impact” exactly correctly. The evidence given above indicates that I am close on 10.9.5, but not as close on 10.10.4. I’d love to hear if anybody has evidence that either corroborates or contradicts the conclusions I’ve made here. Thank you.

Update

A commenter named comex has done some great detective work and found that on 10.10 and 10.11 Activity Monitor consults a Mac model-specific file in the /usr/share/pmenergy/ directory. (Thank you, comex.)

For example, my MacBook Air has a model number 7DF21CB3ED6977E5 and the file Mac-7DF21CB3ED6977E5.plist has the following list of key/value pairs under the heading “energy_constants”.

kcpu_time               1.0
kcpu_wakeups            2.0e-4

This matches the previously seen formula, but with the wakeups “tax” being 200 microseconds, which matches what I hypothesized above.

kqos_default            1.0e+00
kqos_background         5.2e-01
kqos_utility            1.0e+00
kqos_legacy             1.0e+00         
kqos_user_initiated     1.0e+00
kqos_user_interactive   1.0e+00

“QoS” refers to quality of service classes which allow an application to mark some of its own work as lower priority. I’m not sure exactly how this is factored in, but from the numbers above it appears that operations done in the lowest-priority “background” class are considered to have an energy impact of about half that of operations done in all the other classes.

kdiskio_bytesread       0.0
kdiskio_byteswritten    5.3e-10

These ones are straightforward. Note that the “tax” for disk reads is zero, and for disk writes it’s a very small number. I wrote a small program that wrote endlessly to disk and saw that the “Energy Impact” was slightly higher than the CPU percentage alone, which matches expectations.

kgpu_time               3.0e+00

It makes sense that GPU usage is included in the formula. It’s not clear if this refers to the integrated GPU or the separate (higher performance, higher power) GPU. It’s also interesting that the weighting is 3x.

knetwork_recv_bytes     0.0 
knetwork_recv_packets   4.0e-6
knetwork_sent_bytes     0.0
knetwork_sent_packets   4.0e-6

These are also straightforward. In this case, the number of bytes sent and received is ignored; only the number of packets matters, and the cost of receiving and sending packets is considered equal.

So, in conclusion, on 10.10 and 10.11, the formula used to compute “Energy Impact” is machine model-specific, and includes the following factors: CPU usage, wakeup frequency, quality of service class usage, and disk, GPU, and network activity.
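As a rough illustration of how these constants might be combined (this is speculation on my part, since Activity Monitor is closed source), a weighted sum over the sample period would look something like the following TypeScript sketch. The field names and the normalization are assumptions; only the weights come from the plist above.

// Speculative sketch only: a linear combination of per-process counters using the
// MacBook Air plist weights above, normalized over the sample period like top's POWER.
// None of these names come from Apple; they are assumptions for illustration.
interface SamplePeriod {
  elapsedSecs: number;        // length of the sample period, in seconds
  cpuSecs: number;            // CPU time used, assumed pre-scaled by the kqos_* weights
  wakeups: number;            // package idle exit wakeups
  gpuSecs: number;            // GPU time used
  diskBytesWritten: number;
  packetsReceived: number;
  packetsSent: number;
}

function energyImpactGuess(s: SamplePeriod): number {
  const weightedSecs =
      1.0     * s.cpuSecs +                           // kcpu_time
      2.0e-4  * s.wakeups +                           // kcpu_wakeups: 200 µs "tax" per wakeup
      3.0     * s.gpuSecs +                           // kgpu_time
      5.3e-10 * s.diskBytesWritten +                  // kdiskio_byteswritten (reads are free)
      4.0e-6  * (s.packetsReceived + s.packetsSent);  // knetwork_*_packets
  return (weightedSecs * 100.0) / s.elapsedSecs;
}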

This is definitely an improvement over the formula used in 10.9, which is great to see. The parameters are also visible, if you know where to look! It would be wonderful if all these inputs, along with their relative weightings, could be seen at once in Activity Monitor. That way developers would have a much better sense of exactly how their application’s “Energy Impact” is determined.

Jonathan GriffinEngineering Productivity Update, August 26, 2015

It’s PTO season and many people have taken a few days or a week off.  While they’re away, the team continues making progress on a variety of fronts.  Planning also continues for GoFaster and addon-signing, which will both likely be significant projects for the team in Q4.

Highlights

Treeherder: camd rolled out a change which collapses chunked jobs on Treeherder, reducing visual noise.  In the future, we plan on significantly increasing the number of chunks of many jobs in order to reduce runtimes, so this change makes that work more practical.  See camd’s blog post.  emorley has landed a change which allows TaskCluster job errors that occur outside of mozharness to be properly handled by Treeherder.

Automatic Starring: jgraham has developed a basic backend which supports recognizing simple intermittent failures, and is working on integrating that into Treeherder; mdoglio is landing some related database changes. ekyle has received sheriff training from RyanVM, and plans to use this to help improve the automated failure recognition algorithm.

Perfherder and Performance Testing: Datazilla has finally been decommissioned (R.I.P.), in favor of our newest performance analysis tool, Perfherder.  A lot of Talos documentation updates have been made at https://wiki.mozilla.org/Buildbot/Talos, including details about how we perform calculations on data produced by Talos.  wlach performed a useful post-mortem of Eideticker, with several takeaways which should be applicable to many other projects.

MozReview and Autoland: There’s a MozReview meetup underway, so expect some cool updates next time!

TaskCluster Support: ted has made a successful cross-compiled OSX build using TaskCluster!  Take it for a spin.  More work is needed before we can move OSX builds from the mac mini builders to the cloud.

Mobile Automation: gbrown continues to make improvements on the new |mach emulator| command which makes running Android tests locally on emulator very simple.

General Automation: run-by-dir is live on opt mochitest-plain; debug and ASAN coming soon.  This reduces test “bleed-through” and makes it easier to change chunking.  adusca, our Outreachy intern, is working to integrate the try extender into Treeherder.  And ahal has merged the mozharness “in-tree” configs with the regular mozharness config files, now that mozharness lives in the tree.

Firefox Automation: YouTube ad detection has been improved for firefox-media-tests by maja, which fixes the source of the top intermittent failure in this suite.

Bughunter: bc has got asan-opt builds running in production, and is working on gtk3 support.

hg.mozilla.org: gps has enabled syntax highlighting in hgweb, and has added a new JSON API as well.  See gps’ blog post.

The Details

bugzilla.mozilla.org
Treeherder
Perfherder/Performance Testing
  • talos cleanup and preparation to move in-tree
  • perfherder database cleanup in progress for simpler and more optimized queries. This is mainly preparatory work for making perfherder capable of managing/starring performance alerts, but as a bonus perfherder compare view should load virtually instantly once this is finished. 
  • most talos wiki docs are updated: https://wiki.mozilla.org/Buildbot/Talos
TaskCluster Support
Mobile Automation
  •  [gbrown] Working on “mach emulator” support: wip can download and run 2.3, 4.3, or x86 emulator images. Integrating with other mach commands like “install” and “mochitest”.
  •  [gbrown] Updated mochitest manifests to run most dom/media mochitests on Android 4.3 (under review, bug 1189784)
Firefox and Media Automation
  • [maja_zf] Improved ad detection on YouTube for firefox-media-tests, which fixes our top intermittent failure for long-running playback tests.
General Automation
  •  run-by-dir is live for mochitest-plain (opt only); debug is coming soon, followed by ASAN.
  • Mozilla CI tools is moving from using BuildAPI as the scheduling entry point to using TaskCluster’s scheduling. This work will allow us to schedule a graph of buildbot jobs and their dependencies in one shot. https://bugzil.la/1194264
  • adusca is integrating into treeherder the ability to extend the jobs run for any push. This is based on the http://try-extender.herokuapp.com prototype. Follow along in https://bugzil.la/1194830
  • Git was deployed to the test machines. This is necessary to make the Firefox UI update tests work on them.
  • [ahal] merge mozharness in-tree configs with the main mozharness configs
ActiveData
  • Bug fixes to the ETL – fix bad lookups on hg repo, mostly l10n builds 
  • More error reporting on ETL – structured logging has changed a little; handle the new variations, be more elegant when it comes to unknowns, and complain when there is non-conformance.
  • Some work on adding hg `repo` table – acts as a cache for ETL, but can be used to calculate ‘per push’ statistics on OrangeFactor data.
  • Added Talos to the `perf` table – used the old Datazilla ETL code to fill the ES cluster.  This may speed up extracting the replicates, for exploring the behaviour of a test.
  • Enable deep queries – Effectively performing SQL join on Elasticsearch – first attempt did too much refactoring.  Second attempt is simpler, but still slogging through all the resulting test breakage
hg.mozilla.org
WebDriver
  • Updated 
Marionette
  • [ahal] helped review and finish contributor patch for switching marionette_client from optparse to argparse
  • Corrected UUID used for session ID and element IDs
  • Updated dispatching of various marionette calls in Gecko
bughunter
  • [bc] Have asan-opt builds running in production. Finalizing patch. Still need to build gtk3 for rhel6 32bit in order to stop using custom builds and support opt in addition to debug.
charts.mozilla.org
  • Updated the hierarchical burndowns to EPM’s important metabugs that track features 
  • More config changes

Support.Mozilla.OrgSUMO Questions Day this Thursday, 27 August 2015

The summer holidays are now over, so it’s time to start organizing a new SUMO Day!

What are SUMO Days?

A SUMO Day is that time of the week when everybody who loves doing support (contributors, admins, moderators) gathers together to try and answer all the incoming questions on the Mozilla support forums. This is a 24-hour event: we will start early during European mornings and finish late during US Pacific evenings.

We are also hanging out having fun and helping each other in #sumo on IRC.

I want to participate! Where do I start?

Just create an account and then take some time to help with unanswered questions. We have an etherpad ready with all the details plus additional tips and resources.

If you get stuck with questions that are too difficult feel free to ping us on IRC #sumo or ask for help on the contributors forums.

Moderators

SUMO Day will be moderated by madasan (EU morning/afternoon), marksc (EU afternoon/US morning), guigs (US morning/afternoon). We can always use more people to help moderate through the day so if you would like to do this just add your name in the etherpad!

What does it mean to be a SUMO Day moderator?

It’s easy! Just check out the forums and monitor incoming questions. Don’t forget to hang out on IRC on #sumo and the contributor forums and chat with the other SUMO Day participants about possible solutions to questions. As a moderator you also help out contributors who are stuck with difficult questions and need help.

Screensharing experiment

During this SUMO Day some of us will experiment with helping users via screen sharing. This is only open to senior contributors and forum moderators so if you’re one of them and you would like to participate please PM madasan.

 We’re trying to answer each and every incoming question on the support forum on Thursday so please join us. The more the merrier!

 

See you online and happy SUMO Day!

Air MozillaProject Beehive: A HW/SW co-designed stack for runtime and architectural research.

Project Beehive: A HW/SW co-designed stack for runtime and  architectural research. In this talk we will present an overview of our recent research efforts focusing on Hw/Sw co-designed platform for heterogeneous many-core architectural research. The presented...

Will Kahn-Greenepytest-wholenodeid addon: v0.2 released!

What is it?

pytest-wholenodeid is a pytest addon that shows the whole node id on failure rather than just the domain part. This makes it a lot easier to copy and paste the entire node id and re-run the test.

v0.2 released!

I wrote it in an hour today to make it easier to deal with test failures. Then I figured I'd turn it into a real project so friends could use it. Now you can use it, too!

I originally released v0.1 (the first release) and then noticed on PyPI that the description was a mess, so I fixed that and released v0.2.

To install:

pip install pytest-wholenodeid

It runs automatically. If you want to disable it temporarily, pass the --nowholeid argument to pytest.

More details on exactly what it does on the PyPI page.

If you use it and find issues, write up an issue in the issue tracker.

Mozilla Community Ops TeamWeekly Update 2015-08-26

Discourse

Discourse UX improvements (@Leo, @yousef)

There are some changes to Discourse that should be made to make it more suitable to Mozillians’ needs

  • Status [In Progress]: See SSO update below. We can still use help researching and building the plugins that we need.
SSO (@Leo)

To improve the login experience for people using Discourse within Mozilla, bridge the gap in various ways between our different instances (e.g. single username across instances), and integrate better with Mozilla more widely (with Mozillians integration, etc.)

  • Status [In Progress]: Still working on initial version of SSO server, currently working on finishing touches
Discourse Documentation (@Kensie)

To make Discourse more user friendly for Mozillians, we need some good documentation on how to use it

  • Status [In Progress]: Added a couple docs based on questions that came up during the week. Still need people to ask questions so we can answer them.
MECHADISCOURSE (@Yousef)

Putting all Discourse instances on one infrastructure, automated with Ansible and CloudFormation. This will help us keep the many Discourse instances we have secure, up to date, and running common plugins easily, at scale. Also saves $$$ while allowing all of our instances to be HA.

  • Status [In Progress]: Turns out this isn’t quite production ready so we’re going to use our staging servers as a test-bed to iron out issues
MoFo Discourse migrations (@Yousef)

Migrating the Webmaker, Science and Hive Discourse instances to MECHADISCOURSE. This provides the teams with more stable Infra for their Discourse instances.

  • Status [In Progress]: Leo is currently implementing Webmaker login for the Teach The Web Discourse

Ansible (@Tanner)

Config management, initializes servers, will be used with MECHADISCOURSE as its first “big” project. Makes it 100x easier to set up servers.

  • Status [Done!]: Production-ready, Jenkins has been set up so jobs can be triggered on-demand.

Monitoring (@Tanner)

We need to set up a robust monitoring solution for our sites.

  • Status [In Progress]: Will be using Nagios. Need to write checks and config for Nagios, and then deploy the NRPE agent to servers.

Community Hosting (@Tanner, @yousef)

Audit

We need to understand which sites are being actively used and which no longer need hosting, or need different hosting than they currently have

  • Status [In Progress]: Michael Buluma has started work on defining an MVP (minimum viable product) for a community website.
Migration

We will be moving away from OVH to simplify community hosting and save money.

  • Status [Stalled]: Waiting for progress on Participation Infrastructure side

Documentation (@Kensie)

Discourse documentation (see above)
Wiki update

Our wiki pages are out of date, and shouldn’t be under IT anymore

  • Status [In Progress]: Michael Buluma has started working on this.
Confluence (@Kensie)

Links to JIRA, will use it to help with project management, decision tracking.

  • Status [In Progress]: Help from Atlassian experts would be very welcome!

Matrix (@Leo)

Communication protocol which attempts to bind various different ones together – could possibly be used by us as a Telegram-esque IRC bouncer. Discussion and link to planning pad here.

  • Status [In Progress]: Started to investigate it, finding answers to various questions

MozFest Participation (@Kensie)

We are looking at ways our team can support MozFest, and planning session proposals that would be interesting to MozFest

Online Forum for Participants (@Tanner)

We are offering our services to host a Discourse instance for MozFest

  • Status [In Progress]: Putting up a Discourse instance as a PoC
Session Proposals (@Kensie)
  • Status [In Progress]: We have several proposals to submit, ideally by Friday (deadline is Monday).

Miscellaneous

  • Crowd didn’t work as hoped. It messed with a lot of plugins for Jenkins that relied on usernames, so ldap might work better.

Contribution Opportunities

Recap of contribution opportunities from status updates and ongoing contribution opportunities:

  • Discourse
    • Research/coding customizations
    • Documenting how to use Discourse/need questions to answer
    • Ansible expertise welcome
  • Monitoring
    • Nagios experts/mentors welcome
  • Community Hosting
    • Research MVP for community sites
  • Documentation
    • Discourse (see above)
    • Need writers to help drive wiki update
    • Atlassian experts welcome to help with Confluence/JIRA organization

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Air MozillaBugzilla Development Meeting

Bugzilla Development Meeting Help define, plan, design, and implement Bugzilla's future!

Tantek ÇelikVacation Mode @Yahoo? How About Evening Mode, Original Content Mode, and Walkie Talkies With Texting?

Called it. I recently asked “When did you last eat without using digital devices at all?” and proposed a “dumb camera mode” where you could only take/review photos/videos and perhaps also write/edit text notes on your otherwise “smart” phone that usually made you dumber through notification distractions.

Five days later @Yahoo published a news article titled: “The One Feature Every Smartphone Needs: Vacation Mode” — something I’m quite familiar with, having recently completed a one week Alaska cruise during which I was nearly completely off the grid.

Evening Mode rather than Vacation Mode

Despite the proposals in the Yahoo article, I still think a “dumb” capture/view mode would be better on a vacation, where all you could do with your device was capture photos/text/GPS and subsequently view/edit what you captured. Even limited notifications distract and detract from a vacation.

However, the idea of “social media updates only from people you’re close to, either geographically or emotionally” would be useful when not on vacation. I'd use that as an Evening Mode most nights.

Original content rather than “shares”

In addition, the ability to filter and only see “original content — no shared news stories on Facebook, no retweets on Twitter” would be great as reading prioritization — I only have a minute, show me only original content, or show me original content from the past 24h before any (re)shares/bookmarks etc.

This strong preference to prioritize viewing original content is I think what has moved me to read my Instagram feed, and in contrast nearly ignore my Twitter feed / home page, as well as actively avoid Facebook’s News Feed.

Ideally I’d use an IndieWeb reader, but they too have yet to find a way to distinguish original content posts in contrast to bookmarks or brief quotes / commentary / shares of “news” articles.

Tame your inbox? No, vacation should mean no inbox

The Yahoo article suggests: “tame your inbox in the same fashion, showing messages from your important contacts as they arrive but hiding everything else” and completely misses the point of disconnecting from all inbox stress while on vacation.

SMS smart phone texting frustrations vs stress-free iPod

While I was on the Alaska cruise, other members of my family did txt/SMS each other a bit, but due to the unreliability of the shipboard cell tower, it was more frustrating to them than not.

With my iPod, I completely opted out of all such electronic text comms, and thus never stressed about checking my device to coordinate.

IRL coordination FTW

Instead I coordinated as I remember doing as a kid (and even teenager) — we made plans when we were together, about the next time and place we would meetup, and our general plans for the day. Then we’d adjust our plans by having *in-person* conversations whenever we next saw each other.

Or if we needed to find each other, we would wander around the ship, to our staterooms, the pool decks, the buffet, the gym, knowing that it was a small enough world that we’d likely run into each other, which we did several times.

During the entire trip there was only one time that I lost touch with everyone and actually got frustrated. But even that just took a bit longer of a ship search. Of course even for that situation there are solutions.

Walkie Talkies!

My nephews and niece used walkie-talkies that their father brought on board, and that actually worked in many ways better than anyone’s fancy smart phones.

Except walkie-talkies can be a bit intrusive.

Walkie Texting?

My question is:

If walkie-talkies can send high quality audio back and forth in broadcast mode, why can’t they broadcast short text messages to everyone else on that same “channel” as well?

Then I found this on Amazon: TriSquare eXRS TSX300-2VP 900MHz FHSS Digital Two-Way Radio Two 2-way radios

  • Digital Two-Way Radio
  • spread spectrum and encrypted
  • text messaging between radios

(Discontinued by Manufacturer)

Anybody have one or a similar two-way radio that also supports texting?

Or would it be possible to do peer-to-peer audio/texting purely in software on smart “phones”, over bluetooth or wifi, without having to go through a central router/tower?

That would seem ideal for a weekend road trip, say to Tahoe, or to the desert, or perhaps even for camping, again, maybe in the desert, like when you choose to escape from the rest of civilization for a week or more.

James LongThe Seasonal Blog Redux

It's that time of year again! The weeds are growing, the air is thick and stagnant, and I just deployed another refactoring of my blog. "Why does he keep working on his blog," you're thinking, "when I could do all of that with a static-site generator like Jekyll?"

Writing my own blogging engine has been one of the best decisions I've made. Having a side project that I actually use and get value from is a great place to implement my own ideas, or try out new libraries. Every now and then it's fun to throw it back in the furnace, get it hot, and start shaping it with new ideas.

A blog is a great litmus test for new libraries (remember, I have an admin site behind this). You have to deal with routing, forms, interfacing with things like the CodeMirror editor, server-side rendering, async data fetching, and more. I feel like it really hits most of the pain-points of big client-side apps, even if it's a relatively small project. The only thing it doesn't stress is a complex shape of data: the data I get back from the server is pretty simple, and more complex apps would need something better to handle complex data.

But even then, contrasting my simple code with more complex solutions makes it really clear why they are solved that way. Take GraphQL for example; I definitely don't need it, but there are a few places in my code that would obviously be way more complex if my data was more complex, and it's clear what GraphQL is trying to solve.

Last time I completely rewrote my blog, I learned about react-router, Webpack (with babel integration), server-side rendering (universal apps), Docker, and various aspects of React.

This time, I learned about Redux, immutable-js, and having a fully snapshot-able app state.

What do I mean by snapshot? My entire app state (even component local state) lives as a nested tree with a single root. I can simply serialize that root, and load it in later to see the app exactly how it was at that point in time. Here's a fun trick to show you what I mean: copy all of this text, press cmd+shift+k and paste it in. That's my admin interface with 2 errors; you're seeing it exactly at that point (may not work in all browsers, Chrome is known to truncate prompt inputs. I'll make my own modal at some point).

Redux What?

Redux is a library that complements React and manages application state. It provides a simple workflow for updating application state and allowing React components to subscribe to state changes. While it borrows ideas from Elm, Flux, and various fancy-sounding abstractions, it's actually quite simple.

It embraces an idea currently bubbling up in the UI community: make state explicit and immutable, use pure functions as much as possible, and push all side effects to the edge of your app. In fact, the entire state exists as a single atom: a deeply nested JS object that contains everything you need to render the current UI.

This seems radical, but it's the right way to do things.

  1. Your frontend is made up of simple pure functions that take inputs and return outputs. This makes it extremely easy to test, rationalize about, and do things like hot-reloading. Separating state from code just makes things simpler.

  2. Your state exists as a single object that is never mutated. Normally it's a JS object, but it could be an immutable.js object or even a client-side database. That's right, putting state in one place means you could even use a database for state. That's not even the best part: with a single atom and immutability, you can easily snapshot and resume the app at any point in time!

Redux provides the ability for the UI to subscribe to changes to specific parts of the app state. Generally only top-level components in the UI select state from the global app state atom, and most components are pure: they simply receive data and render it.

The library has roots in Flux, Facebook's original library for handling state. The main similarity is you dispatch actions to change state. An action is simply a JavaScript object with a type field and any other fields as arguments. These actions are dispatched across all registered "reducers", which are functions that take state and an action and return new state: (state, action) -> newState. All new states are grouped together into a new single atom app state.
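For example, a reducer for a hypothetical list of posts might look like the following sketch (the state shape and action types are invented for illustration, not taken from this blog's code):

// A minimal reducer sketch: (state, action) -> newState, never mutating the old state.
interface Post { id: number; title: string; }
interface PostsState { posts: Post[]; isLoading: boolean; }

type Action =
  | { type: "REQUEST_POSTS" }
  | { type: "RECEIVE_POSTS"; posts: Post[] };

const initialState: PostsState = { posts: [], isLoading: false };

function postsReducer(state: PostsState = initialState, action: Action): PostsState {
  switch (action.type) {
    case "REQUEST_POSTS":
      // Return a new object; the previous state object is left untouched.
      return { ...state, isLoading: true };
    case "RECEIVE_POSTS":
      return { ...state, posts: action.posts, isLoading: false };
    default:
      return state;
  }
}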

The real world is grey and misty like a London street. You can't use pure functions and a global app state atom for everything. Asynchronous code is inherently side-effecting, but by isolating it to a specific part of your app, the rest of the world doesn't have to be bothered with things such as promises or observables. Updating the app state and rendering the UI is completely synchronous, but "async action creators" are functions which have the ability to dispatch multiple actions over time.
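As a sketch of what such an async action creator might look like (reusing the Post and Action types from the reducer sketch above; the endpoint is made up, and this is the common thunk style rather than this blog's exact code):

// An async action creator: it performs the side effect (fetching) and dispatches
// plain, synchronous actions as results arrive.
async function loadPosts(dispatch: (action: Action) => void): Promise<void> {
  dispatch({ type: "REQUEST_POSTS" });
  const response = await fetch("/api/posts");   // hypothetical endpoint
  const posts: Post[] = await response.json();
  dispatch({ type: "RECEIVE_POSTS", posts });
}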

Local state is obviously desirable in certain situations, although it's less important than you think. UIs tend to require global state: many different parts of the UI need access to the same data. However, local state is important mainly for performance reasons. We are not out of luck though: we can get local state back by scoping part of the global app state atom to single components, as CircleCI did.

The frontend space is super interesting these days, and there's a lot to talk about. Follow me as I blog more about what I learned rewriting my blog with these ideas. I'll walk through specific techniques in my blog's code dealing with:

  • Using immutable.js for app state
  • Integrating Redux with react-router
  • Data fetching and asynchronous action creators
  • Server-side rendering
  • Local state

Feel free to peruse my blog's code in the meantime.

Giorgio MaoneWebExtensions FAQ

WebExtensions are making some people happy, some people angry, many people ask questions.
Some of the answers can be found here, more to come as add-on developers keep discussing this hot topic.
My favourite one: No, your add-ons' ability and your own creativity won't be limited by the new API.

Michael KaplyUsing Hidden UI in the CCK2

One of the questions I get asked the most is how to hide certain UI elements of Firefox. I implemented the Hidden UI feature of the CCK2 specifically to address this problem. Using it can be a little daunting, though, so I wanted to take some time to give folks the basics.

The Hidden UI feature relies on CSS selectors. We can use CSS selectors to specify any element in the Firefox user interface and then that element will be hidden. The trick is figuring out the selectors. To accomplish this, my primary tool is the DOM Inspector. With the DOM Inspector, I can look at any element in the Firefox user interface and determine its ID. Once I have its ID, I can usually specify the CSS selector as #ID and I can hide that element. Let's walk through using the DOM Inspector to figure out the ID of the home button.

  • Install the DOM Inspector
  • Go to Developer Tools and select DOM Inspector
  • From the DOM Inspector Window, select File->Inspect Chrome Document and select the first window
  • In the DOM Inspector Window, click on the Node Finder.
  • Click on the Home button in the Firefox Window.
  • You'll see results in the DOM Inspector that look like this:

  • This gives us something unique we can use - an ID. So #home-button in Hidden UI will hide the home button.

    You can use this method for just about every aspect of the Firefox UI except for menus and the Australis panel. For these items, I turn to the Firefox source code.

    If you want to hide anything on the Australis panel, you can look for IDs here. If you want to hide anything on the Firefox context menu, you can look here. If you want to hide anything in the menu bar, you can look here.

    As a last resort, you can simply hide menuitems based on their text. For instance, if you wanted to hide the Customize menu that appears when you right click on a toolbar, you could specify a selector of menuitem[label^='Customize']. This says "Hide any menu item that begins with the word Customize." Don't try to include the ellipsis in your selector because in most cases it's not ..., it's the unicode ellipsis (…). (Incidentally, that menu is defined here, along with the rest of the toolbar popup menu. Because it doesn't have an ID, you'll have to use menuitem.viewCustomizeToolbar.)

    Hopefully this should get everyone started. If there's something you can't figure out how to hide, let me know. And if you're trying to hide everything, you should probably be looking at a kiosk solution, not the CCK2...

Air MozillaMozilla Learning Community Call Aug 25

Mozilla Learning Community Call Aug 25 Mozilla Learning community calls are open to all. The goal: work on the Mozilla Learning plan together.

Jet VillegasSetting up for Android and Firefox OS Development

This post is a follow-up to an earlier article I wrote about setting up a FirefoxOS development environment.

I’m going to set up a Sony Z3C as the target device for Mobile OS software development. The Sony Z3C (also known as Aries or aosp_d5803 ) is a nice device for Mobile OS hacking as it’s an AOSP device with good support for building the OS binaries. I’ve set the phone up for both FirefoxOS and Android OS development, to compare and see what’s common across both environments.

Please note that if you got your Sony Z3C from the Mozilla Foxfooding program, then this article isn’t for you. Those phones are already flashed and automatically updated with specific FirefoxOS builds that Mozilla staff selected for your testing. Please don’t replace those builds unless you’re actively developing for these phones and have a device set aside for that purpose.

My development host is a Mac (OSX 10.10) laptop already set up to build the Firefox for Macintosh product. It’s also set up to build the Firefox OS binaries for the Flame device.

Most of the development environment for the Flame is also used for the Aries device. In particular, the case-sensitive disk partition is required for both FirefoxOS and Android OS development. You’ll want this partition to be at least 100GB in size if you want to build both operating systems. Set this up before downloading FirefoxOS or Android source code to avoid ‘include file not found’ errors.

The next step to developing OS code for the Aries is to root the device. This will void your warranty, so tread carefully.

For most Gecko and Gaia developers, you’ll want to start from the base image for the Aries. The easiest way to flash your device with a known-good FirefoxOS build is to run flash.sh in the expanded aries.zip file from the official builds. You can then flash the phone with just Gecko or Gaia from your local source code.

The Aries binaries from a FirefoxOS build:

aries_firefoxos_images

The Aries binaries in an Android Lollipop build:

aries_android_images

If you want to build Android OS for the Aries, then read these docs from Sony, and these Mac-specific steps for building Android Lollipop. Note that the Android Lollipop SDK requires XCode 5.1.1 and Java 7 (JRE and JDK.) Both versions of XCode and Java are older than the latest versions available, so you’ll need to install the downgrades before building the Android OS.

When it comes time to configure your Android OS build via the lunch command, select aosp_d5803-userdebug as your device. Once the build is finished (after about 2 hours on my Mac), use these commands to flash your phone with the Android OS you just built:

fastboot flash boot out/target/product/aries/boot.img
fastboot flash system out/target/product/aries/system.img
fastboot flash userdata out/target/product/aries/userdata.img

Mozilla ThunderbirdThunderbird and end-to-end email encryption – should this be a priority?

In the last few weeks, I’ve had several interesting conversations concerning email encryption. I’m also trying to develop some concept of what areas Thunderbird should view as our special emphases as we look forward. The question is, with our limited resources, should we strive to make better support of end-to-end email encryption a vital Thunderbird priority? I’d appreciate comments on that question, either on this Thunderbird blog posting or the email list tb-planning@mozilla.org.

"I took an oath to defend the constitution, and I felt the Constitution was being violated on a massive scale" SnowdenIn one conversation, at the “Open Messaging Day” at OSCON 2015, I brought up the issue of whether, in a post-Snowden world, support for end-to-end encryption was important for emerging open messaging protocols such as JMAP. The overwhelming consensus was that this is a non-issue. “Anyone who can access your files using interception technology can more easily just grab your computer from your house. The loss of functionality in encryption (such as online search of your webmail, or loss of email content if certificates are lost) will give an unacceptable user experience to the vast majority of users” was the sense of the majority.

In a second conversation, I was having dinner with a friend who works as a lawyer for a state agency involved in white-collar crime prosecution. This friend also thought the whole Snowden/NSA/metadata thing had been blown out of proportion, but for a very different reason. Paraphrasing my friend’s comments, “Our agency has enormous powers to subpoena all kinds of records – bank statements, emails – and most organizations will silently hand them over to me without you ever knowing about it. We can always get metadata from email accounts and phones, e.g. e-mail addresses of people corresponded with, calls made, dates and times, etc. There is a lot that other government employees (non NSA) have access to just by asking for it, so some of the outrage about the NSA’s power and specifically the lack of judicial oversight is misplaced and out of proportion precisely because the public is mostly ignorant about the scope of what is already available to the government.”

So in summary, the problem is much bigger than the average person realizes, and other email vendors don’t care about it.

There are several projects out there trying to make encryption a more realistic option. In order to change internet communications to make end-to-end encryption ubiquitous, any protocol proposal needs wide adoption by key players in the email world, particularly by client apps (as opposed to webmail solutions where the encryption problem is virtually intractable.) As Thunderbird is currently the dominant multi-platform open-source email client, we are sometimes approached by people in the privacy movement to cooperate with them in making email encryption simple and ubiquitous. Most recently, I’ve had some interesting conversations with Volker Birk of Pretty Easy Privacy about working with them.

Should this be a focus for Thunderbird development?

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1195362] Quicksearch error pages (“foo is not a field” and friends) should still fill in search into quicksearch box
  • [1190476] set Comment field in GPG email to the URL of the bug
  • [1195645] don’t create a new session for every authenticated REST/BzAPI call
  • [1197084] No mail sent when bugs added to or removed from *-core-security groups
  • [1196614] restrict the ability for users with editusers/creategroups to alter admins and the admin group
  • [1196092] Switch logincookies primary key to auto_incremented id, make cookie a secondary UNIQUE key
  • [1197699] always store the ip address in the logincookies table
  • [1197696] group_members report doesn’t display nested inherited groups
  • [1196134] add ability for admins to force a user to change their password on next login
  • [1192687] add the ability for users to view and revoke existing sessions
  • [1195836] Remove install-module.pl from bmo
  • [1180733] “An invalid state parameter was passed to the GitHub OAuth2 callback” error when logging in with github

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Christian HeilmannRock, Meats, JavaScript – BrazilJS 2015

BrazilJS audience

I just got back from a 4 day round trip to Brazil to attend BrazilJS. I was humbled and very happy to give the opening keynote, seeing that the closing was meant to be by Brendan Eich and Andreas Gal – so, no pressure.

The keynote

In my keynote, I asked for more harmony in our community, and more ownership of the future of JavaScript by those who use it in production.

Keynote time

For quite a while now, I have been confused as to who we are serving as browser makers, standards writers and library creators. All of the excellent solutions we have seem to fall through the cracks somewhere when you see what goes live.

That’s why I wanted to remind the audience that whatever amazing, inspiring and clever thing they’ll hear about at the conference is theirs to take to fruition. We have too much frustration in our market, and too much trying to one-up one another instead of trying to solve problems and making the solutions easily and readily available. The slides are on Slideshare, and a video will become available soon.

About Brazil

There are a few things to remember when you are going to Brazil:

  • When people are excited about something, they are really excited about it. There’s a lot of passion.
  • Personal space is as rare as an affordable flat in central London – people will affectionately touch strangers and there is a lot of body language. If that’s not your thing, make it obvious!
  • You will eat your body weight in amazing meat and food is a social gathering, not just fuel. Thus, bring some time.
  • Everybody will apologise for their bad English before having a perfectly comprehensible conversation with you
  • People of all ages and backgrounds are into heavy music (rock, metal, hardcore…)

About the event

VR ride about the history of JavaScript

BrazilJS was a ridiculous attempt at creating the biggest JavaScript event with 1,300 people. And it was a 100% success at that. I am fascinated by the professionalism, the venue, the AV setup and all the things that were done for speakers and attendees alike. Here are just a few things that happened:

  • There was a very strong message about diversity and a sensible and enforced code of conduct. This should not be a surprise, but when you consider Brazilian culture and reputation (think Carnival) it takes pride and conviction in those matters to stand up for them the way the organisers did.
  • The AV setup was huge and worked fine. There were no glitches in the audio and every presentation was live translated from English to Brazilian Portuguese and vice versa. The translation crew did a great job and we as presenters should do more to support them. I will write a post soon about this.
  • Wireless was flaky, but available when you needed it. It is pretty unrealistic to expect a solid connection in a country where connectivity isn’t cheap, with over a thousand people each trying to connect two devices. As a presenter, I never rely on connectivity being available – neither should you.
  • There was always enough coffee, snacks and even a huge cake celebrating JavaScript (made by the mom of one of the organisers – the cake, not JavaScript)
  • The overall theme was geek – as geek as it can get. The organisers dressed up as Power Rangers, between talks we watched animated 90s TV series, there was a Virtual Reality ride covering the history of JavaScript built with Arduinos, and there were old-school arcade machines and consoles to play with.
  • It was a single track conference over two days with lots of high-class speakers and very interesting topics.
  • As a speaker, everything was organised for me. We all took a hired bus from and to the venue and we had lunch catered for us.
  • The conference also had a minority/diversity scholarship program where people who couldn’t afford to come got a sponsored ticket. These attendees weren’t singled out or put on display; they simply became part of the crowd. I was lucky enough to chat with a few of them and learned quite a few things.
  • The after party was a big “foot in mouth” moment for me, as I have long spoken out against having bands at such events. However, in Brazil, with a band covering lots of rock anthems, it very much worked. I never thought I’d see an inclusive, non-aggressive mosh pit and people stage diving at a JavaScript event – I was wrong.

Me, stage diving at the BrazilJS after party – photo by @orapouso

So, all I can say is thank you to everyone involved. This was a conference to remember and the enthusiasm of the people I met and talked to is a testament to how much this worked!

Personal/professional notes

BrazilJS was an interesting opportunity for me, as I wanted to connect with my Microsoft colleagues in the country. I was amazed by how well-organised our participation was and loved the enthusiasm people had for us. Even when one of our other speakers couldn’t show up, we simply ran an impromptu Q&A on stage about Edge. Instead of a sales booth we had technical evangelists at hand, who also helped with translation. Quite a few people came to the booth to fix their web sites for Microsoft Edge’s standards-compliant rendering. It’s fun to see when fixing things yields quick results.

Other short impressions:

  • I had no idea what a powerhouse my colleague Jonathan Sampson is on stage. His talk, in adventurous Portuguese, had the audience in stitches and I was amazed by the well-structured content. I will pester him to re-record this in English.
  • Ju Gonçalves (@cyberglot) gave a great, detailed talk about reduce(). If you are a conference organiser, check her out as a new Speaker() option – she is now based in Copenhagen.
  • It was fun to catch up with Laurie Voss after a few years (we worked at Yahoo together) and it was great of him to point to his LGBTQ Slack group, inviting people to learn more about that facet of diversity in our community.
  • It warmed me to see the Mozilla Brazil community still kicking butt. Warm, affectionate and knowledgeable people like the ones you could meet at the booth there are the reason why I became a Mozillian in the first place.

And that’s that

Organisers on stage

Thank you to everyone involved. Thank you to everybody asking me lots of technical questions and giving unfiltered feedback. Thank you for showing that a lot of geeks can also be very human and warm. Thank you for embracing someone who doesn’t speak your language. I met quite a few people I need to follow up with, and I even had a BBQ with the family of two of the attendees before heading to my plane back home. You rock!

Always bet on JavaScript cake

Hannah KaneVancouver Trip Summary

I spent Thursday and Friday of last week with my lovely colleagues in Vancouver. Some things to note:

  • The Vancouver office is awesome, especially the art (h/t David Ascher’s wife)
  • Thanks to Jennie and the rest of the YVR team for making me feel welcome around the lunch table!
  • Luke promised to play guitar but he never did :(

Here’s how the two days went down:

  • Sabrina and I started off by having a morning meeting with Michelle via Vidyo. This produced several clarifying insights, including the use of “portfolio” as the key metaphor for Clubs pages in the MLN Directory. This helped shape our conversations during the rest of my visit.
  • Sabrina and I then reviewed what we already know about our audience, our programs and offerings, and value adds for the user.
  • We then sketched out a model for an engagement funnel

  • Then we got to work on the MLN Directory model. We came up with streamlined sketches for the various content types, thinking in terms of mobile-first.
    • Member profile:
      • See field listing
      • Implied functionality: certain Leadership roles might be auto-applied (e.g. if the user owns an approved Club page, the system can apply the “Club Captain” role), while others might require an admin interface (e.g. Regional Coordinator, Hive Member). We’d like to allow for flexible Role names to accommodate local flavor (e.g. Hive Chicago has specific role names they give to members).
    • Club and Hive pages:
      • Club page field listing
      • Hive page field listing
      • A key insight was that we should treat each distinct entity differently. That is, Club pages and Hive pages might be quite different, and we don’t need to try to force them into the same treatment. We also recognized that our MVP can simply address these two specific types of groups, since this is where our programs are focused.
      • We decided that focusing on Reporting for Clubs would be the highest-value functionality, so we spec’ed out what that would look like (wireframes coming soon).
      • For Hive pages, we want to re-create the org listings and contact cards that the current Hive Directories have.
  • We also met with Laura de Reynal and David Ascher to hash out plans for the audience research project. More on that soon, but you can see our “most important questions” at the top of this pad.
  • The issue of badges came up a few times. First, because we found that the plan for “Club Captain” and “Regional Coordinator” badges felt a little redundant given the concept of “roles.” Second, because we saw an opportunity to incentivize and reward participation by providing levels of badges (more like an “achievements” model). Seems like our colleagues were thinking along the same lines.

All in all, it was a really productive couple of days. We’ll be getting wireframes and then mockups out to various stakeholders over the next heartbeat, along with hashing out the technical issues with our engineering team.

Feel free to share any comments and questions.


Jim ChenRecent Fennec platform changes

There has been a series of recent changes to the Fennec platform code (under widget/android). Most of the changes were refactoring in preparation for supporting multiple GeckoViews.

Currently, only one GeckoView is supported at a time in an Android app. This is the case for Fennec, where all tabs are shown within one GeckoView in the main activity. However, we'd like to eventually support having multiple GeckoViews at the same time, which would not only make GeckoView more usable and make more features possible, but also reduce a lot of technical debt that we have accumulated over the years.

The simplest way to support multiple GeckoViews is to open multiple nsWindows on the platform side, and associate each GeckoView with a new nsWindow. Right now, we open a new nsWindow in our command line handler (CLH) during startup, and never worry about having to open another window again. In fact, we quit Fennec by closing our only window. This assumption of having only one window will change for multiple GeckoView support.

Next, we needed a way of associating a Java GeckoView with a C++ nsWindow. For example, if a GeckoView sends a request to perform an operation, Gecko would need to know which nsWindow corresponds to that GeckoView. That means Java and the platform need to coordinate GeckoView and nsWindow creation somehow so that a match can be made.

Lastly, existing messaging systems would need to change. Over the years, GeckoAppShell has been the go-to place for platform-to-Java calls, and GeckoEvent has been the go-to for Java-to-platform calls. Over time, the two classes became a big mess of unrelated code stuffed together. Having multiple GeckoViews would make it even harder to maintain these two classes.

But there's hope! The recent refactoring introduced a new mechanism of implementing Java native methods using C++ class members 1). Using the new mechanism, calls on a Java object instance are automatically forwarded to calls on a C++ object instance, and everything in-between is auto-generated. This new mechanism provides a powerful tool to solve the problems mentioned above. Association between GeckoView and nsWindow is now a built-in part of the auto-generated code – a native call on a GeckoView instance can now be transparently forwarded to a call on an nsWindow instance, without writing extra code. In addition, events in GeckoEvent can now be implemented as native methods. For example, preference events can become native methods inside PrefHelper, and the goal is to eventually eliminate GeckoEvent altogether 2).
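
To make the idea concrete, here is a minimal, hand-written sketch of the kind of forwarding the auto-generated glue performs. This is not the actual Fennec code: the Java class, field and method names referenced below (ExampleView, nativePtr, open) are hypothetical stand-ins, and in Fennec the C++ peer is nsWindow and these stubs are generated rather than written by hand.

```cpp
#include <jni.h>
#include <cstdint>

// Hypothetical C++ peer of a Java view object; in Fennec this role is
// played by nsWindow, and the glue below is auto-generated.
class NativeView {
public:
    void Open(int32_t aWidth, int32_t aHeight) {
        // ... create the platform-side resources for this view ...
    }
};

// Read the C++ peer's pointer out of the Java instance. We assume the
// Java class stores it in a 'long' field named "nativePtr" that was set
// when the peer was created.
static NativeView* GetNativePeer(JNIEnv* aEnv, jobject aInstance) {
    jclass cls = aEnv->GetObjectClass(aInstance);
    jfieldID field = aEnv->GetFieldID(cls, "nativePtr", "J");
    return reinterpret_cast<NativeView*>(
        static_cast<uintptr_t>(aEnv->GetLongField(aInstance, field)));
}

// Stub bound to a hypothetical Java declaration like:
//   package org.example;
//   class ExampleView { private long nativePtr; native void open(int w, int h); }
// A native call on a particular ExampleView instance is forwarded to its
// matching C++ instance, which is what makes per-instance association work.
extern "C" JNIEXPORT void JNICALL
Java_org_example_ExampleView_open(JNIEnv* aEnv, jobject aInstance,
                                  jint aWidth, jint aHeight) {
    if (NativeView* view = GetNativePeer(aEnv, aInstance)) {
        view->Open(aWidth, aHeight);
    }
}
```

In the real refactoring, both the stubs and the pointer bookkeeping are generated from the Java class, so adding a new native method does not require writing this glue by hand.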

Effort is underway to move away from using the CLH to open nsWindows, which doesn't give an easy way to establish an association between a GeckoView and an nsWindow 3). Instead, nsWindow creation would move into a native method inside GeckoView that is called during GeckoView creation. As part of moving away from using the CLH, making a speculative connection was moved out of the CLH into its own native method inside GeckoThread 4). That also had the benefit of letting us make the speculative connection much earlier in the startup process.

This post provides some background on the on-going work in Fennec platform code. I plan to write another follow-up post that will include more of the technical details behind the new mechanism to implement native calls.

1) Bug 1178850 (Direct native Java method calls to C++ classes), bug 1186530 (Implement per-instance forwarding of native Java methods), bug 1187552 (Support direct ownership of C++ objects by Java objects), bug 1191083 (Add mechanism to handle native calls before Gecko is loaded), bug 1192043 (Add mechanism to proxy native calls to Gecko thread)
2) Bug 1188959 ([meta] Convert GeckoEvent to native methods)
3) Bug 1197957 (Let GeckoView control nsWindow creation)
4) Bug 1195496 (Start speculative connection earlier in startup)

Air MozillaChris Beard: Community Participation Guidelines

Chris Beard: Community Participation Guidelines Mozilla CEO Chris Beard talks about the Mozilla Project's Community Participation Guidelines in a recent Monday Project Meeting.

John O'Duinn“we are all remote” at Cultivate NYC

It’s official!!

I’ll be speaking about remoties at the O’Reilly Cultivate conference in NYC!

Cultivate logo

Cultivate is being held on 28-29 Sept 2015 in the Javits conference center in New York City. This is intentionally the same week, and same location, as the O’Reilly Strata+Hadoop World conference, so if you lead others in your organization and are coming to Strata anyway, you should come a couple of days early to focus on cultivate-ing (!) your leadership skills. For more background on O’Reilly’s series of Cultivate conferences, check out this great post by Mike Loukides. I attended the Cultivate Portland conference last month, when it was co-located with OSCON, and found it insightful, edge-of-my-seat stuff. I expect Cultivate NYC to be just as exciting.

Meanwhile, of course, I’m still writing like crazy on my book (and writing code when no-one is looking!), so I have to run. As always, if you work remotely, or are part of a distributed team, I’d love to hear what does/doesn’t work for you and any wishes you have for topics to include in the book – just let me know.

Hope to see you in NYC next month.

John.
=====

Cameron KaiserOkay, you want WebExtensions API suggestions? Here's three.

Not to bring out the "lurkers support me in E-mail" argument, but the public blog comments are rather different in opinion and tenor from the E-mail I got regarding our last post about my supreme concern and displeasure over the eventual end of XPCOM/XUL add-ons. I'm not sure why that should be, but never let it be said that MoFo leadership doesn't stick to their (foot)guns.

With that in mind let me extend, as an author of a niche add-on that I and a number of dedicated users employ regularly for legacy protocols, an attempt at an olive branch. Here's the tl;dr: I need a raw socket API, I need a protocol handler API, and I need some means of being able to dynamically write a document/data stream and hand it to the docshell. Are you willing?

When Mozilla decommissioned Gopher support in Firefox 4, the almost universal response was "this shouldn't be in core" and the follow-up was "if you want it, it should be an add-on, maintained by the community." So I did, and XPCOM let me do this. With OverbiteFF, Gopher menus (and, through an analogous method, whois and ph) are now first-class citizens in Firefox. You can type a Gopher URL and it "just works." You can bookmark them. You can interact with them. They appear no differently than any other web page. I created XPCOM components for a protocol object and a channel object, and because they're XPCOM-based they interact with the docshell just like every other native core component in Necko.
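
To make concrete what a protocol handler plus channel pair is responsible for, here is a tiny, self-contained C++ sketch. This is deliberately not Gecko/XPCOM code – all of the type and method names (ProtocolHandler, Channel, NewChannel, AsyncOpen) are hypothetical stand-ins – but it captures the division of labour the three requested APIs would need to expose: claim a scheme, turn a URL into a request, and stream generated document data to whatever renders it.

```cpp
#include <functional>
#include <iostream>
#include <string>

// A channel produces the data for one request and hands it to a sink.
// A real implementation would open a raw socket, speak the wire protocol
// (e.g. Gopher), and stream the response incrementally.
struct Channel {
    std::string url;
    void AsyncOpen(const std::function<void(const std::string&)>& aSink) {
        aSink("<html><body>Rendered menu for " + url + "</body></html>");
    }
};

// A protocol handler claims a URL scheme and creates channels for it.
struct ProtocolHandler {
    std::string scheme;  // e.g. "gopher"
    Channel NewChannel(const std::string& aUrl) { return Channel{aUrl}; }
};

int main() {
    ProtocolHandler gopher{"gopher"};
    Channel channel = gopher.NewChannel("gopher://example.org/");
    // Stand-in for handing the generated document to the docshell.
    channel.AsyncOpen([](const std::string& aData) {
        std::cout << aData << "\n";
    });
    return 0;
}
```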

More to the point, I didn't need anyone's permission to do it. I just created a component and loaded it, and it became as "native" as anything else in the browser. Now I need "permission." I need APIs to do what I could do all by myself beforehand.

What I worry about is that Mozilla leadership is going to tick off the top 10 add-ons or so as working and call it a day, leaving me and other niche authors with no way of getting ours to work. I don't think these three APIs are technically unrealistic or lack substantial global applicability; they're foundational for getting new types of protocol access into the browser, not just old legacy ones. You can innovate nearly anything network-based with these three proposals.

So how about it? I know you're reading. Are you going to make good on your promises to us little guys, or are we just screwed?

Robert KaiserEnding Development and Support for My Add-ons

This has been a long time coming, actually, and recent developments just put the final nail in the coffin.

I am ending all development and support for my "extension"-type add-ons effective immediately.

This affects (daily user numbers according to addons.mozilla.org):
If anyone is interested in taking over development and maintenance of any of those, please let me know and I'm happy to convert their repositories over to github for easier working with them, add the new developer to their administration on AMO, and/or move them over to you completely.

I will leave them listed on AMO for a little while so people who want to take over can take a look, but I will hide them from the site in the near future if nobody is interested.

The reasons for this step are multiple:

For one thing, I just don't have the time for updating their code or improving them. My job is stressful enough that my head is overflowing with Mozilla-related things all the time, and my employer is apparently not willing to give me any relief (in terms of hiring someone to supplement me), so I need to remove some Mozilla- and software-related things from my non-work time to gain back a little sanity and avoid burning out.

I am also really sad that apparently nobody finds the time or energy to make decent management and notification mechanisms available to UI code for the new-style web storage mechanisms like indexedDB, appCache, or ServiceWorker caching, while we do have quite nice APIs for long-standing things like cookies. To get Tahoe Data Manager (which was my most interesting add-on) to work decently, I would have needed decent APIs there as well.

Then, my interest in experimenting with code has moved more and more away from the browser, which keeps changing around me all the time, and towards actual web development, where existing code doesn't get broken all the time and your code is more isolated. As a bonus, I can develop things that run on my (Firefox OS) phone and that I can show other people when I'm out and about. And even there, I don't get as much time to dig into stuff as I would like to – see above.

And finally – and this is why it all culminates right now – I disagree with some pieces of Mozilla's current add-on strategy, and I don't want to be part of that as an add-on developer.
For one, I think add-on signing is a good idea in principle, but not enabling developers to test their code in any way in the same builds that users get is against everything I learned in terms of quality assurance. Then, requiring developers and other users of unbranded (or early pre-release) builds to turn off security for everything just to use or test one or two unsigned add-ons feels plainly wrong to me (and don't tell me it can't be done otherwise, as I know there are perfectly good ways to solve this that don't undermine signing and preserve more safety). I also fear that, while add-on signing brings a lot of pain to add-on developers and will make us lose some of them and their users, we will not reduce the malware/adware problem in the mid to long term, but rather make it worse, as those authors will resort to injecting binary DLLs into the Firefox process – which is the primary cause of startup crashes on updates – and I will have more grief in my actual job because of this, on top of Firefox losing users who see those crashes.
And on the deprecation of "the permissive add-on model", as they call it in the post: I think that the Firefox UI being written in web (CSS/JS/HTML) or web-like (XUL) technologies, and the ability to write add-ons that can use those to do anything in Firefox, including prototyping and inventing new functionality and UI paradigms, is the main thing that sets Firefox apart product-wise from all its competitors. If we take that away, there is no product reason for using Firefox over any other browser; the only reasons will be the philosophy behind Mozilla (which is what I'm signed up for anyhow) and the specific reflections of that philosophy in some internals of the browser, like respecting privacy and choice a little bit more than others do – but most people consider those details, and it's hard to win them over with those.
Don't get me wrong, I think that the WebExtensions API is a great idea (and it would be awesome to standardize some bits of it across browsers), and add-ons being sandboxed by default is long overdue. But we would also need to require less signing and review for add-ons that are confined to the safe APIs provided there, and I think we'd still – with heavy review, signing, and whatnot – need to allow people to go fully into the guts of Firefox, with full permissions, to provide the basis for the really ground-breaking add-ons that set us apart from the rest of the world. Even though almost all of the code of my add-ons ran within their own browser tab, they required a good reach into high-permission areas, which the new WebExtensions API probably will not allow in that way. But I don't even have the time to investigate how I could adapt my add-ons to any of this, so I decided it was better to pull the plug right now.

So, all in all, I probably have waited too long with this anyhow, mostly because I really like Tahoe Data Manager, but I just can't go on pretending that I will still develop or even maintain those add-ons.

Again, if anyone is interested in taking over, either fully or with a few patches here and there, please contact me and I'll help to make it happen.

(Note that this does not affect my language packs, dictionaries, or themes at this point, I'm continuing to maintain and develop them, at least for now.)

QMOFirefox 41 Beta 3 Testday Results

Hello Mozillians!

As you may already know, last Friday – August 21st – we held a new Testday event for Firefox 41 Beta 3.

Results:

We’d like to take this opportunity to thank alex_mayorga, Bolaram Paul, Chandrakant Dhutadmal, Luna Jernberg, Moin Shaikh, gaby 2300 and the Bangladesh QA Community: Hossain Al Ikram, Rezaul Huque Nayeem, Nazir Ahmed Sabbir, Forhad Hossain, Md. Rahimul Islam, Sajib Raihan Russell, Rakibul Islam Ratul, Saheda Reza Antora, Sunny, Mohammad Maruf Islam for getting involved in this event and making Firefox the best it can be.

Also a big thank you goes to all our active moderators.

Keep an eye on QMO for upcoming events!