Michael Kohler: Alpha Review – Using Janitor to contribute to Firefox

At the Firefox Hackathon in Zurich we used The Janitor to contribute to Firefox. It’s important to note that it’s still in alpha and invite-only.


The Janitor was started by Jan Keromnes, a Mozilla employee. Even though it's still in an alpha state, Jan gave us access to it so we could test-run it at our hackathon. Many thanks to him for spending his Saturday on IRC and helping us out with everything!

Once you're signed up, you can click on "Open in Cloud9" and go directly to the Cloud9 editor (Cloud9 kindly sponsors the premium accounts for this project). Cloud9 is a pure-web IDE based on real Linux environments, with an insanely fast editor.


At the hackathon we ran into a Cloud9 “create workspace” limitation, but according to Jan this should be fixed now.

Setting up

After an initial "git pull origin master" in the Cloud9 editor terminal, you can start to build Firefox right there. Simply running "./mach build" is enough. For me this took about 12 minutes the first time, while my laptop still needs more than 50 minutes to compile Firefox. This is definitely an improvement. Furthermore, you won't need anything other than a browser!

I had my environment ready in about 15 minutes, including the time to compile Firefox. Compared to my previous setups, this solves a lot of dependency-hell problems and is also way faster.

Running the newly compiled Firefox

The Janitor includes a VNC viewer that opens in a new tab, where you can run your compiled Firefox. Start a shell, run "./mach run" in the Firefox directory, and you can start testing your changes.


Running ESLint

For some of the bugs we tackled at the hackathon we needed to run ESLint (it's a good idea to run it anyway, no matter which part of the code base you're changing). The command looks like this:

user@e49de5f6914e:~/firefox$ ./mach eslint --no-ignore devtools/client/webconsole/test/browser_webconsole_live_filtering_of_message_types.js
0:00.40 Running /usr/local/bin/eslint
0:00.40 /usr/local/bin/eslint --plugin html --ext [.js,.jsm,.jsx,.xml,.html] --no-ignore devtools/client/webconsole/test/browser_webconsole_live_filtering_of_message_types.js

/home/user/firefox/devtools/client/webconsole/test/browser_webconsole_live_filtering_of_message_types.js
8:1   warning  Could not load globals from file browser/base/content/browser-eme.js: Error: ENOENT: no such file or directory, open '/home/user/firefox/browser/base/content/browser-eme.js'  mozilla/import-browserjs-globals
8:1   warning  Definition for rule 'mozilla/import-globals' was not found                                                                                                                     mozilla/import-globals
8:1   error    Definition for rule 'keyword-spacing' was not found                                                                                                                            keyword-spacing
18:17  error    content is a possible Cross Process Object Wrapper (CPOW)                                                                                                                      mozilla/no-cpows-in-tests

✖ 4 problems (2 errors, 2 warnings)

0:02.85 Finished eslint. Errors encountered.

As you might see from the output, running this in the Janitor environment results in the Mozilla-specific rules not being found. The reason is that the eslint npm package is installed globally, and a globally installed eslint can't find the locally installed mozilla-eslint-plugin. In my opinion the easiest fix would be to not install it globally but only within the firefox directory; running "./mach eslint --setup" while spinning up the instance should be enough here.

We could circumvent this problem by changing the global npm prefix and then running it as "/new/path/eslint …" so it doesn't call the other one. In hindsight, we could just have installed it into the directory and then called it through node_modules.

Update, May 5, 15:09: Jan has fixed this plugin issue :)

Creating a patch

Creating a patch is really easy; following the tutorial on MDN is enough. We were very happy to see that the moz-git-tools are already installed by default, so you can just create your own branch, check in your changes and run "git format-patch -p -k master" to get a Git patch file. Since we need a Mercurial patch, you then run "git-patch-to-hg-patch", upload the resulting file to Bugzilla, and you're set!

Those two commands could maybe be aliased by default, so that running "create-patch" or similar would do all of this for you and further decrease the manual work.

Seeing it in action

Conclusion

After some initial account problems, we didn't really find any other bugs apart from the ESLint situation. Again, thanks a lot to Jan for providing us the environment and letting us test it. This will change the lives of a lot of contributors! For now The Janitor supports contributions to Firefox, Chrome, Thunderbird, Servo and KDE. There is also a GitHub repository for it.

Michael Kohler: Firefox Hackathon Zurich April 2016

Last Saturday we held a Firefox Hackathon in Zurich, Switzerland. 12 people joined us.

Introduction

First, I gave an introduction to Firefox and presented the agenda of the hackathon.

Dev Tools Talk

After my talk we heard an amazing presentation from Daniele, who came from Italy to attend this hackathon. He talked about the Dev Tools and gave a nice introduction to their new features!

Hackathon

Before the hackathon we created a list of "good first bugs" that we could work on. This was a great thing to do, since we could give the list to the attendees and they could pick a bug to work on. Setting up the environment to hack on was pretty easy. We used "The Janitor" to hack on Firefox; I'll write a second blog post introducing you to this amazing tool! We ran into a few problems with it, but in the end we could all hack on Firefox!

We worked on about 13 different bugs, and we finished 10 patches! This is a great achievement; we probably couldn't have done that if we had needed more time to set up a traditional Firefox environment. Here's the full list:

Thanks to everybody who contributed, great work! Also a big thanks to Julian Descolette, a Dev Tools employee from Switzerland who supported us as a really good mentor. Without him we probably couldn’t have fixed some of the bugs in that time!

Feedback

At the end of the hackathon we did a round of feedback. In general the feedback was quite positive, though we have some things to improve for next time.

40% of the attendees had their first interaction with our community at this hackathon! And guess what: 100% of the attendees who filled out the survey would join another hackathon within 6 months.

For the next hackathon, we might want to have a talk about the Firefox architecture in general to give some context on the different modules. We will probably also have a fully working Janitor by then (meaning no longer in alpha status), which will help even more.

Lessons learned

  • Janitor will be great for hackathons (though still alpha, so keep an eye on it)
  • The mix of a talk followed by directly starting to hack works out well
  • The participants are happy if they can create a patch within a few minutes to learn the process (creating a patch, Bugzilla, review, etc.), and I think they are more motivated for future patches

All in all I think this was a great success. Janitor will make every contributor’s life way easier, keep it going! You can find the full album on Flickr (thanks to Daniele for the great pictures!).

Mozilla Addons Blog: How an Add-on Played Hero During an Industrial Dilemma

noitA few months ago Noit Saab’s boss at a nanotech firm came to him with a desperate situation. They had just discovered nearly 900 industrial containers held corrupted semiconductor wafers.

This was problematic for a number of reasons. These containers were scattered across various stages of production, and Noit had to figure out precisely where each container was in the process. If not, certain departments would be wrongly penalized for this very expensive mishap.

It was as much an accounting mess as it was a product catastrophe. To top it off, Noit had three days to sort it all out. In 72 hours the fiscal quarter would end, and well, you know how finance departments and quarterly books go.

Fortunately for Noit—and probably a lot of very nervous production managers—he used a nifty little add-on called iMacros to help with all his web-based automation and sorting tasks. “Without iMacros this would have been impossible,” says Noit. “With the automation, I ran it overnight and the next morning it was all done.”

Nice work, Noit and iMacros! The day—and perhaps a few jobs—were saved.

“I use add-ons daily for everything I do,” says Noit. “I couldn’t live without them.” In addition to authoring a few add-ons himself, like NativeShot (screenshot add-on with an intriguing UI twist), MouseControl (really nice suite of mouse gestures), MailtoWebmails (tool for customizing the default actions of a “mailto:” link), and Profilist (a way to manage multiple profiles that use the same computer, though still in beta), here are some of his favorites…

“I use Telegram for all my chatting,” says Noit. “I’m not a big mobile fan so it’s great to see a desktop service for this.”

Media Keys, because “I always have music playing from a YouTube list, and sometimes I need to pause it, so rather than searching for the right tab, I use a global hotkey.”

“And of course, AdBlock Plus,” concludes Noit.

If you, dear friends, use add-ons in interesting ways and want to share your experience, please email us at editor@mozilla.com with “my story” in the subject line.

Air Mozilla: The Joy of Coding - Episode 56

mconley livehacks on real Firefox bugs while thinking aloud.

Air Mozilla: Weekly SUMO Community Meeting May 4, 2016

This is the SUMO weekly call. We meet as a community every Wednesday, 17:00 - 17:30 UTC. The etherpad is here: https://public.etherpad-mozilla.org/p/sumo-2016-04-27

Robert Kaiser: Projects Done, Looking For New Ones

I haven't been blogging much recently, but it's time to change that - like multiple things in my life that are changing right now.

I'll start with the most important piece first: My contract with Mozilla is ending in a week.

I had been accumulating frustration with the parts of my role that were rooted in somewhat tedious routine, like the whack-a-mole on crash spikes, which was not very rewarding and never really gave me time to breathe, while I was overworking myself trying to get the needed success experiences in things like building dashboards and digging into data (which I really liked).
Being very passionate about Mozilla's Mission and Manifesto and identifying with the goals of my role, I could paper over this frustration and fatigue for years, but it kept building up in the background until it started impairing my strongest skill: communication with other people.

So, we had to call an end to this particular project - a role like this is never "finished", but it's also far from "failed" as I accomplished quite a bit over those 5 years, in various variants of the role.

After some cooldown and getting this out of my system, I'm happy to take on a new role of project management, possibly combined with some data analysis, somewhere, hopefully in an innovative area that aligns with my interests and possibly my passion for people being in control of their own lives.

As for Mozilla, no matter if an opportunity for work comes up there, I will surely stay around in the community, as I was before - after all, I still believe in the project and our mission and expect to continue to do so.

In other project management news, I just successfully finished the project of taking over my new condo and moving in within a week. It took quite some coordination and planning beforehand, being prepared for last-minute changes, communicating well with all the different people involved and making informed but swift decisions at times - and it worked out perfectly. Sure, to put it into IT terms, there are still a few "bugs" left (some already fixed) and there's still a lot of follow-up work to do (need more furniture etc.) but the project "shipped" on time.

I'm looking forward to doing the same for future work projects, wherever they will manifest.

David Lawrence: Happy BMO Push Day!

The following changes have been pushed to bugzilla.mozilla.org:

  • [1269795] [BMO] ImageMagick Is On Fire  (CVE-2016-3714)

discuss these changes on mozilla.tools.bmo.


Yunier José Sosa Vázquez: Sean White joins Mozilla as Vice President of Technology Strategy

Sean White, the founder and CEO of BrightSky Labs, joined Mozilla in the middle of last month as the new Vice President of Technology Strategy, as Chris Beard reports in an article published on the Mozilla blog. In this role, Sean will guide strategic projects at the organization and, as an initial focus, will work on new emerging technologies in the areas of Virtual and Augmented Reality (VR & AR) and Connected Devices.

Sean White, photo taken from blog.mozilla.org

Sean White is a high-tech executive, entrepreneur, inventor and musician who has spent his career leading the development of innovative experiences, systems and technologies that enable creative expression, connect us to one another, and improve our understanding of the world around us. He was most recently the founder and CEO of BrightSky Labs, and he currently teaches mixed and augmented reality at Stanford University.

In addition, White holds a bachelor's degree in Computer Science and a master's degree in the same field, both obtained at Stanford University, as well as a master's degree in Mechanical Engineering (Columbia University) and a PhD in Computer Science, the latter obtained in 2009 at Columbia University.

Welcome to Mozilla, Sean White!

Wladimir Palant: Underestimated issue: Hashing passwords without salts

My Easy Passwords extension is quickly climbing up in popularity; right now it already ranks 9th in my list of password generators (yay!). In other words, it already has 80 users (well, that was anticlimactic). At least, looking at this list I realized that I missed one threat scenario in my security analysis of these extensions, and that I probably rated UniquePasswordBuilder too high.

The problem is that somebody could get hold of a significant number of passwords, either because they are hosting a popular service or because a popular service leaked all its passwords. Of course, they don't know which of the passwords have been generated with a password generator. However, they don't need to: they just take a list of the most popular passwords. Then they try using each password as a master password and derive a site-specific password for the service in question. Look it up in the database: has this password ever been used? If there is a match — great, now they know who is using a password generator and what their master password is.

This approach is easiest with password generators using a weak hashing algorithm like MD5 or SHA1: lots of hashes can be calculated quickly, and within a month pretty much every password will be cracked. However, even with UniquePasswordBuilder, which uses a comparably strong hash, this approach saves the attacker lots of time. The attacker doesn't need to bruteforce each password individually, they can rather crack all of them in parallel. Somebody is bound to use a weak master password, and they don't even need to know in advance who that is.

How does one protect against this attack? Easy: the generated password shouldn't depend on the master password and website only, there should also be a user-specific salt parameter. This makes sure that, even if the attacker can guess the salt, they have to re-run the calculation for each single user — simply because the same master password will result in different generated passwords for different users. Luckily, UniquePasswordBuilder is the only extension that I gave a good security rating despite missing salts. Easy Passwords and CCTOO have user-defined salts, and hash0 even generates truly random salts.
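To make the idea concrete, here is a minimal sketch (not the actual algorithm of Easy Passwords or any other extension; function names, parameters and encoding are invented for illustration) of deriving a site-specific password from a master password, a site name and a user-specific salt using the standard Web Crypto PBKDF2 primitive:

// Illustrative sketch only: derive a site password from a master password,
// a site name and a per-user salt via PBKDF2 (Web Crypto).
async function derivePassword(masterPassword, site, userSalt) {
  const encoder = new TextEncoder();
  const keyMaterial = await crypto.subtle.importKey(
    "raw", encoder.encode(masterPassword), "PBKDF2", false, ["deriveBits"]);
  const bits = await crypto.subtle.deriveBits(
    {
      name: "PBKDF2",
      hash: "SHA-256",
      // The per-user salt is mixed in here; without it, identical master
      // passwords produce identical site passwords for every user.
      salt: encoder.encode(userSalt + "\0" + site),
      iterations: 100000
    },
    keyMaterial, 128);
  // Encode the derived bits as a printable password.
  return btoa(String.fromCharCode(...new Uint8Array(bits)));
}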

David Burns: GeckoDriver (Marionette) Release v0.7.1

I have just released a new version of Marionette, or rather of the executable that you need to download.

The main fix in this release is the ability to send over custom profiles that will be used. To use a custom profile you will need to have the marionette:true capability and pass in a profile when you instantiate your FirefoxDriver.
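For illustration, a rough sketch of what that could look like with the JavaScript selenium-webdriver bindings (the profile path is a placeholder, and exact option names may vary between binding versions):

const webdriver = require("selenium-webdriver");
const firefox = require("selenium-webdriver/firefox");

// Ask for Marionette explicitly and hand over a custom profile.
const capabilities = webdriver.Capabilities.firefox().set("marionette", true);
const options = new firefox.Options().setProfile("/path/to/custom/profile");

const driver = new webdriver.Builder()
  .forBrowser("firefox")
  .withCapabilities(capabilities)
  .setFirefoxOptions(options)
  .build();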

We have also fixed a number of minor issues like IPv6 support and compiler warnings.

We have also moved the repository where our executable is developed to live under the Mozilla Organization. It is now called GeckoDriver. We will be updating the naming in Selenium and the documentation over the next few weeks.

Since you are awesome early adopters, it would be great if you could raise bugs.

I am not expecting everything to work, but below is a quick list of things that I know don't work.

  • No support for self-signed certificates
  • No support for actions
  • No support for the logging endpoint
  • I am sure there are other things we don't remember

Switching frames needs to be done with either a WebElement or an index. Windows can only be switched by window handle.

If in doubt, raise bugs!

Thanks for being an early adopter and thanks for raising bugs as you find them!

Niko Matsakis: Non-lexical lifetimes based on liveness

In my previous post I outlined several cases that we would like to improve with Rust’s current borrow checker. This post discusses one possible scheme for solving those. The heart of the post is two key ideas:

  1. Define a lifetime as a set of points in the control-flow graph, where a point here refers to some particular statement in the control-flow graph (i.e., not a basic block, but some statement within a basic block).
  2. Use liveness as the basis for deciding where a variable’s type must be valid.

The rest of this post expounds on these two ideas, and shows how they affect the various examples from the previous post.

Problem case #1: references assigned into a variable

To see better what these two ideas mean – and why we need both of them – let’s look at the initial example from my previous post. Here we are storing a reference to &mut data[..] into the variable slice:

fn bar() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..]; // <-+ lifetime today
    capitalize(slice);         //   |
    data.push('d'); // ERROR!  //   |
    data.push('e'); // ERROR!  //   |
    data.push('f'); // ERROR!  //   |
} // <------------------------------+

As shown, the lifetime of this reference today winds up being the subset of the block that starts at the let and stretches until the ending }. This results in compilation errors when we attempt to push to data. The reason is that a borrow like &mut data[..] effectively locks the data[..] for the lifetime of the borrow, meaning that data becomes off limits and can’t be used (this locking is just a metaphor for the type system rules; there is of course nothing happening at runtime).

What we would like is to observe that slice is dead – which is compiler-speak for "it won't ever be used again" – after the call to capitalize. Therefore, if we had a more flexible lifetime system, we might compute the lifetime of the slice reference as something that ends right after the call to capitalize, like so:

fn bar() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..]; // <-+ lifetime under this proposal
    capitalize(slice);         //   |
    // <----------------------------+
    data.push('d'); // OK
    data.push('e'); // OK
    data.push('f'); // OK
}

If we had this shorter lifetime, then the calls to data.push would be legal, since the lock is effectively released early.

At first it might seem like all we have to do to achieve this result is to adjust the definition of what a lifetime can be to make it more flexible. In particular, today, once a lifetime must extend beyond the boundaries of a single statement (e.g., beyond the let statement here), it must extend all the way till the end of the enclosing block. So, by adopting a definition of lifetimes that is just a set of points in the control-flow graph, we lift this constraint, and we can now express the idea of a lifetime that starts at the &mut data[..] borrow and ends after the call to capitalize, which we couldn’t even express before.

But it turns out that is not quite enough. There is another rule in the type system today that causes us a problem. This rule states that the type of a variable must outlive the variable’s scope. In other words, if a variable contains a reference, that reference must be valid for the entire scope of the variable. So, in our example above, the reference created by the &mut data[..] borrow winds up being stored in the variable slice. This means that the lifetime of that reference must include the scope of slice – which stretches from the let until the closing }. In other words, even if we adopt more flexible lifetimes, if we change nothing else, we wind up with the same lifetime as before.

You might think we could just remove the rule altogether, and say that the lifetime of a reference must include all the points where the reference is used, with no special treatment for references stored into variables. In this particular example we've been looking at, that would do the right thing: the lifetime of slice would only have to outlive the call to capitalize. But it starts to go wrong if the control flow gets more complicated:

fn baz() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..]; // <-+ lifetime if we ignored
    loop {                     //   | variables altogether
        capitalize(slice);     //   |
        // <------------------------+
        data.push('d'); // Should be error, but would not be.
    }
    data.push('e'); // OK
    data.push('f'); // OK
}

Here again the reference slice would still only be required to live until after the call to capitalize, since that is the only place it is used. However, in this variation, that is not the correct behavior: the reference slice is in fact still live after the call to capitalize, since it will be used again in the next iteration of the loop. The problem here is that we are exiting the lifetime (after the call to capitalize) and then re-entering it (on the loop backedge) but without reinitializing slice.

One way to address this problem would be to modify the definition of a lifetime. The definition I gave earlier was very flexible and allowed any set of points in the control-flow to be included. Perhaps we want some special rules around backedges? This is the approach that RFC 396 took, for example. I initially explored this approach but found that it caused problems with more advanced cases, such as a variation on problem case 3 we will examine in a later post.

Instead, I have opted to weaken – but not entirely remove – the original rule. The original rule was something like this (expressed as an inference rule):

scope(x) = 's
T: 's
------------------
let x: T OK

In other words, it's ok to declare a variable x with type T, as long as T outlives the scope 's of that variable. My new version is more like this:

live-range(x) = 's
T: 's
------------------
let x: T OK

Here I have substituted live-range for scope. By live-range I mean, effectively, the set of points in the CFG where `x` may later be used. If we apply this to our two variations, we will see that, in the first example, the variable slice is dead after the call to capitalize: it will never be used again. But in the second variation, the one with a loop, slice is live, because it may be used in the next iteration. This accounts for the different behavior:

// Variation #1: `slice` is dead after call to capitalize,
// so the lifetime ends
fn bar() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..]; // <-+ lifetime under this proposal
    capitalize(slice);         //   |
    // <----------------------------+
    data.push('d'); // OK
    data.push('e'); // OK
    data.push('f'); // OK
}

// Variation #2: `slice` is live after call to capitalize,
// so the lifetime encloses the entire loop.
fn baz() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..]; // <---------------------------+
    loop {                                               //   |
        capitalize(slice);                               //   |
        data.push('d'); // ERROR!                        //   |
    }                                                    //   |
    // <------------------------------------------------------+

    // But note that `slice` is dead here, so the lifetime ends:
    data.push('e'); // OK
    data.push('f'); // OK
}

Refining the proposal using fragments

One problem with the analysis as I presented it thus far is that it is based on liveness of individual variables. This implies that we lose precision when references are moved into structs or tuples. So, for example, while this bit of code will type-check:

let mut data1 = vec![];
let mut data2 = vec![];
let x = &mut data1[..]; // <--+ data1 is "locked" here
let y = &mut data2[..]; // <----+ data2 is "locked" here
use(x);                 //    | |
// <--------------------------+ |
data1.push(1);          //      |
use(y);                 //      |
// <----------------------------+
data2.push(1);

It would cause errors if we move those two references into a tuple:

let mut data1 = vec![];
let mut data2 = vec![];
let tuple = (&mut data1[..], &mut data2[..]); // <--+ data1 and data2
use(tuple.0);                                 //    | are locked here
data1.push(1);                                //    |
use(tuple.1);                                 //    |
// <------------------------------------------------+
data2.push(1);

This is because the variable tuple is live until after the last field access. However, the dynamic drop analysis is already computing a set of fragments, which are basically minimal paths that it needs to retain full resolution around which subparts of a struct or tuple have been moved. We could probably use similar logic to determine that we ought to compute the liveness of tuple.0 and tuple.1 independently, which would make this example type-check. (If we did so, then any use of tuple would be considered a gen of both tuple.0 and tuple.1, and any write to tuple would be considered a kill of both.) This would probably subsume and be compatible with the fragment logic used for dynamic drop, so it could be a net simplification.

Destructors

One further wrinkle that I did not discuss is that any struct with a destructor encounters special rules. This is because the destructor may access the references in the struct. These rules were specified in RFC 1238 but are colloquially called dropck. They basically state that when we create some variable x whose type T has a destructor, then T must outlive the parent scope of x. That is, the references in x don’t have to just be valid for the scope of x, they have to be valid for longer than the scope of x.

In some sense, the dropck rules remain unchanged by all I've discussed here. But in another sense dropck may stop being a special case. The reason is that, in MIR, all drops are made explicit in the control-flow graph, and hence if a variable x has a destructor, that should show up as just another use of x, and thus cause the lifetime of any references within to be naturally extended to cover that destructor. I admit I haven't had time to dig into a lot of examples here: destructors are historically a very subtle case.

Implementation ramifications

Those of you familiar with the compiler will realize that there is a bit of a chicken-and-egg problem with what I have presented here. Today, the compiler computes the lifetimes of all references in the typeck pass, which is basically the main type-checking pass that computes the types of all expressions. We then use the output of this pass to construct MIR. But in this proposal I am defining lifetimes as a set of points in the MIR control-flow-graph. What gives?

To make this work, we have to change how the compiler works internally. The rough idea is that the typeck pass will no longer concern itself with regions: it will erase all regions, just as trans does. This has a number of ancillary benefits, though it also carries a few complications we have to resolve (maybe a good topic for another blog post!). We’ll then build MIR from this, and hence the initially constructed MIR will also have no lifetime information (just erased lifetimes).

Then, looking at each function in the program in turn, we’ll do a safety analysis. We’ll start by computing lifetimes – at this point, we have the MIR CFG in hand, so we can easily base them on the CFG. We’ll then run the borrowck. When we are done, we can just forget about the lifetimes entirely, since all later passes are just doing optimization and code generation, and they don’t care about lifetimes.

Another interesting question is how to represent lifetimes in the compiler. The most obvious representation is just to use a bit-set, but since these lifetimes would require one bit for every statement within a function, they could grow quite big. There are a number of ways we could optimize the representation: for example, we could track the mutual dominator, even promoting it upwards to the innermost enclosing loop, and only store bits for that subportion of the graph. This would require fewer bits but it’d be a lot more accounting. I’m sure there are other far more clever options as well. The first step I think would be to gather some statistics about the size of functions, the number of inference variables per fn, and so forth.

In any case, a key observation is that, since we only need to store lifetimes for one function at a time, and only until the end of borrowck, the precise size is not nearly as important as it would be today.

Conclusion

Here I presented the key ideas of my current thoughts around non-lexical lifetimes: using flexible lifetimes coupled with liveness. I motivated this by examining problem case #1 from my introduction. I also covered some of the implementation complications. In future posts, I plan to examine problem cases #2 and #3 – and in particular to describe how to extend the system to cover named lifetime parameters, which I’ve completely ignored here. (Spoiler alert: problem cases #2 and #3 are also no longer problems under this system.)

I also do want to emphasize that this plan is a work-in-progress. Part of my hope in posting it is that people will point out flaws or opportunities for improvement. So I wouldn’t be surprised if the final system we wind up with winds up looking quite different.

(As is my wont lately, I am disabling comments on this post. If you’d like to discuss the ideas in here, please do so in this internals thread instead.)

Daniel Stenberg: A book status update

— How’s Daniel’s curl book going?

I can hear absolutely nobody asking. I’ll just go ahead and tell you anyway since I had a plan to get a first version “done” by “the summer” (of 2016). I’m not sure I believe in that time frame anymore.

I'm now north of 40,000 words with a bunch of new chapters and sections added recently and I'm now generating an index that looks okay. The PDF version is exactly 200 pages now.

The index part is mainly interesting since the platform I use to write the book on, gitbook.com, doesn't offer any index functionality of its own, so I had to hack one up and add it. That's just one additional beauty of having the book made entirely in markdown.

Based on what I've written so far and what I know is still outstanding, I am about 70% done, indicating there are about 17,000 words left for me at this particular point in time. The word count tends to grow over time, though: the more I write (while the completion level stays sort of stuck), the more I think of new sections that I should add and haven't yet written…

On this page you can get the latest book stats, right off the git repo.

Daniel Stenberg: No more heartbleeds please

As a reaction to the whole Heartbleed thing two years ago, The Linux Foundation started its Core Infrastructure Initiative (CII for short) with the intention to help track down well-used but still poorly maintained projects, or at least detect which projects might need help. Where the next Heartbleed might occur.

A bunch of companies putting in money to improve projects that need help. Sounds almost like a fairy tale to me!

Census

In order to identify which projects to help, they run their Census Project: “The Census represents CII’s current view of the open source ecosystem and which projects are at risk.”

The Census automatically extracts a lot of different meta data about open source projects in order to deduce a “Risk Index” for each project. Once you’ve assembled such a great data trove for a busload of projects, you can sort them all based on that risk index number and then you basically end up with a list of projects in a priority order that you can go through and throw code at. Or however they deem the help should be offered.

Which projects will fail?

The old blog post How you know your Free or Open Source Software Project is doomed to FAIL provides such a way, but it isn’t that easy to follow programmatically. The foundation has its own 88 page white paper detailing its methods and algorithm.

Risk Index

  • A project without a web site gets a point
  • If the project has had four or more CVEs (publicly disclosed security vulnerabilities) since 2010, it receives 3 points, and if fewer than four there's a diminishing scale.
  • The number of contributors over the last 12 months is a rather heavy factor, which could make the index grow stale fairly quickly. 3 contributors still gives 4 points.
  • Popular packages based on Debian's popcon get points.
  • If the project's main language is C or C++, it gets two points.
  • Network "exposed" projects get points.
  • Some additional details count as well, like dependencies and how many outstanding patches exist that haven't been accepted upstream.

All combined, this grades projects’ “risk” between 0 and 15.
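As a rough illustration only (the real algorithm is specified in the white paper; the field names and the exact weighting below are invented), the combination could be approximated like this:

// Toy approximation of a "Risk Index"; weights loosely mirror the bullet
// list above, field names are made up for this sketch.
function riskIndex(project) {
  let score = 0;
  if (!project.website) score += 1;                        // no web site
  if (project.cvesSince2010 >= 4) score += 3;              // 4+ CVEs: 3 points
  else if (project.cvesSince2010 > 0) score += 1;          // fewer: diminishing scale (simplified)
  if (project.contributorsLast12Months <= 3) score += 4;   // few recent contributors weigh heavily
  score += project.popconPoints || 0;                      // Debian popcon popularity
  if (project.mainLanguage === "C" || project.mainLanguage === "C++") score += 2;
  if (project.networkExposed) score += 2;                  // network "exposed"
  return Math.min(score, 15);                              // grades between 0 and 15
}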

Not high enough resolution

Assuming that a larger number of CVEs means anything bad is just wrong. Even the most careful and active projects can potentially have large amounts of CVEs. It means they disclose what they find and that people are actually reviewing the code, finding problems and reporting them. All good things.

Sure, security problems are not good but the absence of CVEs in a project doesn’t say that the project is one bit more secure. It could just mean that nobody ever looked closely enough or that the project doesn’t deal with responsible disclosure of the problems.

When I look through the projects they have right now, I get the feeling the resolution (0-15) is too low and that they've shied away from more aggressively handing out penalties based on factors we all recognize in abandoned/dead projects (some of which are decently specified in Tom Calloway's blog post mentioned above).

The result is that projects get a score that is mostly based on what kind of project they are.

But this said, they have several improvements to their algorithm already suggested in their issue tracker. I firmly believe this will improve over time.

The riskiest?

The top three projects, the only ones that score 13 right now, are expat, procmail and unzip. All of them are really small projects (source-code wise) that have been around for a very long time.

curl, being the project I of course look out for, scores a 9: many CVEs (3), written in C (2), network exposure (2), 5+ apps depend on it (2). Seriously, based on these factors, how would you say the project is situated?

In the sorted list of a little over 400 projects, curl is rated #73 (at the time of this writing at least), just after reportbug but before libattr1. [curl summary – which mentions a very old curl release]

But the list of projects mysteriously lacks many projects. For example, I could find neither c-ares nor libssh2. They may not be super big, but they're used by a bunch of smaller and bigger projects, including curl itself.

The full list of projects, their meta-data and scores are hosted in their repository on github.

Benefits for projects near me

I can see how projects in my own backyard have gotten some good out of this effort.

I’ve received some really great bug reports and gotten handed security problems in curl by an individual who did his digging funded by this project.

I’ve seen how the foundation sponsored a test suite for c-ares since the project lacked one. Now it doesn’t anymore!

Badges!

In addition to that, the Linux Foundation has also just launched the CII Best Practices Badge Program, which lets open source projects fill in a bunch of questions and, if they meet enough requirements, get a "badge" to boast to the world as a "well run project" that meets current open source project best practices.

I've joined their mailing list and provided some of my thoughts on the current set of questions, as I consider a few of them to be, well, let's call them "less than optimal". But then again, which project doesn't have bugs? We can fix them!

curl is just now marked as "100% compliant" with all the best practices listed. I hope to be able to keep it that way even as more best practices are added in the future.

Allen Wirfs-Brock: How to Invent the Future

Alan Kay famously said “The best way to predict the future is to invent it.” But how do we go about inventing a future that isn’t a simple linear extrapolation of the present?

Kay and his colleagues at Xerox PARC did exactly that over the course of the 1970s and early 1980s. They invented and prototyped the key concepts of the Personal Computing Era. Concepts that were then realized in commercial products over the subsequent two decades.

So, how was PARC so successful at “inventing the future”? Can that success be duplicated or perhaps applied at a smaller scale? I think it can. To see how, I decided to try to sketch out what happened at Xerox PARC as a pattern language.


Look Twenty Years Into the Future

If your time horizon is short you are doing product development or incremental research. That’s all right; it’s probably what most of us should be doing. But if you want to intentionally “invent the future” you need to choose a future sufficiently distant to allow time for your inventions to actually have an impact.

Extrapolate Technologies

What technologies will be available to us in twenty years? Start with the current and emerging technologies that already exist today. Which relevant  technologies are likely to see exponential improvement over the next twenty years? What will they be like as they mature over that period? Assume that as the technical foundation for your future.

Focus on People

Think about how those technologies may affect people. What new human activities do they enable? Is there a human problem they may help solve? What role might those technologies have in everyday life? What could be the impact upon society as a whole?

Create a Vision

Based upon your technology and social extrapolations, create a clearly articulated vision of your desired future. It should be radically different from the present in some respects. If it isn't, then invention won't be required to achieve it.

A Team of Dreamers and Doers

Inventing a future requires a team with a mixture of skills.  You need dreamers who are able to use their imagination to create and refine the core future vision. You also need doers who are able to take ill-defined dreams and turn them into realities using available technologies. You must have both and they must work closely together.

Prototype the Vision

Try to create a high fidelity functional approximation of your vision of the future. Use the best of today’s technology as stand-ins for your technology extrapolations. Remember what is expensive and bulky today may be cheap and tiny in your future. If the exact technical combination you need doesn’t exist today, build it.

Live Within the Prototype

It's not enough to just build a prototype of your envisioned future. You have to use the prototype as the means for experiencing that future. What works? What doesn't? Use your experience with the prototype to iteratively refine the vision and the prototypes.

Make It Useful to You

You're a person who hopes to live in this future, so prototype things that will be useful to you. You will know you are on to something when your prototype becomes an indispensable part of your life. If it isn't there yet, keep iterating until it is.

Amaze the World

If you are successful in applying these patterns you will invent things that are truly amazing. Show those inventions to the world. Demonstrate that your vision of the future is both compelling and achievable. Inspire other people to work towards that same future. Build products and businesses if that is your inclination, but remember that inventing the future takes more than a single organization or project. The ultimate measure of your success will be your actual impact on the future.

Armen Zambrano: Replay Pulse messages

If you know what Pulse is and you would like to write some integration tests for an app that consumes its messages, pulse_replay might make your life a bit easier.

You can learn more about it by reading the quick README.md.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air Mozilla: Cloud Services QA Team Sync, 03 May 2016

Weekly sync-up, volunteer, round-robin style, on what folks are working on, having challenges with, etc.

Air Mozilla: Webdev Extravaganza: May 2016

Once a month web developers from across Mozilla get together to share news about the things we've shipped, news about open source libraries we maintain...

Air Mozilla: Connected Devices Weekly Program Update, 03 May 2016

Weekly project updates from the Mozilla Connected Devices team.

Chris H-C: Mailing-List Mush: End of Life for Firefox on OSX 10.6-8, ICU dropping Windows XP Support

Apparently I’m now Windows XP Firefox Blogging Guy. Ah well, everyone’s gotta have a hobby.

End of Life for Firefox on OSX 10.6-8

The Firefox Future Releases Blog announced the end of support for Mac OSX 10.6-10.8 for Firefox. This might be our first look at how Windows XP’s end of life might be handled. I like the use of language:

All three of these versions are no longer supported by Apple. Mozilla strongly encourages our users to upgrade to a version of OS X currently supported by Apple. Unsupported operating systems receive no security updates, have known exploits, and are dangerous for you to use.

You could apply that just as easily and even more acutely to Windows XP.

But, then, why isn’t Mozilla ending support for XP in a similar announcement? Essentially it is because Windows XP is still too popular amongst Firefox users. The Windows XP Firefox population still outnumbers the Mac OSX (all versions) and Linux populations combined.

My best guess is that we’ll be able to place the remaining Windows XP Firefox users on ESR 52 which should keep the last stragglers supported into 2018. That is, if the numbers don’t suddenly decrease enough that we’re able to drop support completely before then, shuffling the users onto ESR 45 instead.

What’s nice is the positive-sounding emails at the end of the thread announcing the gains in testing infrastructure and the near-term removal of code that supported now-unused build configurations. The cost of supporting these platforms is non-0, and gains can be realized immediately after dropping support.

ICU Planning to Drop Support for Windows XP

A key internationalization library in use by Firefox, ICU, is looking to drop Windows XP support in their next version. The long-form discussion is on dev-platform (you might want to skim the unfortunate acrimony over Firefox for Android (Fennec) present in that thread) but it boils down to: do we continue shipping old software to support Windows XP? For how long? Is this the straw that will finally break WinXP support’s back?

:milan made an excellent point on how the Windows XP support decision is likely to be made:

Dropping the XP support is *completely* not an engineering decision.  It isn’t even a community decision.  It is completely, 100% MoCo driven Firefox product management decision, as long as the numbers of users are where they are.

On the plus side, ICU seems to be amenable to keeping Windows XP support for a little longer if we need it… but we really ought to have a firm end-of-life date for the platform if we're to make that argument in a compelling fashion. At present we don't have (or at least haven't communicated) such a date. ICU may just march on without us if we don't decide on one.

For now I will just keep an eye on the numbers. Expect a post when the Windows XP numbers finally dip below Linux+OSX, as that will be a huge psychological barrier broken.

But don’t expect that post for a few months, at least.

:chutten

Yunier José Sosa Vázquez: Firefox integrates GTK3 on Linux and improves the security of the JavaScript compiler (JIT)

On April 26, Mozilla released a new version of Firefox whose highlights include GTK3 integration on Linux, security improvements in the just-in-time JS compiler, changes in WebRTC and new features for Android and iOS. As @Pochy announced yesterday, you can get this Firefox update from our Downloads area.

After several months of testing and development, GTK3 has finally been included in the Linux version. This will reduce the dependency on old versions of the X11 server, improve HiDPI compatibility and, above all, provide better integration with themes.

The new browser also improves the security of SpiderMonkey's Just-in-Time (JIT) JavaScript compiler. The idea is to deal with RWX (read-write-execute) code, which at times poses a risk: it represents an exception to the operating system's rules, namely storing data in a memory area where it can be executed (read), but not written.

To remedy this problem, Mozilla has employed a mechanism called W^X. Its purpose is to forbid writes by JavaScript to memory areas that contain JIT code. This change comes at the expense of a slight drop in performance, which according to the vendor is in the range of 1 to 4%. In addition, the authors of some extensions are invited to test their code's compatibility with this mechanism.

The performance and reliability of WebRTC connections have also been improved, and the content decryption module can now be used for H.264 and AAC content when possible. Meanwhile, developers get new tools they can now use in their work; you can learn more about them in this article published on the Labs blog.

What's new in Android

  • History and Bookmarks have been added to the menu.
  • The installation of unverified add-ons will be cancelled.
  • Pages stored in the cache are now shown when you are offline.
  • Notifications about tabs opened in the background now show the URL.
  • Firefox will ask for certain permissions at runtime instead of when the application is installed (Android 6.0 or later).
  • Autocomplete while typing an address now includes the domain to make your browsing easier.

Firefox for iOS

  • Added the Danish [da] localization.
  • The default suggested sites can now be removed.
  • The top 5 sites from the Alexa ranking are shown to new users.
  • Improved the browser's handling of links to Apple Maps and other third-party applications such as Twitter.

If you prefer to see the complete list of changes, you can head over to the release notes (in English).

A clarification about the mobile version.

In the downloads you can find 3 versions for Android. The file that contains i386 is for devices with the Intel architecture. Of the ones named arm, the one that says api11 works with Honeycomb (3.0) or later, and the api9 one is for Gingerbread (2.3).

You can get this version from our Downloads area in Spanish for Linux, Mac, Windows and Android. If you liked it, please share this news with your friends on social networks. Don't hesitate to leave us a comment ;-).

Chris Cooper: RelEng & RelOps Weekly highlights - May 2, 2016

Two weeks worth of awesome crammed into one blog post. Can you dig it?

Modernize infrastructure:

Kendall and Greg have deployed new hg web nodes! They’re bigger, better, faster! The four new nodes have more processing power than the old ten nodes combined. In addition, all of the web and ssh nodes have been upgraded to CentOS 7, giving us a modern operating system and better security.

Relops and jmaher certified Windows 7 in the cloud for 40% of tests. We’re now prepping to move those tests. The rest should follow soon. From a capacity standpoint, moving any Windows testing volume into the cloud is huge.

Mark deployed new versions of hg and git to the Windows testing infrastructure.

Rob’s new mechanisms for building TaskCluster Windows workers give us transparency on what goes into a builder (single page manifests) and have now been used to successfully build Firefox under mozharness for TaskCluster with an up-to-date toolchain (mozilla-build 2.2, hg 3.7.3, python 2.7.11, vs2015 on win 2012) in ec2.

Improve Release Pipeline:

Firefox 46.0 Release Candidates (RCs) were all done with our new Release Promotion process. All that work in the beta cycle for 46.0 paid off.

Varun began work on improving Balrog’s backend to make multifile responses (such as GMP) easier to understand and configure. Historically it has been hard for releng to enlist much help from the community due to the access restrictions inherent in our systems. Kudos to Ben for finding suitable community projects in the Balrog space, and then more importantly, finding the time to mentor Varun and others through the work.

Improve CI Pipeline:

Aki’s async code has landed in taskcluster-client.py! Version 0.3.0 is now on pypi, allowing us to async all the python TaskCluster things.

Nick’s finished up his work to enable running localized (l10n) desktop builds on Try. We’ve wanted to be able to verify changes against l10n builds for a long time…this particular bug is 3 years old. There are instructions in the wiki: https://wiki.mozilla.org/ReleaseEngineering/TryServer#Desktop_l10n_jobs

With build promotion well sorted for the Firefox 46 release, releng is switching gears and jumping into the TaskCluster migration with both feet this month. Kim and Mihai will be working full-time on migration efforts, and many others within releng have smaller roles. There is still a lot of work to do just to migrate all existing Linux workloads into TaskCluster, and that will be our focus for the next 3 months.

Operational:

Vlad and Amy landed patches to decommission the old b2g bumper service and its infrastructure.

Alin created a dedicated server to run buildduty tools. This is part of an ongoing effort to separate services and tools that had previously been piggybacking on other hosts.

Amy and Jake beefed up our AWS puppetmasters and tweaked some time out values to handle the additional load of switching to puppet aspects. This will ensure that our servers stay up to date and in sync.

What’s better than handing stuff off? Turning stuff off. Hal started disabling no-longer-needed vcs-sync jobs.

Release:

Shipped Firefox 46.0RC1 and RC2, Fennec 46.0b12, Firefox and Fennec 46.0, ESR 45.1.0 and 38.8.0, Firefox and Fennec 47.0beta1, and Thunderbird 45.0b1. The week before, we shipped Firefox and Fennec 45.0.2 and 46.0b10, Firefox 45.0.2esr and Thunderbird 45.0.

For further details, check out the release notes here:

See you next week!

Hannes Verschore: Tracelogger GUI updates

Tracelogger is one of the tools JIT devs (especially me) use to look into performance issues and to improve the performance of Firefox's JS engine. It traces which functions are executing, together with extra information like which engine is running, how long compilation took, how many times we are GC'ing and whether we are calling VM functions…

I made the GUI a bit more powerful. First of all, I moved the computation of the overview to a web worker. This should help the usability of the tool. Next to that, I made it possible to toggle the categories on and off, which might make it easier to understand the graphs. I also introduced a settings popup, where you can now choose to see absolute (CPU ticks) or relative (%) timings.
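A minimal sketch of what offloading that computation to a web worker can look like (the function and file names here are made up for illustration, not the actual tracelogger code):

// main page: hand the trace data to a worker and render when it's done
var worker = new Worker("overview-worker.js");
worker.onmessage = function (event) {
  renderOverview(event.data);            // hypothetical rendering function
};
worker.postMessage({ tree: traceTree, relative: true });

// overview-worker.js: do the heavy aggregation off the main thread
self.onmessage = function (event) {
  var overview = computeOverview(event.data.tree, event.data.relative);
  self.postMessage(overview);            // structured-cloned back to the page
};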


There are still a lot of improvements that are possible. Eventually it should be possible to zoom on graphs, toggle scripts on/off, see full times of scripts (instead of self time only) and maybe make it possible to show another graph (like a flame chart). Hopefully one day.

This is of course open source and available at:
https://github.com/h4writer/tracelogger/tree/master/website

Mozilla Open Policy & Advocacy Blog: This is what a rightsholder looks like in 2016

In today’s policy discussions around intellectual property, the term ‘rightsholder’ is often misconstrued as someone who supports maximalist protection and enforcement of intellectual property, instead of someone who simply holds the rights to intellectual property. This false assumption can at times create a kind of myopia, in which the breadth and variety of actors, interests, and viewpoints in the internet ecosystem – all of whom are rightsholders to one degree or another – are lost.

This is not merely a process issue – it undermines constructive dialogues aimed at achieving a balanced policy. Copyright law is, ostensibly, designed and intended to advance a range of beneficial goals, such as promoting the arts, growing the economy, and making progress in scientific endeavour. But maximalist protection policies and draconian enforcement benefit the few and not the many, hindering rather than helping these policy goals. For copyright law to enhance creativity, innovation, and competition, and ultimately to benefit the public good, we must all recognise the plurality and complexity of actors in the digital ecosystem, who can be at once IP rightsholders, creators, and consumers.

Mozilla is an example of this kind of complex rightsholder. As a technology company, a non-profit foundation, and a global community, we hold copyrights, trademarks, and other exclusive rights. Yet, in the pursuit of our mission, we've also championed open licenses to share our works with others. Through this, we see an opportunity to harness intellectual property to promote openness, competition and participation in the internet economy.

We are a rightsholder, but we are far from maximalists. Much of the code produced by Mozilla, including much of Firefox, is licensed using a free and open source software licence called the Mozilla Public License (MPL), developed and maintained by the Mozilla Foundation. We developed the MPL to strike a real balance between the interests of proprietary and open source developers in an effort to promote innovation, creativity and economic growth to benefit the public good.

Similarly, in recognition of the challenges the patent system raises for open source software development, we’re pioneering an innovative approach to patent licensing with our Mozilla Open Software Patent License (MOSPL). Today, the patent system can be used to hinder innovation by other creators. Our solution is to create patents that expressly permit everyone to innovate openly. You can read more in our terms of license here.

While these are just two initiatives from Mozilla amongst many more in the open source community, we need more innovative ideas in order to fully harness intellectual property rights to foster innovation, creation and competition. And we need policy makers to be open (pun intended) to such ideas, and to understand the place they have in the intellectual property ecosystem.

More than just our world of software development, the concept of a rightsholder is in reality broad and nuanced. In practice, we’re all rightsholders – we become rightsholders by creating for ourselves, whether we’re writing, singing, playing, drawing, or coding. And as rightsholders, we all have a stake in this rich and diverse ecosystem, and in the future of intellectual property law and policy that shapes it.

Here is some of our most recent work on IP reform:

Mozilla Addons BlogMay 2016 Featured Add-ons

Pick of the Month: uBlock Origin

by Raymond Hill
Very efficient blocker with a low CPU footprint.

“Wonderful blocker, part of my everyday browsing arsenal, highly recommended.”

Featured: Download Plan

by Abraham
Schedule download times for large files during off-peak hours.

“Absolutely beautiful interface!!”

Featured: Emoji Keyboard

by Harry N.
Input emojis right from the browser.

“This is a good extension because I can input emojis not available in Hangouts, Facebook, and email.”

Featured: Tab Groups

by Quicksaver
A simple way to organize a ton of tabs.

“Awesome feature and very intuitive to use.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!

Wladimir PalantAdventures porting Easy Passwords to Chrome and back to Firefox

Easy Passwords is based on the Add-on SDK and runs in Firefox. However, people need access to their passwords in all kinds of environments, so I created an online version of the password generator. The next step was porting Easy Passwords to Chrome and Opera. And while at it, I wanted to see whether that port will work in Firefox via Web Extensions. After all, eventually the switch to Web Extensions will have to be done.

Add-on SDK to Chrome APIs

The goal was to use the same codebase for all variants of the extension. Most of the logic is contained in HTML files anyway, so it wouldn’t have to be changed. As to the remaining code, it should just work with some fairly minimal implementation of the SDK APIs on top of the Chrome APIs. Why not the other way round? Well, I consider the APIs provided by the Add-on SDK much cleaner and easier to use.

It turned out that Easy Passwords used twelve SDK modules; many of these could be implemented in a trivial way, however. For example, the timers module merely exports functions that are defined anyway (unlike SDK extensions, Chrome extensions run in a window context).
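
As a rough illustration (a sketch, not the actual Easy Passwords shim), such a trivial timers replacement can simply re-export the window globals:

// timers.js: a minimal SDK-style shim on top of the window globals
// (exports works here because the modules are bundled together, see the next paragraph)
exports.setTimeout = setTimeout.bind(window);
exports.clearTimeout = clearTimeout.bind(window);
exports.setInterval = setInterval.bind(window);
exports.clearInterval = clearInterval.bind(window);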

There were a few conceptual differences however. For example, Chrome extensions don’t support modularization — all background scripts execute in a single shared scope of the background page. Luckily, browserify solves this problem nicely by compiling all the various modules into a single background.js script while giving each one its own scope.
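
For example (a hedged sketch with made-up module names, not the actual Easy Passwords layout), each background module keeps its CommonJS shape and browserify bundles everything into the single script that the manifest references:

// passwords.js: one of several background modules (hypothetical name)
var storage = require("./storage"); // another local module

exports.getPassword = function (site) {
  return storage.get(site);
};

// main.js: the entry point that browserify starts from
var passwords = require("./passwords");
console.log(passwords.getPassword("example.com"));

// Build step, roughly: browserify main.js -o background.js
// manifest.json then lists only background.js as a background script.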

The other issue is configuration: Chrome doesn’t generate a settings UI automatically the way the simple-prefs module does. There was no way around creating a special page for the two settings. Getting automatic SDK-style localization of HTML pages, on the other hand, was only a matter of a few lines (Chrome makes it a bit more complicated by disallowing dashes in message names).
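
Those few lines might look roughly like this (a sketch; the data-l10n-id attribute name is an assumption, not necessarily what Easy Passwords uses):

// Replace the text of every element that declares a localization id
var elements = document.querySelectorAll("[data-l10n-id]");
for (var i = 0; i < elements.length; i++) {
  // Chrome disallows dashes in message names, so map them to underscores first
  var messageName = elements[i].getAttribute("data-l10n-id").replace(/-/g, "_");
  elements[i].textContent = chrome.i18n.getMessage(messageName);
}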

A tricky issue was unifying the way scripts are attached to HTML pages. With the Add-on SDK these are content scripts which are defined in the JavaScript code — otherwise they wouldn’t be able to communicate with the extension. In Chrome, however, you use regular <script> tags; the scripts get the necessary privileges automatically. In the end I had to go with conditional comments interpreted by the build system; for the Chrome build these would become regular HTML code. This had the advantage that I could have additional scripts for Chrome only, in order to emulate the self variable which is available to SDK content scripts.

Finally, communication turned out to be tricky as well. The Add-on SDK automatically connects a content script to whichever code is responsible for it. Whenever some code creates a panel it gets a panel.port property which can be used to communicate with that panel — and only with that panel. Chrome’s messaging, on the other hand, is all-to-all; the code is meant to figure out by itself whether it is supposed to process a particular message or leave it for somebody else. And while Chrome also has a concept of communication ports, these can only be distinguished by their name — so my implementations of the SDK modules had to figure out which SDK object a new communication port was meant for by looking at its name. In the end I implemented a hack: since I had exactly one panel, exactly one page and exactly one page worker, I only set the type of the port as its name. Which object should it be associated with? Who cares, there is only one.
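
A rough sketch of that hack (the "panel" name and the attachPort method are illustrative, not the extension's real identifiers):

// Content script side (the single panel): the port name doubles as its type
var port = chrome.runtime.connect({ name: "panel" });
port.postMessage({ eventName: "generate-password" });

// Background side: with exactly one panel, the name is all we need
chrome.runtime.onConnect.addListener(function (port) {
  if (port.name == "panel") {
    panel.attachPort(port); // attachPort: hypothetical method of the SDK-style panel shim
  }
});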

And that’s mostly it as far as issues go. Quite surprisingly, fancy JavaScript syntax is no longer an issue as of Chrome 49 — let statements, for..of loops, rest parameters, destructuring assignments, all of this works. The only restrictions I noticed: node lists like document.forms cannot be used in for..of loops, and calling Array.filter() as opposed to Array.prototype.filter.call() isn’t supported (the former isn’t documented on MDN either; it seems to be non-standard). And a bunch of stuff which requires extra code with the Add-on SDK “just works”: pop-up size is automatically adjusted to content, switching tabs closes the pop-up, and tooltips and form validation messages work inside the pop-up like in any webpage.

The result was a Chrome extension that works just as well as the one for Firefox, with the exception of not being able to show the Easy Passwords icon in pop-up windows (sadly, I suspect that this limitation is intentional). It works in Opera as well and will be available in their add-on store once it is reviewed.

Chrome APIs to Web Extensions?

And what about running the Chrome port in Firefox now? Web Extensions are compatible with the Chrome APIs, so in theory it shouldn’t be a big deal. And in fact, after adding the applications property to manifest.json the extension could be installed in Firefox. However, after it replaced the version based on the Add-on SDK all the data was gone. This is bug 1214790 and I wonder what kind of solution the Mozilla developers can come up with.
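
For reference, that property looks roughly like this (the id below is a placeholder, not the real Easy Passwords id):

"applications": {
  "gecko": {
    "id": "easypasswords@example.com"
  }
}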

It wasn’t really working either. It turned out that crypto functionality wasn’t working because the code was running in a context without access to Web Extensions APIs. Also, messages weren’t being received properly. After some testing I identified bug 1269327 as the culprit: proxied objects in messages were being dropped silently. Passing the message through JSON.stringify() and JSON.parse() before sending solved the issue; this creates a copy without any proxies.
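
The workaround boils down to something like this (a sketch; sendMessageSafe is a made-up helper name):

// Serialize and re-parse to get a plain copy with no proxy wrappers
function sendMessageSafe(message) {
  var plainCopy = JSON.parse(JSON.stringify(message));
  chrome.runtime.sendMessage(plainCopy);
}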

And then there were visuals. One issue turned out to be a race condition which didn’t occur on Chrome; I guess I made too many assumptions. Most of the others were due to bug 1225633 — somebody apparently considered it a good idea to apply a random set of CSS styles to unknown content. I filed bug 1269334 and bug 1269336 on the obvious bugs in these CSS styles, and overwrote some of the others in the extension. Finally, the nice pop-up sizing automation doesn’t work in Firefox, so the size of the Easy Passwords pop-up is almost always wrong.

Interestingly, pretty much everything that Chrome does better than the Add-on SDK isn’t working with Web Extensions right now. It isn’t merely the pop-up sizing: HTML tooltips in pop-ups don’t show up, and pop-ups aren’t being closed when switching tabs. In addition, tabs.query() doesn’t allow searching extension pages and submitting passwords produces bogus error messages.

While most of these issues can be worked around easily, some cannot. So I guess that it will take a while until I replace the SDK-based version of Easy Passwords with one based on Web Extensions.

Armen ZambranoOpen Platform Operations’ logo design

Last year, the Platform Operations organization was born and it brought together multiple teams across Mozilla which empower development with tools and processes.

This year, we've decided to create a logo that identifies us as an organization and builds our self-identity.

We've filed this issue for a logo design [1] and we would like to issue a call for community members to propose their designs. We would like to have all submissions in by May 13th. Soon after that, we will figure out a way to narrow it down to one logo! (details to be determined).

We would also like to thank whoever creates the logo we pick in the end (details also to be determined).

Looking forward to collaborating with you and seeing what we create!

[1] https://github.com/mozilla/Community-Design/issues/62


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Daniel GlazmanBlueGriffon officially recommended by the French Government

en-US TL;DR: BlueGriffon is now officially recommended as the html editor for the French Administration in its effort to rely on and promote Free Software!

I am very happy to report that BlueGriffon, my cross-platform Wysiwyg Web editor, is officially recommended by the Socle Interministériel de Logiciels Libres for 2016! You will find the official list of recommended software here (pdf document).

Gervase MarkhamDNSSEC on gerv.net

My ISP, the excellent Mythic Beasts, has started offering a managed DNSSEC service for domains they control – just click one button, and you’ve got DNSSEC on your domain. I’ve just enabled it on gerv.net (which, incidentally, as of a couple of weeks ago, is also available over a secure channel thanks to MB and Let’s Encrypt).

If you have any problems accessing any resources on gerv.net, please let me know by email – gerv at mozilla dot org should be unaffected by any problems.

Laurent JouanneauRelease of SlimerJS 0.10

I'm pleased to announce the release of SlimerJS 0.10!

SlimerJS is a scriptable browser. It is a tool like PhantomJS, except it is based on Firefox and it is not (yet) "headless" (if some Mozillians could help me to have a true headless browser ;-)...).

This new release brings new features and compatibility with Firefox 46. Among them:

  • support of PDF export
  • support of Selenium with a "web driver mode"
  • support of stdout, stderr and stdin streams with the system module
  • support of exit code with phantom.exit() and slimer.exit()
  • support of node_modules with require()
  • support of special files (/dev/* etc) with the fs module

This version also fixes many bugs and conformance issues with PhantomJS 1.9.8 and 2.x. It also fixes some issues with running CasperJS 1.1.

See change details in release notes. As usual, you can download SlimerJS from the download page.

Note that there is no longer a "standalone edition" (with XulRunner embedded), because Mozilla ceased to maintain and build XulRunner. Only the "lightweight" edition is available from now on, and you must install Firefox to run SlimerJS.

Consider this release a "1.0pre". I'll try to release the next major version, 1.0, in a few weeks. It will only fix bugs found in 0.10 (if any), and will implement the last few features needed to match the PhantomJS 2.1 API.

Matěj CeplOn GitLab growing an OStatus extension

Finally, this issue ticket gave me the opportunity to write what I think about OStatus. So, I did.

  1. http://www.joelonsoftware.com/articles/fog0000000018.html -- I am sorry if you like OStatus, but it is the most insane open source example of this disease. After the astro design, we have no working and stable platform for the thing.

    There was old identi.ca, which was scrapped (I know the more polite term is "moved to GNU/Social", yeah … how many users does this social software have? And yes, I know, the protocol is named differently, but it is just another version of the same thing from my point of view), and pump.io, which is … I have just upgraded my instance to see whether I can honestly write that it is a well-working proprietary (meaning, used by one implementation by the author of the protocol only) distributed network, and no, it is broken.

    And even if the damned thing worked, it would not offer me the functionality I really want: to connect with my real friends, who are all on Twitter, Facebook, or even some on G+. Heck, pump.io would be useless even if these friends were on Diaspora (no, they are not, nobody is there). So, yes, if you want something which is useless, go and write an OStatus component.

  2. I don't know what happens when we want to share issues, etc. I don't know and I don't care (for example, it seems to me that issues are something which is way more tied to one particular project). And yes, I am the reporter of https://bugzilla.mozilla.org/show_bug.cgi?id=719725 (and author of http://article.gmane.org/gmane.linux.redhat.fedora.devel/79936/), and I think that it is impossible to do. At least, nobody has managed to do it, and it was not for lack of trying. How is OpenID or other federated identity doing?

    Besides, Do The Simplest Thing That Could Possibly Work, because You Aren’t Gonna Need It. I vote for a git request-pull(1) parser. And, no, not just sending a URL in an HTTP GET: I would like to see that button (when the comment is recognized as being parseable) next to the comment with the plain text output of git request-pull.

  3. Actually, a git request-pull(1) parser not only follows YAGNI, but it also lovingly steps around the biggest problem of all federated solutions: broken or missing federated identity.

This Week In RustThis Week in Rust 128

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

  • Rust project changelog for 2016-04-29. Updates to bitflags, lazy_static, regex, rust-mode, rustup, uuid.
  • Xi Editor. A modern editor with a backend written in Rust.
  • rure. A C API for the regex crate.
  • cassowary-rs. A Rust implementation of the Cassowary constraint solving algorithm.
  • Sapper. A lightweight web framework built on async hyper, implemented in Rust language.
  • servo-vdom. A modified servo browser which accepts content patches over an IPC channel.
  • rustr and rustinr. Rust library for working with R, and an R package to generate Rust interfaces.
  • Rorschach. Pretty print binary blobs based on common layout definition.

Crate of the Week

This week's Crate of the Week is arrayvec, which gives us a Vec-like interface over plain arrays for those instances where you don't want the indirection. Thanks to ehiggs for the suggestion!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

92 pull requests were merged in the last week.

New Contributors

  • Andy Russell
  • Brayden Winterton
  • Demetri Obenour
  • Ergenekon Yigit
  • Jonathan Turner
  • Michael Tiller
  • Timothy McRoy
  • Tomáš Hübelbauer

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week!

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

In general, enough layers of Rc/RefCell will make anything work.

gkoz on TRPLF.

Thanks to birkenfeld for the suggestion.

Submit your quotes for next week!

Maja FrydrychowiczNot Testing a Firefox Build (Generic Tasks in TaskCluster)

A few months ago I wrote about my tentative setup of a TaskCluster task that was neither a build nor a test. Since then, gps has implemented “generic” in-tree tasks so I adapted my initial work to take advantage of that.

Triggered by file changes

All along I wanted to run some in-tree tests without having them wait around for a Firefox build or any other dependencies they don’t need. So I originally implemented this task as a “build” so that it would get scheduled for every incoming changeset in Mozilla’s repositories.

But forget “builds”, forget “tests” — now there’s a third category of tasks that we’ll call “generic” and it’s exactly what I need.

In base_jobs.yml I say, “hey, here’s a new task called marionette-harness — run it whenever there’s a change under (branch)/testing/marionette/harness”. Of course, I can also just trigger the task with try syntax like try: -p linux64_tc -j marionette-harness -u none -t none.

When the task is triggered, a chain of events follows:

For Tasks that Make Sense in a gecko Source Checkout

As you can see, I made the build.sh script in the desktop-build docker image execute an arbitrary in-tree JOB_SCRIPT, and I created harness-test-linux.sh to run mozharness within a gecko source checkout.

Why not the desktop-test image?

But we can also run arbitrary mozharness scripts thanks to the configuration in the desktop-test docker image! Yes, and all of that configuration is geared toward testing a Firefox binary, which implies downloading tools that my task either doesn’t need or already has access to in the source tree. Now we have a lighter-weight option for executing tests that don’t exercise Firefox.

Why not mach?

In my lazy work-in-progress, I had originally executed the Marionette harness tests via a simple call to mach, yet now I have this crazy chain of shell scripts that leads all the way to mozharness. The mach command didn’t disappear — you can run Marionette harness tests with ./mach python-test .... However, mozharness provides clearer control of Python dependencies, appropriate handling of return codes to report test results to Treeherder, and I can write a job-specific script and configuration.

The Servo BlogThese Weeks In Servo 61

In the last two weeks, we landed 228 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Zhen Zhang and Rahul Sharma were selected as 2016 GSoC students for Servo! They will be working on the File API and foundations for Service Workers respectively.

Notable Additions

  • nox landed Windows support in the upgraded SpiderMonkey - now we just need to land it in Servo!
  • bholley implemented Margin, Padding, font-size, and has_class for the Firefox/Gecko support in Servo’s style system
  • pcwalton fixed a bug that was preventing us from hitting 60fps reliably with browser.html and WebRender!
  • mbrubeck changed to use the line-breaking algorithm from Raph Levien’s xi-unicode project
  • frewsxcv removed the horrific Dock-thrashing while running the WPT and CSS tests on OSX
  • vramana implemented fetch support for file:// URLs
  • fabrice implemented armv7 support across many of our dependencies and in Servo itself
  • larsberg re-enabled gating checkins on Windows builds, now that the Windows Buildbot instance is more reliable
  • asajeffrey added reporting of backtraces to the Constellation during panic!, which will allow better reporting in the UI
  • danl added the style property for flex-basis in Flexbox
  • perlun improved line heights and fonts in input and textarea
  • jdm re-enabled the automated WebGL tests
  • ms2ger updated the CSS tests
  • dzbarsky implemented glGetVertexAttrib
  • jdm made canvas elements scale based on the DOM width and height
  • edunham improved our ability to correctly recognize and validate licenses
  • pcwalton implemented overflow:scroll in WebRender
  • KiChjang added support for multipart/form-data submission
  • fitzgen created a new method for dumping time profile info to an HTML file
  • mrobinson removed the need for StackingLevel info in WebRender
  • ddefisher added initial support for persistent sessions in Servo
  • cgwalters added an option to Homu to support linear commit histories better
  • simonsapin promoted rust-url to version 1.0
  • wafflespeanut made highfive automatically report test failures from our CI infrastructure
  • connorgbrewster finished integrating the experimental XML5 parser
  • emilio added some missing WebGL APIs and parameter validation
  • izgzhen implemented the scrolling-related CSSOM View APIs
  • wafflespeanut redesigned the network error handling code
  • jdm started an in-tree glossary

New Contributors

Get Involved

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

Screenshot of Firefox browsing a very simple page using Servo’s Stylo style system implementation: (screenshot)

Logic error that caused the page to redraw after every HTML parser operation: (screenshot)

Meetings and Mailing List

Nick Fitzgerald made a thread describing his incredibly awesome profiler output for Servo: https://groups.google.com/forum/#!topic/mozilla.dev.servo/KmzdXoaKo9s

Karl Dubost[worklog] Kusunoki, that smell.

This feeling when things finally get fixed after 2 years of negotiating. Sometimes things take longer. Sometimes policies change on the other side. All in all, that's very good fortune for the Web and the users. It's a bit like that smell for the last two years in summer time in my street: I finally got to ask the gardener of one of the houses around, and it revealed what I should have known: Camphor tree (楠). Good week. Tune of the week: Carmina Burana - O Fortuna - Carl Orff.

Webcompat Life

Progress this week:

Today: 2016-05-02T09:21:45.583211
368 open issues
----------------------
needsinfo       4
needsdiagnosis  108
needscontact    35
contactready    93
sitewait        119
----------------------

You are welcome to participate

London agenda.

We had a meeting this week: Minutes

Webcompat issues

(a selection of some of the bugs worked on this week).

Webcompat development

Gecko Bugs

Updating Our Webkit Prefixing Policy

This is the big news of the week. And that's a lot of good for the Web. WebKit (aka Apple) is switching from vendor prefixes to feature flags. What does it mean? It means that new features will be available only to developers who activate them. It allows for testing without polluting the feature-space.

The current consensus among browser implementors is that, on the whole, prefixed properties have hurt more than they’ve helped. So, WebKit’s new policy is to implement experimental features unprefixed, behind a runtime flag. Runtime flags allow us to continue to get experimental features into developers’ hands while avoiding the various problems vendor prefixes had.

Also

We’ll be evaluating existing features on a case-by-case basis. We expect to significantly reduce the number of prefixed properties supported over time but Web compatibility will require us to keep around prefixed versions of some features.

HTTP Cache Invalidation, Facebook, Chrome and Firefox

Facebook is proposing to change the policy for HTTP Cache invalidation. This thread is really interesting. It started as a simple question on changing the behavior of Firefox to align with changes planned for Chrome, but it is evolving into a discussion about how to do cache invalidation the right way. Really cool.

I remember seeing the study below a little while ago (March 3, 2012), and I was wondering if we had similar data for Firefox.

For those users who filled up their cache,

  • 25% of them fill it up in 4 hours.
  • 50% of them fill it up within 20 hours.
  • 75% of them fill it up within 48 hours.

Now, that's just wall clock time... but how many hours of "active" browsing does it take to fill the cache?

  • 25% in 1 hour,
  • 50% in 4 hours,
  • and 75% in 10 hours.

Found again through Cache me if you can.

I wonder how many times a resource which is set up with a max-age of 1 year is still around in the cache after 1 year. And if Web developers indeed set a long cache lifetime to mean "never reload it", it seems wise to have something in Cache-Control: to allow this. There is must-revalidate; I was wondering if immutable is another way of saying never-revalidate. Maybe a max-age value is not even necessary at all. Anyway, read the full thread on the bug.

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Varun JoshiGuess who got into Google Summer of Code?

I'm really excited to say that I'll be participating in Google Summer of Code this year, with Mozilla! I'm going to be working on the Balrog update server this summer, under Ben's guidance. Thank you Mozilla and Google for giving me this chance!

I'll be optimizing Balrog by devising a mechanism to handle update races. A future blog post will describe how exactly these races occur and how I aim to resolve them. Basically, an algorithm similar to what git uses for 3-way merges is required, but we also need to take care of nesting, since the 'Blob' data structure that we use has nested data. I will also share my proposal and the timeline I'll be following in the coming weeks.

These three months are going to be amazing and I'm really looking forward to working with the Mozilla RelEng community! I will be blogging weekly once the coding period commences, and I welcome any suggestions that might lead to me presenting a better final product.

Support.Mozilla.OrgMozillian profile: Jayesh

Hello, SUMO Nation!

Do you still remember Dinesh? Turns out he’s not the only Mozillian out there who’s happy to share his story with us. Today, I have the pleasure of introducing Jayesh, one of the many SUMOzillians among you, with a really inspiring story of his engagement in the community to share. Read on!

Jayesh

I’m Jayesh from India. I’ve been contributing to Mozilla as a Firefox Student Ambassador since 2014. I’m a self-made entrepreneur, tech lover, and passionate traveller. I am also an undergraduate with a Computer Science background.

During my university days I used to waste a lot of time playing games, as I did not have a platform to showcase my technical skills. I thought working was only useful when you had a “real” job. I had only heard about open source, but in my third year I came to know about open source contributors – through my friend Dinesh, who told me about the FSA program – and this inspired me a lot. I thought it was the perfect platform for me to kickstart my career as a Mozillian and build a strong, bright future.

Being a techie, I could identify with Mozilla and its efforts to keep the web open. I registered for the FSA program with the guidance of my friend, and found a lot of students and open source enthusiasts from India contributing to Mozilla in many ways. I was very happy to join the Mozilla India Community.

Around 90% of Computer Science students at the university learn the technology but don’t actually try to implement working prototypes using their knowledge, as they don’t know about the possibility of open source contributions – they just believe that showcasing counts only during professional internships and work training. Thus, I thought of sharing my knowledge about open source contributors through the Mozilla community.

I gained experience conducting events for Mozilla in the Tirupati Community, where my friend was seeking help in conducting events, as he was the only Firefox Student Ambassador in that region. Later, to learn more, we travelled to many places and attended various events in Bengaluru and Hyderabad, where we met a very well developed Mozilla community in southern India. We met many Mozilla Representatives and sought help from them. Vineel and Galaxy helped us a lot, guiding us through our first steps.

Later, I found that I was the only Mozillian in my region – Kumbakonam, where I do my undergrad studies – within a 200 miles radius. This motivated me to personally build a new university club – SRCMozillians. I inaugurated the club at my university with the help of the management.

More than 450 students in the university registered for the FSA program in the span of two days, and we have organized more than ten events, including FFOS App days, Moz-Quiz, Web-Development-Learning, Connected Devices-Learning, Moz-Stall, a sponsored fun event, community meet-ups – and more! All this in half a year. For my efforts, I was recognized as FSA of the month, August 2015 & FSA Senior.

The biggest problems we faced while building our club were the studying times, when we’d be having lots of assignments, cycle tests, lab internals, and more – with everyone really busy and working hard, it took time to bridge the gap and realise grades alone are not the key factor to build a bright future.

My contributions to the functional areas in Mozilla varied from time to time. I started with Webmaker by creating educational makes about X-Ray Goggles, App-Maker and Thimble. I’m proud of being recognized as a Webmaker Mentor for that. Later, I focused on Army of Awesome (AoA) by tweeting and helping Firefox users. I even developed two Firefox OS applications (Asteroids – a game and a community application for SRCMozillians), which were available in the Marketplace. After that, I turned my attention to Quality Assurance, as Software Testing was one of the subjects in my curriculum. I started testing tasks in One And Done – this helped me understand the key concepts of software testing easily – especially checking the test conditions and triaging bugs. My name was even mentioned on the Mozilla blog about the Firefox 42.0 Beta 3 Test day for successfully testing and passing all the test cases.

I moved on to start localization for Telugu, my native language. I started translating KB articles – with time, my efforts were recognized, and I became a Reviewer for Telugu. This area of contribution proved to be very interesting, and I even started translating projects in Pontoon.

As you can see from my Mozillian story above, it’s easy to get started with something you like. I guarantee that every individual student with a passion to contribute and build a bright career within the Mozilla community can discover that this is the right platform to start with. The experience you gain here will help you a lot in building your future. I personally think that the best aspect of it is the global connection with many great people who are always happy to support and guide you.

– Jayesh, a proud Mozillian

Thank you, Jayesh! A great example of turning one’s passion into a great initiative that enables many people around you to understand and use technology better. We’re looking forward to more open source awesomeness from you!

SUMO Blog readers – are you interested in posting on our blog about your open source projects and adventures? Let us know!

Dustin J. MitchellLoading TaskCluster Docker Images

When TaskCluster builds a push to a Gecko repository, it does so in a docker image defined in that very push. This is pretty cool for developers concerned with the build or test environment: instead of working with releng to deploy a change, now you can experiment with that change in try, get review, and land it like any other change. However, if you want to actually download that docker image, docker pull doesn’t work anymore.

The image reference in the task description looks like this now:

"image": {
    "path": "public/image.tar",
    "taskId": "UDZUwkJWQZidyoEgVfFUKQ",
    "type": "task-image"
},

This is referring to an artifact of the task that built the docker image. If you want to pull that exact image, there’s now an easier way:

./mach taskcluster-load-image --task-id UDZUwkJWQZidyoEgVfFUKQ

will download that docker image:

dustin@dustin-moz-devel ~/p/m-c (central) $ ./mach taskcluster-load-image --task-id UDZUwkJWQZidyoEgVfFUKQ
Task ID: UDZUwkJWQZidyoEgVfFUKQ
Downloading https://queue.taskcluster.net/v1/task/UDZUwkJWQZidyoEgVfFUKQ/artifacts/public/image.tar
######################################################################## 100.0%
Determining image name
Image name: mozilla-central:f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3
Loading image into docker
Deleting temporary file
Loaded image is named mozilla-central:f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3
dustin@dustin-moz-devel ~/p/m-c (central) $ docker images
REPOSITORY          TAG                                                                IMAGE ID            CREATED             VIRTUAL SIZE
mozilla-central     f7b4831774960411275275ebc0d0e598e566e23dfb325e5c35bf3f358e303ac3   51e524398d5c        4 weeks ago         1.617 GB

But if you just want to pull the image corresponding to the codebase you have checked out, things are even easier: give the image name (the directory under testing/docker), and the tool will look up the latest build of that image in the TaskCluster index:

dustin@dustin-moz-devel ~/p/m-c (central) $ ./mach taskcluster-load-image desktop-build
Task ID: TjWNTysHRCSfluQjhp2g9Q
Downloading https://queue.taskcluster.net/v1/task/TjWNTysHRCSfluQjhp2g9Q/artifacts/public/image.tar
######################################################################## 100.0%
Determining image name
Image name: mozilla-central:f5e1b476d6a861e35fa6a1536dde2a64daa2cc77a4b71ad685a92096a406b073
Loading image into docker
Deleting temporary file
Loaded image is named mozilla-central:f5e1b476d6a861e35fa6a1536dde2a64daa2cc77a4b71ad685a92096a406b073

Tim TaubertA Fast, Constant-time AEAD for TLS

The only TLS v1.2+ cipher suites with a dedicated AEAD scheme are the ones using AES-GCM, a block cipher mode that turns AES into an authenticated cipher. From a cryptographic point of view these are preferable to non-AEAD-based cipher suites (e.g. the ones with AES-CBC) because getting authenticated encryption right is hard without using dedicated ciphers.

For CPUs without the AES-NI instruction set, however, constant-time AES-GCM is slow and also hard to write and maintain. The majority of mobile phones, and mostly cheaper devices like tablets and notebooks on the market, thus cannot support efficient and safe AES-GCM cipher suite implementations.

Even if we ignored all those aforementioned pitfalls we still wouldn’t want to rely on AES-GCM cipher suites as the only good ones available. We need more diversity. Having widespread support for cipher suites using a second AEAD is necessary to defend against weaknesses in AES or AES-GCM that may be discovered in the future.

ChaCha20 and Poly1305, a stream cipher and a message authentication code, were designed with fast and constant-time implementations in mind. A combination of those two algorithms yields a safe and efficient AEAD construction, called ChaCha20/Poly1305, which allows TLS with a negligible performance impact even on low-end devices.

Firefox 47 will ship with two new ECDHE/ChaCha20 cipher suites as specified in the latest draft. We are looking forward to seeing the adoption of these increase and will, as a next step, work on prioritizing them over AES-GCM suites on devices not supporting AES-NI.

QMOFirefox 47 Beta 3 Testday, May 6th

Hey everyone,

I am happy to announce that this coming Friday, May 6th, we are organizing a new event – Firefox 47 Beta 3 Testday. The main focus will be on the Synced Tabs Sidebar and Youtube Embedded Rewrite features. The detailed instructions are available via this etherpad.

No previous testing experience is needed, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! 😉

See you all on Friday!

Mozilla Addons BlogWebExtensions in Firefox 48

We last updated you on our progress with WebExtensions when Firefox 47 landed in Developer Edition (Aurora), and today we have an update for Firefox 48, which landed in Developer Edition this week.

With the release of Firefox 48, we feel WebExtensions are in a stable state. We recommend developers start to use the WebExtensions API for their add-on development. Over the last release more than 82 bugs were closed on WebExtensions alone.

If you have authored an add-on in the past and are curious how it’s affected by the upcoming changes, please use the lookup tool. There is also a wiki page filled with resources to support you through the changes.

APIs Implemented

Many APIs gained improved support in this release, including: alarms, bookmarks, downloads, notifications, webNavigation, webRequest, windows and tabs.

The options v2 API is now supported so that developers can implement an options UI for their users. We do not plan to support the options v1 API, which is deprecated in Chrome. You can see an example of how to use this API in the WebExtensions examples on Github.
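
As a rough illustration (the file name options.html is a placeholder; see the linked examples for a complete add-on), the manifest entry might look like this:

"options_ui": {
  "page": "options.html"
}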

image08

In Firefox 48 we pushed hard to make the WebRequest API a solid foundation for privacy and security add-ons such as Ghostery, RequestPolicy and NoScript. With the current implementation of the onErrorOccurred function, it is now possible for Ghostery to be written as a WebExtension.

The addition of reliable origin information was a major requirement for existing Firefox security add-ons performing cross-origin checks such as NoScript or uBlock Origin. This feature is unique to Firefox, and is one of our first expansions beyond parity with the Chrome APIs for WebExtensions.

Although requestBody support is not in Firefox 48 at the time of publication, we hope it will be uplifted. This change to Gecko is quite significant because it will allow NoScript’s XSS filter to perform much better as a WebExtension, with huge speed gains (20 times or more) in some cases over the existing XUL and XPCOM extension for many operations (e.g. form submissions that include file uploads).

We’ve also had the chance to dramatically increase our unit test coverage again across the WebExtensions API, and now our modules have over 92% test coverage.

Content Security Policy Support

By default WebExtensions now use a Content Security Policy, limiting the location of resources that can be loaded. The default policy for Firefox is the same as Chrome’s:

"script-src 'self'; object-src 'self';"

This has many implications, such as the following: eval will no longer work, inline JavaScript will not be executed and only local scripts and resources are loaded. To relax that and define your own, you’ll need to define a new CSP using the content_security_policy entry in the WebExtension’s manifest.

For example, to load scripts from example.com, the manifest would include a policy configuration that would look like this:

"content_security_policy": "script-src 'self' https://example.com; object-src 'self'"

Please note: this will be a backwards incompatible change for any Firefox WebExtensions that did not adhere to this CSP. Existing WebExtensions that do not adhere to the CSP will need to be updated.

Chrome compatibility

To improve the compatibility with Chrome, a change has landed in Firefox that allows an add-on to be run in Firefox without the add-on id specified. That means that Chrome add-ons can now be run in Firefox with no manifest changes using about:debugging and loading it as a temporary add-on.

Support for WebExtensions with no add-on id specified in the manifest is being added to addons.mozilla.org (AMO) and our other tools, and should be in place on AMO for when Firefox 48 lands in release.

Android Support

With the release of Firefox 48 we are announcing Android support for WebExtensions. WebExtensions add-ons can now be installed and run on Android, just like any other add-on. However, because Firefox for Android makes use of a native user interface, anything that involves user interface interaction is currently unsupported (similar to existing extensions on Android).

You can see the full list of APIs supported on Android in the WebExtensions documentation on MDN; these include alarms, cookies, i18n and runtime.

Developer Support

In Firefox 45 the ability to load add-ons temporarily was added to about:debugging. In Firefox 48 several exciting enhancements are added to about:debugging.

If your add-on fails to load for some reason in about:debugging (most commonly due to JSON syntax errors), then you’ll get a helpful message appearing at the top of about:debugging. In the past, the error would be hidden away in the browser console.

image02

It still remains in the browser console, but it is now visible that an error occurred right in the same page where loading was triggered.

image04

Debugging

You can now debug background scripts and content scripts in the debugging tools. In this example, to debug background scripts I loaded the add-on bookmark-it from the MDN examples. Next click “Enable add-on debugging”, then click “debug”:

image03

You will need to accept the incoming remote debugger session request. Then you’ll have a Web Console for the background page. This allows you to interact with the background page. In this case I’m calling the toggleBookmark API.

image06

This will call the toggleBookmark function and bookmark the page (note the bookmark icon is now blue). If you want to debug the toggleBookmark function, just add the debugger statement at the appropriate line. When you trigger toggleBookmark, you’ll be dropped into the debugger:

image09

You can now debug content scripts. In this example I’ve loaded the beastify add-on from the MDN examples using about:debugging. This add-on runs a content script to alter the current page by adding a red border.

All you have to do to debug it is to insert the debugger statement into your content script, open up the Developer Tools debugger and trigger the debug statement:

image05

You are then dropped into the debugger ready to start debugging the content script.
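
Put together, the instrumented content script can be as small as this (an illustrative sketch, not the add-on's actual source):

// content.js (illustrative): the page modification with a debugger statement added
debugger; // pauses here once the Developer Tools debugger is attached
document.body.style.border = "5px solid red";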

Reloading

As you may know, restarting Firefox and adding in a new add-on can be slow, so about:debugging now allows you to reload an add-on. This will remove the add-on and then re-enable it, so that you don’t have to keep restarting Firefox. This is especially useful for changes to the manifest, which will not be automatically refreshed. It also resets UI buttons.

In the following example the add-on just calls setBadgeText to add “Test” onto the browser action button (in the top right) when you press the button added by the add-on.
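
The relevant part of such an add-on could be as small as this (a sketch, not the example's actual source):

// background.js (illustrative): set badge text when the browser action is clicked
chrome.browserAction.onClicked.addListener(function () {
  chrome.browserAction.setBadgeText({ text: "Test" });
});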

image03

Hitting reload for that add-on clears the state for that button and reloads the add-on from the manifest, meaning that after a reload, the “Test” text has been removed.

image07

This makes developing and debugging WebExtensions really easy. Coming soon, web-ext, the command line tool for developing add-ons, will gain the ability to trigger this each time a file in the add-on changes.

There are also lots of other ways to get involved with WebExtensions, so please check them out!

Update: clarified that no add-on id refers to the manifest as a WebExtension.

Daniel Stenbergcurl 7.49.0 goodies coming

Here’s a closer look at three new features that we’re shipping in curl and libcurl 7.49.0, to be released on May 18th 2016.

connect to this instead

If you’re one of the users who thought --resolve and doing Host: header tricks with --header weren’t good enough, you’ll appreciate that we’re adding yet another option for you to fiddle with the connection procedure. Another “Swiss army knife style” option for you who know what you’re doing.

With --connect-to you basically provide an internal alias for a certain name + port to instead internally use another name + port to connect to.

Instead of connecting to HOST1:PORT1, connect to HOST2:PORT2

It is very similar to --resolve which is a way to say: when connecting to HOST1:PORT1 use this ADDR2:PORT2. --resolve effectively prepopulates the internal DNS cache and makes curl completely avoid the DNS lookup and instead feeds it with the IP address you’d like it to use.

--connect-to doesn’t avoid the DNS lookup, but it will make sure that a different host name and destination port pair is used than what was found in the URL. A typical use case for this would be to make sure that your curl request asks a specific server out of several in a pool of many, where each has a unique name but you normally reach them with a single URL whose host name is otherwise load balanced.

--connect-to can be specified multiple times to add mappings for multiple names, so that even following HTTP redirects to other host names etc can be handled. You don’t even necessarily have to redirect the first used host name.

The libcurl option name for this feature is CURLOPT_CONNECT_TO.

Michael Kaufmann brought this feature.

http2 prior knowledge

In our ongoing quest to provide more and better HTTP/2 support in a world that is slowly but steadily doing more and more transfers over the new version of the protocol, curl now offers --http2-prior-knowledge.

As the name might hint, this is a way to tell curl that you have “prior knowledge” that the URL you specify goes to a host that you know supports HTTP/2. The term prior knowledge is in fact used in the HTTP/2 spec (RFC 7540) for this scenario.

Normally, when given an HTTP:// or HTTPS:// URL, curl makes no assumption that the server supports HTTP/2; instead it will try to upgrade from HTTP/1. The command line tool even tries to upgrade all HTTPS:// URLs by default, and libcurl can be told to do so.

libcurl wise, you ask for a prior knowledge use by setting CURLOPT_HTTP_VERSION to CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE.

Asking for http2 prior knowledge when the server does in fact not support HTTP/2 will give you an error back.

Diego Bes brought this feature.

TCP Fast Open

TCP Fast Open is documented in RFC 7413 and is basically a way to pass on data to the remote machine earlier in the TCP handshake – already in the SYN and SYN-ACK packets. This of course as a means to get data over faster and reduce latency.

The --tcp-fastopen option is supported on Linux and OS X only for now.

This is an idea and technique that has been around for a while and it is slowly getting implemented and supported by servers. There have been some reports of problems in the wild when “middle boxes” that fiddle with TCP traffic see these packets, that sometimes result in breakage. So this option is opt-in to avoid the risk that it causes problems to users.

A typical real-world case where you would use this option is when sending an HTTP POST to a site you don’t already have a connection established to. Just note that TFO relies on the client having had contact established with the server before, and having a special TFO “cookie” stored and non-expired.

TCP Fast Open is so far only used for clear-text TCP protocols in curl. These days more and more protocols switch over to their TLS counterparts (and there’s room for future improvements to add the initial TLS handshake parts with TFO). A related option to speed up TLS handshakes is --false-start (supported with the NSS or the secure transport backends).

With libcurl, you enable TCP Fast Open with CURLOPT_TCP_FASTOPEN.

Alessandro Ghedini brought this feature.

Support.Mozilla.OrgWhat’s Up with SUMO – 28th April

Hello, SUMO Nation!

Did you know that in Japanese mythology, foxes with nine tails are over a 100 years old and have the power of omniscience? I think we could get the same result if we put a handful of SUMO contributors in one room – maybe except for the tails ;-)

Here are the news from the world of SUMO!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 4th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

  • Hackathons everywhere! Find your people and get organized!
  • We have three upcoming iOS articles that will need localization. Their drafts are still in progress (pending review from the product team). Coming your way real soon – watch your dashboards!
  • New l10n milestones coming to your dashboards soon, as well.

Firefox – RELEEEEAAAAASE WEEEEEEK ;-)

What’s your experience of release week? Share with us in the comments or our forums! We are looking forward to seeing you all around SUMO – KEEP ROCKING THE HELPFUL WEB!

Air MozillaWeb QA Weekly Meeting, 28 Apr 2016

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air MozillaReps weekly, 28 Apr 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Chris H-CIn Lighter News…

…Windows XP Firefox users may soon be able to properly render poop.

winxpPoo

Here at Mozilla, we take these things seriously.

:chutten


Ian BickingA Product Journal: Data Up and Data Down

I’m blogging about the development of a new product in Mozilla, look here for my other posts in this series

We’re in the process of reviewing the KPI (Key Performance Indicators) for Firefox Hello (relatedly I joined the Firefox Hello team as engineering manager in October). Mozilla is trying (like everyone else) to make data-driven decisions. Basing decisions on data has some potential to remove or at least reveal bias. It provides a feedback mechanism that can provide continuity even as there are personnel changes. It provides some accountability over time. Data might also provide insight about product opportunities which we might otherwise miss.

Enter the KPI: for Hello (like most products) the key performance indicators are number of users, growth in users over time, user retention, and user sentiment (e.g., we use the Net Promoter Score). But like most projects those are not actually our success criteria: product engagement is necessary but not sufficient for organizational goals. Real goals might be revenue, social or political impact, or improvement in brand sentiment.

The value of KPI is often summarized as “letting us know how we’re doing”. I think the value KPI offers is more select:

  1. When you think a product is doing well, but it’s not, KPI is revealing.
  2. When you know a product isn’t doing well, KPI lets you triage: is it hopeless? Do we need to make significant changes? Do we need to maintain our approach but try harder?
  3. When a product is doing well the KPI gives you a sense of the potential. You can also triage success: Should we invest heavily? Stay the path? Is there no potential to scale the success far enough?

I’m skeptical that KPI can provide the inverse of 1: when you think a product is doing poorly, can KPI reveal that it is doing well? Because there’s another set of criteria that defines “success”, KPI is necessary but not sufficient. It requires a carefully objective executive to revise their negative opinion about the potential of a project based on KPI, and they may have reasonably lost faith that a project’s KPI-defined success can translate into success given organizational goals.

The other theoretical value of KPI is that you could correlate KPI with changes to the product, testing whether each change improves your product’s core value. I’m sure people manage to do this, with both very fine grained measurements and fine grained deployments of changes. But it seems more likely to me that for most projects given a change in KPI you’ll simply have to say “yup” and come up with unverified theories about that change.

The metrics that actually support the development of the product are not “key”, they are “incidental”. These are metrics that find bugs in the product design, hint at unexplored opportunities, confirm the small wins. These are metrics that are actionable by the people making the product: how do people interact with the tool? What do they use it for? Where do they get lost? What paths lead to greater engagement?

What is KPI for?

I’m trying to think more consciously about the difference between managing up and managing down. A softer way of phrasing this is managing in and managing out – but in this case I think the power dynamics are worth highlighting.

KPI is data that goes up. It lets someone outside the project – and above the project – make choices: about investment, redirection, cancellation. KPI data doesn’t go down, it does little to help the people doing the work. Feeling joy or despair about your project based on KPI is not actionable for those people on the inside of a project.

Incentive or support

I would also distinguish two kinds of management here: one perspective on management is that the organization should set up the right incentives and consequences so that rewards are aligned with organizational goals. The right incentives might make people adapt their behavior to get alignment; how they adapt is undefined. The right incentives might also exclude those who aren’t in alignment, culling misalignment from the organization. Another perspective is that the organization should work to support people, that misalignment of purpose between a person and the organization is more likely a bug than a misalignment of intention. Are people black boxes that we can nudge via punishment and reward? Are there less mechanical ways to influence change?

Student performance measurements are another kind of KPI. They let someone on the outside (of the classroom) know if things are going well or poorly for the students. They say little about why, and they don’t support improvement. School reform based on measurement presumes that teachers and schools are able to achieve the desired outcomes, but simply not willing. This is a risk of top-down reform: the people on the top use a perspective from the top. As an authority figure, how do I make decisions? The resulting reform is disempowering, supporting decisions from above, as opposed to using data to support the empowerment of those making the many day-to-day decisions that might effect a positive outcome.

Of course, having data available to inform decisions at all levels – from the executive to the implementor – would be great. But there’s a better criterion for data: it should support decision-making processes. What are your most important decisions?

As an example from Mozilla, we have data about how much Firefox is used and its marketshare. How much should we pay attention to this data? We certainly don’t have the granularity to connect changes in this KPI to individual changes we make in the project. The only real way to do that is through controlled experiments (which we are trying). We aren’t really willing to triage the project; no one is asking “should we just give up on Firefox?” The only real choice we can make is: are we investing enough in Firefox, or should we invest more? That’s a question worth asking, but we need to keep our attention on the question and not the data. For instance, if we decide to increase investment in Firefox, the immediate questions are: what kind of investment? Over what timescale? Data can be helpful to answer those questions, but not just any data.

Exploratory data

Weeks after I wrote (but didn’t publish) this post I encountered Why Greatness Cannot Be Planned: The Myth of the Objective, a presentation by Kenneth Stanley:

“Setting an objective can block its own achievement. It can be an obstacle to creativity and innovation in general. Without protection of individual autonomy collaboration can become dangerously objective.”

The example he uses is manually searching a space of nonlinear image generation to find interesting images. The positive example is one where people explore, branching from novel examples until something recognizable emerges:

One negative example is one where an algorithm explores with a goal in mind:

Another negative example is selection by voting, instead of personal exploration; a product of convergent consensus instead of divergent treasure hunting:

If you decide what you are looking for, you are unlikely to find it. This generated image search space is deliberately nonlinear, so it’s difficult to understand how actions affect outcomes. Though artificial, I think the example is still valid: in a competitive environment, the thing you are searching for is hard to find, because if it was not hard then someone would have found it. And it’s probably hard because actions affect outcomes in unexpected ways.

You could describe this observation as another way of describing the pitfalls of hill climbing: getting stuck at local maximums. Maybe an easy fix is to add a little randomness, to bounce around, to see what lies past the hill you’ve found. But the hills themselves can be distractions: each hill supposes a measurement. The divergent search doesn’t just reveal novel solutions, but it can reveal a novel rubric for success.

This is also a similar observation to the one in Innovator’s Dilemma: specifically, that in these cases good management consistently and deliberately keeps a company away from novelty and on the established track, and it does so by paying attention to the feedback that defines the company’s (current) success. The disruptive innovation, a term somewhat synonymous with the book, is an innovation that requires a change in metrics; a large portion of the innovation is finding the metric (and so finding the market), not implementing the maximizing solution.

But I digress from the topic of data. If we’re going to be data-driven in entirely new directions, we may need data that doesn’t answer a question or support a decision, but just tells us about things we don’t know: data that supports exploration rather than a hypothesis to confirm or reject, because we are still trying to discover our hypothesis. We use the data to look for the hidden variable, the unsolved need, the desire that has not been articulated.

I think we look for this kind of data more often than we would admit. Why else would we want complex visualizations? The visualizations are our attempt at finding a pattern we don’t expect to find.

In Conclusion

I’m lousy at conclusions. All those words up there are like data, and I’m curious what they mean, but I haven’t figured it out yet.

Geoff LankowDoes Firefox update despite being set to "never check for updates"? This might be why.

If, like me, you have set Firefox to "never check for updates" for some reason, and yet it does sometimes anyway, this could be your problem: the chrome debugger.

The chrome debugger uses a separate profile, with the preferences copied from your normal profile. But, if your prefs (such as app.update.enabled) have changed, they remain in the debugger profile as they were when you first opened the debugger.

App update can be started by any profile using the app, so the debugger profile sees the pref as it once was, and goes looking for updates.

Solution? Copy the app update prefs from the main profile to the debugger profile (mine was at ~/.cache/mozilla/firefox/31392shv.default/chrome_debugger_profile), or just destroy the debugger profile and have a new one created next time you use it.
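
If you go the copy route, one way is to pin the relevant prefs in a user.js file inside the debugger profile directory. This is only a sketch with the single pref mentioned above; copy whichever app.update.* prefs you have actually customized in your main profile:

// user.js inside chrome_debugger_profile (illustrative; mirror your own
// app.update.* customizations from the main profile's prefs.js)
user_pref("app.update.enabled", false);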

Just thought you might like to know.

Air MozillaPrivacy Lab - April 2016 - Encryption vs. the FBI

Privacy Lab - April 2016 - Encryption vs. the FBI Riana Pfefferkorn, Cryptography Fellow at the Stanford Center for Internet and Society, will talk about the FBI's dispute with Apple over encrypted iPhones.

Mike HommeyAnnouncing git-cinnabar 0.3.2

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It lets you clone, pull, and push from/to Mercurial remote repositories using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

This is mostly a bug and regression-fixing release.

What’s new since 0.3.1?

  • Fixed a performance regression when cloning big repositories on OSX.
  • git configuration items with line breaks are now supported.
  • Fixed a number of issues with corner cases in Mercurial data (such as nodes with no first parent, malformed .hgtags, etc.).
  • Fixed a stack overflow, a buffer overflow and a use-after-free in cinnabar-helper.
  • Better support for git worktrees, and for being called from subdirectories.
  • Updated git to 2.7.4 for cinnabar-helper.
  • Properly remove all refs meant to be removed when using a git version lower than 2.1.

Mozilla Addons BlogJoin the Featured Add-ons Community Board

Are you a big fan of add-ons? Think you can help identify the best content to spotlight on AMO? Then let’s talk!

All the add-ons featured on addons.mozilla.org (AMO) are selected by a board of community members. Each board consists of 5-8 members who nominate and select featured add-ons once a month for six months. Featured add-ons help users discover what’s new and useful, and downloads increase dramatically in the months they’re featured, so your participation really makes an impact.

And now the time has come to assemble a new board for the months July – December.

Anyone from the add-ons community is welcome to apply: power users, theme designers, developers, and evangelists. Priority will be given to applicants who have not served on the board before, followed by those from previous boards, and finally from the outgoing board. This page provides more information on the duties of a board member. To be considered, please email us at amo-featured@mozilla.org with your name, and tell us how you’re involved with AMO. The deadline is Tuesday, May 10, 2016 at 23:59 PDT. The new board will be announced about a week after.

We look forward to hearing from you!

Michael KaplyBroken Add-ons in Firefox 46

A lot of add-ons are being broken by a subtle change in Firefox 46, in particular the removal of legacy array/generator comprehension.

Most of these add-ons (including mine) did not use array comprehension intentionally, but they copied some code from this page on developer.mozilla.org for doing an md5 hash of a string. It looked like this:

var s = [toHexString(hash.charCodeAt(i)) for (i in hash)].join("");

You should search through your source code for toHexString and make sure you aren’t using this. MDN was updated in January to fix this. Here’s what the new code looks like:

var s = Array.from(hash, (c, i) => toHexString(hash.charCodeAt(i))).join("");

The new code will only work in Firefox 32 and beyond. If for some reason you need an older version, you can go through the history of the page to find the array based version.
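
If you do need to support versions older than Firefox 32 and don’t want to dig through the page history, a plain loop works everywhere. This is only an illustrative equivalent, assuming the toHexString helper from that MDN snippet is in scope:

// Illustrative only: a version-agnostic equivalent of the snippets above,
// assuming toHexString(code) and hash from the MDN md5 example are in scope.
var s = "";
for (var i = 0; i < hash.length; i++) {
  s += toHexString(hash.charCodeAt(i));
}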

Using this old code will cause a syntax error, so it will cause much more breakage than you realize. You’ll want to get it fixed sooner rather than later, because Firefox 46 started rolling out yesterday.

As a side note, Giorgio Maone caught this in January, but unfortunately all that was updated was the MDN page.

Air MozillaThe Joy of Coding - Episode 55, 27 Apr 2016

The Joy of Coding - Episode 55 mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaApril 2016 Speaker Series: When Change is the Only Constant, Org Structure Doesn't Matter - Kirsten Wolberg

April 2016 Speaker Series: When Change is the Only Constant, Org Structure Doesn't Matter - Kirsten Wolberg Regardless of whether an organization is decentralized or command & control, large-scale changes are never simple nor straightforward. There's no silver bullets. And yet, when...

Rail AliievFirefox 46.0 and SHA512SUMS

In my previous post I introduced the new release process we have been adopting in the 46.0 release cycle.

Release build promotion has been in production since Firefox 46.0 Beta 1. We have discovered some minor issues; some of them are already fixed, while others are still pending.

One of the visible bugs is Bug 1260892. We generate a big SHA512SUMS file, which should contain all important checksums. With numerous changes to the process the file doesn't represent all required files anymore. Some files are missing, some have different names.

We are working on fixing the bug, but in the meantime you can use the following workaround to verify the files.

For example, if you want to verify http://ftp.mozilla.org/pub/firefox/releases/46.0/win64/ach/Firefox%20Setup%2046.0.exe, you need to use the following two files:

http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums

http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums.asc

Example commands:

# download all required files
$ wget -q http://ftp.mozilla.org/pub/firefox/releases/46.0/win64/ach/Firefox%20Setup%2046.0.exe
$ wget -q http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums
$ wget -q http://ftp.mozilla.org/pub/firefox/candidates/46.0-candidates/build5/win64/ach/firefox-46.0.checksums.asc
$ wget -q http://ftp.mozilla.org/pub/firefox/releases/46.0/KEY
# Import Mozilla Releng key into a temporary GPG directory
$ mkdir .tmp-gpg-home && chmod 700 .tmp-gpg-home
$ gpg --homedir .tmp-gpg-home --import KEY
# verify the signature of the checksums file
$ gpg --homedir .tmp-gpg-home --verify firefox-46.0.checksums.asc && echo "OK" || echo "Not OK"
# calculate the SHA512 checksum of the file
$ sha512sum "Firefox Setup 46.0.exe"
c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479  Firefox Setup 46.0.exe
# look up the checksum in the checksums file
$ grep c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479 firefox-46.0.checksums
c2ed64298ac2140d8dbdaed28cabc90b38dd9444e9c0d6dd335a2a32cf043a35314945536a5c75124a88bf418a4e2ba77256be223425380e7fcc45a97da8f479 sha512 46275456 install/sea/firefox-46.0.ach.win64.installer.exe

This is just a temporary workaround and the bug will be fixed ASAP.

Air MozillaSuMo Community Call 27th April 2016

SuMo Community Call 27th April 2016 This is the SUMO weekly call. We meet as a community every Wednesday, 17:00 - 17:30 UTC. The etherpad is here: https://public.etherpad-mozilla.org/p/sumo-2016-04-27

Niko MatsakisNon-lexical lifetimes: introduction

Over the last few weeks, I’ve been devoting my free time to fleshing out the theory behind non-lexical lifetimes (NLL). I think I’ve arrived at a pretty good point and I plan to write various posts talking about it. Before getting into the details, though, I wanted to start out with a post that lays out roughly how today’s lexical lifetimes work and gives several examples of problem cases that we would like to solve.

The basic idea of the borrow checker is that values may not be mutated or moved while they are borrowed. But how do we know whether a value is borrowed? The idea is quite simple: whenever you create a borrow, the compiler assigns the resulting reference a lifetime. This lifetime corresponds to the span of the code where the reference may be used. The compiler will infer this lifetime to be the smallest lifetime that it can that still encompasses all the uses of the reference.

Note that Rust uses the term lifetime in a very particular way. In everyday speech, the word lifetime can be used in two distinct – but similar – ways:

  1. The lifetime of a reference, corresponding to the span of time in which that reference is used.
  2. The lifetime of a value, corresponding to the span of time before that value gets freed (or, put another way, before the destructor for the value runs).

This second span of time, which describes how long a value is valid, is of course very important. We refer to that span of time as the value’s scope. Naturally, lifetimes and scopes are linked to one another. Specifically, if you make a reference to a value, the lifetime of that reference cannot outlive the scope of that value. Otherwise, your reference would be pointing into freed memory.

To better see the distinction between lifetime and scope, let’s consider a simple example. In this example, the vector data is borrowed (mutably) and the resulting reference is passed to a function capitalize. Since capitalize does not return the reference back, the lifetime of this borrow will be confined to just that call. The scope of data, in contrast, is much larger, and corresponds to a suffix of the fn body, stretching from the let until the end of the enclosing scope.

fn foo() {
    let mut data = vec!['a', 'b', 'c']; // --+ 'scope
    capitalize(&mut data[..]);          //   |
//  ^~~~~~~~~~~~~~~~~~~~~~~~~ 'lifetime //   |
    data.push('d');                     //   |
    data.push('e');                     //   |
    data.push('f');                     //   |
} // <---------------------------------------+

fn capitalize(data: &mut [char]) {
    // do something
}

This example also demonstrates something else. Lifetimes in Rust today are quite a bit more flexible than scopes (if not as flexible as we might like, hence this RFC):

  • A scope generally corresponds to some block (or, more specifically, a suffix of a block that stretches from the let until the end of the enclosing block) [1].
  • A lifetime, in contrast, can also span an individual expression, as this example demonstrates. The lifetime of the borrow in the example is confined to just the call to capitalize, and doesn’t extend into the rest of the block. This is why the calls to data.push that come below are legal.

So long as a reference is only used within one statement, today’s lifetimes are typically adequate. Problems arise however when you have a reference that spans multiple statements. In that case, the compiler requires the lifetime to be the innermost expression (which is often a block) that encloses both statements, and that is typically much bigger than is really necessary or desired. Let’s look at some example problem cases. Later on, we’ll see how non-lexical lifetimes fixes these cases.

Problem case #1: references assigned into a variable

One common problem case is when a reference is assigned into a variable. Consider this trivial variation of the previous example, where the &mut data[..] slice is not passed directly to capitalize, but is instead stored into a local variable:

fn bar() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..]; // <-+ 'lifetime
    capitalize(slice);         //   |
    data.push('d'); // ERROR!  //   |
    data.push('e'); // ERROR!  //   |
    data.push('f'); // ERROR!  //   |
} // <------------------------------+

The way that the compiler currently works, assigning a reference into a variable means that its lifetime must be as large as the entire scope of that variable. In this case, that means the lifetime is now extended all the way until the end of the block. This in turn means that the calls to data.push are now in error, because they occur during the lifetime of slice. It’s logical, but it’s annoying.

In this particular case, you could resolve the problem by putting slice into its own block:

fn bar() {
    let mut data = vec!['a', 'b', 'c'];
    {
        let slice = &mut data[..]; // <-+ 'lifetime
        capitalize(slice);         //   |
    } // <------------------------------+
    data.push('d'); // OK
    data.push('e'); // OK
    data.push('f'); // OK
}

Since we introduced a new block, the scope of slice is now smaller, and hence the resulting lifetime is smaller. Of course, introducing a block like this is kind of artificial and also not an entirely obvious solution.

Problem case #2: conditional control flow

Another common problem case is when references are used in only one match arm. This most commonly arises around maps. Consider this function, which, given some key, processes the value found in map[key] if it exists, or else inserts a default value:

fn process_or_default<K,V:Default>(map: &mut HashMap<K,V>,
                                   key: K) {
    match map.get_mut(&key) { // -------------+ 'lifetime
        Some(value) => process(value),     // |
        None => {                          // |
            map.insert(key, V::default()); // |
            //  ^~~~~~ ERROR.              // |
        }                                  // |
    } // <------------------------------------+
}

This code will not compile today. The reason is that the map is borrowed as part of the call to get_mut, and that borrow must encompass not only the call to get_mut, but also the Some branch of the match. The innermost expression that encloses both of these expressions is the match itself (as depicted above), and hence the borrow is considered to extend until the end of the match. Unfortunately, the match encloses not only the Some branch, but also the None branch, and hence when we go to insert into the map in the None branch, we get an error that the map is still borrowed.

This particular example is relatively easy to work around. One can (frequently) move the code for None out from the match like so:

fn process_or_default1<K,V:Default>(map: &mut HashMap<K,V>,
                                    key: K) {
    match map.get_mut(&key) { // -------------+ 'lifetime
        Some(value) => {                   // |
            process(value);                // |
            return;                        // |
        }                                  // |
        None => {                          // |
        }                                  // |
    } // <------------------------------------+
    map.insert(key, V::default());
}

When the code is adjusted this way, the call to map.insert is not part of the match, and hence it is not part of the borrow. While this works, it is of course unfortunate to require these sorts of manipulations, just as it was when we introduced an artificial block in the previous example.

Problem case #3: conditional control flow across functions

While we were able to work around problem case #2 in a relatively simple, if irritating, fashion, there are other variations of conditional control flow that cannot be so easily resolved. This is particularly true when you are returning a reference out of a function. Consider the following function, which returns the value for a key if it exists, and inserts a new value otherwise (for the purposes of this section, assume that the entry API for maps does not exist):

fn get_default<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                               key: K)
                               -> &'m mut V {
    match map.get_mut(&key) { // -------------+ 'm
        Some(value) => value,              // |
        None => {                          // |
            map.insert(key, V::default()); // |
            //  ^~~~~~ ERROR               // |
            map.get_mut(&key).unwrap()     // |
        }                                  // |
    }                                      // |
}                                          // v

At first glance, this code appears quite similar to the code we saw before. And indeed, just as before, it will not compile. But in fact the lifetimes at play are quite different. The reason is that, in the Some branch, the value is being returned out to the caller. Since value is a reference into the map, this implies that the map will remain borrowed until some point in the caller (the point 'm, to be exact). To get a better intuition for what this lifetime parameter 'm represents, consider some hypothetical caller of get_default: the lifetime 'm then represents the span of code in which that caller will use the resulting reference:

fn caller() {
    let mut map = HashMap::new();
    ...
    {
        let v = get_default(&mut map, key); // -+ 'm
          // +-- get_default() -----------+ //  |
          // | match map.get_mut(&key) {  | //  |
          // |   Some(value) => value,    | //  |
          // |   None => {                | //  |
          // |     ..                     | //  |
          // |   }                        | //  |
          // +----------------------------+ //  |
        process(v);                         //  |
    } // <--------------------------------------+
    ...
}

If we attempt the same workaround for this case that we tried in the previous example, we will find that it does not work:

fn get_default1<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                                key: K)
                                -> &'m mut V {
    match map.get_mut(&key) { // -------------+ 'm
        Some(value) => return value,       // |
        None => { }                        // |
    }                                      // |
    map.insert(key, V::default());         // |
    //  ^~~~~~ ERROR (still)                  |
    map.get_mut(&key).unwrap()             // |
}                                          // v

Whereas before the lifetime of value was confined to the match, this new lifetime extends out into the caller, and therefore the borrow does not end just because we exited the match. Hence it is still in scope when we attempt to call insert after the match.

The workaround for this problem is a bit more involved. It relies on the fact that the borrow checker uses the precise control-flow of the function to determine what borrows are in scope.

fn get_default2<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                                key: K)
                                -> &'m mut V {
    if map.contains(&key) {
    // ^~~~~~~~~~~~~~~~~~ 'n
        return match map.get_mut(&key) { // + 'm
            Some(value) => value,        // |
            None => unreachable!()       // |
        };                               // v
    }

    // At this point, `map.get_mut` was never
    // called! (As opposed to having been called,
    // but its result no longer being in use.)
    map.insert(key, V::default()); // OK now.
    map.get_mut(&key).unwrap()
}

What has changed here is that we moved the call to map.get_mut inside of an if, and we have set things up so that the if body unconditionally returns. What this means is that a borrow begins at the point of get_mut, and that borrow lasts until the point 'm in the caller, but the borrow checker can see that this borrow will not have even started outside of the if. So it does not consider the borrow in scope at the point where we call map.insert.

This workaround is more troublesome than the others, because the resulting code is actually less efficient at runtime, since it must do multiple lookups.

It’s worth noting that Rust’s hashmaps include an entry API that one could use to implement this function today. The resulting code is both nicer to read and more efficient even than the original version, since it avoids extra lookups on the not present path as well:

fn get_default3<'m,K,V:Default>(map: &'m mut HashMap<K,V>,
                                key: K)
                                -> &'m mut V {
    map.entry(key)
       .or_insert_with(|| V::default())
}

Regardless, the problem exists for other data structures besides HashMap, so it would be nice if the original code passed the borrow checker, even if in practice using the entry API would be preferable. (Interestingly, the limitation of the borrow checker here was one of the motivations for developing the entry API in the first place!)

Conclusion

This post looked at various examples of Rust code that do not compile today, and showed how they can be fixed using today’s system. While it’s good that workarounds exist, it’d be better if the code just compiled as is. In an upcoming post, I will outline my plan for how to modify the compiler to achieve just that.

Endnotes

1. Scopes always correspond to blocks with one exception: the scope of a temporary value is sometimes the enclosing statement.

Air MozillaBay Area Rust Meetup April 2016

Bay Area Rust Meetup April 2016 Rust meetup on the subject of operating systems.

Air MozillaConnected Devices Weekly Program Review, 26 Apr 2016

Connected Devices Weekly Program Review Weekly project updates from the Mozilla Connected Devices team.

Richard NewmanDifferent kinds of storage

I’ve been spending most of my time so far on Project Tofino thinking about how a user agent stores data.

A user agent is software that mediates your interaction with the world. A web browser is one particular kind of user agent: one that fetches parts of the web and shows them to you.

(As a sidenote: browsers are incredibly complicated, not just for the obvious reasons of document rendering and navigation, but also because parts of the web need to run code on your machine and parts of it are actively trying to attack and track you. One of a browser’s responsibilities is to keep you safe from the web.)

Chewing on Redux, separation of concerns, and Electron’s process model led to us drawing a distinction between a kind of ‘profile service’ and the front-end browser itself, with ‘profile’ defined as the data stored and used by a traditional browser window. You can see the guts of this distinction in some of our development docs.

The profile service stores full persistent history and data like it. The front-end, by contrast, has a pure Redux data model that’s much closer to what it needs to show UI — e.g., rather than all of the user’s starred pages, just a list of the user’s five most recent.

The front-end is responsible for fetching pages and showing the UI around them. The back-end service is responsible for storing data and answering questions about it from the front-end.

To build that persistent storage we opted for a mostly event-based model: simple, declarative statements about the user’s activity, stored in SQLite. SQLite gives us durability and known performance characteristics in an embedded database.

On top of this we can layer various views (materialized or not). The profile service takes commands as input and pushes out diffs, and the storage itself handles writes by logging events and answering queries through views. This is the CQRS concept applied to an embedded store: we use different representations for readers and writers, so we can think more clearly about the transformations between them.
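
To make that concrete, here is a minimal sketch of the event-log-plus-views idea using the sqlite3 Node module. This is not Tofino's actual schema; the table, view, and column names are purely illustrative:

// Minimal sketch of "log events, answer queries through views".
// Not Tofino's real schema; names and the five-item limit are illustrative.
const sqlite3 = require('sqlite3');
const db = new sqlite3.Database('profile.db');

db.serialize(() => {
  // Writers only append simple, declarative events about the user's activity.
  db.run(`CREATE TABLE IF NOT EXISTS events (
            id   INTEGER PRIMARY KEY AUTOINCREMENT,
            ts   INTEGER NOT NULL,   -- when the event happened
            kind TEXT    NOT NULL,   -- e.g. 'visit', 'star'
            url  TEXT    NOT NULL
          )`);

  // Readers see a derived representation, e.g. the five most recent stars,
  // which is much closer to what the front-end's Redux model needs.
  db.run(`CREATE VIEW IF NOT EXISTS recent_stars AS
            SELECT url, MAX(ts) AS starred_at
            FROM events
            WHERE kind = 'star'
            GROUP BY url
            ORDER BY starred_at DESC
            LIMIT 5`);

  db.run(`INSERT INTO events (ts, kind, url) VALUES (?, ?, ?)`,
         [Date.now(), 'star', 'https://example.com/']);

  db.all(`SELECT * FROM recent_stars`, (err, rows) => {
    if (err) throw err;
    console.log(rows); // the kind of diff the front-end might be fed
  });
});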

Where next?

One of the reasons we have a separate service is to acknowledge that it might stick around when there are no browser windows open, and that it might be doing work other than serving the immediate needs of a browser window. Perhaps the service is pre-fetching pages, or synchronizing your data in the background, or trying to figure out what you want to read next. Perhaps you can interact with the service from something other than a browser window!

Some of those things need different kinds of storage. Ad hoc integrations might be best served by a document store; recommendations might warrant some kind of graph database.

When we look through that lens we no longer have just a profile service wrapping profile storage. We have a more general user agent service, and one of the data sources it manages is your profile data.

Mozilla Addons BlogMigrating Popup ALT Attribute from XUL/XPCOM to WebExtensions

Today’s post comes from Piro, the developer of Popup ALT Attribute, in addition to 40 other add-ons. He shares his thoughts about migrating XUL/XPCOM add-ons to WebExtensions, and shows us how he did it with Popup ALT Attribute. You can see the full text of this post on his personal blog.

***

Hello, add-on developers. My name is YUKI Hiroshi aka Piro, a developer of Firefox add-ons. For many years I developed Firefox and Thunderbird add-ons personally and for business, based on XUL and XPCOM.

I recently started to research the APIs that are required to migrate my add-ons to WebExtensions, because Mozilla announced that XUL/XPCOM add-ons will be deprecated at the end of 2017. I realized that only some add-ons can be migrated with currently available APIs, and Popup ALT Attribute is one such add-on.

Here is the story of how I migrated it.

What’s the add-on?

Popup ALT Attribute is an ancient add-on started in 2002, to show what is written in the alt attribute of img HTML elements on web pages. By default, Firefox shows only the title attribute as a tooltip.

Initially, the add-on was implemented to replace an internal function FillInHTMLTooltip() of Firefox itself.

In February 2016, I migrated it to be e10s-compatible. It is worth noting that depending on your add-on, if you can migrate it directly to WebExtensions, it will be e10s-compatible by default.

Re-formatting in the WebExtensions style

I read the tutorial on how to build a new simple WebExtensions-based add-on from scratch before migration, and I realized that bootstrapped extensions are similar to WebExtensions add-ons:

  • They are dynamically installed and uninstalled.
  • They are mainly based on JavaScript code and some static manifest files.

My add-on was easily re-formatted as a WebExtensions add-on, because I had already migrated it to a bootstrapped extension.

This is the initial version of the manifest.json I wrote. There were no localization and options UI:

{
  "manifest_version": 2,
  "name": "Popup ALT Attribute",
  "version": "4.0a1",
  "description": "Popups alternate texts of images or others like NetscapeCommunicator(Navigator) 4.x, and show long descriptions in the multi-row tooltip.",
  "icons": { "32": "icons/icon.png" },
  "applications": {
    "gecko": { "id": "{61FD08D8-A2CB-46c0-B36D-3F531AC53C12}",
               "strict_min_version": "48.0a1" }
  },
  "content_scripts": [
    { "all_frames": true,
      "matches": ["<all_urls>"],
      "js": ["content_scripts/content.js"],
      "run_at": "document_start" }
  ]
}

I had already separated the main script to a frame script and a loader for it. On the other hand, manifest.json can have some manifest keys to describe how scripts are loaded. It means that I don’t need to put my custom loaders in the package anymore. Actually, a script for any web page can be loaded with the content_scripts rule in the above sample. See the documentation for content_scripts for more details.

So finally only 3 files were left.

Before:

+ install.rdf
+ icon.png
+ [components]
+ [modules]
+ [content]
    + content-utils.js

And after:

+ manifest.json (migrated from install.rdf)
+ [icons]
|   + icon.png (moved)
+ [content_scripts]
    + content.js (moved and migrated from content-utils.js)

And I still had to isolate my frame script from XPCOM.

  • The script touched nsIPrefBranch and some XPCOM components via XPConnect, so they were temporarily commented out.
  • User preferences were not available and only default configurations were there as fixed values.
  • Some constant properties accessed, like Ci.nsIDOMNode.ELEMENT_NODE, had to be replaced with Node.ELEMENT_NODE.
  • The listener for mousemove events from web pages used to be attached to the frame script’s global namespace; it had to be re-attached to each web page’s document, because the script is now executed in each web page directly (see the sketch after this list).
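
To illustrate those last two points, here is a rough sketch of what such changes can look like; this is not the add-on’s actual code, and onMouseMove is just a placeholder name:

// Illustrative sketch only, not the real Popup ALT Attribute code.
function onMouseMove(event) {
  var node = event.target;
  // Ci.nsIDOMNode.ELEMENT_NODE (XPCOM) becomes the plain DOM constant:
  if (node.nodeType === Node.ELEMENT_NODE) {
    // e.g. inspect node.getAttribute("alt") here
  }
}
// The frame script's global listener becomes a listener on the page's
// document, because the content script now runs in each page directly:
document.addEventListener("mousemove", onMouseMove, true);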

Localization

For the old install.rdf I had a localized description. In WebExtensions add-ons I had to do it in a different way. See how to localize messages for details. In short, I did the following:

Added files to define localized descriptions:

+ manifest.json
+ [icons]
+ [content_scripts]
+ [_locales]
    + [en_US]
    |   + messages.json (added)
    + [ja]
        + messages.json (added)

Note, en_US is different from en-US in install.rdf.

English locale, _locales/en_US/messages.json was:

{
  "name": { "message": "Popup ALT Attribute" },
  "description": { "message": "Popups alternate texts of images or others like NetscapeCommunicator(Navigator) 4.x, and show long descriptions in the multi-row tooltip." }
}

Japanese locale, _locales/ja/messages.json was also included. And, I had to update my manifest.json to embed localized messages:

{
  "manifest_version": 2,
  "name": "__MSG_name__",
  "version": "4.0a1",
  "description": "__MSG_description__",
  "default_locale": "en_US",
  ...

__MSG_****__ placeholders in string values are automatically replaced with localized messages. You need to specify the default locale manually via the default_locale key.

Sadly, Firefox 45 does not support the localization feature, so you need to use Nightly 48.0a1 or newer to try localization.

User preferences

Currently, WebExtensions does not provide any feature completely compatible with nsIPrefBranch. Instead, there are simple storage APIs, which can be used as an alternative to nsIPrefBranch to set/get user preferences. This add-on had no configuration UI but had some secret preferences to control its advanced features, so I did it for future migrations of my other add-ons, as a trial.

Then I encountered a large limitation: the storage API is not available in content scripts. I had to create a background script just to access the storage, and communicate with it via the inter-sandboxes messaging system. [Updated 4/27/16: bug 1197346 has been fixed on Nightly 49.0a1, so now you don’t need any hack to access the storage system from content scripts anymore. Now, my library (Configs.js) just provides easy access for configuration values instead of the native storage API.]

Finally, I created a tiny library to do that. I don’t describe how I did it here, but if you want to know the details, please see the source. There are just 177 lines.
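
For illustration, the core of such a relay might look roughly like this. This is a hypothetical sketch rather than the actual Configs.js; the message and key names are made up:

// background script: has access to the storage API
chrome.runtime.onMessage.addListener(function(message, sender, sendResponse) {
  if (message.type === "get-configs") {
    chrome.storage.local.get(message.keys, function(values) {
      sendResponse(values);
    });
    return true; // keep the channel open for the asynchronous response
  }
});

// content script: no direct storage access at that time, so ask the background page
chrome.runtime.sendMessage({ type: "get-configs", keys: ["popupDelay"] },
  function(values) {
    // fall back to a default when nothing has been stored yet
    var popupDelay = values.popupDelay !== undefined ? values.popupDelay : 500;
    // ... use the configuration value ...
  });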

I had to update my manifest.json to use the library from both the background page and the content script, like:

  "background": {
    "scripts": [
      "common/Configs.js", /* the library itself */
      "common/common.js"   /* codes to use the library */
    ]
  },
  "content_scripts": [
    { "all_frames": true,
      "matches": ["<all_urls>"],
      "js": [
        "common/Configs.js", /* the library itself */
        "common/common.js",  /* codes to use the library */
        "content_scripts/content.js"
      ],
      "run_at": "document_start" }
  ]

Scripts listed in the same section share a namespace for the section. I didn’t have to write any code like require() to load a script from others. Instead, I had to be careful about the listing order of scripts, and wrote a script requiring a library after the library itself, in each list.

One last problem was: how to do something like the about:config or the MCD — general methods to control secret preferences across add-ons.

For my business clients, I usually provide add-ons and use MCD to lock their configurations. (There are some common requirements for business use of Firefox, so combinations of add-ons and MCD are more reasonable than creating private builds of Firefox with different configurations for each client.)

I think I still have to research around this point.

Options UI

WebExtensions provides a feature to create options pages for add-ons. It is also not supported on Firefox 45, so you need to use Nightly 48.0a1 for now. As I previously said, this add-on didn’t have its configuration UI, but I implemented it as a trial.

In XUL/XPCOM add-ons, rich UI elements like <checkbox>, <textbox>, <menulist>, and more are available, but these are going away at the end of next year. So I had to implement a custom configuration UI based on pure HTML and JavaScript. (If you need more rich UI elements, some known libraries for web applications will help you.)

On this step I created two libraries:

Conclusion

I’ve successfully migrated my Popup ALT Attribute add-on from XUL/XPCOM to WebExtensions. Now it is just a branch but I’ll release it after Firefox 48 is available.

Here are reasons why I could do it:

  • It was a bootstrapped add-on, so I had already isolated the add-on from all destructive changes.
  • The core implementation of the add-on was similar to a simple user script. Essential actions of the add-on were enclosed inside the content area, and no privilege was required to do that.

However, it is a rare case for me. My other 40+ add-ons require some privilege, and/or they work outside the content area. Most of my cases are such non-typical add-ons.

I have to do triage, plan, and request new APIs not only for me but for other XUL/XPCOM add-on developers also.

Thank you for reading.

The Mozilla BlogUpdate to Firefox Released Today

The latest version of Firefox was released today. It features an improved look and feel for Linux users, a minor security improvement and additional updates for all Firefox users.

The update to Firefox for Android features minor changes, including an improvement to user notifications and clearer homescreen shortcut icons.

More information:

Air MozillaMartes mozilleros, 26 Apr 2016

Martes mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Marcia KnousNightly is where I will live

After some time working on Firefox OS and Connected Devices, I am moving back to Desktop land. Going forward I will be working with the Release Management Team as the Nightly Program Manager. That means I would love to work with all of you to identify any potential issues in Nightly and help bring them to resolution. To that end, I have done a few things. First, we now have a Telegram Group for Nightly Testers. Feel free to join that group if you want to keep up with issues we are

David LawrenceHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1195736] intermittent internal error: “file error – nav_link: not found” (also manifests as fields_lhs: not found)

discuss these changes on mozilla.tools.bmo.


Daniel GlazmanFirst things first

Currently implementing many new features into Postbox, I carefully read (several times) Mark Surman's recent article on Thunderbird's future. I also read Simon Phipps's report twice. Then the contract offer for a Thunderbird Architect posted by Mozilla must be read too:

... Thunderbird is facing a number of technical challenges, including but not limited to:

  • ...
  • The possible future deprecation of XUL, its current user interface technology and XPCOM, its current component technology, by Mozilla
  • ...

In practice, the last line above means for Thunderbird:

  1. rewrite the whole UI and the whole JS layer with it
  2. most probably rewrite the whole SMTP/MIME/POP/IMAP/LDAP/... layer
  3. most probably have a new Add-on layer or, far worse, no more Add-ons

Well, sorry to say, but that's a bit of a « technical challenge »... So yes, that's indeed a « fork in the road » but let's be serious a second, it's unfortunately this kind of fork; rewriting the app is not a question of if but only a question of when. Unless Thunderbird dies entirely, of course.

Evaluating potential hosts for Thunderbird and a fortiori choosing one seems to me rather difficult without first discussing the XUL/XPCOM-less future of the app, i.e. without having in hand the second milestone delivered by the Thunderbird Architect. First things first. I would also be interested in knowing how many people MoCo will dedicate to the deXULXPCOMification of Firefox; that would allow some extrapolations and some pretty solid requirements (and probably rather insurmountable ones...) for TB's host.

Last but not least and from a more personal point of view, I feel devastated confronting Mark's article and the Mozilla Manifesto.

Daniel StenbergAbsorbing 1,000 emails per day

Some people say email is dead. Some people say there are “email killers” and bring up a bunch of chat and instant messaging services. I think those people communicate far too little to understand how email can scale.

I receive up to around 1,000 emails per day. I average a little less than that, but I do have spikes way above.

Why do I get a thousand emails?

Primarily because I participate on a lot of mailing lists. I run a handful of open source projects myself, each with at least one list. I follow a bunch more projects; more mailing lists. We have a whole set of mailing lists at work (Mozilla) and I participate and follow several groups in the IETF. Lists and lists. I discuss things with friends on a few private mailing lists. I get notifications from services about things that happen (commits, bugs submitted, builds that break, things that need to get looked at). Mails, mails and mails.

Don’t get me wrong. I prefer email to web forums and stuff because email allows me to participate in literally hundreds of communities from a single spot in an asynchronous manner. That’s a good thing. I would not be able to do the same thing if I had to use one of those “email killers” or web forums.

Unwanted email

I unsubscribe from lists that I grow tired from. I stamp down on spam really hard and I run aggressive filters and blacklists that actually make me receive rather few spam emails these days, percentage wise. There are nowadays about 3,000 emails per month addressed to me that my mail server accepts that are then classified as spam by spamassassin. I used to receive a lot more before we started using better blacklists. (During some periods in the past I received well over a thousand spam emails per day.) Only 2-3 emails per day out of those spam emails fail to get marked as spam correctly and subsequently show up in my inbox.

Flood management

My solution to handling this steady high paced stream of incoming data is prioritization and putting things in different bins. Different inboxes.

  1. Filter incoming email. Save the email into its corresponding mailbox. At this very moment, I have about 30 named inboxes that I read. I read them in order, top to bottom as they’re sorted in roughly importance order (to me).
  2. Mails that don’t match an existing mailing list or topic (and so don’t get stored into one of the 28 “topic boxes”) run into another check: is the sender a known “friend”? That’s a loose term I use, but it basically means that the mail is from an email address that I have had conversations with before, or that I know or trust. Mails from “friends” get the honor of being put in mailbox 0, the primary one. If the mail comes from someone not listed as a friend, it’ll end up in my “suspect” mailbox. That’s mailbox 1. (A rough sketch of this triage follows the list.)
  3. Some of the emails get the honor of getting forwarded to a cloud email service for which I have an app in my phone so that I can get a sense of important mail that arrive. But I basically never respond to email using my phone or using a web interface.
  4. I also use the “spam level” in spams to save them in different spam boxes. The mailbox receiving the highest spam level emails is just erased at random intervals without ever being read (unless I’m tracking down a problem or something) and the “normal” spam mailbox I only check every once in a while just to make sure my filters are not hiding real mails in there.
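
To make the flow concrete, here is a rough sketch of that triage in code; the real setup is a set of server-side filter rules, and the thresholds and names below are purely illustrative:

// Hypothetical sketch of the triage described above; the spam thresholds
// are invented, topicRules is a list of {matches, mailbox} rules, and
// friends is a Set of known email addresses.
function chooseMailbox(mail, topicRules, friends) {
  // 1. Known mailing-list or topic traffic goes straight to its topic box.
  for (var i = 0; i < topicRules.length; i++) {
    if (topicRules[i].matches(mail)) {
      return topicRules[i].mailbox; // one of the ~28 topic boxes
    }
  }
  // 4. Spam is binned by score; the worst bucket is erased without being read.
  if (mail.spamLevel >= 10) return "spam-high";
  if (mail.spamLevel >= 5) return "spam";
  // 2. Everything else depends on whether the sender is a known "friend".
  return friends.has(mail.from) ? "INBOX"     // mailbox 0, the primary one
                                : "suspect";  // mailbox 1
}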

Reading

I monitor my incoming mails pretty frequently all through the day – every day. My wife calls me obsessed and maybe I am. But I find it much easier to handle the emails a little at a time rather than to wait and have it pile up to huge lumps to deal with.

I receive mail at my own server and I read/write my email using Alpine, a text based mail client that really excels at allowing me to plow through vast amounts of email in a short time – something I can’t say that any UI or web based mail client I’ve tried has managed to do at a similar degree.

A snapshot from my mailbox from a while ago looked like this, with names and some topics blurred out. This is ‘INBOX’, which is the main and highest prioritized one for me.

alpine screenshot

I have set my mail client to automatically go to the next inbox when I’m done reading the current one. That makes me read them in prio order. I start with the INBOX one where supposedly the most important email arrives, then I check the “suspect” one, and then I go down the topic inboxes one by one (my mail client moves on to the next one automatically), until either I get overwhelmed and just return to the main box for now, or I finish them all up.

I tend to try to deal with mails immediately, or I mark them as ‘important’ and store them in the main mailbox so that I can find them again easily and quickly.

I try to only keep mails around in my mailbox that concern ongoing topics, discussions or current matters of concern. Everything else should get stored away. It is hard work to maintain the number of emails there at a low number. As you all know.

Writing email

I averaged at less than 200 emails written per month during 2015. That’s 6-7 per day.

That makes over 150 received emails for every email sent.

Allen Wirfs-BrockSlide Bite: Survival of the Fittest

incrementalevolution

The first ten or fifteen years of a computing era is a period of chaotic experimentation. Early product concepts rapidly evolve via both incremental and disruptive innovations. Radical ideas are tried. Some succeed and some fail. Survival of the fittest prevails. By mid-era, new stable norms should be established. But we can’t predict the exact details.

Andrew TruongExperience, Learn, Revitalize and Share: The Adventures of High School

High school was an adventure. This time around, I was in courses that I picked myself, not courses determined by someone else based on how I ranked the offered ones. At the end of junior high, we were left with the warning from every teacher that we would no longer be with the people we usually hung out with. How true can that be? I am not able to say.


I started my first year of high school off rough. However, I was able to adapt quite easily by attending leadership seminars every week. I started to get a little more involved with events around the school and eventually around the community. The teachers were far different from what our junior high teachers described them as. They weren't uncaring, they didn't leave you on your own, and they were helpful with finding your way around. They were the exact opposite of what our junior high teachers told us. Perhaps they told us that "lie" to prepare us, or maybe they went through something completely different during their time.

In my first year, I had an assortment of classes and I felt good and at ease with them. I was fortunate enough to have every other day free in the second semester, which let me go to leadership and further enhance my life skills. On the regular days, I had a class where I was able to do homework and receive additional help when I needed it, due to the fact that I didn't do too well in junior high. Nonetheless, I excelled in the main course and ended up wasting most of my time in the additional help class.


Grade 11 rolled by and I took a block (there are 4 blocks in a day) of my day during the first semester to go to leadership. There I was able to further enhance my abilities, be assigned responsibilities and earn the trust of the department head. Furthermore, I ran for students' union president; though I was not successful, it may have benefited me instead. There's nothing much to say, as things went a certain direction and it worked out quite well.

Going into my last year of high school, there was a new development in our family and household. This year was extremely important, as I had to pass all courses in order to graduate and move on to post-secondary. I was satisfied with my first semester, where my courses went pretty well. I still took a block of my day out of the first semester to go to leadership. But this time, I took on the position of chairperson of Spirit Wear for the school year. Designing, advertising, and promoting what we had to sell was a wonderful journey. I also met some great people during my spare time in leadership and I learned a lot more about myself and what I was doing wrong socially. That realization of what I was doing wrong dawned upon me and led me to become who I am today.

The second semester came around and it was a roller coaster for me. For some odd reason, I was now having trouble with the course I had excelled in continuously for the two years before. It was partially a leap from what I knew and had learned to something completely different. Part of the blame for this lies with the instructor; I knew from how others had struggled with this particular teacher in the past that I would too, even though I told myself I wouldn't. I got through it with my ups and downs, despite being worried about whether or not I would be able to graduate and move on to post-secondary. In the end, I graduated and received my high school diploma.

Mark SurmanFirefox and Thunderbird: A Fork in the Road

Firefox and Thunderbird have reached a fork in the road: it’s now the right time for them to part ways on both a technical and organizational level.

In line with the process we started in 2012, today we’re taking another step towards the independence of Thunderbird. We’re posting a report authored by open source leader Simon Phipps that explores options for a future organizational home for Thunderbird. We’ve also started the process of helping the Thunderbird Council chart a course forward for Thunderbird’s future technical direction, by posting a job specification for a technical architect.

In this post, I want to take the time to go over the origins of Thunderbird and Firefox, the process for Thunderbird’s independence and update you on where we are taking this next. For those close to Mozilla, both the setting and the current process may already be clear. For those who haven’t been following the process, I wanted to write a longer post with all the context. If you are interested in that context, read on.

Summary

Much of Mozilla, including the leadership team, believes that focusing on the web through Firefox offers a vastly better chance of moving the Internet industry to a more open place than investing further in Thunderbird—or continuing to attend to both products.

Many of us remain committed Thunderbird users and want to see Thunderbird remain a healthy community and product. But both Firefox and Thunderbird face different challenges, have different goals and different measures of success. Our actions regarding Thunderbird should be viewed in this light.

Success for Firefox means continued relevance in the mass consumer market as a way for people to access, shape and feel safe across many devices. With hundreds of millions of users on both desktop and mobile, we have the raw material for this success. However, if we want Firefox to continue to have an impact on how developers and consumers interact with the Internet, we need to move much more quickly to innovate on mobile and in the cloud. Mozilla is putting the majority of its human and financial resources into Firefox product innovation.

In contrast, success for Thunderbird means remaining a reliable and stable open source desktop email client. While many people still value the security and independence that come with desktop email (I am one of them), the overall number of such people in the world is shrinking. In 2012, around when desktop email first became the exception rather than the rule, Mozilla started to reduce its investment and transitioned Thunderbird into a fully volunteer-run open source project.

Given these different paths, it should be no surprise that tensions have arisen as we’ve tried to maintain Firefox and Thunderbird on top of a common underlying code base and common release engineering system. In December, we started a process to deal with those release engineering issues, and also to find a long-term organizational home for Thunderbird.

The Past

On a technical level, Firefox and Thunderbird have common roots, emerging from the browser and email components of the Mozilla Application Suite nearly 15 years ago. When they were turned into separate products, they also maintained a common set of underlying software components, as well as a shared build and release infrastructure. Both products continue to be intertwined in this manner today.

Firefox and Thunderbird also share common organizational roots. Both were incorporated by the Mozilla Foundation in 2003, and from the beginning, the Foundation aimed to make these products successful in the mainstream consumer Internet market. We believed—and still believe—mass-market open source products are our biggest lever in our efforts to ensure the Internet remains a public resource, open and accessible to all.

Based on this belief, we set up Mozilla Corporation (MoCo) and Mozilla Messaging (MoMo) as commercial subsidiaries of the Mozilla Foundation. These organizations were each charged with innovating and growing a market: one in web access, the other in messaging. We succeeded in making the browser a mass market success, but we were not able to grow the same kind of market for email or messaging.

In 2012, we shut down Mozilla Messaging. That’s when Thunderbird became a purely volunteer-run project.

The Present

Since 2012, we have been doggedly focused on how to take Mozilla’s mission into the future.

In the Mozilla Corporation, we have tried to innovate and sustain Firefox’s relevance in the browser market while breaking into new product categories—first with smartphones, and now in a variety of connected devices.

In the Mozilla Foundation, we have invested in a broader global movement of people who stand for the Internet as a public resource. In 2016, we are focused on becoming a loud and clear champion on open internet issues. This includes significant investments in fuelling the open internet movement and growing a next generation of leaders who will stand up for the web.

These are hard and important things to do—and we have not yet succeeded at them to the level that we need to.

During these shifts, we invested less and less of Mozilla’s resources in Thunderbird, with the volunteer community developing and sustaining the product. MoCo continues to provide the underlying code and build and release infrastructure, but there are no dedicated staff focused on Thunderbird.

Many people who work on Firefox care about Thunderbird and do everything they can to accommodate Thunderbird as they evolve the code base, which slows down Firefox development when it needs to be speeding up. People in the Thunderbird community also remain committed to building on the Firefox codebase. This puts pressure on a small, dedicated group of volunteer coders who struggle to keep up. And people in the Mozilla Foundation feel similar pressure to help the Thunderbird community with donations and community management, which distracts them from the education and advocacy work that’s needed to grow the open internet movement on a global level.

Everyone has the right motivations, and yet everyone is stretched thin and frustrated. And Mozilla’s strategic priorities are elsewhere.

The Future

In late 2015, Mozilla leadership and the Thunderbird Council jointly agreed to:

a) take a new approach to release engineering, as a first step toward putting Thunderbird on the path to technical independence from Firefox; and

b) identify the organizational home that will best allow Thunderbird to thrive as a volunteer-run project.

Mozilla has already posted a proposal for separating Thunderbird from Firefox release engineering infrastructure. In order to move the technical part of this plan further ahead and address some of the other challenges Thunderbird faces, we agreed to contract for a short period of time with a technical architect who can support the Thunderbird community as they decide what path Thunderbird should take. We have a request for proposals for this position here.

On the organizational front, we hired open source leader Simon Phipps to look at different long-term options for a home for Thunderbird, including: The Document Foundation, Gnome, Mozilla Foundation, and The Software Freedom Conservancy. Simon’s initial report will be posted today in the Thunderbird Planning online forum and is currently being reviewed by both Mozilla and the Thunderbird Council.

With the right technical and organizational paths forward, both Firefox and Thunderbird will have a better chance at success. We believe Firefox will evolve into something consumers need and love for a long time—a way to take the browser into experiences across all devices. But we need to move fast to be effective.

We also believe there’s still a place for stable desktop email, especially if it includes encryption. The Thunderbird community will attract new volunteers and funders, and we’re digging in to help make that happen. We will provide more updates as things progress further.

The post Firefox and Thunderbird: A Fork in the Road appeared first on Mark Surman.