JavaScript at Mozilla: ECMAScript 2016+ in Firefox

After the release of ECMAScript 2015, a.k.a. ES6, the ECMAScript Language Specification is evolving rapidly: it’s getting many new features that will help in developing web applications, with a new release planned every year.

Last week, Firefox Nightly 54 reached 100% on the Kangax ECMAScript 2016+ compatibility table, which currently covers ECMAScript 2016 and the ECMAScript 2017 draft.

Here are some highlights for those spec updates.

Kangax ECMAScript 2016+ compatibility table

ECMAScript 2016

ECMAScript 2016 is the latest stable edition of the ECMAScript specification. ECMAScript 2016 introduces two new features, the Exponentiation Operator and Array.prototype.includes, and also contains various minor changes.

New Features

Exponentiation Operator

Status: Available from Firefox 52 (now Beta, will ship in March 2017).

The exponentiation operator (**) allows infix notation of exponentiation.
It’s a shorter and simpler replacement for Math.pow. The operator is right-associative.

// without Exponentiation Operator
console.log(Math.pow(2, 3));             // 8
console.log(Math.pow(2, Math.pow(3, 2))); // 512

// with Exponentiation Operator
console.log(2 ** 3);                     // 8
console.log(2 ** 3 ** 2);                // 512
Array.prototype.includes

Status: Available from Firefox 43.

Array.prototype.includes is an intuitive way to check the existence of an element in an array, replacing the array.indexOf(element) !== -1 idiom.

let supportedTypes = [
  "text/plain",
  "text/html",
  "text/javascript",
];

// without Array.prototype.includes.
console.log(supportedTypes.indexOf("text/html") !== -1); // true
console.log(supportedTypes.indexOf("image/png") !== -1); // false

// with Array.prototype.includes.
console.log(supportedTypes.includes("text/html")); // true
console.log(supportedTypes.includes("image/png")); // false

Miscellaneous Changes

Generators can’t be constructed

Status: Available from Firefox 43.

Calling a generator function with new now throws.

function* g() {
}

new g(); // throws
Iterator for yield* can catch throw()

Status: Available from Firefox 27.

When Generator.prototype.throw is called on a generator while it’s executing yield*, the operand of the yield* can catch the exception and return to normal completion.

function* inner() {
  try {
    yield 10;
    yield 11;
  } catch (e) {
    yield 20;
  }
}

function* outer() {
  yield* inner();
}

let g = outer();
console.log(g.next().value);  // 10
console.log(g.throw().value); // 20, instead of throwing
Function with non-simple parameters can’t have “use strict”

Status: Available from Firefox 52 (now Beta, will ship in March 2017).

When a function has non-simple parameters (destructuring parameters, default parameters, or rest parameters), the function can’t have the "use strict" directive in its body.

function assertEq(a, b, message="") {
  "use strict"; // error

  // ...
}

However, functions with non-simple parameters can appear in code that’s already strict mode.

"use strict";
function assertEq(a, b, message="") {
  // ...
}
Nested RestElement in Destructuring

Status: Available from Firefox 47.

A rest pattern in destructuring can now be an arbitrary pattern, and it can also be nested.

let [a, ...[b, ...c]] = [1, 2, 3, 4, 5];

console.log(a); // 1
console.log(b); // 2
console.log(c); // [3, 4, 5]

let [x, y, ...{length}] = "abcdef";

console.log(x);      // "a"
console.log(y);      // "b"
console.log(length); // 4
Remove [[Enumerate]]

Status: Removed in Firefox 47.

The enumerate trap of the Proxy handler has been removed.
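
A minimal sketch of the effect (the target object and handler here are hypothetical, not from the article): an enumerate trap is now simply ignored, and for-in falls back to the ordinary own-keys machinery.

let proxy = new Proxy({ a: 1 }, {
  enumerate() {
    // No longer called now that [[Enumerate]] is gone.
    return ["b"][Symbol.iterator]();
  }
});

for (let key in proxy) {
  console.log(key); // "a"
}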

ECMAScript 2017 draft

ECMAScript 2017 will be the next edition of ECMAScript specification, currently in draft. The ECMAScript 2017 draft introduces several new features, Object.values / Object.entries, Object.getOwnPropertyDescriptors, String padding, trailing commas in function parameter lists and calls, Async Functions, Shared memory and atomics, and also some minor changes.

New Features

Async Functions

Status: Available from Firefox 52 (now Beta, will ship in March 2017).

Async functions help when long promise chains would otherwise have to be broken up into separate scopes, letting you write the chain much as you would a synchronous function.

When an async function is called, it returns a Promise that gets resolved when the async function returns. When the async function throws, the promise gets rejected with the thrown value.

An async function can contain an await expression, which takes a promise and returns its resolved value. If the promise gets rejected, the await expression throws the rejection reason.

async function makeDinner(candidates) {
  try {
    let itemsInFridge = await fridge.peek();

    let itemsInStore = await store.peek();

    let recipe = await chooseDinner(
      candidates,
      itemsInFridge,
      itemsInStore,
    );

    let [availableItems, missingItems]
      = await fridge.get(recipe.ingredients);

    let boughtItems = await store.buy(missingItems);

    pot.put(availableItems, boughtItems);

    await pot.startBoiling(recipe.temperature);

    do {
      await timer(5 * 60);
    } while(taste(pot).isNotGood());

    await pot.stopBoiling();

    return pot;
  } catch (e) {
    return document.cookie;
  }
}

async function eatDinner() {
  eat(await makeDinner(["Curry", "Fried Rice", "Pizza"]));
}
Shared memory and atomics

Status: Available behind a flag from Firefox 52 (now Beta, will ship in March 2017).

SharedArrayBuffer is an array buffer pointing to data that can be shared between Web Workers. Views on the shared memory can be created with the TypedArray and DataView constructors.

When a SharedArrayBuffer is transferred between the main thread and a worker, the underlying data is not copied; instead, only the information about the data's location in memory is sent. This reduces the cost of using a worker to process data retrieved on the main thread, and it also makes it possible to process data in parallel on multiple workers without creating a separate copy of the data for each.

// main.js
let worker = new Worker("worker.js");

let sab = new SharedArrayBuffer(IMAGE_SIZE);
worker.onmessage = function(event) {
  let image = createImageFromBitmap(event.data.buffer);
  document.body.appendChild(image);
};
captureImage(sab);
worker.postMessage({buffer: sab});

// worker.js
onmessage = function(event) {
  let sab = event.data.buffer;
  processImage(sab);
  postMessage({buffer: sab});
};

Moreover, a new API called Atomics provides low-level atomic access and synchronization primitives for use with shared memory. Lars T Hansen has already written about this in the Mozilla Hacks post “A Taste of JavaScript’s New Parallel Primitives“.
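
As a small illustration of the Atomics API (the buffer size and values here are arbitrary), atomic reads, writes, and read-modify-write operations can be performed on an integer view over shared memory:

let sharedBuffer = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);
let sharedView = new Int32Array(sharedBuffer);

Atomics.store(sharedView, 0, 1);          // atomically write 1 at index 0
Atomics.add(sharedView, 0, 2);            // atomically add 2 to the value at index 0
console.log(Atomics.load(sharedView, 0)); // 3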

Object.values / Object.entries

Status: Available from Firefox 47.

Object.values returns an array of a given object’s own enumerable property values, like Object.keys does for property keys, and Object.entries returns an array of [key, value] pairs.

Object.entries is useful for iterating over objects.

createElement("img", {
  width: 320,
  height: 240,
  src: "http://localhost/a.png"
});

// without Object.entries
function createElement(name, attributes) {
  let element = document.createElement(name);

  for (let name in attributes) {
    let value = attributes[name];
    element.setAttribute(name, value);
  }

  return element;
}

// with Object.entries
function createElement(name, attributes) {
  let element = document.createElement(name);

  for (let [name, value] of Object.entries(attributes)) {
    element.setAttribute(name, value);
  }

  return element;
}

When property keys are not used in the loop, Object.values can be used.
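
For instance, here is a small sketch (the prices object is just an illustration) that sums values without touching the keys:

let prices = {
  apple: 30,
  pear: 25,
  orange: 20,
};

let total = 0;
for (let price of Object.values(prices)) {
  total += price;
}
console.log(total); // 75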

Object.getOwnPropertyDescriptors

Status: Available from Firefox 50.

Object.getOwnPropertyDescriptors returns all own property descriptors of a given object.

The return value of Object.getOwnPropertyDescriptors can be passed to Object.create, to create a shallow copy of the object.

function shallowCopy(obj) {
  return Object.create(Object.getPrototypeOf(obj),
                       Object.getOwnPropertyDescriptors(obj));
}
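
As a hedged usage sketch (the point object below is just an illustration), the copy preserves accessor properties, which a copy made with Object.assign would flatten into plain data properties:

let point = {
  x: 1,
  get doubled() { return this.x * 2; }
};

let copy = shallowCopy(point);
copy.x = 10;
console.log(copy.doubled); // 20, because the getter itself was copied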
String padding

Status: Available from Firefox 48.

String.prototype.padStart and String.prototype.padEnd add padding to a string, if necessary, to extend it to the given maximum length. The padding string can be specified as an optional second argument.

They can be used to output data in a tabular format, by adding leading spaces (align to end) or trailing spaces (align to start), or to add leading "0" to numbers.

let stock = {
  apple: 105,
  pear: 52,
  orange: 78,
};
for (let [name, count] of Object.entries(stock)) {
  console.log(name.padEnd(10) + ": " + String(count).padStart(5, "0"));
  // "apple     : 00105"
  // "pear      : 00052"
  // "orange    : 00078"
}
Trailing commas in function parameter lists and calls

Status: Available from Firefox 52 (now Beta, will ship in March 2017).

Just like array elements and object properties, function parameter list and function call arguments can now have trailing commas, except for the rest parameter.

function addItem(
  name,
  price,
  count = 1,
) {
}

addItem(
  "apple",
  30,
  2,
);

This simplifies generating JavaScript code programmatically, e.g. when transpiling from another language: the code generator doesn’t have to worry about whether or not to emit a trailing comma when emitting function parameters or function call arguments.

This also makes it easier to rearrange parameters by copying and pasting.
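
A minimal sketch of that point (the emitCall helper below is hypothetical, not part of the proposal): every argument is emitted the same way, with no special case to suppress the comma after the last one.

function emitCall(name, args) {
  let lines = [name + "("];
  for (let arg of args) {
    // Every argument gets a trailing comma; no special case for the last one.
    lines.push("  " + JSON.stringify(arg) + ",");
  }
  lines.push(");");
  return lines.join("\n");
}

console.log(emitCall("addItem", ["apple", 30, 2]));
// addItem(
//   "apple",
//   30,
//   2,
// );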

Miscellaneous Changes

Remove proxy OwnPropertyKeys error with duplicate keys

Status: Available from Firefox 51.

The ownKeys trap of a user-defined Proxy handler is now permitted to return duplicate keys for a non-extensible object.

let nonExtensibleObject = Object.preventExtensions({ a: 10 });

let x = new Proxy(nonExtensibleObject, {
  ownKeys() {
    return ["a", "a", "a"];
  }
});

Object.getOwnPropertyNames(x); // ["a", "a", "a"]
Case folding for \w, \W, \b, and \B in unicode RegExp

Status: Available from Firefox 54 (now Nightly, will ship in June 2017).

\w, \W, \b, and \B in RegExp with unicode+ignoreCase flags now treat U+017F (LATIN SMALL LETTER LONG S) and U+212A (KELVIN SIGN) as word characters.

console.log(/\w/iu.test("\u017F")); // true
console.log(/\w/iu.test("\u212A")); // true
Remove arguments.caller

Status: Removed in Firefox 53 (now Developer Edition, will ship in April 2017).

The caller property on arguments objects, which threw a TypeError when it was read or set, has been removed.

function f() {
  "use strict";
  arguments.caller; // doesn't throw.
}
f();

What’s Next?

We’re also working on implementing ECMAScript proposals.

New Feature

Function.prototype.toString revision (proposal Stage 3)

Status: Work in progress.

This proposal standardizes the string representation of functions to make it interoperable between browsers.
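
A rough sketch of what that means in practice (the exact output strings in the comments are assumptions about a conforming engine):

function add(a, b) { return a + b; }

console.log(add.toString());
// "function add(a, b) { return a + b; }"
// i.e. the exact source text of the function.

console.log(Math.max.toString());
// something like "function max() { [native code] }"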

Lifting Template Literal Restriction (proposal Stage 3)

Status: Available from Firefox 53 (now Developer Edition, will ship in April 2017).

This proposal removes a restriction on escape sequences in Tagged Template Literals.

If an invalid escape sequence is found in a tagged template literal, the template value becomes undefined but the template raw value becomes the raw string.

function f(callSite) {
  console.log(callSite);     // [undefined]
  console.log(callSite.raw); // ["\\f. (\\x. f (x x)) (\\x. f (x x))"]
}

f`\f. (\x. f (x x)) (\x. f (x x))`;
Async Iteration (proposal Stage 3)

Status: Work in progress.

The Async Iteration proposal comes with two new features: async generator and for-await-of syntax.

The async generator is a mixture of a generator function and an async function. It can contain yield, yield*, and await. It returns a generator object that behaves asynchronously by returning promises from next/throw/return methods.

The for-await-of syntax can be used inside an async function and an async generator. It behaves like for-of, but interacts with the async generator and awaits internally on the returned promise.

async function* loadImagesSequentially(urls) {
  for (let url of urls) {
    yield await loadImage(url);
  }
}

async function processImages(urls) {
  let processedImages = [];

  for await (let image of loadImagesSequentially(urls)) {
    let processedImage = await processImageInWorker(image);
    processedImages.push(processedImage);
  }

  return processedImages;
}

Conclusion

100% on the ES2016+ compatibility table is an important milestone to achieve, but the ECMAScript language will continue evolving. We’ll keep working on implementing new features and fixing existing ones to make them standards-compliant and improve their performance. If you find any bug, performance issue, or compatibility fault, please let us know by filing a bug in Bugzilla. Firefox’s JavaScript engine engineers can also be found in #jsapi on irc.mozilla.org. 🙂

Cameron Kaiser: 45.8.0b1 available with user agent switching (plus: Chrome on PowerPC?!)

45.8.0b1 is available (downloads, hashes). This was supposed to be in your hands last week, but I had a prolonged brownout (68VAC on the mains! thanks, So Cal Edison!) which finally pushed the G4 file server over the edge and ruined various other devices, followed by two extended blackouts, so the G5 was just left completely off all week and I did work on my iBook G4 and i7 MBA until I was convinced I wasn't going to regret powering the Quad back on again. The other reason for the delay is that there are actually a lot of internal changes in this release: besides including the work so far on Operation Short Change (there is still a fair bit more to do, but this current snapshot improves benchmarks anywhere from four to eight percent depending on workload), this also has numerous minor bug fixes imported from later versions of Firefox and continues removing telemetry from UI-facing code to improve browser responsiveness. I also smoked out and fixed many variances from the JavaScript conformance test suite, mostly minor, but this now means we are fully compliant and some classes of glitchy bugs should just disappear.

But there are two other big changes in this release. The first is to font support. While I was updating the ATSUI font blacklist for Apple's newest crock of crap, the fallback font (Helvetica Neue) appeared strangely spaced, with as much as a 1/6th of the page between words. This looks like it's due to the 10.4-compatible way we use for getting native font metrics; there appears to be some weird edge case in CGFontGetGlyphAdvances() that occasionally yields preposterous values, meaning we've pretty much shipped this problem since the very beginning of TenFourFox and no one either noticed or reported it. It's not clear to me why (or if) other glyphs are unaffected, but in the meantime the workaround is to check if the space glyph's horizontal advance value is redonkulous and, if it is (assuming the font metrics are not consistent with a non-proportional font), heuristically generate a more reasonable size. In the process I also tuned up font handling a little bit, so text run generation should be a tiny bit faster as well.

The second is this:

As promised, user agent switching is now built into TenFourFox; no extension is required (and you should disable any you have installed or they may conflict). I chose five alternatives which have been useful to me in the past for sites that balk at TenFourFox (including cloaking your Power Mac as Intel Firefox), or for lower-powered systems, including the default Classilla user agent. I thought about allowing an arbitrary string, but frankly you could do that in about:config if you really wanted, and it would have been somewhat more bookwork. However, I'm willing to entertain additional suggestions for user agent strings that work well on a variety of sites. For fun, open Google in one tab and the preferences pane in another, change the agent to a variety of settings, and watch what happens to Google as you reload it.

Note that the user agent switching feature is not intended to hide you're using TenFourFox; it's just for stupid sites that don't properly feature-sniff the browser (Chase Bank, I'm looking at yoooooouuuuu). The TenFourFox start page can still determine you're using TenFourFox for the purpose of checking if your browser is out of date by looking at navigator.isTenFourFox, which I added to 45.8 (it will only be defined in TenFourFox; the value is undefined in other browsers). And yes, there really are some sites that actually check for TenFourFox and serve specialized content, so this will preserve that functionality once they migrate to checking for the browser this way.
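
A hedged sketch of how a site might use that property (the detection pattern here is an illustration, not code from the TenFourFox start page):

if (typeof navigator.isTenFourFox !== "undefined") {
  // Only TenFourFox defines this property, so TenFourFox-specific
  // content can be served here.
  console.log("Running TenFourFox");
} else {
  console.log("Running some other browser");
}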

If you notice font rendering is awry in this version, please compare it with 45.7 before reporting it. Even if the bug is actually old I'd still like to fix it, but new regressions are more serious than something I've been shipping for a while. Similarly, please advise on user agent strings you think are useful, and I will consider them (no promises). 45.8 is scheduled for release on March 7.

Finally, last but not least, who says you can't run Google Chrome on a Power Mac?

Okay, okay, this is the PowerPC equivalent of a stupid pet trick, but this is qemu 2.2.1 (the last version that did not require thread-local storage, which won't work with OS X 10.6 and earlier) running on the Quad as a 64-bit PC, booting Instant Web Kiosk into Chromium 53. It is incredibly, crust-of-the-earth-coolingly slow, and that nasty blue tint is an endian problem in the emulator I need to smoke out. But it actually can browse websites (eventually, with patience, after a fashion) and do web-browsery things, and I was even able to hack the emulator to run on 10.4, a target this version of qemu doesn't support.

So what's the point of running a ridiculously sluggish emulated web browser, particularly one I can't stand for philosophical reasons? Well, there's gotta be something I can work on "After TenFourFox"™ ...

Jokes aside, even though I see some optimization possibilities in the machine code qemu's code generator emits, those are likely to yield only marginal improvements. Emulation will obviously never be anywhere near as fast as native code. If I end up doing anything with this idea of emulation for future browser support, it would likely only be for high-end G5 systems, and it may never be practical even then. But, hey, Google, guess what: it only took seven years to get Chrome running on a Tiger Power Mac.

Air Mozilla: February Privacy Lab - Cyber Futures: What Will Cybersecurity Look Like in 2020 and Beyond?

February Privacy Lab - Cyber Futures: What Will Cybersecurity Look Like in 2020 and Beyond? CLTC will present on Cybersecurity Futures 2020, a report that poses five scenarios for what cybersecurity may look like in 2020, extrapolating from technological, social,...

Daniel Stenberg: Some curl numbers

We released the 163rd curl release ever today. curl 7.53.0 – approaching 19 years since the first curl release (6914 days to be exact).

It took 61 days since the previous release, during which 47 individuals helped us fix 95 separate bugs. 25 of these contributors were newcomers. In total, we now count more than 1500 individuals credited for their help in the project.

One of those bug-fixes was a security vulnerability, upping our total number of vulnerabilities through the years to 62.

Since the previous release, 7.52.1, 155 commits were made to the source repository.

The next curl release, our 164th, is planned to ship in exactly 8 weeks.

QMO: Rezaul Huque Nayeem: industrious, tolerant and associative

Rezaul Huque Nayeem has been involved with Mozilla since 2013. He is from Mirpur, Dhaka, Bangladesh, where he is an undergraduate student of Computer Science and Engineering at Daffodil International University. He loves to travel around the country and hang out with friends. In his spare time, he volunteers for some social organizations.

Rezaul Huque Nayeem is from Bangladesh in south Asia.

Hi Nayeem! How did you discover the Web?

I discovered the web when I was a kid. One day (2000/2001) in my uncle’s office I heard something about email and Yahoo. That was the first time, and I learned a little bit about the internet that day. I remember that I was very amazed when I downloaded a picture.

How did you hear about Mozilla?

In 2012 one of my friends told me about Mozilla and its mission. He told me about the many pathways for contributing to Mozilla.

How and why did you start contributing to Mozilla?

I started contributing to Mozilla on 20 March 2015, at QA Marathon Dhaka. On that day my mentor Hossain Al Ikram showed me how to contribute to Mozilla by doing QA. He taught me how to test a feature in Firefox, and how to verify bugs or do triage. From that day on, I have loved doing QA. Day by day I met many awesome Mozillians and was helped by them. I love contributing with them in a global community. I also like the way Mozilla works towards making a better web.

Nayeem in QA Marathon, Dhaka

Have you contributed to any other Mozilla projects in any other way?

I did some localization for Firefox OS and MDN. I contributed to MLS (Mozilla Location Service). I also participated in many Webmaker-focused events.

What’s the contribution you’re the most proud of?

I feel proud to contribute to QA. By doing QA I can now find bugs and help get them fixed, which means many people can now use a bug-free browser. It has also taught me how to work with a community and made me active and industrious.


Please tell us more about your community. Is there anything you find particularly interesting or special about it?

The community I work with is the Mozilla Bangladesh QA Community, a functional community of Mozilla Bangladesh where we are focused on contributing to QA. It is the biggest QA community and it is growing day by day. There are about 50+ active contributors who regularly participate in Test days, Bug Verification days and Bug Triage days. Last year, we verified more than 700 bugs. We have more than 10 community mentors to help contributors. In our community every member is very friendly and helpful. It’s a very active and lovely community.


What’s your best memory with your fellow community members?

Every online and offline event was very exciting for me, but Firefox QA Testday, Dhaka (4 Dec 2015) was the most memorable event with my community. It was a really awesome daylong offline event.


You have worked as a Firefox Student Ambassador. Is there any event that you want to share?

I organized two events on my institutional campus, Dhaka Polytechnic Institute. One was a Webmaker event and the other was a MozillaBD Privacy Talk. Both were thrilling for me, as I was leading those events.


What advice would you give to someone who is new and interested in contributing to Mozilla?

I would tell them: first, you have to decide what you really want to do. If you work with a community, then please do not contribute just for yourself; do it for your community.

If you had one word or sentence to describe Mozilla, what would it be?

Mozilla is the one that really wants to make the web free for people.

What exciting things do you envision for you and Mozilla in the future?

I envision that Mozilla will put a lot of effort into connected devices, so that the world gets some exciting gear.

Gervase Markham: Overheard at Google CT Policy Day…

Jacob Hoffman-Andrews (of Let’s Encrypt): “I tried signing up for certspotter alerts for a domain and got a timeout on the signup page.”
Andrew Ayer (of CertSpotter): “Oh, dear. Which domain?”
Jacob Hoffman-Andrews: “hoffman-andrews.com”
Andrew Ayer: “Do you have a lot of certs for that domain?”
Jacob Hoffman-Andrews: “Oh yeah, I totally do!”
Andrew Ayer: “How many?”
Jacob Hoffman-Andrews: “A couple of hundred thousand.”
Andrew Ayer: “Yeah, that would do it…”

Dave Townsend: Welcome to a new Firefox/Toolkit peer, Shane Caraveo

It’s that time again when I get to announce a new Firefox/Toolkit peer. Shane has been involved with Mozilla for longer than I can remember and recently he has been doing fine work on webextensions including the new sidebar API. As usual we probably should have made Shane a peer sooner so this is a case of better late than never.

I took a moment to tell Shane what I expect of all my peers:

  • Respond to review requests promptly. If you can’t then work with the patch author or me to find an alternate reviewer.
  • Only review things you’re comfortable with. Firefox and Toolkit is a massive chunk of code and I don’t expect any of the peers to know it all. If a patch is outside your area then again work with the author or me to find an alternate reviewer.

Please congratulate Shane in the usual way, by sending him lots of patches to review!

Chris Cooper: RelEng & RelOps highlights - February 21, 2017

It’s been a while. How are you?

Modernize infrastructure:

We finally closed the 9-year-old bug requesting that we redirect all HTTP traffic to hg.mozilla.org to HTTPS! Many thanks to everyone who helped ensure that automation and other tools continued to work normally. Not every day you get to close bugs that are older than my kids. https://bugzilla.mozilla.org/show_bug.cgi?id=450645

The new TreeStatus page (https://mozilla-releng.net/treestatus) was finally released by garbas, with a proxy in place for the old URL.

Improve Release Pipeline:

Initial work on the Uplift dashboard has been done by bastien and released to production by garbas. https://shipit.mozilla-releng.net/release-dashboard

Releng had a workweek in Toronto to plan how release promotion will work in a TaskCluster world. With the uplift for Firefox 52 rapidly approaching (see Release below), we came up with a multi-phase plan that should allow us to release the Linux and Android versions of Firefox 52 from TaskCluster, with the Mac and Windows versions still being created by buildbot.

Improve CI Pipeline:

Alin and Sebastian disabled Windows 10 tests on our CI. Windows 10 tests will be reappearing later this year once we move datacentres and acquire new hardware to support them. https://bugzilla.mozilla.org/show_bug.cgi?id=1330999

Andrei and Relops converted some Windows talos machines to run Linux64 to reduce wait times on this platform. https://bugzilla.mozilla.org/show_bug.cgi?id=1337452

There are some upcoming deadlines involving datacentre moves that, while not currently looming, are definitely focusing our efforts in the TaskCluster migration. As part of the aforementioned workweek, we targeted the next platform that needs to migrate, Mac OS X. We are currently breaking out the packaging and signing steps for Mac so that they can be done on Linux. That work can then be re-used for l10n repacks *and* release promotion.

Operational:

Since most of our Linux64 builds and tests have migrated to TaskCluster, Alin was able to shut down many of our Linux buildbot masters. This will reduce our monthly AWS bill and the complexity of our operational environment. https://bugzilla.mozilla.org/show_bug.cgi?id=1335435

Hal ran our first “hard close” Tree Closing Window (TCW) in quite a while on Saturday, February 11 (https://bugzilla.mozilla.org/show_bug.cgi?id=1324148). It ran about an hour longer than planned due to some strange interactions deep in the back end, which is why it was a “hard close.” The issue may be related to occasional “database glitches” we have seen in the past. This time IT got some data, and have raised a case with our load balancer vendor.

Release:

We are deep in the beta cycle for Firefox 52, with beta 8 coming out this week. Firefox 52 is an important milestone release because it signals the start of another ESR cycle.

See you again soon!

Mozilla Privacy Blog: We Are All Creators Now


For the past several months, the U.S. Copyright Office has been collecting comments on a part of the Digital Millennium Copyright Act (DMCA), specifically related to intermediary liability and safe harbors. Under the current law, internet companies – think, for example, of Facebook, or Reddit, or Wikimedia – are able to offer their services without legal exposure if a user uploads content in violation of copyright law, as long as they follow proper takedown procedures upon being informed of the action.

Last year, we filed comments in the first round of this proceeding, and today, we have made a second submission in response to the Copyright Office’s request for additional input. Across our filings and other engagement on this issue, we identify major concerns with some of the proposals being considered – changes that would greatly harm content creators, the public and intermediaries. Some of the proposals seem to be more about manipulating copyright law to achieve another agenda; these proposals need to be rejected.

Mandating content blocking technology is a bad idea – bad for users, bad for businesses, and not effective at addressing the problem of copyright infringement. We’re fighting this issue not only in the United States, but also in Europe, because it goes against the letter and the spirit of copyright law, and poses immense risk to the internet’s benefits for social and economic activity.

Separately, we believe automated systems, currently in place on sites such as YouTube, are very ineffective at assessing fair use and copyright infringement, as they can’t consider context. We are making suggestions to ease or eliminate the burden of those automated systems on those acting within the law.

Ultimately, to achieve its constitutional purpose and deliver ultimate benefit for people and for the internet, copyright practice must recognize that members of the general public are no longer just consumers, but creators. Copyright issues relating to creators’ artistic inputs are often far more important than any copyright interest in their creative output.

The greatest risk of tinkering with the safe harbor setup today is interfering with the fulfillment of the public’s desire to create, remix and share content. In contrast, recognizing and protecting that interest creates the greatest opportunity for positive policy change. We continue to be optimistic about the possibility of seeing such change at the end of this consultation process.

 

Robert Kaiser: The PHP Authserver

As mentioned previously here on my blog, my FOSDEM 2017 talk ended up opening up the code for my own OAuth2 login server which I created following my earlier post on the login systems question.

The video from the talk is now online at the details page of the talk (including downloadable versions if you want them), my slides are available as well.

The gist of it is that I found out that using a standard authentication protocol in my website/CMS systems instead of storing passwords with the websites is a good idea, but I also didn't want to report who is logging into which website at what point to a third party that I don't completely trust privacy-wise (like Facebook or Google). My way to deal with that was to operate my own OAuth2 login server, preferably with open code that I can understand myself.

As the language I know best is PHP (and I can write pretty clean and good quality code in that language), I looked for existing solutions there but couldn't find a finished one that I could just install, adapt branding-wise and operate.
I found a good library for OAuth2 (and by extension OpenID Connect) in oauth2-server-php, but the management of actual login processes and creating the various end points that call the library still had to be added, and so I set out to do just that. For storing passwords, I investigated what solutions would be good and in the end settled on using PHP's builtin password_hash function, including its auto-upgrade-on-login functionalities. Right now that means using bcrypt (which is decent but not fully ideal); with PHP 7.2, it will move to Argon2 (which is probably the best available option right now). In addition, I wrote some code to add an on-disk random value to the passwords so that hacking the database alone will be insufficient for an offline brute-force attack on the hashes. In general, I tried to use a lot of advice from Mozilla's secure coding guidelines for websites, and also made sure my server passes with an A+ score on Mozilla Observatory as well as SSL Labs, and put the changes for that in the code as much as possible, or provided example server configurations in the repository otherwise, so that other installations can profit from this as well.
For sending emails and building up HTML as DOM documents, I'm using helper classes from my own php-utility-classes, and for some of the database access, esp. schema upgrades, I ended up including doctrine DBAL. Optionally, the code is there to monitor traffic via Piwik.

The code for all this is now available at https://github.com/KaiRo-at/authserver.

It should be relatively easy to install on a Linux system with Apache and MySQL - other web servers and databases should not be hard to add but are untested so far. The main README has some rudimentary documentation, but help is needed to improve on that. Also, all testing is done by trying logins with the two OAuth2 implementations I have done in my own projects, I need help in getting a real test suite set up for the system.
Right now, all the system supports is the OAuth2 "Authorization Code" flow; it would be great to extend it to support OIDC as well, which oauth2-server-php can handle, but the support code for it needs to be written.
Branding can easily be adapted for the operator running the service via the skin support (my own branding on my installation builds on that as well), and right now US English and German are supported by the service but more can easily be added if someone contributes them.

And last but not least, it's all under the MPL2 license, which I hope enables people easily to contribute - I hope including yourself!

QMO: Firefox 52 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – February 17th – we held a new Testday event, for Firefox 52 Beta 7.

Thank you all for helping us make Mozilla a better place – P.Avinash Sharma, Vuyisile Ndlovu, Athira Ananth, Ilse Macías and Iryna Thompson, Surentharan R.A., Subash.M, vinothini.k, R.krithika sowbarnika, Dhinesh Kumar, Fahima Zulfath A, Nagaraj.V, A.Kavipriya, Rajesh, varun tiwari, Pavithra.R, Vishnu Priya, Paarttipaabhalaji, Kavya, Sankararaman and Baranitharan.

From Bangladesh team: Nazir Ahmed Sabbir, Maruf Rahman, Md.Majedul islam, Md. Raihan Ali, Sabrina joadder silva, Afia Anjum Preety, Rezwana Islam Ria, Rayhan, Md. Mujtaba Asif, Anmona Mamun Monisha, Wasik Ahmed, Sajedul Islam, Forhad Hossain, Asif Mahmud Rony, Md Rakibul Islam.

Results:

– several test cases executed for Graphics.

– 5 bugs verified: 637311, 1111599, 1311096, 1292629, 1215856.

– 2 new bugs filed: 1298395, 1340883.

Again thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

 

About:Community: Mozilla at FOSDEM 2017

With over 17 talks hosted in our packed Mozilla Devroom, more than 550 attendees at our booth, and our #mozdem hashtag earning 1.8 million impressions, Mozilla’s presence at FOSDEM 2017 (February 4-5) was a successful, volunteer-Mozillian driven initiative.

FOSDEM is one of the biggest events in the Open Source world and attracts more than 6,000 attendees from all over the world — Open Source advocates, Technical developers, and people interested in Copyright, new languages, and the open web. Through our booth we were able to hear from developers about what they expect from Mozilla — from our tools and technologies, our involvement in the open source community, how we can improve our contribution areas. We had a full day Devroom on Saturday with 17 talks (8 from volunteers) averaging nearly 200 attendees per talk that covered several topics like Dev Tools, Rust, A-Frame and others. There were also presentations about community motivation, Diversity & Inclusion, and Copyright in Europe. Together these allowed us to show what’s important for Mozilla right now, what ideas and issues we want to promote, and what technologies are we using.

In working with volunteer-Mozillians to coordinate our presence, the Open Innovation team took a slightly different path this year, being more rigorous in our approach. First, we identified goals and intended outcomes, having conversations with different teams (DevTools, DevRel, Open Source Experiments, etc). Those conversations helped us to define a set of expectations and success criteria for these teams. For example, Developer Relations was interested in getting feedback from participants on Mozilla and web technologies, since the event has an audience very relevant for them (web developers, technical developers). Open Source Experiments was interested in creating warm leads for project partners, to help boost the program. So we had a variety of goals, which were shared with volunteers, and that helped us to measure the success of our participation in a solid way.

DevTools talk Graphic

FOSDEM is always a place to discuss and have interesting conversations. While we covered several topics at the Devroom and at our booth, Rust proved to be a common talking point on many occasions. Although it can be considered a new programming language, we were asked about how to participate, where to find more information and how to join the Rust community.

All in all, the Mozilla presence at FOSDEM proved to be very solid, and it couldn’t have happened without the help of the volunteers who staffed the booth and worked hard. I would like to mention and thank (alphabetically): Alex Lakatos, Daniele Scasciafratte, Edoardo Viola, Eugenio Petullà, Gabriel Micko, Gloria Dwomoh, Ioana Chiorean, Kristi Progri, Merike Sell, Luna Jernberg and Redon Skikuli, and a lot of other volunteers who went there to help or simply to take part in the event. Also big kudos to Ziggy Maes and Anthony Maton, who helped to coordinate Mozilla’s presence.

our volunteers at FOSDEM 2017

Some highlight numbers of our presence in this edition:

  • Nearly 200 people on average per talk in our devroom
  • Mozillians directly engaged with around 550 people during the weekend at our booth
  • More than 200 people checked our Code of Conduct for our devroom
  • Our hashtag #mozdem had around 1.8 million impressions
  • The Survey we ran at the event was filled out by 210 developers

There are a lot of pictures and blogposts from mozillians on medium, or in their blogs. If you want to see some tweets, impressions and photos, check this storify.

This Week In Rust: This Week in Rust 170

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate of the week is CDRS, a client for Apache Cassandra written completely in Rust. Thanks to Alex Pikalov for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

107 pull requests were merged in the last week.

New Contributors

  • Amos Onn
  • Andrew Gaspar
  • Benoît CORTIER
  • Brian Vincent
  • Dmitry Guzeev
  • Glyne J. Gittens
  • Jeff Muizelaar
  • Luxko
  • Matt Williams
  • Michal Nazarewicz
  • Mikhail Pak
  • Sebastian Waisbrot

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

PRs:

Issues in final comment period:

Other significant issues:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Niko Matsakis: Non-lexical lifetimes using liveness and location

At the recent compiler design sprint, we spent some time discussing non-lexical lifetimes, the plan to make Rust’s lifetime system significantly more advanced. I want to write-up those plans here, and give some examples of the kinds of programs that would now type-check, along with some that still will not (for better or worse).

If you were at the sprint, then the system I am going to describe in this blog post will actually sound quite a bit different than what we were talking about. However, I believe it is equivalent to that system. I am choosing to describe it differently because this version, I believe, would be significantly more efficient to implement (if implemented naively). I also find it rather easier to understand.

I have a prototype implementation of this system. The example used in this post, along with the ones from previous posts, have all been tested in this prototype and work as expected.

Yet another example

I’ll start by giving an example that illustrates the system pretty well, I think. This section also aims to give an intuition for how the system works and what set of programs will be accepted, without going into any of the details. Somewhat oddly, I’m going to number this example as “Example 4”. This is because my previous post introduced examples 1, 2, and 3. If you’ve not read that post, you may want to, but don’t feel you have to. The presentation in this post is intended to be independent.

Example 4: Redefined variables and liveness

I think the key ingredient to understanding how NLL should work is understanding liveness. The term “liveness” derives from compiler analysis, but it’s fairly intuitive. We say that a variable is live if the current value that it holds may be used later. This is very important to Example 4:

let mut foo, bar;
let p = &foo;
// `p` is live here: its value may be used on the next line.
if condition {
    // `p` is live here: its value will be used on the next line.
    print(*p);
    // `p` is DEAD here: its value will not be used.
    p = &bar;
    // `p` is live here: its value will be used later.
}
// `p` is live here: its value may be used on the next line.
print(*p);
// `p` is DEAD here: its value will not be used.

Here you see a variable p that is assigned in the beginning of the program, and then maybe re-assigned during the if. The key point is that p becomes dead (not live) in the span before it is reassigned. This is true even though the variable p will be used again, because the value that is in p will not be used.

So how does liveness relate to non-lexical lifetimes? The key rule is this: Whenever a variable is live, all references that it may contain are live. This is actually a finer-grained notion than just the liveness of a variable, as we will see. For example, the first assignment to p is &foo – we want foo to be borrowed everywhere that this assignment may later be accessed. This includes both print() calls, but excludes the period after p = &bar. Even though the variable p is live there, it now holds a different reference:

let foo, bar;
let p = &foo;
// `foo` is borrowed here, but `bar` is not
if condition {
    print(*p);
    // neither `foo` nor `bar` are borrowed here
    p = &bar;   // assignment 1
    // `foo` is not borrowed here, but `bar` is
}
// both `foo` and `bar` are borrowed here
print(*p);
// neither `foo` nor `bar` are borrowed here,
// as `p` is dead

Our analysis will begin with the liveness of a variable (the coarser-grained notion I introduced first). However, it will use reachability to refine that notion of liveness to obtain the liveness of individual values.

Control-flow graphs and point notation

Recall that in NLL-land, all reasoning about lifetimes and borrowing will take place in the context of MIR, in which programs are represented as a control-flow graph. This is what Example 4 looks like as a control-flow graph:

// let mut foo: i32;
// let mut bar: i32;
// let p: &i32;

A
[ p = &foo     ]
[ if condition ] ----\ (true)
       |             |
       |     B       v
       |     [ print(*p)     ]
       |     [ ...           ]
       |     [ p = &bar      ]
       |     [ ...           ]
       |     [ goto C        ]
       |             |
       +-------------/
       |
C      v
[ print(*p)    ]
[ return       ]

As a reminder, I will use a notation like Block/Index to refer to a specific point (statement) in the control-flow graph. So A/0 and B/2 refer to p = &foo and p = &bar, respectively. Note that there is also a point for the goto/return terminators of each block (i.e., A/1, B/4, and C/1).

Using this notation, we can say that we want foo to be borrowed during the points A/1, B/0, and C/0. We want bar to be borrowed during the points B/3, B/4, and C/0.

Defining the NLL analysis

Now that we have our two examples, let’s work on defining how the NLL analysis will work.

Step 0: What is a lifetime?

The lifetime of a reference is defined in our system to be a region of the control-flow graph. We will represent such regions as a set of points.

A note on terminology: For the remainder of this post, I will often use the term region in place of “lifetime”. Mostly this is because it’s the standard academic term and it’s often the one I fall back to when thinking more formally about the system, but it also feels like a good way to differentiate the lifetime of the reference (the region where it is in use) with the lifetime of the referent (the span of time before the underlying resource is freed).

Step 1: Instantiate erased regions

The plan for adopting NLL is to do type-checking in two phases. The first phase, which is performed on the HIR, I would call type elaboration. This is basically the “traditional type-system” phase. It infers the types of all variables and other things, figures out where autoref goes, and so forth; the result of this is the MIR.

The key change from today is that I want to do all of this type elaboration using erased regions. That is, until we build the MIR, we won’t have any regions at all. We’ll just keep a placeholder (which I’ll write as 'erased). So if you have something like &i32, the elaborated, internal form would just be &'erased i32. This is quite different from today, where the elaborated form includes a specific region. (However, this erased form is precisely what we want for generating code, and indeed MIR today goes through a “region erasure” step; this step would be unnecessary in the new plan, since MIR as produced by type check would always have fully erased regions.)

Once we have built MIR, then, the idea is roughly to go and replace all of these erased regions with inference variables. This means we’ll have region inference variables in the types of all local variables; it also means that for each borrow expression like &foo, we’ll have a region representing the lifetime of the resulting reference. I’ll write the expression together with this region like so: &'0 foo.

Here is what the CFG for Example 4 looks like with regions instantiated. You can see I used the variable '0 to represent the region in the type of p, and '1 and '2 for the regions of the two borrows:

// let mut foo: i32;
// let mut bar: i32;
// let p: &'0 i32;

A
[ p = &'1 foo  ]
[ if condition ] ----\ (true)
       |             |
       |     B       v
       |     [ print(*p)     ]
       |     [ ...           ]
       |     [ p = &'2 bar   ]
       |     [ ...           ]
       |     [ goto C        ]
       |             |
       +-------------/
       |
C      v
[ print(*p)    ]
[ return       ]

Step 2: Introduce region constraints

Now that we have our region variables, we have to introduce constraints. These constraints will come in two kinds:

  • liveness constraints; and,
  • subtyping constraints.

Let’s look at each in turn.

Liveness constraints.

The basic rule is this: if a variable is live on entry to a point P, then all regions in its type must include P.

Let’s continue with Example 4. There, we have just one variable, p. Its type has one region ('0) and it is live on entry to A/1, B/0, B/3, B/4, and C/0. So we wind up with a constraint like this:

{A/1, B/0, B/3, B/4, C/0} <= '0

We also include a rule that for each borrow expression like &'1 foo, '1 must include the point of borrow. This gives rise to two further constraints in Example 4:

{A/0} <= '1
{B/2} <= '2

Location-aware subtyping constraints

The next thing we do is to go through the MIR and establish the normal subtyping constraints. However, we are going to do this with a slight twist, which is that we are going to take the current location into account. That is, instead of writing T1 <: T2 (T1 is required to be a subtype of T2) we will write (T1 <: T2) @ P (T1 is required to be a subtype of T2 at the point P). This in turn will translate to region constraints like (R2 <= R1) @ P.

Continuing with Example 4, there are a number of places where subtyping constraints arise. For example, at point A/0, we have p = &'1 foo. Here, the type of &'1 foo is &'1 i32, and the type of p is &'0 i32, so we have a (location-aware) subtyping constraint:

(&'1 i32 <: &'0 i32) @ A/1

which in turn implies

('0 <= '1) @ A/1 // Note the order is reversed.

Note that the point here is A/1, not A/0. This is because A/1 is the first point in the CFG where this constraint must hold on entry.

The meaning of a region constraint like ('0 <= '1) @ P is that, starting from the point P, the region '1 must include all points that are reachable without leaving the region '0. The implementation basically does a depth-first search starting from P; the search stops if we exit the region '0. Otherwise, for each point we find, we add it to '1.

Jumping back to example 4, we wind up with two constraints in total. Combining those with the liveness constraint, we get this:

('0 <= '1) @ A/1
('0 <= '2) @ B/3
{A/1, B/0, B/3, B/4, C/0} <= '0
{A/0} <= '1
{B/2} <= '2

We can now try to find the smallest values for '0, '1, and '2 that will make this true. The result is:

'0 = {A/1, B/0, B/3, B/4, C/0}
'1 = {A/0, A/1, B/0, C/0}
'2 = {B/2, B/3, B/4, C/0}

These results are exactly what we wanted. The variable foo is borrowed for the region '1, which does not include B/3 and B/4. This is true even though '0 includes those points; this is because you cannot reach B/3 and B/4 from A/1 without going through B/1, and '0 does not include B/1 (because p is not live at B/1). Similarly, bar is borrowed for the region '2, which begins at B/2 and extends to C/0 (and need not include earlier points, which are not reachable).

You may wonder why we do not have to include all points in '0 in '1. Intuitively, the reasoning here is based on liveness: '1 must ultimately include all points where the reference may be accessed. In this case, the subregion constraint arises because we are copying a reference (with region '1) into a variable (let’s call it x) whose type includes the region '0, so we need reads of '0 to also be counted as reads of '1, but, crucially, only those reads that may observe this write. Because of the liveness constraints we saw earlier, if x will later be read, then x must be live along the path from this copy to that read (by the definition of liveness, essentially). Therefore, because the variable is live, '0 will include that entire path. Hence, by including the points in '0 that are reachable from the copy (without leaving '0), we include all potential reads of interest.

Conclusion

This post presents a system for computing non-lexical lifetimes. It assumes that all regions are erased when MIR is created. It uses only simple compiler concepts, notably liveness, but extends the subtyping relation to take into account where the subtyping must hold. This allows it to disregard unreachable portions of the control-flow.

I feel pretty good about this iteration. Among other things, it seems so simple I can’t believe it took me this long to come up with it. This either means that it is the right thing or I am making some grave error. If it’s the latter people will hopefully point it out to me. =) It also seems to be efficiently implementable.

I want to emphasize that this system is the result of a lot of iteration with a lot people, including (but not limited to) Cameron Zwarich, Ariel Ben-Yehuda, Felix Klock, Ralf Jung, and James Miller.

It’s interesting to compare this with various earlier attempts:

  • Our earliest thoughts assumed continuous regions (e.g., RFC 396). The idea was that the region for a reference ought to correspond to some continuous bit of control-flow, rather than having “holes” in the middle.
    • The example in this post shows the limitation of this, however. Note that the region for the variable p includes B/0 and B/4 but excludes B/1.
    • This is why we lean on liveness requirements instead, so as to ensure that the region contains all paths from where a reference is created to where it is eventually dereferenced.
  • An alternative solution might be to consider continuous regions but apply an SSA or SSI transform.
    • This allows the example in this post to type, but it falls down on more advanced examples, such as vec-push-ref (hat tip, Cameron Zwarich). In particular, it’s possible for subregion relations to arise without a variable being redefined.
    • You can go farther, and give variables a distinct type at each point in the program, as in Ericson2314’s stateful MIR for Rust. But even then you must contend with invariance or you have the same sort of problems.
    • Exploring this led to the development of the “localized” subregion relationship constraint (r1 <= r2) @ P, which I had in mind in my original series but which we elaborated more fully at the rustc design sprint.
    • The change in this post versus what we said at the sprint is that I am using one type per variable instead of one type per variable per statement; I am also explicitly using the results of an earlier liveness analysis to construct the constraints, whereas in the sprint we incorporated the liveness into the region inference itself (by reasoning about which values were live across each individual statement and thus creating many more inequalities).

There are some things I’ve left out of this post. Hopefully I will get to them in future posts, but they all seem like relatively minor twists on this base model.

  • I’d like to talk about how to incorporate lifetime parameters on fns (I think we can do that in a fairly simple way by modeling them as regions in an expanded control-flow graph, as illustrated by this example in my prototype).
  • There are various options for modeling the “deferred borrows” needed to accept vec.push(vec.len()).
  • We might consider a finer-grained notion of liveness that operates not on variables but rather on the “fragments” (paths) that we use when doing move-checking. This would help to make let (p, q) = (&foo, &bar) and let pair = (&foo, &bar) entirely equivalent (in the system as I described it, they are not, because whenever pair is live, both foo and bar would be borrowed, even if only pair.0 is ever used). But even if we do this there will still be cases where storing pointers into an aggregate (e.g., a struct) can lose precision versus using variables on the stack, so I’m not sure it’s worth worrying about.

Comments? Let’s use this old internals thread.

Firefox Nightly: Fosdem 2017 Nightly slides and video

FOSDEM

FOSDEM is a two-day event organised by volunteers to promote the widespread use of free and open source software.

Every year in February, Mozillians from all over the world go to Belgium to attend Fosdem, the biggest Free and Open Source Software event in Europe, with over 5,000 developers and Free Software advocates attending.

Mozilla has its own Developer Room and a booth, and many of our projects were presented. A significant part of the Firefox Release Management team attended the event and we had the opportunity to present the Firefox Nightly Reboot project in our Developer room on Saturday to a crowd of Mozillians and visitors interested in Mozilla and the future of Firefox.

Here are the slides of my presentation and this is the video recording of my talk:

With Mozilla volunteers (thanks guys!), we also heavily promoted the use of Nightly on the Mozilla booth over the two days of the event.

Mozilla booth Fosdem 2017

We had some interesting Nightly-specific feedback such as:

  • Many visitors thought that the Firefox Dev Edition was actually Nightly (promoted to developers, dark theme, updates daily).
  • Some users mentioned that they prefer to use the Dev Edition or Beta over Nightly, not because of a concern about stability, but because they find the update window that pops up if you don’t update daily to be annoying.
  • People were very positive about Firefox and wanted to help Mozilla but said they lacked time to get involved. So they were happy to know that just using Firefox Nightly with telemetry activated and sending crash reports is already a great way to help us.

In a nutshell, this event was really great, we probably spoke to a hundred developers about Nightly and it was almost as popular on the booth as Rust (people really love Rust!).

Do you want to talk about Nightly yourself?

Of course, my slides can be used as a basis for your own presentations to promote the use of Nightly to power users and our core community, whether at the open source events you attend in your region or at the ones organized by Mozilla Clubs!

The slides use reveal.js as a presentation framework and only need a browser to be displayed. You can download the tar.gz/zip archive of the slides or pull them from github with this command:
git clone https://github.com/pascalchevrel/reveal.js/ -b nightly_fosdem_2017

Anjana VakilNotes from KatsConf2

Hello from Dublin! Yesterday I had the privilege of attending KatsConf2, a functional programming conference put on by the fun-loving, welcoming, and crazy-well-organized @FunctionalKats. It was a whirlwind of really exciting talks from some of the best speakers around. Here’s a glimpse into what I learned.

I took a bunch of notes during the talks, in case you’re hungering for more details. But @jessitron took amazing graphical notes that I’ve linked to in the talks below, so just go read those!

And for the complete experience, check out this Storify of the #KatsConf2 tweets, put together by Vicky Twomey-Lee, who led a great ally skills workshop the evening before the conference:

View the story “Kats Conf 2” on Storify: http://storify.com/whykay/kats-conf-2

Hopefully this gives you an idea of what was said and which brain-exploding things you should go look up now! Personally it opened up a bunch of cans of worms for me - definitely a lot of the material went over my head, but I have a ton of stuff to go find out more (i.e. the first thing) about.

Disclaimer: The (unedited!!!) notes below represent my initial impressions of the content of these talks, jotted down as I listened. They may or may not be totally accurate, or precisely/adequately represent what the speakers said or think, and the code examples are almost certainly mistake-ridden. Read at your own risk!

The origin story of FunctionalKats

FunctionalKatas => FunctionalKats => (as of today) FunctionalKubs

  • Meetups in Dublin & other locations
  • Katas for solving programming problems in different functional languages
  • Talks about FP and related topics
  • Welcome to all, including beginners

The Perfect Language

Bodil Stokke @bodil

What would the perfect programming language look like?

“MS Excel!” “Nobody wants to say ‘JavaScript’ as a joke?” “Lisp!” “I know there are Clojurians in the audience, they’re suspiciously silent…”

There’s no such thing as the perfect language; languages are about compromise.

What the perfect language actually is is a personal thing.

I get paid to make whatever products I feel like to make life better for programmers. So I thought: I should design the perfect language.

What do I want in a language?

It should be hard to make mistakes

On that note let’s talk about JavaScript. It was designed to be easy to get into, and not to place too many restrictions on what you can do. But this means it’s easy to make mistakes & get unexpected results (cf. crazy stuff that happens when you add different things in JS). By restricting the types of inputs/outputs (see TypeScript), we can throw errors for incorrect input types - error messages may look like the compiler yelling at you, but really they’re saving you a bunch of work later on by telling you up front.
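
The talk’s example is TypeScript, but just to illustrate the same idea in Python (my own sketch, not from the talk): adding type annotations lets a checker such as mypy flag bad inputs before the code ever runs.

# with type annotations, a checker like mypy rejects wrong input types up front
def add(a: int, b: int) -> int:
    return a + b

print(add(2, 3))   # fine
# add(2, "3")      # flagged by the type checker (and a runtime error in plain Python)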

Let’s look at PureScript

Category theory! Semiring: something like addition/multiplication that has commutativity (a+b == b+a). Semigroup: …?

There should be no ambiguity

1 + 2 * 3

vs.

(+ 1 (* 2 3))

Pony: 1 + (2 * 3) – have to use parentheses to make precedence explicit

It shouldn’t make you think

Joe made a language at Ericsson in the late 80’s called “Erlang”. This is a gif of Joe from the Erlang movie. He’s my favorite movie star.

Immutability: In Erlang, values and variable bindings never change. At all.

This takes away some cognitive overhead (because we don’t have to think about what value a variable has at the moment)

Erlang tends to essentially fold over state: the old state is an input to the function and the new state is an output.
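
To make the “fold over state” idea concrete, here is a rough sketch in Python rather than Erlang (my own example; the state shape and messages are made up): the handler never mutates the old state, it just returns the next one.

# old state in, new state out - the previous state is never mutated
def handle(state, message):
    return {**state, "count": state["count"] + 1, "last": message}

def run(messages, state=None):
    state = state or {"count": 0, "last": None}
    for message in messages:
        state = handle(state, message)
    return state

print(run(["ping", "pong"]))  # {'count': 2, 'last': 'pong'}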

The “abstraction ceiling”

This term has to do with being able to express abstractions in your language.

Those of you who don’t know C: you don’t know what you’re missing, and I urge you not to find out. If garbage collection is a thing you don’t have to worry about in your language, that’s fantastic.

Elm doesn’t really let you abstract over the fact that e.g. map over array, list, set is somehow the same type of operation. So you have to provide 3 different variants of a function that can be mapped over any of the 3 types of collections. This is a bit awkward, but Elm programmers tend not to mind, because there’s a tradeoff: the fact that you can’t do this makes the type system simple so that Elm programmers get succinct, helpful error messages from the compiler.

I was learning Rust recently and I wanted to be able to express this abstraction. If you have a Collection trait, you can express that you take in a Collection and return a Collection. But you can’t specify that the output Collection has to be the same type as the incoming one. Rust doesn’t yet have the ability to express this, but they’re trying to add it.

We can do this in Haskell, because we have functors. And that’s the last time I’m going to use a term from category theory, I promise.

On the other hand, in a language like Lisp you can use its metaprogramming capabilities to raise the abstraction ceiling in other ways.

Efficiency

I have a colleague and when I suggested using OCaml as an implementation language for our utopian language, she rejected it because it was 50% slower than C.

In slower languages like Python or Ruby you tend to have performance-critical code written in the lower-level language of C.

But my feeling is that in theory, we should be able to take a language like Haskell and build a smarter compiler that can be more efficient.

But the problem is that we’re designing languages that are built on the lambda calculus and so on, but the machines they’re implemented on are not built on that idea, but rather on the Von Neumann architecture. The computer has to do a lot of contortions to take the beautiful lambda calculus idea and convert it into something that can run on an architecture designed from very different principles. This obviously complicates writing a performant and high-level language.

Rust wanted to provide a language as high-level as possible, but with zero-cost abstractions. So instead of garbage collection, Rust has a type-system-assisted kind of clean up. This is easier to deal with than the C version.

If you want persistent data structures a la Erlang or Clojure, they can be pretty efficient, but simple mutation is always going to be more efficient. We couldn’t do PDSs natively.

Suppose you have a language that’s low-level enough to have zero-cost abstractions, but you can plug in something like garbage collection, currying, or perhaps extend the type system, so that you can write high-level programs using that functionality even though it’s not actually part of the core language. I have no idea how to do this but it would be really cool.

Summing up

You need to think about:

  • Ergonomics
  • Abstraction
  • Efficiency
  • Tooling (often forgotten at first, but very important!)
  • Community (Code sharing, Documentation, Education, Marketing)

Your language has to be open source. You can make a proprietary language, and you can make it succeed if you throw enough money at it, but even the successful historical examples of that were eventually open-sourced, which enabled their continued use. I could give a whole other talk about open source.

Functional programming & static typing for server-side web

Oskar Wickström @owickstrom

FP has been influencing JavaScript a lot in the last few years. You have ES6 functional features, libraries like Underscore, Ramda, etc., products like React with FP/FRP at their core, and JS as a compile target for functional languages.

But the focus is still client-side JS.

Single page applications: using the browser to write apps more like you wrote desktop apps before. Not the same model as perhaps the web browser was intended for at the beginning.

Lots of frameworks to choose from: Angular, Ember, Meteor, React, et al. Without JS on the client, you get nothing.

There’s been talk recently of “isomorphic” applications: one framework which runs exactly the same way on the server and the client. The term is sort of stolen and not used in the same way as in category theory.

Static typing would be really useful for middleware, which is a common abstraction but very easy to mess up if dynamically typed. In Clojure, if you mess up the middleware you get the Stack Trace of Doom.

Let’s use extensible records in PureScript - shout out to Edwin’s talk related to this. That inspired me to implement this in PureScript, which started this project called Hyper which is what I’m working on right now in my free time.

Goals:

  • Safe HTTP middleware architecture
  • Make effects of middleware explicit
  • No magic

How?

  • Track middleware effects in type system
  • leverage extensible records in PureScript
  • Provide a common API for middleware
  • Write middleware that can work on multiple backends

Design

  • Conn: sort of like in Elixir; instead of passing a request and returning a response, pass them all together as a single unit
  • Middleware: a function that takes a connection c and returns another connection type c’ inside another type m
  • Indexed monads: similar to a state monad, but with two additional parameters: the type of the state before this action, and the type after. We can use this to prohibit effectful operations which aren’t correct.
  • Response state transitions: Hyper uses phantom types to track the state of response, guaranteeing correctness in response side effects
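
Hyper itself is written in PureScript, but here is a rough illustration of the phantom-type idea in Python (all names below are invented for this sketch, not Hyper’s API): tagging the response with a “state” type parameter lets a type checker such as mypy reject writes that happen in the wrong order.

from typing import Generic, TypeVar

# phantom "state" markers - they exist only for the type checker
class StatusLineOpen: ...
class HeadersOpen: ...
class BodyOpen: ...

S = TypeVar("S")

class Response(Generic[S]):
    def __init__(self, parts=None):
        self.parts = list(parts or [])

def write_status(res: Response[StatusLineOpen], code: int) -> Response[HeadersOpen]:
    return Response(res.parts + ["HTTP/1.1 %d" % code])

def write_header(res: Response[HeadersOpen], key: str, value: str) -> Response[HeadersOpen]:
    return Response(res.parts + ["%s: %s" % (key, value)])

def close_headers(res: Response[HeadersOpen]) -> Response[BodyOpen]:
    return Response(res.parts + [""])

# the correct order type-checks; calling write_header before write_status would not
start = Response()  # conceptually Response[StatusLineOpen]
done = close_headers(write_header(write_status(start, 200), "Content-Type", "text/plain"))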

Functional Program Derivation

@Felienne

(Amazing, hand-drawn, animal-filled) Slides

Program derivation:

Problem => Specification => Derivation => Program

Someone said this is like “refactoring in reverse”

Generalization: introduce parameters instead of constant values

Induction: prove something for a base case and a first step, and you’ve proven it for all numbers

Induction hypothesis: if you are at step n, you must have been at step n-1 before that.

With these elements, we have a program! We just make an if/else: e.g. for sum(n), if n == 0: return 0; else return sum(n-1) + n
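
In Python, the derived program is the literal transcription of that if/else (my rendering, not Felienne’s notation):

def total(n):
    # base case of the induction: sum(0) = 0
    if n == 0:
        return 0
    # inductive step: assume total(n - 1) is correct, then add n
    return total(n - 1) + n

print(total(5))  # 15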

It all comes down to writing the right specification: which is where we need to step away from the keyboard and think.

Induction is the basis of recursion.

We can use induction to create a specification for sorting lists from which we can derive the QuickSort algorithm.

But we get 2 sorting algorithms for the price of 1: if we place a restriction that we can only do one recursive call, we can tweak the specification to derive InsertionSort, thus proving that Insertion Sort is a special case of Quick Sort.
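
A rough Python transcription of the two derived programs (mine, not from the talk): the unrestricted derivation gives Quick Sort with its two recursive calls, while restricting ourselves to one recursive call gives Insertion Sort.

def quick_sort(xs):
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    # two recursive calls
    return (quick_sort([x for x in rest if x <= pivot])
            + [pivot]
            + quick_sort([x for x in rest if x > pivot]))

def insert(x, ys):
    # put x into its place in an already-sorted list
    for i, y in enumerate(ys):
        if x <= y:
            return ys[:i] + [x] + ys[i:]
    return ys + [x]

def insertion_sort(xs):
    if not xs:
        return []
    # only one recursive call allowed
    return insert(xs[0], insertion_sort(xs[1:]))

print(quick_sort([3, 1, 2]))      # [1, 2, 3]
print(insertion_sort([3, 1, 2]))  # [1, 2, 3]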

I stole this from a PhD dissertation (“Functional Program Derivation” by …). This is all based on program derivation work by Dijkstra.

Takeaways:

  • Programming == Math. Practicing some basic math is going to help you write code, even if you won’t be doing these kinds of exercises in your day-to-day
  • Calculations provide insight
  • Delay choices where possible. Say “let’s assume a solution to this part of the problem” and then go back and solve it later.

I’m writing a whole book on this, if you’re interested in giving feedback on chapter drafts let me know! mail at felienne dot com

Q&A:

  • Is there a link between the specification and the complexity of the program? Yes, the specification has implications for the implementation. The choices you make within the specification (e.g. caching values, splitting computation) affect the efficiency of the program.
  • What about proof assistants? Those are nice if you’re writing a dissertation or whatnot, but if you’re at the stage where you’re practicing this, the exercise is being precise, so I recommend doing this on paper. The second your fingers touch the keyboard, you can outsource your preciseness to the computer.
  • Once you’ve got your specification, how do you ensure that your program meets it? One of the things you could do is write the spec in something like fscheck, or you could convert the specification into tests. Testing and specification are really enriching each other. Writing tests as a way to test your specification is also a good way to go. You should also have some cases for which you know, or have an intuition of, the behavior. But none of this is supposed to go in a machine, it’s supposed to be on paper.
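
As an example of turning a specification into tests, here is what a property-based test for a sorting function could look like with the Hypothesis library in Python (my sketch; my_sort stands in for whatever program you derived):

from hypothesis import given, strategies as st

def my_sort(xs):
    # stand-in for the derived program
    return sorted(xs)

@given(st.lists(st.integers()))
def test_meets_specification(xs):
    result = my_sort(xs)
    # the output is ordered...
    assert all(a <= b for a, b in zip(result, result[1:]))
    # ...and is a permutation of the input
    assert sorted(result) == sorted(xs)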

The cake and eating it: or writing expressive high-level programs that generate fast low-level code at runtime

Nada Amin @nadamin

Distinguish stages of computation

  • Program generator: basic types (Int, String, T) are executed at code generation time
  • Rep(Int), Rep(String), Rep(T) are left as variables in the generated code and executed at program run time(?)

Shonan Challenge for Generative Programming - part of the gen. pro. for HPC literature: you want to generate code that is specialized to a particular matrix

  • Demo of generating code to solve this challenge

Generative Programming Patterns

  • Deep linguistic reuse
  • Turning interpreters into compilers
    • You can think of the process of staging as something which generates code, think of an interpreter as taking code and additional input and creates a result.
    • Putting them together we get something that takes code and symbolic input, and in the interpret stage generates code which takes actual input, which in the execution stage produces a result
    • This idea dates back to 1971, Futamura’s Partial Evaluation
  • Generating efficient low-level code
    • e.g. for specialized parsers
    • We can take an efficient HTTP parser from 2000+ lines to 200, with parser combinators
    • But while this is great for performance, it leaves big security holes
    • So we can use independent tools to verify the generated code after the fact

Sometimes generating code is not the right solution to your problem

More info on the particular framework I’m using: Generative Programming for ‘Abstraction without Regret’

Rug: an External DSL for Coding Code Transformations (with Scala Parser-Combinators)

Jessica Kerr @jessitron, Atomist

The last talk was about abstraction without (performance) regret. This talk is about abstraction without the regret of making your code harder to read.

Elm is a particularly good language to modify automatically, because it’s got some boilerplate, but I love that boilerplate! No polymorphism, no type classes - I know exactly what that code is going to do! Reading it is great, but writing it can be a bit of a headache.

As a programmer I want to spend my time thinking about what the users need and what my program is supposed to do. I don’t want to spend my time going “Oh no, I forgot to put that thing there”.

Here’s a simple Elm program that prints “Hello world”. The goal is to write a program that modifies this existing Elm code and changes the greeting that we print.

We’re going to do this with Scala. The goal is to generate readable code that I can later go ahead and change. It’s more like a templating engine, but instead of starting with a templating file it starts from a cromulent Scala program.

Our goal is to parse an Elm file into a parse tree, which gives us the meaningful bits of that file.

The “parser” in parser combinators is actually a combination of lexer and parser.

Reuse is dangerous, dependencies are dangerous, because they create coupling. (Controlled, automated) Cut & Paste is a safer solution.

at which point @jessitron does some crazy fast live coding to write an Elm parser in Scala

Rug is the super-cool open-source project I get to work on as my day job now! It’s a framework for creating code rewriters

Repo for this talk

In conclusion: any time my job feels easy, I think “OMG I’m doing it wrong”. But I don’t want to introduce abstraction into my code, because someone else is going to have difficulty reading that. I want to be able to abstract without sacrificing code readability. I can make my job faster and harder by automating it.

Relational Programming

Will Byrd

There are many programming paradigms that don’t get enough attention. The one I want to talk about today is Relational Programming. It’s somewhat representative of Logic Programming, like Prolog. I want to show you what can happen when you commit fully to the paradigm, and see where that leads us.

Functional Programming is a special case of Relational Programming, as we’re going to see in a minute.

What is functional programming about? There’s a hint in the name. It’s about functions, the idea that representing computation in the form of mathematical functions could be useful. Because you can compose functions, you don’t have to reason about mutable state, etc. - there are advantages to modeling computation as mathematical functions.

In relational programming, instead of representing computation as functions we represent it as relations. You can think of a relation in many ways: in terms of relational databases if you’re familiar with those, or in terms of tuples where we want to reason over sets or collections of tuples, or in terms of algebra - like high school algebra - where we have variables representing unknown quantities and we have to figure out their values. We’ll see that we can get FP as a special case - there’s a different set of tradeoffs - but we’ll see that when you commit fully to this paradigm you can get some very surprising behavior.

Let’s start in our functional world, we’re going to write a little program in Scheme or Racket, a little program to manipulate lists. We’ll just do something simple like append or concatenate. Let’s define append in Scheme:

(define append
  (lambda (l s)
    (if (null? l)
        s
        (cons (car l) (append (cdr l) s)))))

We’re going to use a relational programming language called miniKanren, which is basically an extension that has been applied to lots of languages and which allows us to put in variables representing values and ask miniKanren to fill in those values.

So I’m going to define appendo. (By convention we define our names ending in -o, it’s kind of a long story, happy to explain offline.)

Writes a bunch of Kanren that we don’t really understand

Now I can do:

> (run 1 (q) (appendo '(a b c) '(d e) q))
((a b c d e))

So far, not very interesting, if this is all it does then it’s no better than append. But where it gets interesting is that I can run it backwards to find an input:

> (run 1 (X) (appendo '(a b c) X '(a b c d e)))
((d e))

Or I can ask it to find N possible inputs:

> (run 2 (X Y) (appendo X Y '(a b c d e)))
((a b c d) (e))
((a b c d e) ())

Or all possible inputs:

> (run* (X Y) (appendo X Y '(a b c d e)))
((a b c d) (e))
((a b c d e) ())
...

What happens if I do this?

> (run* (X Y Z) (appendo X Y Z))

It will run forever. This is sort of like a database query, except where the tables are infinite.

One program we could write is an interpreter, an evaluator. We’re going to take an eval that’s written in MiniKanren, which is called evalo and takes two arguments: the expression to be evaluated, and the value of that expression.

> (run 1 (q) (evalo '(lambda (x) x) q))
((closure x x ()))
> (run 1 (q) (evalo '(list 'a) q))
((a))

A professor wrote a Valentine’s Day post, “99 ways to say ‘I love you’ in Racket”, to teach people Racket by showing 99 different Racket expressions that evaluate to the list `(I love you)`.

> (run 99 (q) (evalo q '(I love you)))
...99 ways...

What about quines: a quine is a program that evaluates to itself. How could we find or generate a quine?

> (run 1 (q) (evalo q q))

And twines: two different programs p and q where p evaluates to q and q evaluates to p.

> (run 1 (p q) (=/= p q) (evalo p q) (evalo q p))
...two expressions that basically quote/unquote themselves...

What would happen if we run Scheme’s append in our evaluator?

> (run 1 (q)
    (evalo
      `(letrec ((append
                  (lambda (l s)
                    (if (null? l)
                        s
                        (cons (car l)
                              (append (cdr l)
                                      s))))))
         (append '(a b c) '(d e)))
      q))
((a b c d e))

But we can put the variable also inside the definition of append:

> (run 1 (q)
    (evalo
      `(letrec ((append
                  (lambda (l s)
                    (if (null? l)
                        ,q
                        (cons (car l)
                              (append (cdr l)
                                      s))))))
         (append '(a b c) '(d e)))
      '(a b c d e)))
(s)

Now we’re starting to synthesize programs based on specifications. When I gave this talk at PolyConf a couple of years ago, Jessitron trolled me about how long it took to run this; since then we’ve gotten quite a bit faster.

This is a tool called Barliman that I (and Greg Rosenblatt) have been working on, and it’s basically a frontend, a dumb GUI to the interpreter we were just playing with. It’s just a prototype. We can see a partially specified definition - a Scheme function that’s partially defined, with metavariables that are fill-in-the-blanks for some Scheme expressions that we don’t know what they are yet. Barliman’s going to guess what the definition is going to be.

(define ,A
    (lambda ,B
      ,C))

Now we give Barliman a bunch of examples. Like (append '() '()) gives '(). It guesses what the missing expressions were based on those examples. The more test cases we give it, the better approximation of the program it guesses. With 3 examples, we can get it to correctly guess the definition of append.

Yes, you are going to lose your jobs. Well, some people are going to lose their jobs. This is actually something that concerns me, because this tool is going to get a lot better.

If you want to see the full dog & pony show, watch the ClojureConj talk I gave with Greg.

Writing the tests is indeed the harder part. But if you’re already doing TDD or property-based testing, you’re already writing the tests, why don’t you just let the computer figure out the code for you based on those tests?

Some people say this is too hard, the search space is too big. But that’s what they said about Go, and it turns out that if you use the right techniques plus a lot of computational power, Go isn’t as hard as we thought. I think in about 10-15 years program synthesis won’t be as hard as we think now. We’ll have much more powerful IDEs, much more powerful synthesis tools. It could even tell you as you’re writing your code whether it’s inconsistent with your tests.

What this will do for jobs, I don’t know. I don’t know, maybe it won’t pan out, but I can no longer tell you that this definitely won’t work. I think we’re at the point now where a lot of the academic researchers are looking at a bunch of different parts of synthesis, and no one’s really combining them, but when they do, there will be huge breakthroughs. I don’t know what it’s going to do, but it’s going to do something.

Working hard to keep things lazy

Raichoo @raichoo

Without laziness, we waste a lot of space, because when we have recursion we have to keep allocating memory for each evaluated thing. Laziness allows us to get around that.
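
Not from the talk: Python is strict, but generators give a rough feel for the “only evaluate what is demanded” half of laziness (sharing is the half this sketch does not capture):

def naturals():
    # a conceptually infinite list; nothing is computed until it is asked for
    n = 0
    while True:
        yield n
        n += 1

def take(k, iterable):
    return [x for _, x in zip(range(k), iterable)]

squares = (n * n for n in naturals())  # no work has happened yet
print(take(5, squares))                # [0, 1, 4, 9, 16]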

What is laziness, from a theoretical standpoint?

The first thing we want to talk about is different ways to evaluate expressions.

> f x y = x + y
> f (1 + 1) (2 + 2)  

How do we evaluate this?

=> (1 + 1) + (2 + 2)
=> 2 + 4
=> 6  

This evaluation was in normal order

Church-Rosser Theorem: the order of evaluation doesn’t matter, ultimately a lambda expression will evaluate to the same thing.

But! We have things like non-termination, and termination can only be determined after the fact.

Here’s a way we can think of types: Let’s think of a Boolean as something which has three possible values: True, False, and “bottom”, which represents not-yet-determined, a computation that hasn’t ended yet. True and False are more defined than bottom (e.g. _|_ <= True). Partial ordering.

Monotone functions: if we have a function that takes a Bool and returns a Bool, and x and y are bools where x <= y, then f x <= f y. We can now show that f _|_ = True and f x = False doesn’t work out, because it would have the consequence that True => False, which doesn’t work - that’s a good thing because if it did, we would have solved the halting problem. What’s nice here is that if we write a function and evaluate it in normal order, in the lazy way, then this naturally works out.

Laziness is basically non-strictness (this normal order thing I’ve been talking about the whole time), and sharing.

Laziness lets us reuse code and use combinators. This is something I miss from Haskell when I use any other language.

Honorable mention: Purely Functional Data Structures by Chris Okasaki. When you have Persistent Data Structures, you need laziness to have this whole amortization argument going on. This book introduces its own dialect of ML (lazy ML).

How do we do laziness in Haskell (in GHC)? At an intermediate stage of compilation called STG, Haskell takes unoptimized code and optimizes it to make it lazy. (???)

Total Functional Programming

Edwin Brady @edwinbrady

Idris is a pure functional language with dependent types. It’s a “total” language, which means you have program totality: a program either terminates or keeps producing new results.

Goals are:

  • Encourage type-driven development
  • Reduce the cost of writing correct software - giving you more tools to know upfront the program will do the correct thing.

People on the internet say, you can’t do X, you can’t do Y in a total language. I’m going to do X and Y in a total language.

Types become plans for a program. Define the type up front, and use it to guide writing the program.

You define the program interactively. The compiler should be less like a teacher, and more like a lab assistant. You say “let’s work on this” and it says “yes! let me help you”.

As you go, you need to refine the type and the program as necessary.

Test-driven development has “red, green, refactor”. We have “type, define, refine”.

If you care about types, you should also care about totality. You don’t have a type that completely describes your program unless your program is total.

Given f : T: if program f is total, we know that it will always give a result of type T. If it’s partial, we only know that if it gives a result, it will be type T, but it might crash, run forever, etc. and not give a result.

The difference between total and partial functions in this world: if it’s total, we can think of it as a Theorem.

Idris can tell us whether or not it thinks a program is total (though we can’t be sure, because we haven’t solved the halting problem “yet”, as a student once wrote in an assignment). If I write a program that type checks but Idris thinks it’s possibly not total, then I’ve probably done the wrong thing. So in my Idris code I can tell it that some function I’m defining should be total.

I can also tell Idris that if I can prove something that’s impossible, then I can basically deduce anything, e.g. an alt-fact about arithmetic. We have the absurd keyword.

We have Streams, where a Stream is sort of like a list without nil, so potentially infinite. As far as the runtime is concerned, this means this is lazy. Even though we have strictness.

Idris uses IO like Haskell to write interactive programs. IO is a description of actions that we expect the program to make(?). If you want to write interactive programs that loop, this stops it being total. But we can solve this by describing looping programs as a stream of IO actions. We know that the potentially-infinite loops are only going to get evaluated when we have a bit more information about what the program is going to do.

Turns out, you can use this to write servers, which run forever and accept responses, which are total. (So the people on the internet are wrong).

Check out David Turner’s paper “Elementary Strong Functional Programming”, where he argues that totality is more important than Turing-completeness, so if you have to give up one you should give up the latter.

Book coming out: Type-Driven Development with Idris

Gervase MarkhamTechnology Is More Like Magic Than Like Science

So said Peter Kreeft, commenting on three very deep sentences from C.S. Lewis on the problems and solutions of the human condition.

Suppose you are asked to classify four things –

  • religion,
  • science,
  • magic, and
  • technology.

– and put them into two categories. Most people would choose “religion and magic” and “science and technology”. Read Justin Taylor’s short article to see why the deeper commonalities are between “religion and science” and “magic and technology”.

Chris CooperBeing productive when distributed teams get together, take 2

Salmon jumping

Every year, hundreds of release engineers swim upstream because they’re built that way.

Last week, we (Mozilla release engineering) had a workweek in Toronto to jumpstart progress on the TaskCluster (TC) migration. After the success of our previous workweek for release promotion, we were anxious to try the same format once again and see if we could realize any improvements.

Prior preparation prevents panic

We followed all of the recommendations in the Logistics section of Jordan’s post to great success.

Keeping developers fed & watered is an integral part of any workweek. If you ever want to burn a lot of karma, try building consensus between 10+ hungry software developers about where to eat tonight, and then finding a venue that will accommodate you all. Never again; plan that shit in advance. Another upshot of advance planning is that you can also often go to nicer places that cost the same or less. Someone on your team is a (closet) foodie, or is at least a local. If it’s not you, ask that person to help you with the planning.

What stage are you at?

The workweek in Vancouver benefitted from two things:

  1. A week of planning at the All-Hands in Orlando the month before; and,
  2. Rail flying out to Vancouver a week early to organize much of the work to be done.

For this workweek, it turned out we were still at the planning stage, but that’s totally fine! Never underestimate the power of getting people on the same page. Yes, we did do *some* hacking during the week. Frankly, I think it’s easier to do the hacking bit remotely, but nothing beats a bunch of engineers in a room in front of a whiteboard for planning purposes. As a very distributed team, we rarely have that luxury.

Go with it

…which brings me to my final observation. Because we are a very distributed team, opportunities to collaborate in person are infrequent at best. When you do manage to get a bunch of people together in the same room, you really do need to go with discussions and digressions as they develop.

This is not to say that you shouldn’t facilitate those discussions, timeboxing them as necessary. If I have one nit to pick with Jordan’s post it’s that the “Operations” role would be better described as a facilitator. As a people manager for many years now, this is second-nature to me, but having someone who understands the problem space enough to know “when to say when” and keep people on track is key to getting the most out of your time together.


By and large, everything worked out well in Toronto. It feels like we have a really solid format for workweeks going forward.

Air MozillaWebdev Beer and Tell: February 2017

Webdev Beer and Tell: February 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Christian HeilmannMy closing keynote of the Tweakers DevSummit – slides and resources

Yesterday I gave the closing keynote of the Tweakers Developer Summit in Utrecht, The Netherlands. The conference topic was “Webdevelopment – Coding the Universe” and the organisers asked me to give a talk about Machine Learning and what it means for developers in the nearer future. So I took out my crystal ball 🔮 and whipped up the following talk:

“Suit up, bring extra oxygen: Internet space explorers needed”, from Christian Heilmann

Here are the resources covered in the talk:

Yes, this was a lot – maybe too much – for one talk, but the feedback I got was all very positive, so I am hoping for the video to come out soon.

Karl Dubost[worklog] Edition 055 - Tiles on a wall

webcompat life

With all these new issues, it’s sometimes difficult to follow everything. The team has grown. We are more active on more fronts, and the unwritten rules we had when we were 3 or 4 people no longer quite work. As we grow, we need a bit more process and slightly more dedicated areas for each of us. It will help us avoid work conflicts and make progress smoothly: defining responsibilities and empowering people. This week, some tiles were put up in my house over two days. After the first day, the tiles were on the wall without the grout in between. I started to regret the choice of tiles. Too big, too obvious. But the day after, the grout was put in place between the tiles and everything made sense. Everything was elegant and just how we had envisioned it.

webcompat issues

webcompat.com dev

Otsukare!

Niko MatsakisProject idea: datalog output from rustc

I want to have a tool that would enable us to answer all kinds of queries about the structure of Rust code that exists in the wild. This should cover everything from syntactic queries like “How often do people write let x = if { ... } else { match foo { ... } }?” to semantic queries like “How often do people call unsafe functions in another module?” I have some ideas about how to build such a tool, but (I suspect) not enough time to pursue them. I’m looking for people who might be interested in working on it!

The basic idea is to build on Datalog. Datalog, if you’re not familiar with it, is a very simple scheme for relating facts and then performing analyses on them. It has a bunch of high-performance implementations, notably souffle, which is also available on GitHub. (Sadly, it generates C++ code, but maybe we’ll fix that another day.)

Let me work through a simple example of how I see this working. Perhaps we would like to answer the question: How often do people write tests in a separate file (foo/test.rs) versus an inline module (mod test { ... })?

We would (to start) have some hacked up version of rustc that serializes the HIR in Datalog form. This can include as much information as we would like. To start, we can stick to the syntactic structures. So perhaps we would encode the module tree via a series of facts like so:

// links a module with the id `id` to its parent `parent_id`
ModuleParent(id, parent_id).
ModuleName(id, name).

// specifies the file where a given `id` is located
File(id, filename).

So for a module structure like:

// foo/mod.rs:
mod test;

// foo/test.rs:
#[test] 
fn test() { }

we might generate the following facts:

// module with id 0 has name "" and is in foo/mod.rs
ModuleName(0, "").
File(0, "foo/mod.rs").

// module with id 1 is in foo/test.rs,
// and its parent is module with id 0.
ModuleName(1, "test").
ModuleParent(1, 0).
File(1, "foo/test.rs").

Then we can write a query to find all the modules named test which are in a different file from their parent module:

// module T is a test module in a separate file if...
TestModuleInSeparateFile(T) :-
    // ...the name of module T is test, and...
    ModuleName(T, "test"),
    // ...it is in the file T_File... 
    File(T, T_File),
    // ...it has a parent module P, and...
    ModuleParent(T, P),
    // ...the parent module P is in the file P_File... 
    File(P, P_File),
    // ...and file of the parent is not the same as the file of the child.
    T_File != P_File.

Anyway, I’m waving my hands here, and probably getting datalog syntax all wrong, but you get the idea!

Obviously my encoding here is highly specific for my particular query. But eventually we can start to encode all kinds of information this way. For example, we could encode the types of every expression, and what definition each path resolved to. Then we can use this to answer all kinds of interesting queries. For example, some things I would like to use this for right now (or in the recent past):

So, you interested? If so, contact me – either privmsg over IRC (nmatsakis) or over on the internals threads I created.

Mozilla Addons BlogThe Road to Firefox 57 – Compatibility Milestones

Back in November, we laid out our plans for add-ons in 2017. Notably, we defined Firefox 57 as the first release where only WebExtensions will be supported. In parallel, the deployment of Multiprocess Firefox (also known as e10s) continues, with many users already benefiting from the performance and stability gains. There is a lot going on and we want you to know what to expect, so here is an update on the upcoming compatibility milestones.

We’ve been working on setting out a simple path forward, minimizing the compatibility hurdles along the way, so you can focus on migrating your add-ons to WebExtensions.

Legacy add-ons

By legacy add-ons, we’re referring to:

Language packs, dictionaries, OpenSearch providers, lightweight themes, and add-ons that only support Thunderbird or SeaMonkey aren’t considered legacy.

Firefox 53, April 18th release

  • Firefox will run in multiprocess mode by default for all users, with some exceptions. If your add-on has the multiprocessCompatible flag set to false, Firefox will run in single process mode if the add-on is enabled.
  • Add-ons that are reported and confirmed as incompatible with Multiprocess Firefox (and don’t have the flag set to false) will be marked as incompatible and disabled in Firefox.
  • Add-ons will only be able to load binaries using the Native Messaging API (a minimal sketch of a native messaging host follows this list).
  • No new legacy add-ons will be accepted on addons.mozilla.org (AMO). Updates to existing legacy add-ons will still be accepted.
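
For reference, a native messaging host is just an external program that exchanges length-prefixed JSON messages with the add-on over stdin/stdout. A minimal Python host might look something like this sketch (the echo behaviour is invented; the manifest that registers the host is not shown):

import json
import struct
import sys

def read_message():
    # each message is a 4-byte length followed by that many bytes of UTF-8 JSON
    raw_length = sys.stdin.buffer.read(4)
    if not raw_length:
        return None
    length = struct.unpack("=I", raw_length)[0]
    return json.loads(sys.stdin.buffer.read(length).decode("utf-8"))

def send_message(message):
    encoded = json.dumps(message).encode("utf-8")
    sys.stdout.buffer.write(struct.pack("=I", len(encoded)))
    sys.stdout.buffer.write(encoded)
    sys.stdout.buffer.flush()

while True:
    message = read_message()
    if message is None:
        break
    send_message({"echo": message})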

Firefox 54-56

  • Legacy add-ons that work with Multiprocess Firefox in 53 may still run into compatibility issues due to followup work:
    • Multiple content processes will be launched in Firefox 55, instead of the single content process currently used.
    • Sandboxing will be launched in Firefox 54. Additional security restrictions will prevent certain forms of file access from content processes.

Firefox 57, November 14th release

  • Firefox will only run WebExtensions.
  • AMO will continue to support listing and updating legacy add-ons after the release of 57 in order to have an easier transition. The exact cut-off time for this support hasn’t been determined yet.
  • Multiprocess compatibility shims are removed from Firefox. This doesn’t affect WebExtensions, but it’s one of the reasons we went with this timeline.

For all milestones, keep in mind that Firefox is released using a “train” model, where Beta, Developer Edition, and Nightly correspond to the future 3 releases. You should expect users of pre-release versions to be impacted by these changes earlier than their final release dates. The Release Calendar lists future release dates per channel.

We are committed to this timeline, and will work hard to make it happen. We urge all developers to look into WebExtensions and port their add-ons as soon as possible. If you think your add-on can’t be ported due to missing APIs, here’s how you can let us know.

Christopher Arnold

https://en.wikipedia.org/wiki/Beowulf
A long time ago I remember reading Stephen Pinker discussing the evolution of language.  I had read Beowulf, Chaucer and Shakespeare, so I was quite interested in these linguistic adaptations over time.  Language shifts rapidly through the ages, to the  point that even English of 500 years ago sounds foreign to us now.  His thesis in the piece was about how language is going to shift toward the Chinese pronunciation of it.  Essentially, the majority of speakers will determine the rules of the language’s direction.  There are more Chinese in the world than native English speakers, so as they adopt and adapt the language, more of us will speak like the greater factions of our language’s custodians.  The future speakers of English, will determine its course.  By force of "majority rules", language will go in the direction of its greatest use, which will be the Pangea of the global populace seeking common linguistic currency with others of foreign tongues.  Just as the US dollar is an “exchange currency” standard at present between foreign economies, English is the shortest path between any two ESL speakers, no matter which background.

Subsequently, I heard these concepts reiterated in a Scientific American podcast.  The concept there being that English, when spoken by those who learned it as a second language, is easier for other speakers to understand than native-spoken English.  British, Indian, Irish, Aussie, New Zealand and American English are relics in a shift, very fast, away from all of them.  As much as we appreciate each, they are all toast.  Corners will be cut, idiomatic usage will be lost, as the fastest path to information conveyance determines the path that language takes in its evolution.  English will continue to be a mutt language flavored by those who adopt and co-opt it.  Ultimately meaning that no matter what the original language was, the common use of it will be the rules of the future.  So we can say goodbye to grammar as native speakers know it.  There is a greater shift happening than our traditions.  And we must brace as this evolution takes us with it to a linguistic future determined by others.

I’m a person who has greatly appreciated idiomatic and aphoristic usage of English.  So I’m one of those, now old codgers, who cringes at the gradual degradation of language.  But I’m listening to an evolution in process, a shift toward a language of broader and greater utility.  So the cringes I feel, are reactions to the time-saving adaptations of our language as it becomes something greater than it has been in the past.  Brits likely thought/felt the same as their linguistic empire expanded.  Now is just a slightly stranger shift.

This evening I was in the kitchen, and I decided to ask Amazon Alexa to play some Led Zeppelin.  This was a band that used to exist in the 1970’s era during which I grew up.  I knew their entire corpus very well.  So when I started hearing one of my favorite songs, I knew this was not what I had asked for.  It was a good rendering for sure, but it was not Robert Plant singing.  Puzzled, I asked Alexa who was playing.  She responded “Lez Zeppelin”.  This was a new band to me.  A very good cover band I admit.  (You can read about them here: http://www.lezzeppelin.com/)
But why hadn't Alexa wanted to respond to my initial request?  Was it because Atlantic Records hadn't licensed Led Zeppelin's actual catalog for Amazon Prime subscribers?

Two things struck me.  First, we aren’t going to be tailoring our English to Chinese ESL common speech patterns as Mr. Pinker predicted.  We’re probably also going to be shifting our speech patterns to what Alexa, Siri, Cortana and Google Home can actually understand.  They are the new ESL vector that we hadn't anticipated a decade ago.  It is their use of English that will become conventional, as English is already the de facto language of computing, and therefore our language is now the slave to code.

What this means for that band (that used to be called Zeppelin) is that such entity will no longer be discoverable.  In the future, if people say “Led Zeppelin” to Alexa, she’ll respond with Lez Zeppelin (the rights-available version of the band formerly known as "Led Zeppelin").  Give humanity 100 years or so, and the idea of a band called Led Zeppelin will seem strange to folk.  Five generations removed, nobody will care who the original author was.  The "rights" holder will be irrelevant.  The only thing that will matter in 100 years is what the bot suggests.

Our language isn't ours.  It is the path to the convenient.  In bot speak, names are approximate and rights (ignoring the stalwart protectors) are meaningless.  Our concepts of trademarks, rights ownership, etc. are going to be steam-rolled by other factors, other "agents" acting at the user's behest.  The language and the needs of the spontaneous are immediate!

Air MozillaReps Weekly Meeting Feb. 16, 2017

Reps Weekly Meeting Feb. 16, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Emma IrwinThank you Guillermo Movia

 

I first got to know Guillermo during our time together on the Mozilla Reps council – which was actually his second stint in community leadership; the first was as a founding council member. Since then, I’ve come to appreciate and rely on his intuition, experience and skill in navigating the complexities of community management, as a peer in the community and a colleague at Mozilla for the past two years.

Before I go any further I would like to thank Guillermo, on behalf of many,  for politely responding to terrible mispronunciations of his name over the years including (but not limited to)  ‘G-glermo, geejermo, Glermo, Juremo, Glermo, Gillermo and various versions of Guilllllllllmo’.

Although I am excited to see Guillermo off to new adventures, I, and many others in the Mozilla community, wanted to mark his 12 years with Mozilla by honoring and celebrating the journey so far. Thankfully, he took some time to meet with me last week for an interview…

In the Beginning…

As many who do not speak English as a first language might understand, Guillermo remembers spending his early days on IRC and mailing lists trying to understand ways to get involved in his first language – Spanish.  It was this experience and eventual collaboration with other Spanish speaking community leaders Ruben and Francisco that led to the formation of the Hispano Community.

Emerging Leader, Emerging Community

Guillermo’s love of the open web radiates through all aspects of his life and history, including his Bachelor’s thesis, whose cover you might notice resembles a browser…

During this same time of dedicated study, Guillermo began to both participate in and organize Mozilla events in Argentina. One of his most memorable moments of empowerment was when Asa Dotzler from Mozilla, who had been visiting his country, declared to Guillermo and the emerging community group, ‘you are the Argentina Community’ – with subsequent emails in support of that from both Asa and Mary Colvig that ultimately led to a new community evolution.

Building Participation

Guillermo joined Mozilla as staff during the Firefox OS era, at first part time while he also worked at Nobox organizing events and activities for the De Todos Para Todos campaign. He started full time not long afterwards, stepping into the role of community manager for LATAM. His work with Participation included developing regional leadership strategies, including a coaching framework.

Proudest Moments

I asked Guillermo to reflect on what his proudest moments have been so far, and here’s what he said:

  1. Participating in the creation of the Mozilla Hispano community.
  2. Being part of the Firefox OS launch teams, and localization.
  3. Organizing community members, training, translating.
  4. Being part of the original Mozilla Reps council.

A Central Theme

In all that Guillermo shared, there was such a strong theme of empowering people – of building frameworks and opportunities that help people reach into their potential as emerging community leaders and mobilizers.  I think as a community we have been fortunate recipients of his talents in this area.

And that theme continues on in his wishes for Mozilla’s future – as an organization where community members continue to innovate and have impact on Mozilla’s mission.

Thank you Guillermo!

Goodbye – not Goodbye!  Once a Mozillian, always a Mozillian – see you soon.

Please share your #mozlove memories, photos and gratitude for #gmovia on Twitter and other social media!

 


Air MozillaMozilla Curriculum Workshop, February 2017 - Privacy & Security

Mozilla Curriculum Workshop, February 2017 - Privacy & Security Join us for a discussion and prototyping session about privacy and security curriculum.

Tarek ZiadéMolotov, simple load testing

I don’t know why, but I am a bit obsessed with load testing tools. I’ve tried dozens, and I’ve built or been involved in the creation of over ten of them in the past 15 years. I am talking about load testing HTTP services with a simple HTTP client.

Three years ago I built Loads at Mozilla, which is still being used to load test our services - and it's still evolving. It was based on a few fundamental principles:

  1. A Load test is an integration test that's executed many times in parallel against a server.
  2. Ideally, load tests should be built with vanilla Python and a simple HTTP client. There's no mandatory reason we have to rely on a Load Test class or things like this - the lighter the load test framework is, the better.
  3. You should be able to run a load test from your laptop without having to deploy a complicated stack, like a load testing server and clients, etc. Because when you start building a load test against an API, step #1 is to start with small loads from one box - not going nuclear from AWS on day 1.
  4. Doing a massively distributed load test should not happen & be driven from your laptop. Your load test is one brick and orchestrating a distributed load test is a problem that should be entirely solved by another software that runs in the cloud on its own.

Since Loads was built, two major things happened in our little technical world:

  • Docker is everywhere
  • Python 3.5 & asyncio, yay!

Python 3.5+ & asyncio just means that unlike my previous attempts at building a tool that would generate as many concurrent requests as possible, I don't have to worry anymore about key principle #2: we can do async code now in vanilla Python, and I don't have to force ad-hoc async frameworks on people.

Docker means that for running a distributed test, a load test that runs from one box can be embedded inside a Docker image, and then a tool can orchestrate a distributed test that runs and manages Docker images in the cloud.

That’s what we’ve built with Loads: "give me a Docker image of something that’s performing a small load test against a server, and I shall run it in hundreds of boxes." This Docker-based design was a very elegant evolution of Loads, thanks to Ben Bangert who had that idea. Asking people to embed their load test inside a Docker image also means that they can use whatever tool they want, as long as it performs HTTP calls on the server to stress and optionally sends some info via statsd.

But offering a helpful, standard tool to build the load test script that will be embedded in Docker is still something we want to do. And frankly, 90% of load tests happen from a single box. Going nuclear is not happening that often.

Introducing Molotov

Molotov is a new tool I've been working on for the past few months - it's based on asyncio, aiohttp and tries to be as light as possible.

Molotov scripts are coroutines to perform HTTP calls, and spawning a lot of them in a few processes can generate a fair amount of load from a single box.

Thanks to Richard, Chris, Matthew and others - my Mozilla QA teammates - I had some great feedback while creating the tool, and I think it’s almost ready to be used by more folks. It still needs to mature and the docs need to improve, but the design is settled and it works well already.

I've pushed a release at PyPI and plan to push a first stable final release this month once the test coverage is looking better & the docs are polished.

But I think it’s ready for a bit of community feedback. That’s why I am blogging about it today -- if you want to try it or help build it, here are a few links:

Try it with the console mode (-c), try to see if it fits your brain and let us know what you think.

Christian HeilmannScriptConf in Linz, Austria – if you want all the good with none of the drama.

Last month I was very lucky to be invited to give the opening keynote of a brand new conference that can utterly go places: ScriptConf in Linz, Austria.

Well deserved sticker placement

What I liked most about the event was an utter lack of drama. The organisation for us presenters was just enough to be relaxed and allowing us to concentrate on our jobs rather than juggling ticket bookings. The diversity of people and subjects on stage was admirable. The catering and the location did the job and there was not much waste left over.

I said it before that a great conference stands and falls with the passion of the organisers. And the people behind ScriptConf were endearingly scared and amazed by their own success. There were no outrageous demands, no problems that came up in the last moment, and above all there was a refreshing feeling of excitement and a massive drive to prove themselves as a new conference in a country where JavaScript conferences aren’t a dime a dozen.

ScriptConf grew out of 5 different meetups in Austria. It had about 500 extremely well behaved and excited attendees. The line-up of the conference was diverse in terms of topics and people and it was a great “value for money” show.

As a presenter you got spoiled. The hotel was 5 minutes walk away from the event and 15 minutes from the main train station. We had a dinner the day before and a tour of a local ars electronica center before the event. It is important to point out that the schedule was slightly different: the event started at noon and ended at “whenever” (we went for “Leberkäse” at 3am, I seem to recall). Talks were 40 minutes and there were short breaks in between each two talks. As the opening keynote presenter I loved this. It is tough to give a rousing talk at 8am whilst people file slowly into the building and you’ve still got wet hair from the shower. You also have a massive lull in the afternoon when you get tired. It is a totally different thing to start well-rested at noon with an audience who had enough time to arrive and settle in.

Presenters were from all around the world, from companies like Slack, NPM, Ghost, Google and serverless.

The presentations:

Here’s a quick roundup of who spoke on what:

  • I was the opening keynote, talking about how JavaScript is not a single thing but a full development environment now and what that means for the community. I pointed out the importance of understanding different ways to use JavaScript and how they yield different “best practices”. I also did a call to arms to stop senseless arguing and following principles like “build more in shorter time” and “move fast and break things” as they don’t help us as a market. I pointed out how my employer works with its engineers as an example of how you can innovate but also have a social life. It was also an invitation to take part in open source and bring more human, understanding communication to our pull requests.
  • Raquel Vélez of NPM told the history of NPM and explained in detail how they built the web site and the NPM search
  • Nik Graf of Serverless covered the serverless architecture of AWS Lambda
  • Hannah Wolfe of Ghost showed how they took their Kickstarter-funded, Node.js-based open blogging system from nothing to a ten-person company and their 1.0 release, explaining the decisions they took and the mistakes they made. She also announced their open journalism fund, “Ghost for journalism”
  • Felix Rieseberg of Slack is an ex-Microsoft engineer and his talk was stunning. His slides about building Apps with Electron are here and the demo code is on GitHub. His presentation was a live demo of using Electron to build a clone of Visual Studio Code by embedding Monaco into an Electron Web View. He coded it all live using Visual Studio Code and did a great job explaining the benefits of the console in the editor and the debugging capabilities. I don’t like live code, but this was enjoyable and easy to follow. He also did an amazing job explaining that Electron is not there to embed a web site into an app frame, but to allow you to access native functionality from JavaScript. He also had lots of great insight into how Slack was built using Electron. A great video to look forward to.
  • Franziska Hinkelmann of the Google V8 team gave a very detailed talk about Performance Debugging of V8, explaining what the errors shown in the Chrome Profiler mean. It was an incredibly deep-tech talk but insightful. Franziska made sure to point out that optimising your code for the performance tricks of one JavaScript engine is not a good idea and gave ChakraCore several shout-outs.
  • Mathieu Henri from Microsoft Oslo and of JS1K fame rounded off the conference with a mind-bending live-coding presentation creating animations and sound with JavaScript and Canvas. He clearly got the most applause. His live coding session was a call to arms to play with technology, not care about code quality too much and dare to be artsy. He also very much pointed out that in his day job writing TypeScript for Microsoft, this is not his mode of operation. He blogged about his session and released the code here.

This was an exemplary conference, showing how it should be done, and it reminded me very much of the great old conferences like Fronteers, @media and the first JSConf. The organisers are humble, very engaged and will do more great work given the chance. I am looking forward to re-living the event by watching the videos and can safely recommend each of the presenters for any other conference. There was a great flow and lots of helping each other out on stage and behind the scenes. It was a blast.

Air MozillaThe Joy of Coding - Episode 91

The Joy of Coding - Episode 91 mconley livehacks on real Firefox bugs while thinking aloud.

Tim TaubertThe Future of Session Resumption

A while ago I wrote about the state of server-side session resumption implementations in popular web servers using OpenSSL. Neither Apache, Nginx, nor HAProxy purged stale entries from the session cache or rotated session tickets automatically, potentially harming the forward secrecy of resumed TLS sessions.

Enabling session resumption is an important tool for speeding up HTTPS websites, especially in a pre-HTTP/2 world where a client may have to open concurrent connections to the same host to quickly render a page. Subresource requests would ideally resume the session that, for example, a GET / HTTP/1.1 request started.

Let’s take a look at what has changed in over two years, and whether configuring session resumption securely has gotten any easier. With the TLS 1.3 spec about to be finalized I will show what the future holds and how these issues were addressed by the WG.

Did web servers react?

No, not as far as I’m aware. None of the three web servers mentioned above has taken steps to make it easier to properly configure session resumption. But to be fair, OpenSSL didn’t add any new APIs or options to help them either.

All popular TLS 1.2 web servers still don’t evict cache entries when they expire, keeping them around until a client tries to resume — for performance or ease of implementation. They generate a session ticket key at startup and never automatically rotate it, so admins have to manually reload server configs and provide new keys.

The Caddy web server

I want to take the chance to positively highlight the Caddy web server, a relative newcomer with the advantage of not having any historical baggage. It enables and configures HTTPS by default, including automatically acquiring and renewing certificates.

Version 0.8.3 introduced automatic session ticket key rotation, thereby making session tickets mostly forward secure by replacing the key every ~10 hours. Session cache entries, though, aren’t evicted until they’re accessed, just as with the other web servers.
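
To make the rotation scheme concrete, here is a minimal Rust sketch of the general idea: keep the current key plus the previous one (so recently issued tickets can still be decrypted) and swap them on a timer. The struct, its field names and the interval are illustrative assumptions, not Caddy’s actual implementation.

// Illustrative sketch of periodic session ticket key rotation, not real server code.
use std::time::{Duration, Instant};

struct TicketKeys {
    current: [u8; 32],          // key used to encrypt newly issued tickets
    previous: Option<[u8; 32]>, // kept around so recent tickets still decrypt
    rotated_at: Instant,
    interval: Duration,
}

impl TicketKeys {
    fn new(initial: [u8; 32]) -> Self {
        TicketKeys {
            current: initial,
            previous: None,
            rotated_at: Instant::now(),
            interval: Duration::from_secs(10 * 60 * 60), // ~10 hours
        }
    }

    /// Call periodically with a freshly generated random key.
    fn rotate_if_due(&mut self, fresh_key: [u8; 32]) {
        if self.rotated_at.elapsed() >= self.interval {
            self.previous = Some(self.current);
            self.current = fresh_key;
            self.rotated_at = Instant::now();
        }
    }
}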

But even for “traditional” web servers all is not lost. The TLS working group has known about the shortcomings of session resumption for a while and addresses them in the next version of TLS.

1-RTT handshakes by default

One of the many great things about TLS 1.3 handshakes is that most connections should take only a single round-trip to establish. The client sends one or more KeyShareEntry values with the ClientHello, and the server responds with a single KeyShareEntry for a key exchange with ephemeral keys.

If the client sends no key shares, or only shares for groups the server doesn’t support, the server will send a HelloRetryRequest message with a NamedGroup selected from the ones supported by the client. The connection will then fall back to two round-trips.

That means you’re automatically covered if you enable session resumption only to reduce network latency: a normal TLS 1.3 handshake is as fast as 1-RTT resumption in TLS 1.2. If you’re worried about the computational overhead of certificate authentication and key exchange, that still might be a good reason to abbreviate handshakes.

Pre-shared keys in TLS 1.3

Session IDs and session tickets are obsolete as of TLS 1.3. They’ve been replaced by a more generic PSK mechanism that allows resuming a session with a previously established shared secret key.

Instead of an ID or a ticket, the client will send an opaque blob it received from the server after a successful handshake in a prior session. That blob might either be an ID pointing to an entry in the server’s session cache, or a session ticket encrypted with a key known only to the server.

enum { psk_ke(0), psk_dhe_ke(1), (255) } PskKeyExchangeMode;

struct {
   PskKeyExchangeMode ke_modes<1..255>;
} PskKeyExchangeModes;

Two PSK key exchange modes are defined, psk_ke and psk_dhe_ke. The first signals a key exchange using a previously shared key; it derives a new master secret from only the PSK and nonces. This is basically as (in)secure as session resumption in TLS 1.2 if the server never rotates ticket keys or discards cache entries long after they have expired.

The second mode, psk_dhe_ke, additionally incorporates a key agreed upon using ephemeral Diffie-Hellman, thereby making resumption forward secure. Because a fresh (EC)DHE key is mixed into the derived master secret, an attacker can no longer pull an entry out of the cache, or steal ticket keys, to recover the plaintext of past resumed sessions.
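
For intuition, here is a heavily simplified Rust sketch of the difference between the two modes. It is not the real TLS 1.3 key schedule (which involves several labelled HKDF-Extract/Expand stages); it only illustrates that in psk_ke the resumption secret depends on the PSK and nonces alone, while psk_dhe_ke also mixes in an ephemeral (EC)DHE shared secret. The function name is made up, and the hkdf and sha2 crates are assumed.

// Conceptual only: contrasts psk_ke (PSK + nonces) with psk_dhe_ke
// (PSK + nonces + ephemeral DHE secret).
use hkdf::Hkdf;
use sha2::Sha256;

fn derive_resumption_secret(
    psk: &[u8],
    nonces: &[u8],
    ecdhe_secret: Option<&[u8]>, // Some(_) for psk_dhe_ke, None for psk_ke
) -> [u8; 32] {
    // Extract: mix the ephemeral (EC)DHE secret in as salt when present.
    let hk = Hkdf::<Sha256>::new(ecdhe_secret, psk);

    // Expand: bind the handshake nonces into the output secret.
    let mut secret = [0u8; 32];
    hk.expand(nonces, &mut secret)
        .expect("32 bytes is a valid output length for HKDF-SHA256");
    secret
}

Without the DHE input, anyone who later obtains the PSK (say, from a stolen ticket key or a stale cache entry) can re-derive the secret; with it, they would also need the ephemeral key that was thrown away right after the handshake.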

Note that 0-RTT data cannot be protected by the DHE secret: the early traffic secret is established without any input from the server and is thus derived from the PSK only.

TLS 1.2 is surely here to stay

In theory, there should be no valid reason for a web client to be able to complete a TLS 1.3 handshake but not support psk_dhe_ke, as ephemeral Diffie-Hellman key exchanges are mandatory. However, an internal application talking TLS between its own peers would likely be a legitimate case for not supporting DHE.

But even for TLS 1.3 it might make sense to properly configure session ticket key rotation and cache turnover, in case the odd client supports only psk_ke. It especially still makes sense for TLS 1.2, which will probably be around for longer than we wish and imagine.

Daniel StenbergNew screen and new fuses

I got myself a new 27″ 4K screen for my work setup, a Dell P2715Q, and replaced one of my trusty old twenty-four inch friends with it.

I now work with the Thinkpad 13″ on the left as my video conference machine (it does nothing else, and it runs Windows!). The two middle screens are a 24″ and the new 27″, both connected to my primary dev machine, while the rightmost one is my laptop for when I need to move.

Did everything run smoothly? Heck no.

When I first inserted the 4K screen without modifying anything else in the setup, it was immediately obvious that I really needed to upgrade my graphics card, since the old one didn’t have enough muscle to drive the screen at 4K; the screen would instead upscale a 1920×1200 image in a slightly blurry fashion. I couldn’t have that!

New graphics card

So when I was out and about later that day I more or less accidentally passed a Webhallen store, and I got myself a new card. I wanted to play it easy, so I stayed with AMD and went with an ASUS Dual-Rx460-O2G. The key feature I wanted was to be able to drive one 4K screen and one at 1920×1200; that unfortunately meant giving up on the cards with only passive cooling and instead picking what sounds like a gaming card. (I hate shopping for graphics cards.) As I was about to do surgery on the machine anyway, I checked and noticed that I could add more memory to the motherboard, so I bought 16 more GB for a total of 32 GB.

Blow some fuses

Later that night, when the house was quiet and dark, I shut down my machine, inserted the new card and the new memory DIMMs, and powered it back up again.

At least that was the plan. When I fired it back up, it said “clock”, the lamps around me all went dark and the machine didn’t light up at all. The fuse was blown! Man, wasn’t that totally unexpected?

I did some further research into what exactly caused the fuse to blow, and blew a few more in the process: even after I had restored the former card and removed the memory DIMMs again, it still blew the fuse. Puzzled and slightly disappointed, I went to bed when I had no more spare fuses.

I hate leaving the machine dead in parts on the floor with an uncertain future, but what could I do?

A new PSU

Tuesday morning I went to get myself a PSU replacement (a Plexgear PS-600 Bronze), and once I had that installed, no more fuses blew and I could start the machine again!

I put the new memory back in and I could get into the BIOS config with both screens working on the new card (and it detected the 32 GB of RAM just fine). But as soon as I tried to boot Linux, the boot process halted after just 3-4 seconds and seemingly just froze. Hm. I tested a few different kernels, safe mode, etc., but they all acted like that. Weird!

BIOS update

A little googling on the messages that appeared just before it froze gave me the idea that maybe I should see if there was a BIOS update available. After all, I had never upgraded it, and it had been a while since I got my motherboard (more than 4 years).

I found a much newer BIOS image on the ASUS support site, put it on a FAT-formatted USB drive and upgraded.

Now it booted. Of course the error messages I had googled for were still present, and I suppose they were there before too; I just hadn’t paid any attention to them when everything was working dandy!

Displayport vs HDMI

I had the wrong idea that I should use DisplayPort to get 4K working, but it just wouldn’t work. DP + DVI only showed a picture on one screen, and I even went as far as trying to download an Ubuntu Linux driver package for the Radeon RX460 that I found, but of course it failed miserably since my Debian Unstable runs a totally different kernel and whatnot.

In a slightly desperate move (I had now wasted quite a few hours on this and my machine still wasn’t working), I put back the old graphics card (with DVI + HDMI), only to note that it no longer worked like it did before: the DVI output didn’t find the correct resolution anymore. Presumably the BIOS upgrade or something had shaken the balance?

Back on the new card, I booted with DVI + HDMI, leaving DP out entirely, and now suddenly both screens worked!

HiDPI + LoDPI

Once I had logged in, I could configure the 4K screen to run in its full 3840×2160-resolution glory. I was back.

Now I only had to start fiddling with getting the two screens to somehow co-exist next to each other, which is a challenge of its own. The large difference in DPI makes it hard to have one config that works across both screens. For example, I usually have terminals on both screens – which font size should I use? And I put browser windows on both screens…

So far I’ve settled on increasing the font DPI in KDE, and I use two different terminal profiles depending on which screen I put the terminal on. It seems to work okayish. Some text on the 4K screen is still terribly small, so I guess it is good that I still have good eyesight!

24 + 27

So is it comfortable to combine a 24″ with a 27″? Sure, the size difference really isn’t that notable. The 27″ one is really just a few centimeters taller, and the difference in width isn’t an inconvenience. The photo below shows how similar they look, size-wise:

Mozilla Open Design BlogRoads not taken

Here, Michael Johnson (MJ), founder of johnson banks, and Tim Murray (TM), Mozilla creative director, have a long-distance conversation about the Mozilla open design process while looking in the rear-view mirror.

TM: We’ve come a long way from our meet-in-the-middle in Boston last August, when my colleague Mary Ellen Muckerman and I first saw a dozen or so brand identity design concepts that had emerged from the studio at johnson banks.

MJ: If I recall, we didn’t have the wall space to put them all up, just one big table – but by the end of the day, we’d gathered around seven broad approaches that had promise. I went back to London and we gave them a good scrubbing to put on public show.

blog-pics-13feb-1

It’s easy to see, in retrospect, certain clear design themes starting to emerge from these earliest concepts. Firstly, the idea of directly picking up on Mozilla’s name in ‘The Eye’ (and in a less overt way in the ‘Flik Flak’). ‘The Eye’ also hinted at the dinosaur-slash-Godzilla iconography that had represented Mozilla at one time. We also see at this stage the earliest, and most minimal version of the ‘Protocol’ idea.

TM: You explored several routes related to code, and ‘Protocol’ was the cleverest. Mary Ellen and I were both drawn to ‘The Eye’ for its suggestion that Mozilla would be opinionated and bold. It had a brutal power, but we were also a bit worried that it was too reminiscent of the Eye of Sauron or Monsters, Inc.

MJ: Logos can be a bit of a Rorschach test, can’t they? Within a few weeks, we’d come to a mutual conclusion as to which of these ideas needed to be left on the wayside, for various reasons. The ‘Open button’, whilst enjoyed by many, seemed to restrict Mozilla too closely to just one area of work. Early presentation favourites such as ‘The Impossible M’ turned out to be just a little too close to other things out in the ether, as did Flik Flak – the value, in a way, of sharing ideas publicly and doing an impromptu IP check with the world. ‘Wireframe World’ was to come back, in a different form in the second round.

TM: This phase was when our brand advisory group, a regular gathering of Mozillians representing different parts of our organization, really came into play. We had developed a list of criteria by which to review the designs – Global Reach, Technical Beauty, Breakthrough Design, Scalability, and Longevity – and the group ranked each of the options. It’s funny to look back on this now, but ‘Protocol’ in its original form received some of the lowest scores.

MJ: One of my sharpest memories of this round of designs, once they became public, was how many online commentators critiqued the work for being ‘too trendy’ or said ‘this would work for a gallery, not Mozilla’. This was clearly going to be an issue because, rightly or wrongly, it seemed to me that the tech and coding community had set the bar lower than we had expected in terms of design.

TM: A bit harsh there, Michael? It was the tech and coding community that had the most affinity for ‘Protocol’ in the beginning. If it wasn’t for them, we might have let it go early on.

MJ: Ok, that’s a fair point. Well, we also started to glimpse what was to become another recurring theme – despite johnson banks having been at the vanguard of broadening out brands into complete and wide-ranging identity systems, we were going to have to get used to a TL;DR way of seeing/reading that simply judged a route by its logo alone.

TM: Right! And no matter how many times we said that these were early explorations we received feedback that they were too “unpolished.” Meanwhile, others felt that they were too polished, suggesting that this was the final design. We were whipsawed by feedback.

blog-pics-13feb-2

MJ: Whilst to some the second round seemed like a huge jump from the first, to us it was a logical development. All our attempts to develop The Eye had floundered, in truth – but as we did that work, a new way to hint at the name and its heritage had appeared. It was initially nicknamed ‘Chomper’, but was then swiftly renamed ‘Dino 2.0’. We see, of course, the second iteration of Protocol, this time with slab serifs, and two new approaches – Flame and Burst.

blog-pics-13feb-3

 

TM: I kind of fell in love with ‘Burst’ and ‘Dino 2.0’ in this round. I loved the idea behind ‘Flame’ — that it represented both our eternal quest to keep the Internet alive and healthy and the warmth of community that’s so important to Mozilla — but not this particular execution. To be fair, we’d asked you to crank it out in a matter of days.

MJ: Well, yes that’s true. With ‘Flame’ we then suffered from too close a comparison to all the other flame logos out in there in the ether, including Tinder. Whoops! ‘Burst’ was, and still is, a very intriguing thought – that we see Mozilla’s work through 5 key nodes and questions, and see the ‘M’ shape that appears as the link between statistical points.

blog-pics-13feb-4

TM: Web developers really rebelled against the moire of Burst, didn’t they? We now had four really distinct directions that we could put into testing and see how people outside of Mozilla (and North America) might feel. The testing targeted both existing and desired audiences, the latter being ‘Conscious Choosers’, a group of people who made decisions based on their personal values.

MJ: We also put these four in front of a design audience in Nashville at the Brand New conference, and of course commentary lit up the Open Design blog. The results in person and on the blogs were pretty conclusive – Protocol and Dino 2.0 were the clear favourites. But one set of results was a curve ball – the overall ‘feel’ of Burst was enjoyed by a key group of desired (and not current) supporters.

blog-pics-13feb-5

TM: This was a tricky time. ‘Dino’ had many supporters, but just about as many vocal critics. As soon as one person remarked “It looks like a stapler,” the entire audience of blog readers piled on, or so it seemed. At our request, you put the icon through a series of operations that resulted in the loss of his lower jaw. Ouch. Even then, we had to ask, as beloved as this dino character was for its historical reference to the old Mozilla, would it mean anything to newcomers?

blog-pics-13feb-6

MJ: Yes, this was an odd period. For many, it seemed that the joyful and playful side of the idea was just too much ‘fun’ and didn’t chime with the desired gravitas of a famous Internet not-for-profit wanting to amplify its voice, be heard and be taken seriously. Looking back, Dino died a slow and painful death.

TM: Months later, rogue stickers are still turning up of the ‘Dino’ favicon showing just the open jaws without the Mozilla word mark. A sticker seemed to be the perfect vehicle for this design idea. And meanwhile, ‘Protocol’ kept chugging along like a tortoise at the back of the race, always there, never dropping out.

MJ: We entered the autumn period in a slightly odd place. Dino had been effectively trolled out of the game. Burst had, against the odds, researched well, but more for the way it looked than what it was trying to say. And Protocol, the idea that had sat in the background whilst all around it hogged the spotlight, was still there, waiting for its moment. It had always researched well, especially with the coding community, and was a nice reminder of Mozilla’s pioneering spirit with its nod to the http protocol.

TM: To me though, the ‘Protocol’ logo was a bit of a one-trick pony, an inside joke for the coding community that didn’t have as much to offer to the non-tech world. You and I came to the same conclusion at the same time, Michael. We needed to push ‘Protocol’ further, maybe even bring over some of the joy and color of routes such as ‘Burst’ and the fun that ‘Dino’ represented.

blog-pics-13feb-7

MJ: Our early reboot work wasn’t encouraging. Simply putting the two routes together, such as this, looked exactly like what it is: two routes mashed together. A boardroom compromise to please no-one. But, gradually, some interesting work started to emerge, from different parts of the studio.

We began to think more about how Mozilla’s key messages could really flow from the Moz://a ‘start’, and started to explore how to use intentionally attention-grabbing copy lines next to the mark, without word spaces, as though written into a URL.

blog-pics-13feb-8

We also experimented with the Mozilla wordmark used more like a ‘selected’ item in a design or editing programme – where that which is selected reverses out from that around it.

blog-pics-13feb-9

From an image-led perspective, we also began to push the Mozilla mark much further, exploring whether it could be shattered, deconstructed or glitched, thereby becoming a much more expressive idea.

blog-pics-13feb-10

In parallel, an image-led idea had developed, which placed the Mozilla name at the beginning of an imaginary toolbar, and there followed imagery culled directly from the Internet, in an almost haphazard way.

blog-pics-13feb-11

TM: This slide, #19 in the exploratory deck, was the one that really caught our attention. Without losing the ‘moz://a’ idea that coders loved, it added imagery that suggested freedom and exploration which would appeal to a variety of audiences. It was eye-catching. And the thought that this imagery could be ever-changing made me excited by the possibilities for extending our brand as a representation of the wonder of the Internet.

blog-pics-13feb-12

 

MJ: When we sat back and took stock of this frenetic period of exploration, we realised that we could bring several of these ideas together, and this composite stage was shared with the wider team.

What you see from this, admittedly slightly crude ‘reboot’ of the idea, are the roots of the final route – a dynamic toolkit of words and images, deliberately using bright, neon, early-internet hex colours and crazy collaged imagery. This was really starting to feel like a viable way forward to us.

To be fair, we’d kept Mozilla slightly in the dark for several weeks by this stage. We had retreated to our design ‘bunker’, and weren’t going to come out until we had a reboot of the Protocol idea that we thought was worthy. That took a bit of explaining to the team at Mozilla towers.

TM: That’s a very comic-book image you have of our humble digs, Michael. But you’re right, our almost daily email threads went silent for a few weeks, and we tap-danced a bit back at the office when people asked when we’d see something. It was make or break time from all perspectives. As soon as we saw the new work arrive, we knew we had something great to work with.

blog-pics-13feb-15

MJ: The good news was that, yes, the Mozilla team were on board with the direction of travel. But certain amendments began almost immediately: we’d realised that the stand-out of the logo was much improved if it reversed out of a block, and we started to explore softening the edges of the type forms. And whilst the crazy image collages were, broadly, enjoyed, it was clear that we were going to need some clear rules on how they would be curated and used.

blog-pics-13feb-16

TM: We had the most disagreement over the palette. We found the initial neon palette to be breathtaking and a fun recollection of the early days of the Internet. On the other hand, it didn’t seem practical for use in the user interface for our web properties, especially when these colors were paired together. And our executive team found the neons to be quite polarizing.

MJ: We were asked to explore a more ‘pastel’ palette, which to our eyes lacked some of the ‘oomph’ of the neons. We’d also started to debate the black bounding box, and whether it should or shouldn’t crop the type on the outer edges. We went back and forth on these details for a while.

TM: To us, it helped to know that the softer colors picked up directly from the highlight bar in Firefox and other browsers. We liked how distinctive the cropped bounding box looked and grew used to it quickly, but ultimately felt that it created too much tension around the logo.

blog-pics-13feb-17

MJ: And as we continued to refine the idea for launch, we also debated the precise typeforms of the logo. In the first stage of the Protocol ‘reboot’ we’d proposed a free slab-serif font called Arvo as the lead font, but as we used it more, we realised many key characters would have to be redrawn.

That started a new search – for a type foundry that could both develop a slab-serif font for Mozilla, and at the same time work on the final letterforms of the logo so there would be harmony between the two. We started discussions with Arvo’s creators, and also with Fontsmith in London, who had helped on some of the crafting of the interim logos and also had some viable typefaces.

TM: Meanwhile a new associate creative director, Yuliya Gorlovetsky, had joined our Mozilla team, and she had distinct ideas about typeface, wordmark, and logo based on her extensive typography experience.

blog-pics-13feb-18

MJ: Finally, after some fairly robust discussions, Typotheque was chosen to do this work, and above and below you can see the journey that the final mark had been on, and the editing process down to the final logo, testing different ways to tackle the key characters, especially the ‘m’, ‘z’ and ‘a’.

This work, in turn, and after many debates about letterspacing, led to the final logo and its set of first applications.

blog-pics-13feb-19

TM: So here we are. Looking back at this makes it seem a fait accompli somehow, even though we faced setbacks and dead-ends along the way. You always got us over the roadblocks though, Michael, even when you disagreed profoundly with some of our decisions.

MJ: Ha! Well, our job was and is to make sure you stay true to your original intent with this brand, which was to be much bolder, more provocative and to reach new audiences. I’ll admit that I sometimes feared that corporate forces or the online hordes were wearing down the distinctive elements of any of the systems we were proposing.

But, even though the iterative process was occasionally tough, I think it’s worked out for the best and it’s gratifying that an idea that was there from the very first design presentation slowly but surely became the final route.

TM: It’s right for Mozilla. Thanks for being on this journey with us, Michael. Where shall we go next?

 

Eitan IsaacsonDark Windows for Dark Firefox

I recently set the Compact Dark theme as my default in Firefox. Since we don’t yet have Linux client-side window decorations (when is that happening??), it looks kind of bad in GNOME. The window decorator shows up as a light band in a sea of darkness:

Firefox with light window decorator

It just looks bad. You know? I looked for an addon that would change the decorator to the dark-themed one, but I couldn’t find any. I ended up adapting the gtk-dark-theme Atom addon to a Firefox one. It was pretty easy. I did it over a remarkable infant sleep session on a Saturday morning. Here is the result:

Firefox with dark window decorator

You can grab the yet-to-be-reviewed addon here.


Mozilla VR BlogNew features in A-Frame Inspector v0.5.0

This is a small summary of some new features in the latest A-Frame Inspector version that may go unnoticed by some.

Image assets dialog

v0.5.0 introduces an assets management window for importing textures into your scene without having to manually type URLs.

The updated texture widget includes the following elements:

  • Preview thumbnail: clicking it will open the image assets dialog.
  • Input box: hover the mouse over it and it will show the complete URL of the asset.
  • Open in a new tab: it will open a new tab with the full-sized texture.
  • Clear: it will clear the value of the attribute.

Once the image assets dialog is open, you’ll see the list of images currently used in your project, with the widget’s previous selection, if any, highlighted.
You can click on any image in this gallery to set the value of the map attribute you’re editing.

If you want to add new images to this list, click on LOAD TEXTURE and you’ll see several options for adding a new image to your project.

Here you can add new image assets to your scene by:

  • Entering a URL.
  • Opening an Uploadcare dialog that lets you upload files from your computer, Google Drive, Dropbox and other sources (this currently uploads the images to our Uploadcare account, so please be kind :); we’re working on letting you define your own API key so you can use your own account).
  • Dragging and dropping from your computer; this uploads to Uploadcare too.
  • Choosing one from the curated list of images we’ve included in the sample-assets repo: https://github.com/aframevr/sample-assets.

Once your image has been added, you’ll see a thumbnail showing some information about the image and the name this texture will have in your project (the asset ID that can be referenced as #name).

After editing the name if needed, click on LOAD TEXTURE and your texture will be added to the list of assets available in your project, returning you to the list of textures you saw when you first opened the dialog.

Now just clicking on the newly created texture will set the new value for the attribute you were editing.

New features in the scenegraph

Toggle visibility

  • Toggle panels: New shortcuts:
    • 1: Toggle scenegraph panel
    • 2: Toggle components panel
    • TAB: Toggle both panels
  • Toggling the visibility of each entity in the scene is now possible by pressing the eye icon in the scenegraph.

Broader scenegraph filtering

In the previous version of the inspector we could filter by an entity’s tag name or ID. In the new version the filter also takes into account the names of the components each entity has and the values of those components’ attributes.

For example, if we write red, it will return the entities whose name contains red, but also all of those with a red color in the material component. We could also filter by geometry, or directly by sphere, and so on.
We’ve added the shortcut CTRL or CMD + f to set focus on the filter input for faster filtering, and ESC to clear the filter.

Cut, copy and paste

Thanks to @vershwal it’s now possible to cut, copy and paste entities using the expected shortcuts:

  • CTRL or CMD + x: Cut selected entity
  • CTRL or CMD + c: Copy selected entity
  • CTRL or CMD + v: Paste the latest copied or cut entity

New shortcuts

Here is the list of new shortcuts introduced in this version:

  • 1: Toggle scenegraph panel
  • 2: Toggle components panel
  • TAB: Toggle both scenegraph and components panel
  • CTRL or CMD + x: Cut selected entity
  • CTRL or CMD + c: Copy selected entity
  • CTRL or CMD + v: Paste a new entity
  • CTRL or CMD + f: Focus scenegraph filter

Remember that you can press h to show the list of all available shortcuts.

Emma IrwinThank you Brian King!

Brian King was one of the first people I met at Mozilla. He is someone whose opinion, ideas, trust, support and friendship have meant a lot to me – and I know countless others would similarly describe Brian as someone who made collaborating, working and gathering together a highlight of their Mozilla experiences and personal success.

Brian has been a part of the Mozilla community for nearly 18 years – and even though we are thrilled for his new adventures, we really wanted to find a megaphone to say thank you… Here are some highlights from my interview with him last week.

Finding Mozilla

Brian came to Mozilla all those years ago as a developer. He worked for a company that developed software promoting minority languages, including Basque, Catalan, Frisian, Irish and Welsh. As many did back in the day, he met people in newsgroups and on IRC, and slowly became immersed in the community, regularly attending developer meetups. Community, from the very beginning, was the reason Brian grew more deeply involved and connected to Mozilla’s mission.

Shining Brightly

His early contributions were code – becoming involved with the HTML Editor, then part of the Mozilla Suite. He got a job at ActiveState in Vancouver and worked on the Komodo IDE for dynamic languages. Skipping forward, he became more and more invested in add-on contribution and review – even co-authoring the book “Creating Applications with Mozilla”, which I did not know! Very cool. During this time he describes himself as being “very fortunate” to be able to make a living working in the Mozilla and Web ecosystem while running a consultancy writing Firefox add-ons and other software.

Dear Community – “You had me at Hello”

Something Brian shared with me was that being part of the community essentially sustained his connection with Mozilla during times when he was too busy to contribute – and I think many other Mozillians feel the same way – it’s never goodbye, only see you soon. On Brian’s next adventure, I think we can take comfort that the open door of community will sustain our connection for years to come.

As Staff

Brian came on as Mozilla staff in 2012 as the European Community Manager, finding success in that role and overseeing the evolution of the Mozilla Reps program. He was instrumental in successfully building Firefox OS launch teams all around the world. Most recently he has been sharpening that skillset of empowering individuals, teams and communities, with support for various programs, regional support, and the Activate campaign.

Proud Moments

With a long string of accomplishments at Mozilla, I asked Brian what his proudest moments were. Some of those he listed were:

  1. Being an AMO editor for a few years, reviewing thousands of add-ons
  2. Building community in the Balkan area
  3. Building out the Mozilla Reps program, and being a founding council member.
  4. Helping drive Mozilla success at FOSDEM
  5. Building FFOS Launch Teams

But he emphasized that, in all of these, the opportunity to bring new people into the community and to nurture and help individuals and groups reach their goals provided an enormous sense of accomplishment and fulfillment.

He didn’t mention it, but I also found this photo of Brian on TV in Transylvania, Romania that looks pretty cool.

Look North!

To wrap up, I asked Brian what he most wanted to see for Mozilla in the next 5 years, leaning on his years of experience as part of, and leading, the community:

“My hope is that Mozilla finds its North Star for the next 5-10 years, doubles down on recent momentum, and as part of that bakes community participation into all parts of the organization. It must be a must-have, and not a nice-to-have.”

Thank you Brian King!

You can give your thanks to Brian with #mozlove #brianking – share gratitude, laughs and stories.

Patrick WaltonPathfinder, a fast GPU-based font rasterizer in Rust

Ever since some initial discussions with Raph Levien (author of font-rs) at RustConf last September, I’ve been thinking about ways to improve vector graphics rendering using modern graphics hardware, specifically for fonts. These ideas began to come together in December, and over the past couple of months I’ve been working on actually putting them into a real, usable library. They’ve proved promising, and now I have some results to show.

Today I’m pleased to announce Pathfinder, a Rust library for OpenType font rendering. The goal is nothing less than to be the fastest vector graphics renderer in existence, and the results so far are extremely encouraging. Not only is it very fast according to the traditional metric of raw rasterization performance, it’s practical, featuring very low setup time (end-to-end time superior to the best CPU rasterizers), best-in-class rasterization performance even at small glyph sizes, minimal memory consumption (both on CPU and GPU), compatibility with existing font formats, portability to most graphics hardware manufactured in the past five years (DirectX 10 level), and security/safety.

Performance

To illustrate what it means to be both practical and fast, consider these two graphs:

Rasterization performance

Setup performance

(Click each graph for a larger version.)

The first graph is a comparison of Pathfinder with other rasterization algorithms with all vectors already prepared for rendering (and uploaded to the GPU, in the case of the GPU algorithms). The second graph is the total time taken to prepare and rasterize a glyph at a typical size, measured from the point right after loading the OTF file in memory to the completion of rasterization. Lower numbers are better. All times were measured on a Haswell Intel Iris Pro (mid-2015 MacBook Pro).

From these graphs, we can see two major problems with existing GPU-based approaches:

  1. Many algorithms aren’t that fast, especially at small sizes. Algorithms aren’t fast just because they run on the GPU! In general, we want rendering on the GPU to be faster than rendering on the CPU; that’s often easier said than done, because modern CPUs are surprisingly speedy. (Note that, even if the GPU is somewhat slower at a task than the CPU, it may be a win for CPU-bound apps to offload some work; however, this makes the use of the algorithm highly situational.) It’s much better to have an algorithm that actually beats the CPU.

  2. Long setup times can easily eliminate the speedup of algorithms in practice. This is known as the “end-to-end” time, and real-world applications must carefully pay attention to it. One of the most common use cases for a font rasterizer is to open a font file, rasterize a character set from it (Latin-1, say) at one pixel size for later use, and throw away the file. With Web fonts now commonplace, this use case becomes even more important, because Web fonts are frequently rasterized once and then thrown away as the user navigates to a new page. Long setup times, whether the result of tessellation or more exotic approaches, are real problems for these scenarios, since what the user cares about is the document appearing quickly. Faster rasterization doesn’t help if it regresses that metric.

(Of the two problems mentioned above, the second is often totally ignored in the copious literature on GPU-based vector rasterization. I’d like to see researchers start to pay attention to it. In most scenarios, we don’t have the luxury of inventing our own GPU-friendly vector format. We’re not going to get the world to move away from OpenType and SVG.)

Vector drawing basics

In order to understand the details of the algorithm, some basic knowledge of vector graphics is necessary. Feel free to skip this section if you’re already familiar with Bézier curves and fill rules.

OpenType fonts are defined in terms of resolution-independent Bézier curves. TrueType outlines contain lines and quadratic Béziers only, while OpenType outlines can contain lines, quadratic Béziers, and cubic Béziers. (Right now, Pathfinder only supports quadratic Béziers, but extending the algorithm to support cubic Béziers should be straightforward.)
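
As a small point of reference, here is a minimal Rust sketch of evaluating and flattening a quadratic Bézier into line segments. Pathfinder performs the equivalent subdivision on the GPU during tessellation; this scalar version, with its fixed and purely illustrative segment count, only shows the underlying math.

// Evaluate a quadratic Bézier at parameter t (Bernstein form) and flatten it
// into a polyline by sampling evenly spaced parameter values.
type Point = (f32, f32);

fn eval_quadratic(p0: Point, ctrl: Point, p1: Point, t: f32) -> Point {
    // (1 - t)^2 * p0 + 2(1 - t)t * ctrl + t^2 * p1
    let u = 1.0 - t;
    (
        u * u * p0.0 + 2.0 * u * t * ctrl.0 + t * t * p1.0,
        u * u * p0.1 + 2.0 * u * t * ctrl.1 + t * t * p1.1,
    )
}

fn flatten_quadratic(p0: Point, ctrl: Point, p1: Point, segments: u32) -> Vec<Point> {
    (0..=segments)
        .map(|i| eval_quadratic(p0, ctrl, p1, i as f32 / segments as f32))
        .collect()
}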

In order to fill vector paths, we need a fill rule. A fill rule is essentially a test that determines, for every point, whether that point is inside or outside the curve (and therefore whether it should be filled in). OpenType’s fill rule is the winding rule, which can be expressed as follows:

  1. Pick a point that we want to determine the color of. Call it P.

  2. Choose any point outside the curve. (This is easy to determine since any point outside the bounding box of the curve is trivially outside the curve.) Call it Q.

  3. Let the winding number be 0.

  4. Trace a straight line from Q to P. Every time we cross a curve going clockwise, add 1 to the winding number. Every time we cross a curve going counterclockwise, subtract 1 from the winding number.

  5. The point is inside the curve (and so should be filled) if and only if the winding number is not zero.
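
To make the rule concrete, here is a small scalar sketch in Rust of the winding-number test for an outline that has already been flattened to line segments. This is not Pathfinder's code; a horizontal ray cast to the right plays the role of the line from Q to P, and whether a crossing counts as +1 or -1 for “clockwise” depends on the y-axis orientation.

// Winding-number test against a closed polyline (the flattened outline).
fn winding_number(p: (f32, f32), outline: &[(f32, f32)]) -> i32 {
    let mut winding = 0;
    for i in 0..outline.len() {
        let a = outline[i];
        let b = outline[(i + 1) % outline.len()];
        // Does this segment cross the horizontal line through p?
        if (a.1 <= p.1) != (b.1 <= p.1) {
            // Find the x coordinate where the segment crosses that line.
            let t = (p.1 - a.1) / (b.1 - a.1);
            let x = a.0 + t * (b.0 - a.0);
            // Count the crossing if it lies on the ray to the right of p,
            // signed by the direction in which the segment crosses.
            if x > p.0 {
                winding += if b.1 > a.1 { 1 } else { -1 };
            }
        }
    }
    winding
}

fn is_inside(p: (f32, f32), outline: &[(f32, f32)]) -> bool {
    winding_number(p, outline) != 0 // the non-zero winding rule
}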

How it works, conceptually

The basic algorithm that Pathfinder uses is the by-now-standard trapezoidal pixel coverage algorithm pioneered by Raph Levien’s libart (to the best of my knowledge). Variations of it are used in FreeType, stb_truetype version 2.0 and up, and font-rs. These implementations differ as to whether they use sparse or dense representations for the coverage buffer. Following font-rs, and unlike FreeType and stb_truetype, Pathfinder uses a dense representation for coverage. As a result, Raph’s description of the algorithm applies fairly well to Pathfinder as well.

There are two phases to the algorithm: drawing and accumulation. During the draw phase, Pathfinder computes coverage deltas for every pixel touching (or immediately below) each curve. During the accumulation phase, the algorithm sweeps down each column of pixels, computing winding numbers (fractional winding numbers, since we’re antialiasing) and filling pixels appropriately.

The most important concept to understand is that of the coverage delta. When drawing high-quality antialiased curves, we care not only about whether each pixel is inside or outside the curve but also how much of the pixel is inside or outside the curve. We treat each pixel that a curve passes through as a small square and compute how much of the square the curve occupies. Because we break down curves into small lines before rasterizing them, these coverage areas are always trapezoids or triangles, and so we can use trapezoidal area expressions to calculate them. The exact formulas involved are somewhat messy and involve several special cases; see Sean Barrett’s description of the stb_truetype algorithm for the details.

Rasterizers that calculate coverage in this way differ in whether they calculate winding numbers and fill at the same time they calculate coverage or whether they fill in a separate step after coverage calculation. Sparse implementations like FreeType and stb_truetype usually fill as they go, while dense implementations like font-rs and Pathfinder fill in a separate step. Filling in a separate step is attractive because it can be simplified to a prefix sum over each pixel column if we store the coverage for each pixel as the difference between the coverage of the pixel and the coverage of the pixel above it. In other words, instead of determining the area of each pixel that a curve covers, for each pixel we determine how much additional area the curve covers, relative to the coverage area of the immediately preceding pixel.

This modification has the very attractive property that all coverage deltas both inside and outside the curve are zero, since points completely inside a curve contribute no additional area (except for the first pixel completely inside the curve). This property is key to Pathfinder’s performance relative to most vector texture algorithms. Calculating exact area coverage is slow, but calculating coverage deltas instead of absolute coverage essentially allows us to limit the expensive calculations to the edges of the curve, reducing the amount of work the GPU has to do to a fraction of what it would have to do otherwise.

In order to fill the outline and generate the final glyph image, we simply have to sweep down each column of pixels, calculating the running total of area coverage and writing pixels according to the winding rule. The formula to determine the color of each pixel is simple and fast: min(|coverage total so far|, 1.0) (where 0.0 is a fully transparent pixel, 1.0 is a fully opaque pixel, and values in between are different shades). Importantly, all columns are completely independent and can be calculated in parallel.
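
Here is a minimal CPU sketch of that accumulation pass, assuming a dense width × height buffer of per-pixel coverage deltas. In Pathfinder the equivalent work runs on the GPU with one compute invocation per column; this scalar version just shows the prefix sum and the min(|sum|, 1.0) clamp.

// Turn per-pixel coverage deltas into alpha values, column by column.
fn accumulate(coverage: &[f32], width: usize, height: usize) -> Vec<f32> {
    let mut alpha = vec![0.0f32; width * height];
    for x in 0..width {
        let mut running = 0.0f32; // fractional winding number so far
        for y in 0..height {
            running += coverage[y * width + x];
            alpha[y * width + x] = running.abs().min(1.0);
        }
    }
    alpha
}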

Implementation details

With the advanced features in OpenGL 4.3, this algorithm can be straightforwardly adapted to the GPU.

  1. As an initialization step, we create a coverage buffer to hold delta coverage values. This coverage buffer is a single-channel floating-point framebuffer. We always draw to the framebuffer with blending enabled (GL_FUNC_ADD, both source and destination factors set to GL_ONE).

  2. We expand the TrueType outlines from the variable-length compressed glyf format inside the font to a fixed-length, but still compact, representation. This is necessary to be able to operate on vertices in parallel, since variable-length formats are inherently sequential. These outlines are then uploaded to the GPU.

  3. Next, we draw each curve as a patch. In a tessellation-enabled drawing pipeline like the one that Pathfinder uses, rather than submitting triangles directly to the GPU, we submit abstract patches which are converted into triangles in hardware. We use indexed drawing (glDrawElements) to take advantage of the GPU’s post-transform cache, since most vertices belong to two curves.

  4. For each path segment that represents a Bézier curve, we tessellate the Bézier curve into a series of small lines on the GPU. Then we expand all lines out to screen-aligned quads encompassing their bounding boxes. (There is a complication here when these quads overlap; we may have to generate extra one-pixel-wide quads here and strip them out with backface culling. See the comments inside the tessellation control shader for details.)

  5. In the fragment shader, we calculate trapezoidal coverage area for each pixel and write it to the coverage buffer. This completes the draw step.

  6. To perform the accumulation step, we attach the coverage buffer and the destination texture to images. We then dispatch a simple compute shader with one invocation per pixel column. For each row, the shader reads from the coverage buffer and writes the total coverage so far to the destination texture. The min(|coverage total so far|, 1.0) expression above need not be computed explicitly, because our unsigned normalized atlas texture stores colors in this way automatically.

The performance characteristics of this approach are excellent. No CPU preprocessing is needed other than the conversion of the variable-length TrueType outline to a fixed-length format. The number of draw calls is minimal—any number of glyphs can be rasterized in one draw call, even from different fonts—and the depth and stencil buffers remain unused. Because the tessellation is performed on the fly instead of on the CPU, the amount of data uploaded to the GPU is minimal. Area coverage is essentially only calculated for pixels on the edges of the outlines, avoiding expensive fragment shader invocations for all the pixels inside each glyph. The final accumulation step has ideal characteristics for GPU compute, since branch divergence is nonexistent and cache locality is maximized. All pixels in the final buffer are only painted at most once, regardless of the number of curves present.

Compatibility concerns

For any GPU code designed to ship to consumers, especially code requiring OpenGL 3.0 and up, compatibility and portability are always concerns. As Pathfinder is designed for OpenGL 4.3, released in 2012, it is no exception. Fortunately, the algorithm can be adapted in various ways depending on the available functionality.

  • When compute shaders are not available (OpenGL 4.2 or lower), Pathfinder uses OpenCL 1.2 instead. This is the case on the Mac, since Apple has not implemented any OpenGL features newer than OpenGL 4.2 (2011). The compute-shader crate abstracts over the subset of OpenGL and OpenCL necessary to access GPU compute functionality.

  • When tessellation shaders are not available (OpenGL 3.3 or lower), Pathfinder uses geometry shaders, available in OpenGL 3.2 and up.

(Note that it should be possible to avoid both geometry shaders and tessellation shaders, at the cost of performing that work on the CPU. This turns out to be quite fast. However, since image load/store is a hard requirement, this seems pointless: both image load/store and geometry shaders were introduced in DirectX 10-level hardware.)

Although these system requirements may seem high at first, the integrated graphics found in any Intel Sandy Bridge (2011) CPU or later meet them.

Future directions

The immediate next step for Pathfinder is to integrate it into WebRender as an optional accelerated path for applicable fonts on supported GPUs. Beyond that, there are several features that could be added to extend Pathfinder itself.

  1. Support vector graphics outside the font setting. As Pathfinder is a generic vector graphics rasterizer, it would be interesting to expose an API allowing it to be used as the backend for e.g. an SVG renderer. Rendering the entire SVG specification is outside of the scope of Pathfinder itself, but it could certainly be the path rendering component of a full SVG renderer.

  2. Support CFF and CFF2 outlines. These have been seen more and more over time, e.g. in Apple’s new San Francisco font. Adding this support involves both parsing and extracting the CFF2 format and adding support for cubic Bézier curves to Pathfinder.

  3. Support WOFF and WOFF2. In the case of WOFF2, this involves writing a parser for the transformed glyf table.

  4. Support subpixel antialiasing. This should be straightforward.

  5. Support emoji. The Microsoft COLR and Apple sbix extensions are straightforward, but the Google SVG table allows arbitrary SVGs to be embedded into a font. Full support for SVG is probably out of scope of Pathfinder, but perhaps the subset used in practice is small enough to support.

  6. Optimize overlapping paths. It would be desirable to avoid antialiasing edges that are covered by other paths. The fill rule makes this trickier than it initially sounds.

  7. Support hinting. This is low-priority since it’s effectively obsolete with high-quality antialiasing, subpixel AA, and high-density displays, but it might be useful to match the system rendering on Windows.

Conclusion

Pathfinder is available on GitHub and should be easily buildable using the stable version of Rust and Cargo. Please feel free to check it out, build it, and report bugs! I’m especially interested in reports of poor performance, crashes, or rendering problems on a variety of hardware. As Pathfinder does use DirectX 10-level hardware features, some amount of driver pain is unavoidable. I’d like to shake these problems out as soon as possible.

Finally, I’d like to extend a special thanks to Raph Levien for many fruitful discussions and ideas. This project wouldn’t have been possible without his insight and expertise.

Mozilla Addons BlogAdd-ons Update – 2017/02

Here’s the state of the add-ons world this month.

If you haven’t read Add-ons in 2017, I suggest that you do. It lays out the high-level plan for add-ons this year.

The Review Queues

In the past month, 1,670 listed add-on submissions were reviewed:

  • 1148 (69%) were reviewed in fewer than 5 days.
  • 106 (6%) were reviewed between 5 and 10 days.
  • 416 (25%) were reviewed after more than 10 days.

There are 467 listed add-ons awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.

Compatibility

The blog post for 52 is up and the bulk validation was already run. Firefox 53 is coming up.

Multiprocess Firefox is enabled for some users, and will be deployed for all users very soon. Make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • eight04
  • Aayush Sanghavi
  • zombie
  • Doug Thayer
  • ingoe
  • totaki
  • Piotr Drąg
  • ZenanZha
  • Joseph Frazier
  • Revanth47

You can read more about their work in our recognition page.

Firefox NightlyThese Weeks in Firefox: Issue 10

Highlights

  • The Sidebar WebExtension API (compatible with Opera’s API) has been implemented 💰
  • The preferences reorg and search project is fully underway. jaws and mconley led a “hack-weekend” this past weekend with some MSU students working on the reorg and search projects
  • A lot of people were stuck on Firefox Beta 44; we found out about it and fixed it. Read more about it on :chutten’s blog
  • According to our Telemetry, ~62% of our release population has multi-process Firefox enabled by default now 😎
  • Page Shot is going to land in Firefox 54.  We are planning on making it a WebExtension so that users can remove it fully if they choose to.

Friends of the Firefox team

Project Updates

Activity Stream

Content Handling Enhancement

Electrolysis (e10s)

  • e10s-multi is tentatively targeted to ride the trains in Firefox 55
    • Hoping to use a scheme where we measure the user’s available memory in order to determine maximum content process count
    • Here’s the bug to track requirements to enable e10s-multi on Dev Edition by default

Firefox Core Engineering

Form Autofill

Go Faster

  • 1-day uptake of system add-ons is ~85% in beta (thanks to restartless), and ~72% in release (Wiki)

Platform UI and other Platform Audibles

Privacy/Security

Search

  • Fixed a glaring problem with one-off buttons in scaled (zoomed) display configurations that made the search settings button appear on a separate line.
  • We now correctly report when a search engine could not be installed due to an invalid format.
  • Some Places work, mostly in preparation for supporting hi-res favicons.

Sync / Firefox Accounts

Storage Management

Test Pilot

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Air MozillaMartes Mozilleros, 14 Feb 2017

Martes Mozilleros. Bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects.

Chris H-CThe Most Satisfying Graph

There were a lot of Firefox users on Beta 44.

Usually this is a good thing. We like having a lot of users[citation needed].

It wasn’t a good thing this time, as Beta had already moved on to 45. Then 46. Eventually we were at Beta 52, and the number of users on Beta 44 was increasing.

We thought maybe it was because Beta 44 had the same watershed as Release 43. Watershed? Every user running a build before a watershed must update to the watershed first before updating to the latest build. If you have Beta 41 and the latest is Beta 52, you must first update to Beta 44 (watershed) so we can better ascertain your cryptography support before continuing on to 48, which is another watershed, this time to do with fourteen-year-old processor extensions. Then, and only then, can you proceed to the currently-most-recent version, Beta 52.

(If you install afresh, the installer has the smarts to figure out your computer’s cryptographic and CPU characteristics and suitability so that new users jump straight to the front of the line.)
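To illustrate the routing, here is a minimal, hypothetical sketch in Rust of how an update path gets threaded through watersheds. The version numbers are the ones from this story; update_path is a made-up helper, not the real updater logic.

// Compute the sequence of updates a client must take, given the sorted list
// of watershed versions between it and the latest build. Versions here are
// just Beta numbers; 44 and 48 were the watersheds in this story.
fn update_path(current: u32, latest: u32, watersheds: &[u32]) -> Vec<u32> {
    let mut path = Vec::new();
    let mut version = current;
    for &watershed in watersheds {
        if version < watershed && watershed <= latest {
            path.push(watershed); // must stop at each watershed on the way up
            version = watershed;
        }
    }
    if version < latest {
        path.push(latest);
    }
    path
}

fn main() {
    // A user on Beta 41 heading for Beta 52 must pass through 44 and 48.
    assert_eq!(update_path(41, 52, &[44, 48]), vec![44, 48, 52]);
    // A user already past both watersheds updates directly.
    assert_eq!(update_path(49, 52, &[44, 48]), vec![52]);
    println!("update paths check out");
}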

Beta 44 being a watershed should, indeed, require a longer-than-usual lifetime of the version, with respect to population. If this were the only effect at play we’d expect the population to quickly decrease as users updated.

But they didn’t update.

It turns out that whenever the Beta 44 users attempted to download an update to that next watershed release, Beta 48, they were getting a 404 Not Found. At some point, the watershed Beta 48 build on download.mozilla.org was removed, possibly due to age (we can’t keep everything forever). So whenever the users on Beta 44 wanted to update, they couldn’t. To compound things, any time a user before Beta 44 wanted to update, they had to go through Beta 44. Where they were caught.

This was fixed on… well, I’ll let you figure out which day it was fixed on:

[Graph: daily active users on Beta 44]

This is now the most satisfying graph I’ve ever plotted at Mozilla.

:chutten


David LawrenceHappy BMO Push Day!

The following changes have been pushed to bugzilla.mozilla.org:

  • [1337382] clicking on ‘show’ after updating a bug to show the list of people emailed no longer works
  • [1280393] [a11y] All inputs and selects need to be labeled properly

Discuss these changes on mozilla.tools.bmo.


This Week In RustThis Week in Rust 169

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate of the week is derive_builder, a crate to automatically implement the builder pattern for arbitrary structs, now with macros 1.1 support (custom derive, available since Rust 1.15). Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

153 pull requests were merged in the last week.

New Contributors

  • Aaron Power
  • Alexander Battisti
  • bjorn3
  • Charlie Fan
  • Gheorghe Anghelescu
  • Giang Nguyen
  • Henning Kowalk
  • Ingvar Stepanyan
  • Jan Zerebecki
  • Jordi Polo
  • Mario
  • Rob Speer
  • Shawn Walker-Salas

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

Closed RFCs

The following proposals were rejected by the team after their 'final comment period' elapsed.

No RFCs were closed this week!

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

PRs:

Issues in final comment period:

Other significant issues:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Air MozillaTEST -PROJ. MTG

TEST -PROJ. MTG The Monday Project Meeting

Air MozillaMozilla Weekly Project Meeting, 13 Feb 2017

Mozilla Weekly Project Meeting The Monday Project Meeting

Mozilla Reps CommunityReps as mobilizers in the Mozilla Community

TL;DR

This is a blogpost that explains what a Mobilizer in the Mozilla community is and how Reps could potentially fit this role. Right now we have defined mobilizers as a group of trusted, aligned and committed Mozillians who are interested in:

  1. finding and connecting new talent with Mozilla projects
  2. growing in their mobilizing and coaching skills
  3. supporting their local communities and the rest of the organization to reach their goals and be more effective and
  4. creating collaborations with other local communities in an effort to expand Mozilla’s mission and Mozilla’s outreach in the open source ecosystem

Definitions:

  • trusted: Mozillians feel comfortable sharing information intended just for internal use with them, and they keep it that way.
  • aligned: people who have a clear understanding of where Mozilla is right now and what the organization’s current needs and priorities are, and who as a result plan their activities to bring value to these high-level goals.
  • committed: people with a proven track record of accountability and support for the activities delivered inside the organization.

Background

When we introduced the word “mobilizer” in the Reps community, a lot of people were confused. What does being a mobilizer mean? And what does that mean for me as a Rep? In trying to give an initial answer, we realised that every person had a different interpretation of the word, creating different expectations. What we failed to realize was that Reps have in fact already been walking the mobilizer path for a year now via the RepsNext changes.

This blogpost serves as a way to shed some light on the questions above. It is also trying to initiate a discussion inside Reps. We’re proud to call ourselves a volunteer-led program, so we need to discuss in the open the path we are taking and its implications for the program. We haven’t finalized any role descriptions or role responsibilities. This is something we need your help with in the comments below.

What does being a mobilizer mean?

Mobilizer is a term that we have started using only recently. According to Wiktionary, a mobilizer is a person who mobilizes something or someone: basically, the one who moves things or people forward.

At Mozilla we have struggled a lot in the past to bring coordination inside our communities and to connect with other communities in order to serve Mozilla’s mission and goals. We have partially solved this in the past by creating the Reps program; however, it was never clear that this was its purpose. So what does being a mobilizer actually mean?

A Mobilizer in the Mozilla community is a trusted, aligned and committed Mozillian who is interested in:

  1. finding and connecting new talent with Mozilla projects
  2. growing in their mobilizing and coaching skills and
  3. supporting their local communities and the rest of the organization to reach their goals and be more effective
  4. creating collaborations with other local communities in an effort to expand Mozilla’s mission and Mozilla’s outreach in the open source ecosystem.

But what is going to be the result/output of those efforts?

The purpose of Mobilizers’ actions is to build healthy communities and connections that will serve Mozilla’s goals as well as specific functional team goals.

How does Reps fit in that role?

When we’ve built the program 7 years ago we came with a simple definition:

“The Mozilla Reps program aims to empower and support volunteer Mozillians who want to become official representatives of Mozilla in their region/locale.

The program provides a simple framework and a specific set of tools to help Mozillians to organize and/or attend events, recruit and mentor new contributors, document and share activities, and support their local communities better.”

At the beginning we aimed to give volunteers a way to officially represent Mozilla around the world. But the program evolved into so much more. Over the years, Reps found a structured way to become the bridge between the Mozilla Corporation, the Mozilla Foundation and the volunteer community. They became the backbone of the community: they helped to build structured communities and to bring more contributors to the project. Mozilla is competing in a very aggressive environment, but we have a key differentiator, and that is our volunteer community. During the past years Reps have been the key to unlocking this huge potential and providing value to Mozilla’s goals. So even if it is not in our description, Reps are the ones who unofficially took on the role of mobilizing their communities and helping them align with Mozilla’s goals.

The transformation from an events program to a community building program (RepsNext)

As mentioned in the definition of Reps, the program has given its participants “a specific set of tools to help Mozillians to organize and/or attend events”, and it was this specific set of tools and resources that enabled hundreds of Reps through the years to run thousands of events in order to spread Mozilla’s mission.

However, Reps are not only about events. Of course events are a great tool that enables us to bring more contributors and awareness to Mozilla. But we came to the realization that, even though events were great, we needed to use them in a way that made them more than a one-off effort. As we grew, we found the need to use our resources better and to put them in the service of Mozilla’s goals instead of just spreading awareness. And last but not least, we needed to give our community resources not only for managing budgets but also for building and supporting healthy communities, coaching new contributors and building connections.

We saw the need to evolve, and a year ago we introduced RepsNext, a project that has already brought great changes and that still has a long way to go.

What if a Rep doesn’t want to be a mobilizer?

Being a mobilizer is an exciting, great step for Reps. However, we recognize that there are Reps who don’t see themselves as such, and this is completely understandable. That’s why we are building a strategy for Reps who don’t want to continue down that path and want to focus on core functional work: a strategy that recognizes their contributions and efforts, and that enables them to keep volunteering in the Mozilla world and representing Mozilla as core contributors.

So what do you think?

  • Does the mobilizer role feel like the natural evolution path for Reps?
  • And what else would you like to see?

Please leave your feedback on the discourse topic.

The Servo BlogThis Week In Servo 92

In the last week, we landed 118 PRs in the Servo organization’s repositories.

Both the Quantum CSS and Quantum Render projects took important steps last week by merging the relevant project branches into the main Firefox source tree. This means that it’s now possible to run these experimental powered-by-Servo-technology builds by flipping a build-time switch in a local development build, and automated tests are tracking any new regressions that these builds cause.

Planning and Status

Our overall roadmap is available online, including the overall plans for 2017 and Q1. Please check it out and provide feedback!

This week’s status updates are here.

Notable Additions

  • absoludity replaced some low-level SpiderMonkey API calls with high-level Rust equivalents.
  • nical allowed multiple WebRender instances to share a thread pool.
  • stshine corrected an integer overflow when laying out large replaced elements.
  • Manishearth implemented parsing for many Stylo CSS properties.
  • emilio fixed the sorting behaviour for pseudo-element declarations.
  • kvark improved the sampling precision for image masks.
  • rlhunt made it easier to run individual reftests in the WebRender test harness.
  • fitzgen implemented partial support for nested template instantiation in rust-bindgen.
  • deror1869107 rewrote some WebVR uses of typed arrays to use higher-level Rust APIs instead.
  • canaltinova improved the performance of style system code by boxing large data structures.
  • zakorgy avoided segfaults in high-level typed array code from null pointers.
  • zhuowei made the Android build extract its resources on first launch.
  • jrmuizel implemented font loading on macOS.
  • scoopr added support for method arguments to Objective-C bindings in rust-bindgen.
  • danlrobertson fixed an issue with using a debugger on code that used ipc-channel.
  • canaltinova and mukilan added support for form owners to the HTML parser.
  • shinglyu implemented a performance testing harness for Stylo.
  • flier made padding bytes be calculated for complex structures in rust-bindgen.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Anjana VakilNotes from FOSDEM 2017

Riding the tram you hear the word “Linux” pronounced in four different languages. Stepping out into the grey drizzle, you instantly smell fresh waffles and GitHub-sponsored coffee, and everywhere you look you see an FSF t-shirt. That’s right kids, it’s FOSDEM time again! The beer may not be free, but the software sure is.

Last year I got my first taste of this most epic of FLOSS conferences, back when I was an unemployed ex-grad-student with not even 5 pull requests to my name. This year, as a bona fide open source contributor, Mozillian, and full-time professional software engineer, I came back for more. Here are some things I learned:

  • Open source in general - and, anecdotally, FOSDEM in particular - has a diversity problem. (Yes, we already knew this, but it still needs mentioning.)
  • …But not for long, if organizations like Mozilla and projects like IncLudo have anything to say about it.
  • Games are a powerful tool to introduce programming, promote diversity, and build 21st-century literacy skills.
  • Speech applications built on open technologies are not just a pipe dream!
  • How browsers render the web is super interesting and something I want to get a clue about.
  • FoxPuppet is making automating & testing Firefox even easier than with our beloved Marionette!
  • The Mozilla Tech Speakers are straight killin’ it.

See the notes below for more!

Disclaimer: The (unedited!!!) notes below represent my impressions of the content of these talks, jotted down as I listened. They may or may not be totally accurate, or precisely/adequately represent what the speakers said or think. If you want to get it from the horse’s mouth, follow the links to the FOSDEM schedule entry to find the video, slides, and/or other resources!

Mozilla Dev Room, Saturday

Firefox Nightly

Pascal Chevrel

  • Nightly users are crucial for early detection of regressions and bugs introduced by new features (especially important with the advent of Quantum)
  • Nightly users aren’t as diverse as stable users:
    • 86% Windows, 6% Linux, 5% Mac
    • Around 70% come from just 15 countries (USA, Germany, Russia at top)
    • Over 80% are using en-US locale
  • Generally, Nightly users are tech-savvy (though not necessarily devs) - this means the bugs they report are generally higher-quality, more helpful

  • Nightly Reboot Status: Project started in 2016 to encourage Nightly use and bring better integration with release process
    • Download pages: now you can download Nightly! :D
    • Nightly-specific default bookmarks
    • Locale-specific about:home page with e.g. info about local events, links to translated articles
    • Nightly IRC channel
    • Nightly blog (e.g. mconley’s “These weeks in Firefox”)
    • @FirefoxNightly on Twitter
  • Help out!
    • Use Nightly as your main browser
    • Translate information & help bug reporters that aren’t comfortable with English
    • Help triage bugs - too many for Mozilla staff to handle
  • Q&A
    • A lot of former Nightly users have moved over to Aurora - this means they find the bugs later, and Aurora isn’t more stable/as stable as people expect

Firefox DevTools

Alex “Laka” Lakatos (@lakatos88)

  • Opening DevTools:
    • Right click > Inspect Element
    • Tools menu
  • Element inspector:
    • Search box: can search for CSS selectors
  • CSS rules:
    • use filter box to find e.g. every ‘color’ selector
    • Colors
      • Shift-Click on colored circle to switch between hex/RGB/named representation
      • Click on it to get color picker
  • Debugger
    • Click line to set breakpoint
    • Can view all variables at the point where execution is paused
  • Network
    • Shows every request you make
    • Shows two sizes: Size during transport and size on disk - helps you minimize transfer time
  • DevTools in Nightly
    • New features rolled out constantly
    • Disable HTTP Cache when DevTools is open - super useful while you’re developing and the cache drives you insane!
    • If you’re nostalgic, can use the Firebug theme
  • Moving DevTools to add-on instead of built in to Firefox - Will help ship faster

WebExtensions

Daniele Scasciafratte

  • Firefox is famous for extensions - it’s the most customizable browser out there
  • Extensions sometimes become features of Ffx itself
  • WebExtensions is “One API to Rule Them All”
    • Standard HTML/CSS/JS (not XUL)
    • compatibility with Chrome
    • Content scripts allow extensions to run in pages
    • Background scripts allow them to maintain long-term state from the moment they’re installed to the moment they’re removed
    • Fully customizable “browser actions” (buttons on top right) with their own menus etc.

The Firefox Puppet Show

Dave Hunt (& Henrik the Foxy Puppet)

  • Selenium
    • Tool for browser automation
    • Never maintained by Mozilla
    • Works for multiple browsers, including Firefox
    • Limited to the content space; has no control over browser chrome
  • FirefoxDriver
    • Deprecated due to add-on signing from Ffx 48
  • Marionette (Woo!!!)
    • Introduced 2012, originally for FirefoxOS
    • Directly integrated into Gecko
    • Controls chrome and content (whereas Selenium is content-based)
  • GeckoDriver
    • Proxy for W3C WebDriver & Gecko
    • WebDriver spec should be a recommendation by end of Q1 2017
    • Idea: any browser implementing the spec can be automated
    • Has been adopted by all major browser vendors
    • Thanks to WebDriver, Selenium (or any automation client) doesn’t have to worry about how to automate all the browsers: just has to implement the spec, and can then control any compliant browser
    • GeckoDriver not feature complete yet, partly bc the spec is still in late stages of development
  • FoxPuppet
    • Python package w/ simple API for finding & interacting with the Firefox UI
    • Allows you to interact with Firefox in Selenium, builds on top of Selenium
    • Ultimately going to be used to test Ffx itself
    • Marionette can do more than what just Selenium offers (chrome); FoxPuppet takes it the next step by making it much simpler to write automation code using Selenium+Marionette
    • At the moment supports window management, interaction with popups, soon interaction with tabs…
  • Q&A:
    • Can it run headlessly?
      • No, because there’s no true headless mode for Firefox, though you can make it effectively headless via e.g. running in Docker
    • Can it work with WebGL?
      • The problem is similar to canvas - if we just have one element, we can’t look inside of it unless there’s some workaround to expose additional information about the state of the app specifically for testing
    • Executing async JS?
      • Selenium has functionality to handle this, and since FoxPuppet builds on Selenium, the base Selenium functionality is still available

WebRender

Nicolas Silva

  • Servo
    • Mostly research, for trying out new ideas more quickly than we could in Ffx (without worrying about putting it in front of users)
  • How do rendering engines work? (Using Gecko as a reference)
    • layout: DOM Tree layout is computed and transformed into a Frame Tree (has other names in e.g. WebKit)
    • invalidation: From the Frame Tree we get a display list - pretty flat structure of things we need to render
    • painting: We then render the display list into a Layer Tree
      • like layers in Photoshop
      • Intermediate surfaces containing rendered elements
    • compositing: mirror the layer tree on the compositor process, for scrolling etc. at a high frame rate
  • WebRender
    • attempts to move away from this type of architecture and do something different
    • designed to work around the GPU, like a game rendering engine
    • drops the distinction between painting/compositing, and just render everything to the window directly
    • take the page content & turn it into a series of primitives we can send to the GPU (??)
    • written in Rust
    • using OpenGL for now, though in the future other backends would be possible
    • doesn’t understand arbitrary shapes, but rather only a simple set of shapes that are common on the web (e.g. rectangles, rounded rects)
  • Working fast on a GPU
    • GPUs are very stateful things, so switching state/state changes have a big impact
    • Batching is the key
    • Transferring data to/from the CPU is expensive
    • Memory bandwidth is really costly, especially on mobile devices - so try to avoid touching too many pixels/touching the same pixels too many times (aka “overdraw”)
      • Rendering back-to-front, you have to draw e.g. the whole background even if most of it is covered by other things
      • Rendering front-to-back means you can avoid drawing any parts of layers other than those that will ultimately be seen (see the sketch after these notes)
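As a rough illustration of that last point, here is a minimal sketch in plain Rust (nothing WebRender-specific, all names invented): opaque rectangles are painted front to back into a tiny framebuffer, and each pixel is written at most once, so background pixels hidden behind nearer content are never shaded at all.

// A tiny "framebuffer": each pixel stores Option<color>, None = untouched.
struct Framebuffer {
    width: usize,
    height: usize,
    pixels: Vec<Option<u32>>,
}

struct Rect { x: usize, y: usize, w: usize, h: usize, color: u32 }

impl Framebuffer {
    fn new(width: usize, height: usize) -> Self {
        Framebuffer { width, height, pixels: vec![None; width * height] }
    }

    // Paint opaque rects front to back: a pixel is shaded at most once, so
    // anything hidden behind nearer content is never touched (no overdraw).
    fn paint_front_to_back(&mut self, rects: &[Rect]) -> usize {
        let mut shaded = 0;
        for rect in rects { // rects[0] is the frontmost
            for y in rect.y..(rect.y + rect.h).min(self.height) {
                for x in rect.x..(rect.x + rect.w).min(self.width) {
                    let px = &mut self.pixels[y * self.width + x];
                    if px.is_none() {
                        *px = Some(rect.color);
                        shaded += 1;
                    }
                }
            }
        }
        shaded
    }
}

fn main() {
    let mut fb = Framebuffer::new(8, 8);
    // A small opaque box in front of a full-screen background.
    let rects = [
        Rect { x: 2, y: 2, w: 4, h: 4, color: 0xFF0000 }, // front
        Rect { x: 0, y: 0, w: 8, h: 8, color: 0x0000FF }, // background
    ];
    // Only 64 pixels get shaded in total; the 16 pixels behind the box are
    // skipped when the background is painted.
    println!("pixels shaded: {}", fb.paint_front_to_back(&rects));
}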

How Rust is being developed

@jgbarah Jesus M. Gonzalez-Barahona, Bitergia

  • Analytics dashboard with contribution data: https://rust-analytics.mozilla.community

Code Invaders: Learning to code with IoT games and HTML/CSS/JS

István “Flaki” Szmozsánszky @slsoftworks

  • Happy Code Friends
    • Tech needs more diversity, we need more people to learn to code
    • But we need to spark people’s interest - games!
  • Arduboy
    • Kickstarter project funded about 1-2 years ago
    • Basically puts a micro Arduino, screen, buttons in a nice little (cheap) package
    • But unfortunately you had to write programs for it in C through the Arduino IDE - not very beginner friendly
  • Code Invaders
    • Teaching people the basics of programming (functions, variables, …) while building their own game
    • Code is written in JS and compiled to C by pushing a button in the IDE
    • Makes Arduboy programming accessible for beginners

Diversity in Open Source

Kristi Pogri @KristiPogri

  • Horrendously low percentages of open-source developers are women
  • Open source isn’t really open unless it’s open to everyone
  • Organizations/initiatives like WoMoz and Outreachy are actively trying to change this

Diversity user research

Gloria Dwomoh

  • Listening is a skill, one which we don’t study enough in comparison with e.g. reading, writing, speaking
  • Hearing vs. Listening:
    • Hearing is perceiving sounds
    • Listening is concentrating on them
  • Body language is important for listening - showing that you’re bored makes the speaker cut themselves off
  • Active listening
    • Reflection: when someone tells you something, you try to reflect back what they’re saying - to show that you understood, e.g. “so you mean that….”
    • Funneling: you try to ask for more general or more specific information
    • Empathy: important for building trust with the speaker
      • Sympathy is “oh, that must have been hard, I’m sorry”
      • Empathy is also “I understand what you mean, because I had a similar experience when ….” - show that you deeply understand the speaker’s experience

Don’t break the internet! Mozilla Copyright Campaign in Europe

Raegan MacDonald

  • We need to bring copyright into the 21st century
    • Bring in flexibility
    • Encourage creativity & expression online
  • Copyright reform is happening, but unfortunately it’s not the kind of reform we need
    • Doesn’t focus on the interests of users on the internet
    • Instead of protecting & encouraging innovation & creativity online, may in some cases undermine that
  • Mozilla wants to ensure that the internet remains a “global public resource open & accessible to all”
    • not trying to get rid of copyright
    • but rather encourage copyright laws that support all actors in the web ecosystem
  • Issues in the current copyright directive
    • Upload filters:
      • platforms that are holding large amounts of copyrighted content would need agreements with rights holders
      • ensuring that they uphold those agreements would require them to implement upload filters that may end up restricting users’ ability to post their own content
    • Neighboring rights - aka “snippet tax” or “google tax”
      • proposal to extend copyright to press publishers
      • press publications would get to charge aggregators for e.g. posting a snippet of their article, the headline, and a hyperlink
      • already been attempted in Germany and Spain, where it had negative effects on startup aggregators and entrenched the power of established aggregators (Google)
    • Text & Data Mining (TDM)
      • there would be restrictions on ingesting copyrighted data for the purposes of data mining
      • there would only be exceptions for research institutions
  • The fight right now is unfortunately quite binary: The big Silicon Valley companies/aggregators (Google etc.) vs. the Publishing/Music/Film industry
    • We need it to involve the full spectrum of stakeholders on the web, especially users, independent content creators
  • Get involved!
    • changecopyright.org
    • raegan@mozilla.com
    • Series of events across Europe
  • Q&A:
    • Since filtering requires monitoring, and monitoring is unconstitutional in the EU, are there plans to fight this if it passes?
      • Yes, there is absolutely a contradiction there, and we plan to fight it. We want to bring the proposal in line with existing law and channel activism against these filters.
    • Previous events/campaigns were focused on Freedom of Panorama (copyright exception that allows you to take photographs of e.g. buildings, art and post them online). Will new events be focused on the 4 areas you discussed?
      • Yes, this is sort of our 2nd wave of activism on this issue, and we’ll be organizing and encouraging more advocacy around these issues.
    • Do you coordinate with the media?
      • Yes. There are a number of organizations working on a modern version of copyright that looks forward, not backwards. The C4C (copyright for creativity) brings together a lot of players (e.g libraries, digital rights NGOs), and that serves as a sort of umbrella. A lot of folks have similar issues and we work together as much as possible to amplify & support certain voices.
    • What is the purpose of another wave? Are we starting over?
      • EU policy making is a very slow game. This reform has been under discussion for over 5 years, and the process of it going through negotiations to reach a final EU parliament agreement will be at least a year. If we want to have an impact & mobilize different voices, it has to be a sustained, long-term effort, which was not the case in the 1st wave because we didn’t have the proposal yet. Now that we have it, we have more focus on what to encourage people to speak out about, which is potentially game-changing.
    • It seems that the education exception excludes all informal sources of education
      • This exception applies to cases where licensed materials can’t be acquired. But that’s not really the problem; the problem is the cost. There’s now a campaign copyrightforeducation.org. It’s something we’re following closely, and we’re mostly relying on our partners who are experts in this area.
    • When will this be decided by parliament?
      • There will be votes on committee opinions next month, but the main opinion will be deliberated in March, and they want it to be voted by end of summer 2017. So the next 6 months could be game-changing, it’s an important time to contact your representatives.
    • Would the TDM exception implicate privacy concerns?
      • This doesn’t deal with privacy-protected content, but rather would allow people that have lawfully acquired works/texts to create e.g. a visualization. It doesn’t get into privacy issues about mining people’s metadata and all that - it’s a separate issue from privacy and wouldn’t override it.

Other rooms, Saturday-Sunday

WebRTC and speech recognition with Adhearsion

Luca Pradovera @lucaprado - MojoLingo

  • Spoken dialog system built entirely on open tools
  • PocketSphinx for recognition (understands stock grammar of ~100 words)
  • Rasa NLU for interpretation
  • Flite for TTS (voice a bit robotic but gets the job done)
  • FreeSwitch - switching platform with good WebRTC support. Asterisk is an alternative
  • Adhearsion is the main control layer
    • Ruby framework for voice apps
    • handles things like picking up the call, transferring, answering, recording, etc.
    • Connects to FreeSwitch or Asterisk
    • Uses actor model - treats each call as an actor, handles it in isolation (so if one call fails, the whole system doesn’t fail)
    • Case studies
      • RingRx HIPAA-compliant phone system
      • LiveConnect for broadcasting surgical procedures

Can open source open minds? IncLudo games for workplace diversity

Jesse Himmelstein

  • Diversity is an institutional advantage
  • But it’s difficult to implement - “we are feeling beings who think, not thinking beings who feel”
    • some research suggests that our ability to reason may exist to convince others of our ideas, rather than to make decisions
  • IncLudo project created a variety of games to improve workplace diversity in India
    • some teach about biases
    • some (esp. board games) encourage conversation & exchange of experiences/stories among players
  • Process
    • Game jams are a good way to try out new ideas
    • Paper prototypes are great for experimenting, though clients don’t always accept them so easily
    • Having a diverse team helps
  • Q&A
    • You mentioned a game where you have to hide your bias from others
      • You pretend you’re management at a company, and you have to hire someone for a position. Everyone has a secret bias card (“don’t want to hire [women, muslims, …]”). Your goal is to fight for the candidate you (don’t) want, but without being so obvious about it as to reveal your secret bias to the other managers. There were some really funny conversations coming out of it.
    • How do you measure the impact?
      • That’s really hard. There’s a few different things: you can try to measure what people learned from the game, which is difficult in itself. The other attempt is to see what the organizations actually do in real live - that’s what (our partner) ZMQ is going to do: see if the orgs actually change their practices.
    • How do your games relate to competitive vs. collaborative games
      • I wouldn’t agree that competition is bad by itself - it motivates us and as long as we understand that we’re competing in the game and not once it’s over. Our games are competitive, with the exception of Pirat Partage which wasn’t competitive but then the players started asking us for a scoring system so that they could see who’s winnign
    • Aren’t competition and diversity contradictory?
      • If you’re trying to bring diversity into an existing social structure made of companies that are competing, it makes sense to sell it to them that way.

Hellink, an educational game about open data

Thomas Planques

  • Game for 1st year uni students that aims to
    • teach information literacy: critical thinking about info sources
    • raise awareness about open data
  • The game talks about process of creation of scientific knowledge
    • Who pays for scientific knowledge?
      • Unis pay Scientists’ salaries
      • Unis must also pay publishers, the publishers don’t pay the scientists
      • Unis also pay to buy the journal, to buy back the knowledge
      • So we say in scientific knowledge domain, people (taxpayers) pay 3 times
    • What keeps this system going?
      • each scientist has an H index based on their publications that’s important for their career
      • the publishers own the journals, i.e. the means for scientists to advance their careers
      • monopoly by big scientific publishers make knowledge less accessible
    • One big solution would be the open data movement
      • publishing not in private journals, but open source archives
      • but scientists often think this will strip them of their work
        • but it’s the other way around
    • This is the subject of our game: we believe in the goal of open knowledge
      • the goal is to raise awareness about this problem
      • and also that checking the sources of your info is very important
    • We chose to use the same humorous tone as Ace Attorney (manga-like Japanese game where you play a lawyer, trial is like Dragonball-Z)
      • you’re sent to investigate a massive plot linked to who owns info & data in the uni/scientific publishing world
      • you have one person who tells you a false assertion backed by false info sources
      • like in Papers Please, you need to point out what is false in the info source & link it with a publishing rule that the source is supposed to respect
    • What we learned
      • procedural rhetoric: the game itself must convey the message
        • a lot of educational games involve a little playing and a lot of reading
        • the goal here is that you actually learn by playing
      • making a game about notions like this that have gray areas is a lot harder than making games about hard science/math/programming etc. & needs more time invested
      • what should ed games aim to do?
        • games in general are not very good at conveying a very complex message/complex domain of learning
        • what they’re good at is piquing interest and raising awareness
        • ed games shouldn’t aim to teach everything to the player, but rather to give them the interest to go look into the topic themselves
      • we tried to make our game one that’s actually fun in itself
        • the game is for 1st year uni students, who by that point are used to playing a lot of “serious games” that are not up to the standard of entertainment games
        • ours aims not just to be a “serious game” for education, but one that’s actually fun to play in and of itself
    • The game will be released (for free) in April 2017

Tablexia: app for children with dyslexia

Andrea Sichova, cz.nic

  • Cognitive training for children with dyslexia
    • focused on older children, 11-15 years
    • teachers told us: we can give a lot of lessons, but with older children the problem is motivation
    • so we developed a game to address that
    • available on Android/iOS
    • available in schools & counseling facilities, but we also encourage students to use it independently
    • Available in Czech, Slovak, and German
  • What’s dyslexia?
    • specific learning disability
    • problems with reading/writing caused by cognitive functions (attention, working memory)
    • dyslexic people have different learning strategies to cope
  • App development
    • focused on mobile (Android)
    • testing: tried to put it in front of students as much as possible, ask questionnaires etc
    • difficulty: needs to be at the right level so that they’re neither bored nor frustrated
    • tasks:
      • attention/working memory: need to read instructions and remember how they relate to previous ones
      • spatial reasoning: map tiles
      • phonological recognition/analysis: see a word, hear different options, choose the right one
      • visual memory: have to click on the right things at the right time
      • visual recognition: recognizing “alien” writing symbols, some of which are similar to others
      • phonological memory: remember the content/order of sounds you heard
    • also includes “badges” for completing exercises, statistics view of progress, and “encyclopedia” with info about dyslexia (which also has voice recordings of all texts)
  • Does it work?
    • studying it is difficult: we need a lot of children diagnosed with dyslexia, and let them play the game for some time
    • children are not going to school regularly - problems with attendance
  • Language
    • We focused on Czech
    • Dyslexia is closely related to the language
      • aspects that dyslexic children struggle with differ from language to language, e.g. between Czech and German

Niko MatsakisCompiler design sprint summary

This last week we had the rustc compiler team design sprint. This was our second rustc compiler team sprint; at the first one (last year) we simply worked on pushing various projects over the finish line (for example, in an epic effort, arielb1 completed dynamic drop during that sprint).

This sprint was different: we had the goal of talking over many of the big design challenges that we’d like to tackle in the upcoming year and making sure that the compiler team was roughly on board with the best way to implement them.

I or others will be trying to write up many of the details in various forums, either on this blog or perhaps on internals etc, but I thought it’d be fun to start with a quick post that describes the overall topics of discussion. For each one, I’ll give a quick summary and, where possible, point you at the minutes and notes that we took.

On-demand processing and incremental compilation

The first topic of discussion was perhaps the most massive, in terms of its impact on the codebase. The goal is to completely reorient how rustc works internally. Right now, like many compilers, rustc works by running a series of passes, one after the other. So for example we first parse, then do macro expansion and name resolution (these used to be distinct, but have now become interwoven as part of the work on macros 2.0), then type-checking, and so forth. This is a time-honored approach, but it’s beginning to show its age:

  • Some parts of the compiler front-end cannot be so neatly separated. I already mentioned how macro expansion and name resolution are now interdependent (you have to resolve the path that leads to a macro to know which macro to expand). Similar things arise in type-checking, particularly as we aim to support constant expressions in types. In that case, we have to type-check the constant expression, but it must also be part of a type, and so forth.
  • For better IDE support, it is desirable to be able to compile just what is needed to type-check a particular function (we can come back and cleanup the rest later).
  • Things like impl Trait make the type-checking of some functions partially dependent on the results of others, so the old approach of type-checking all function bodies in an arbitrary order doesn’t work.

The idea is to replace it with on-demand compilation, which basically means that we will have a graph of “things we might want to compute” (for example, “does the function foo type-check”). We can “demand” any one of these “queries”, and the compiler will go and do what it has to do to figure out the answer. That may involve satisfying other queries internally (hopefully without cycles). In the end, your entire type-check will complete, but the order in which the compiler does that work will be far less specified.

This idea for on-demand compilation naturally dovetails with the plans for the next generation of incremental compilation. The current design is similar to make: when a change is made, we eagerly propagate the effect of that change, throwing away any old results that might have been affected. Often, though, we don’t know that the old results would have been affected. It frequently happens that one makes changes which only affect some parts of a result: e.g., a change to a fn body that just renames some variables might still wind up generating precisely the same MIR in the end.

Under the newer scheme, the idea is to limit the spread of changes. If the inputs to a particular computation change, we do indeed have to re-run the computation, but we can check if its output is different from the output we have saved. If not, we don’t have to dirty things that were dependent on the computation. (The scheme we wound up with can be considered a specialized variant of Adapton, which is a very cool Rust and Ocaml library for doing generic incrementalized computation.)
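To make both ideas a little more concrete (demanding queries on the fly, and comparing a recomputed result against the saved one before dirtying dependents), here is a heavily simplified, hypothetical sketch. None of these names exist in rustc; the “signature” query and its early-cutoff check are just stand-ins for the real query machinery.

use std::collections::HashMap;

// A toy "query engine": results are computed on demand, memoized, and when
// an input changes we recompute but only invalidate dependents if the new
// output actually differs from the saved one (early cutoff).
struct QueryEngine {
    sources: HashMap<String, String>,          // input "source text" per item
    saved_signatures: HashMap<String, String>, // saved results of the query
    downstream_recomputations: usize,          // times dependents had to rerun
}

impl QueryEngine {
    fn new() -> Self {
        QueryEngine {
            sources: HashMap::new(),
            saved_signatures: HashMap::new(),
            downstream_recomputations: 0,
        }
    }

    // The underlying computation: derive a "signature" from the source.
    // (Here: just the text before the body, so renaming locals inside the
    // body does not change the result.)
    fn compute_signature(source: &str) -> String {
        source.split('{').next().unwrap_or("").trim().to_string()
    }

    // Demand the signature of `name`, redoing downstream work only if the
    // recomputed signature differs from what we had saved.
    fn demand_signature(&mut self, name: &str) -> String {
        let source = self.sources.get(name).cloned().unwrap_or_default();
        let new = Self::compute_signature(&source);
        match self.saved_signatures.get(name) {
            Some(old) if *old == new => {} // unchanged: dependents stay valid
            _ => {
                self.saved_signatures.insert(name.to_string(), new.clone());
                self.downstream_recomputations += 1; // dependents must rerun
            }
        }
        new
    }
}

fn main() {
    let mut engine = QueryEngine::new();
    engine.sources.insert("foo".into(), "fn foo(x: u32) -> u32 { x + 1 }".into());
    engine.demand_signature("foo");

    // Edit only the body: the recomputed signature is identical, so nothing
    // downstream needs to be re-run.
    engine.sources.insert("foo".into(), "fn foo(x: u32) -> u32 { let y = x; y + 1 }".into());
    engine.demand_signature("foo");
    assert_eq!(engine.downstream_recomputations, 1);
    println!("downstream recomputations: {}", engine.downstream_recomputations);
}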

Links:

Supporting alternate backends

We spent some time discussing how to integrate alternate backends (e.g., Cretonne, WASM, and – in its own way – miri). Now that we have MIR, a lot of the hard work is done: the translation from MIR to LLVM is fairly straightforward, and the translation from MIR to Cretonne or WASM might be even simpler (particularly since eddyb already made the code that computes field and struct layouts independent from LLVM).

There are still some parts of the system that we will need to factor out from librustc_trans. For example, the “collector”, which is the bit of code that determines which monomorphizations of each function we need to generate, is independent from LLVM but currently lives in librustc_trans.

The goal with Cretonne, as discussed on internals, is ultimately to use it as the debug-mode backend. It promises to offer a very fast, “decent quality” compilation experience, with LLVM sticking around as the heavyweight compiler (and to support more architectures). The plan for Cretonne integration is (most likely) to begin with a stateless REPL, similar to play.rust-lang.org or the playbot on IRC. The idea would be to take a complete Rust program (i.e., with a main() function), compile it to a buffer, and execute that. This avoids the need to generate .o files from Cretonne, since that code does not exist (Cretonne’s first consumer is going to be a JIT, after all).

After we had finished admiring the clean, well-documented code stoklund has written in Cretonne, we also dug into some of the details of how it works. There are still a number of things that are needed before we can really get this project off the ground (notably: a register allocator), but in general it is a very nice match with MIR and also with our plans around constant evaluation via miri (discussed in an upcoming part of this blog post). We discussed how best to maintain debuginfo, and in particular some of stoklund’s very cool ideas to use the same feature that JITs use to perform de-optimization to track debuginfo values (which would then guarantee perfect fidelity).

We had the idea that we might enable different backends per codegen-unit (i.e., per module, in incremental compilation), so that we can use LLVM to accommodate some of the more annoying features (e.g., inline assembly) that may not appear in Cretonne any time soon.

Links:

MIR Optimization

We spent some time – not as much as I might have liked – digging into the idea of optimizing MIR and trying to form an overall strategy. Almost any optimization we might do requires some notion of unsafe code guidelines to justify, so one of the things we talked about was how to “separate out” that part of the system so that it can be evolved and tightened as we get a more firm idea of what unsafe code can and cannot do. The general conclusion was that this could be done primarily by having some standard dataflow analyses that try to detect when values “escape” and so forth – we would probably start with a VERY conservative notion that any local which has ever been borrowed may be mutated by any pointer write or function call, for example, and then gradually tighten up.
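That “very conservative notion” can be pictured with a tiny, entirely hypothetical sketch: scan a toy function body once, record every local that was ever borrowed, and treat exactly that set as potentially clobbered by any opaque call. The Stmt type below is invented for illustration; it is not rustc’s MIR.

// A toy version of the conservative rule described above: any local that
// has ever been borrowed is assumed to be mutable through any pointer write
// or opaque function call.
enum Stmt {
    Assign,                      // writes some local directly
    Borrow { of: &'static str }, // someone took a reference to `of`
    Call,                        // an opaque function call
}

// The set of locals a call might mutate, under the "ever borrowed" rule.
fn may_be_clobbered_by_calls(body: &[Stmt]) -> Vec<&'static str> {
    let mut ever_borrowed: Vec<&'static str> = Vec::new();
    for stmt in body {
        if let Stmt::Borrow { of } = stmt {
            if !ever_borrowed.contains(of) {
                ever_borrowed.push(*of);
            }
        }
    }
    ever_borrowed
}

fn main() {
    let body = [Stmt::Assign, Stmt::Borrow { of: "b" }, Stmt::Call];
    // Only `b` was ever borrowed, so only `b` must be assumed clobbered by
    // the call; other locals keep whatever value the optimizer knows.
    println!("{:?}", may_be_clobbered_by_calls(&body));
}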

In general, we don’t expect rustc to be doing a lot of aggressive optimization, as we prefer to leave that to the backends like LLVM. However, we would like to generate better code primarily for the purposes of improving compilation time. This works because optimizing MIR is just plain simpler and faster than other IRs, since it is higher-level, and because it is pre-monomorphization. If we do a good enough job, it can also help to close the gap between the performance of debug mode and release mode builds, thus also helping with compilation time by allowing people to use debug more builds more often.

Finally, we discussed aatch’s inlining PR, and iterated around different designs. In particular, we considered an “on the fly” inlining design where we did inlining more like a JIT does it, during the lowering to LLVM (or Cretonne, etc) IR. Ultimately we deciding that the current plan (inlining in MIR) seemed best, even though it involves potentially allocating more data-structures, because it enables us to optimize (A) before monomorphization, multiplying the benefit and (B) we can remove a lot of temporaries and so forth, in particular around small functions like Deref::deref, whereas if we do the inlining as we lower, we are ultimately leaving that to LLVM to do.

Unsafe code guidelines

We spent quite a while discussing various aspects of the intersection of (theoretical) unsafe code guidelines and the compiler. I’ll be writing up some detailed posts on this topic, so I won’t go into much detail, but I’ll leave some high-level notes:

  • We discussed exhaustiveness and made up plans for how to incorporate the ! type there.
  • We discussed how to ensure that we can still optimize safe code even in the presence of unsafe code, and what kinds of guarantees we need to require.
    • Likely the kinds of assertions I was describing in my most recent post on the topic aren’t quite right, and we want the “locking” approach I began with, but modified to account for privacy.
  • We looked some at how LLVM handles dependence analysis and so forth, and what kinds of rules we would need to ensure that LLVM is not doing more aggressive optimization than our rules would permit.
    • The LLVM rules we looked at all seem to fall under the rubric of “LLVM will consider a local variable to have escaped unless it can prove that it hasn’t”. What I wonder about is the extent to which other optimizations might take advantage of the ways that the C standard technically forbids you from transmuting a pointer to a usize and then back again (or at least forbids you from using the resulting pointer). Apparently gcc will do some amount of optimization on this basis, but perhaps not LLVM, though more investigation is warranted.

Links:

Macros 2.0, hygiene, spans

jseyfried called in and filled us in on some of the latest progress around Macros 2.0. We discussed the best way to track hygiene information – in particular, whether we could do it using the same spans that we use to track line number and column information. In general I think there was consensus that this could work. =) We also discussed some of the interactions with privacy and hygiene that arise when you try to be smarter than our current macro system.

Links:

Diagnostic improvements

While talking about spans, we discussed some of the ways we could address some shortcomings in our current diagnostic output. For example, we’d like to avoid highlighting multiple lines when citing a method, and instead just underline the method name, and that sort of thing. We’d also like to print out types using identifiers local to the site of the error (i.e., Option<T> and not ::std::option::Option<T>). Hopefully we’ll be converting those rough plans into mentoring instructions, as these seem like good starter projects for someone wanting to learn more about how rustc works.

Links:

miri integration

We discussed integrating the miri interpreter. The initial plan is to have it play a very limited role: simply replacing the current constant evaluator that lowers to LLVM constants. Since miri produces basically a big binary blob (possibly with embedded pointers called “relocations”), but LLVM wants a higher-level thing, we have to use some bitcasts and so forth to encode it. This is actually an area where Cretonne’s level of abstraction, which is lower than LLVM’s, is probably a better fit. But it should all work out fine in any case.

This initial step of using miri as constant evaluator would not change in any way the set of programs that are accepted, except in so far as it makes them work better and more reliably. But it does give us the tools to start handling constants in the front-end as well as a much wider range of const fn bodies and so forth (possibly even including limited amounts of unsafe code).

Links:

Variable length arrays and allocas

We discussed the desire to support allocas (RFC 1808) coupled with the desire to support unsized types in more locations (in particular as the types of parameters). We worked through how we would implement this and what some of the complications might be, and drew up a rough plan for an extension to the language that would be expressive, efficiently implementable, and able to avoid unpredictable rampant stack growth. This will hopefully make its way into an RFC soon.

Links:

Non-lexical lifetimes

We spent quite a while iterating on the design for non-lexical lifetimes. I plan to write this up shortly in another blog post, but the summary is that we think we have a design that we are quite happy with. It addresses (I believe) all the known examples and even extends to support nested method calls where the outer call has an &mut self argument (e.g., vec.push(vec.len())), which today do not compile.
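For readers who have not hit this case, here is a minimal example of the nested-call pattern mentioned above, together with the workaround people use today. Under the borrow checker of the time, the nested form is rejected because vec is mutably borrowed for the whole outer call while vec.len() needs a shared borrow.

fn main() {
    let mut vec = vec![1, 2, 3];

    // The nested form: the outer call takes `&mut vec` while the argument
    // expression needs `&vec`. Under the 2017-era (lexical) borrow checker
    // this was an error; the non-lexical-lifetimes work described above
    // aims to accept it.
    vec.push(vec.len());

    // The workaround people used at the time: hoist the inner call out.
    let n = vec.len();
    vec.push(n);

    println!("{:?}", vec);
}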

Links:

Conclusion

Those were the main topics of discussion – pretty exciting stuff! I can’t wait to see these changes play out over the next year. Thanks to all the attendees, and particularly those who dialed in remotely at indecent hours of the day and night (notably jseyfried and nrc) to accommodate the Parisian time zone.

Comments? Check out the internals thread.

Marcia KnousNightly Workshop May 6-7 in Paris

I am happy to report that we are hosting an upcoming Nightly event in May, in the Mozilla Paris space.
Please read the instructions on the wiki to apply for one of the 5 spots. Please note that you must be local to the area as we cannot sponsor travel.
This should be an exciting event, and community members that participate will get to meet over 50 localizers from the EU, African and Arabic communities! Looking forward to seeing your applications.

Mike HoyePlanet Migration Shakeout

This note is intended for Planet and its audience, to let you know that while we’re mostly up and running, we’ve found a few feeds that aren’t getting pulled in consistently or at all. I’m not sure where the problem is right now – for example, Planet reports some feeds as returning 403 errors, but server logs from the machines those feeds live on don’t show those 403s as having ever been served up. A number of other feeds show Planet reporting “internal server errors”, but again, no such errors are visible elsewhere.

Which is a bit disconcerting, and I have my suspicions, but I won’t be able to properly dig into this stuff for a few days. Apologies for the degraded state of the service, and I’ll report back with more information as I find it. Tracking bug is #1338588.

Update: Looks like it’s a difference of opinion between an old version of Python and a new version of TLS. I expect this to be resolved Monday.

Second update: I do not expect this to be resolved today. The specific disagreement between Python and TLS describes itself as the less-than-helpful SSL23_GET_SERVER_HELLO:tlsv1 alert internal error, whose root cause can be found here; HTTPlib2 does not support SNI (Server Name Indication), which is needed to connect to a number of virtually-hosted blogs here in modernity, and it will take some more extensive surgery than expected to get Planet back on its feet.

David BurnsHonest and open conversations

Can you have an open and honest conversation with your peers and, this is the most important one, can you have an open and honest conversation with your manager?

Have a good think about this, don't answer straight away. Let's go through the following scenarios to find out if you can have open and honest conversations.

Can you...

Tell your manager when you are struggling with a task and not feel like you are going to be chastised?

For me, as a manager and a technical lead, it is super important to help grow people. We all have times when we don't know something and no amount of searching the internet can fix it. Being able to go to your "lead" and say, "I don't know what to do…" is a good thing for everyone!

Tell your manager when you are being harassed?

This should be a given, but if you were to ask many of your female colleagues, you would hear a resounding "NO!". This has to do with company culture, or with "not upsetting the 10x'er". Even though harassment can cost a company a lot of money, many people simply don't trust their manager enough to tell them about problems like this.

Tell your manager that they are wrong?

Feedback is hard to give and to accept, especially in some cultures where it is seen as a weird thing. European culture is like that: you give a slight nod and that is it; anything more makes people uncomfortable.

Now imagine getting critical feedback, it can be hard.

Now... imagine telling your manager that you think they are wrong and giving them feedback. This could be at a technical level, or it could be about how they are as a manager. Expressing that feedback can be hard. Now... how does your manager take it? Do they get all defensive? Do you get defensive?

If you answered no to any of the above, you really need to take the initiative: speak to your manager and tell them that you don't feel there is a good opportunity for dialogue and that you want to fix this. If they don't want to meet you halfway to solve this, then you don't need to feel bad about wanting a new manager. That could be with a new company or within your current one.

Honest and open conversations between your peers and your managers will create an amazing work environment and will allow everyone to succeed. It all starts from trust.

Karl Dubost[worklog] Edition 054 - Be like the bamboo. Flexibility.

webcompat life

webcompat issues

webcompat.com dev

Miscellaneous

Otsukare!

Air MozillaRust Meetup February 2017

Rust Meetup February 2017 Rust Meetup for February 2017

The Mozilla BlogU.S. Court of Appeals Upholds Suspension of Immigration Executive Order

We are pleased with today’s decision by the 9th Circuit Court of Appeals to uphold the District Court of Washington’s suspension of the U.S. Executive Order on immigration.

We believe today’s decision is a step in the right direction, but we expect legal proceedings will continue. There is more work to do on this issue, and what we said when we filed this legal brief remains true: The ability for individuals, and the ideas and expertise they carry with them, to travel across borders is central to the creation of the technologies and standards that power the open internet. We will continue to fight for more trust and transparency across organizations and borders to help protect the health of the internet and to nurture the innovation needed to advance the internet.

Photo:  Tim Evanson/Flickr

Bas SchoutenWhen silence just isn't an option

Disclaimer: The opinions I will be expressing will be solely my own, they will in no way be related to Mozilla or the work I do there.

Today, I will be doing something I didn't think I would ever be doing. I will start to post publicly on topics that may be considered somewhat political. My personal opinions and thoughts, if you will. This is something I have always said I would never do: once something is written, it is unlikely to ever go away, and it may always be associated with you and the work we do. Having said that, I do believe the climate we currently live in has reached a point where I don't think I have any other choice.

Why wouldn't I speak up?

As engineers, scientists, and other types more concerned with what we're building and discovering about the world around us, it seems to only make sense that we wouldn't publicly take a stance that could be considered political. Above all, our task is to follow the data wherever it leads and to use that data in various ways to benefit humankind. In this process our personal feelings about topics only pose a threat to our scientific integrity: every one of us is susceptible to biases, and all we can do is try as much as possible to eliminate them from our work. Throwing out a bit of data because it doesn't seem to support the hypothesis you're trying to prove, or building something that inherently puts one group at a disadvantage compared to another for personal gain: those are fundamental crimes against our professional integrity. And not only that, they will always backfire eventually, but more on this later.

And this is where it all started. If I speak openly about my political opinions, won't the data I present, the things I built, inevitably be viewed as colored by them? I have genuinely seen someone comment on a crash once along the lines of 'Firefox always crashes when I go to conservative websites, it never does when I go to your liberal websites. If you don't stop promoting your liberal agenda I will switch browsers.'. Whether that is true or not (it is not), it was exactly that phenomenon, an organization perceived as liberal produced a product, and people believed that product was inherently designed to put them and their ideas at a disadvantage. Considering the amount of effort I put into letting the data I collect and the things that I produce not be colored by my personal positions, this is something I want to avoid at all costs.

After all, as long as scientists collect a wealth of data, ensure that experiments are reproducible by any group of people, and the knowledge we gather is then used to build things that obviously and visibly work, we don't need to make a public political stance, right? People will look at the data we collect and the things we build, realize that they're good, and be able to make well-informed opinions for themselves, that fit within their ideological views. Since I believe only a tiny percentage of the population is inherently evil, things will then work themselves out just fine, so we're good!

So what changed?

With the tools we're building, giving people the ability to communicate with people from all over the world, people with different ideas and cultural backgrounds, I always believed an atmosphere of compassion, understanding and a desire to help others, whoever they are, would automatically arise. I thought that with the knowledge we were collecting and spreading - about our place in the universe and how small and vulnerable we are on a cosmic scale - we could automatically foster an appreciation of life on this planet, we would cherish it going forward.

I thought we were at a point where history wouldn't repeat itself, where things would only get better from here.

I have now come to a point where I can no longer deny that it seems I was wrong. Both new and old problems are festering among our species, and we have to find new ways of dealing with them, because the old ones aren't working. As humans we appear to be inherently complacent, particularly when our own livelihood isn't directly threatened, but the time for that is past: we have to change course now, because the risk of 'sitting this one out' is simply too great. As engineers and scientists we have given humanity the means to do a tremendous amount of damage, and now we have a role to play in making sure it doesn't.

What are you going to do about it?

It is likely that not many people will ever see what I write here, and most likely even fewer of them will read anything they didn't already know. However, it has occurred to me that if I can say just one sensible thing, give just a couple of people a nudge in the right direction, that may have a trickle-down effect that makes something of a difference in the world, and that I have to try. Perhaps more importantly for me directly, it will help me structure my thoughts, give me a place to point to rather than explaining standpoints over and over again, and possibly even get some useful feedback to improve my own understanding of the world I live in.

And so, I have decided that over the next couple of weeks I will write a couple of posts in which I outline my thoughts on a number of topics that seem to apply to the troubles of the world today. Feel free to disagree with me, but please do so respectfully, and if you want to have a debate, support your argument with (conventional) facts. On most of the topics I will be writing about I will not be an expert, sometimes I will presumably be wrong and say things which aren't true, although I will do my best to include reliable references whenever I can. I'm okay with all of that, because even in the cases where I am, somewhere I might spark a debate, create some more understanding and ever so slightly nudge someone towards my utopian fantasy of a world where we all live together in peace on a planet (or multiple planets) that we care for.

Support.Mozilla.OrgWhat’s Up with SUMO – 9th February 2017

Hello, SUMO Nation!

Today’s post has a slightly different format for two reasons:

  1. We are rethinking the way these (regular) blog posts work and the way they should be shaped – but that’s going to take a while because…
  2. We have migrated to a Completely New Site™ and we need to update you on a few things regarding its current (and future) state. (hint: we’re all really busy)

There you have it. So, while we may be returning to your regularly scheduled programming at a slightly later time, now it’s time to talk about…

The Completely New Site™

  1. The migration process was not easy from a technical point of view and things did go wrong in some expected (and some unexpected) ways. Moving 8 years of data from one custom platform to another is like that.
  2. The delay in switching to the new site was caused by last minute issues we managed to fix (but we needed time for that).
  3. We are live at https://support.mozilla.org/ but there are still a lot of things to work on, most of which we are trying to tackle now using Admin powers.
  4. We have a long list of outstanding issues to fight with in the first two weeks after the launch. You can add more to it, don’t worry. Please keep filing bugs. Thanks to all of you who already did so. Before you file a bug, please remember to check this list.
  5. If you are confused about the way the site works (its options, basic features, etc.), you can start fighting that confusion using the site FAQ (“How do things work?”).
  6. Our priorities for the next two weeks are:
    • Making sure site navigation and content are in correct places and work well for all launch locales.
    • Making sure that all users have the right permissions and access to the right resources based on that for all launch locales. As a refresher, take a look at the Roles & Responsibilities doc (as shared with you at the beginning of the migration process in 2016)
    • Working on fixing the bugs from the list linked above.
    • Improving the UX design of the site.
    • Improving the notifications.
    • Improving the onboarding and “ask a question/find an answer” flows.
    • Sharing documentation that explains how we can all get “back to SUMO business as usual” using the new platform (answering questions, working on the KB).
  7. The following are not a priority at the moment but will be worked on later:

So, if you are on the new site (yay!), we ask you for a little extra patience while we make it our new home. In the meantime, if you have questions about:

Now, let me tell you a bit more about…

The Next Month (or so) for the KB / L10n of SUMO…

  1. The KB content of the launch locales is mostly ready for use and consumption by the users, thanks to your help.
  2. All Editors, Reviewers, and Locale Leads should have the right permissions to work within their locale’s KB, but for now we are not localizing anything – please hold off with edits for now.
  3. Joni is coming back on Monday (13th February) and will make sure the English KB is in shape.
  4. Once the English KB is cleaned up and reorganized, we will work on copying the same structure for all launch locales.
  5. The documentation explaining how the localization process works on the new site is coming once we knock all the l10n bugs out of the way. For now, you can get a taste of it reading these two documents (one) (two).
  6. Our goal is to ensure that:
    • All KB Editors, Reviewers and Locale Leads have the right permissions for their locale’s KBs
    • The visible KB nodes are all in the right place and reflect the English version as closely as possible (for now; this may change in the future, depending on your needs/ideas)
    • The new KB nodes are in place and localized accordingly
    • All KB templates are organized under a separate KB for each launch locale
    • All KB content that should be archived is moved to a separate Archive KB for each launch locale
    • Key UI elements are reviewed and retranslated for each launch locale
    • Locales that were not included in the launch are prepared for addition to the main site

…and how you can help with that

  1. Subscribe to the changes in your locale’s KB (and the English KB as well). You can do it following the instructions from this site.
  2. Keep filing bugs about things that don’t work for your locale (or globally). You can use Bugzilla (as usual) or this spreadsheet.
  3. Wait for further information – I am working on making the site better for localizers (and international users), but everything takes time. I really appreciate your patience and support.

We hope that the above information helps you understand where we are now with the site switch and what our next goals and steps are. If you have questions, you know where to find us. We are looking forward to seeing you around the new SUMO site. Thank you for being there for the users!

Mozilla Addons Blogweb-ext 1.8 released

A new version of web-ext has been released! Web-ext is the recommended tool for developing WebExtensions on Firefox, because it has the ability to automatically reload your WebExtension as you make changes.

Since our last blog post, versions 1.7 and 1.8 have been released. The full change log is on GitHub.

The run command now shows a desktop notification if auto-reloading results in an error:

(screenshot: desktop notification shown for an auto-reload error)

Other options added to the run command include:

  • Addition of a --start-url option. This will start Firefox at a particular URL and assists in testing.
  • Addition of a --browser-console option. This will open the Browser Console by default, which is where any errors or logging will be shown.
  • Addition of a --pref option. This will load the specified preferences into Firefox. For example: --pref privacy.userContext.enabled=true
  • When a reload occurs, it will show you the last reload time more concisely.

An --ignore-files option was added, and by default the web-ext-artifacts directory is added to that ignore list when building your extension.

A new linting option, --warnings-as-errors, allows you to make the linter more strict, so that warnings are raised as errors. Also, when you run web-ext and you have an error in your JSON, you’ll get an error message showing the line number. As an example:

(screenshot: web-ext reporting a JSON error with the offending line number)

Running any command will also check whether a new version of web-ext exists, helping to ensure that you are using the latest version.

Finally, a regression on Windows was fixed; more importantly, the test suite was enabled on Windows to reduce Windows regressions in the future.

Special thanks to all the people who contributed to this release: Aniket Kudale, Jostein Kjønigsen and eight04. A special thanks to Elvina Valieva and Shubsheka Jalan who have been contributing via the Outreachy program.

Dustin J. MitchellTaskCluster-Github Improvements

Repositories on Github can use TaskCluster to automate build, test, and release processes. The service that enables this is called, appropriately enough, taskcluster-github.

This week, Irene Storozhko, Brian Stack, and I gathered in Toronto to land some big improvements to this service.

First, the service now supports “release” events, which means it can trigger tasks when a new release is added to GitHub, such as building and uploading binaries or making release announcements.

Second, we have re-deployed the service as an integration Irene has developed. This makes the set-up process much easier – just go to the integration page and click “Install”. No messing with web hooks, adding users to teams, etc.

The integration gives users a great deal more control over our access to repositories: it can be installed organization-wide, or only for specific repositories. The permissions required are much more restricted than the old arrangement, too. On the backend, the integration also gives us much better access to debugging information that was previously only available to organization administrators.

Finally, Irene has developed a quickstart page to guide new users through setting up a repository to use TaskCluster-Github. With this tool, we hope to see many more Mozilla projects building automation in TaskCluster, even if that’s as simple as running tests.

Dennis SchubertIntroducing 'The Joy of Diagnosis'

We Mozillians really love to share the stuff we are working on with everyone interested. A great example of a Mozillian sharing his work is Mike Conley, who started a weekly live hacking series called “The Joy of Coding” back in 2015 (he just had his two-year anniversary!). It is really great to watch him work on a wide range of Firefox tasks, whether that is UI stuff, electrolysis work, or even porting and improving internal tools to Rust.

It is even better to sometimes see him struggle with what feels like a really simple task. That just makes everyone feel more human. Also, it is a great way of introducing contributors from all over the place to our work and providing them with insight into the steps needed to tackle their first project. This is definitely a thing we should do more often, everywhere inside Mozilla.

Let us jump to another topic for a second. Have you heard of Mozilla’s Web Compatibility team, of which I am a part? Are you aware of the work we do, our motivations, and our goals? Basically, we… fix the internet, so it is actually pretty great if you have never heard of us. That means we are doing a great job.

So, what better way is there to get the word out and to make more people aware of our mission than to share the work we do, right?

There we go, you have just discovered “The Joy of Diagnosis”, a cheesy copy of Mike Conley’s idea for our team. We already made an introductory episode, which we called Episode 0. If you want, you can watch it on either Air Mozilla or YouTube.

Please note that this is the very first episode, I did not spend much time planning ahead, and this is not the most polished video you will ever see. If you have any feedback, please reach out to me! I would love to get in touch with you.

Schedule

Basically, we do not have a schedule, since we usually do not have many isolated tasks to work on for recording. Some issues easily take up more time than we would ever have in a video, so we will probably just publish episodes whenever we have something cool. Please follow us on Twitter at @MozWebCompat to find out when we have published something new!

Although I tagged this stuff with “livehacking”, the first episode was not streamed live. :) However, I like the idea of enabling people to provide live feedback and ask questions in a chat, so if this project turns out to be interesting for at least some people, we will surely consider something.

Shownotes

I spent most of the time introducing our idea and the team as well as the individual parts of work we do and where people could jump in. However, we had a quick look at web-bug #4176.

If you want, check the Compatibility pages on the Mozilla Wiki to learn more about the team, our projects, and our work. Also, be sure to check out Mike Conley’s The Joy of Coding.

Air MozillaReps Weekly Meeting Feb. 09, 2017

Reps Weekly Meeting Feb. 09, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Chris H-CData Science is Hard: Client Delays for Crash Pings

Second verse, much like the first: how quickly do we get data from clients?

This time: crash pings.

Recording Delay

The recording delay of crash pings is different from that of main pings in that the only time information we have about when the crash happened is crashDate, which only tells you the day the crash happened, not the time. This results in a weird stair-step pattern on the plot, as I make a big assumption:

Assumption: If the crash ping was created on the same day that the crash happened, it took essentially 0 time to do so. (If I didn’t make this assumption, the plot would have every line at 0 for the first 24 hours and we’d not have as much information displayed before the 96-hour max)

(plot: CDF of crash ping recording delay by channel, out to the 96-hour maximum)

The recording delay for crash pings is the time between the crash happening and the user restarting their browser. As expected, most users appear to restart their browser immediately. Even the slowest channel (release) has over 80% of its crash pings recorded within two days.

Submission Delay

The submission delay for crash pings, as with all pings, is the time between the creation of the ping and the sending of the ping. What makes the crash ping special is that it isn’t even created until the browser has restarted, so I expected these to be quite short:

(plot: CDF of crash ping submission delay by channel)

They do not disappoint. Every branch but Nightly has 9 out of every 10 crash pings sent within minutes of being created.

Nightly is a weird one. It starts off having the worst proportion of created pings unsent, but then becomes the best.

Really, all four of these lines should be within an error margin of just being a flat line at the top of the graph, since the code that creates the ping is pretty much the same code that sends it. How in the world are this many crash pings remaining unsent at first, but being sent eventually?

Terribly mysterious.

Combined Delay

(plot: CDF of combined client delay for crash pings by channel)

The combined client delay for crash pings shows that we ought to have over 80% of all crash pings from all channels within a day or two of the crash happening. The coarseness of the crashDate measure makes it hard to say exactly how many and when, but the curve is clearly a much faster one than for the main ping delays previously examined.

Crash Rates

For crash rates that use crashes counted from crash pings and some normalization factor (like usage hours) counted from main pings, it doesn’t actually matter how fast or slow these pings come in. If only 50% of crashes and 50% of usage hours came in within a day, the crash rate would still be correct.
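A minimal sketch of that arithmetic, with made-up numbers of my own (not figures from this analysis), showing why a completeness factor shared by numerator and denominator cancels out:

fn crash_rate(crashes: f64, usage_khours: f64) -> f64 {
    crashes / usage_khours
}

fn main() {
    // Hypothetical final tallies for some period.
    let (crashes, usage_khours) = (200.0, 50.0);
    let final_rate = crash_rate(crashes, usage_khours);
    // If exactly half of each has arrived so far, the early estimate matches.
    let early_rate = crash_rate(0.5 * crashes, 0.5 * usage_khours);
    assert_eq!(final_rate, early_rate); // the shared factor cancels
}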

What does matter is when the pings arrive at different speeds:

(plot: combined client delay CDFs for crash pings and main pings, overlaid by channel)

(Please forgive my awful image editing work)

Anywhere that the two same-coloured lines fail to overlap is a time when the server-recorded count of crashes from crash pings will not be from the same proportion of the population as the server-recorded count of usage hours from main pings.

For example: On release (dark blue), if we look at the crash rate at 22 or 30-36 hours out from a given period, the crash rate is likely to approximate what a final tally will give us. But if we check early (before 22h, or between 22 and 30h), when the main pings are lagging, the crash rate will seem higher than reality. If we check later (after 36h), the crash rate will seem lower.

This is where the tyranny of having a day-resolution crashDate really comes into its own. If we could model exactly when a channel’s crash and main submission proportions are equal, we could use that to generate accurate approximations of the final crash rate. Right now, the rather-exact figures I’m giving in the previous paragraph may have no bearing on reality.

Conclusion

If we are to use crash pings and main pings together to measure “something”, we need to fully understand and measure the differences in their client-side delays. If the curves above are stable, we might be able to model their differences with some degree of accuracy. This would require a higher-resolution crash timestamp.

If we wish to use this measured “something” earlier than 24h from the event (like, say, to measure how crashy a new release is), we need to either choose a method that doesn’t rely on main pings, or speed up main ping reporting so that it has a curve closer to that of crash pings.

To do my part I will see if having a better crash timestamp (hours would do, minutes would be the most I would need) is something we might be willing to pursue, and I will lobby for the rapid completion and adoption of pingSender as a method for turning main pings’ submission delay CDF into a carbon copy of crash pings’.

Please peruse the full analysis on reports.telemetry.mozilla.org if you are interested in the details of how these graphs were generated.

:chutten


The Mozilla BlogLaunching an Independent OpenNews Program

At Mozilla, one of our essential roles is convener: working to identify, connect and support like-minded people who are building a healthier Internet.

An early — and strong — example of that work is the OpenNews program. Six years ago, Mozilla and Knight Foundation created an initiative to combine open-source practices with journalism. Our aim: strengthen journalism on the open web, and empower newsroom developers, designers and data reporters across the globe.

The program flourished. Since 2011, OpenNews has placed 33 fellows in 19 newsrooms, from BBC and NPR to La Nacion and the New York Times. It built a global community of more than 1,100 developers and reporters. It spawned the annual SRCCON conference, bolstered newsroom diversity and gave rise to innovative newsgathering tools like Tabula. OpenNews has also played a key role in building the annual MozFest in London and Mozilla’s nascent leadership network initiative.

Mozilla is immensely proud of OpenNews — and immensely grateful to the team behind its success. And today, we’re announcing  that OpenNews is spinning out as  an independent organization. Going forward, OpenNews — with the support of nonprofit fiscal partner Community Partners — will build on the success it achieved when incubated at Mozilla. OpenNews will continue to play an active role in MozFest and Mozilla’s leadership network.

Mozilla isn’t departing the realm of journalism and media — they will remain central topics as we develop Mozilla’s Internet Health strategy over the coming years. MozFest will increasingly focus on issues like fake news, online harassment and advertising economics. This will be bolstered by Mozilla’s involvement in events like MisInfoCon in Boston later this month, where Mozilla is a sponsor and participant. On the technology front, we’ll continue to host the Coral project, which builds platforms that increase trust and engagement. We see news and media as key to our nascent Mozilla Leadership Network — and to our growing Internet health agenda.

As we chart a course forward in this work, we will be reaching out to the community to talk more specifically about where Mozilla should focus its efforts in the news and media space. If you want us to reach out to you as part of this conversation, please contact Mozilla’s Chris Lawrence at clawrence@mozillafoundation.org.

See also:

Knight Foundation: OpenNews network of journalists and technologists to launch with $1.1 million from Knight Foundation

OpenNews: OpenNews Ascent Stage Initiated

Mozilla Marketing Engineering & Ops BlogIntroducing the MozMEAO infrastructure repo

This is a quick introduction post from the MozMEAO Site Reliability Engineers. As SRE’s at Mozilla, Josh and I are responsible for infrastructure, automation, and operations for several sites, including mozilla.org and MDN.

We try to keep much of our work as public as possible, so we’ve created https://github.com/mozmar/infra to share some of our automation and tooling. We have additional internal repos to manage some of our private infrastructure as well.

Feel free to try out our scripts and tools, and let us know via Github issue or pull request if we’ve missed anything.

The Rust Programming Language BlogAnnouncing Rust 1.15.1

The Rust team is happy to announce the latest version of Rust, 1.15.1. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed, getting Rust 1.15.1 is as easy as:

$ rustup update stable

If you don’t have it already, you can download Rust from the appropriate page on our website, and check out the detailed release notes for 1.15.1 on GitHub.

What’s in 1.15.1 stable

This release fixes two issues, a soundness bug in the new vec::IntoIter::as_mut_slice method, and a regression wherein certain C components of the Rust distribution were not compiled with -fPIC. The latter results in the text section of executables being writable in some configurations, including common Linux configurations, subverting an important attack mitigation, and creating longer startup times by causing the linker to do more work. For mostly-Rust codebases, the practical impact of losing read-only text sections is relatively small (since Rust’s type system is its first line of defense), but for Rust linked into other codebases the impact could be unexpectedly quite significant. PIC issues are well understood and not Rust-specific, so the rest of this post focuses on the soundness bug.

The problem with as_mut_slice, a three line function, was discovered just minutes after publishing Rust 1.15.0, and is a reminder of the perils of writing unsafe code.

as_mut_slice is a method on the IntoIter iterator for the Vec type that offers a mutable view into the buffer being iterated over. Conceptually it is simple: just return a reference to the buffer; and indeed the implementation is simple, but it’s unsafe because IntoIter is implemented with an unsafe pointer to the buffer:

pub fn as_mut_slice(&self) -> &mut [T] {
    unsafe {
        slice::from_raw_parts_mut(self.ptr as *mut T, self.len())
    }
}

It’s just about the simplest unsafe method one could ask for. Can you spot the error? Our reviewers didn’t! This API slipped through the cracks because it is such a standard and small one. It’s a copy-paste bug that the reviewers glossed over. This method takes a shared reference and unsafely derives from it a mutable reference. That is totally bogus because it means as_mut_slice can be used to produce multiple mutable references to the same buffer, which is the one single thing you must not do in Rust.

Here’s a simple example of what this bug would let you write, incorrectly:

fn main() {
    let v = vec![0];
    let v_iter = v.into_iter();
    let slice1: &mut [_] = v_iter.as_mut_slice();
    let slice2: &mut [_] = v_iter.as_mut_slice();
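    // At this point slice1 and slice2 are two live mutable views of the same buffer.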
    slice1[0] = 1;
    slice2[0] = 2;
}

Here both slice1 and slice2 are referencing the same mutable slice. Also notice that the iterator they are created from, v_iter, is not declared mutable, which is a good indication something fishy is going on.

The solution here is trivial, just change &self to &mut self:

pub fn as_mut_slice(&mut self) -> &mut [T] {
    unsafe {
        slice::from_raw_parts_mut(self.ptr as *mut T, self.len())
    }
}

With that, proper uniqueness invariants are maintained, only one mutable slice can be created at a time, and v_iter must be declared mutable in order to pull out a mutable slice.

So we made that change, and we’re releasing a fix. In Rust we take pride in not breaking APIs, but since this is a new, minor feature, and the present implementation is spectacularly unsound, we decided to go ahead and release the fix immediately, hopefully before too many codebases pick it up — that is, we don’t consider this a breaking change that requires a careful transition, but a necessary bug fix. For more about Rust’s approach to ensuring stability see the “Stability as a Deliverable” blog post, RFC 1122, on language evolution, and RFC 1105, on library evolution.

We’re still evaluating what to learn from this, but this is a good reminder of the care that must be taken when writing unsafe code. We have some ideas for how to catch this kind of problem earlier in the development process, but haven’t made any decisions yet.

We apologize for the inconvenience. Let’s go hack.

Contributors to 1.15.1

We had 2 individuals contribute to Rust 1.15.1. Thanks!

Air MozillaThe Joy of Coding - Episode 90

The Joy of Coding - Episode 90 mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaWeekly SUMO Community Meeting Feb. 08, 2017

Weekly SUMO Community Meeting Feb. 08, 2017 This is the sumo weekly call

QMOFirefox 52 Beta 7 Testday, February 17th

Hello Mozillians,

We are happy to announce that on Friday, February 17th, we are organizing the Firefox 52 Beta 7 Testday. We will be focusing our testing on Graphics. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Marcia KnousGoodbye to a Mozillian

Back in 2008 I had the pleasure of participating in the first Mozilla MozCamp, which was held in the beautiful city of Barcelona. I remember that event fondly for several reasons, one of the most significant being that I met Giuliano Masseroni and the rest of the Italian community. I was immediately struck by his personality and passion, which shone like a bright star. Giuliano has not been active for several years, but he was instrumental in the early days of the Italian community. And as flod pointed out in his blog post, he did a lot of work helping Italian users on the Italian support forum. Giuliano was a great person, and he will be sorely missed.

William LachanceCancel all the things

I just added a feature to Treeherder which lets you cancel a set of jobs (say, from a botched try push) much more easily. I’m hopeful that this will be helpful in keeping our resource usage on try more under control.

It uses the “pinboard” feature of Treeherder which very few people are familiar with, so I made a very short video tutorial on how to make use of this feature and put it on the Joy of Automation channel:

Happy cancelling!

Air MozillaWebdev Extravaganza: February 2017

Webdev Extravaganza: February 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on. This...

Joel MaherProject Stockwell – February 2017

I realized my post for last month was titled “Project Stockwell – January 2016” – that is a fun typo to make 🙂

Last month we focused on triaging all bugs that met our criteria of >=30 failures/week.  Every day there are many new bugs to triage and we started with a large list.  In the end we have commented on all the bugs and have a small list every day to revisit or investigate.

One thing we focus on is requesting assistance at most once per week; to that end, we have a “Neglected Oranges” dashboard that we use daily.

What is changing this month: we will be recommending resolution on priority bugs (>=30 failures/week) within 2 weeks. Resolution means active debugging; landing changes to the test to reduce, debug, or fix the intermittent; or, where there is a lack of time or no easy fix, disabling the test. If this goes well, we will reduce that window to 7 days in March.

So how are we doing?

Week starting:            Jan 02, 2017    Jan 30, 2017
Orange Factor:            13.76           10.75
# priority intermittent:  42              61

We have fewer overall failures, but more bugs spread out. Some interesting bugs:

In terms of projects underway, here is some status:

  • adding BUG_COMPONENTS to all files in m-c (bug 1328351) – slow and steady progress, thanks for the reviews to date! We expect the large majority of this to be completed this month.
  • retrigger an existing job with additional debugging arguments (bug 1322433) – main discussion is done, figuring out small details, should see a prototype this month
  • add |mach test-info| support (bug 1324470) – landed today!
  • add a test-lint job to linux64/mochitest (bug 1323044) – no progress yet, I expect some this month.

Are there items we should be working on or looking into?  Please join our meetings.


David LawrenceHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1335233] Fix some memory leaks in jobqueue code
  • [1336958] Call delete on HTML::Tree objects to avoid leaking memory
  • [1336387] CSP breaks GitHubAuth on index and bug modal pages
  • [1335843] Secure HTML bugmail body begins with HUGE “Subject: [Bug NNN] Bug-summary”, which usually causes overflow in my email client
  • [1237790] The experimental user interface doesn’t provide link to help of bug fields
  • [1336659] Remove og:image meta tag from bug detail page
  • [1321592] Update Bugzilla Etiquette and add Abuse Policy

discuss these changes on mozilla.tools.bmo.


Chris H-CData Science is Hard: Units

I like units. Units are fun. When playing with Firefox Telemetry you can get easy units like “number of bookmarks per user” and long units like “main and content but not content-shutdown crashes per thousand usage hours“.

Some units are just transformations of other units. For instance, if you invert the crash rate units (crashes per usage hours) you get something like Mean Time To Failure where you can see how many usage hours there are between crashes. In the real world of Canada I find myself making distance transformations between miles and kilometres and temperature transformations between Fahrenheit and Celsius.

My younger brother is a medical resident in Canada and is slowly working his way through the details of what it would mean to practice medicine in Canada or the US. One thing that came up in conversation was the unit differences.

I thought he meant things like millilitres being replaced with fluid ounces or some other vaguely insensible nonsense (I am in favour of the metric system, generally). But no. It’s much worse.

It turns out that various lab results have to be communicated in terms of proportion. How much cholesterol per unit of blood? How much calcium? How much sugar, insulin, salt?

I was surprised when my brother told me that in the United States this is communicated in grams. If you took all of the {cholesterol, calcium, sugar, insulin, salt} out of the blood and weighed it on a (metric!) scale, how much is there?

In Canada, this is communicated in moles. Not the furry animal, but the actual count of molecules. If you took all of the substance out of the blood and counted the molecules, how many are there?

So when you are trained in one system to recognize “good” (typical) values and “bad” (atypical) values, when you shift to the other system you need to learn new values.

No problem, right? Like how you need to multiply by 1.6 to get kilometres out of miles?

No. Since grams vs moles is a difference between “much” and “many” you need to apply a different conversion depending on the molecular weight of the substance you are measuring.

So, yes, there is a multiple you can use for cholesterol. And another for calcium. And another for sugar, yet another for insulin, and still another for salt. It isn’t just one conversion, it’s one conversion per substance.
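To put rough numbers on it, here is a small sketch (my own illustration with textbook molar masses, nothing from my brother’s training): the same formula needs a different molar mass for every substance, so there is one conversion per substance rather than one conversion overall.

// Convert a mass concentration (mg/dL, roughly the US style) into a molar
// one (mmol/L, roughly the Canadian style) for a given molar mass.
fn mass_to_molar(mg_per_dl: f64, molar_mass_g_per_mol: f64) -> f64 {
    // mg/dL × 10 gives mg/L; dividing mg/L by g/mol yields mmol/L.
    mg_per_dl * 10.0 / molar_mass_g_per_mol
}

fn main() {
    // Glucose (~180.16 g/mol): 90 mg/dL is roughly 5.0 mmol/L.
    println!("glucose: {:.1} mmol/L", mass_to_molar(90.0, 180.16));
    // Calcium (~40.08 g/mol): 9.5 mg/dL is roughly 2.4 mmol/L.
    println!("calcium: {:.1} mmol/L", mass_to_molar(9.5, 40.08));
}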

Suddenly “crashes per thousand usage hours” seems reasonable and sane.

:chutten