Mozilla Reps Community: Reps Weekly Call – November 27th 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

  • End of the year – Metrics and Receipts.
  • Reminder: Vouch and vouched on Mozillians.
  • Community PR survey.
  • AdaCamp.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Gervase Markham: Bugzilla for Humans, II

In 2010, johnath did a very popular video introducing people to Bugzilla, called “Bugzilla for Humans“. While age has been kind to johnath, it has been less kind to his video, which now contains several screenshots and bits of dialogue which are out of date. And, being a video featuring a single presenter, it is somewhat difficult to “patch” it.

Enter Popcorn Maker, the Mozilla Foundation’s multimedia presentation creation tool. I have written a script for a replacement presentation, voiced it up, and used Popcorn Maker to put it together. It’s branded as being in the “Understanding Mozilla” series, as a sequel to “Understanding Mozilla: Communications” which I made last year.

So, I present “Understanding Mozilla: Bugzilla“, an 8.5 minute introduction to Bugzilla as we use it here in the Mozilla project:

Because it’s a Popcorn presentation, it can be remixed. So if the instructions ever change, or Bugzilla looks different, new screenshots can be patched in or erroneous sections removed. It’s not trivial to seamlessly patch my voiceover unless you get me to do it, but it’s still much more possible than patching a video. (In fact, the current version contains a voice patch.) It can also be localized – the script is available, and someone could translate it into another language, voice it up, and then remix the presentation and adjust the transitions accordingly.

Props go to the Popcorn team for making such a great tool, and the Developer Tools team for Responsive Design View and the Screenshot button, which makes it trivial to reel off a series of screenshots of a website in a particular custom size/shape format without any need for editing.

Doug Belshaw: On the denotative nature of programming

This is just a quick post almost as a placeholder for further thinking. I was listening to the latest episode of Spark! on CBC Radio about Cracking the code of beauty to find the beauty of code. Vikram Chandra is a fiction author as well as a programmer and was talking about the difference between the two mediums.

It’s definitely worth a listen [MP3]

The thing that struck me was the (perhaps obvious) insight that when writing code you have to be as denotative as possible. That is to say, ambiguity is a bad thing, leading to imprecision, bugs, and hard-to-read code. That’s not the case with fiction, which relies on connotation.

This reminded me of a paper I wrote a couple of years ago with my thesis supervisor about a ‘continuum of ambiguity’. In it, we talk about the overlap between the denotative and connotative aspects of a word, term, or phrase being the space in which ambiguity occurs. For everything other than code, it would appear, this is the interesting and creative space.

I’ve recently updated the paper to merge comments from the ‘peer review’ I did with people in my network. I also tidied it up a bit and made it look a bit nicer.

Read it here: Digital literacy, digital natives, and the continuum of ambiguity


Comments? Questions? Email me: doug@mozillafoundation.org

Soledad Penades: Publishing a Firefox add-on without using addons.mozilla.org

A couple of days ago Tom Dale published a post detailing the issues the Ember team are having with getting the Ember Inspector add-on reviewed and approved.

It left me wondering whether there was any other way to publish add-ons on a different site. Knowing Mozilla, it would be very weird if add-ons were “hardcoded” and tied only and exclusively to a mozilla.org property.

So I asked. And I got answers. The answer is: yes, you can publish your add-on anywhere, and yes your add-on can get the benefit of automatic updates too. There are a couple of things you need to do, but it is entirely feasible.

First, you need to host your add-on using HTTPS or “all sorts of problems will happen”.

Second: the manifest inside the add-on must have a field pointing to an update file. This field is called the updateURL, and here’s an example from the Firefox OS simulator’s own source code. Snippet for posterity:

<em:updateURL>@ADDON_UPDATE_URL@</em:updateURL>

You could have some sort of “template” file to generate the actual manifest at build time–you already have some build step that creates the xpi file for the add-on anyway, so it’s a matter of creating this little file.

And you also have to create the update.rdf file which is what the browser will be looking at somewhat periodically to see if there’s an update. Think of that as an RSS feed that the browser subscribes to ;-)

Here’s, again, an example of what an update.rdf file looks like, taken from one of the Firefox OS simulators:

<?xml version="1.0" encoding="utf-8"?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:em="http://www.mozilla.org/2004/em-rdf#">
<Description about="urn:mozilla:extension:fxos_2_2_simulator@mozilla.org">
<em:updates>
<Seq><li>
<Description>
  <em:version>2.2.20141123</em:version>
  <em:targetApplication>
  <Description>
    <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
    <em:minVersion>19.0</em:minVersion>
    <em:maxVersion>40.*</em:maxVersion>
    <em:updateLink>https://ftp.mozilla.org/pub/mozilla.org/labs/fxos-simulator/2.2/mac64/fxos-simulator-2.2.20141123-mac64.xpi</em:updateLink>
  </Description>
  </em:targetApplication>
</Description>
</li></Seq>
</em:updates>
</Description>
</RDF>

And again, this file could be generated at build time and perhaps checked into the repo along with the xpi file containing the add-on itself, and served using GitHub Pages, which does allow serving over HTTPS.

The Firefox OS simulators are a fine example of add-ons that you can install, get automatic updates for, and that are not hosted on addons.mozilla.org.

Hope this helps.

Thanks to Ryan Stinnett and Alex Poirot for their information-rich answers to my questions–they made this post possible!


Carsten Book: Talk about Mozilla Sheriffs today at the OpenSource Treffen in Munich

Hi,

I will be at the OpenSource Treffen in Munich again today from 6 pm, and will talk a bit about Mozilla Code Sheriffs.

 

Of course, I will also be available for general questions about Mozilla.

 

See you this evening!

 

Best regards

Carsten

Benoit Girard: Improving Layer Dump Visualization

I’ve blogged before about adding a feature to visualize platform log dumps, including the layer tree. This week, while working on bug 1097941, I had no idea which module the bug was coming from. I used this opportunity to improve the layer visualization features, hoping that it would help me identify the bug. Here are the results (working for both desktop and mobile):

Layer Tree Visualization Demo
Layer Tree Visualization Demo – Maximize me

This tool works by parsing the output of layers.dump and layers.dump-texture (not yet landed). I reconstruct the data as DOM nodes, which can quite trivially support the features of a layer tree, because layer trees are designed to be mapped from CSS. From there, some JavaScript or the browser devtools can be used to inspect the tree. In my case, all I had to do was locate which layer my bad texture data was coming from: 0xAC5F2C00.

If you want to give it a spin just copy this pastebin and paste it here and hit ‘Parse’. Note: I don’t intend to keep backwards compatibility with this format so this pastebin may break after I go through the review for the new layers.dump-texture format.


Yura Zenevich: Resources For Contributing to Firefox OS Accessibility


28 Nov 2014 - Toronto, ON

I believe that when contributing to Firefox OS and Gaia, just like with most open source projects, a lot of attention should be given to reducing the barrier to entry for new contributors. This is even more vital for Gaia, since it is an extremely fast-moving project and the number of things to keep track of is overwhelming. In an attempt to make it easier to start contributing to Firefox OS accessibility, I compiled the following list of resources, which I will try to keep up to date. It should be helpful for a successful entry into the project:

Firstly, links to high level documentation:


Git

The Gaia project is hosted on GitHub and uses Git for version control. Here's a link to the project source code:

https://github.com/mozilla-b2g/gaia/

One of my coworkers (James Burke) proposed the following workflow that you might find useful (link to source):

  • Fork Gaia using Github UI (I will use my Github user name - yzen as an example)
  • From your fork of Gaia (https://github.com/yzen/gaia), clone that locally, and set up a "remote" that is called "upstream" that points to the mozilla-b2g/gaia repo:
    git clone --recursive git@github.com:yzen/gaia.git gaia-yzen
    cd gaia-yzen
    git remote add upstream git@github.com:mozilla-b2g/gaia.git
  • For each bug you are working on, create a branch to work on it. This branch will be used for the pull request when you are ready. So taking this bug number 123 as an example, and assuming you are starting in your clone's master branch in the project directory:
    # this updates your local master to match mozilla-b2g's latest master
    # you should always do this to give better odds your change will work
    # with the latest master state for when the pull request is merged
    git pull upstream master

    # this updates your fork on github's master to match
    git push origin master

    # Create bug-specific branch based on current state of master
    git checkout -b bug-123
  • Now you will be in the bug-123 branch locally, and its contents will look the same as the master branch. The idea with bug-specific branches is that you keep your master branch pristine and only matching what is in the official mozilla-b2g branch. No other local changes. This can be useful for comparisons or rebasing.

  • Do the changes in relation to the bug you are working on.

  • Commit the change to the branch and then push the branch to your fork. For the commit message, you can just copy in the title of the bug:

    git commit -am "Bug 123 - this is the summary of changes."
    git push origin bug-123
  • Now you can go to https://github.com/yzen/gaia and do the pull request.

  • In the course of the review, you may need to make additional commits to the branch to address feedback. Once it is all reviewed, you can flatten all the commits into one commit, then force push the change to your branch. I normally use rebase -i for this. So, in the gaia-yzen directory, while you are in the bug-123 branch, you can run:

    git rebase -i upstream/master

At this point, git gives you a way to edit all the commits. I normally 'pick' the first one, then choose 's' for squash for the rest, so the rest of the commits are squashed to the first picked commit.

Once that is done and git is happy, you can then force push the new state of the branch back to GitHub:

    git push -f origin bug-123

More resources at:


Source Code

  • All apps are located in the apps/ directory. Each app lives within its own directory. So, for example, if you are working on the Calendar app you would be making your changes in the apps/calendar directory.

  • To make sure that the improvements we work on actually help Firefox OS accessibility and do not regress, we have a policy of adding gaia-ui Python Marionette tests for all new accessible functionality. You can find the tests in the tests/python/gaia-ui-tests/gaiatest/tests/accessibility/ directory.

More resources at:


Building and Running Gaia


Testing


Localization

Localization is very relevant to accessibility, especially because one of the tasks we perform when making something accessible is ensuring that all elements in the applications are labeled for users of assistive technologies. Please see Localization best practices for guidelines on how to add new text to applications.


Debugging


Using a screen reader

Using a device or navigating a web application is different with a screen reader. The screen reader introduces the concept of a virtual cursor (or focus) that represents the screen reader's current position inside the app or web page. For more information and example videos, please see: Screen Reader


Accessibility

Here are some of the basic resources to help you get to know what mobile accessibility (and accessibility) is:


Kevin Ngo: Adelheid, an Interactive Photocentric Storybook

Photos scroll along the bottom, pages slide left and right.

Half a year ago, I built an interactive photocentric storybook as a gift to my girlfriend for our anniversary. It binds photos, writing, music, and animation together into an experiential walk down memory lane. I named it Adelheid, a long-form version of my girlfriend's name. It took me about a month of my after-work free time, whenever she wasn't around. Adelheid is an amalgamation of my thoughts, as it molds my joy of photography, writing, and web development into an elegantly bound package.


A preview of the personal storybook I put together.

Design Process

As before, I wanted it to be a representation of myself: photography, writing, web development. I spent time sketching it out in a notebook and came up with this. The storybook is divided into chapters. Chapters consist of a song, summary text, a key photo, other photos, and moments. Moments are like subchapters; they consist of text and a key photo. Chapters slide left and right like pages in a book, photos roll through the bottom like an image reel, and moments lie behind the chapters like the back of a notecard, all while music plays in the background. Then I put in a title page at the beginning that lifts like a stage curtain.

It took a month of work to bring it to fruition, and it was at last unveiled as a surprise on a quiet night at Picnic Island Park in Tampa, Florida.

Technical Bits

With all of the large image and audio files, it becomes quite a large app. My private storybook weighs in at about 110MB, as a single-page app! Well, that's quite ludicrous. However, I made it easy for myself and intended it to be used only as a packaged app. This means I don't have to worry about load times over a web server, since all assets are downloaded and installed as a desktop app.

Unfortunately, it currently only works well in Firefox. Chrome was targeted initially but was soon dropped to decrease maintenance time and hit my deadline. There's a lot of fancy animation going on, and it was difficult to get it working properly in both browsers. It's not only CSS compatibility: it currently only works as a packaged app for Firefox. Packaged apps have not been standardized, and I only configured packaged app manifests for Firefox's specifications.

After the whole thing, I became a bit more adept at CSS3 animations. This included the chapter turns, image reels, and moment flips. One nice touch was parallaxed images: the key images transition a bit slower to give off a three-dimensional effect. The audio also fades in and out between chapter turns, using a web audio library.

You can install the demo app at adelheid.ngokevin.com.

Christian Heilmann: What if everything is awesome?


These are the notes for my talk at Codemotion Madrid this year.
You can watch the screencast on YouTube and you can check out the slides at Slideshare.

An incredibly silly movie

The other day I watched Pacific Rim and was baffled by the awesomeness and the awesome inanity of this movie.

Let’s recap a bit:

  • There is a rift to another dimension under water that lets alien monsters escape into our world.
  • These monsters attack our cities and kill people. That is bad.
  • The most effective course of action is that we build massive, erect walking robots on land to battle them.
  • These robots are controlled by pilots who walk inside them and throw punches to box these monsters.
  • These pilots all are super fit, ripped and beautiful and in general could probably take on these monsters in a fight with bare hands. The scientists helping them are helpless nerds.
  • We need to drop the robots with helicopters to where they are needed, because that looks awesome, too.

All in all, the movie is borderline insane: if we had a rift like that under water, all we’d need to do is mine it. Or have some massive ships and submarines where the rift is, ready to shoot and bomb anything that comes through. Which, of course, beats trying to communicate with it.

The issue is that this solution would not make for a good blockbuster 3D movie aimed at 13 year olds. Nothing fights or breaks in a fantastic manner and you can’t show crumbling buildings. We’d be stuck with mundane tasks. Like writing a coherent script, proper acting or even manual camera work and settings instead of green screen. We can’t have that.

Tech press hype

What does all that have to do with web development? Well, I get the feeling we got to a world where we try to be awesome for the sake of being awesome. And at the same time we seem to be missing out on the fact that what we have is pretty incredible as it is.

One thing I blame is the tech press. We still get weekly Cinderella stories of the lonely humble developer making it big with his first app (yes, HIS first app). We hear about companies buying each other for billions and everything being incredibly exciting.

Daily frustrations

In stark contrast to that, our daily lives as developers are full of frustrations. People not knowing what we do and thus not really giving us feedback on what we do, for example. We only appear on the scene when things break.

Our users can also be a bit of an annoyance as they do not upgrade the way we want them to and keep using things differently than we anticipated.

And even when we mess up, not much happens. We put our hearts and lots of effort into our work. When we see something obviously broken or in dire need of improvement we want to fix it. The people above us in the hierarchy, however, are happy to see it as a glitch to fix later.

Flight into professionalism

Instead of working on the obvious broken communication between us and those who use our products and us and those who sell them (or even us and those who maintain our products) I hear a louder and louder call for “professional development”. This involves many abstractions and intelligent package managers and build scripts that automate a lot of the annoying cruft of our craft. Cruft in the form of extraneous code. Code that got there because of mistakes that our awesome new solutions make go away. But isn’t the real question why we still make so many mistakes in the first place?

Apps good, web bad!

One of the things we seem to be craving is to match the state of affairs of native platforms, especially the form factor of apps. Apps seem to be the new, modern form factor of software delivery. Fact is that they are a questionable success (and may even be a step back in software evolution, as I put it in a TEDx talk). If you look at who earns something with them and how long they are in use on average, it is hard to shout “hooray for apps”. On the web, the problem is that there are so far no cross-platform standards that define what an app is. If you wonder how that is coming along, the current state of mobile apps on the web describes this in meticulous detail at the W3C.

Generic code is great code?

A lot of what we crave as developers is generic. We don’t want to write code that does one job well, we want to write code that can take any input and does something intelligent with it. This is feel-good code. We are not only clever enough to be programmers; we also write solutions that prevent people from making mistakes by predicting them.

Fredrik Noren wrote a brilliant piece on this called “On Generalisation“. In it he argues that writing generic code means trying to predict the future and that we are bad at that. He calls out for simpler, more modular and documented code that people can extend instead of catch-all solutions to simple problems.

I found myself nodding along reading this. There seems to be a general tendency to re-invent instead of improving existing solutions. This comes naturally to developers – we want to create instead of read and understand. I also blame sites like Hacker News, which are a beauty pageant of small, quick and super-intelligent technical solutions for every conceivable problem out there.

Want some proof? How about Static Site Generators listing 295 different ways to create static HTML pages? Let’s think about that: static HTML pages!

The web is obese!

We try to fix our world by stacking abstractions and creating generic solutions for small issues. The common development process and especially the maintenance process looks different, though.

People using Content Management Systems to upload lots of un-optimised photos are a problem. People using too many of our sleek and clever solutions also add to the fact that web performance is still a big issue. According to the HTTP Archive, the average web site is 2 MB in data, delivered in 100(!) HTTP requests. And that years after we told people that each request is a massive cause of a slow and sluggish web experience. How can anyone explain things like the new LG G Watch site clocking in at 54 MB on the first load whilst being a responsive design?

Tools of awesome

There are no excuses. We have incredible tools that give us massive insight into our work. What we do is not black art any longer, we don’t hope that browsers do good things with our code. We can peek under the hood and see the parts moving.

Webpagetest.org is incredible. It gives us detailed insight into what is going right and wrong in our web sites right in the browser. You can test the performance of a site simulating different speeds and load it from servers all over the world. You get a page optimisation checklist, graphs about what got loaded when and when things started rendering. You even get a video of your page loading and getting ready for users to play with.

There are many resources on how to use this tool and others that help us fix performance issues. Addy Osmani gave a great talk at CSS Conf in Berlin re-writing the JSConf web site live on stage using many of these tools.

Browsers are incredible tools

Browsers have evolved from simple web consumption tools to full-on development environments. Almost all browsers have some developer tools built in that not only allow you to see the code in the current page but also to debug. You have step-by-step debugging of JavaScript, CSS debugging and live previews of colours, animations, element dimensions, transforms and fonts. You have insight into what was loaded in what sequence, you can see what is in localStorage and you can do performance analysis and see the memory consumption.
The innovation in development tools of browsers is incredible and moves at an amazing speed. You can now even debug on devices connected via USB or wireless and Chrome allows you to simulate various devices and network/connectivity conditions.
Sooner or later this might mean that we won’t need any other editors any more. Any user downloading a browser could also become a developer. And that is incredible. But what about older browsers?

Polyfills as a service

A lot of bloat on the web happens because of us trying to give new, cool effects to old, tired browsers. We do this because of a wrong understanding of the web. It is not about giving the same functionality to everybody, but about giving a working experience to everybody.

The idea of a polyfill is genius: write a solution for an older environment to play with new functionality and get our UX ready for the time browsers support it. It fails to be genius when we never, ever remove the polyfills from our solutions. The Financial Times development team had the great idea to offer polyfills as a service. This means you include one JavaScript file:

<script src="//cdn.polyfill.io/v1/polyfill.min.js" 
        async defer>
</script>

You can define which functionality you want to polyfill and it’ll be done that way. When the browser supports what you want, the stop-gap solution never gets included at all. How good is that?

Flexbox growing up

Another thing of awesome I saw the other day was at CSS-Tricks. Chris Coyier uses Flexbox to create a toolbar that has fixed elements and others using up the rest of the space. It extends semantic HTML and does a great job of being responsive.


All the CSS code needed for it is this:

*, *:before, *:after {
  -moz-box-sizing: inherit;
       box-sizing: inherit;
}
html {
  -moz-box-sizing: border-box;
       box-sizing: border-box;
}
body {
  padding: 20px;
  font: 100% sans-serif;
}
.bar {
  display: -webkit-flex;
  display: -ms-flexbox;
  display: flex;
  -webkit-align-items: center;
      -ms-flex-align: center;
          align-items: center;
  width: 100%;
  background: #eee;
  padding: 20px;
  margin: 0 0 20px 0;
}
.bar > * {
  margin: 0 10px;
}
.icon {
  width: 30px;
  height: 30px;
  background: #ccc;
  border-radius: 50%;
}
.search {
  -webkit-flex: 1;
      -ms-flex: 1;
          flex: 1;
}
.search input {
  width: 100%;
}
.bar-2 .username {
  -webkit-order: 2;
      -ms-flex-order: 2;
          order: 2;
}
.bar-2 .icon-3 {
  -webkit-order: 3;
      -ms-flex-order: 3;
          order: 3;
}
.bar-3 .search {
  -webkit-order: -1;
      -ms-flex-order: -1;
          order: -1;
}
.bar-3 .username {
  -webkit-order: 1;
      -ms-flex-order: 1;
          order: 1;
}
.no-flexbox .bar {
  display: table;
  border-spacing: 15px;
  padding: 0;
}
.no-flexbox .bar > * {
  display: table-cell;
  vertical-align: middle;
  white-space: nowrap;
}
.no-flexbox .username {
  width: 1px;
}
@media (max-width: 650px) {
  .bar {
    -webkit-flex-wrap: wrap;
        -ms-flex-wrap: wrap;
            flex-wrap: wrap;
  }
  .icon {
    -webkit-order: 0 !important;
        -ms-flex-order: 0 !important;
            order: 0 !important;
  }
  .username {
    -webkit-order: 1 !important;
        -ms-flex-order: 1 !important;
            order: 1 !important;
    width: 100%;
    margin: 15px;
  }
 
  .search {
    -webkit-order: 2 !important;
        -ms-flex-order: 2 !important;
            order: 2 !important;
    width: 100%;
  }
}

That is pretty incredible, isn’t it?

More near-future tech of awesome

Other things that are brewing get me equally excited. WebRTC, WebGL, Web Audio and many more things are pointing to a high-fidelity web. A web that allows for rich gaming experiences and productivity tools built right into the browser. We can video and audio chat with each other and send data in a peer-to-peer fashion without relying on or burning up a server between us.

Service Workers will allow us to build a real offline experience. With AppCache we’re left hoping users will get something useful and not an aggressively cached set of outdated information. If you want to know more about that, watch these two amazing videos by Jake Archibald: The Service Worker: The Network layer that is yours to own and The Service worker is coming, look busy!

Web Components have been the near future for quite a while now and seem to be in a bit of a “let’s build a framework instead” rut. Phil Legetter has done an incredible job collecting what that looks like. It is true: the support of Shadow DOM across the board is still not quite there. But a lot of these frameworks offer incredible client-side functionality that could go into the standard.

What can you do?

I think it is time to stop chasing the awesome of “soon we will be able to use that” and instead be more fearless about using what we have now. We love to write about just how broken things are when they are in their infancy. We tend to forget to re-visit them when they’ve matured more. Many things that were a fever dream a year ago are now ready for you to roll out – if you work with progressive enhancement. In general, this is a safe bet as the web will never be in a finished state. Even native platforms are only in a fixed state between major releases. Mattias Petter Johansson of Spotify put it quite succinctly in a thread on why JavaScript is the only client-side language:

Hating JavaScript is like hating the Internet.
The Internet is a cobweb of different technologies cobbled together with duct tape, string and chewing gum. It’s not elegantly designed in any way, because it’s more of a growing organism than it is a machine constructed with intent.

The web is chaotic, that much is for sure, but it also aims to be longer lasting than other platforms. The in-built backwards compatibility of its technologies makes it a beautiful investment. As Paul Bakaus of Google put it:

If you build a web app today, it will run in browsers 10 years from now. Good luck trying the same with your favorite mobile OS (excluding Firefox OS).

The other issue we have to overcome is the dogma associated with some of our decisions. Yes, it would be excellent if we could use open web standards to build everything. It would be great if all solutions shared the web's principles of distribution, readability and easy sharing. But we live in a world that has changed. In many ways, in the mobile space, we have to count our blessings. We can and should allow some closed technology to take its course before we go back to these principles. We’ve done it with Flash, we can do it with others, too. My mantra these days is the following:

If you enable people world-wide to get a good experience and solve a problem they have, I like it. The technology you use is not the important part. How much you lock them in is. Don’t lock people in.

Go share and teach

One thing is for sure: we never had a more amazing environment to learn and share. Services like GitHub, JSFiddle, JSBin and Codepen make it easy to distribute and explain code. You can show instead of describe, and you can fix instead of telling people that they are doing it wrong. There is no better way to learn than to show, and if you set out to teach you end up learning.

A great demo of this is together.js. Using this WebRTC based tool (or its implementation in JSFiddle by hitting the collaborate button) you can code together, with several cursors, audio chat or a text chat client directly in the browser. You explain in context and collaborate live. And you make each other learn something and get better. And this is what is really awesome.

Marco Bonardo: Mozilla at the JS Day 2012 in Verona

Allison Naaktgeboren: Applying Privacy Series: The 2nd meeting

The day after the first meeting…

Engineering Manager: Welcome DBA, Operations Engineer, and Privacy Officer. Did you all get a chance to look over the project wiki? What do you think?

Operations Engineer: I did.

DBA: Yup, and I have some questions.

Privacy Officer: Sounds really cool, as long as we’re careful.

Engineer: We’re always careful!

DBA: There are a lot of pages on the web. Keeping that much data is going to be expensive. I didn’t see anything on the wiki about evicting entries, and for a table that big we’ll need to do that regularly.

Privacy Officer: Also, when will we delete the device ids? Those are like a fingerprint for someone’s phone, so keeping them around longer than absolutely necessary increases risk for both the user and the company.

Operations Engineer: The less we keep around, the less it costs to maintain.

Engineer: We know that most mobile users have only 1-3 pages open at any given time and we estimate no more than 50,000 users will be eligible for the service.

DBA: Well that does suggest a manageable load, but that doesn’t answer my question.

Engineer: Want to say if a page hasn’t been accessed in 48 hours we evict it from the server? And we can tune that knob as necessary?

Operations Engineer: As long as I can tune it in prod if something goes haywire.

Privacy Officer: And device ids?

Engineer: Apply the same rule to them?

Engineering Manager: 48 hours would be too short. Not everyone uses their mobile browser every day. I’d be more comfortable with 90 days to start.

DBA: I imagine you’d want secure destruction for the ids.

Privacy Officer: You got it!

DBA: What about the backup tapes? We back up the DBs regularly?

Privacy Officer: Are the backups online?

DBA: No, like I said, they’re on tape. Someone has to physically run ‘em through a machine. You’d need physical access to the backup storage facility.

Privacy Officer: Then it’s probably fine if we don’t delete from the tapes.

Operations Engineer: What is the current timeline?

Engineer: End of the quarter, 8 weeks or so.

Operations Engineer: We’re under water right now, so it might be tight getting the hardware in & set up. New hardware orders usually take 6 weeks to arrive. I can’t promise the hardware will be ready in time.

Engineering Manager: We understand, please do your best and if we have to, Product Manager won’t be happy, but we’ll delay the feature if we need to.

Privacy Officer: Who’s going to be responsible for the data on the stage & production servers?

Engineering Manager: Product Manager has final say.

DBA: Thanks, good to know!

Engineer: I’ll draw up a plan and send it around for feedback tomorrow.

 

Who brought up user data safety & privacy concerns in this conversation?

Privacy Officer is obvious. The DBA & Operations Engineer also raised privacy concerns.

Robert Helmer: Better Source Code Browsing With FreeBSD and Mozilla DXR

Lately I've been reading about the design and implementation of the FreeBSD Operating System (great book, you should read it).

However, I find browsing the source code quite painful. Using vim or emacs is fine for editing individual files, but when you are trying to understand and browse around a large codebase, dropping to a shell and grepping/finding around gets old fast. I know about ctags and similar, but I also find editors uncomfortable for browsing large codebases for an extended amount of time - web pages tend to be easier on the eyes.

There's an LXR fork called FXR available, which is way better and I am very grateful for it - however, it has all the same shortcomings of LXR that we've become very familiar with on the Mozilla LXR fork (MXR):

  • based on regex, not static analysis of the code - sometimes it gets things wrong, and it doesn't really understand the difference between a variable with the same name in different files
  • not particularly easy on the eyes (shallow and easily fixable, I know)

I've been an admirer of Mozilla's next gen code browsing tool, DXR, for a long time now. DXR uses a clang plugin to do static analysis of the code, so it produces the real call graph - this means it doesn't need to guess at the definition of types or where a variable is used, it knows.

A good example is to contrast a file on MXR with the same file on DXR. Let's say you wanted to know where this macro was first defined; that's easy in DXR - just click on the word "NS_WARNING" and select "Jump to definition".

Now try that on MXR - clicking on "NS_WARNING" instead yields a search which is not particularly helpful, since it shows every place in the codebase that the word "NS_WARNING" appears (note that DXR has the ability to do this same type of search, in case that's really what you're after).

So that's what DXR is and why it's useful. I got frustrated enough with the status quo trying to grok the FreeBSD sources that I took a few days and, with the help of folks in the #static channel on irc.mozilla.org (particularly Erik Rose), got DXR running on FreeBSD and indexed a tiny part of the source tree as a proof of concept (the source for "/bin/cat"):

http://freebsdxr.rhelmer.org

This is running on a FreeBSD instance in AWS.

DXR is currently undergoing major changes, the SQLite-to-ElasticSearch transition being the central one. I am tracking how to get the "es" branch of DXR going in this gist.

Currently I am able to get a LINT kernel build indexed on the DXR master branch, but I am still working through issues on the "es" branch.

Overall, I feel like I've learned way more about static analysis, how DXR works, and the FreeBSD source code, produced some useful patches for Mozilla and the DXR project, and hopefully will provide a useful resource for the FreeBSD project, all along the way. Totally worth it; I highly recommend working with all of the aforementioned :)

Brian R. Bondy: Developing and releasing the Khan Academy Firefox OS app

I'm happy to announce that the Khan Academy Firefox OS app is now available in the Firefox Marketplace!

Khan Academy’s mission is to provide a free world-class education for anyone anywhere. The goal of the Firefox OS app is to help with the “anyone anywhere” part of the KA mission.

Why?

There's something exciting about being able to hold a world class education in your pocket for the cheap price of a Firefox OS phone. Firefox OS devices are mostly deployed in countries where the cost of an iPhone or Android based smart phone is out of reach for most people.

The app enables people in developing countries, lower-income families, and anyone else to take advantage of the Khan Academy content. A persistent internet connection is not required.

What's that... you say you want another use case? Well OK, here goes: for a parent wanting each of their kids to have access to Khan Academy at the same time, the device costs could be very expensive. Not anymore.

Screenshots!

App features

  • Access to the full library of Khan Academy videos and articles.
  • Search for videos and articles.
  • Ability to sign into your account for:
  • Profile access.
  • Earning points for watching videos.
  • Continuing where you left off from previous partial video watches, even if that was on the live site.
  • Partial and full completion status of videos and articles.
  • Downloading videos, articles, or entire topics for later use.
  • Sharing functionality.
  • Significant effort was put into minifying topic tree sizes for minimal memory use and faster loading.
  • Scrolling transcripts for videos as you watch.
  • The UI is highly influenced by the first generation iPhone app.

Development statistics

  • 340 commits
  • 4 months of consecutive commits with at least 1 commit per day
  • 30 minutes - 2 hours per day max

Technologies used

Technologies used to develop the app include:

Localization

The app is fully localized for English, Portuguese, French, and Spanish, and will use those locales automatically depending on the system locale. The content (videos, articles, subtitles) that the app hosts will also automatically change.

I was lucky enough to have several amazing and kind translators for the app volunteer their time.

The translations are hosted and managed on Transifex.

Want to contribute?

The Khan Academy Firefox OS app source is hosted in one of my github repositories and periodically mirrored on the Khan Academy github page.

If you'd like to contribute there's a lot of future tasks posted as issues on github.

Current minimum system requirements

  • Around 8MB of space.
  • 512 MB of RAM

Low memory devices

By default, apps on the Firefox marketplace are only served to devices with at least 500MB of RAM. To get them on 256MB devices, you need to do a low memory review.

One of the major enhancements I'd like to add next, is to add an option to use the YouTube player instead of HTML5 video. This may use less memory and may be a way onto 256MB devices.

How about exercises?

They're coming in a future release.

Getting preinstalled on devices

It's possible to request to get pre-installed on devices and I'll be looking into that in the near future after getting some more initial feedback.

Projects like Matchstick also seem like a great opportunity for this app.

Hannah Kane: We are very engaging

Yesterday someone asked me what the engagement team is up to, and it made me sad because I realized I need to do a waaaaay better job of broadcasting my team’s work. This team is dope and you need to know about it.

As a refresher, our work encompasses these areas:

  • Grantwriting
  • Institutional partnerships
  • Marketing and communications
  • Small dollar fundraising
  • Production work (i.e. Studio Mofo)

In short, we aim to support the Webmaker product and programs and our leadership pipelines any time we need to engage individuals or institutions.

What’s currently on our plate:

Pro-tip: You can always see what we’re up to by checking out the Engagement Team Workbench.

These days we’re spending our time on the following:

  • End of Year Fundraising: With the help of a slew of kick-ass engineers, Andrea and Kelli are getting to $2M. (view the Workbench).
  • Mozilla Gear launch: Andrea and Geoffrey are obsessed with branded hoodies. To complement our fundraising efforts, they just opened a brand new site for people to purchase Mozilla Gear (view the project management spreadsheet).
  • Fall Campaign: Remember the 10K contributor goal? We do! An-Me and Paul have been working with Claw, Amira, Michelle, and Lainie, among others, to close the gap through a partner-based strategy (view the Workbench).
  • Mobile Opportunity: Ben is helping to envision and build partnerships around this work, and Paul and Studio Mofo are providing marketing, comms, and production support (the Mobile Opportunity Workbench is here, the engagement-specific work will be detailed soon).
  • Building a Webmaker Marketing Plan for 2015: The site and programs aren’t going to market themselves! Paul is drafting a comprehensive marketing calendar for 2015 that complements the product and program strategies. (plan coming soon)
  • 2015 Grants Pipeline: Ben and An-Me are always on the lookout for opportunities, and Lynn is responsible for writing grants and reports to fund our various programs and initiatives.
  • Additional Studio Mofo projects: Erika, Mavis, and Sabrina are always working on something. In addition to their work supporting most of the above, you can see a full list of projects here.
  • Salesforce for grants and partnerships: We’ve completed a custom Salesforce installation and Ben has begun the process of training staff to use it. Much more to come to make it a meaningful part of our workflow (Workbench coming soon).
  • Open Web Fellows recruitment: We’re supporting our newest fellowship with marketing support (view the Hype Plan)

Niko Matsakis: Purging proc

The so-called “unboxed closure” implementation in Rust has reached the point where it is time to start using it in the standard library. As a starting point, I have a pull request that removes proc from the language. I started on this because I thought it’d be easier than replacing closures, but it turns out that there are a few subtle points to this transition.

I am writing this blog post to explain what changes are in store and give guidance on how people can port existing code to stop using proc. This post is basically targeted at Rust devs who want to adapt existing code, though it also covers the closure design in general.

To some extent, the advice in this post is a snapshot of the current Rust master. Some of it is specifically targeting temporary limitations in the compiler that we aim to lift by 1.0 or shortly thereafter. I have tried to mention when that is the case.

The new closure design in a nutshell

For those who haven’t been following, Rust is moving to a powerful new closure design (sometimes called unboxed closures). This part of the post covers the highlights of the new design. If you’re already familiar, you may wish to skip ahead to the “Transitioning away from proc” section.

The basic idea of the new design is to unify closures and traits. The first part of the design is that function calls become an overloadable operator. There are three possible traits that one can use to overload ():

trait Fn<A,R> { fn call(&self, args: A) -> R };
trait FnMut<A,R> { fn call_mut(&mut self, args: A) -> R };
trait FnOnce<A,R> { fn call_once(self, args: A) -> R };

As you can see, these traits differ only in their “self” parameter. In fact, they correspond directly to the three “modes” of Rust operation:

  • The Fn trait is analogous to a “shared reference” – it means that the closure can be aliased and called freely, but in turn the closure cannot mutate its environment.
  • The FnMut trait is analogous to a “mutable reference” – it means that the closure cannot be aliased, but in turn the closure is permitted to mutate its environment. This is how || closures work in the language today.
  • The FnOnce trait is analogous to “ownership” – it means that the closure can only be called once. This allows the closure to move out of its environment. This is how proc closures work today.
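
To make that correspondence concrete, here is a minimal sketch in the post’s notation (the helper function names are made up for illustration) showing what each bound lets the callee do:

fn call_twice<F>(f: F) -> uint
    where F : Fn(uint) -> uint
{
    // `Fn`: the closure can be called freely (and even aliased),
    // but it may not mutate its environment.
    f(1) + f(2)
}

fn call_and_mutate<F>(mut f: F) -> uint
    where F : FnMut(uint) -> uint
{
    // `FnMut`: exclusive access is required, and the closure may
    // mutate whatever it has captured.
    f(1) + f(2)
}

fn consume<F>(f: F) -> uint
    where F : FnOnce(uint) -> uint
{
    // `FnOnce`: the call consumes the closure, so it can run at
    // most once and may move values out of its environment.
    f(1)
}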

Enabling static dispatch

One downside of the older Rust closure design is that closures and procs always implied virtual dispatch. In the case of procs, there was also an implied allocation. By using traits, the newer design allows the user to choose between static and virtual dispatch. Generic types use static dispatch but require monomorphization, and object types use dynamic dispatch and hence avoid monomorphization and grant somewhat more flexibility.

As an example, whereas before I might write a function that takes a closure argument as follows:

fn foo(hashfn: |&String| -> uint) {
    let x = format!("Foo");
    let hash = hashfn(&x);
    ...
}

I can now choose to write that function in one of two ways. I can use a generic type parameter to avoid virtual dispatch:

fn foo<F>(hashfn: F)
    where F : FnMut(&String) -> uint
{
    let x = format!("Foo");
    let hash = hashfn(&x);
    ...
}

Note that we write the type parameters to FnMut using parentheses syntax (FnMut(&String) -> uint). This is a convenient syntactic sugar that winds up mapping to a traditional trait reference (currently, for<'a> FnMut<(&'a String,), uint>). At the moment, though, you are required to use the parentheses form, because we wish to retain the liberty to change precisely how the Fn trait type parameters work.

A caller of foo() might write:

let some_salt: String = ...;
foo(|str| myhashfn(str.as_slice(), &some_salt))

You can see that the || expression still denotes a closure. In fact, the best way to think of it is that a || expression generates a fresh structure that has one field for each of the variables it touches. It is as if the user wrote:

let some_salt: String = ...;
let closure = ClosureEnvironment { some_salt: &some_salt };
foo(closure);

where ClosureEnvironment is a struct like the following:

struct ClosureEnvironment<'env> {
    some_salt: &'env String
}

impl<'env,'arg> FnMut(&'arg String) -> uint for ClosureEnvironment<'env> {
    fn call_mut(&mut self, (str,): (&'arg String,)) -> uint {
        myhashfn(str.as_slice(), &self.some_salt)
    }
}

Obviously the || form is quite a bit shorter.

Using object types to get virtual dispatch

The downside of using generic type parameters for closures is that you will get a distinct copy of the fn being called for every callsite. This is a great boon to inlining (at least sometimes), but it can also lead to a lot of code bloat. It’s also often just not practical: many times we want to combine different kinds of closures together into a single vector. None of these concerns are specific to closures. The same things arise when using traits in general. The nice thing about the new closure design is that it lets us use the same tool – object types – in both cases.

If I wanted to write my foo() function to avoid monomorphization, I might change it from:

fn foo<F>(hashfn: F)
    where F : FnMut(&String) -> uint
{...}

to:

fn foo(hashfn: &mut FnMut(&String) -> uint)
{...}

Note that the argument is now a &mut FnMut(&String) -> uint, rather than being of some type F where F : FnMut(&String) -> uint.

One downside of changing the signature of foo() as I showed is that the caller has to change as well. Instead of writing:

foo(|str| ...)

the caller must now write:

foo(&mut |str| ...)

Therefore, what I expect to be a very common pattern is to have a “wrapper” that is generic which calls into a non-generic inner function:

fn foo<F>(mut hashfn: F)
    where F : FnMut(&String) -> uint
{
    foo_obj(&mut hashfn)
}

fn foo_obj(hashfn: &mut FnMut(&String) -> uint)
{...}

This way, the caller does not have to change, and only this outer wrapper is monomorphized, and it will likely be inlined away, and the “guts” of the function remain using virtual dispatch.
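
For illustration, with the wrapper in place a call site can stay exactly as it was for the generic version (a sketch mirroring the earlier example):

// Callers keep the ergonomic form; the coercion to
// `&mut FnMut(&String) -> uint` happens inside `foo`, not at the
// call site.
foo(|str| myhashfn(str.as_slice(), &some_salt))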

In the future, I’d like to make it possible to pass object types (and other “unsized” types) by value, so that one could write a function that just takes a FnMut() and not a &mut FnMut():

fn foo(hashfn: FnMut(&String) -> uint)
{...}

Among other things, this makes it possible to transition simply between static and virtual dispatch without altering callers and without creating a wrapper fn. However, it would compile down to roughly the same thing as the wrapper fn in the end, though with guaranteed inlining. This change requires somewhat more design and will almost surely not occur by 1.0, however.

Specifying the closure type explicitly

We just said that every closure expression like || expr generates a fresh type that implements one of the three traits (Fn, FnMut, or FnOnce). But how does the compiler decide which of the three traits to use?

Currently, the compiler is able to do this inference based on the surrounding context – basically, the closure was an argument to a function, and that function requested a specific kind of closure, so the compiler assumes that’s the one you want. (In our example, the function foo() required an argument of type F where F implements FnMut.) In the future, I hope to improve the inference to a more general scheme.

Because the current inference scheme is limited, you will sometimes need to specify which of the three fn traits you want explicitly. (Some people also just prefer to do that.) The current syntax is to use a leading &:, &mut:, or :, kind of like an “anonymous parameter”:

// Explicitly create a `Fn` closure which cannot mutate its
// environment. Even though `foo()` requested `FnMut`, this closure
// can still be used, because a `Fn` closure is more general
// than `FnMut`.
foo(|&:| { ... })

// Explicitly create a `FnMut` closure. This is what the
// inference would select anyway.
foo(|&mut:| { ... })

// Explicitly create a `FnOnce` closure. This would yield an
// error, because `foo` requires a closure it can call multiple
// times in a row, but it is being given a closure that can be
// called exactly once.
foo(|:| { ... }) // (ERROR)

The main time you need to use an explicit fn type annotation is when there is no context. For example, if you were just to create a closure and assign it to a local variable, then a fn type annotation is required:

let c = |&mut:| { ... };

Caveat: It is still possible we’ll change the &:/&mut:/: syntax before 1.0; if we can improve inference enough, we might even get rid of it altogether.

Moving vs non-moving closures

There is one final aspect of closures that is worth covering. We gave the example of a closure |str| myhashfn(str.as_slice(), &some_salt) that expands to something like:

struct ClosureEnvironment<'env> {
    some_salt: &'env String
}

Note that the variable some_salt that is used from the surrounding environment is borrowed (that is, the struct stores a reference to the string, not the string itself). This is frequently what you want, because it means that the closure just references things from the enclosing stack frame. This also allows closures to modify local variables in place.

However, capturing upvars by reference has the downside that the closure is tied to the stack frame that created it. This is a problem if you would like to return the closure, or use it to spawn another thread, etc.

For this reason, closures can also take ownership of the things that they close over. This is indicated by using the move keyword before the closure itself (because the closure “moves” things out of the surrounding environment and into the closure). Hence if we change that same closure expression we saw before to use move:

move |str| myhashfn(str.as_slice(), &some_salt)

then it would generate a closure type where the some_salt variable is owned, rather than being a reference:

struct ClosureEnvironment {
    some_salt: String
}

This is the same behavior that proc has. Hence, whenever we replace a proc expression, we generally want a moving closure.

Currently we never infer whether a closure should be move or not. In the future, we may be able to infer the move keyword in some cases, but it will never be 100% (specifically, it should be possible to infer that the closure passed to spawn should always take ownership of its environment, since it must meet the 'static bound, which is not possible any other way).
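
As a sketch of why spawn wants a moving closure (using the same Thread::spawn call shown in the next section): a borrowing closure would keep a reference into the parent’s stack frame, which can never satisfy the 'static bound, while a move closure owns its captures outright.

let data = vec![1u, 2u, 3u];

// `move` transfers ownership of `data` into the closure, so the
// spawned task no longer borrows from this stack frame and can
// meet the `'static` bound that spawn requires.
Thread::spawn(move|| {
    println!("{}", data.len());
});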

Transitioning away from proc

This section covers what you need to do to modify code that was using proc so that it works once proc is removed.

Transitioning away from proc for library users

For users of the standard library, the transition away from proc is fairly straightforward. Mostly it means that code which used to write proc() { ... } to create a “procedure” should now use move|| { ... }, to create a “moving closure”. The idea of a moving closure is that it is a closure which takes ownership of the variables in its environment. (Eventually, we expect to be able to infer whether or not a closure must be moving in many, though not all, cases, but for now you must write it explicitly.)

Hence converting calls to libstd APIs is mostly a matter of search-and-replace:

Thread::spawn(proc() { ... }) // becomes:
Thread::spawn(move|| { ... })

task::try(proc() { ... }) // becomes:
task::try(move|| { ... })

One non-obvious case is when you are creating a “free-standing” proc:

let x = proc() { ... };

In that case, if you simply write move||, you will get some strange errors:

let x = move|| { ... };

The problem is that, as discussed before, the compiler needs context to determine what sort of closure you want (that is, Fn vs FnMut vs FnOnce). Therefore it is necessary to explicitly declare the sort of closure using the : syntax:

let x = proc() { ... }; // becomes:
let x = move|:| { ... };

Note also that it is precisely when there is no context that you must also specify the types of any parameters. Hence something like:

let x = proc(x:int) foo(x * 2, y);
//      ~~~~ ~~~~~
//       |     |
//       |     |
//       |     |
//       |   No context, specify type of parameters.
//       |
//      proc always owns variables it touches (e.g., `y`)

might become:

let x = move|: x:int| foo(x * 2, y);
//      ~~~~ ^ ~~~~~
//       |   |   |
//       |   |  No context, specify type of parameters.
//       |   |
//       |   No context, also specify FnOnce.
//       |
//     `move` keyword means that closure owns `y`

Transitioning away from proc for library authors

The transition story for a library author is somewhat more complicated. The complication is that the equivalent of a type like proc():Send ought to be Box<FnOnce() + Send> – that is, a boxed FnOnce object that is also sendable. However, we don’t currently have support for invoking fn(self) methods through an object, which means that if you have a Box<FnOnce()> object, you can’t call its call_once method (put another way, the FnOnce trait is not object safe). We plan to fix this – possibly by 1.0, but possibly shortly thereafter – but in the interim, there are workarounds you can use.

In the standard library, we use a trait called Invoke (and, for convenience, a type called Thunk). You’ll note that although these two types are publicly available (under std::thunk), they do not appear in the public interface of any other stable APIs. That is, Thunk and Invoke are essentially implementation details that end users do not have to know about. We recommend you follow the same practice. This is for two reasons:

  1. It generally makes for a better API. People would rather write Thread::spawn(move|| ...) and not Thread::spawn(Thunk::new(move|| ...)) (etc).
  2. Eventually, once Box<FnOnce()> works properly, Thunk and Invoke may become deprecated. If this were to happen, your public API would be unaffected.

Basically, the idea is to follow the “thin wrapper” pattern that I showed earlier for hiding virtual dispatch. If you recall, I gave the example of a function foo that wished to use virtual dispatch internally but to hide that fact from its clients. It did so by creating a thin wrapper API that just called into another API, performing the object coercion:

fn foo<F>(mut hashfn: F)
    where F : FnMut(&String) -> uint
{
    foo_obj(&mut hashfn)
}

fn foo_obj(hashfn: &mut FnMut(&String) -> uint)
{...}

The idea with Invoke is similar. The public APIs are generic APIs that accept any FnOnce value. These just turnaround and wrap that value up into an object. Here the problem is that while we would probably prefer to use a Box<FnOnce()> object, we can’t because FnOnce is not (currently) object-safe. Therefore, we use the trait Invoke (I’ll show you how Invoke is defined shortly, just let me finish this example):

pub fn spawn<F>(taskbody: F)
    where F : FnOnce(), F : Send
{
    spawn_inner(box taskbody)
}

fn spawn_inner(taskbody: Box<Invoke+Send>)
{
    ...
}

The Invoke trait in the standard library is defined as:

trait Invoke<A=(),R=()> {
    fn invoke(self: Box<Self>, arg: A) -> R;
}

This is basically the same as FnOnce, except that the self type is Box<Self>, and not Self. This means that Invoke requires allocation to use; it is really tailored for object types, unlike FnOnce.

Finally, we can provide a bridge impl for the Invoke trait as follows:

impl<A,R,F> Invoke<A,R> for F
    where F : FnOnce(A) -> R
{
    fn invoke(self: Box<F>, arg: A) -> R {
        let f = *self;
        f(arg)
    }
}

This impl allows any type that implements FnOnce to use the Invoke trait.

High-level summary

Here are the points I want you to take away from this post:

  1. As a library consumer, the latest changes mostly just mean replacing proc() with move|| (sometimes move|:| if there is no surrounding context).
  2. As a library author, your public interface should be generic with respect to one of the Fn traits. You can then convert to an object internally to use virtual dispatch.
  3. Because Box<FnOnce()> doesn’t currently work, library authors may want to use another trait internally, such as std::thunk::Invoke.

I also want to emphasize that a lot of the nitty gritty details in this post are transitionary. Eventually, I believe we can reach a point where:

  1. It is never (or virtually never) necessary to declare Fn vs FnMut vs FnOnce explicitly.
  2. We can frequently (though not always) infer the keyword move.
  3. Box<FnOnce()> works, so Invoke and friends are not needed.
  4. The choice between static and virtual dispatch can be changed without affecting users and without requiring wrapper functions.

I expect the inference improvements to land before 1.0. Fixing the final two points is harder, so we will have to see where it falls on the schedule, but if it cannot be done for 1.0 then I would expect to see those changes shortly thereafter.

Jared WeinThe Bugs Blocking In-Content Prefs, part 2

Season's greetings

At the beginning of November I published a blog post with the list of bugs that are blocking in-content prefs from shipping. Since that post, quite a few bugs have been fixed and we figured out an approach for fixing most of the high-contrast bugs.

As in the last post, bugs that should be easy to fix for a newcomer are highlighted in yellow.

Here is the new list of bugs that are blocking the release:

The list is now down to 16 bugs (from 20). In the meantime, the following bugs have been fixed:

  • Bug 1022578: Can’t tell what category is selected in about:preferences when using High Contrast mode
  • Bug 1022579: Help buttons in about:preferences have no icon when using High Contrast mode
  • Bug 1012410: Can’t close in-content cookie exceptions dialog
  • Bug 1089812: Implement updated In-content pref secondary dialogs

Big thanks goes out to Richard Marti and Tim Nguyen for fixing the above mentioned bugs as well as their continued focus on helping to bring the In-Content Preferences to the Beta and Release channels.


Tagged: firefox, planet-mozilla, ux

Lucas RochaLeaving Mozilla

I joined Mozilla 3 years, 4 months, and 6 days ago. Time flies!

I was very lucky to join the company a few months before the Firefox Mobile team decided to rewrite the product from scratch to make it more competitive on Android. And we made it: Firefox for Android is now one of the highest rated mobile browsers on Android!

This has been the best team I’ve ever worked with. The talent, energy, and trust within the Firefox for Android group are simply amazing.

I’ve thoroughly enjoyed my time here but an exciting opportunity outside Mozilla came up and I decided to take it.

What’s next? That’s a topic for another post ;-)

Will Kahn-GreeneInput: New feedback form

Since the beginning of 2014, I've been laying the groundwork to rewrite the feedback form that we use on Input.

Today, after a lot of work, we pushed out the new form! Just in time for Firefox 34 release.

This blog post covers the circumstances of the rewrite.

Why?

In 2011, James, Mike and I rewrote Input from the ground up. In order to reduce the amount of time it took to do that rewrite, we copied a lot of the existing forms and styles including the feedback forms. At that time, there were two: one for desktop and one for mobile. In order to avoid a translation round, we kept all the original strings of the two forms. The word "Firefox" was hardcoded in the strings, but that was fine since at the time Input only collected feedback for Firefox.

In 2013, in order to reduce complexity on the site because there's only one developer (me), I merged the desktop and mobile forms into one form. In order to avoid a translation round, I continued to keep the original strings. The wording became awkward and the flow through the form wasn't very smooth. Further, the form wasn't responsive at all, so it worked ok on desktop machines but poorly at other viewport sizes.

2014 rolled around and it was clear Input was going to need to branch out into capturing feedback for multiple products---some of which were not Firefox. The form made this difficult.

Related, the smoketest framework I wrote in 2014 struggled with testing the form accurately. I spent some time tweaking it, but a simpler form would make smoketesting a lot easier and less flakey.

Thus over the course of 3 years, we had accumulated the following problems:

  1. The flow through the form felt awkward, instructions weren't clear and information about what data would be public and what data would be private wasn't clear.
  2. Strings had "Firefox" hardcoded and wouldn't support multiple products.
  3. The form wasn't responsive and looked/behaved poorly in a variety of situations.
  4. The form never worked in right-to-left languages and possibly had other accessibility issues.
  5. The architecture didn't let us experiment with the form---tweaking the wording, switching to a more granular gradient of sentiment, capturing other data, etc.

Further, we were seeing many instances of people putting contact information in the description field and there was a significant amount of dropoff.

I had accrued the following theories:

  1. Since the email address is on the third card, users would put their email address in the description field because they didn't know they could leave their contact information later.
  2. Having two cards would reduce the amount of drop-off and unfinished forms compared to three cards.
  3. Having simpler instruction text would reduce the amount of drop-off.

Anyhow, it was due for an overhaul.

So what's changed?

I've been working on the overhaul for most of 2014, but did the bulk of the work in October and November. It has the following changes:

  1. The new form is shorter and clearer text-wise and design-wise.
  2. It consists of two cards: one for capturing sentiment and one for capturing details about that sentiment.
  3. It clearly delineates data that will be public from data that will be kept private.
  4. It works with LTR and RTL languages (If that's not true, please open a bug.)
  5. It fixes some accessibility issues. (If you find any, please open a bug.)
  6. It uses responsive design, mobile first. Thus it was designed for mobile devices and then scaled to desktop-sized viewports.
  7. It's smaller in kb size and requires fewer HTTP requests.
  8. It's got a better architecture for future development.
  9. It doesn't have "Firefox" hardcoded anymore.
  10. It's simpler so the smoketests work reliably now.

The old Input feedback form.

The new Input feedback form.

Note: Showing before and after isn't particularly exciting since this is only the first card of the form in both cases.

Going forward

The old and new forms were instrumented in various ways, so we'll be able to analyze differences between the two. Particularly, we'll be able to see if the new form performs worse.

Further, I'll be checking the data to see if my theories hold true especially the one regarding why people put contact data in the description.

There are a few changes in the queue that we want to make over the course of the next 6 months. Now that the new form has landed, we can start working on those.

Even if there are problems with the new form, we're in a much better position to fix them than we were before. Progress has been made!

Take a moment---try out the form and tell us YOUR feedback

Have you ever submitted feedback? Have you ever told Mozilla what you like and don't like about Firefox?

Take a moment and fill out the feedback form and tell us how you feel about Firefox.

Thanks, etc

I've been doing web development since 1997 or so. I did a lot of frontend work back then, but I haven't done anything serious frontend-wise in the last 5 years. Thus this was a big project for me.

I had a lot of help: Ricky, Mike and Rehan from the SUMO Engineering team were invaluable reviewing code, helping me fix issues and giving me a huge corpus of examples to learn from; Matt, Gregg, Tyler, Ilana, Robert and Cheng from the User Advocacy team who spent a lot of time smoothing out the rough edges of the new form so it captures the data we need; Schalk who wrote the product picker which I later tweaked; Matej who spent time proof-reading the strings to make sure they were consistent and felt good; the QA team which wrote the code that I copied and absorbed into the current Input smoketests; and the people who translated the user interface strings (and found a bunch of issues) making it possible for people to see this form in their language.

Brian R. BondySQL on Khan Academy enabled by SQLite, sqljs, asm.js and Emscripten

Originally the computer programming section at Khan Academy only focused on learning JavaScript by using ProcessingJS. This still remains our biggest environment and still has lots of plans for further growth, but we recently generalized and abstracted the whole framework to allow for new environments.

The first environment we added was HTML/CSS, which was announced here. You can try it out here. We have also already created a lot of content for learning how to make webpages.

SQL on Khan Academy

We recently also experimented with the ability to teach SQL on Khan Academy. This wasn't a near term priority for us, so we used our hack week as an opportunity to bring an SQL environment to Khan Academy.

You can try out the SQL environment here.

Implementation

To implement the environment, one would think of WebSQL, but there are a couple of major browser vendors (Mozilla and Microsoft) who do not plan to implement it, and the W3C stopped working on the specification at the end of 2010.

Our implementation of SQL is based on SQLite, which is compiled down to asm.js by Emscripten and packaged into sqljs.

All of the technologies I just mentioned, other than SQLite (which is sponsored by Mozilla), are Mozilla-based projects, in particular largely thanks to Alon Zakai.

The environment

The environment looks like this: the entire code for creating, inserting, updating, and querying a database occurs in a single editor. Behind the scenes, we re-create the entire state of the database and result sets on each code edit. Things run smoothly enough in the browser that you don't notice it happening.

Unlike many online SQL tutorials, this environment is entirely client side. It has no limitations on what you can do, and if we wanted, we could even let you export the SQL databases you create.

One of the other main highlights is that you can modify the inserts in the editor, and see the results in real time without having to run the code. This can lead to some cool insights on how changing data affects aggregate queries.

Hour of Code

Unlike the HTML/CSS work, we don’t have a huge number of tutorials created, but we do have some videos, coding talk-throughs, challenges and a project set up in a single tutorial, which we’ll be using for one of our Hour of Code offerings: Hour of Databases.

Doug BelshawToward The Development of a Web Literacy Map: Exploring, Building, and Connecting Online

The title of this post is also the title of a presentation I’m giving at the Literacy Research Association conference next week. The conference has the theme ‘The Dialogic Construction of Literacies’ – so this session is a great fit. It’s been organised by Ian O'Byrne and Greg McVerry, both researchers and Mozilla contributors.

Tiger

I’m cutting short my participation in the Mozilla work week in Portland, Oregon next week to fly to present at this conference. This is not only because I think it’s important to honour prior commitments, but because I want to encourage more literacy researchers to get involved in developing the Web Literacy Map.

I’ve drafted the talk in the style in which I’d deliver it. The idea isn’t to read it, but to use this to ensure that my presentation is backed up by slides, rather than vice-versa. I’ll then craft speaker notes to ensure I approximate what’s written here.

Click here to read the text of the draft presentation

I’d very much appreciate your feedback. Here are the specific things I’m looking for answers to:

  • Gaps - what have I missed?
  • Structure - does it 'flow’?
  • Red flags - is there anything in there liable to cause problems/issues?

I’ve created a thread on the #TeachTheWeb discussion forum for your responses - or you can email me directly: doug@mozillafoundation.org

Thanks in advance! And remember, it doesn’t matter how new you are to the Web Literacy Map or the process of creating it. I’m interested in the views of newbies and veterans alike.

\(@ ̄∇ ̄@)/

Andrea MarchesiniSwitchy 0.9 released

Breaking news: I finally had time to update Switchy to the latest addon-sdk 1.7, and version 0.9.x is now restart-less!

What is Switchy? Switchy is an add-on for Firefox to better manage several profiles. This add-on allows the user to create, rename, delete and open Firefox profiles with just a click.

By using Switchy, you can open several profiles at the same time: an important feature for those who are concerned about security and privacy. For instance, you can have a separate profile for Facebook and other social networks while browsing other websites, or have a separate profile for Google so you are not always logged in.

Don’t we have some similar add-ons already? There are other similar add-ons, but Switchy has extra features. You can assign websites to be exclusive to particular profiles. This means that when I try to open one of the websites saved in a specific profile from profile X, Switchy allows me to “switch” to the correct profile with just one click. For example, if I open ‘Facebook’ from my default profile, Switchy immediately offers me the opportunity to open the correct profile where I am logged in on Facebook - which is nice!

What is new in version 0.9? Restart-less, and a new awesome UI for the Switchy panel.

I hope you enjoy it!

François MarierHiding network disconnections using an IRC bouncer

A bouncer can be a useful tool if you rely on IRC for team communication and instant messaging. The most common use of such a server is to be permanently connected to IRC and to buffer messages while your client is disconnected.

However, that's not what got me interested in this tool. I'm not looking for another place where messages accumulate and wait to be processed later. I'm much happier if people email me when I'm not around.

Instead, I wanted to do to irssi what mosh did to ssh clients: transparently handle and hide temporary disconnections. Here's how I set everything up.

Server setup

The first step is to install znc:

apt-get install znc

Make sure you get the 1.0 series (in jessie or trusty, not wheezy or precise) since it has much better multi-network support.

Then, as a non-root user, generate a self-signed TLS certificate for it:

openssl req -x509 -sha256 -newkey rsa:2048 -keyout znc.pem -nodes -out znc.crt -days 365

and make sure you use something like irc.example.com as the subject name, since that is the hostname you will be connecting to from your IRC client.

Then install the certificate in the right place:

mkdir ~/.znc
mv znc.pem ~/.znc/
cat znc.crt >> ~/.znc/znc.pem

Once that's done, you're ready to create a config file for znc using the znc --makeconf command, again as the same non-root user:

  • create separate znc users if you have separate nicks on different networks
  • use your nickserv password as the server password for each network
  • enable ssl
  • say no to the chansaver and nickserv plugins

Finally, open the IRC port (tcp port 6697 by default) in your firewall:

iptables -A INPUT -p tcp --dport 6697 -j ACCEPT

Client setup (irssi)

On the client side, the official documentation covers a number of IRC clients, but the irssi page was quite sparse.

Here's what I used for the two networks I connect to (irc.oftc.net and irc.mozilla.org):

servers = (
  {
    address = "irc.example.com";
    chatnet = "OFTC";
    password = "fmarier/oftc:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  },
  {
    address = "irc.example.com";
    chatnet = "Mozilla";
    password = "francois/mozilla:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  }
);

Of course, you'll need to copy your znc.crt file from the server into ~/.irssi/certs/znc.crt.

Make sure that you're no longer authenticating with the nickserv from within irssi. That's znc's job now.

Wrapper scripts

So far, this is a pretty standard znc+irssi setup. What makes it work with my workflow is the wrapper script I wrote to enable znc before starting irssi and then prompt to turn it off after exiting:

#!/bin/bash
ssh irc.example.com "pgrep znc || znc"
irssi
read -p "Terminate the bouncer? [y/N] " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
  ssh irc.example.com killall -sSIGINT znc
fi

Now, instead of typing irssi to start my IRC client, I use irc.

If I'm exiting irssi before commuting or because I need to reboot for a kernel update, I keep the bouncer running. At the end of the day, I say yes to killing the bouncer. That way, I don't have a backlog to go through when I wake up the next day.

Gervase MarkhamNot That Secret, Actually…

(Try searching Google Maps for “Secret Location”… there’s one in Norway, one in Toronto, and two in Vancouver!)

Karl DubostFix Your Flexbox Web site

Web compatibility issues take many forms. Some are really hard to solve and there are sound business reasons behind them. On the other hand, some Web compatibility issues are really easy to fix, with the benefit of opening the Web site up to a bigger potential market. CSS Flexbox is one of those. I have written about it in the past. Let's do another practical demonstration of how to fix some of these flexbox issues.

8 Lines of CSS Code

Spoiler alert: This is the final result before and after fixing the CSS.

Screenshots of Hao123 site

How did we do it? Someone had reported that the layout was broken on hao123.com on Firefox OS (Mobile). Two things are happening here. First of all, because Hao123 was not sending the mobile version to Firefox OS, we relied on User Agent overriding. Faking the Firefox Android user agent, we had access to the mobile version. Unfortunately, this version is partly tailored for -webkit- CSS properties.

Inspecting the stylesheets with the developer tools, we can easily discover the culprit.

 grep -i "display:-webkit" hao123-old.css
    display:-webkit-box;
    display:-webkit-box;
    display:-webkit-box
    display:-webkit-box;
    display:-webkit-box;
    display:-webkit-box
    display:-webkit-box;
 grep -i "flex" hao123-old.css
    -webkit-box-flex:1;

So I decided to fix it by adding

  1. display:flex; for each display:-webkit-box;
  2. flex-grow: 1; for -webkit-box-flex:1;

The amazing thing with this kind of fix is that the site dynamically fixes itself in your viewport as you go. You are literally sculpting the page. And if the company asks why they should bother about it? Because for something that will take around 10 minutes to fix, they will suddenly have a much bigger coverage of devices… which means users… which means market share.

Guides For Fixing Web Compatibility Issues

I started a repository to help people fix their own Web site with the most common issues in Web Compatibility. Your contribution to the project is more than welcome.

Otsukare.

Hannah KaneHow to Mofo

OpenMatt and I have been talking about the various ways of working at Mofo, and we compiled this list of what we think works best. What do y’all Mofos think?

When starting a new project:

  • Clearly state the problem or goal. Don’t jump ahead to the solution. Ameliorating the problem is what you’ll measure success against, not your ability to implement an arbitrary solution.
  • Explicitly state assumptions. And, whenever possible, test those assumptions before you build anything. You may have assumptions about the nature of the problem you’re trying to solve, who’s experiencing it, or your proposed solution.
  • Have clear success metrics. How will you know if you’re winning? Do you have the instruments you need to measure success?
  • Determine what resources you need. Think about design, development, content, engagement, evaluation, and ongoing maintenance. We’re working on improving the ways we allocate resources throughout the organization, but to start, be clear about what resources your project will need.
  • Produce a project brief. Detail all of the above in a single document. (Example templates here and here.) Use the project brief when you…
  • …Have a project kick-off meeting. Invite *all* the stakeholders to get involved early.

Communication:

  • Have a check-in plan. Will you have daily check-ins? Weekly email updates? How are you checking in and holding each other accountable?
  • Build a workbench and keep it updated.  We recommend a wiki page that will serve as a one-stop shop for anyone needing information about the project. Things to include: links to project briefs and notes, logistics for meetings, a timeline, a list of who’s involved, and, of course, bugs! Examples here, here, and here.
  • Put your notes in one spot.  A single canonical pad for notes and agendas. We don’t need to create a  new pad every time you have a meeting or a thought! That makes them very hard to track and find later. Examples here and here.

Doing the do:

  • Plan in two-week heartbeats. This helps us stay on track and makes it clear what the priorities are. Speaking of priorities…
  • Learn the Fine Art of Prioritizing. Hint: Not everything can be P1. The product owner or project manager should rank tasks in order of value added. Remember: prioritization is part of managing workflow. It may be true that all or most of the tasks are required for a successful launch, but that doesn’t help a developer or designer who’s trying to decide what to work on next.
  • Work with your friendly neighborhood Tactical Priorities Syndicate. The name sounds scary, but they’re here to serve you. They meet weekly to get your priorities into each two-week heartbeat process. https://wiki.mozilla.org/Webmaker/TPS

Update: To hack on the next version of this, please visit http://workopen.org/mofo (thanks to Doug for the suggestion!)


Roberto A. VitilloClustering Firefox hangs

Jim Chen recently implemented a system to collect stacktraces of threads running some code for more than 500ms. A summary of the aggregated data is displayed in a nice dashboard in which the top N aggregated stacks are shown according to different filters.

I have looked at a different way to group the frames that would help us identify the culprits of main-thread hangs, aka jank. The problem with aggregating stackframes and looking at the top N is that there is a very long tail of stacks that are not considered. It might very well be that in that tail some important patterns could be lurking that we are missing.

So I tried different clustering techniques until I settled with the very simple solution of aggregating the traces by their last frame. Why the last frame? When I used k-means to cluster the traces I noticed that, for many of the more interesting clusters the algorithm found, most stacks had the last frame in common, e.g.:

  • Startup::XRE_Main, (chrome script), Timer::Fire, nsRefreshDriver::Tick, PresShell::Flush, PresShell::DoReflow
  • Startup::XRE_Main, Timer::Fire, nsRefreshDriver::Tick, PresShell::Flush, PresShell::DoReflow
  • Startup::XRE_Main, EventDispatcher::Dispatch, (content script), PresShell::Flush, PresShell::DoReflow

Aggregating by the last frame yields clusters that are big enough to be considered interesting in terms of number of stacktraces and are likely to explain the most common issues our users experience.

Currently on Aurora, the top 10 meaningful offending main-thread frames are in order of importance:

  1. PresShell::DoReflow accounts for 5% of all stacks
  2. nsCycleCollector::collectSlice accounts for 4.5% of all stacks
  3. nsJSContext::GarbageCollectNow accounts for 3% of all stacks
  4. IPDL::PPluginInstance::SendPBrowserStreamConstructor accounts for 3% of all stacks
  5. (chrome script) accounts for 3% of all stacks
  6. filterStorage.js (Adblock Plus?) accounts for 2.7% of all stacks
  7. nsStyleSet::FileRules accounts for 2.7% of all stacks
  8. IPDL::PPluginInstance::SendNPP_Destroy accounts for 2% of all stacks
  9. IPDL::PPluginScriptableObject::SendHasProperty accounts for 2% of all stacks
  10. IPDL::PPluginScriptableObject::SendInvoke accounts for 1.7% of all stacks

Even without showing sample stacks for each cluster, there is some useful information here. The elephants in the room are clearly plugins; or should I say Flash? But just how much do “plugins” hurt our responsiveness? In total, plugin related traces account for about 15% of all hangs. It also seems that the median duration of a plugin hang is not different from a non-plugin one, i.e. between 1 and 2 seconds.

But just how often does a hang occur during a session? Let’s have a look:

hangs

The median number of hangs for a session amounts to 5; the mean is not that interesting as there are big outliers that skew the data. Also noteworthy is that the median duration of a session is about 16 minutes.

As one would expect, the number of hangs tend to increase as the duration of a session does:

hangsvsuptime

The analysis was run on a week’s worth of data for Aurora (over 50M stackframes) and similar results were obtained in previous weeks.

There is some work in progress to improve the status quo. Aaron Klotz’s formidable async plugin initialization is going to eliminate frame 4 and he might tackle frame 8 in the future. Furthermore, a recent improvement in cycle collection is hopefully going to reduce the impact of frame 2.


Mozilla FundraisingOfficial Mozilla Gear Is Now Open for Business

Today is the day: The new Official Mozilla Gear website is open for business: https://gear.mozilla.org/ Official Mozilla Gear is the public site where anyone can buy branded gear for their own personal use or to give to loved ones.  Consider … Continue reading

Mozilla ThunderbirdThunderbird Reorganizes at 2014 Toronto Summit

In October 2014, 22 active contributors to Thunderbird gathered at the Mozilla office in Toronto to discuss the status of Thunderbird, and plan for the future.

Toronto Contributors at 2014 Toronto Summit

Thunderbird contributors gather in Toronto to plan the future.

As background, Mitchell Baker, Chair of the Mozilla Foundation, posted in July 2012 that Mozilla would significantly reduce paid staff dedicated to Thunderbird, and asked community volunteers to move Thunderbird forward. Mozilla at that time committed several paid staff to maintain Thunderbird, each working part-time on Thunderbird but with a main commitment to other Mozilla projects. The staff commitment in total was approximately one full-time equivalent.

Over the last two years, those individuals had slowly reduced their commitment to Thunderbird, yet the formal leadership of Thunderbird remained with these staff. By 2014 Thunderbird had reached the point where nobody was effectively in charge, and it was difficult to make important decisions. By gathering the key active contributors in one place, we were able to make real decisions, plan our future governance, and move to complete the transition from being staff-led to community-led.

At the Summit, we made a number of key decisions:

  • A group of seven individuals were elected to comprise a Thunderbird Council with the authority to make decisions affecting Thunderbird. I (Kent James) am currently the Chair of this council.
  • For our next major release, Thunderbird 38 due in May 2015, we set this roadmap:
    • Folders: allow >4GByte mbox folders, plus finish support for maildir
    • Instant Messaging: Support WebRTC
    • Calendaring: Merge Lightning into Thunderbird as a shipped addon
    • Accounts: Merge the New Account Types binary addon into core, allowing new account types to be defined using addons in the future.
    • IMAP: support OAUTH authorization in GMail.
  • We agreed that Thunderbird needs to have one or more full-time, paid staff to support shipping a stable, reliable product, and allow progress to be made on frequently-requested features. To this end, we plan to appeal directly to our users for donations.
  • The Thunderbird active contributors are proud to be part of Mozilla, expect to remain part of Mozilla for the foreseeable future, and believe we have an important role to play in fulfilling the goals of the Mozilla Manifesto.

There is a lot of new energy in Thunderbird since the Summit, a number of people are stepping forward to take on some critical roles, and we are looking forward to a great next release. More help is always welcome though!

Brian R. BondyAutomated end to end testing at Khan Academy using Gecko

Developers at Khan Academy are responsible for shipping new stuff they create to khanacademy.org as it's ready. As a whole, the site is deployed several times per day. Testing deploys of khanacademy.org can take up a lot of time.

We have tons of JavaScript and Python unit tests, but they do not catch various errors that can only happen on the live site, such as Content Security Policy (CSP) errors.

We recently deployed a new testing environment for end to end testing which will result in safer deploys. End to end testing is not meant to replace manual testing at deploy time completely, but over time, it will reduce the amount of time taken for manual testing.


Which types of errors do the tests catch?

The end to end tests catch things like missing resources on pages, JavaScript errors, and CSP errors. They do not replace unit tests, and unit tests should be favoured when it's possible.


Which frameworks are we using?


We chose to implement the end to end testing with CasperJS powered by the SlimerJS engine. Actually we even have one more abstraction on top of that so that tests are very simple and clean to write.

SlimerJS is similar to and mostly compatible with the better-known PhantomJS, but SlimerJS is based on Firefox's Gecko rendering engine instead of WebKit. At the time of this writing, it's based on Gecko 33. CasperJS is a set of higher level APIs and can be configured to use PhantomJS or SlimerJS.

The current version of PhantomJS is based on WebKit and is too far behind to be useful for end to end tests of our site yet. There's a newer version of PhantomJS coming, but it's not ready yet. We also considered using Selenium to automate browsers to do the testing, but it didn't meet our objectives for various reasons.


What do the tests do?

They test the actual live site. They can load a list of pages, run scripts on the pages, and detect errors. The scripts emulate a user of the site who fills out forms, logs in, clicks things, waits for things, etc.

We also have scripts for creating and saving programs in our CS learning environment, doing challenges, and we'll even have some for playing videos.


Example script

Here's an example end-to-end test script that logs in, and tests a couple pages. It will return an error if there are any JavaScript errors, CSP errors, network errors, or missing resources:

EndToEnd.test("Basic logged in page load tests", function(casper, test) {
    Auth.thenLogin(casper);
    [
        [ "Home page", "/"],
        [ "Mission dashboard", "/mission/cc-sixth-grade-math"]
    ].map(function(testPage) {
        thenEcho(casper, "Loading page: " + testPage[0]);
        KAPageNav.thenOpen(casper, testPage[1]);
    });
    Auth.thenLogout(casper);
});

When are tests run?

Developers are currently prompted to run the tests when they do a deploy, but we'll be moving this to run automatically from Jenkins during the deploy process. Tests are run both on the staged website version before it is set as the default, and after it is set as the default version.

The output of tests looks like this:

Henrik SkupinFirefox Automation report – week 39/40 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 39 and 40.

Highlights

One of our goals for last quarter was to get locale testing enabled in Mozmill-CI for each and every supported locale of Firefox beta and release builds. So Cosmin investigated the timing and other possible side-effects, which could happen when you test about 90 locales across all platforms! The biggest change we had to make was to the retention policy for logs from executed builds, due to disk space issues. We now delete the logs not only after a maximum number of builds, but also after 3 full days. That gives us enough time to investigate test failures. Once that was done we were able to enable the remaining 60 locales. For details of all the changes necessary, you can have a look at the mozmill-ci pushlog.

During those two weeks Henrik spent his time on finalizing the Mozmill update tests to support the new signed builds on OS X. Once that was done he also released the new mozmill-automation 2.0.8.1 package.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 39 and week 40.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 39 and week 40.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1100942] Attachment links in request.cgi should go to the attachment and not default to &action=edit
  • [1101659] Remove curtisk from the auto-cc of the sec portion of the moz project review
  • [1102420] Remove “Firefox Screen Sharing Whitelist Submission” link from new-bug page
  • [1103069] Please fix the colo-trip field for Infrastructure and Operations :: DCops
  • [1102229] custom css stylesheets are not loaded if CONCATENATE_ASSETS is false
  • [1103837] Clicking on a “Bug Bounty” attachment should edit that attachment with the bug-bounty form

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Nicholas NethercoteTwo suggestions for the Portland work week

Mozilla is having a company-wide work week in Portland next week. It’s extremely rare to have this many Mozilla employees in the same place at the same time, and I have two suggestions.

  • Write down a list of people that you want to meet. This will probably contain people you’ve interacted with online but not in person. And send an email to everybody on that list saying “I would like to meet you in person in Portland next week”. I’ve done this at previous work weeks and it has worked well. (And I did it again earlier today.)
  • During the week, don’t do much work you could do at home. This includes most solo coding tasks. If you’re tempted to do such work, stand up and try to find someone to talk to (or listen to) who you couldn’t normally talk to easily. (This is a rule of thumb; if a zero-day security exploit is discovered in your code on Tuesday morning, yes, you should fix it.) Failing that, gee, you might as well do something that you can only do in Portland.

That’s it. Have a great week!

David BoswellRadical Participation Idea: Slow Down

The Portland Coincidental Work Week is next week and we’ll be working on our plans for 2015. One of the things we want to include in our planning is Mitchell’s question: what does radical participation look like for Mozilla today?

Everyone who is interested in this question is welcome to join us next Thursday and Friday for the Participation work week. Please come with ideas you have about this question. Here is one idea I’m thinking about that feels like an important part of a radical participation plan.

Slow Down

I’ve worked at small software start-ups and I’ve worked at large volunteer-based organizations. There are many differences between the two. The speed that information reaches everyone is a major difference.

For example, I worked at a small start-up called Alphanumerica. There were a dozen of us all working together in the same small space. Here’s a picture of me in my corner (to give you an idea of how old this photo is, it was taken on a digital camera that stored photos on a floppy disk).

MVC-017F

To make sure everyone knew about changes, you could get everyone’s attention and tell them. People could then go back to work and everyone would be on the same page. In this setting, moving fast and breaking things works.

Information doesn’t spread this quickly in a globally distributed group of tens of thousands of staff and volunteers. In this setting, if things are moving too fast then no one is on the same page and coordinating becomes very difficult.

communities_map

Mozilla is not a small start-up where everyone is physically together in the same space. We need to move fast though, so how can we iterate and respond quickly and keep everyone on the same page?

Slow Down To Go Fast Later

It might seem odd, but there is truth to the idea that you can slow down now in order to go faster later. There is even research that backs this up. There’s a Harvard Business Review article on this topic worth reading—this paragraph covers the main take-aways:

In our study, higher-performing companies with strategic speed made alignment a priority. They became more open to ideas and discussion. They encouraged innovative thinking. And they allowed time to reflect and learn. By contrast, performance suffered at firms that moved fast all the time, focused too much on maximizing efficiency, stuck to tested methods, didn’t foster employee collaboration, and weren’t overly concerned about alignment

For Mozilla, would radical participation look like setting goals around alignment and open discussions? Would it be radical to look at other large volunteer-based organizations and see what they optimize for instead of using start-ups as a model?

I’m very interested to hear what people think about the value of slowing down at Mozilla as well as hearing other ideas about what radical participation looks like. Feel free to comment here, post your own blog and join us in Portland.


Armen ZambranoPinning mozharness from in-tree (aka mozharness.json)

Since mozharness came around 2-3 years ago, we have had the same issue where we test a mozharness change against the trunk trees, land it and get it backed out because we regress one of the older release branches.

This is due to the nature of the mozharness setup where once a change is landed all jobs start running the same code and it does not matter on which branch that job is running.

I have recently landed some code that is now active on Ash (and soon on Try) that will read a manifest file pointing your jobs to the right mozharness repository and revision. We call this process "pinning mozharness". In other words, we fix an external factor of our job execution.

This will allow you to point your Try pushes to your own mozharness repository.

In order to pin your jobs to a repository/revision of mozharness you have to change a file called mozharness.json which indicates the following two values:
  • "repo": "https://hg.mozilla.org/build/mozharness",
  • "revision": "production"


This is a concept similar to the talos.json file introduced earlier, which locks every job to a specific revision of talos. The original version of it landed in 2011.

Even though we have had a similar concept since 2011, that doesn't mean it was as easy to make it happen for mozharness. Let me explain a bit why:

  • For talos, mozharness has been checking out the right revision of talos.
  • In the case of mozharness, we can't make mozharness check itself out.
    • Well, we could but it would be a bigger mess
    • Instead we have made buildbot ScriptFactory be a bit more flexible
Coming up:
  • Enable on Try
  • Free up Ash and Cypress
    • They have been used to test custom mozharness patches and the default branch of Mozharness (pre-production)
Long term:
  • Enable the feature on all remaining Gecko trees
    • We would like to see this run at scale for a bit before rolling it out
    • This will allow mozharness changes to ride the trains
If you are curious, the patches are in bug 791924.

Thanks to Rail for all his patch reviews and to Jordan for sparking me to tackle it.



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Tristan NitotMonday's odds and ends (En vrac du lundi)

Christian HeilmannDiversifight – a talk at the diversity hackathon at Spotify Sweden

Yesterday afternoon I presented at the “Diversify” hackathon in the offices of Spotify in Stockholm, Sweden. The event was aimed at increasing diversity in IT by inviting a group of students that represented a good mix of gender and ethnic background to work together on hacks with music and music data. There was no strict competitive aspect to this hackathon and no prizes or winners – it was all about working together and seeing how a mixed group can achieve better results.

speaking at diversify
the earth needs rebels
Photos by Sofie Lindblom and Ejay Janis

When I was asked to speak at an event about diversity in IT, I was flattered but also confused. Being very white and male I don’t really have a chance to speak from the viewpoint of a group that brings diversity to the mix. But I do have a lot of experience and I have looked into the matter quite a bit. Hence I put together a talk that covers a few things I see going wrong, a few ideas and tools we have to make things better by bettering ourselves, and a reminder that the world of web development used to be much more diverse and we lost these opportunities. In essence, the break-neck speed of our market, the hype of a press and of events that live on overselling the amazing world of startups, and the work environments we put together all seem to actively discourage diversity. And that is what I wanted the students to consider and fight once they go out and start working in various companies.

Diversity is nothing we can install – it is something we need to fight for. And it makes no sense if only those belonging to disadvantaged groups do that.

This talk is pretty raw and unedited, and it is just a screencast. I would love to give a more polished version of it soon.

You can watch the screencast on YouTube.

The slides are available on Slideshare.

Resources I covered in the talk:

The feedback was amazing, students really liked it and I am happy I managed to inspire a few people to think deeper about a very important topic.

A big thank you to the Spotify Street Team and especially Caroline Arkenson for having me over (and all the hedgehog photos in the emails).

Yunier José Sosa VázquezFirefox's new “Forget” button

Protect your privacy with the new Forget button, available only in the latest version of Firefox. In just a few clicks you can erase your most recent history and personal information (from the last five minutes up to the last 24 hours) without touching the rest. The Forget button is very useful if you use a public computer and want to wipe your information, or if you land on a dubious website and need to get out of there quickly.

Olvidar

If you don't find this button in the Firefox toolbar, go to the Menu  menu, choose menuPanel-customize Customize and drag the Forget button wherever you want. There you can also set up your browser however you like, removing buttons and adding them to the Menu or to the toolbar.

You can get the latest version of Firefox from our Zona de Descargas for Windows, Mac, Linux and Android.

Staś MałolepszyMeet.js talk on Clientside localization in Firefox OS

Firefox OS required a fast and lean localization method that could scale up to 70 languages, cater to the needs of hundreds of developers worldwide all speaking different languages and support a wide spectrum of devices with challenging hardware specs.

At the end of September, I went to Poznań to speak about localization technology in Firefox OS at Meet.js Summit. In my talk I discussed how we had been able to create a localization framework which embraces new Web technologies like Web components and mutation observers, how we'd come up with new developer tools to make localization work easier and what exciting challenges lay ahead of us.

Botond BalloTrip Report: C++ Standards Meeting in Urbana-Champaign, November 2014

Summary / TL;DR

Project status at a glance:

  • C++14: Finalized and approved; will be published any day now.
  • C++17: Some minor features so far. Many ambitious features are being explored. for (e : range) was taken out.
  • Networking TS: Sockets library based on Boost.ASIO moving forward.
  • Filesystems TS: On track to be published early 2015.
  • Library Fundamentals TS: Contains optional, any, string_view and more. No major changes since last meeting. Expected 2015.
  • Library Fundamentals TS II: Follow-up to Library Fundamentals TS; will contain array_view and more. In early stage, with many features planned.
  • Array Extensions TS: Continues to be completely stalled. A new proposal was looked at but failed to gain consensus.
  • Parallelism TS: Progressing well. Expected 2015.
  • Concurrency TS: Progressing well. Expected 2015. Will have a follow-up, Concurrency TS II.
  • Transactional Memory TS: Progressing well. Expected 2015.
  • Concepts (“Lite”) TS: Progressing well. Expected 2015.
  • Reflection: Looking at two different proposals. Too early to say anything definitive.
  • Graphics: 2D Graphics TS based on cairo moving forward.
  • Modules: Microsoft and Clang have implementations at various stages of completeness. They are iterating on it and trying to converge on a design.
  • Coroutines: Proposals for both stackless and stackful variants will be developed, in a single TS.

Introduction

Last week I attended another meeting of the ISO C++ Standards Committee at the University of Illinois at Urbana-Champaign. This was the third and last Committee meeting in 2014; you can find my reports on the previous meetings here (February 2014, Issaquah) and here (June 2014, Rapperswil). These reports, particularly the Rapperswil one, provide useful context for this post.

The focus of this meeting was moving forward with the various Technical Specifications (TS) that are in progress, and looking ahead to C++17.

C++14

C++14 was formally approved as an International Standard in August when it passed its final ballot (the “DIS”, or Draft International Standard, ballot; see my Issaquah report for a description of the procedure for publishing a new language standard).

It will take another few weeks for ISO to publish the approved standard; it’s expected to happen before the end of the year.

C++17

Strategic Vision

With C++14 being approved, the Committee is turning its attention towards what its strategic goals are for the next revision of the language standard, C++17.

As I explained in my Rapperswil report, most major new features are targeted for standardization in two steps: first, as a Technical Specification (TS), an experimental publication vehicle with no backwards-compatibility requirements, to gain implementation and use experience; and then, by incorporation into an International Standard (IS), such as C++17.

Therefore, a significant amount of the content of C++17 is expected to consist of features being published as Technical Specifications in the near future. It’s not immediately clear which TS’s will be ready for inclusion in C++17; it depends on when the TS itself is published, and whether any concerns about it come up as it’s being implemented and used. Hopefully, at least the ones being published over the next year or so, such as Filesystems, Concepts, Parallelism, Library Fundamentals I, and Transactional Memory, will be considered for inclusion in C++17.

In addition, there are some major features that do not yet have a Technical Specification in progress which many hope will be in C++17: namely, Modules and Reflection. Due to the size and scope of these features, it is increasingly likely that the committee will deem it safer to standardize these as TS’s first as well, rather than targeting them directly at C++17. In this case, there may not be time for the additional step of gaining experience with the TS and merging it into the IS in time for C++17; however, it’s too early to know with any confidence at this point.

Minor Features

That said, C++17 will certainly contain some language and library features, and some smaller ones have already made it in. I mentioned a few in my Rapperswil report, but some new ones came out of this meeting:

  • Language features
    • The most notable and exciting feature in my books is folding expressions. These give you the ability to expand a parameter pack over a binary operator. For example, if Args is a non-type parameter pack of booleans, then Args &&... is a new expression which is the ‘and’ of all the booleans in the pack. All binary operators support this; for operators that have a logical identity element (e.g. 0 for addition), an empty pack is allowed and evaluates to that identity. (There is a short sketch illustrating this, along with a couple of the other items below, right after this list.)
    • Another notable change was not an addition, but a removal: the terse form of the range-based for loop, for (elem : range) (which would have meant for (auto&& elem : range)), was removed. (Technically, it was never added, because the C++ working draft was locked for additions in Rapperswil while the C++14 DIS ballot was in progress. However, there was consensus in the Evolution and Core Working Groups in Rapperswil to add it, and there was wording ready to be merged to the working draft as soon as the ballot concluded and it was unlocked for C++17 additions. That consensus disappeared when the feature was put up for a vote in front of full committee in Urbana.) The reason for the removal was that in for (elem : range), there is no clear indication that elem is a new variable being declared; if there already is a variable named elem in scope, one can easily get confused and think the existing variable is being used in the loop. Proponents of the feature pointed out that there is precedent for introducing a new name without explicit syntax for declaring it (such as a type) in generalized lambda captures ([name = init](){ ... } declares a new variable named name), but this argument was not found convincing enough to garner consensus for keeping the feature.
    • std::uncaught_exceptions(), a function that allows you to determine accurately whether a destructor is being called due to stack unwinding or not. There is an existing function, std::uncaught_exception() (note the singular) that was intended for the same purpose, but was inaccurate by design in some cases, as explained in the proposal. This is considered a language feature even though it’s exposed as a library function, because implementing this function requires compiler support.
    • Attributes for namespaces and enumerators. This fills a grammatical hole in the language, where most entities could have an attribute attached to them, but namespaces and enumerators couldn’t; now they can.
    • A shorthand syntax for nested namespace definition.
    • u8 character literals.
    • A piece of official terminology, “forwarding references”, was introduced for a particular use of rvalue references. Some educators have previously termed this use “universal references”, but the committee felt the term “forwarding references” was more accurate.
    • Allowing full constant expressions in non-type template arguments. This plugs a small hole in the language where the template arguments for certain categories of non-type template parameters were restricted to be of a certain form without good reason.
  • Library features
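
To make a few of the language features above concrete, here is a small standalone sketch (mine, not taken from the proposals themselves); it should compile as C++17, and the names demo, all_of_them and rollback_guard are purely illustrative:

#include <exception>

namespace demo::util {                        // shorthand nested namespace definition
    // Fold expression: expands the pack over &&.
    // An empty pack is allowed and evaluates to the identity of &&, i.e. true.
    template <typename... Args>
    constexpr bool all_of_them(Args... args) {
        return (args && ...);
    }

    // std::uncaught_exceptions() lets a destructor tell whether it is running
    // because a new exception is unwinding the enclosing scope.
    class rollback_guard {
        int exceptions_on_entry_ = std::uncaught_exceptions();
    public:
        ~rollback_guard() {
            if (std::uncaught_exceptions() > exceptions_on_entry_) {
                // leaving the scope via a new exception: roll back
            } else {
                // leaving the scope normally: commit
            }
        }
    };
}

static_assert(demo::util::all_of_them(true, true, true), "fold over a non-empty pack");
static_assert(demo::util::all_of_them(), "empty pack folds to the identity of &&");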

Evolution Working Group

As usual, I spent most of my time in the Evolution Working Group (EWG), which concerns itself with the long-term evolution of the core language. In spite of there being a record number of proposals addressed to EWG in the pre-Urbana mailing, EWG managed to get through all of them.

Incoming proposals were categorized into three rough categories:

  • Accepted. The proposal is approved without design changes. They are sent on to the Core Working Group (CWG), which revises them at the wording level, and then puts them in front of the committee at large to be voted into whatever IS or TS they are targeting.
  • Further Work. The proposal’s direction is promising, but it is either not fleshed out well enough, or there are specific concerns with one or more design points. The author is encouraged to come back with a modified proposal that is more fleshed out and/or addresses the stated concerns.
  • Rejected. The proposal is unlikely to be accepted even with design changes.

Accepted proposals (note: I’m not including here the ones which also passed CWG the same meeting and were voted into the standard – see above for those):

  • Source code information capture, a proposal to provide a replacement for the __FILE__, __LINE__, and __FUNCTION__ macros that doesn’t involve the preprocessor. I think this proposal constitutes a major advance because it removes one of the main remaining uses of the preprocessor.
  • Alias-set attributes, a mechanism to pass information to the optimizer about pointer aliasing (like restrict in C, but better). Some design feedback was given, but generally the proposal was considered baked enough that the next revision can go directly to CWG.
  • A few small design changes to the Transactional Memory TS.
  • A proposal to specify that the behaviour of standard library comparison function objects for comparing pointers is consistent with the behaviour of the built-in comparison operators, where the latter behaviour is defined. This was a matter of tweaking the specification to say something that people took for granted to begin with.
  • A modification to the Concepts Lite TS: removing constexpr constraints, which were one of the kinds of constraints allowed in requires-expressions. The reason for the removal is that they are tricky to specify and implement, and have no major motivating uses.
  • A compile-time string class, templated only on the string length, which stores its data in a constexpr character array. This one was one of two competing compile-time string proposals, the other one being a variadic char... template class which encodes the string contents in the template arguments themselves. The two proposals present a tradeoff between expressiveness and compile-time efficiency: on the one hand, encoding the string contents in the template arguments allows processing the string via template metaprogramming, while in the other proposal the string can only be processed with constexpr functions; on the other hand, the variadic approach involves creating lots of template instantiations for string processing, which can slow down compile times significantly. EWG’s view was that the compile-time efficiency consideration was the more important one, especially as constexpr functions are getting more and more powerful. Therefore, the constexpr array-based proposal was selected to move forward. As the proposal has both core language and library components, it will be going to LEWG for design review of the library components before being sent to CWG and LWG. (A rough sketch of the array-based idea follows this list.)
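
As a rough illustration of the array-based idea only (this is my own sketch, not the interface from the paper; const_string and make_const_string are made-up names), a class along these lines could look like this in C++14:

#include <cstddef>
#include <utility>

// The only template parameter is the length; the characters themselves
// live in an ordinary array inside the object, usable in constant expressions.
template <std::size_t N>
class const_string {
    char data_[N + 1];

    template <std::size_t... Is>
    constexpr const_string(const char (&lit)[N + 1], std::index_sequence<Is...>)
        : data_{lit[Is]..., '\0'} {}

public:
    constexpr const_string(const char (&lit)[N + 1])
        : const_string(lit, std::make_index_sequence<N>{}) {}

    constexpr std::size_t size() const { return N; }
    constexpr char operator[](std::size_t i) const { return data_[i]; }
};

// Deduce the length from a string literal.
template <std::size_t M>
constexpr const_string<M - 1> make_const_string(const char (&lit)[M]) {
    return const_string<M - 1>(lit);
}

constexpr auto greeting = make_const_string("hello");
static_assert(greeting.size() == 5, "length is part of the type");
static_assert(greeting[0] == 'h', "characters usable in constant expressions");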

Proposals for which further work is encouraged:

  • Destructive move, which addresses classes for which an operation that moves from an object and destroys the moved-from object at the same time is more efficient than moving and destroying separately, because the intermediate (moved-from but not yet destroyed) state would require extra object state to represent.
  • Default comparisons. Three different proposals on this topic were presented: one which would automatically give all classes comparison operators unless they opted out by =delete-ing them, or defined their own; one which would allow opting in to compiler-defined comparison operators via =default; and one which would synthesize comparison operators using reflection. As suggested by the variety of the proposals, this is a feature that everyone wants but no one can agree on exactly how it should work (the hand-written boilerplate these proposals aim to eliminate is sketched in code after this list). Design considerations that came up included opt-in vs. opt-out, special handling for certain types of fields (such as mutable fields and pointers), special handling for classes with a single member, compile-time performance, and different strengths of ordering (such as weak vs. total orders). After discussing the proposals for half a day, we ran out of time, and decided to pick up at the next meeting in Lenexa, possibly armed with revised proposals. There was one poll taken which provided fairly clear guidance on a single aspect of the proposal: there was much stronger consensus for opt-in behaviour than for opt-out.
  • A [[noreturn]] attribute for main(), designed for programs that are never meant to finish, such as some software running on embedded systems. This would allow the optimizer to remove code for running some cleanup such as the destructors of global objects. EWG liked the proposal, and sent it to CWG with one change, naming the attribute [[noexit]] instead. CWG, however, pointed out that global destructors are potentially generated by all translation units, not just the one that defines main(), and therefore the proposal is not implementable without link-time optimization. EWG discussed the proposal further, but didn’t reach any consensus, and decided to put it off until Lenexa.
  • A paper concerning violations of the zero-overhead principle in exception handling. The motivation behind this discussion was resource-constrained systems such as embedded systems, where the overhead associated with exception handling was unwelcome. The general feedback given was to try to evaluate and address such overhead in a comprehensive manner, rather than trying to avoid running into it in a few specific cases.
  • Proposals for a unified function call syntax. Two alternative proposals were presented: one for partial unification (calling non-member functions with member function call syntax), and one for complete unification (either kind of function can be called with either syntax); the latter would either involve breaking code, or having separate name lookup rules for the two syntaxes (and thus not fully achieving the intended unification in spirit). People were somewhat in favour of the first proposal, and a lot more cautious about the second. There seemed to be enough interest to encourage further exploration of the idea.
  • A proposal to allow initializer lists with elements of a move-only type. There was consensus that we want some way to do this, but no consensus for this specific approach; it was not immediately clear what a superior approach would be.
  • Overloading the member access operator (operator .), similarly to how operator -> can be overloaded. This would enable writing “smart reference” classes, much like how overloading operator -> enables writing smart pointer classes. This would be a significant new feature, and many design considerations remain to be explored; however, there was general interest in the idea.
  • Mechanisms for selecting from parameter packs. This proposal has two parts. The first part is a simple syntax for indexing into a parameter pack: if Ts is a parameter pack, and N is a compile-time integral constant, Ts.[N] is the parameter at index N in Ts (or a SFINAE-eligible error if the index N is out of range). The dot is necessary for disambiguation (if the syntax were simply Ts[N], then consider Ts[Ns]..., where Ns is a parameter pack of size equal to Ts; is this a pack of array types T_1[N_1], T_2[N_2], ..., or is it T_(N_1), T_(N_2), ...?). While people weren’t ecstatic about this syntax (the dot seemed arbitrary), there weren’t any better suggestions raised, and people preferred to have the feature with this syntax than to not have it at all. The second part of the proposal was less baked, and concerned “subsetting” a parameter pack with a pack of indices to yield a new pack; EWG encouraged further thought about this part, and suggested exploring two aspects separately: pack literals (for example 0 ...< 5 might be hypothetical syntax for a pack literal which expands to 0, 1, 2, 3, 4) and pack transformations, which are operations that take a parameter pack as input, and transform it to another parameter pack.
  • A proposal to fix a counter-intuitive aspect of the definition of “trivially copyable”.
  • Supporting custom diagnostics for SFINAE-eligible errors. This proposal aimed to resolve a long-standing deficiency in template design: you had to pick between making an incorrect use of a template SFINAE-eligible (expressing the constraint via enable_if or similar), or giving a custom diagnostic for it (expressing the constraint via a static_assert); this tradeoff is illustrated in a code sketch after this list. The specific suggestion was to allow annotating a = delete-ed function with a custom error message that would be shown if it were chosen as the best match in overload resolution. EWG felt that this was a problem worth solving, but preferred a more general solution, and encouraged the author to come back with one.
  • A proposal to specify the order of evaluation of subexpressions within an expression for some types of expressions. EWG felt this change would be valuable, as the order of evaluation being currently unspecified is a common cause of surprise and bugs, but the exact rules still need some thought.
  • Another proposal for classes with runtime size. Unfortunately, EWG continues to be pretty much deadlocked on this topic:
    • People want arrays of runtime bound, together with a mechanism for them to be used as class members.
    • There is no consensus for having arrays of runtime bound without such a mechanism.
    • There are hard technical problems that need to be solved to allow classes of runtime size. One of the biggest challenges is that some platforms’ ABIs would have to be rearchitected to accommodate classes with a runtime-size data member in the middle (this includes class hierarchies where one of the subobjects that’s not at the end has a runtime-sized member at the end).
    • No one has yet come up with a comprehensive solution to these technical problems.
    • There is a divide between two ways of looking at these proposals: one is to say that stack allocation is an optimization, and implementations are free to place runtime-sized arrays on the heap in situations where placing them on the stack is too difficult; the other is to want a guarantee that the allocation is on the stack. Proponents of the second view argue that we don’t need a new syntax for the “stack allocation is an optimization” use case; we should instead improve our optimizers so they can optimize uses of std::vector and similar into stack allocations.

    Given this state of affairs, the future of classes with runtime size (and of arrays of runtime bound, which people want to tie to classes with runtime size) continues to be uncertain.

  • Inline variables. After some discussion, it became clear that this was a proposal for two separate features with a single syntax: a way to declare and initialize global constants in a header file without having to define them in a .cpp file (which is something everyone wants); and a way to define “expression aliases”. EWG expressed interest in these problems, and encouraged fleshing out separate proposals for them.
  • Categorically qualified classes. This proposal provides a mechanism to express that a class’s objects are meant to be used only as named objects, not temporaries (useful for “scope guard”-type classes), or that a class’s objects are meant to only be used as temporaries (useful for expression templates). For classes in the latter category, it’s useful to provide a mechanism to convert objects of this type to objects of another type when initializing a named variable; as such, this part of the proposal overlaps with the operator auto proposal that was discussed (and encouraged for further work) in Rapperswil. EWG felt that the two use cases (scope guards and expression templates) weren’t sufficiently similar to necessitate fixing them the same way, and that the design questions raised during the operator auto discussion weren’t adequately addressed in this proposal; encouragement was given to continue exploring the problem space, being open to different approaches for the two use cases.
  • Generalized lifetime extension. This paper outlined a set of rules for determining whether the result of an expression refers to any temporaries that appear as subexpressions, and proposed that when the result of an expression is bound to a named variable of reference type (at local scope), the temporaries referred to by the result have their lifetimes extended to the lifetime of the variable. A very limited form of this exists in C++ today (see the sketch after this list); this proposal would generalize it considerably. I found this proposal to be very interesting; it has the potential to dramatically reduce the number of use-after-free errors that occur due to temporaries being destroyed earlier than we intend them to be. On the other hand, if not done carefully, the proposal would have the potential to cause programmers to be more laissez-faire about their analysis of temporary lifetimes, possibly leading to more errors. For EWG, the sticking point was that performing the refers-to analysis for function call expressions where the function body is in another translation unit requires the co-operation of the function author. The paper proposed annotating parameters with the keyword export to indicate that the function’s return value refers to this parameter. EWG didn’t like this, feeling that these annotations would be “exception specifications all over again”, i.e. components of a function declaration that are not quite part of its type, for which we need ad-hoc rules to determine their behaviour with respect to redeclarations, function pointers, overrides in derived classes, being passed as non-type template arguments, and so on. The conclusion was that the problem this proposal addresses is a problem we want solved, but that this approach was not in the right direction for solving the problem.
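
To make the default comparisons item concrete, here is the kind of member-wise boilerplate that all three proposals aim to let the compiler generate; it is written with std::tie, and the Person type is my own example rather than one taken from the papers.

#include <string>
#include <tuple>

struct Person {
    std::string name;
    int age;
};

// Hand-written today; a default-comparisons feature would synthesize these.
bool operator==(const Person& a, const Person& b) {
    return std::tie(a.name, a.age) == std::tie(b.name, b.age);
}

bool operator<(const Person& a, const Person& b) {
    return std::tie(a.name, a.age) < std::tie(b.name, b.age);
}

int main() {
    Person alice{"Alice", 30}, bob{"Bob", 25};
    return (alice == bob) ? 1 : 0;
}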
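
The tradeoff behind the custom diagnostics item can be seen in today's two standard techniques; the function names below are hypothetical, but the enable_if and static_assert idioms are exactly the ones that bullet refers to.

#include <type_traits>

// Option 1: SFINAE-eligible. A bad match silently drops out of overload
// resolution, but a caller with no viable overload gets an unfriendly
// "no matching function" error.
template <typename T,
          typename std::enable_if<std::is_arithmetic<T>::value, int>::type = 0>
double as_number(T value) { return static_cast<double>(value); }

// Option 2: custom diagnostic. The message is friendly, but the constraint is
// checked only after overload resolution has committed to this template, so
// it is not SFINAE-eligible.
template <typename T>
double as_number_checked(T value) {
    static_assert(std::is_arithmetic<T>::value,
                  "as_number_checked requires an arithmetic type");
    return static_cast<double>(value);
}

int main() {
    as_number(3);              // fine
    as_number_checked(4.5);    // fine
    // as_number("hi");          // error: no matching function (unfriendly)
    // as_number_checked("hi");  // nice message, but not usable for SFINAE
}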
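
As background for the generalized lifetime extension item, this is the "very limited form" that exists today, together with a case it does not cover; the example is my own, not taken from the paper.

#include <string>
#include <vector>

std::vector<std::string> make_names() { return {"alice", "bob"}; }

int main() {
    // Today's narrow rule: binding a temporary directly to a reference
    // extends the temporary's lifetime to match the reference.
    const std::vector<std::string>& names = make_names();    // OK, the vector survives

    // The rule does not "see through" larger expressions: the vector returned
    // here is destroyed at the end of this statement, so `first` dangles.
    const std::string& first = make_names().front();
    (void)names;
    (void)first;   // actually using `first` would be undefined behaviour
}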

Rejected proposals:

  • A proposal to let return {expr} be explicit, in the sense that it would allow invoking constructors of the function’s return type even if they were explicit. This proposal had support in Rapperswil, but after several new papers argued against it, EWG decided to shelve it.
  • The proposal for named arguments that Ehsan and I wrote. While a few people in the room liked the idea, the majority had reservations about it; prominent among these were the proposal encouraging functions with many parameters, and the additional maintenance burden on library authors caused by parameter name changes breaking code.
  • A proposal for a null-coalescing conditional operator, a ?: b, which would have been equivalent to a ? a : b. EWG felt the utility wasn’t sufficiently compelling to warrant a language change.
  • Checked-dereference conditions. This would have made if (T x : expr) { S } equivalent to if (auto p = expr) { T x = *p; S } (and similarly for while loops and the test-expressions of for loops). EWG felt this shorthand wasn’t sufficiently compelling, and could cause confusion due to the similarity of the syntax to the range-based for loop.
  • A proposal for uniform handling of subobjects. This would allow data members and bases of a class to be interleaved in any order. EWG felt this change was too intrusive and insufficiently motivated.

Contracts

EWG held a special evening session on the topic of contracts, as there was a lot of interest in them at this meeting. Several papers on the topic were presented; a couple of others were not presented, due to lack of time or of a presenter.

The only proposal that was specifically considered was a proposal to turn the assert macro into a compiler-recognized operator with one of a specified set of semantics based on the value of the NDEBUG macro; it was rejected, mostly on the basis that it was infeasible to muck with assert and NDEBUG for backwards-compatibility reasons.

Other than that, the discussion was more about high-level design aspects of contract programming rather than about specific proposals. Some issues that came up were:

  • Where to specify the contracts for a function – in the declaration, in the implementation, or potentially either – and what the implications are.
  • Whether optimizers should be allowed to assume that contracts are obeyed, such that non-obeyance (e.g. precondition violation) implies undefined behaviour.
  • Whether the standard should specify different modes of behaviour (e.g. “release” vs. “debug”) with respect to contract checking (and if so, how to control the mode, or if this should be left implementation-defined).
  • What the behaviour should be upon contract violation (“keep going” but undefined behaviour, custom handler, terminate, throw, etc.).

The discussion closed with some polls to query the consensus of the room:

  • There was consensus that we want some form of contracts.
  • There was consensus that ensuring correctness and realizing performance gains are both important goals of a contracts proposal, with correctness being the primary one.
  • There was consensus that we need to support contracts in interfaces / declarations (at least).
  • There was no consensus for having some notion of “build modes” specified in the standard to control aspects of contract checking.

These views will likely guide future proposals on this topic.

Coroutines

Coroutines was another topic with a lot of interest at this meeting. There were three proposals on the table: “resumable functions”, “resumable lambdas”, and a library interface based on Boost.Coroutine. These proposals started out under the purview of SG 1 (Concurrency), but then they started growing into a language feature with applications unrelated to concurrency as well, so the proposals were presented in an evening session to give EWG folks a chance to chime in too.

The coroutines proposals fall into two categories: stackful and stackless, with the “resumable functions” and “resumable lambdas” proposals being variations on a stackless approach, and the Boost.Coroutine proposal being a stackful approach.

The two approaches have an expressiveness/performance tradeoff. Stackful coroutines have more overhead, because a stack needs to be reserved for them; the size of the stack is configurable, but making it too small risks undefined behaviour (via a stack overflow), while making it too large wastes space. Stackless coroutines, on the other hand, use only as much space as they need by allocating space for each function call on the heap (these are called activation frames; in some cases, the heap allocation can be optimized into stack allocation). The price they pay in expressiveness is that any function that calls a resumable function (i.e. a stackless coroutine) must itself be resumable, so the compiler knows to allocate activation frames on the heap when calling it, too. By contrast, with the stackful approach, any old function can call into a stackful coroutine, because execution just switches to using the coroutine’s side stack for the duration of the call.

Within the “stackless” camp, the difference between the “resumable functions” and “resumable lambdas” approaches is relatively small. The main difference is that the “resumable lambdas” approach allows coroutines to be passed around as first-class objects (since lambdas are objects).

The authors of the “resumable functions” and Boost.Coroutine proposals have attempted to come up with a unified proposal that combines the expressiveness of the “stackful” approach with the efficiency of the “stackless” one, but haven’t succeeded, and in fact have come to believe that the tradeoff is inherent. In light of this, and since both approaches have compelling use cases, the committee was of the view that both approaches should be pursued independently, both targeting a single Coroutines Technical Specification, with the authors co-operating to try to capture any commonalities between their approaches (if nothing else then a common, consistent set of terminology) even if a unified proposal isn’t possible. For the stackless approach, participants were polled for a preference between the “resumable functions” and “resumable lambdas” approaches; there was stronger support for the “resumable functions” approach, though I think this was at least in part due to the “resumable lambdas” approach being newer and less well understood.

I had a chance to speak to Chris Kohlhoff, the author of the “resumable lambdas” proposal, subsequent to this session. He had an idea for combining the “stackless” and “stackful” approaches under a single syntax that I found very interesting, which he plans to prototype. If it pans out, it might end up as the basis of a compelling unified proposal after all.

I’m quite excited about the expressivity coroutines would add to the language, and I await developments on this topic eagerly, particularly on Chris’s unified approach.

Embedded Systems

The topic of forming a Study Group to explore ways to make C++ more suitable for embedded systems came up again. In addition to the two papers presented on the topic, some further ideas in this space were containers that can be stored in ROM (via constexpr), and having exceptions without RTTI. It was pointed out that overhead reductions of this sort might be of interest to other communities, such as gaming, graphics, real-time programming, low-latency programming, and resource-constrained systems. EWG encouraged discussion across communities before forming a Study Group.

Library/Library Evolution Working Groups (LWG and LEWG)

I mentioned the library features that are targeted for C++17 in the “C++17” section above. Here I’ll talk about progress on the Library Fundamentals Technical Specifications, and future work.

Library Fundamentals TS I

The first Library Fundamentals TS has already gone through its first formal ballot, the PDTS (Preliminary Draft Technical Specification) ballot. LWG addressed comments sent in by national standards bodies in response to the ballot; the resulting changes were very minor, the most notable being the removal of the network byte-order conversion functions (htonl() and friends) over concerns that they clash with similarly-named macros. LWG will continue addressing the comments during a teleconference in December, and then they plan to send out the specification for its DTS (Draft Technical Specification) ballot, which, if successful, will be its last before publication.

Library Fundamentals TS II

The second Library Fundamentals TS is in the active development stage. Coming into the meeting, it contained a single proposal, for a generalized callable negator. During this meeting, several new features were added to it:

There will very likely be more features added at the next meeting, in May 2015; the TS is tentatively scheduled to be sent out for its PDTS ballot at the end of that meeting.

Future Work

In addition to the proposals which have already been added into C++17 or one of the TS’s, there are a lot of other library proposals in various stages of consideration.

Proposals approved by LEWG and under review by LWG:

Proposals approved by LEWG for which LWG review is yet to start:

Proposal for which LEWG is encouraging further work:

Proposals rejected by LEWG:

There will be a special library-only meeting in Cologne, Germany in February to allow LWG and LEWG to catch up a bit on all these proposals.

Study Groups

SG 1 (Concurrency)

SG 1’s main projects are the Concurrency TS and the Parallelism TS. As with the Library Fundamentals TS, both are likely to be the start of a series of TS’s (so e.g. the Parallelism TS will be followed by a Parallelism TS II).

Besides coroutines, which I talked about above, I haven’t had a chance to follow SG 1’s work in any amount of detail, but I will mention the high-level status:

The Parallelism TS already had its PDTS ballot; comments were addressed this week, resulting in minor changes, including the addition of a transform-reduce algorithm. SG 1 will continue addressing comments during a teleconference in December, and then plans to send the spec out for its DTS ballot. As mentioned above, there are plans for a Parallelism TS II, but no proposals have been approved for it yet.

The Concurrency TS has not yet been sent out for its PDTS ballot; that is now planned for Lenexa.

Some library proposals that have been approved by LEWG for the Concurrency TS:

Task regions are still being considered by LEWG, and would likely target Concurrency TS II.

A major feature being looked at by SG 1 is executors and schedulers, with two competing proposals. The two approaches were discussed, and SG 1 felt that at this stage there’s still design work to be done and it’s too early to make a choice. This feature is targeting the second Concurrency TS as it’s unlikely to be ready in time for Lenexa, and SG 1 doesn’t want to hold up the first Concurrency TS beyond Lenexa.

Coroutines are also a concurrency feature, but as mentioned above, they are now targeting a separate TS.

SG 2 (Modules)

EWG spent an afternoon discussing modules. At this point, Microsoft and Clang both have modules implementations, at various levels of completion. The Microsoft effort is spearheaded by Gabriel Dos Reis, who summarized the current state of affairs in a presentation.

The goals of modules are:

  • componentization
  • isolation from macros
  • improving build times
  • making it easier to write semantics-aware developer tools
  • being a code distribution mechanism is, at the moment, an explicit non-goal

The aspects of a modules design that people generally agree on at this point are:

  • modules are not a scoping mechanism (i.e. they are independent of namespaces)
  • when performing template instantiation while compiling a module, the compiler has access to the full module being compiled, but only to the interfaces of imported modules
  • the interface of a module can be separated from its implementation
  • module interfaces cannot have cyclic dependencies
  • only one module owns the definition of an entity

Design points that still need further thought are:

  • visibility of private class members across module boundaries
  • ordering of static/dynamic initialization
  • can macros flow into modules? (e.g. NDEBUG)
    • one view on this is that there should be no standard way to provide an input macro to a module, but implementations can provide implementation-defined mechanisms, such as defining NDEBUG on the compiler command line to build a debug version of a module
    • another option is to “parameterize” a module on certain input parameters (such as the value of the NDEBUG macro)
      • this in turn raises the question of a more general parameterization mechanism for modules, akin to templates
  • can macros flow out of modules? (e.g. can the Boost.Preprocessor library be packaged up into a module?)
  • semantics of entities with internal linkage in a module interface
  • can a module interface be spread across several files?
  • the syntax for defining a module
  • how to deal with #includes in a module

EWG was generally pleased with the progress being made, and encouraged implementors to continue collaborating to get their designs to converge, and report back in Lenexa.

The Clang folks also reported promising performance numbers from their implementation, but detailed/comprehensive benchmarks remain to be performed.

SG 3 (Filesystems)

SG 3 did not meet in Urbana. The Filesystems TS is waiting for its DTS ballot to close; assuming it’s successful (which is the general expectation), it will be published early next year.

Proposals targeting a follow-up Filesystems TS II are welcome; none have been received so far.

SG 4 (Networking)

Organizationally, the work of SG 4 has been conducted directly by LEWG over the past few meetings. This arrangement has been formalized at this meeting, with SG 4’s chair, Kyle Kloepper, retiring, and the SG becoming “dormant” until LEWG decides to reactivate it.

In Rapperswil, LEWG had favourably reviewed a proposal for a C++ networking library based on Boost.ASIO, and asked the author (Chris Kohlhoff, whom I’ve talked about earlier in the context of coroutines) to update the proposal to leverage C++14 language features. Chris has done so, and presented an updated proposal to LEWG in Urbana; this update was also received favourably, and was voted to become the initial working draft of the Networking TS, which now joins the roster of Technical Specifications being worked on by the committee. In other words, we’re one step closer to having a standard sockets library!

SG 5 (Transactional Memory)

I haven’t been following the work of SG 5 very closely, but I know the Transactional Memory TS is progressing well. Its working draft has been created based on two papers, and it’s going to be sent out for its PDTS ballot shortly (after a review conducted via teleconference), with the intention being that the ballot closes in time to look at the comments in Lenexa.

SG 6 (Numerics)

Topics of discussion in SG 6 included:

  • a replacement for std::rand which combines the security of the C++11 <random> facilities with the simple interface of std::rand
  • special math functions for C++17
  • typedefs similar to int16_t for floating-point types
  • bignums, ratios, fixed-point arithmetic

A Numerics TS containing proposals for some of the above may be started in the near future.

There is an existing TR (Technical Report, an older name for a Technical Specification) for decimal floating-point arithmetic. There is a proposal to integrate this into C++17, but there hasn’t been any new progress on that in Urbana.

SG 7 (Reflection)

SG 7 looked at two reflection proposals: an updated version of a proposal for a set of type traits for reflecting the members of classes, unions, and enumerations, and a significantly reworked version of a comprehensive proposal for static reflection.

The reflection type trait proposal was already favourably reviewed in Rapperswil. At this meeting, additional feedback was given on two design points:

  • Access control. There was consensus that reflection over inaccessible members should be allowed, but that it should occur via a separate mechanism that is spelt differently in the code (for example, there might be one namespace called std::reflect which provides traits for reflecting accessible members only, and another called std::reflect_invasively which provides traits for reflecting all members including inaccessible ones). The rationale is that for some use cases, reflecting only over accessible members is appropriate, while for others, reflecting over all members is appropriate, and we want to be able to spot uses of an inappropriate mechanism easily. Some people also expressed a desire to opt-out from invasive reflection on a per-class basis.
  • Syntax. The proposal’s syntax for e.g. accessing the name of the second member of a class C is std::class_member::name<C,1>. A preference was expressed a) for an additional level of grouping of reflection-related traits into a namespace or class reflect, e.g. std::reflect::class_member::name<C,1>, and b) for not delaying the provision of all inputs until the last component of the trait, e.g. std::reflect<C>::class_member<1>::name. (This last form has the disadvantage that it would actually need to be std::reflect<C>::template class_member<1>::name; some suggestions were thrown around for avoiding this by making the syntax use some compiler magic (as the traits can’t be implemented purely as a library anyways)).

It was also reiterated that this proposal has some limitations (notably, member templates cannot be reflected, nor can members of reference or bitfield type), but SG 7 remains confident that the proposal can be extended to fill these gaps in due course (in some cases with accompanying core language changes).

The comprehensive static reflection proposal didn’t have a presenter, so it was only looked at briefly. Here are some key points from the discussion:

  • This proposal allows reflection at a much greater level of detail – often at the level of what syntax was used, rather than just what entities were declared. For example, this proposal allows distinguishing between the use of different typedefs for the same type in the declaration of a class member; the reflection type traits proposal does not.
  • No one has yet explored this area enough to form a strong opinion on whether having access to this level of detail is a good thing.
  • SG 7 is interested in seeing motivating use cases that are served by this proposal but not by the reflection type traits proposal.
  • Reflecting namespaces – a feature included in this proposal – is viewed as an advanced reflection feature that is best left off a first attempt at a reflection spec.
  • The author is encouraged to do further work on this proposal, with the above in mind. Splitting the proposal into small components is likely to help SG 7 make progress on evaluating it.

There is also a third proposal for reflection, “C++ type reflection via variadic template expansion”, which sort of fell off SG 7’s radar because it was in the post-Issaquah mailing and had no presenter in Rapperswil or Urbana; SG 7 didn’t look at it in Urbana, but plans to in Lenexa.

SG 8 (Concepts)

The Core Working Group continued reviewing the Concepts TS (formerly called “Concepts Lite”) in Urbana. The fundamental design has not changed over the course of this review, but many details have. A few changes were run by EWG for approval (I mentioned these in the EWG section above: the removal of constexpr constraints, and the addition of folding expressions). The hope was to be ready to send out the Concepts TS for its PDTS ballot at the end of the meeting, but it didn’t quite make it. Instead, CWG will continue the review via teleconferences, and possibly a face-to-face meeting, for Concepts only, in January. If all goes well, the PDTS ballot might still be sent out in time for the comments to arrive by Lenexa.

SG 9 (Ranges)

As far as SG 9 is concerned, this has been the most exciting meeting yet. Eric Niebler presented a detailed and well fleshed-out proposal for integrating ranges into the standard library.

Eric’s ranges are built on top of iterators, thus fitting on top of today’s iterator-based algorithms almost seamlessly, with one significant change: the begin and end iterators of a range are not required to be of the same type. As the proposal explains, this small change allows a variety of ranges to be represented efficiently that could not be under the existing same-type model, including sentinel- and predicate-based ranges.
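
To make the delimiter idea concrete, here is a small sketch of my own (not Eric's actual interface) of a sentinel-delimited range: the algorithm is templated separately on the iterator and sentinel types, and the sentinel here simply stops at a string's null terminator.

#include <cassert>

// A sentinel that compares equal to a char* pointing at the terminating '\0'.
struct null_sentinel {};
inline bool operator==(const char* it, null_sentinel) { return *it == '\0'; }
inline bool operator!=(const char* it, null_sentinel s) { return !(it == s); }

// An algorithm whose "end" has a different type than its iterator.
template <typename Iterator, typename Sentinel, typename T>
Iterator find(Iterator first, Sentinel last, const T& value) {
    while (first != last && *first != value)
        ++first;
    return first;
}

int main() {
    const char* s = "hello world";
    // No need to call strlen() up front just to manufacture an end iterator.
    const char* it = find(s, null_sentinel{}, 'w');
    assert(*it == 'w');
}

Because the sentinel's type is distinct from the iterator's, the "end" can encode a condition (a terminator, a predicate, a count) rather than a fixed position, which is what lets sentinel- and predicate-based ranges be represented efficiently.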

The main parts of the proposal are a set of range-related concepts, a set of range algorithms, and a set of range views. The foundational concept is Iterable, which corresponds roughly to what we conversationally call (and also what the Boost.Range library calls) a “range”. An Iterable represents a range of elements delimited by an Iterator at the beginning and a Sentinel at the end. Two important refinements of the Iterable concept are Container, which is an Iterable that owns its elements, and Range, which is a lightweight Iterable that doesn’t own its elements. The range algorithms are basically updated versions of the standard library algorithms that take ranges as Iterables; there are also versions that take (Iterator, Sentinel) pairs, for backwards-compatibility with today’s callers. Finally, the range views are ways of transforming ranges into new ranges; they correspond to what the Boost.Range library calls range adaptors. There is also a suggestion to enhance algorithms with “projections”; I personally see this as unnecessary, since I think range views serve their use cases better.

Eric has fully implemented this proposal, thus convincingly demonstrating its viability.

Importantly, this proposal depends on the Concepts TS to describe the concepts associated with ranges and to define algorithms and views in terms of these concepts. (Eric’s implementation emulates the features of the Concepts TS with a C++11 concepts emulation layer.)

The proposal was overall very well received; there was clear consensus that Eric should pursue the high-level design he presented and come back with a detailed proposed specification.

An important practical point that needed to be addressed is that this proposal is not 100% backwards-compatible with the current STL. This wasn’t viewed as a problem, as previous experience trying to introduce C++0x concepts to the STL while not breaking anything has demonstrated that this wasn’t possible without a lot of contortions, and people have largely accepted that a clean break from the old STL is needed to build a tidy, concepts-enabled “STL 2.0”. Eric’s proposal covers large parts of what such an STL 2.0 would look like, so there is good convergence here. The consensus was that Eric should collaborate with Andrew Sutton (primary author and editor of the Concepts TS) on a proposal for a Technical Specification for a concepts-enabled ranges library; the exact scope (i.e. whether it will be just a ranges library, or a complete STL overhaul) is yet to be determined.

SG 10 (Feature Test)

The Feature Test Standing Document (the not-quite-a-standard document used by the committee to specify feature test macros) has been updated with C++14 features.

The feature test macros are enjoying adoption by multiple implementors, including GCC, Clang, EDG, and others.

SG 12 (Undefined Behaviour)

SG 12 looked at:

SG 13 (I/O, formerly “Graphics”)

SG 13 has been working on a proposal for a 2D Graphics TS based on cairo’s API. In Urbana, an updated version of this proposal, which included some proposed wording, was presented to LEWG. LEWG encouraged the authors to complete the wording, and gave a couple of pieces of design advice:

  • If possible, put in place some reasonable defaults (e.g., a default foreground color that’s in place if you don’t explicitly set a foreground color) so a “Hello world” type program can be written more concisely.
  • Where the API differs from a mechanical transliteration of the cairo API, document the rationale for the difference.

Next Meeting

The next full meeting of the Committee will be in Lenexa, Kansas, the week of May 4th, 2015.

There will also be a library-only meeting in Cologne, Germany the week of February 23rd, and a Concepts-specific meeting in Skillman, New Jersey from January 26-28.

Conclusion

This was probably the most action-packed meeting I’ve been to yet! My personal highlights:

  • The amount of interest in coroutines, and the green-light that was given to develop proposals for both the stackful and stackless versions. I think coroutines have the potential to revolutionize how C++ programmers express control flow in many application domains.
  • Eric Niebler’s proposal for bringing ranges to the standard library. It’s the first cohesive proposal I’ve seen that addresses all the tough practical questions involved in such an endeavour, and it was accordingly well-received.
  • The continuing work on modules, particularly the fact that Microsoft and Clang both have implementations in progress and are cooperating to converge on a final design.

Stay tuned for further developments!


PomaxNew Entry

click the entry to start typing

PomaxRSS description testing

My RSS generator wasn't adding article bodies to the RSS, which caused some problems for certain RSS readers. Let's see if this fixes it.

Patrick McManusProxy Connections over TLS - Firefox 33

There have been a bunch of interesting developments over the past few months in Mozilla Platform Networking that will be news to some folks. I've been remiss in not noting them here. I'll start with the proxying over TLS feature. It landed as part of Firefox 33, which is the current release.

This feature is from bug 378637 and is sometimes known as HTTPS proxying. I find that naming a bit ambiguous - the feature is about connecting to your proxy server over HTTPS, but it supports proxying for both http:// and https:// resources (as well as ftp://, ws://, and wss:// for that matter). https:// transactions are tunneled via end to end TLS through the proxy via the CONNECT method, in addition to the connection to the proxy being made over a separate TLS session. For https:// and wss:// that means you actually have end to end TLS wrapped inside a second TLS connection between the client and the proxy.

There are some obvious and non-obvious advantages here - but proxying over TLS is strictly better than traditional plaintext proxying. One obvious reason is that it provides authentication of your proxy choice - if you have defined a proxy then you're placing an extreme amount of trust in that intermediary. It's nice to know via TLS authentication that you're really talking to the right device.

The communication between you and the proxy is also kept confidential, which is helpful to your privacy with respect to observers of the link between client and proxy, though this is not end to end if you're not accessing an https:// resource. Proxying over TLS connections also keeps any proxy-specific credentials strictly confidential. There is an advantage even when accessing https:// resources through a proxy tunnel - encrypting the client-to-proxy hop conceals some information (at least for that hop) that https:// normally leaks, such as the hostname through SNI and the server IP address.

Somewhat less obviously, HTTPS proxying is a pre-requisite to proxying via SPDY or HTTP/2. These multiplexed protocols are extremely well suited for use in connecting to a proxy because a large fraction (often 100%) of a client's transactions are funneled through the same proxy, and therefore only 1 TCP session is required when using a prioritized multiplexing protocol. When using HTTP/1 a large number of connections are required to avoid head of line blocking, and it is difficult to meaningfully manage them to reflect prioritization. When connecting to remote proxies (i.e. those with a high latency such as those in the cloud) this becomes an even more important advantage, as the handshakes that are avoided are especially slow in that environment.

This multiplexing can really warp the old noodle to think about after a while - especially if you have multiple spdy/h2 sessions tunneled inside a spdy/h2 connection to the proxy. That can result in the top level multiplexing several streams with http:// transactions served by the proxy as well as connect streams to multiple origins that each contain their own end to end spdy sessions carrying multiple https:// transactions.

To utilize HTTPS proxying just return the HTTPS proxy type from your FindProxyForURL() PAC function (instead of the traditional HTTP type). This is compatible with Google's Chrome, which has a similar feature.

function FindProxyForURL(url, host) {
  // "HTTPS" here is the proxy *type*: connect to proxy.mydomain.net:443 over TLS.
  if (url.substring(0, 7) == "http://") {
    return "HTTPS proxy.mydomain.net:443;";
  }
  // Anything that isn't plain http:// goes direct in this example.
  return "DIRECT;";
}


Squid supports HTTP/1 HTTPS proxying. Spdy proxying can be done via Ilya's node.js based spdy-proxy. nghttp can be used for building HTTP/2 proxying solutions (H2 is not yet enabled by default on firefox release channels - see about:config network.http.spdy.enabled.http2 and network.http.spdy.enabled.http2draft to enable some version of it early). There are no doubt other proxies with appropriate support too.

If you need to add a TOFU exception for use of your proxy it cannot be done in proxy mode. Disable proxying, connect to the proxy host and port directly from the location bar and add the exception. Then enable proxying and the certificate exception will be honored. Obviously, your authentication guarantee will be better if you use a normal WebPKI validated certificate.

Kevin NgoPushing Hybrid Mobile Apps to the Forefront

Mozilla Festival 2014 was held in London in October.

At Mozilla Festival 2014, I facilitated a session on Pushing Hybrid Mobile Apps to the Forefront. Before, I had been building a poker app to keep track of my poker winning statistics, record notes on opponents, and crunch poker math. I used the web as a platform, but having an iPhone, wanted this app to be on iOS. Thus, the solution was hybrid mobile apps, apps written in HTML5 technologies that are wrapped to run "natively" on all platforms (e.g., iOS, Android, FirefoxOS).

I stumbled upon the Ionic hybrid mobile app framework. This made app development so easy. It fulfills the promise of the web: write once, run everywhere. In my over two years with Mozilla, I've seen very little hype for hybrid mobile apps. Hybrid mobile apps have the potential to convert many more native developers over to the web platform, but they aren't getting the ad-time they deserve.

What is a Hybrid Mobile App?

Hybrid mobile apps, well explained in this article from Telerik, are apps written in HTML5 technologies that are enabled to run within a native container. They use the device's browser engine to render the app. A web-to-native polyfill, most prominently Cordova, can then be injected in order to access device APIs.

The Current Lack of Exposure for Hybrid Mobile Apps

In all of the Mozilla Developer Network (MDN), there are around three articles on hybrid mobile apps, which aren't fully fleshed out and are in need of technical review. There's been a good amount of work from James Longster in the form of Cordova Firefox OS support. There's more that could be done on the documentation side.

Cross-platform capability on mobile should be flaunted more. In MDN's main article on Open Web Apps, there's a list of advantages of open web apps. This article is important since it is a good entry point into developing web apps. The advantages listed shouldn't really be considered advantages relative to native apps:

  • Local installation and offline storage: to a developer, these should be inherent to an app, not an explicit advantage. Apps are expected to be installable and have offline storage.
  • Hardware access: also should be inherent to an app and not an explicit advantage. Apps are expected to be able to communicate with the device's APIs.
  • Breaking the walled gardens: there are no "walls" being broken if these web apps only run in the browser and FirefoxOS. They should be able to live inside the App Store and Play Store to really have any effect.
  • Open Web App stores: well, that is pretty cool actually. I built a personal app that I didn't want to be distributed to anyone except me and one other person. So I simply built a page that had the ability to install the app. However, pure web apps alone can't be submitted to the App Store or Play Store, so that should be addressed first.

What's missing here is the biggest advantage of all: being able to run cross-platform (e.g., iOS, Android, FirefoxOS, Windows). That's the promise of the web, and that's what attracts most developers to the web in the first place. Write it once, run anywhere, with no need to port between languages or frameworks, and still be able to submit to the App Store/Play Store duopoly to gain the most users. For many developers, the web is an appropriate platform, saving time and maintenance.

Additionally, most developers also prefer the traditional idea of apps, that they are packaged up and uploaded to the storefront, rather than self-hosted on a server. On the Firefox Marketplace, the majority of apps are packaged over hosted (4800 to 4100).

There's plenty of bark touting the cross-platform capability of the web, but there's little bite on how to actually achieve that on mobile. Hybrid mobile apps have huge potential to attract more developers to the web platform. But with their lack of exposure, it's wasted potential.

So what can we do? The presence of hybrid mobile apps on MDN could be buffed. I've talked to Chris Mills of the MDN team at Mozfest, and he mentioned it was a goal for 2015. FirefoxOS Cordova plugins may welcome contributors. And I think the biggest way would be to help add official FirefoxOS support to Ionic, a popular hybrid mobile app framework which currently has over 11k stars. They've mentioned they have FirefoxOS on the roadmap.

Building with Ionic

Ionic Framework is a hybrid mobile app framework. It has a beautifully designed set of native-like icons and CSS components, pretty UI transitions, web components (through Angular directives for now), build tools, and an easy-to-use command-line interface.

With Ionic, I built the poker app I initially mentioned. It installs on my phone, and I can use it at the tables:

Poker app

Poker app built with Ionic.

For the Mozfest session, I generated a sample app with Ionic (one that simply makes use of the camera), and put it on Github with instructions. To get started with a hybrid mobile app:

  • npm install -g ionic cordova
  • ionic start myApp tabs - creates a template app
  • cordova plugin add org.apache.cordova.camera - installs the Cordova camera plugin (there are many to choose from)
  • ionic platform add <PLATFORM> - where <PLATFORM> could be ios, android, or firefoxos. This enables the platform
  • ionic platform build <PLATFORM> - builds the project

To emulate it for iOS or Android:

  • ionic emulate <PLATFORM> - will open the app in XCode for ios or adbtools for android

To simulate it for FirefoxOS, open the project with WebIDE inside platforms/firefoxos/www.

How the Mozfest Session Went

It was difficult to plan since Mozfest is more of an unconference, where everything is meant to be hands-on and accessible. Mozfest wasn't a deeply technical conference, so I tried to cater to those who don't have much development experience and to those who didn't bring a laptop.

Thus I set up three laptops: my Macbook, a Thinkpad, and a Vaio. And had three devices: my iPhone, a Nexus 7, and a FirefoxOS Flame. My Macbook would help demonstrate the iOS side, whereas the other machines had Linux Mint running within VirtualBox. These VMs had adbtools and Firefox with WebIDE set up. All the mobile devices had the demo apps pre-installed so people could try them out.

I was as prepared as a boy scout. Well, until my iPhone was pickpocketed in London, stripping me of the iOS demonstration. Lugging around three laptops in my bag that probably amounted to 20 pounds back and forth between the hotel, subway, and venue wasn't fun. I didn't even know what day I was going to present at Mozfest. Then I didn't even use those meticulously prepared laptops at the session. Everyone who showed up was pretty knowledgeable, had a laptop, and had an internet connection.

The session went well nonetheless. After a short speech about pushing hybrid mobile apps to the forefront, my Nexus 7 and Flame were passed around to demo the sample hybrid mobile app running. It just had a simple camera button. That morning, everyone had received a free Firefox Flame for attending Mozfest, so it turned more into a WebIDE session on how to get an app on the Flame. My coworker who attended was able to get the accelerometer working with a "Shake Me / I was shaken." app, and I was able to get geolocation working with an app that displays longitude and latitude coordinates with the GPS.

What I Thought About Mozfest

There was a lot of energy in the building. Unfortunately, the energy didn't reach me, especially since I was heavily aircraft-latencied. Maybe conferences aren't my thing. The place was hectic. Hard to find out what was where. I tried to go to a session that was labeled as "The 6th Floor Hub", which turned out to be a small area of a big open room labelled with a hard-to-spot sign that said "The Hub". When I got there, there was no session being held despite the schedule saying so as the facilitator was MIA.

The sessions didn't connect with me. Perhaps I wanted something more technical and concrete that I could take away and use, but most sessions were abstract. There was a big push for Mozilla Webmaker and Appmaker, though those aren't something I use often. They're great teaching tools, but I usually direct people to Codecademy for those who want to learn to build stuff.

There was a lot of what I call "the web kool-aid". Don't get me wrong, I love the web, I've drunk a lot of the kool-aid, but there was a lot of championing of the web in the keynotes. I guess "agency" is the new buzzword now. Promoting the web is great, though I've just heard it all before.

However, I was glad to add value to those who found it more inspiring and motivating than me. I believe my session went well and attendees took away something hard and practical. As for me, I was just happy to get back home after a long day of travel and go replace my phone.

Soledad Penades“Invest in the future, build for the web!”, take 2, at OSOM

I am right now in Cluj-Napoca, in Romania, for OSOM.ro, a small, totally non-profit, volunteer-organised conference. I gave an updated, shorter, revised version of the talk I gave in Amsterdam last June. As usual, here are the slides and the source for the slides.

It is more or less the same, but better, and I also omitted some sections and spoke a bit about Firefox Developer Edition.

Also I was wearing this Fox-themed sweater which was imbuing me with special powers for sure:

fox sweater

(I found it at H & M last Saturday; there are more animals if foxes aren’t your thing).

There were some good discussions about open source per se, community building and growing. And no, talks were not recorded.

I feel a sort of strange emptiness now, as this has been my last talk for the year, but it won’t be long until other commitments fill that vacuum. Like MozLandia—by this time next week I’ll be travelling to, or already in, Portland, for our work week. And when I’m back I plan to gradually slide into a downward spiral of idleness. At least until 2015.

Looking forward to meeting some mozillians I haven’t met yet, and also visiting Ground Kontrol again and exploring new coffee shops when we have a break in Portland, though :-)

Kevin Ngo'Card Not Formatted' Error on Pentax Cameras with Mac OSX Card Reader

With some 64GB SDHC and SDXC cards on Pentax (and possibly other) cameras, you might get a 'Card Not Formatted' error. It may happen if you take some shots, plug the SD card into your Mac's card reader, upload the shots, and then unplug it. I've seen the error on my K30 and K3. It's not an issue with the camera or the card, though.

The issue is with unplugging it. With some SD cards on OSX, the SD card has to be properly ejected rather than straight-up unplugging it. Or else it'll be in some sort of weirdly formatted state. That may be obvious, but I never ran into issues unplugging cards before.

If you hit the error, you don't have to reformat the card. Simply plug it back into your machine, eject it, and then everything will have been properly torn down for the card to be usable.

Gregory SzorcTest Drive the New Headless Try Repository

Mercurial and Git both experience scaling pains as the number of heads in a repository approaches infinity. Operations like push and pull slow to a crawl and everyone gets frustrated.

This is the problem Mozilla's Try repository has been dealing with for years. We know the solution doesn't scale. But we've been content kicking the can by resetting the repository (blowing away data) to make the symptoms temporarily go away.

One of my official goals is to ship a scalable Try solution by the end of 2014.

Today, I believe I finally have enough code cobbled together to produce a working concept. And I could use your help testing it.

I would like people to push their Try, code review, and other miscellaneous heads to a special repository. To do this:

$ hg push -r . -f ssh://hg@hg.gregoryszorc.com/gecko-headless

That is:

  • Consider the changeset belonging to the working copy
  • Allow the creation of new heads
  • Send it to the gecko-headless repo on hg.gregoryszorc.com using SSH

Here's what's happening.

I have deployed a special repository to my personal server that I believe will behave very similarly to the final solution.

When you push to this repository, instead of your changesets being applied directly to the repository, it siphons them off to a Mercurial bundle. It then saves this bundle somewhere along with some metadata describing what is inside.

When you run hg pull -r on that repository and ask for a changeset that exists in the bundle, the server does some magic and returns data from the bundle file.

Things this repository doesn't do:

  • This repository will not actually send changesets to Try for you.
  • You cannot hg pull or hg clone the repository and get all of the commits from bundles. This isn't a goal. It will likely never be supported.
  • We do not yet record a pushlog entry for pushes to the repository.
  • The hgweb HTML interface does not yet handle commits that only exist in bundles. People want this to work. It will eventually work.
  • Pulling from the repository over HTTP with a vanilla Mercurial install may not preserve phase data.

The purpose of this experiment is to expose the repository to some actual traffic patterns so I can see what's going on and get a feel for real-world performance, variability, bugs, etc. I plan to do all of this in the testing environment. But I'd like some real-world use on the actual Firefox repository to give me peace of mind.

Please report any issues directly to me. Leave a comment here. Ping me on IRC. Send me an email. etc.

Update 2014-11-21: People discovered a bug with pushed changesets accidentally being advanced to the public phase, despite the repository being non-publishing. I have fixed the issue. But you must now push to the repository over SSH, not HTTP.

Asa DotzlerFlame Distribution Update

About three weeks ago, I ran out of Flame inventory for Mozilla employees and key volunteer contributors. The new order of Flames is arriving in Mountain View late today (Friday) and I’ll be working some over the weekend, but mostly Monday to deliver on the various orders you all have placed with me through email and other arrangements.

If you contacted me for a Flame or a batch of Flames, expect an email update in the next few days with information about shipping or pick-up locations and times. Thanks for your patience these last few weeks. We should not face any more Flame shortages like this going forward.

Jennie Rose HalperinTownhall, not Shopping Mall! Community, making, and the future of the Internet

I presented a version of this talk at the 2014 Futurebook Conference in London, England. They also kindly featured me in the program. Thank you to The Bookseller for a wonderful conference filled with innovation and intelligent people!

A few days ago, I was in the Bodleian Library at Oxford University, often considered the most beautiful library in the world. My enthusiastic guide told the following story:

After the Reformation (when all the books in Oxford were burned), Sir Thomas Bodley decided to create a place where people could go and access all the world’s information at their fingertips, for free.

“What does that sound like?” she asked. “…the Internet?”

While this is a lovely conceit, the part of the story that resonated with me for this talk is the other big change that Bodley made, which was to work with publishers, who were largely a monopoly at that point, to fill his library for free by turning the library into a copyright library. While this seemed antithetical to the ways that publishers worked, in giving a copy of their very expensive books away, they left an indelible and permanent mark on the face of human knowledge. It was not only preservation, but self-preservation.

Bodley was what people nowadays would probably call “an innovator” and maybe even in the parlance of my field, a “community manager.”

By thinking outside of the scheme of how publishing works, he joined together with a group of skeptics and created one of the greatest knowledge repositories in the world, one that still exists 700 years later. This speaks to a few issues:

Sharing economies, community, and publishing should and do go hand in hand and have since the birth of libraries. By stepping outside of traditional models, you are creating a world filled with limitless knowledge and crafting it in new and unexpected ways.

The bound manuscript is one of the most enduring technologies. This story remains relevant because books are still books and people are still reading them.

At the same time, things are definitely changing. For the most part, books and manuscripts have been readily identifiable as books and manuscripts for the past 1000 years.

But what if I were to give Google Maps to a 16th Century Map Maker? Or what if I were to show Joseph Pulitzer Medium? Or what if I were to hand Gutenberg a Kindle? Or Project Gutenberg for that matter? What if I were to explain to Thomas Bodley how I shared the new Lena Dunham book with a friend by sending her the file instead of actually handing her the physical book? What if I were to try to explain Lena Dunham?

These innovations have all taken place within the last twenty years, and I would argue that we haven’t even scratched the surface in terms of the innovations that are to come.

We need to accept that the future of the printed word may vary from words on paper to an ereader or computer in 500 years, but I want to emphasize that in the 500 years to come, it will more likely vary from the ereader to a giant question mark.

International literacy rates have risen rapidly over the past 100 years and companies are scrambling to be the first to reach what they call “developing markets” in terms of connectivity. In the vein of Mark Surman’s talk at the Mozilla Festival this year, I will instead call these economies post-colonial economies.

Because we (as people of the book) are fundamentally idealists who believe that the printed word can change lives, we need to be engaged in rethinking the printed word in a way that recognizes power structures and does not settle for the limited choices that the corporate Internet provides (think Facebook vs WhatsApp). This is not a panacea to fix the world’s ills.

In the Atlantic last year, Phil Nichols wrote an excellent piece that paralleled Web literacy and early 20th century literacy movements. The dualities between “connected” and “non-connected,” he writes, impose the same kinds of binaries and blind cure-all for social ills that the “literacy” movement imposed in the early 20th century. In equating “connectedness” with opportunity, we are “hiding an ideology that is rooted in social control.”

Surman, who is director of the Mozilla Foundation, claims that the Web, which had so much potential to become a free and open virtual meeting place for communities, has started to resemble a shopping mall. While I can go there and meet with my friends, it’s still controlled by cameras that are watching my every move and its sole motive is to get me to buy things.

85 percent of North America is connected to the Internet and 40 percent of the world is connected. Connectivity increased at a rate of 676% in the past 13 years. Studies show that literacy and connectivity go hand in hand.

How do you envision a fully connected world? How do you envision a fully literate world? How can we empower a new generation of connected communities to become learners rather than consumers?

I’m not one of these technology nuts who’s going to argue that books are going to somehow leave their containers and become networked floating apparatuses, and I’m not going to argue that the ereader is a significantly different vessel than the physical book.

I’m also not going to argue that we’re going to have a world of people who are only Web literate and not reading books in twenty years. To make any kind of future prediction would be false prophecy, elitist, and perhaps dangerous.

Although I don’t know what the printed word will look like in the next 500 years,

I want to take a moment to think outside the book,

to think outside traditional publishing models, and to embrace the instantaneousness, randomness, and spontaneity of the Internet as it could be, not as it is now.

One way I want you to embrace the wonderful wide Web is to try to at least partially decouple your social media followers from your community.

Twitter and other forms of social media are certainly a delightful and fun way for communities to communicate and get involved, but your viral campaign, if you have it, is not your community.

True communities of practice are groups of people who come together to think beyond traditional models and innovate within a domain. For a touchstone, a community of practice is something like the Penguin Labs internal innovation center that Tom Weldon spoke about this morning and not like Penguin’s 600,000 followers on Twitter. How can we bring people together to allow for innovation, communication, and creation?

The Internet provides new and unlimited opportunities for community and innovation, but we have to start managing communities and embracing the people we touch as makers rather than simply followers or consumers.

The maker economy is here— participatory content creation has become the norm rather than the exception. You have the potential to reach and mobilize 2.1 billion people and let them tell you what they want, but you have to identify leaders and early adopters and you have to empower them.

How do you recognize the people who create content for you? I don’t mean authors, but instead the ambassadors who want to get involved and stay involved with your brand.

I want to ask you, in the spirit of innovation from the edges

What is your next platform for radical participation? How are you enabling your community to bring you to the next level? How can you differentiate your brand and make every single person you touch psyched to read your content, together? How can you create a community of practice?

Community is conversation. Your users are not your community.

Ask yourself the question Rachel Fershleiser asked when building a community on Tumblr: Are you reaching out to the people who want to hear from you and encouraging them or are you just letting your community be unplanned and organic?

There comes a point where unplanned, organic growth reaches its limit. Know when you hit that limit.

Target, plan, be upbeat, and encourage people to talk to one another without your help and stretch the creativity of your work to the upper limit.

Does this model look different from when you started working in publishing? Good.

As the story of the Bodleian Library illustrated, sometimes a totally crazy idea can be the beginning of an enduring institution.

To repeat, the book is one of the most durable technologies and publishing is one of the most durable industries in history. Its durability has been put to the test more than once, and it will surely be put to the test again. Think of your current concerns as a minor stumbling block in a history filled with success, a history that has documented and shaped the world.

Don’t be afraid of the person who calls you up and says, “I have this crazy idea that may just change the way you work…” While the industry may shift, the printed word will always prevail.

Publishing has been around in some shape or form for 1000 years. Here’s hoping that it’s around for another 1000 more.

Mozilla Reps CommunityReps Weekly Call – November 20th 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

reps

Summary

  • FOSDEM update.
  • Post-event metrics and receipts (Important reminder)
  • Firefox Tiles Reps FAQ.
  • 10 days of Mozillians.
  • Yahoo agreement.
  • Community newsletter.
  • #fx10 Jakarta
  • Pending Reps applicants.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Pascal FinetteIntroduction to Exponential Thinking and Technology

A few weeks ago I had the privilege to deliver the closing keynote at GroupM’s What’s Next Illuminate conference in New York City. I gave a short introduction to exponential thinking (the stuff we teach at Singularity University) and then walked the audience through a whole bunch of examples (focussed on media).

The talk was a shortened and more media-related version of my “Technology Trends” talk I give here at SU to groups from all over the world quite often.

Here’s the video:

Mike Taylordocument.body.scrollTop vs document.documentElement.scrollTop

Here's a track from Web Compatibility's Greatest Hits Album (Volume I) that just doesn't want to go away—with the latest club remix titled "scrolling to sections from the menu in the mobile Google News site doesn't work due to setting scrollTop position on document.body in Firefox for Android".

Here's some background for those with less refined musical tastes.

(Why yes I can do this bad metaphor stuff all day long, why do you ask?)

If you want to get or set the vertical scroll position of a document, you can use element.scrollTop. According to the CSSOM View Module spec, if you're in standards mode you need to operate on the document's root element (the <html> element—or document.documentElement in DOM land). In quirks mode you would use the <body> element, via document.body.
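
In other words (a quick sketch of mine, not from the post), spec-compliant code would pick the element based on the document's mode:

// Spec behaviour: documentElement in standards mode, body in quirks mode
var scroller = document.compatMode === "CSS1Compat" ?
               document.documentElement :
               document.body;
scroller.scrollTop = 0; // scroll back to the top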

This works in IE, Firefox, and the late Presto-based Opera.

In Blink and WebKit browsers, it's the exact opposite. Both have attempted to implement the standard (Safari, Chrome), but both have had to back out their patches due to sites breaking (some Google properties and webkit.org among them, as luck would have it).

The bug that was filed against WebKit for Facebook breaking as a result of changing to the standard is especially interesting because it shows the tension between following standards (and other browsers) and breaking sites for their own users.

It's also a good example of how user-agent-string-based development can sometimes make it hard, if not impossible, to remove some of the crappier stuff from the web platform.

Here's some excerpts, but the whole bug is a good read.

Comment 15:

It really doesn't matter how faithfully you implemented the spec. If it causes a major backward compatibility with the Web, we can't have it.

Comment 31:

Yes, the regression doesn't reproduce if we fake the UA string as I mentioned in the comment #31.

Maybe sites will update one day and let other browsers do the right thing™. (Not that I'm holding my breath over here.)

Until then I guess we get to have fun writing stuff like this (found on apple.com a few weeks back):

(document.documentElement ||
 document.body.parentNode ||
 document.body).scrollTop;
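
A feature-detection approach is also possible instead of UA sniffing. Here's a rough sketch of my own (not from the post); it assumes the page is actually scrollable:

// Nudge the scroll position and see which element responds.
function getScrollingElement(doc) {
  if (doc.scrollingElement) {
    return doc.scrollingElement; // newer browsers expose this directly
  }
  var html = doc.documentElement;
  var original = html.scrollTop;
  html.scrollTop = original + 1;
  if (html.scrollTop === original + 1) {
    html.scrollTop = original; // documentElement moved: IE/Firefox-style behaviour
    return html;
  }
  return doc.body; // otherwise fall back to body: WebKit/Blink-style behaviour
}

var scrollTop = getScrollingElement(document).scrollTop;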

Yunier José Sosa VázquezNew Search Strategy for Firefox Promotes Choice and Innovation

Google will stop being the default search engine in Firefox in the United States starting this December, according to the official announcement published by Chris Beard on the Mozilla blog. In other regions of the world Google will also be replaced by other “competitors” in order to promote choice on the Web.
search-with-yahoo
Search is an essential part of the Internet experience for everyone; Firefox users alone perform more than 100 billion searches per year.

With Firefox, Mozilla popularized integrating search into the browser, partnering with Internet companies such as Google, Yahoo and others to generate revenue and advance its mission. Google has been Firefox’s global default search engine since 2004, and with the contract expiring this year, Mozilla has taken the opportunity to review its strategy and explore other options.

According to Beard, when evaluating partners, Mozilla’s first consideration was to secure a strategy aligned with its values of choice and independence, one that would position Mozilla to advance its mission and better serve users and the Web. In the end, every option available from the candidate partners was strong, improving the economic terms and reflecting the value Firefox brings to the ecosystem. But one option stood out above the rest.

Mozilla has ended its practice of having a single global search default in Firefox and has instead adopted a more flexible approach that allows for per-country defaults:

United States

  • Yahoo will be the default search engine for the next five years.
  • Starting in December, Firefox users will be introduced to a new and improved Yahoo search experience featuring a modern interface.
  • As part of this partnership, Yahoo will support Do Not Track (DNT) in Firefox.
  • Google, Bing, DuckDuckGo, eBay, Amazon, Twitter and Wikipedia will continue to be alternative search options.

Default-Virgin-flight-27-600x421
Russia

  • Yandex will be the default search engine.
  • Google, DuckDuckGo, OZON.ru, Price.ru, Mail.ru and Wikipedia will continue to be alternative search options.

Russia
China

  • Baidu will continue to be the default search engine.
  • Google, Bing, Youdao, Taobao and other local options will continue to be alternative search options.

China
Rest of the world

  • Firefox is a browser for everyone, regardless of search preference.
  • Firefox now offers more choice in search providers than any other browser, with 61 providers pre-installed across 88 language versions.
  • Although Mozilla decided not to renew the contract, Google will continue to be a pre-installed search option.
  • Google will continue to provide Geolocation and Safe Browsing services in Firefox.
  • Mozilla will focus on expanding its work with motivated partners to explore new and innovative search interfaces, content experiences, and privacy enhancements across desktop and mobile.

Search_hr_wlogo-600x402

This is why independence matters. Because we don’t chase profit, we can make different choices: choices that keep the Web open, everywhere, and independent. We believe today marks a big step in that direction.

It’s worth noting that Google will remain available as the default search engine in the remaining countries, but the door is open for other interested partners to join this strategy.

Source: The Mozilla Blog

Source: Google System

Matthew RuttleyUsing Web Workers in Firefox Extensions

Web Workers allow you to run code in the background in browsers such as Firefox. This is how to build one into a Firefox extension, which is slightly different from creating one as normal on a page. The documentation for doing this is basically non-existent, so hopefully you’ll find this useful.

Please make sure you have a development environment set up similar to the one described here in my previous post.

How do workers work?

  • Workers in /data/ are not directly connected to scripts in /lib/
  • However, they can communicate by sending messages to each other
  • These messages are text only, so could contain serialized JSON, but nothing else (see the sketch after this list)
  • You’ll notice below that we are basically just slinging messages between two scripts
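
Since messages are plain text, a common pattern is to serialize objects to JSON on both sides. A minimal sketch of my own (not part of the tutorial files, and assuming the worker is wired up as shown later):

// In the worker (data/ side): parse incoming JSON, reply with JSON
self.onmessage = function(event) {
    var request = JSON.parse(event.data);
    self.postMessage(JSON.stringify({ greeting: "Hello " + request.name }));
};

// In the main script (lib/ side): stringify on the way out, parse on the way in
worker.postMessage(JSON.stringify({ name: "Matthew" }));
worker.onmessage = function(event) {
    console.log(JSON.parse(event.data).greeting);
};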

The code for the worker

Navigate to the /data/ directory and create a file called hello_world.js

> pwd
/Users/mruttley/Documents/test
> ls
data		lib		package.json	test
> cd data/
> vim hello_world.js

Now paste the following in there (new users of vim: press i to start typing, and Esc followed by :wq to save):

//Code for the worker

self.onmessage = function(messageFromClient) {
    self.postMessage("Hello " + messageFromClient.data);
};

This says that whenever the worker receives a message from the client, it sends a message back with the word “Hello” prepended.

One note here: in workers you can’t use the useful console.log("message") function; use dump("message") instead.

Let’s call the worker from the main code

Let’s navigate back to the /lib/ folder and edit the main.js file, which is the first thing that runs in the extension.

> cd ../lib/
> vim main.js

Paste in the following code:

var worker = new Worker("hello_world.js");

worker.onmessage = function(e) {
    console.log(e.data);
};

worker.postMessage("Matthew");

And run cfx run. You’ll notice a messy error:

> cfx run
Using binary at '/Applications/Firefox.app/Contents/MacOS/firefox-bin'.
Using profile at '/var/folders/p1/zzdzcrrx6pq96hgsmy5xjqmh0000gp/T/tmpgFixDP.mozrunner'.
console.error: test: 
  Message: ReferenceError: Worker is not defined
  Stack:
    @resource://jid1-zmowxggdley0aa-at-jetpack/test/lib/main.js:1:9
CuddlefishLoader/options<.load@resource://gre/modules/commonjs/sdk/loader/cuddlefish.js:129:18
run@resource://gre/modules/commonjs/sdk/addon/runner.js:138:19
startup/</<@resource://gre/modules/commonjs/sdk/addon/runner.js:81:7
Handler.prototype.process@resource://gre/modules/Promise-backend.js:865:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise-backend.js:744:7

*************************
A coding exception was thrown in a Promise resolution callback.
See https://developer.mozilla.org/Mozilla/JavaScript_code_modules/Promise.jsm/Promise

Full message: ReferenceError: Worker is not defined
Full stack: @resource://jid1-zmowxggdley0aa-at-jetpack/test/lib/main.js:1:9
CuddlefishLoader/options<.load@resource://gre/modules/commonjs/sdk/loader/cuddlefish.js:129:18
run@resource://gre/modules/commonjs/sdk/addon/runner.js:138:19
startup/</<@resource://gre/modules/commonjs/sdk/addon/runner.js:81:7
Handler.prototype.process@resource://gre/modules/Promise-backend.js:865:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise-backend.js:744:7

*************************
console.error: test: 
  Message: ReferenceError: Worker is not defined
  Stack:
    @resource://jid1-zmowxggdley0aa-at-jetpack/test/lib/main.js:1:9
CuddlefishLoader/options<.load@resource://gre/modules/commonjs/sdk/loader/cuddlefish.js:129:18
run@resource://gre/modules/commonjs/sdk/addon/runner.js:138:19
startup/</<@resource://gre/modules/commonjs/sdk/addon/runner.js:81:7
Handler.prototype.process@resource://gre/modules/Promise-backend.js:865:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise-backend.js:744:7

Aha! The key line here is ReferenceError: Worker is not defined. This is because Firefox extensions use something called a ChromeWorker instead. We need to import it in main.js by pasting this at the top:

var {ChromeWorker} = require("chrome")

and changing the line that references the hello_world.js file to call a ChromeWorker instead:

//var worker = new Worker("hello_world.js"); //remove this
var worker = new ChromeWorker("hello_world.js"); //add this instead

Ok, let’s try running it again with cfx run. Wtf, another error?!

> cfx run
Using binary at '/Applications/Firefox.app/Contents/MacOS/firefox-bin'.
Using profile at '/var/folders/p1/zzdzcrrx6pq96hgsmy5xjqmh0000gp/T/tmpJJXeC4.mozrunner'.
console.error: test: 
  Message: Error: Malformed script URI: hello_world.js
  Stack:
    @resource://jid1-zmowxggdley0aa-at-jetpack/test/lib/main.js:3:14
CuddlefishLoader/options<.load@resource://gre/modules/commonjs/sdk/loader/cuddlefish.js:129:18
run@resource://gre/modules/commonjs/sdk/addon/runner.js:138:19
startup/</<@resource://gre/modules/commonjs/sdk/addon/runner.js:81:7
Handler.prototype.process@resource://gre/modules/Promise-backend.js:865:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise-backend.js:744:7

console.error: test: 
  Message: Error: Malformed script URI: hello_world.js
  Stack:
    @resource://jid1-zmowxggdley0aa-at-jetpack/test/lib/main.js:3:14
CuddlefishLoader/options<.load@resource://gre/modules/commonjs/sdk/loader/cuddlefish.js:129:18
run@resource://gre/modules/commonjs/sdk/addon/runner.js:138:19
startup/</<@resource://gre/modules/commonjs/sdk/addon/runner.js:81:7
Handler.prototype.process@resource://gre/modules/Promise-backend.js:865:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise-backend.js:744:7

The key line here is Malformed script URI: hello_world.js. This cryptic error is because Firefox can’t yet access anything in the /data/ folder. We have to use another part of the SDK to enable access to it.

Open main.js and put this at the top:

var self = require("sdk/self");

Now we can use the self.data.url() function. When you pass a filename as the first argument, it returns a string like resource://jid1-zmowxggdley0aa-at-jetpack/test/data/whatever_file.js, which properly refers to the file in the case of extensions. Modify the worker import line as follows:

//let worker = new Worker("hello_world.js"); //remove this
let worker = new ChromeWorker(self.data.url("hello_world.js")); //add this

Now let’s run the extension again using cfx run:

> cfx run
Using binary at '/Applications/Firefox.app/Contents/MacOS/firefox-bin'.
Using profile at '/var/folders/p1/zzdzcrrx6pq96hgsmy5xjqmh0000gp/T/tmppvMjZp.mozrunner'.
console.log: test: Hello Matthew

Yay, it works! The Worker returned the message “Hello Matthew”.

FAQ

  • What does this {notation} mean?

It is shorthand for:

var chrome = require("chrome");
var ChromeWorker = chrome['ChromeWorker'];

Basically this means that require("chrome") returns an object, and we just need the value referenced by the key "ChromeWorker". This is a very succinct way of extracting things from JavaScript objects that will come in handy in the future.

  • Why is Worker now called ChromeWorker? Are we doing something with Google Chrome?

This is a naming coincidence and nothing to do with Chrome as in the browser. Chrome in this case refers to Firefox Addon internals.

Mozilla FundraisingMozilla Now Accepts Bitcoin

For some time, Mozilla supporters have asked for the ability to donate using bitcoin. We are finally able to fulfill that request. Beginning today, we accept bitcoin as one of the many ways people can choose to support Mozilla. Read … Continue reading

Pascal FinetteCommunity Building (Lessons from Mozilla)

Today I had the great honor and pleasure to teach a class on Building (Online) Communities at Electronic Arts’ internal leadership development program. For my presentation I took a trip down memory lane and pulled out the key insights and learnings from my time at Mozilla.

Here’s the deck:

Lukas BlakkArtisanal Contributors

Part 1: Start In Person

Ascend had very few ‘rules’ but there was one which was non-negotiable: it’s an in-person program. We didn’t do distance learning, online coursework, or video-based classes. We did bring in a couple of speakers virtually to speak to the room of 20 participants but the opposite was never true.

This was super important to how we were going to build a strong cohort. Don’t get me wrong, I’m a fan of remote work and global contribution, and of people working from wherever they are. This was a 6 week intensive program though, and in order to build the inter-dependent cohort I was hoping for1, it had to be in person at first, especially in those crucial early stages where someone is more likely to ‘disappear’ if things get hard or confusing, or if they can’t get someone’s attention to ask a question.

It’s been over 5 years since I graduated from my software development program and over 8 years since I started lurking in IRC channels2 and getting to know Mozillians in digital space first. I wouldn’t have stuck with it, or gotten so deeply involved without my coursework with Dave Humphrey though. That was a once a week class, but it meant the world to be in the same room as other people who were learning and struggling with the same or similar problems. It was an all-important thread connecting what I was trying to do in my self-directed time with actual people who could show more caring about me and my ability to participate.

Even as an experienced open source contributor I can jump into IRC channels for projects I’m trying to work on – most recently dd-wrt for my home server setup – and when I ask a question (with lots of evidence for what I’ve already tried and an awareness of what the manual has to say) I get no response, aka: Crickets. There are a host of reasons, and I know more than a beginner might about what those could be: timezones, family commitments, no one with the expertise currently in the channel, and more. None of that matters when you’re new to this type of environment. Silence is interpreted as a big “GO AWAY YOU DON’T BELONG HERE” despite the best intentions of any community.

In person learning is the best way to counter that. Being able to turn to a colleague or a mentor and say what’s happening helps get you both reassurance that it’s not you, but also someone who can help you get unstuck on what to do next. While you wait for a response, check out this other topic we’re studying. Perhaps you can try other methods of communication too, like in a bug or an email.

Over the course of our first pilot I also discovered that removing myself from the primary workroom the Ascend participants were in helped the cohort rapidly build up strength in helping each other first3. The workflow looked more like: have a question/problem, ask a cohort member (or several), if you still can’t figure it out ask on IRC, and then if you’re still stuck find your course leader. This put me at the end of the escalation path4 and meant that people were learning to rely on both in-person communication and IRC, but more importantly were building up the muscle of “don’t stop asking for help until you get it”, which is really where open source becomes such a great space to work in.

Back to my recent dd-wrt experience, I didn’t hear anything back in IRC and I felt I had exhausted the forums & wikis their community provided. I started asking in other IRC channels where tech-minded people hung out (thanks womenwhohack!) and then I tried yet another search with slightly different terms. In the end I found what I needed in a YouTube tutorial. I hope that sufficiently demonstrates that a combination of tactics are what culminate in an ability to be persistent when learning in open source projects.

Never underestimate the importance of removing isolation for new contributors to a project. In person help, even just at first, can be huge.


  1. Because the ultimate goal of Ascend was to give people skills for long-term contribution and participation and a local cohort of support and fellow learners seemed like a good bet for that to be possible once the barrier-removing help of the 6 week intensive was no longer in place. 
  2. By the way, I’m such a huge fan of IRC that I wrote the tutorial for it at Mozilla in order to help get more non-engineering folks using it, in my perfect world everyone is in IRC all the time with scrollback options and logging. 
  3. Only after the first three weeks when we moved to the more independent work, working on bugs, stage. 
  4. Which is awesome, because I was always struggling to keep up with the course creation as we were running it. I didn’t realize that teaching 9-5 was asking for disaster; next time we’ll do 10-4 for the participants to give the mentors pre- and post-class prep time. 

Matthew RuttleySetting up a development environment for Firefox Extensions.

This is the method I use to create simple Firefox extensions. This tutorial is a precursor to the next one, which is about using Web Workers (i.e. allowing code to run on background threads).

Setting up the environment

We’re going to need the Firefox Add-on SDK. This is a collection of Python files that will let you run a test (optionally blank) version of Firefox. To download it:

mruttley13338:addon mruttley$ wget https://ftp.mozilla.org/pub/mozilla.org/labs/jetpack/jetpack-sdk-latest.tar.gz

Now extract it and remove the tarball:

mruttley13338:addon mruttley$ tar -zxvf jetpack-sdk-latest.tar.gz
mruttley13338:addon mruttley$ rm -f jetpack-sdk-latest.tar.gz

Go to the directory and startup the special shell:

mruttley13338:addon mruttley$ cd addon-sdk-1.17/
mruttley13338:addon-sdk-1.17 mruttley$ source bin/activate
Welcome to the Add-on SDK. For the docs, visit https://addons.mozilla.org/en-US/developers/docs/sdk/latest/
(addon-sdk-1.17)mruttley13338:addon-sdk-1.17 mruttley$

Now you can see that the shell has prepended (addon-sdk-1.17) in brackets to the prompt. This makes the prompt quite long, so we can shorten it with the command:

(addon-sdk-1.17)mruttley13338:addon-sdk-1.17 mruttley$ PS1="> "
>

Much cleaner! :)

Setting up the extension template

Now that we have this special addon-sdk shell, navigate back to your documents and create a new folder for our extension.

> cd ~/Documents/
> mkdir tutorial
> cd tutorial

This special shell has various useful commands included, which all look like cfx xyz. For more about them, see here. In this case we use cfx init:

> cfx init
* lib directory created
* data directory created
* test directory created
* generated jID automatically: jid1-FYWu7Ld5UwDbRw
* package.json written
* test/test-main.js written
* lib/main.js written

Your sample add-on is now ready.
Do "cfx test" to test it and "cfx run" to try it.  Have fun!
>

Let’s inspect what was created:

> ls -lah
total 8
drwxr-xr-x   6 mruttley  staff   204B 20 Nov 16:47 .
drwx------+ 29 mruttley  staff   986B 20 Nov 16:45 ..
drwxr-xr-x   2 mruttley  staff    68B 20 Nov 16:47 data
drwxr-xr-x   3 mruttley  staff   102B 20 Nov 16:47 lib
-rw-r--r--   1 mruttley  staff   174B 20 Nov 16:47 package.json
drwxr-xr-x   3 mruttley  staff   102B 20 Nov 16:47 test

  • lib contains a file called main.js, which is the main handler file for all extension code
  • data is empty but can be used to store things like workers (which we will come to later) or large data files
  • test can contain unit tests (quite hard to set up but useful for test-driven development later)
  • package.json contains metadata about the extension – version number, name of the creator, description, licensing etc

You can start writing code in main.js and it will run in the browser. Once finished, use cfx run to test it!
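
For example, here is a minimal main.js sketch (a hypothetical example of mine, not part of the generated template) that opens a tab when the add-on starts, just to prove that code in lib/main.js runs:

// Minimal main.js: open a tab as soon as the add-on loads
var tabs = require("sdk/tabs");        // high-level SDK module for working with tabs
tabs.open("https://www.mozilla.org/"); // runs once when the extension starts up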

See the next tutorial on how to write a firefox extension using web workers!

Lukas BlakkRelease Management Tooling: Past, Present, and Future

Release Management Tooling: Past, Present, and Future

As I was interviewing a potential intern for the summer of 2015 I realized I had outlined all our major tools and what the next enhancement for each could be but that this wasn’t well documented anywhere else yet.

By coming to Release Management from my beginnings as a Release Engineer, I’ve been a part of seeing our overall release automation improve across the whole spectrum of what it takes to put out packaged software for multiple platforms and we’ve come a long way so this post is also intended to capture how the main tools we use have gotten to their current state as well as share where they are heading.

Ship-It

Past: The Release Manager on point for a release sent an email to the Release-Drivers mailing list with an hg changeset, a version, and a build number, and this was the “go” to build for Release Engineering to take over and execute a combination of automated/manual steps (there was even a time when it was only said in IRC; email became the constant when Joduinn pushed for consistency and a traceable trail of events). Release Engineers would update config files & locale changes, get them attached to a bug, approved, and uplifted, then go reconfigure the build machines so they could kick off the release build automation.

Present: Ship-It is an app developed by Release Engineering (bhearsum) that allows a Release Manager to input the configurations needed (changeset, version, build number, partials to be created, l10n changesets) all in one place, and on submit the build automation picks up this change from a db, reconfigures the build machine, and triggers builds. When all goes well, there are zero human hands between the “go” and the availability of builds to QA.

Future: In two parts:
1. To have a simple app that can take a list of bug numbers and check whether they have landed on {branch} (where branch is Beta, Release, or ESR); once all the bugs listed have landed, check Treeherder for green status on that last changeset and submit to Ship-It if builds are successful. Benefits: hands off even sooner, knowing that all the important fixes are on the branch in question and that the tree is totally green prior to build (sometimes we “go” without all the results because of human timing needs).
2. Complete End-To-End Release Checklist, dynamically updated to show what stage a release job is at and who’s got the ball in their court. This should track from buglist added (for the final landings a RM is waiting on) all the way until the release notes are live and QA signs off on updates for the general release being in the wild.

Nucleus (aka Release Note App)

Past: Oh dear, you probably don’t even want to know how our release notes used to be made. It’s worse than sausage. There was a sqlite db file, a script that pulled from that db and generated html based on templates and then the Release Manager had to manually re-order the html to get the desired appearance on final pages, all this was then committed to SVN and with that comes the power to completely break mozilla.org properties. Fun stuff. Really. Also once Release Management was more than just one person we shared this sqlite db over Dropbox which had some fun quirks, like clobbering your changes if two people had the file open at the same time. Nowhere to go but up from here!

Present: Thanks to the web production team (jgmize, hoosteeno, craigcook, jbertsch) we got a new Django app in place that gives us a proper database that’s redundant, production quality, and not in our hands. We add in release notes as well as releases, and can publish notes to both staging and production without any more commits to SVN. There’s also an API that can be scripted against.

Future: The future’s so bright in this area, let me get my shades. We have a flag in Bugzilla for relnote-firefox where it can get set to ? when something is nominated and then when we decide to take on that bug as a release note we can set it to {versionNum}+. With a little tweaking on the Bugzilla side of things we could either have a dedicated field for “release-note text” or we could parse it out of a syntax in a comment (though that’s gonna be more prone to user error, so I prefer the former) and then automatically grab all the release notes for a version, create the release in Nucleus, add the notes, publish to staging, and email the link around for feedback without any manual interference. This also means we can dynamically adjust release notes using Bugzilla (and yes, this will need to be really cautiously done), and it makes sure that our recent convention of having every release note connect to a bug persist and become the standard.

Release Dash

Past: Our only way to visualize the work we were doing was a spreadsheet, and graphs generated from it, of how many crasher bugs were tracked for a version, how many bugs tracked/fixed over the course of 18 weeks for a version, and not much else. We also pay attention to the crash rate at ship time, whether we had to do a dot release or chemspill, and any other release-version-specific issues are sort of lost in the fray after we’re a couple of weeks out from a release. This means we don’t have a great sense of our own history, what we’re doing that works in generating a more stable/successful release, and whether a release is in fact ready to go out the door. It’s a gamble, and we take it every 6 weeks.

Present: We have in place a dashboard that is supposed to allow us to view the current crash data, select Talos (performance) data, custom bug queries, and be able to compare a current release coming down the pipe to previous releases. We do not use this dashboard yet because it’s been a side project for the past year and a half, primarily being created and improved upon by fabulous – yet short-term – interns at Mozilla. The dashboard relies on Elastic Search for Bugzilla data and the cluster it points to is not always up. The dash is written in php and that’s no one’s strong suit on our current team, our last intern did his work by creating a Python Flask app that would work into the current dash. The present situation is basically: we need to work on this.

Future: In the future, this dashboard will be robust, reliable, production-quality (and supported), and it will be able to go up on Mozilla office screens in the dashboard rotation where it will make clear to any viewer:
* Where we are in the current release cycle
* What blockers remain for release
* How our stability is (over/under acceptable rates)
* If we’re meeting performance expectations
And hopefully more. We have to find more ways to get visibility into issues a release might hit once it’s with the larger population. I’d love to see us get more of our Beta user’s feedback by asking for it on specific features/fixes, get a broader Beta audience that is more reflective of our overall release population (by hardware, location, language, user types) and then grow their ability to report issues well. Then we can find ways to get that front and center too – including to developers because they are great at confirming if something unusual is happening.

What Else?

Well, we used to have an automated script that reminded teams of their open & tracked bugs on Beta/Aurora/Nightly in order to provide a priority order that was visible to devs & their managers. It’s a finicky script that breaks often. I’d like to see that replaced with something that’s not just a cronjob on my personal VPS. We’re also this close to not needing to update product-details (still in SVN) on every release. The fact that the Release Management team has the ability to accidentally take down all mozilla.org properties when a mistake is made submitting svn propedits is not desirable or necessary. We should get the heck away from that asap.

We’ll have more discussions of this in Portland, especially with the teams we work closely with, and Sylvestre and I will be talking up our process & future goals at FOSDEM in 2015, following that with a work week in Paris where we can put our heads down and code. Next summer we get an intern again, so we’ll have another set of skilled hands to put on tooling & web service improvements.

Always improving. Always automating. These are the things that make me excited for the next year of Release Management.

Mozilla Open Policy & Advocacy BlogSpotlight on the Open Technology Institute: A Ford-Mozilla Open Web Fellow Host

{This is the third installment in our series highlighting the 2015 Host Organizations for the Ford-Mozilla Open Web Fellows program. We are now accepting applications to be a 2015 fellow. We are thrilled to feature the New America Foundation’s Open Technology Institute as a host. Over the years, OTI has been a meaningful change agent, helping to protect the free and open Web. Working at OTI, the Open Web Fellow will be developing tools that lead to greater transparency, enabling all stakeholders to better understand how public policy and business practices impact the Web experience.}

Spotlight on the Open Technology Institute: A Ford-Mozilla Open Web Fellow Host Organization
By Kevin Bankston, Policy Director, and Georgia Bullen, Senior Data Analyst; Open Technology Institute

Last month’s MozFest 2014 provided us a welcome opportunity to think about what we at New America’s Open Technology Institute hope to do over the next year as one of the few organizations lucky enough to host a Ford-Mozilla Open Web Fellow during that fellowship program’s inaugural year. At OTI, we are committed to freedom and social justice in the digital age. To achieve these goals, we engage in policy debates, build technology, and work with communities to understand needs, test tools and build alternative models of infrastructure. And we are looking for a passionate maker to help us with our work in 2015. In particular, to help make more transparent the workings of the Internet and the companies that offer services over it.

OTI-Institute-CMYK [Converted]-01

So much of what impacts our online experience happens without us seeing it, making it easy to overlook.

For example, look at the Net Neutrality debate, where decisions made at interconnection points deep in the network have both business and policy implications. At OTI, we have tools that allow us to dig into the technical depths of the issue through our Measurement Lab platform, and we recently published a major report laying out much of that data.  But we need help figuring out how to make this information more available and more clear so that policy experts, advocates, industry professionals and everyday Internet users can understand what interconnection is, how it works, and how it affects the online experience. We’ve started on one of these efforts by working on a visualization tool that we’re calling the Measurement Lab Observatory, but there’s so much more we can do with the Measurement Lab data, as well as the platform and tools to make it more accessible to everyone–if only we can find the right fellow.

With the help of the participants at our MozFest usability workshop, we thought about other ways to get people involved in Internet measurement, such as building a network troubleshooting tool that could generate new M-Lab data while also testing your connection.  We also talked about developing out our Firefox Browser extension to have different themes depending on a user’s needs, such as a journalist or advocate dashboard which includes recent news about Internet policy issues, or a “notebook” app with which Internet citizen scientists can run and annotate tests as part of the M-Lab research team.

These are just the types of ideas that we’re hoping our incoming Ford-Mozilla Fellow can run with.

On the policy and governance side, there’s also a lot more that we could be doing to reveal what happens behind the scenes between governments and Internet companies. Many companies now publish “Transparency Reports” that include information about how and when governments ask for user’s data. However, there’s no standardization in how companies report, making it hard to meaningfully combine or compare the data from different companies — and hard for new companies to get into the reporting game. Building on some of our previous research and education efforts around transparency reporting, in 2015 we will be launching a project called the Transparency Reporting Toolkit.  We’re going to build a Web portal filled with best practices information and tools to help companies create and upload reports in a standardized way, and tools for others to mash up and visualize the data from multiple companies’ reports. OTI’s technologists and data visualization experts are gearing up to build those tools, but it’s a big project and we could use some help — possibly yours.

Ultimately, we can only make good policy with good information, and we can only get good information – and, crucially, understand that information – with good tools.  We’re ready to move forward on all of these projects in 2015, full steam ahead. All we need now is the right technologist to help us make those tools. If that sounds exciting to you, apply to be a 2015 Ford-Mozilla Open Web Fellow and work with us and the Mozilla community to help build new windows into the technical and political depths of the Internet.


Apply to be a 2015 Ford-Mozilla Open Web Fellow. Visit www.mozilla.org/advocacy.

Soledad PenadesUsing the Firefox Developer Edition dark theme with Nightly

With a recent version of Nightly, go to about:config and set browser.devedition.theme.enabled to true.
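
If you prefer, the same pref can be flipped from the Browser Console rather than about:config. A quick sketch of my own (not from the original post):

// Run in the Browser Console (chrome privileges required)
Components.utils.import("resource://gre/modules/Services.jsm");
Services.prefs.setBoolPref("browser.devedition.theme.enabled", true);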

Open DevTools (I use alt + cmd + i, or you can also go to the Tools → Web Developer → Toggle tools menu). Then open DevTools preferences by clicking on the gear icon, and select “Dark Theme” on the top right, underneath the Themes.

Screenshot for clarification:

nightly with dev edition theme

Note: you might not get the full effect if there is “legacy stuff” in your profile. If it doesn’t look as you expect… your best option might be to just create a new profile when you start the browser.

Note 2: for some reason the tabs weren’t rendering correctly on my normal nightly profile because the about:config browser.tabs.drawInTitlebar entry was set to false instead of true—I set it to true and now everything looks fine for me.

Or just use the standard Firefox Developer Edition if you’re not an impatient person like me :-P

flattr this!

David Rajchenbach TellerRFC: We deserve better than runtime warnings

Consider the following scenario:

  1. Module A prints warnings when it’s used incorrectly;
  2. Module B uses module A correctly;
  3. Some future refactoring of module B starts using module A incorrectly, hence displaying the warnings;
  4. Nobody realises for months, because we have too many warnings;
  5. Eventually, something breaks.

How often has this happened to everyone of us?

This scenario has many variants (e.g. module A changed and nobody realized that module B is now in a situation it misuses module A), but they all boil down to the same thing: runtime warnings are designed to be lost, not fixed. To make things worse, many of our warnings are not actionable, simply because we have no way of knowing where they come from – I’m looking at you, Cu.reportError.

So how do we fix this?

We would certainly save considerable amounts of time if warnings caused immediate assertion failures, or alternatively test failures (i.e. fail, but only when running the unit tests). Unfortunately, we can do neither, as we have a number of tests that trigger the warnings either

  • by design (e.g. to check that we can recover from such misuses of A, or because we still need a now-considered-incorrect use of an API to keep working until we have ported all the clients to the better API);
  • or for historical reasons (e.g. the now incorrect use of A used to be correct, but we haven’t fixed all tests that depend on it yet).

However, I believe that causing test failures is still the solution. We just need a mechanism that supports a form of whitelisting to cope with the aforementioned cases.

Introducing RuntimeAssert

RuntimeAssert is an experiment in providing a standard mechanism to replace warnings. I have a prototype implemented as part of bug 1080457. Feedback would be appreciated.

The key features are the following:

  • when a test suite is running, a call to `RuntimeAssert` causes the test suite to fail;
  • when a test suite is running, a call to `RuntimeAssert` reports at least the filename/line number of where it was triggered, preferably a full stack wherever available;
  • individual tests can whitelist families of calls to `RuntimeAssert` and mark them as expected;
  • individual tests can whitelist families of calls to `RuntimeAssert` and mark them as pending fix;
  • when a test suite is not running, a call to `RuntimeAssert` does nothing costly (it may default to PR_LOG or Cu.reportError).

Possible API:

  • in JS, we trigger a test failure by calling RuntimeAssert.fail(keyword, string or Error) from production code;
  • in C++, we likewise trigger a test failure by calling MOZ_RUNTIME_ASSERT(keyword, string);
  • in the testsuite, we may whitelist errors by calling Assert.whitelist.expected(keyword, regexp)  or Assert.whitelist.FIXME(keyword, regexp).

Examples:

//
// Module
//
let MyModule = {
  oldAPI: function(foo) {
    RuntimeAssert.fail("Deprecation", "Please use MyModule.newAPI instead of MyModule.oldAPI");
    // ...
  },
  newAPI: function(foo) {
    // ...
  },
};

let MyModule2 = {
  api: function() {
    return somePromise().then(null, error => {
      RuntimeAssert.fail("MyModule2.api", error);
      // Rather than leaving this error uncaught, let's make it actionable.
    });
  },

  api2: function(date) {
    if (typeof date == "number") {
      RuntimeAssert.fail("MyModule2.api2", "Passing a number has been deprecated, please pass a Date");
      date = new Date(date);
    }
    // ...
  }
};


//
// Whitelisting a RuntimeAssert in a test.
//

// This entire test is about MyModule.oldAPI, warnings are normal.
Assert.whitelist.expected("Deprecation", /Please use MyModule.newAPI/);

// We haven't fixed all calls to MyModule2.api2, so they should still warn, but not cause an orange.
Assert.whitelist.FIXME("MyModule2.api2", /please pass a Date/);

Assert.whitelist.expected("MyModule2.api", /TypeError/, function() {
  // In this test, we will trigger a TypeError in MyModule2.api, that's entirely expected.
  // Ignore such errors within the (async) scope of this function.
});

Applications

In the long-term, I believe that RuntimeAssert (or some other mechanism) should replace almost all our calls to Cu.reportError.

In the short-term, I plan to use this for reporting

  • uncaught Promise rejections, which currently require a bit too much hacking for my tastes;
  • errors in XPCOM.lazyModuleGetter & co;
  • failures during AsyncShutdown;
  • deprecation warnings as part of Deprecated.jsm.


Doug BelshawFirefox Interest Dashboard: privacy-respecting analytics for your web browsing history

On a recent Mozilla project call I heard about the new Firefox Interest Dashboard. As someone who loves self-tracking but stopped using my Fitbit due to privacy concerns, I think this is awesome.

My Firefox Interest Dashboard

Some of the numbers may be a bit off, and the categorisation certainly is in some cases, but it’s a promising start! The great thing is that if you use Firefox Sync it uses your data from other installations you use, too!

From the Content Services team:

This is an early version of interest categorization we’re working on. We invite you to test out this experimental beta add-on and help us out with the misclassified results. We would love to hear from you on suggestions on improvement or any feedback through the flag icon on the interest timeline.

Unlike other analytics services, the FAQ assures users that “all of the interest analysis and categorization is done on the client-side of your browser. No personal data is stored on Mozilla’s servers.”

Download the add-on (Firefox only)


Questions? Comments? Direct them to doug@mozillafoundation.org or discuss in the #TeachTheWeb discussion forum.

Julien VehentSSL/TLS for the Pragmatic

Tonight I had the pleasure to present "SSL/TLS for the Pragmatic" to the fine folks of Bucks County Devops. It was a fun evening, and I want to thank the organizers, Mike Smalley & Ben Krein, for the invitation.

It was a great opportunity to summarize 18 months of work at Mozilla on building the Server Side TLS Guidelines. Judging by the feedback I received tonight, and on several other occasions, I think we've achieved the goal of building a document that is useful to operations people and has made TLS just a little easier to understand.

We are not, however, anywhere done with the process of teaching TLS to the Internet. Stats speak for themselves, with 70% of sites still supporting SSLv3, 86% enabling RC4, and about 32% still not preferring PFS over RSA handshakes. But things are getting better every day, and ongoing efforts may bring safe defaults in Linux servers as soon as Fedora 21. We live in exciting times!

The slides from my talk are below, and on github as well. I hope you enjoy them. Feel free to share your comments at julien[at]linuxwall.info.

Giorgio Maones/http(:\/\/(?:noscript|flashgot|hackademix)\.net)/https\1/

I'm glad to announce noscript.net, flashgot.net and hackademix.net have been finally switched to full, permanent TLS with HSTS

Please do expect a smörgåsbord of bugs and bunny funny stuff :)

Andreas GalYahoo and Mozilla Form Strategic Partnership

SUNNYVALE, Calif. and MOUNTAIN VIEW, Calif., Wednesday, November 19, 2014 – Yahoo Inc. (NASDAQ: YHOO) and Mozilla Corporation today announced a strategic five-year partnership that makes Yahoo the default search experience for Firefox in the United States on mobile and desktop. The agreement also provides a framework for exploring future product integrations and distribution opportunities to other markets.

The deal represents the most significant partnership for Yahoo in five years. As part of this partnership, Yahoo will introduce an enhanced search experience for U.S. Firefox users which is scheduled to launch in December 2014. It features a clean, modern and immersive design that reflects input from the Mozilla team.

“We’re thrilled to partner with Mozilla. Mozilla is an inspirational industry leader who puts users first and focuses on building forward-leaning, compelling experiences. We’re so proud that they’ve chosen us as their long-term partner in search, and I can’t wait to see what innovations we build together,” said Marissa Mayer, Yahoo CEO. “At Yahoo, we believe deeply in search – it’s an area of investment, opportunity and growth for us. This partnership helps to expand our reach in search and also gives us an opportunity to work closely with Mozilla to find ways to innovate more broadly in search, communications, and digital content.”

“Search is a core part of the online experience for everyone, with Firefox users alone searching the Web more than 100 billion times per year globally,” said Chris Beard, Mozilla CEO. “Our new search strategy doubles down on our commitment to make Firefox a browser for everyone, with more choice and opportunity for innovation. We are excited to partner with Yahoo to bring a new, re-imagined Yahoo search experience to Firefox users in the U.S. featuring the best of the Web, and to explore new innovative search and content experiences together.”

To learn more about this, please visit the Yahoo Corporate Tumblr and the Mozilla blog.

About Yahoo

Yahoo is focused on making the world’s daily habits inspiring and entertaining. By creating highly personalized experiences for our users, we keep people connected to what matters most to them, across devices and around the world. In turn, we create value for advertisers by connecting them with the audiences that build their businesses. Yahoo is headquartered in Sunnyvale, California, and has offices located throughout the Americas, Asia Pacific (APAC) and the Europe, Middle East and Africa (EMEA) regions. For more information, visit the pressroom (pressroom.yahoo.net) or the Company’s blog (yahoo.tumblr.com).

About Mozilla

Mozilla has been a pioneer and advocate for the Web for more than a decade. We create and promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to discover, experience and connect to the Web on computers, tablets and mobile phones. For more information please visit https://www.mozilla.com/press

Yahoo is a registered trademark of Yahoo! Inc. All other names are trademarks and/or registered trademarks of their respective owners.


Filed under: Mozilla

Monty MontgomeryDaala Demo 6: Perceptual Vector Quantization (by J.M. Valin)

Jean-Marc has finished the sixth Daala demo page, this one about PVQ, the foundation of our encoding scheme in both Daala and Opus.

(I suppose this also means we've finally settled on what the acronym 'PVQ' stands for: Perceptual Vector Quantization. It's based on, and expanded from, an older technique called Pyramid Vector Quantization, and we'd kept using 'PVQ' for years even though our encoding space was actually spherical. I'd suggested we call it 'Pspherical Vector Quantization' with a silent P so that we could keep the acronym, and that name appears in some of my slide decks. Don't get confused, it's all the same thing!)

David DahlEncryptr: ‘zero knowledge’ essential information storage

Encryptr is one of the first “in production” applications built on top of Crypton. Encryptr can store short pieces of text like passwords, credit card numbers and other random pieces of information privately, in the cloud. Since it uses Crypton, all data that is saved to the server is encrypted first, making even a server compromise an exercise in futility for the attacker.

A key feature is that you can run Encryptr on your phone as well as your desktop and all data is available in each place immediately. Have a look:


The author of Encryptr, my colleague Tommy @therealdevgeeks, has recently blogged about building Encryptr. I hope you give it a try and send him feedback through the Github project page.


Christian HeilmannSimple things: styling ordered lists

This blog started as a scratch pad of simple solutions to problems I encountered. So why not go back to basics?

It is pretty easy to get an ordered list into a document. All you have to do is add an OL element with LI child elements:

<ol>
  <li>Collect underpants</li>
  <li>???</li>
  <li>Profit</li>
</ol>

But what if you want to style the text differently from the numbers? What if you don’t like that they end with a full stop? The generated numbers of the OL are part of that dark magic browsers do for us (something we are working on dragging into the sunlight with Shadow DOM).

In order to make those more styleable, in the past you had to add another element to get a hook:

<ol class="oldschool">
  <li><span>Collect underpants</span></li>
  <li><span>???</span></li>
  <li><span>Profit</span></li>
</ol>

.oldschool li {
  color: green;
}
.oldschool span {
  color: lime;
}

Which is kind of a terrible hack and doesn’t scale well, as you never know who might edit your list. With newer browsers we have a better way of doing that using CSS counters. The browser support is ridiculously good, so there should be no excuse for us not to use them:

counters

Using counter, you keep the HTML structure:

<ol class="counter">
  <li>Collect underpants</li>
  <li>???</li>
  <li>Profit</li>
</ol>

You then reset the counter for each of the lists with this class:

.counter { 
  counter-reset: list;
}

This means each list will start at 1 and not carry on counting through the document tree. You then get rid of the list style and style the list item as you want to. In this case we give it a colour and position it relatively. This allows us to position other, new content in there and contain it to the list item:

.counter li {
  list-style: none;
  position: relative;
  color: lime;
}

Once you’ve hidden the normal numbering with list-style: none; you can create your own numbers using counter and generated CSS content:

.counter li::before {
  counter-increment: list;
  content: counter(list) '.';
  position: absolute;
  top: 0px;
  left: -1.2em;
  color: green;
}

If you want to remove the full stop, all you need to do is remove it in the CSS. You now have full styling control over these numbers; for example, you can animate them, moving from one colour to another and scaling them up a bit on hover:

demo animation of the effect

.animated li::before {
  transition: 0.5s;
  color: green;
}
.animated li:hover::before {
  color: white;
  transform: scale(1.5);
}

Counters allow for a lot of different types of numbering. You can, for example, add a leading zero by using counter(list, decimal-leading-zero), use Roman numerals with counter(list, lower-roman) or even Greek ones with counter(list, lower-greek).
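As a minimal sketch of those variants (the .leading-zero, .roman and .greek class names are just for illustration), the only thing that changes is the counter style used in the content property:

.leading-zero li::before {
  counter-increment: list;
  content: counter(list, decimal-leading-zero) '.'; /* 01. 02. 03. */
}
.roman li::before {
  counter-increment: list;
  content: counter(list, lower-roman) '.'; /* i. ii. iii. */
}
.greek li::before {
  counter-increment: list;
  content: counter(list, lower-greek) '.'; /* α. β. γ. */
}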

If you want to see all of that in action, check out this Fiddle:

Pretty simple, and quite powerful. Here are some more places to read up on this:

Jonathan WattConverting Mozilla's SVG implementation to Moz2D - part 2

This is part 2 of a pair of posts describing my work to convert Mozilla's SVG implementation to directly use Moz2D. Part 1 provided some background information and details about the process. This post will discuss the performance benefits of the conversion of the SVG code and future work.

Benefits

For the most part the performance improvements from the conversion to Moz2D were gradual; as code was incrementally converted, little by little gfxContext overhead was avoided. On doing an audit of our open SVG performance bugs, it seems that painting performance is no longer one of the reasons that we perform poorly, except when we use cairo-backed DrawTargets (Linux, Windows XP and other Windows versions with blacklisted drivers), and with the exception of one bug that needs further investigation. (See below for the issues that still do cause SVG performance problems.)

Besides the incremental improvements, there have been a couple of interesting perf bumps that are worth mentioning.

The biggest perf bump by far came when I converted the code that does the actual filling and stroking of SVG geometry to directly use a DrawTarget. The time taken to render this map then dropped from about 15s to 1-2s on my Mac. On the same machine Chrome Canary shows a blank window for about 5s, and then takes a further 20s to render. Now, to be honest, this improvement will be down to something pathological that has been removed rather than being down to avoiding Thebes overhead. (I haven't got to the bottom of exactly what that was yet.) The DrawTarget object being drawn to is ultimately the same object, and Thebes overhead isn't likely to be more than a few percent of any time spent in this code. Nevertheless, it's still a welcome win.

Another perf bump that came from the Moz2D conversion was that it enabled us to cache path objects. When using Thebes, paths are built up using gfxContext API calls and the consumer never gets to touch the resulting path. This prevents the consumer from keeping hold of the path and reusing it in future. This can be a disadvantage when the path is reused frequently, especially when D2D is being used where path creation is relatively expensive. Converting to Moz2D has allowed the SVG code to hold on to the path objects that it creates and reuse them. (For example, in addition to their obvious use during rasterization, paths might be reused for bounds calculations (think invalidation areas, objectBoundingBox content, getBBox() calls) and hit-testing.) Caching paths made us noticeably more responsive on this cool data visualization (temporarily mirrored here while the site is down) when mousing over the table rows, and gave us a +25% boost on this NYT article, for example.
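To illustrate the idea (this is a sketch, not the actual SVG code; the class and method names come from mozilla/gfx/2D.h, but the helper functions, coordinates and colours are hypothetical and exact signatures vary between revisions), building a path once with Moz2D and reusing it looks roughly like this:

#include "mozilla/gfx/2D.h"
#include "mozilla/RefPtr.h"

using namespace mozilla::gfx;

// Build the path once and keep hold of it; something the old
// gfxContext API never allowed, since the path lived inside the context.
static RefPtr<Path> BuildTrianglePath(DrawTarget* aDrawTarget)
{
  RefPtr<PathBuilder> builder = aDrawTarget->CreatePathBuilder();
  builder->MoveTo(Point(10, 10));
  builder->LineTo(Point(90, 10));
  builder->LineTo(Point(50, 80));
  builder->Close();
  return builder->Finish();
}

static void PaintCachedPath(DrawTarget* aDrawTarget, Path* aPath)
{
  // The cached path can be reused for rasterization...
  aDrawTarget->Fill(aPath, ColorPattern(Color(0.f, 0.5f, 0.f, 1.f)));
  // ...and for bounds calculations (invalidation areas, getBBox(), hit-testing).
  Rect bounds = aPath->GetBounds();
  (void)bounds; // a real caller would feed this into invalidation etc.
}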

For those of you that are interested in Talos, I did take a look at the SVG test data, but the unfortunately frequent up-and-down of unrelated regressions and wins makes it impossible to use that to show any overall impact of Moz2D conversion on the Talos tests. (Since the beginning of the year the times on Windows have improved slightly while on Mac they have regressed slightly.) The incremental nature of most of the work also unfortunately meant that the impact of individual patches couldn't usually be distinguished from the noise in Talos' results. One notable exception was the change to make SVG geometry use a path object directly which resulted in an improvement in the region of 6% for the svg_opacity suite on Windows 7 and 8.

Other than the performance benefits, several parts of the SVG implementation that were pretty messy and hard to get into and debug have become a lot more approachable. This has already allowed me to fix various SVG bugs that would otherwise have taken a lot longer to work out, and I hope it makes the code easier to approach for devs who aren't so familiar with it.

One final note on performance for any of you who will do your own testing to compare builds: the enabling of e10s and tiled layers has caused significant changes in performance characteristics, so you might want to turn those off.

Future SVG work

As I noted above, there are still SVG performance issues unrelated to graphics speed. There are three sources of significant SVG performance issues that can make Mozilla perform poorly on SVG relative to other implementations: our lack of hardware acceleration of SVG filters; display list overhead dwarfing painting time on SVGs that contain huge numbers of elements (display lists being an implementation detail, and one that gave us very large wins in many other cases); and a whole bunch of "strange" bugs, which I expect are related to our layers infrastructure, that cause us to over-invalidate (and thus do painting work when we shouldn't need to).

Currently these three issues are not on a schedule, but as other higher-priority Mozilla work gets ticked off I expect we'll add them.

Future Moz2D work

The performance benefits from the Moz2D conversion on the SVG code do seem to have been positive enough that I expect that we will continue converting the rest of layout in the future. As usual, it will all depend on relative priorities though.

One thing that we should do is audit all the code that creates DrawTargets to check for backend type compatibility. Mixing hardware and software backed DrawTargets when we don't need to can cause us to unwittingly be taking big performance hits due to readback from and/or upload to the GPU. I fixed several instances of mismatch that I happened to notice during the conversion work, and in one case accidentally introduced one which fortunately was caught because it caused a 10-25% regression in a specific Talos test. We know that we still have outstanding bugs on this (such as bug 944571) and I'm sure there are a bunch of cases that we're unaware of.
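As an illustration of the kind of mismatch such an audit looks for (a minimal sketch; DrawTarget::GetBackendType() is real Moz2D API, but the helper itself is hypothetical):

#include "mozilla/gfx/2D.h"

using namespace mozilla::gfx;

// Hypothetical audit helper: flag places where a source and destination
// DrawTarget use different backends, since mixing a software backend
// (cairo/Skia) with a hardware one (D2D) forces costly readback from
// and/or upload to the GPU.
static bool BackendsMatch(DrawTarget* aSource, DrawTarget* aDest)
{
  return aSource->GetBackendType() == aDest->GetBackendType();
}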

I mentioned above that painting performance is still a significant issue on machines that fall back to using cairo backed DrawTargets. I believe that the Graphics team's plan to solve this is to finish the Skia backend for Moz2D and use that on the platforms that don't support D2D.

There are a few things that need to be added to Moz2D before we can completely get rid of gfxContext. The main thing we're missing is a push-group API on DrawTarget. This is the main reason that gfxContext actually wraps a stack of DrawTargets, which has all sorts of irritating fallout. Most annoyingly, it makes it hazardous to set clip paths or transforms directly on DrawTargets that may be accessed via a wrapping gfxContext before the DrawTarget's clip stack and transform have been restored, and it is why I had to continue passing gfxContexts to a lot of code that now only paints directly via the DrawTarget.

The only Moz2D design decision that I've found myself to be somewhat unhappy with is the decision to make patterns relative to user-space. This is what most other hardware accelerated libraries do, but I don't think it's a good fit for 2D browser rendering. Typically crisp rendering is very important to web content, so we render patterns assuming a specific user-space to device-space transform and device space pixel alignment. To maintain crisp rendering we have to make sure that patterns are used with the device-space transform that they were created for, and having to do this manually can be irksome. Anyway, it's a small detail, but something I'll be discussing with the Graphics guys when I see them face-to-face in a couple of weeks.

Modulo the two issues above (and all the changes that I and others had made to it over the last year) I've found the Moz2D API to be a pleasure to work with and I feel the SVG code is better performing and a lot cleaner for converting to it. Well done Graphics team!

Jonathan WattConverting Mozilla's SVG implementation to Moz2D - part 1

One of my main work items this year was the conversion of the graphics portions of Mozilla's SVG implementation to directly use Moz2D APIs instead of using the old gfxContext/gfxASurface Thebes APIs. This pair of posts will provide some information on that work. This post will give some background and information on the conversion process, while part 2 will provide some discussion about the benefits of the work and what steps we might want to carry out next.

For background on why Mozilla is building Moz2D (formerly called Azure) and how it can improve Mozilla's performance see some of the earlier posts by Joe, Bas and Robert.

Early Moz2D development

When Moz2D was first being put together it was developed and tested as an alternative rendering backend for Mozilla's implementation of HTML <canvas>. Canvas was chosen as the initial testbed because its drawing is largely self-contained, because it requires a relatively small number of features from any rendering backend, and because we knew from profiling that it was being particularly impacted by Thebes/cairo overhead.

As Moz2D started to become more stable, Thebes' gfxContext class was extended to allow it to wrap a Moz2D DrawTarget (prior to that it was backed only by an instance of a Thebes gfxASurface subclass, in turn backed by a cairo_surface_t). This might seem a bit strange since, after all, Moz2D is supposed to replace Thebes, not be wrapped by it, adding yet another layer of abstraction and overhead. However, it was an important step to allow the Graphics team to start testing Moz2D on Mozilla's more complicated, non-canvas, rendering scenarios. It allowed many classes of Moz2D bugs to be worked out, and missing Moz2D features to be filled in, before beginning a larger effort to convert the masses of non-canvas rendering code to Moz2D.

In order to switch any of the large number of instances of gfxContext to be backed by a DrawTarget, any code that might encounter that gfxContext and try to get a gfxASurface from it had to be updated to handle DrawTargets too. For example, lots of forks in the code had to be added to BasicLayerManager, and gfxFont required a new GlyphBufferAzure class to be written. As this work progressed some instances of Thebes gfxContexts were permanently flipped to being backed by a Moz2D DrawTarget, helping keep working Moz2D code paths from regressing.

SVG, the next Guinea pig

Towards the end of 2013 it was felt that Moz2D was sufficiently ready to start thinking about converting Mozilla's layout code to use Moz2D directly and eliminate its use of gfxContext API. (The layout code being the code that decides where and how most things are placed on the screen, and by far the biggest consumer of the graphics code.) Before committing a lot of engineering time and resources to a large scale conversion, Jet wanted to convert a specific part of the layout code to ensure that Moz2D could meet its needs and determine what performance benefits it could provide to layout. The SVG code was chosen for this purpose since it was considered to be the most complicated to convert (if Moz2D could work for SVG, it could work for the rest of layout).

Stage 1 - Converting all gfxContexts to wrap a DrawTarget

After drawing up a rough list of the work to convert the SVG code to Moz2D I got stuck in. The initial plan was to add code paths to the SVG code to check for and extract DrawTargets from gfxContexts that were passed in (if the gfxContext was backed by one) and operate directly on the DrawTarget in that case. (At some future point the Thebes forks could then be removed.) It soon became apparent that these forks were often not how we would want the code to be structured on completion of Moz2D conversion though. To leverage Moz2D more effectively I frequently found myself wanting to refactor the code quite substantially, and in ways that were not compatible with the existing Thebes code paths. Rather than spending months writing suboptimal Moz2D code paths only to have to rewrite things again when we got rid of the Thebes paths I decided to save time in the long run and first make sure that any gfxContexts that were passed into SVG code would be wrapping a DrawTarget. That way maintaining Thebes forks would be unnecessary.

It wasn't trivial to determine which gfxContexts might end up being passed to SVG code. The complexity of the code paths and the virtually limitless permutations in which Web content can be combined meant that I only identified about a dozen gfxContexts that could not end up in SVG code. As a result I ended up working to convert all gfxContexts in the Mozilla code. (The small amount of additional work to convert the instances that couldn't end up in SVG code allowed us to reduce a whole bunch of code complexity (and remove a lot of then dead code) and simplified things for other devs working with Thebes/Moz2D.)

Ensuring that all the gfxContexts that might be passed to SVG code would be backed by a DrawTarget turned out to be quite a task. I started this work when relatively few gfxContexts had been converted to wrap a DrawTarget, so unsurprisingly things were a bit rough, and I tripped over several Moz2D bugs at this point. Mostly, though, the headaches were caused by the amount of code that assumed a gfxContext wrapped a gfxASurface/cairo_surface_t/platform library object and could provide it on request, possibly getting those objects from, or passing them to, seemingly far corners of the Mozilla code. Particularly challenging was converting the image code, where the sources and destinations of gfxASurfaces turned out to be particularly far reaching, requiring the code to be converted incrementally in 34 separate bugs. Doing this without temporary performance regressions was tricky.

Besides preparing the ground for the SVG conversion, this work resulted in a decent number of performance improvements in its own right.

Stage 2 - Converting the SVG code to Moz2D

Converting the SVG code to Moz2D was a lot more than a simple case of switching calls from one graphics API to another. The stateful context provided by an API like Thebes or cairo allows consumer code to set context state (for example, fill pattern or anti-alias mode) at points in the code that can seem far removed from other code that takes an action (for example, filling a path) relying on that state having been set. The SVG code made use of this a lot, since in many cases (for example, when passing things through for callbacks) it simplified the code to pass only a context rather than a context and some state to set.

This wouldn't have been all that bad if it wasn't for another fundamental difference between Thebes/cairo and Moz2D: in Moz2D, paths and patterns are relative to user-space, whereas in Thebes/cairo they are relative to device-space. Whereas with Thebes we could set a path/pattern and then change the transform before drawing (perhaps, say, to apply a clip in a different space) and the position of the path/pattern would be unaffected, with Moz2D such a transform change would change (and thus break) the rendering. This, incidentally, was why the SVG code was expected to be the hardest area to switch to Moz2D. Partly for historic reasons, and partly because some of the features that SVG supports push it that way, the SVG code did a lot of setting state, changing transforms, setting some more state and then drawing. Often the complexity of the code made it difficult to figure out which code could be setting relevant state before a transform change, requiring more involved refactoring. On the plus side, sorting this out has made parts of the code significantly easier to understand, and is something I've wanted to find the time to do for years.

Benefits and next steps

To continue reading about the performance benefits of the conversion of the SVG code and some possible next steps continue to part 2.

Giorgio MaoneAvast, you're kidd... killing me - said NoScript >:(

If NoScript keeps disappearing from your Firefox, Avast! Antivirus is likely the culprit.
It's gone berserk and is mass-deleting add-ons without warning.
I'm currently receiving tons of reports from confused and angry users.
If the antivirus is dead (as I've been preaching for 7 years), looks like it's not dead enough, yet.

Christian HeilmannWhat I am looking for in a guest writer on this blog

Simple: go try guest writing someplace else. This is my personal blog and if I am interested in something, I come to you and do it interview style in order to point to your work or showcase something amazingly cool that you have done.

anteater-sound-of-music

Please, please, please with cherry on top, stop sending me emails like this one:

Hi,

I’m {NAME}, a freelance writer/education consultant. I found “Christian Heilmann” on a Google search and thought I would contact you to see if you would like to work with me. I own a website on Job Application Service that I’m currently promoting for myself. I thought we could benefit each other somehow? If you are interested, I’d be happy to write a very high-quality article for your site and get a couple permanent links from it? While your website is benefiting from my high-quality article, I’m getting links from your site, making this proposition mutually beneficial.
Shall I write an article that matches your niche and send it across for your review or do you need me to write on a particular topic that interests you and your readers, I’m open to any topic, thoughts please?
If this does not interest you, I am sorry to have bothered you. Have a good day! If this does great I hope we can build a long-term business relationship together! If you wish to have a chat on the phone please let me know your phone number and when a good time to call is :) If you’d like, I can share samples with you.
Regards,
{FIRSTNAME}

I am very happy you know how to enter a name in Google and find the blog of that person. That’s a good start. Nobody got hurt, you didn’t overdo it with the research or spend too much effort before asking for my phone number and pointing out just how much you would get out of this “mutually beneficial relationship”. Seriously, I would love to be a fly on the wall when you try dating.

I’ve worked hard on this blog, that’s why it has some success or is at least found. Go work on yours yourself. That’s how it should be. A blog is you. Just like this one is mine.

Gervase MarkhamBMO show_bug Load Times 2x Faster Since January

The load time for viewing bugs on bugzilla.mozilla.org has got 2x faster since January. See this tweet for graphical evidence.

If you are looking for a direction in which to send your bouquets, glob is your man.

David Rajchenbach TellerThe Future of Promise

If you are writing JavaScript in mozilla-central or in an add-on, or if you are writing WebIDL code, by now, you have probably made use of Promise. You may even have noticed that we now have several implementations of Promise in mozilla-central, and that things are moving fast, and sometimes breaking.
At the moment, we have two active implementations of Promise: Promise.jsm and DOM Promise (as well as a little code using an older, long-deprecated implementation of Promise).
This is somewhat confusing, but the good news is that we are working hard at making it simpler and moving everything to DOM Promise.

General Overview

Many components of mozilla-central have been using Promise for several years, way before a standard was adopted, or even discussed. So we had to come up with our implementation(s) of Promise. These implementations were progressively folded into Promise.jsm, which is now used pervasively in mozilla-central and add-ons.
In parallel, Promise was specified, submitted for standardisation, implemented in Firefox, and finally standardised. This second implementation is the one we call DOM Promise, and it is starting to be used in many places on the web.
Having two implementations of Promise with the same feature set doesn’t make sense. Fortunately, Promise.jsm was designed to match the API of Promise that we believed would be standardised, and was progressively refactored and extended to follow these developments, so both APIs are almost identical.
Our objective is to move entirely to DOM Promise. There are still a few things that need to happen before this is possible, but we are getting close. I hope that we can get there by the end of 2014.

Missing pieces

Debugging and testing

At the moment, Promise.jsm is much better than DOM Promise in two aspects:
  • it is easier to inspect a promise from Promise.jsm for debugging purposes (not anymore, things have been moving fast while I was writing this blog entry);
  • Promise.jsm integrates nicely in the test suite, to make sure that uncaught errors are reported and cause test failures.
In both topics, we are hard at work bringing DOM Promise to feature parity with Promise.jsm and then some (bug 989960, bug 1083361). Most of the patches are in the pipeline already.

API differences

  • Promise.jsm offers an additional function Promise.defer, which didn’t make it to standardization.
This function may easily be written on top of DOM Promise (see the sketch below), so this is not a hard blocker. We are going to add this function to a module `PromiseUtils.jsm`.
  • Also, there is a bug in DOM Promise that gives it slightly unexpected behavior in a few edge cases. This should not hit developers who use DOM Promise as expected, but it might surprise people who know the exact scheduling algorithm and expect it to be consistent between Promise.jsm and DOM Promise.

Oh, wait, that’s fixed already.
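For the curious, a deferred object of the kind Promise.defer returns can be sketched on top of DOM Promise in a few lines (an illustration only, not the actual PromiseUtils.jsm code):

function defer() {
  let deferred = {};
  deferred.promise = new Promise((resolve, reject) => {
    // Expose the resolution functions so callers can settle the promise
    // from outside the executor, just as Promise.defer() allowed.
    deferred.resolve = resolve;
    deferred.reject = reject;
  });
  return deferred;
}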

Wrapping it up

Once we have done all of this, we will be able to replace Promise.jsm with an empty shell that defers all implementations to DOM Promise. Eventually, we will deprecate and remove this module.

As a developer, what should I do?

For the moment, you should keep using Promise.jsm, because of the better testing/debugging support. However, please do not use Promise.defer. Rather, use PromiseUtils.defer, which is strictly equivalent but is not going away.
We will inform everyone once DOM Promise becomes the right choice for everything.
If your code doesn’t use Promise.defer, migrating to DOM Promise should be as simple as removing the line that imports Promise.jsm in your module.
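By way of illustration, here is a minimal sketch of what that migration can look like in chrome-privileged code (the resource://gre/modules/... paths exist in mozilla-central; callbackBasedAPI and the helper functions are hypothetical):

// Before: importing Promise.jsm just for its non-standard defer():
//   Components.utils.import("resource://gre/modules/Promise.jsm");
//   let deferred = Promise.defer();

// After: defer() lives in PromiseUtils.jsm instead...
Components.utils.import("resource://gre/modules/PromiseUtils.jsm");

function readValueLater(callbackBasedAPI) {
  // callbackBasedAPI is a hypothetical old-style API that takes a callback.
  let deferred = PromiseUtils.defer();
  callbackBasedAPI(value => deferred.resolve(value));
  return deferred.promise;
}

// ...while code that never used Promise.defer can rely on the built-in
// DOM Promise constructor, with no import at all:
function readValueLater2(callbackBasedAPI) {
  return new Promise(resolve => callbackBasedAPI(resolve));
}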

Doug BelshawNative apps, the open web, and web literacy

In a recent blog post, John Gruber argues that native apps are part of the web. This was in response to a WSJ article in which Christopher Mims stated his belief that the web is dying; apps are killing it. In this post, I want to explore the relationship between native apps and web literacy. This is important as we work towards a new version of Mozilla’s Web Literacy Map. It’s something that I explored preliminarily in a post earlier this year entitled What exactly is ‘the mobile web’? (and what does it mean for web literacy?). This, again, was in response to Gruber.

Native app

This blog focuses on new literacies, so I’ll not be diving too much into technical specifications, etc. I’m defining web literacy in the same way as we do with the Web Literacy Map v1.1: ‘the skills and competencies required to read, write and participate on the web’. If the main question we’re considering is are native apps part of the web? then the follow-up question is and what does this mean for web literacy?

Defining our terms

First of all, let’s make sure we’re clear about what we’re talking about here. It’s worth saying right away that 'app’ is almost always used as a shorthand for 'mobile app’. These apps are usually divided into three categories:

  1. Native app
  2. Hybrid app
  3. Web app

From this list, it’s probably easiest to describe a web app:

A web application or web app is any software that runs in a web browser. It is created in a browser-supported programming language (such as the combination of JavaScript, HTML and CSS) and relies on a web browser to render the application. (Wikipedia)

It’s trickier to define a native app, but the essence can be seen most concretely through Apple’s ecosystem, which includes iOS and the App Store. Developers use a specific programming language and work within constraints set by the owner of the ecosystem. In return, their native apps get privileged access to all of the features of the mobile device.

A hybrid app is a native app that serves as a 'shell’ or 'wrapper’ for a web app. This is usually done for the sake of convenience and, in some cases, speed.

The boundary between a native app and a web app used to be much more clear and distinct. However, the lines are increasingly blurred. For example:

  • APK files (i.e. native apps) can be downloaded from the web and installed on Android devices.
  • Developments as part of Firefox OS mean that web technologies can securely access low-level functionality of mobile devices (e.g. camera, GPS, accelerometer).
  • The specifications for HTML5 and CSS3 allow beautiful and engaging web apps to be used offline.

Web literacy and native apps

As a result of all this, it’s probably easier these days to differentiate between a native app and a web app by talking about ecosystems and silos. Understood this way, a native app is one that is built specifically using the technologies of, and subject to the constraints of, a particular ecosystem. So a developer creating an app for Apple’s App Store would have to go through a different process and use a different programming language than if they were creating one for Google’s Play Store. And so on.

Does this mean that we need to talk of a separate 'literacy’ for each ecosystem? Should we define 'Google literacy’ as the skills and competencies required to read, write and participate in Google’s ecosystem? I don’t think so. While there may be variations in the way things are done within the different ecosystems, these procedural elements do not constitute 'literacy’.

What we’re aiming for with the Web Literacy Map is a holistic overview of the skills and competencies people require when using the web. I think at this juncture we’ve got a couple of options. The first would be to define ‘the web’ more loosely to really mean ‘the internet’.

This is John Gruber’s preferred option. He thinks we should focus less on web browsers (i.e. HTML) and more on the connections (i.e. HTTP). For example, in a 2010 talk he pointed out a difference between 'web apps’ and 'browser apps’. His argument rested on a technical point, which he illustrated with an example. When a user scrolls through their timeline using the Twitter app for iPhone, they’re not using a web browser, but they are using HTTP technologies. This, said Gruber, means that ecosystems such as Apple’s and the web are not in opposition to one another.

While this is technically correct, it’s a red herring. HTML does matter because the important thing here is the open web. Check out Gruber’s sleight of hand in this closing paragraph:

Arguments about “open” and “closed” often devolve into unresolvable cross-talk where the two sides have different definitions of what open and closed really mean. But the weird thing about a truly open platform is that its openness allows closed things to be built on top of it. In broad strokes, that’s why GNU/GPL software isn’t “open” in the way that BSD software is (and why Richard Stallman outright rejects the term “open source”). If you expand your view of “the web” from merely that which renders inside the confines of a web browser to instead encompass all network traffic sent over HTTP/S, the explosive growth of native mobile apps is just another stage in the growth of the web. Far from killing it, native apps have made the open web even stronger.

I think Gruber needs to read up on enclosure and the Commons. To use a 16th-century English agricultural metaphor, the important thing isn’t that the grass is growing in the field, it’s that it’s been fenced off and people are excluded.

A way forward

A second approach is to double down on what makes the web different and unique. Mozilla’s mission is to promote openness, innovation & opportunity on the web, and the Web Literacy Map is a platform for doing this. Even if we don’t tie development of the Web Literacy Map explicitly to the Mozilla manifesto, it’s still a key consideration. Therefore, when we’re talking about ‘web literacy’ it’s probably more accurate to define it as ‘the skills and competencies required to read, write and participate on the open web’.

What do we mean by the 'open web’? While Tantek Çelik approaches it from a technical standpoint, I like Brad Neuberg’s (2008) focus on the open web as a series of philosophies:

Decentralization - Rather than controlled by one entity or centralized, the web is decentralized – anyone can create a web site or web service. Browsers can work with millions of entities, rather than tying into one location. It’s not the Google or Microsoft Web, but rather simply the web, an open system that anyone can plug into and create information at the end-points.
Transparency - An Open Web should have transparency at all levels. This includes being able to view the source of web pages; having human-readable network identifiers, such as URLs; and having clear network entry points, such as HTTP and REST exposes.
Hackability - It should be easy to lash together and script the different portions of this web. MySpace, for example, allows users to embed components from all over the web; Google’s AdSense, another example, allows ads to be integrated onto arbitrary web pages. What would you like to hack together, using the web as a base?
Openness - Whether the protocols used are de facto or de-jure, they should either be documented with open specifications or open code. Any entity should be able to implement these standards or use this code to hook into the system, without penalty of patents, copyright of standards, etc.
From Gift Economies to Free Markets - The Open Web should support extreme gift economies, such as open source and Wikis, all the way to traditional free market entities, such as Amazon.com and Google. I call this Freedom of Social Forms; the tent is big enough to support many forms of social and economic organization, including ones we haven’t imagined yet.
Third-Party Integration - At all layers of the system third-parties should be able to hook into the system, whether creating web browsers, web servers, web services, etc.
Third-Party Innovation - Parties should be able to innovate and create without asking the powers-that-be for permission.
Civil Society and Discourse - An open web promotes both many-to-many and one-to-many communication, allowing for millions of conversations by millions of people, across a range of conversation modalities.
Two-Way Communication - An Open Web should allow anyone to assume three different roles: Readers, Writers, and Code Hackers. Readers read content, Writers write content, and Code Hackers hack new network services that empower the first two roles.
End-User Usability and Integration - One of the original insights of the web was to bind all of this together with an easy to use web browser that was integrated for ease of use, despite the highly decentralized nature of the web. The Open Web should continue to empower the mainstream rather than the tech elite with easy to use next generation browsers that are highly usable and integrated despite having an open infrastructure. Open should not mean hard to use. Why can’t we have the design brilliance of Steve Jobs coupled with the geek openness of Steve Wozniak? Making them an either/or is a false dichotomy.

Conclusion

The Web Literacy Map describes the skills and competencies required to read, write and participate on the open web. But it’s also prescriptive. It’s a way to develop an open attitude towards the world:

Open is a willingness to share, not only resources, but processes, ideas, thoughts, ways of thinking and operating. Open means working in spaces and places that are transparent and allow others to see what you are doing and how you are doing it, giving rise to opportunities for people who could help you to connect with you, jump in and offer that help. And where you can reciprocate and do the same.

Native apps can militate against the kind of reciprocity required for an open web. In many ways, it’s the 21st-century enclosure of the commons. I believe that web literacy, as defined and promoted through the Web Literacy Map, should not consider native apps part of the open web. Such apps may be built on top of web technologies, and they may link to the open web, but native apps are something qualitatively different. Those who want to explore what reading, writing and participating means in closed ecosystems have other vocabularies – provided by media literacy, information literacy, and digital literacy – with which to do so.


Comments? Questions? Direct them here: doug@mozillafoundation.org or discuss this post in the #TeachTheWeb discussion forum