Robert Nyman: Leaving Mozilla

This is a really hard blog post to write, but I need to share this with you: I’m leaving Mozilla.

It started in 2009

At the end of 2008 I had started learning to code extensions for Firefox, and in March 2009 I went to Berlin to give my first international presentation at an add-ons workshop.

It was amazing! The rush of being on stage, teaching people, learning from them; helping, discussing and having a great time! I really loved it, and at that time I felt like I had found home and the work I was supposed to be doing.

The following years I was part of the Mozilla community, speaking at more workshops and attending MozCamps. In 2011, a position came up as a Technical Evangelist and I joined Mozilla full time.

What has happened since

Since I started I’ve gotten to meet numerous fantastic and inspiring people, both employees and people in the great Mozilla community. I’ve traveled extensively and became the most well-travelled speaker on Lanyrd – the count is now up to 32 countries.

I’ve also written more in detail about Why I travel and about working with developer relations and Why I Do What I Do. There’s also lots more in the Travel category.


I’ve worked on a lot of things at Mozilla over the years, and a couple of the things I’m really proud of are having run the Mozilla Hacks blog for the last two years and having published 350 quality posts in that time! I also took the initiative to launch feedback channels around the Firefox Developer Tools and Open Web Apps, and we’ve gotten great feedback from developers there.

Moving on

Alas, it’s time to move on. I’ve always preached to developers that they should strive for more, whether that’s a new position in their current company or changing jobs, to ensure they keep evolving and don’t stagnate. And I feel I really have to follow my own advice in this regard.

I’ve gotten to learn and experience a lot of things at Mozilla and for that I’m eternally grateful.

Mozilla is going through a number of challenges at the moment, and to be honest, it’s my belief that upper management needs to acknowledge and address these.

I believe Mozilla represents a great cause, and I hope they can tend to what they’re facing and come out stronger. I believe the Open Web and people need Mozilla, and I wish it, and all the great people I know there, all the best.

What happens next?

I will be starting a new job, and I’ll tell you about it tomorrow, Thursday. For now, I’ll just let this sink in and then I’ll talk more about it.

If you have any questions or thoughts, please let me know here in the comments or e-mail me at robert [at] robertnyman [dot] com.

I’m always here for you. Thanks for reading.

Doug Belshaw: Toward The Development of a Web Literacy Map: Exploring, Building, and Connecting Online

The title of this post is also the title of a presentation I’m giving at the Literacy Research Association conference next week. The conference has the theme ‘The Dialogic Construction of Literacies’ – so this session is a great fit. It’s been organised by Ian O'Byrne and Greg McVerry, both researchers and Mozilla contributors.


I’m cutting short my participation in the Mozilla work week in Portland, Oregon next week to fly to present at this conference. This is not only because I think it’s important to honour prior commitments, but because I want to encourage more literacy researchers to get involved in developing the Web Literacy Map.

I’ve drafted the talk in the style in which I’d deliver it. The idea isn’t to read it, but to use this to ensure that my presentation is backed up by slides, rather than vice-versa. I’ll then craft speaker notes to ensure I approximate what’s written here.

Click here to read the text of the draft presentation

I’d very much appreciate your feedback. Here are the specific things I’m looking for answers to:

  • Gaps - what have I missed?
  • Structure - does it ‘flow’?
  • Red flags - is there anything in there liable to cause problems/issues?

I’ve created a thread on the #TeachTheWeb discussion forum for your responses - or you can email me directly: doug@mozillafoundation.org

Thanks in advance! And remember, it doesn’t matter how new you are to the Web Literacy Map or the process of creating it. I’m interested in the views of newbies and veterans alike.

\(@ ̄∇ ̄@)/

Andrea Marchesini: Switchy 0.9 released

Breaking news: I finally had time to update Switchy to the latest addon-sdk 1.7, and now version 0.9.x is restart-less!

What is Switchy? Switchy is an add-on for Firefox to better manage several profiles. This add-on allows the user to create Firefox profiles, rename, delete and open them just with a click.

By using Switchy, you can open more profiles at the same time: an important feature for those who are concerned about security and privacy. For instance, you can have a separate profile for Facebook and other social networks while browsing other websites or have a separate profile for Google so you are not always logged in.

Don’t we have some similar addons? There are other similar add-ons, but Switchy has extra features. You can assign websites to be exclusive to particular profiles. This means that when, from profile X, I try to open one of the websites saved for a specific profile, Switchy allows me to “switch” to the correct profile with just one click. For example, if I open ‘Facebook’ from my default profile, Switchy immediately offers me the opportunity to open the correct profile where I am logged in on Facebook - which is nice!

What is new in version 0.9? Restart-less, and a new awesome UI for the Switchy panel.

I hope you enjoy it!

François Marier: Hiding network disconnections using an IRC bouncer

A bouncer can be a useful tool if you rely on IRC for team communication and instant messaging. The most common use of such a server is to be permanently connected to IRC and to buffer messages while your client is disconnected.

However, that's not what got me interested in this tool. I'm not looking for another place where messages accumulate and wait to be processed later. I'm much happier if people email me when I'm not around.

Instead, I wanted to do to irssi what mosh did to ssh clients: transparently handle and hide temporary disconnections. Here's how I set everything up.

Server setup

The first step is to install znc:

apt-get install znc

Make sure you get the 1.0 series (in jessie or trusty, not wheezy or precise) since it has much better multi-network support.

Then, as a non-root user, generate a self-signed TLS certificate for it:

openssl req -x509 -sha256 -newkey rsa:2048 -keyout znc.pem -nodes -out znc.crt -days 365

and make sure you use something like irc.example.com as the subject name; that is the hostname you will be connecting to from your IRC client.

Then install the certificate in the right place:

mkdir ~/.znc
mv znc.pem ~/.znc/
cat znc.crt >> ~/.znc/znc.pem

Once that's done, you're ready to create a config file for znc using the znc --makeconf command, again as the same non-root user:

  • create separate znc users if you have separate nicks on different networks
  • use your nickserv password as the server password for each network
  • enable ssl
  • say no to the chansaver and nickserv plugins

Finally, open the IRC port (tcp port 6697 by default) in your firewall:

iptables -A INPUT -p tcp --dport 6697 -j ACCEPT

Client setup (irssi)

On the client side, the official documentation covers a number of IRC clients, but the irssi page was quite sparse.

Here's what I used for the two networks I connect to (irc.oftc.net and irc.mozilla.org):

servers = (
  {
    address = "irc.example.com";
    chatnet = "OFTC";
    password = "fmarier/oftc:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  },
  {
    address = "irc.example.com";
    chatnet = "Mozilla";
    password = "francois/mozilla:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  }
);

Of course, you'll need to copy your znc.crt file from the server into ~/.irssi/certs/znc.crt.
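
One way to do that, assuming the znc.crt file is still sitting in your home directory on the server where you generated it (adjust host and paths to your setup):

mkdir -p ~/.irssi/certs
scp irc.example.com:znc.crt ~/.irssi/certs/znc.crt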

Make sure that you're no longer authenticating with the nickserv from within irssi. That's znc's job now.

Wrapper scripts

So far, this is a pretty standard znc+irssi setup. What makes it work with my workflow is the wrapper script I wrote to enable znc before starting irssi and then prompt to turn it off after exiting:

#!/bin/bash
# Start the bouncer on the server unless it's already running
ssh irc.example.com "pgrep znc || znc"
irssi
# Once irssi exits, offer to shut the bouncer down
read -p "Terminate the bouncer? [y/N] " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
  ssh irc.example.com killall -sSIGINT znc
fi

Now, instead of typing irssi to start my IRC client, I use irc.
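
In case it helps, a minimal sketch of wiring that up (the ~/bin location and script filename are just assumptions; put it anywhere on your PATH):

mkdir -p ~/bin
cp znc-irssi-wrapper.sh ~/bin/irc   # the wrapper script shown above, under a hypothetical filename
chmod +x ~/bin/irc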

If I'm exiting irssi before commuting or because I need to reboot for a kernel update, I keep the bouncer running. At the end of the day, I say yes to killing the bouncer. That way, I don't have a backlog to go through when I wake up the next day.

Gervase Markham: Not That Secret, Actually…

(Try searching Google Maps for “Secret Location”… there’s one in Norway, one in Toronto, and two in Vancouver!)

Karl Dubost: Fix Your Flexbox Web site

Web compatibility issues take many forms. Some are really hard to solve, and there are sound business reasons behind them. On the other hand, some Web compatibility issues are really easy to fix, with the benefit of opening up more potential market share for the Web site. CSS Flexbox is one of those. I have written about it in the past. Let's make another practical demonstration of how to fix some of the flexbox issues.

8 Lines of CSS Code

Spoiler alert: This is the final result before and after fixing the CSS.

Screenshots of Hao123 site

How did we do it? Someone had reported that the layout was broken on hao123.com on Firefox OS (Mobile). Two things are happening here. First of all, because Hao123 was not sending the mobile version to Firefox OS, we relied on User Agent overriding. By faking the Firefox Android user agent, we got access to the mobile version. Unfortunately, this version is partly tailored for -webkit- CSS properties.

Inspecting the stylesheets with the developer tools, we can easily discover the culprit.

 grep -i "display:-webkit" hao123-old.css
    display:-webkit-box;
    display:-webkit-box;
    display:-webkit-box
    display:-webkit-box;
    display:-webkit-box;
    display:-webkit-box
    display:-webkit-box;
 grep -i "flex" hao123-old.css
    -webkit-box-flex:1;

So I decided to fix it by adding

  1. display:flex; for each display:-webkit-box;
  2. flex-grow: 1; for -webkit-box-flex:1;
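
For illustration, the resulting rules look something like this (the selectors are hypothetical; keeping the prefixed declarations preserves the old behaviour in WebKit-based browsers, while the standard properties fix everything else):

/* hypothetical selectors, showing the pattern of the fix */
.box {
    display: -webkit-box; /* legacy syntax already in the stylesheet */
    display: flex;        /* standard flexbox for other engines */
}
.box > .item {
    -webkit-box-flex: 1;  /* legacy syntax */
    flex-grow: 1;         /* standard equivalent */
}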

The amazing thing with this kind of fix is that the site dynamically fixes itself in your viewport as you go. You are literally sculpting the page. And if the company asks why they should bother? Because for something that will take around 10 minutes to fix, they will suddenly have much bigger coverage of devices… which means users… which means market share.

Guides For Fixing Web Compatibility Issues

I started a repository to help people fix their own Web sites for the most common Web compatibility issues. Contributions to the project are more than welcome.

Otsukare.

Hannah Kane: How to Mofo

OpenMatt and I have been talking about the various ways of working at Mofo, and we compiled this list of what we think works best. What do y’all Mofos think?

When starting a new project:

  • Clearly state the problem or goal. Don’t jump ahead to the solution. Ameliorating the problem is what you’ll measure success against, not your ability to implement an arbitrary solution.
  • Explicitly state assumptions. And, whenever possible, test those assumptions before you build anything. You may have assumptions about the nature of the problem you’re trying to solve, who’s experiencing it, or your proposed solution.
  • Have clear success metrics. How will you know if you’re winning? Do you have the instruments you need to measure success?
  • Determine what resources you need. Think about design, development, content, engagement, evaluation, and ongoing maintenance. We’re working on improving the ways we allocate resources throughout the organization, but to start, be clear about what resources your project will need.
  • Produce a project brief. Detail all of the above in a single document. (Example templates here and here.) Use the project brief when you…
  • …Have a project kick-off meeting. Invite *all* the stakeholders to get involved early.

Communication:

  • Have a check-in plan. Will you have daily check-ins? Weekly email updates? How are you checking in and holding each other accountable?
  • Build a workbench and keep it updated.  We recommend a wiki page that will serve as a one-stop shop for anyone needing information about the project. Things to include: links to project briefs and notes, logistics for meetings, a timeline, a list of who’s involved, and, of course, bugs! Examples here, here, and here.
  • Put your notes in one spot. Keep a single canonical pad for notes and agendas. You don’t need to create a new pad every time you have a meeting or a thought! That makes them very hard to track and find later. Examples here and here.

Doing the do:

  • Plan in two-week heartbeats. This helps us stay on track and makes it clear what the priorities are. Speaking of priorities…
  • Learn the Fine Art of Prioritizing. Hint: Not everything can be P1. The product owner or project manager should rank tasks in order of value added. Remember: prioritization is part of managing workflow. It may be true that all or most of the tasks are required for a successful launch, but that doesn’t help a developer or designer who’s trying to decide what to work on next.
  • Work with your friendly neighborhood Tactical Priorities Syndicate. The name sounds scary, but they’re here to serve you. They meet weekly to get your priorities into each two-week heartbeat process. https://wiki.mozilla.org/Webmaker/TPS

Roberto A. Vitillo: Clustering Firefox hangs

Jim Chen recently implemented a system to collect stacktraces of threads running some code for more than 500ms. A summary of the aggregated data is displayed in a nice dashboard in which the top N aggregated stacks are shown according to different filters.

I have looked at a different way to group the frames that would help us identify the culprits of main-thread hangs, aka jank. The problem with aggregating stackframes and looking at the top N is that there is a very long tail of stacks that are not considered. It might very well be that in that tail some important patterns could be lurking that we are missing.

So I tried different clustering techniques until I settled on the very simple solution of aggregating the traces by their last frame. Why the last frame? When I used k-means to cluster the traces, I noticed that, for many of the more interesting clusters the algorithm found, most stacks had the last frame in common, e.g.:

  • Startup::XRE_Main, (chrome script), Timer::Fire, nsRefreshDriver::Tick, PresShell::Flush, PresShell::DoReflow
  • Startup::XRE_Main, Timer::Fire, nsRefreshDriver::Tick, PresShell::Flush, PresShell::DoReflow
  • Startup::XRE_Main, EventDispatcher::Dispatch, (content script), PresShell::Flush, PresShell::DoReflow

Aggregating by the last frame yields clusters that are big enough to be considered interesting in terms of number of stacktraces and are likely to explain the most common issues our users experience.
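
As a rough sketch (not the actual analysis code), the grouping itself is trivial once the stacks are in hand; here each trace is assumed to be a list of frame names, outermost first:

from collections import Counter

def cluster_by_last_frame(traces):
    """Group stack traces by their last (innermost) frame and report
    each cluster's share of all stacks."""
    clusters = Counter(trace[-1] for trace in traces if trace)
    total = sum(clusters.values())
    return [(frame, count / total) for frame, count in clusters.most_common(10)]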

Currently on Aurora, the top 10 meaningful offending main-thread frames are, in order of importance:

  1. PresShell::DoReflow accounts for 5% of all stacks
  2. nsCycleCollector::collectSlice accounts for 4.5% of all stacks
  3. nsJSContext::GarbageCollectNow accounts for 3% of all stacks
  4. IPDL::PPluginInstance::SendPBrowserStreamConstructor accounts for 3% of all stacks
  5. (chrome script) accounts for 3% of all stacks
  6. filterStorage.js (Adblock Plus?) accounts for 2.7% of all stacks
  7. nsStyleSet::FileRules accounts for 2.7% of all stacks
  8. IPDL::PPluginInstance::SendNPP_Destroy accounts for 2% of all stacks
  9. IPDL::PPluginScriptableObject::SendHasProperty accounts for 2% of all stacks
  10. IPDL::PPluginScriptableObject::SendInvoke accounts for 1.7% of all stacks

Even without showing sample stacks for each cluster, there is some useful information here. The elephants in the room are clearly plugins; or should I say Flash? But just how much do “plugins” hurt our responsiveness? In total, plugin related traces account for about 15% of all hangs. It also seems that the median duration of a plugin hang is not different from a non-plugin one, i.e. between 1 and 2 seconds.

The analysis was run on a week’s worth of data for Aurora (over 50M stackframes), and I got similar results when re-running on previous weeks, so those numbers seem to be pretty stable.

There is some work in progress to improve the status quo. Aaron Klotz’s formidable async plugin initialization is going to eliminate trace 4, and he might tackle frame 8 in the future. Furthermore, a recent improvement in cycle collection is hopefully going to reduce the impact of frame 2.


Mozilla Fundraising: Official Mozilla Gear Is Now Open for Business

Today is the day: The new Official Mozilla Gear website is open for business: https://gear.mozilla.org/ Official Mozilla Gear is the public site where anyone can buy branded gear for their own personal use or to give to loved ones. Consider …

Mozilla Thunderbird: Thunderbird Reorganizes at 2014 Toronto Summit

In October 2014, 22 active contributors to Thunderbird gathered at the Mozilla office in Toronto to discuss the status of Thunderbird, and plan for the future.

Toronto Contributors at 2014 Toronto Summit

Thunderbird contributors gather in Toronto to plan the future.

As background, Mitchell Baker, Chair of the Mozilla Foundation, posted in July 2012 that Mozilla would significantly reduce paid staff dedicated to Thunderbird, and asked community volunteers to move Thunderbird forward. Mozilla at that time committed several paid staff to maintain Thunderbird, each working part-time on Thunderbird but with a main commitment to other Mozilla projects. The staff commitment in total was approximately one full-time equivalent.

Over the last two years, those individuals had slowly reduced their commitment to Thunderbird, yet the formal leadership of Thunderbird remained with these staff. By 2014 Thunderbird had reached the point where nobody was effectively in charge, and it was difficult to make important decisions. By gathering the key active contributors in one place, we were able to make real decisions, plan our future governance, and move to complete the transition from being staff-led to community-led.

At the Summit, we made a number of key decisions:

  • A group of seven individuals were elected to comprise a Thunderbird Council with the authority to make decisions affecting Thunderbird. I (Kent James) am currently the Chair of this council.
  • For our next major release, Thunderbird 38 due in May 2015, we set this roadmap:
    • Folders: allow >4GByte mbox folders, plus finish support for maildir
    • Instant Messaging: Support WebRTC
    • Calendaring: Merge Lightning into Thunderbird as a shipped addon
    • Accounts: Merge the New Account Types binary addon into core, allowing new account types to be defined using addons in the future.
    • IMAP: support OAUTH authorization in GMail.
  • We agreed that Thunderbird needs to have one or more full-time, paid staff to support shipping a stable, reliable product, and allow progress to be made on frequently-requested features. To this end, we plan to appeal directly to our users for donations.
  • The Thunderbird active contributors are proud to be part of Mozilla, expect to remain part of Mozilla for the foreseeable future, and believe we have an important role to play in fulfilling the goals of the Mozilla Manifesto.

There is a lot of new energy in Thunderbird since the Summit, a number of people are stepping forward to take on some critical roles, and we are looking forward to a great next release. More help is always welcome though!

Brian R. Bondy: Automated end to end testing at Khan Academy using Gecko

Developers at Khan Academy are responsible for shipping new stuff they create to khanacademy.org as it's ready. As a whole, the site is deployed several times per day. Testing deploys of khanacademy.org can take up a lot of time.

We have tons of JavaScript and Python unit tests, but they do not catch various errors that can only happen on the live site, such as Content Security Policy (CSP) errors.

We recently deployed a new testing environment for end to end testing which will result in safer deploys. End to end testing is not meant to replace manual testing at deploy time completely, but over time, it will reduce the amount of time taken for manual testing.


Which types of errors do the tests catch?

The end to end tests catch things like missing resources on pages, JavaScript errors, and CSP errors. They do not replace unit tests, and unit tests should be favoured when it's possible.


Which frameworks are we using?


We chose to implement the end to end testing with CasperJS powered by the SlimerJS engine. Actually we even have one more abstraction on top of that so that tests are very simple and clean to write.

SlimerJS is similar to and mostly compatible with the better-known PhantomJS, but SlimerJS is based on Firefox's Gecko rendering engine instead of WebKit. At the time of this writing, it's based on Gecko 33. CasperJS is a set of higher level APIs and can be configured to use PhantomJS or SlimerJS.

The current version of PhantomJS is based on WebKit and is too far behind to be useful for end to end tests of our site yet. There's a newer version of PhantomJS coming, but it's not ready yet. We also considered using Selenium to automate browsers for the testing, but it didn't meet our objectives for various reasons.
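
For reference, a CasperJS suite can be pointed at the SlimerJS engine from the command line. A minimal sketch (the test directory name is hypothetical, and SLIMERJSLAUNCHER needs to point at a Firefox binary on your machine):

# Run a directory of CasperJS end to end tests on Gecko via SlimerJS
export SLIMERJSLAUNCHER=/usr/bin/firefox   # assumed path to the Firefox binary
casperjs test --engine=slimerjs e2e-tests/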


What do the tests do?

They test the actual live site. They can load a list of pages, run scripts on the pages, and detect errors. The scripts emulate a user of the site who fills out forms, logs in, clicks things, waits for things, etc.

We also have scripts for creating and saving programs in our CS learning environment, doing challenges, and we'll even have some for playing videos.


Example script

Here's an example end-to-end test script that logs in, and tests a couple pages. It will return an error if there are any JavaScript errors, CSP errors, network errors, or missing resources:

EndToEnd.test("Basic logged in page load tests", function(casper, test) {
    Auth.thenLogin(casper);
    [
        [ "Home page", "/"],
        [ "Mission dashboard", "/mission/cc-sixth-grade-math"]
    ].map(function(testPage) {
        thenEcho(casper, "Loading page: " + testPage[0]);
        KAPageNav.thenOpen(casper, testPage[1]);
    });
    Auth.thenLogout(casper);
});

When are tests run?

Developers are currently prompted to run the tests when they do a deploy, but we'll be moving this to run automatically from Jenkins during the deploy process. Tests are run both on the staged website version before it is set as the default, and after it is set as the default version.

The output of tests looks like this:

Henrik Skupin: Firefox Automation report – week 39/40 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 39 and 40.

Highlights

One of our goals for last quarter was to get locale testing enabled in Mozmill-CI for each and every supported locale of Firefox beta and release builds. So Cosmin investigated the timing and other possible side-effects that could happen when you test about 90 locales across all platforms! The biggest change we had to make was to the retention policy for logs from executed builds, due to disk space issues. We now delete the logs not only after a maximum number of builds, but also after 3 full days. That gives us enough time to investigate test failures. Once that was done, we were able to enable the remaining 60 locales. For details of all the changes necessary, you can have a look at the mozmill-ci pushlog.

During those two weeks Henrik spent his time on finalizing the Mozmill update tests to support the new signed builds on OS X. Once that was done he also released the new mozmill-automation 2.0.8.1 package.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 39 and week 40.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 39 and week 40.

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1100942] Attachment links in request.cgi should go to the attachment and not default to &action=edit
  • [1101659] Remove curtisk from the auto-cc of the sec portion of the moz project review
  • [1102420] Remove “Firefox Screen Sharing Whitelist Submission” link from new-bug page
  • [1103069] Please fix the colo-trip field for Infrastructure and Operations :: DCops
  • [1102229] custom css stylesheets are not loaded if CONCATENATE_ASSETS is false
  • [1103837] Clicking on a “Bug Bounty” attachment should edit that attachment with the bug-bounty form

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Nicholas Nethercote: Two suggestions for the Portland work week

Mozilla is having a company-wide work week in Portland next week. It’s extremely rare to have this many Mozilla employees in the same place at the same time, and I have two suggestions.

  • Write down a list of people that you want to meet. This will probably contain people you’ve interacted with online but not in person. And send an email to everybody on that list saying “I would like to meet you in person in Portland next week”. I’ve done this at previous work weeks and it has worked well. (And I did it again earlier today.)
  • During the week, don’t do much work you could do at home. This includes most solo coding tasks. If you’re tempted to do such work, stand up and try to find someone to talk to (or listen to) who you couldn’t normally talk to easily. (This is a rule of thumb; if a zero-day security exploit is discovered in your code on Tuesday morning, yes, you should fix it.) Failing that, gee, you might as well do something that you can only do in Portland.

That’s it. Have a great week!

David Boswell: Radical Participation Idea: Slow Down

The Portland Coincidental Work Week is next week and we’ll be working on our plans for 2015. One of the things we want to include in our planning is Mitchell’s question about what does radical participation look like for Mozilla today?

Everyone who is interested in this question is welcome to join us next Thursday and Friday for the Participation work week. Please come with ideas you have about this question. Here is one idea I’m thinking about that feels like an important part of a radical participation plan.

Slow Down

I’ve worked at small software start-ups and I’ve worked at large volunteer-based organizations. There are many differences between the two. The speed that information reaches everyone is a major difference.

For example, I worked at a small start-up called Alphanumerica. There were a dozen of us all working together in the same small space. Here’s a picture of me in my corner (to give you an idea of how old this photo is, it was taken on a digital camera that stored photos on a floppy disk).


To make sure everyone knew about changes, you could get everyone’s attention and tell them. People could then go back to work and everyone would be on the same page. In this setting, moving fast and breaking things works.

Information doesn’t spread this quickly in a globally distributed group of tens of thousands of staff and volunteers. In this setting, if things are moving too fast then no one is on the same page and coordinating becomes very difficult.


Mozilla is not a small start-up where everyone is physically together in the same space. We need to move fast though, so how can we iterate and respond quickly and keep everyone on the same page?

Slow Down To Go Fast Later

It might seem odd, but there is truth to the idea that you can slow down now in order to go faster later. There is even research that backs this up. There’s a Harvard Business Review article on this topic worth reading—this paragraph covers the main take-aways:

In our study, higher-performing companies with strategic speed made alignment a priority. They became more open to ideas and discussion. They encouraged innovative thinking. And they allowed time to reflect and learn. By contrast, performance suffered at firms that moved fast all the time, focused too much on maximizing efficiency, stuck to tested methods, didn’t foster employee collaboration, and weren’t overly concerned about alignment

For Mozilla, would radical participation look like setting goals around alignment and open discussions? Would it be radical to look at other large volunteer-based organizations and see what they optimize for instead of using start-ups as a model?

I’m very interested to hear what people think about the value of slowing down at Mozilla as well as hearing other ideas about what radical participation looks like. Feel free to comment here, post your own blog and join us in Portland.


Armen Zambrano: Pinning mozharness from in-tree (aka mozharness.json)

Since mozharness came around 2-3 years ago, we have had the same issue: we test a mozharness change against the trunk trees, land it, and get it backed out because we regressed one of the older release branches.

This is due to the nature of the mozharness setup: once a change is landed, all jobs start running the same code, no matter which branch the job is running on.

I have recently landed some code, now active on Ash (and soon on Try), that reads a manifest file pointing your jobs to the right mozharness repository and revision. We call this process "pinning mozharness". In other words, what we do is fix an external factor of our job execution.

This will allow you to point your Try pushes to your own mozharness repository.

In order to pin your jobs to a repository/revision of mozharness you have to change a file called mozharness.json which indicates the following two values:
  • "repo": "https://hg.mozilla.org/build/mozharness",
  • "revision": "production"


This is a concept similar to talos.json, which locks every job to a specific revision of talos. The original version of it landed in 2011.

Even though we have had a similar concept since 2011, that doesn't mean it was as easy to make it happen for mozharness. Let me explain why:

  • For talos, mozharness has been checking out the right revision of talos.
  • In the case of mozharness, we can't make mozharness check itself out.
    • Well, we could but it would be a bigger mess
    • Instead we have made buildbot ScriptFactory be a bit more flexible
Coming up:
  • Enable on Try
  • Free up Ash and Cypress
    • They have been used to test custom mozharness patches and the default branch of Mozharness (pre-production)
Long term:
  • Enable the feature on all remaining Gecko trees
    • We would like to see this run at scale for a bit before rolling it out
    • This will allow mozharness changes to ride the trains
If you are curious, the patches are in bug 791924.

Thanks to Rail for all his patch reviews and to Jordan for sparking me to tackle it.



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Tristan Nitot: En vrac du lundi (Monday odds and ends)

Christian Heilmann: Diversifight – a talk at the diversity hackathon at Spotify Sweden

Yesterday afternoon I presented at the “Diversify” hackathon in the offices of Spotify in Stockholm, Sweden. The event was aimed at increasing diversity in IT by inviting a group of students that represented a good mix of gender and ethnic background to work together on hacks with music and music data. There was no strict competitive aspect to this hackathon and no prizes or winners – it was all about working together and seeing how a mixed group can achieve better results.

speaking at diversify
the earth needs rebels
Photos by Sofie Lindblom and Ejay Janis

When I was asked to speak at an event about diversity in IT, I was flattered but also confused. Being very white and male, I don’t really have a chance to speak from the viewpoint of a group that brings diversity to the mix. But I do have a lot of experience, and I have looked into the matter quite a bit. Hence I put together a talk that covers a few things I see going wrong, a few ideas and tools we have to make things better by bettering ourselves, and a reminder that the world of web development used to be much more diverse and we lost those opportunities. In essence, the break-neck speed of our market, the hype of a press and of events that live on overselling the amazing world of startups, and the work environments we put together all seem to be actively discouraging diversity. And that is what I wanted the students to consider and fight once they go out and start working in various companies.

Diversity is nothing we can install – it is something we need to fight for. And it makes no sense if only those belonging to disadvantaged groups do that.

This talk is pretty raw and unedited, and it is just a screencast. I would love to give a more polished version of it soon.

You can watch the screencast on YouTube.

The slides are available on Slideshare.

Resources I covered in the talk:

The feedback was amazing, students really liked it and I am happy I managed to inspire a few people to think deeper about a very important topic.

A big thank you to the Spotify Street Team and especially Caroline Arkenson for having me over (and all the hedgehog photos in the emails).

Yunier José Sosa Vázquez: Firefox’s new “Forget” button

Protect your privacy with the new Forget button, available only in the latest version of Firefox. In just a few clicks, you can delete your most recent history and personal information - from the last five minutes up to the last 24 hours - without touching the rest. The Forget button is very useful if you use a public computer and want to wipe your information, or if you end up on a dubious website and need to get out of there quickly.

Olvidar

If you don’t see this button in the Firefox toolbar, open the Menu, choose Customize and drag the Forget button wherever you want it. There you can also configure your browser however you like, removing and adding buttons to the Menu or to the toolbar.

You can get the latest version of Firefox from our Download Zone for Windows, Mac, Linux and Android.

Staś Małolepszy: Meet.js talk on Clientside localization in Firefox OS

Firefox OS required a fast and lean localization method that could scale up to 70 languages, cater to the needs of hundreds of developers worldwide all speaking different languages and support a wide spectrum of devices with challenging hardware specs.

At the end of September, I went to Poznań to speak about localization technology in Firefox OS at Meet.js Summit. In my talk I discussed how we had been able to create a localization framework which embraces new Web technologies like Web components and mutation observers, how we'd come up with new developer tools to make localization work easier and what exciting challenges lay ahead of us.
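
To give a flavour of the mutation observer part, here is a simplified sketch (not the actual Firefox OS l10n code; the data-l10n-id attribute convention is taken from Gaia, and the translateElement() helper is hypothetical):

// Localize elements carrying a data-l10n-id attribute as they are
// inserted into the document.
var observer = new MutationObserver(function (mutations) {
  mutations.forEach(function (mutation) {
    Array.prototype.forEach.call(mutation.addedNodes, function (node) {
      if (node.nodeType === Node.ELEMENT_NODE && node.hasAttribute('data-l10n-id')) {
        translateElement(node); // hypothetical helper: looks up and applies the translation
      }
    });
  });
});

observer.observe(document.documentElement, { childList: true, subtree: true });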

Botond Ballo: Trip Report: C++ Standards Meeting in Urbana-Champaign, November 2014

Summary / TL;DR

Project status:

  • C++14: Finalized and approved, will be published any day now
  • C++17: Some minor features so far. Many ambitious features are being explored. for (e : range) was taken out.
  • Networking TS: Sockets library based on Boost.ASIO moving forward
  • Filesystems TS: On track to be published early 2015
  • Library Fundamentals TS: Contains optional, any, string_view and more. No major changes since last meeting. Expected 2015.
  • Library Fundamentals TS II: Follow-up to Library Fundamentals TS; will contain array_view and more. In early stage, with many features planned.
  • Array Extensions TS: Continues to be completely stalled. A new proposal was looked at but failed to gain consensus.
  • Parallelism TS: Progressing well. Expected 2015.
  • Concurrency TS: Progressing well. Expected 2015. Will have a follow-up, Concurrency TS II.
  • Transactional Memory TS: Progressing well. Expected 2015.
  • Concepts (“Lite”) TS: Progressing well. Expected 2015.
  • Reflection: Looking at two different proposals. Too early to say anything definitive.
  • Graphics: 2D Graphics TS based on cairo moving forward
  • Modules: Microsoft and Clang have implementations at various stages of completeness. They are iterating on it and trying to converge on a design.
  • Coroutines: Proposals for both stackless and stackful variants will be developed, in a single TS.

Introduction

Last week I attended another meeting of the ISO C++ Standards Committee at the University of Illinois at Urbana-Champaign. This was the third and last Committee meeting in 2014; you can find my reports on the previous meetings here (February 2014, Issaquah) and here (June 2014, Rapperswil). These reports, particularly the Rapperswil one, provide useful context for this post.

The focus of this meeting was moving forward with the various Technical Specifications (TS) that are in progress, and looking ahead to C++17.

C++14

C++14 was formally approved as an International Standard in August when it passed its final ballot (the “DIS”, or Draft International Standard, ballot; see my Issaquah report for a description of the procedure for publishing a new language standard).

It will take another few weeks for ISO to publish the approved standard; it’s expected to happen before the end of the year.

C++17

Strategic Vision

With C++14 being approved, the Committee is turning its attention towards what its strategic goals are for the next revision of the language standard, C++17.

As I explained in my Rapperswil report, most major new features are targeted for standardization in two steps: first, as a Technical Specification (TS), an experimental publication vehicle with no backwards-compatibility requirements, to gain implementation and use experience; and then, by incorporation into an International Standard (IS), such as C++17.

Therefore, a significant amount of the content of C++17 is expected to consist of features being published as Technical Specifications in the near future. It’s not immediately clear which TS’s will be ready for inclusion in C++17; it depends on when the TS itself is published, and whether any concerns about it come up as it’s being implemented and used. Hopefully, at least the ones being published over the next year or so, such as Filesystems, Concepts, Parallelism, Library Fundamentals I, and Transactional Memory, are considered for inclusion in C++17.

In addition, there are some major features that do not yet have a Technical Specification in progress which many hope will be in C++17: namely, Modules and Reflection. Due to the size and scope of these features, it is increasingly likely that the committee will deem it safer to standardize these as TS’s first as well, rather than targeting them directly at C++17. In this case, there may not be time for the additional step of gaining experience with the TS and merging it into the IS in time for C++17; however, it’s too early to know with any confidence at this point.

Minor Features

That said, C++17 will certainly contain some language and library features, and some smaller ones have already made it in. I mentioned a few in my Rapperswil report, but some new ones came out of this meeting:

  • Language features
    • The most notable and exciting feature in my books is folding expressions. These give you the ability to expand a parameter pack over a binary operator. For example, if Args is a non-type parameter pack of booleans, then Args &&... is a new expression which is the ‘and’ of all the booleans in the pack. All binary operators support this; for operators that have a logical identity element (e.g. 0 for addition), an empty pack is allowed and evaluates to that identity. (A short example of the syntax follows after this list.)
    • Another notable change was not an addition, but a removal: the terse form of the range-based for loop, for (elem : range) (which would have meant for (auto&& elem : range)), was removed. (Technically, it was never added, because the C++ working draft was locked for additions in Rapperswil while the C++14 DIS ballot was in progress. However, there was consensus in the Evolution and Core Working Groups in Rapperswil to add it, and there was wording ready to be merged to the working draft as soon as the ballot concluded and it was unlocked for C++17 additions. That consensus disappeared when the feature was put up for a vote in front of full committee in Urbana.) The reason for the removal was that in for (elem : range), there is no clear indication that elem is a new variable being declared; if there already is a variable named elem in scope, one can easily get confused and think the existing variable is being used in the loop. Proponents of the feature pointed out that there is precedent for introducing a new name without explicit syntax for declaring it (such as a type) in generalized lambda captures ([name = init](){ ... } declares a new variable named name), but this argument was not found convincing enough to garner consensus for keeping the feature.
    • std::uncaught_exceptions(), a function that allows you to determine accurately whether a destructor is being called due to stack unwinding or not. There is an existing function, std::uncaught_exception() (note the singular) that was intended for the same purpose, but was inaccurate by design in some cases, as explained in the proposal. This is considered a language feature even though it’s exposed as a library function, because implementing this function requires compiler support.
    • Attributes for namespaces and enumerators. This fills a grammatical hole in the language, where most entities could have an attribute attached to them, but namespaces and enumerators couldn’t; now they can.
    • A shorthand syntax for nested namespace definition.
    • u8 character literals.
    • A piece of official terminology, “forwarding references”, was introduced for a particular use of rvalue references. Some educators have previously termed this use “universal references”, but the committee felt the term “forwarding references” was more accurate.
    • Allowing full constant expressions in non-type template arguments. This plugs a small hole in the language where the template arguments for certain categories of non-type template parameters were restricted to be of a certain form without good reason.
  • Library features
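
As a quick illustration of the folding expressions mentioned above, here is a minimal sketch using the parenthesized spelling, (args && ...), that was ultimately adopted:

#include <cstdio>

// Fold a parameter pack over &&; an empty pack evaluates to the
// identity element of &&, which is true.
template <typename... Args>
constexpr bool all_of(Args... args) {
    return (args && ...);
}

static_assert(all_of(), "an empty pack folds to true");
static_assert(all_of(true, true, true), "all true");

int main() {
    std::printf("%d\n", all_of(true, false)); // prints 0
}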

Evolution Working Group

As usual, I spent most of my time in the Evolution Working Group (EWG), which concerns itself with the long-term evolution of the core language. In spite of there being a record number of proposals addressed to EWG in the pre-Urbana mailing, EWG managed to get through all of them.

Incoming proposals were categorized into three rough categories:

  • Accepted. The proposal is approved without design changes. They are sent on to the Core Working Group (CWG), which revises them at the wording level, and then puts them in front of the committee at large to be voted into whatever IS or TS they are targeting.
  • Further Work. The proposal’s direction is promising, but it is either not fleshed out well enough, or there are specific concerns with one or more design points. The author is encouraged to come back with a modified proposal that is more fleshed out and/or addresses the stated concerns.
  • Rejected. The proposal is unlikely to be accepted even with design changes.

Accepted proposals (note: I’m not including here the ones which also passed CWG the same meeting and were voted into the standard – see above for those):

  • Source code information capture, a proposal to provide a replacement for the __FILE__, __LINE__, and __FUNCTION__ macros that doesn’t involve the preprocessor. I think this proposal constitutes a major advance because it removes one of the main remaining uses of the preprocessor.
  • Alias-set attributes, a mechanism to pass information to the optimizer about pointer aliasing (like restrict in C, but better). Some design feedback was given, but generally the proposal was considered baked enough that the next revision can go directly to CWG.
  • A few small design changes to the Transactional Memory TS.
  • A proposal to specify that the behaviour of standard library comparison function objects for comparing pointers is consistent with the behaviour of the built-in comparison operators, where the latter behaviour is defined. This was a matter of tweaking the specification to say something that people took for granted to begin with.
  • A modification to the Concepts Lite TS: removing constexpr constraints, which were one of the kinds of constraints allowed in requires-expressions. The reason for the removal is that they are tricky to specify and implement, and have no major motivating uses.
  • A compile-time string class, templated only on the string length, which stores its data in a constexpr character array. This was one of two competing compile-time string proposals, the other one being a variadic char... template class which encodes the string contents in the template arguments themselves. The two proposals present a tradeoff between expressiveness and compile-time efficiency: on the one hand, encoding the string contents in the template arguments allows processing the string via template metaprogramming, while in the other proposal the string can only be processed with constexpr functions; on the other hand, the variadic approach involves creating lots of template instantiations for string processing, which can slow down compile times significantly. EWG’s view was that the compile-time efficiency consideration was the more important one, especially as constexpr functions are getting more and more powerful. Therefore, the constexpr array-based proposal was selected to move forward. As the proposal has both core language and library components, it will be going to LEWG for design review of the library components before being sent to CWG and LWG.

Proposals for which further work is encouraged:

  • Destructive move, which addresses classes for which an operation that moves from an object and destroys the moved-from object at the same time is more efficient than moving and destroying separately, because the intermediate (moved-from but not yet destroyed) state would require extra object state to represent.
  • Default comparisons. Three different proposals on this topic were presented: one which would automatically give all classes comparison operators unless they opted out by =delete-ing them, or defined their own; one which would allow opting in to compiler-defined comparison operators via =default; and one which would synthesize comparison operators using reflection. As suggested by the variety of the proposals, this is a feature that everyone wants but no one can agree exactly how it should work. Design considerations that came up included opt-in vs. opt-out, special handling for certain types of fields (such as mutable fields and pointers), special handling for classes with a single member, compile-time performance, and different strengths of ordering (such as weak vs. total orders). After discussing the proposal for half a day, we ran out of time, and decided to pick up at the next meeting in Lenexa, possibly armed with revised proposals. There was one poll taken which provided fairly clear guidance on a single aspect of the proposal: there was much stronger consensus for opt-in behaviour than for opt-out.
  • A [[noreturn]] attribute for main(), designed for programs that are never meant to finish, such as some software running on embedded systems. This would allow the optimizer to remove code for running some cleanup such as the destructors of global objects. EWG liked the proposal, and sent it to CWG with one change, naming the attribute [[noexit]] instead. CWG, however, pointed out that global destructors are potentially generated by all translation units, not just the one that defines main(), and therefore the proposal is not implementable without link-time optimization. EWG discussed the proposal further, but didn’t reach any consensus, and decided to put it off until Lenexa.
  • A paper concerning violations of the zero-overhead principle in exception handling. The motivation behind this discussion was resource-constrained systems such as embedded systems, where the overhead associated with exception handling was unwelcome. The general feedback given was to try to evaluate and address such overhead in a comprehensive manner, rather than trying to avoid running into it in a few specific cases.
  • Proposals for a unified function call syntax. Two alternative proposals were presented: one for partial unification (calling non-member functions with member function call syntax), and one for complete unification (either kind of function can be called with either syntax); the latter would either involve breaking code, or having separate name lookup rules for the two syntaxes (and thus not fully achieving the intended unification in spirit). People were somewhat in favour of the first proposal, and a lot more cautious about the second. There seemed to be enough interest to encourage further exploration of the idea.
  • A proposal to allow initializer lists with elements of a move-only type. There was consensus that we want some way to do this, but no consensus for this specific approach; it was not immediately clear what a superior approach would be.
  • Overloading the member access operator (operator .), similarly to how operator -> can be overloaded. This would enable writing “smart reference” classes, much like how overloading operator -> enables writing smart pointer classes. This would be a significant new feature, and many design considerations remain to be explored; however, there was general interest in the idea.
  • Mechanisms for selecting from parameter packs. This proposal has two parts. The first part is a simple syntax for indexing into a parameter pack: if Ts is a parameter pack, and N is a compile-time integral constant, Ts.[N] is the parameter at index N in Ts (or a SFINAE-eligible error if the index N is out of range). The dot is necessary for disambiguation (if the syntax were simply Ts[N], then consider Ts[Ns]..., where Ns is a parameter pack of size equal to Ts; is this a pack of array types T_1[N_1], T_2[N_2], ..., or is it T_(N_1), T_(N_2), ...?). While people weren’t ecstatic about this syntax (the dot seemed arbitrary), there weren’t any better suggestions raised, and people preferred to have the feature with this syntax than to not have it at all. The second part of the proposal was less baked, and concerned “subsetting” a parameter pack with a pack of indices to yield a new pack; EWG encouraged further thought about this part, and suggested exploring two aspects separately: pack literals (for example 0 ...< 5 might be hypothetical syntax for a pack literal which expands to 0, 1, 2, 3, 4) and pack transformations, which are operations that take a parameter pack as input, and transform it to another parameter pack.
  • A proposal to fix a counter-intuitive aspect of the definition of “trivially copyable”.
  • Supporting custom diagnostics for SFINAE-eligible errors. This proposal aimed to resolve a long-standing deficiency in template design: you had to pick between making an incorrect use of a template SFINAE-eligible (expressing the constraint via enable_if or similar), or giving a custom diagnostic for it (expressing the constraint via a static_assert). The specific suggestion was to allow annotating a = delete-ed function with a custom error message that would be shown if it were chosen as the best match in overload resolution. EWG felt that this was a problem worth solving, but preferred a more general solution, and encouraged the author to come back with one.
  • A proposal to specify the order of evaluation of subexpressions within an expression for some types of expressions. EWG felt this change would be valuable, as the order of evaluation being currently unspecified is a common cause of surprise and bugs, but the exact rules still need some thought.
  • Another proposal for classes with runtime size. Unfortunately, EWG continues to be pretty much deadlocked on this topic:
    • People want arrays of runtime bound, together with a mechanism for them to be used as class members.
    • There is no consensus for having arrays of runtime bound without such a mechanism.
    • There are hard technical problems that need to be solved to allow classes of runtime size. One of the biggest challenges is that some platforms’ ABIs would have to be rearchitected to accommodate classes with a runtime-size data member in the middle (this includes class hierarchies where one of the subobjects that’s not at the end has a runtime-sized member at the end).
    • No one has yet come up with a comprehensive solution to these technical problems.
    • There is a divide between two ways of looking of these proposals: one is to say that stack allocation is an optimization, and implementations are free to place runtime-sized arrays on the heap in situations where placing them on the stack is too difficult; the other is to want a guarantee that the allocation is on the stack. Proponents of the second view argue that we don’t need a new syntax for the “stack allocation is an optimization” use case; we should instead improve our optimizers so they can optimize uses of std::vector and similar into stack allocations.

    Given this state of affairs, the future of classes with runtime size (and of arrays of runtime bound, which people want to tie to classes with runtime size) continues to be uncertain.

  • Inline variables. After some discussion, it became clear that this was a proposal for two separate features with a single syntax: a way to declare and initialize global constants in a header file without having to define them in a .cpp file (which is something everyone wants); and a way to define “expression aliases”. EWG expressed interest in these problems, and encouraged fleshing out separate proposals for them.
  • Categorically qualified classes. This proposal provides a mechanism to express that a class’s objects are meant to be used only as named objects, not temporaries (useful for “scope guard”-type classes), or that a class’s objects are meant to only be used as temporaries (useful for expression templates). For classes in the latter category, it’s useful to provide a mechanism to convert objects of this type to objects of another type when initializing a named variable; as such, this part of the proposal overlaps with the operator auto proposal that was discussed (and encouraged for further work) in Rapperswil. EWG felt that the two use cases (scope guards and expression templates) weren’t sufficiently similar to necessitate fixing them the same way, and that the design questions raised during the operator auto discussion weren’t adequately addressed in this proposal; encouragement was given to continue exploring the problem space, being open to different approaches for the two use cases.
  • Generalized lifetime extension. This paper outlined a set of rules for determining whether the result of an expression refers to any temporaries that appear as subexpressions, and proposed that when the result of an expression is bound to a named variable of reference type (at local scope), the temporaries referred to by the result have their lifetimes extended to the lifetime of the variable. A very limited form of this exists in C++ today; this proposal would generalize it considerably. I found this proposal to be very interesting; it has the potential to dramatically reduce the number of use-after-free errors that occur due to temporaries being destroyed earlier than we intend them to be. On the other hand, if not done carefully, the proposal would have the potential to cause programmers to be more laissez-faire about their analysis of temporary lifetimes, possibly leading to more errors. For EWG, the sticking point was that performing the refers-to analysis for function call expressions where the function body is in another translation unit requires the co-operation of the function author. The paper proposed annotating parameters with the keyword export to indicate that the function’s return value refers to this parameter. EWG didn’t like this, feeling that these annotations would be “exception specifications all over again”, i.e. components of a function declaration that are not quite part of its type, for which we need ad-hoc rules to determine their behaviour with respect to redeclarations, function pointers, overrides in derived classes, being passed as non-type template arguments, and so on. The conclusion was that the problem this proposal addresses is a problem we want solved, but that this approach was not in the right direction for solving the problem.

Rejected proposals:

  • A proposal to let return {expr} be explicit, in the sense that it would allow invoking constructors of the function’s return type even if they were explicit. This proposal had support in Rapperswil, but after several new papers argued against it, EWG decided to shelve it. (A short sketch of what this would have allowed appears after this list.)
  • The proposal for named arguments that Ehsan and I wrote. While a few people in the room liked the idea, the majority had reservations about it; prominent among these were concerns that the proposal would encourage functions with many parameters, and that parameter name changes would break calling code, adding to the maintenance burden on library authors.
  • A proposal for a null-coalescing conditional operator, a ?: b, which would have been equivalent to a ? a : b. EWG felt the utility wasn’t sufficiently compelling to warrant a language change.
  • Checked-dereference conditions. This would have made if (T x : expr) { S } equivalent to if (auto p = expr) { T x = *p; S } (and similarly for while loops and the test-expressions of for loops). EWG felt this shorthand wasn’t sufficiently compelling, and could cause confusion due to the similarity of the syntax to the range-based for loop.
  • A proposal for uniform handling of subobjects. This would allow data members and bases of a class to be interleaved in any order. EWG felt this change was too intrusive and insufficiently motivated.
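
The first rejected item is easier to see with a small sketch (my own, using a hypothetical Handle type): today a return statement using list-initialization cannot invoke an explicit constructor, so you have to name the type; the shelved proposal would have allowed the shorter form.

struct Handle {
    explicit Handle(int fd) : fd_(fd) {}   // explicit constructor
    int fd_;
};

Handle open_file() {
    // return {42};     // ill-formed today: copy-list-initialization won't use an explicit constructor
    return Handle{42};  // what you have to write instead
}

int main() { return open_file().fd_ == 42 ? 0 : 1; }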

Contracts

EWG held a special evening session on the topic of contracts, as there was a lot of interest in them at this meeting. Several papers on the topic were presented; a couple of others were not, due to lack of time or of a presenter.

The only proposal that was specifically considered was a proposal to turn the assert macro into a compiler-recognized operator with one of a specified set of semantics based on the value of the NDEBUG macro; it was rejected, mostly on the basis that it was infeasible to muck with assert and NDEBUG for backwards-compatibility reasons.

Other than that, the discussion was more about high-level design aspects for contract programming rather than specific proposals. Some issues that came up were:

  • Where to specify the contracts for a function – in the declaration, in the implementation, or potentially either – and what the implications are.
  • Whether optimizers should be allowed to assume that contracts are obeyed, such that non-obeyance (e.g. precondition violation) implies undefined behaviour.
  • Whether the standard should specify different modes of behaviour (e.g. “release” vs. “debug”) with respect to contract checking (and if so, how to control the mode, or if this should be left implementation-defined).
  • What the behaviour should be upon contract violation (“keep going” but undefined behaviour, custom handler, terminate, throw, etc.).

The discussion closed with some polls to query the consensus of the room:

  • There was consensus that we want some form of contracts.
  • There was consensus that ensuring correctness and realizing performance gains are both important goals of a contracts proposal, with correctness being the primary one.
  • There was consensus that we need to support contracts in interfaces / declarations (at least).
  • There was no consensus for having some notion of “build modes” specified in the standard to control aspects of contract checking.

These views will likely guide future proposals on this topic.

Coroutines

Coroutines was another topic with a lot of interest at this meeting. There were three proposals on the table: “resumable functions”, “resumable lambdas”, and a library interface based on Boost.Coroutine. These proposals started out under the purview of SG 1 (Concurrency), but then they started growing into a language feature with applications unrelated to concurrency as well, so the proposals were presented in an evening session to give EWG folks a chance to chime in too.

The coroutines proposals fall into two categories: stackful and stackless, with the “resumable functions” and “resumable lambdas” proposals being variations on a stackless approach, and the Boost.Coroutine proposal being a stackful approach.

The two approaches have an expressiveness/performance tradeoff. Stackful coroutines have more overhead, because a stack needs to be reserved for them; the size of the stack is configurable, but making it too small risks undefined behaviour (via a stack overflow), while making it too large wastes space. Stackless coroutines, on the other hand, use only as much space as they need by allocating space for each function call on the heap (these are called activation frames; in some cases, the heap allocation can be optimized into stack allocation). The price they pay in expressiveness is that any function that calls a resumable function (i.e. a stackless coroutine) must itself be resumable, so the compiler knows to allocate activation frames on the heap when calling it, too. By contrast, with the stackful approach, any old function can call into a stackful coroutine, because execution just switches to using the coroutine’s side stack for the duration of the call.

Within the “stackless” camp, the difference between the “resumable functions” and “resumable lambdas” approaches is relatively small. The main difference is that the “resumable lambdas” approach allows coroutines to be passed around as first-class objects (since lambdas are objects).

The authors of the “resumable functions” and Boost.Coroutine proposals have attempted to come up with a unified proposal that combines the power of “stackful” with the expressiveness of “stackless”, but haven’t succeeded, and in fact have come to believe that the tradeoff is inherent. In light of this, and since both approaches have compelling use cases, the committee was of the view that both approaches should be pursued independently, both targeting a single Coroutines Technical Specification, with the authors co-operating to try to capture any commonalities between their approaches (if nothing else then a common, consistent set of terminology) even if a unified proposal isn’t possible. For the stackless approach, participants were polled for a preference between the “resumable functions” and “resumable lambdas” approaches; there was stronger support for the “resumable functions” approach, though I think this was at least in part due to the “resumable lambdas” approach being newer and less well understood.

I had a chance to speak to Chris Kohlhoff, the author of the “resumable lambdas” proposal, subsequent to this session. He had an idea for combining the “stackless” and “stackful” approaches under a single syntax that I found very interesting, which he plans to prototype. If it pans out, it might end up as the basis of a compelling unified proposal after all.

I’m quite excited about the expressivity coroutines would add to the language, and I await developments on this topic eagerly, particularly on Chris’s unified approach.

Embedded Systems

The topic of forming a Study Group to explore ways to make C++ more suitable for embedded systems came up again. In addition to the two papers presented on the topic, some further ideas in this space were containers that can be stored in ROM (via constexpr), and having exceptions without RTTI. It was pointed out that overhead reductions of this sort might be of interest to other communities, such as gaming, graphics, real-time programming, low-latency programming, and resource-constrained systems. EWG encouraged discussion across communities before forming a Study Group.

Library/Library Evolution Working Groups (LWG and LEWG)

I mentioned the library features that are targeted for C++17 in the “C++17” section above. Here I’ll talk about progress on the Library Fundamentals Technical Specifications, and future work.

Library Fundamentals TS I

The first Library Fundamentals TS has already gone through its first formal ballot, the PDTS (Preliminary Draft Technical Specification) ballot. LWG addressed comments sent in by national standards bodies in response to the ballot; the resulting changes were very minor, the most notable being the removal of the network byte-order conversion functions (htonl() and friends) over concerns that they clash with similarly-named macros. LWG will continue addressing the comments during a teleconference in December, and then they plan to send out the specification for its DTS (Draft Technical Specification) ballot, which, if successful, will be its last before publication.

Library Fundamentals TS II

The second Library Fundamentals TS is in the active development stage. Coming into the meeting, it contained a single proposal, for a generalized callable negator. During this meeting, several new features were added to it:

There will very likely be more features added at the next meeting, in May 2015; the TS is tentatively scheduled to be sent out for its PDTS ballot at the end of that meeting.

Future Work

In addition to the proposals which have already been added into C++17 or one of the TS’s, there are a lot of other library proposals in various stages of consideration.

Proposals approved by LEWG and under review by LWG:

Proposals approved by LEWG for which LWG review is yet to start:

Proposal for which LEWG is encouraging further work:

Proposals rejected by LEWG:

There will be a special library-only meeting in Cologne, Germany in February to allow LWG and LEWG to catch up a bit on all these proposals.

Study Groups

SG 1 (Concurrency)

SG 1’s main projects are the Concurrency TS and the Parallelism TS. As with the Library Fundamentals TS, both are likely to be the start of a series of TS’s (so e.g. the Parallelism TS will be followed by a Parallelism TS II).

Besides coroutines, which I talked about above, I haven’t had a chance to follow SG 1’s work in any amount of detail, but I will mention the high-level status:

The Parallelism TS already had its PDTS ballot; comments were addressed this week, resulting in minor changes, including the addition of a transform-reduce algorithm. SG 1 will continue addressing comments during a teleconference in December, and then plans to send the spec out for its DTS ballot. As mentioned above, there are plans for a Parallelism TS II, but no proposals have been approved for it yet.
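
For readers who haven’t seen it, transform-reduce applies a transformation to each element and then folds (reduces) the results; the TS version does this in parallel. A rough sequential equivalent using plain std::accumulate (my own sketch, not the TS interface):

#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4};
    // "Transform" each element by squaring it, then "reduce" with +.
    int sum_of_squares = std::accumulate(v.begin(), v.end(), 0,
        [](int acc, int x) { return acc + x * x; });
    std::cout << sum_of_squares << '\n';   // prints 30
}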

The Concurrency TS has not yet been sent out for its PDTS ballot; that is now planned for Lenexa.

Some library proposals that have been approved by LEWG for the Concurrency TS:

Task regions are still being considered by LEWG, and would likely target Concurrency TS II.

A major feature being looked at by SG 1 is executors and schedulers, with two competing proposals. The two approaches were discussed, and SG 1 felt that at this stage there’s still design work to be done and it’s too early to make a choice. This feature is targeting the second Concurrency TS as it’s unlikely to be ready in time for Lenexa, and SG 1 doesn’t want to hold up the first Concurrency TS beyond Lenexa.

Coroutines are also a concurrency feature, but as mentioned above, they are now targeting a separate TS.

SG 2 (Modules)

EWG spent an afternoon discussing modules. At this point, Microsoft and Clang both have modules implementations, at various levels of completion. The Microsoft effort is spearheaded by Gabriel Dos Reis, who summarized the current state of affairs in a presentation.

The goals of modules are:

  • componentization
  • isolation from macros
  • improving build times
  • making it easier to write semantics-aware developer tools
  • being a code distribution mechanism is, at the moment, an explicit non-goal

The aspects of a modules design that people generally agree on at this point are:

  • modules are not a scoping mechanism (i.e. they are independent of namespaces)
  • when performing template instantiation while compiling a module, the compiler has access to the full module being compiled, but only to the interfaces of imported modules
  • the interface of a module can be separated from its implementation
  • module interfaces cannot have cyclic dependencies
  • only one module owns the definition of an entity

Design points that still need further thought are:

  • visibility of private class members across module boundaries
  • ordering of static/dynamic initialization
  • can macros flow into modules? (e.g. NDEBUG)
    • one view on this is that there should be no standard way to provide an input macro to a module, but implementations can provide implementation-defined mechanisms, such as defining NDEBUG on the compiler command line to build a debug version of a module
    • another option is to “parameterize” a module on certain input parameters (such as the value of the NDEBUG macro)
      • this in turn raises the question of a more general parameterization mechanism for modules, akin to templates
  • can macros flow out of modules? (e.g. can the Boost.Preprocessor library be packaged up into a module?)
  • semantics of entities with internal linkage in a module interface
  • can a module interface be spread across several files?
  • the syntax for defining a module
  • how to deal with #includes in a module

EWG was generally pleased with the progress being made, and encouraged implementors to continue collaborating to get their designs to converge, and report back in Lenexa.

The Clang folks also reported promising performance numbers from their implementation, but detailed/comprehensive benchmarks remain to be performed.

SG 3 (Filesystems)

SG 3 did not meet in Urbana. The Filesystems TS is waiting for its DTS ballot to close; assuming it’s successful (which is the general expectation), it will be published early next year.

Proposals targeting a follow-up Filesystems TS II are welcome; none have been received so far.

SG 4 (Networking)

Organizationally, the work of SG 4 has been conducted directly by LEWG over the past few meetings. This arrangement has been formalized at this meeting, with SG 4’s chair, Kyle Kloepper, retiring, and the SG becoming “dormant” until LEWG decides to reactivate it.

In Rapperswil, LEWG had favourably reviewed a proposal for a C++ networking library based on Boost.ASIO, and asked the author (Chris Kohlhoff, whom I’ve talked about earlier in the context of coroutines) to update the proposal to leverage C++14 language features. Chris has done so, and presented an updated proposal to LEWG in Urbana; this update was also received favourably, and was voted to become the initial working draft of the Networking TS, which now joins the roster of Technical Specifications being worked on by the committee. In other words, we’re one step closer to having a standard sockets library!

SG 5 (Transactional Memory)

I haven’t been following the work of SG 5 very closely, but I know the Transactional Memory TS is progressing well. Its working draft has been created based on two papers, and it’s going to be sent out for its PDTS ballot shortly (after a review conducted via teleconference), with the intention being that the ballot closes in time to look at the comments in Lenexa.

SG 6 (Numerics)

Topics of discussion in SG 6 included:

  • a replacement for std::rand which combines the security of the C++11 <random> facilities with the simple interface of std::rand (a sketch of the motivation appears after this list)
  • special math functions for C++17
  • typedefs similar to int16_t for floating-point types
  • bignums, ratios, fixed-point arithmetic
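
To illustrate the motivation for the std::rand item above (my own sketch, not a proposed interface): the C++11 <random> facilities are secure and flexible but verbose, which is a large part of why people keep reaching for std::rand.

#include <iostream>
#include <random>

// Rolling a fair die "properly" with <random>; the SG 6 discussion is about
// hiding this boilerplate behind an interface as simple as std::rand.
int roll_die() {
    static std::mt19937 engine{std::random_device{}()};
    std::uniform_int_distribution<int> dist(1, 6);
    return dist(engine);
}

int main() { std::cout << roll_die() << '\n'; }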

A Numerics TS containing proposals for some of the above may be started in the near future.

There is an existing TR (Technical Report, an older name for a Technical Specification) for decimal floating-point arithmetic. There is a proposal to integrate this into C++17, but there hasn’t been any new progress on that in Urbana.

SG 7 (Reflection)

SG 7 looked at two reflection proposals: an updated version of a proposal for a set of type traits for reflecting the members of classes, unions, and enumerations, and a significantly reworked version of a comprehensive proposal for static reflection.

The reflection type trait proposal was already favourably reviewed in Rapperswil. At this meeting, additional feedback was given on two design points:

  • Access control. There was consensus that reflection over inaccessible members should be allowed, but that it should occur via a separate mechanism that is spelt differently in the code (for example, there might be one namespace called std::reflect which provides traits for reflecting accessible members only, and another called std::reflect_invasively which provides traits for reflecting all members including inaccessible ones). The rationale is that for some use cases, reflecting only over accessible members is appropriate, while for others, reflecting over all members is appropriate, and we want to be able to spot uses of an inappropriate mechanism easily. Some people also expressed a desire to opt-out from invasive reflection on a per-class basis.
  • Syntax. The proposal’s syntax for e.g. accessing the name of the second member of a class C is std::class_member::name<C,1>. A preference was expressed a) for an additional level of grouping of reflection-related traits into a namespace or class reflect, e.g. std::reflect::class_member::name<C,1>, and b) for not delaying the provision of all inputs until the last component of the trait, e.g. std::reflect<C>::class_member<1>::name. (This last form has the disadvantage that it would actually need to be std::reflect<C>::template class_member<1>::name; some suggestions were thrown around for avoiding this by making the syntax use some compiler magic, as the traits can’t be implemented purely as a library anyway.)

It was also reiterated that this proposal has some limitations (notably, member templates cannot be reflected, nor can members of reference or bitfield type), but SG 7 remains confident that the proposal can be extended to fill these gaps in due course (in some cases with accompanying core language changes).

The comprehensive static reflection proposal didn’t have a presenter, so it was only looked at briefly. Here are some key points from the discussion:

  • This proposal allows reflection at a much greater level of detail – often at the level of what syntax was used, rather than just what entities were declared. For example, this proposal allows distinguishing between the use of different typedefs for the same type in the declaration of a class member; the reflection type traits proposal does not.
  • No one has yet explored this area enough to form a strong opinion on whether having access to this level of detail is a good thing.
  • SG 7 is interested in seeing motivating use cases that are served by this proposal but not by the reflection type traits proposal.
  • Reflecting namespaces – a feature included in this proposal – is viewed as an advanced reflection feature that is best left off a first attempt at a reflection spec.
  • The author is encouraged to do further work on this proposal, with the above in mind. Splitting the proposal into small components is likely to help SG 7 make progress on evaluating it.

There is also a third proposal for reflection, “C++ type reflection via variadic template expansion”, which sort of fell off SG 7’s radar because it was in the post-Issaquah mailing and had no presenter in Rapperswil or Urbana; SG 7 didn’t look at it in Urbana, but plans to in Lenexa.

SG 8 (Concepts)

The Core Working Group continued reviewing the Concepts TS (formerly called “Concepts Lite”) in Urbana. The fundamental design has not changed over the course of this review, but many details have. A few changes were run by EWG for approval (I mentioned these in the EWG section above: the removal of constexpr constraints, and the addition of folding expressions). The hope was to be ready to send out the Concepts TS for its PDTS ballot at the end of the meeting, but it didn’t quite make it. Instead, CWG will continue the review via teleconferences, and possibly a face-to-face meeting, for Concepts only, in January. If all goes well, the PDTS ballot might still be sent out in time for the comments to arrive by Lenexa.

SG 9 (Ranges)

As far as SG 9 is concerned, this has been the most exciting meeting yet. Eric Niebler presented a detailed and well fleshed-out proposal for integrating ranges into the standard library.

Eric’s ranges are built on top of iterators, thus fitting on top of today’s iterator-based algorithms almost seamlessly, with one significant change: the begin and end iterators of a range are not required to be of the same type. As the proposal explains, this small change allows a variety of ranges to be represented efficiently that could not be under the existing same-type model, including sentinel- and predicate-based ranges.
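
To make the differently-typed begin/end idea concrete, here is a minimal sketch of a sentinel-based range (my own illustration, not the proposal’s actual interface): the sentinel encodes “stop at the null terminator”, so the end position never has to be computed up front.

#include <cstdio>

struct null_sentinel {};   // marks the end of a null-terminated string

bool operator!=(const char* p, null_sentinel) { return *p != '\0'; }

// An algorithm written against an (Iterator, Sentinel) pair instead of
// two iterators of the same type.
template <typename It, typename Sent, typename F>
void for_each(It first, Sent last, F f) {
    for (; first != last; ++first) f(*first);
}

int main() {
    for_each("ranges\n", null_sentinel{}, [](char c) { std::putchar(c); });
}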

The main parts of the proposal are a set of range-related concepts, a set of range algorithms, and a set of range views. The foundational concept is Iterable, which corresponds roughly to what we conversationally call (and also what the Boost.Range library calls) a “range”. An Iterable represents a range of elements delimited by an Iterator at the beginning and a Sentinel at the end. Two important refinements of the Iterable concept are Container, which is an Iterable that owns its elements, and Range, which is a lightweight Iterable that doesn’t own its elements. The range algorithms are basically updated versions of the standard library algorithms that take ranges as Iterables; there are also versions that take (Iterator, Sentinel) pairs, for backwards-compatibility with today’s callers. Finally, the range views are ways of transforming ranges into new ranges; they correspond to what the Boost.Range library calls range adaptors. There is also a suggestion to enhance algorithms with “projections”; I personally see this as unnecessary, since I think range views serve their use cases better.

Eric has fully implemented this proposal, thus convincingly demonstrating its viability.

Importantly, this proposal depends on the Concepts TS to describe the concepts associated with ranges and to define algorithms and views in terms of those concepts. (Eric’s implementation emulates the features of the Concepts TS with a C++11 concepts emulation layer.)

The proposal was overall very well received; there was clear consensus that Eric should pursue the high-level design he presented and come back with a detailed proposed specification.

An important practical point that needed to be addressed is that this proposal is not 100% backwards-compatible with the current STL. This wasn’t viewed as a problem, as previous experience trying to introduce C++0x concepts to the STL while not breaking anything has demonstrated that this wasn’t possible without a lot of contortions, and people have largely accepted that a clean break from the old STL is needed to build a tidy, concepts-enabled “STL 2.0”. Eric’s proposal covers large parts of what such an STL 2.0 would look like, so there is good convergence here. The consensus was that Eric should collaborate with Andrew Sutton (primary author and editor of the Concepts TS) on a proposal for a Technical Specification for a concepts-enabled ranges library; the exact scope (i.e. whether it will be just a ranges library, or a complete STL overhaul) is yet to be determined.

SG 10 (Feature Test)

The Feature Test Standing Document (the not-quite-a-standard document used by the committee to specify feature test macros) has been updated with C++14 features.

The feature test macros are enjoying adoption by multiple implementors, including GCC, Clang, EDG, and others.

SG 12 (Undefined Behaviour)

SG 12 looked at:

SG 13 (I/O, formerly “Graphics”)

SG 13 has been working on a proposal for a 2D Graphics TS based on cairo’s API. In Urbana, an updated version of this proposal which included some proposed wording was presented to LEWG. LEWG encouraged the authors to complete the wording, and gave a couple of pieces of design advice:

  • If possible, put in place some reasonable defaults (e.g., a default foreground color used if you don’t explicitly set one) so a “Hello world”-type program can be written more concisely.
  • Where the API differs from a mechanical transliteration of the cairo API, document the rationale for the difference.

Next Meeting

The next full meeting of the Committee will be in Lenexa, Kansas, the week of May 4th, 2015.

There will also be a library-only meeting in Cologne, Germany the week of February 23rd, and a Concepts-specific meeting in Skillman, New Jersey from January 26-28.

Conclusion

This was probably the most action-packed meeting I’ve been to yet! My personal highlights:

  • The amount of interest in coroutines, and the green-light that was given to develop proposals for both the stackful and stackless versions. I think coroutines have the potential to revolutionize how C++ programmers express control flow in many application domains.
  • Eric Niebler’s proposal for bringing ranges to the standard library. It’s the first cohesive proposal I’ve seen that addresses all the tough practical questions involved in such an endeavour, and it was accordingly well-received.
  • The continuing work on modules, particularly the fact that Microsoft and Clang both have implementations in progress and are cooperating to converge on a final design.

Stay tuned for further developments!


PomaxNew Entry

click the entry to start typing

PomaxRSS description testing

My RSS generator wasn't adding article bodies to the RSS, which caused some problems for certain RSS readers. Let's see if this fixes it.

Patrick McManusProxy Connections over TLS - Firefox 33

There have been a bunch of interesting developments over the past few months in Mozilla Platform Networking that will be news to some folks. I've been remiss in not noting them here. I'll start with the proxying over TLS feature. It landed as part of Firefox 33, which is the current release.

This feature is from bug 378637 and is sometimes known as HTTPS proxying. I find that naming a bit ambiguous - the feature is about connecting to your proxy server over HTTPS, but it supports proxying for both http:// and https:// resources (as well as ftp://, ws://, and wss:// for that matter). https:// transactions are tunneled via end-to-end TLS through the proxy via the CONNECT method, in addition to the connection to the proxy being made over a separate TLS session. For https:// and wss:// that means you actually have end-to-end TLS wrapped inside a second TLS connection between the client and the proxy.

There are some obvious and non-obvious advantages here - but proxying over TLS is strictly better than traditional plaintext proxying. One obvious reason is that it provides authentication of your proxy choice - if you have defined a proxy then you're placing an extreme amount of trust in that intermediary. It's nice to know via TLS authentication that you're really talking to the right device.

Also, the communication between you and the proxy is kept confidential, which is helpful to your privacy with respect to observers of the link between client and proxy, though this is not end to end if you're not accessing an https:// resource. Proxying over TLS connections also keeps any proxy-specific credentials strictly confidential. There is an advantage even when accessing https:// resources through a proxy tunnel - encrypting the client-to-proxy hop conceals some information (at least for that hop) that https:// normally leaks, such as the hostname through SNI and the server IP address.

Somewhat less obviously, HTTPS proxying is a prerequisite to proxying via SPDY or HTTP/2. These multiplexed protocols are extremely well suited for use in connecting to a proxy because a large fraction (often 100%) of a client's transactions are funneled through the same proxy, and therefore only 1 TCP session is required when using a prioritized multiplexing protocol. When using HTTP/1 a large number of connections are required to avoid head of line blocking, and it is difficult to meaningfully manage them to reflect prioritization. When connecting to remote proxies (i.e. those with a high latency, such as those in the cloud) this becomes an even more important advantage, as the handshakes that are avoided are especially slow in that environment.

This multiplexing can really warp the old noodle to think about after a while - especially if you have multiple spdy/h2 sessions tunneled inside a spdy/h2 connection to the proxy. That can result in the top-level session multiplexing several streams carrying http:// transactions served by the proxy, as well as CONNECT streams to multiple origins that each contain their own end-to-end spdy sessions carrying multiple https:// transactions.

To utilize HTTPS proxying just return the HTTPS proxy type from your FindProxyForURL() PAC function (instead of the traditional HTTP type). This is compatible with Google's Chrome, which has a similar feature.

function FindProxyForURL(url, host) {
  // Route plain http:// requests through the proxy; the "HTTPS" proxy type
  // means the browser's connection *to the proxy itself* is made over TLS.
  if (url.substring(0, 7) == "http://") {
    return "HTTPS proxy.mydomain.net:443;";
  }
  // Everything else (including https://) connects directly in this example.
  return "DIRECT;";
}


Squid supports HTTP/1 HTTPS proxying. Spdy proxying can be done via Ilya's node.js based spdy-proxy. nghttp can be used for building HTTP/2 proxying solutions (H2 is not yet enabled by default on firefox release channels - see about:config network.http.spdy.enabled.http2 and network.http.spdy.enabled.http2draft to enable some version of it early). There are no doubt other proxies with appropriate support too.

If you need to add a TOFU exception for use of your proxy, it cannot be done in proxy mode. Disable proxying, connect to the proxy host and port directly from the location bar, and add the exception. Then enable proxying and the certificate exception will be honored. Obviously, your authentication guarantee will be better if you use a normal WebPKI-validated certificate.

Kevin NgoPushing Hybrid Mobile Apps to the Forefront

Mozilla Festival 2014 was held in London in October.

At Mozilla Festival 2014, I facilitated a session on Pushing Hybrid Mobile Apps to the Forefront. Before, I had been building a poker app to keep track of my poker winning statistics, record notes on opponents, and crunch poker math. I used the web as a platform, but having an iPhone, wanted this app to be on iOS. Thus, the solution was hybrid mobile apps, apps written in HTML5 technologies that are wrapped to run "natively" on all platforms (e.g., iOS, Android, FirefoxOS).

I stumbled upon the Ionic hybrid mobile app framework. It made app development so easy. It fulfills the promise of the web: write once, run everywhere. In my over two years with Mozilla, I've seen very little hype for hybrid mobile apps. Hybrid mobile apps have the potential to convert many more native developers over to the web platform, but they aren't getting the ad-time they deserve.

What is a Hybrid Mobile App?

Hybrid mobile apps, well explained in this article from Telerik, are apps written in HTML5 technologies that are enabled to run within a native container. They use the device's browser engine to render the app. A web-to-native polyfill, most prominently Cordova, can then be injected in order to access device APIs.

The Current Lack of Exposure for Hybrid Mobile Apps

In all of the Mozilla Developer Network (MDN), there are around three articles on hybrid mobile apps, which aren't fully fleshed out and are in need of technical review. There's been a good amount of work from James Longster in the form of Cordova Firefox OS support. There could be more done on the documentation side.

Cross-platform capability on mobile should be flaunted more. In MDN's main article on Open Web Apps, there's a list of advantages of open web apps. This article is important since it is a good entry point into developing web apps. The advantages listed shouldn't really be considered advantages relative to native apps:

  • Local installation and offline storage: to a developer, these should be inherent to an app, not an explicit advantage. Apps are expected to be installable and have offline storage.
  • Hardware access: also should be inherent to an app and not an explicit advantage. Apps are expected to be able to communicate with the device's APIs.
  • Breaking the walled gardens: there are no "walls" being broken if these web apps only run in the browser and FirefoxOS. They should be able to live inside the App Store and Play Store to really have any effect.
  • Open Web App stores: well, that is pretty cool actually. I built a personal app that I didn't want distributed to anyone except me and one other person. So I simply built a page that had the ability to install the app. However, pure web apps alone can't be submitted to the App Store or Play Store, so that should be addressed first.

What's missing here is the biggest advantage of all: being able to run cross-platform (e.g., iOS, Android, FirefoxOS, Windows). That's the promise of the web, and that's what attracts most developers to the web in the first place. Write it once, run anywhere, with no need to port between languages or frameworks, and still be able to submit to the App Store/Play Store duopoly to gain the most users. For many developers, the web is an appropriate platform, saving time and maintenance.

Additionally, most developers also prefer the traditional idea of apps, that they are packaged up and uploaded to the storefront, rather than self-hosted on a server. On the Firefox Marketplace, the majority of apps are packaged over hosted (4800 to 4100).

There's plenty of bark touting the cross-platform capability of the web, but there's little bite on how to actually achieve that on mobile. Hybrid mobile apps have huge potential to attract more developers to the web platform. But with their lack of exposure, it's wasted potential.

So what can we do? The presence of hybrid mobile apps on MDN could be buffed. I've talked to Chris Mills of the MDN team at Mozfest, and he mentioned it was a goal for 2015. FirefoxOS Cordova plugins may welcome contributors. And I think the biggest way would be to help add official FirefoxOS support to Ionic, a popular hybrid mobile app framework which currently has over 11k stars. They've mentioned they have FirefoxOS on the roadmap.

Building with Ionic

Ionic Framework is a hybrid mobile app framework. It has a beautifully designed set of native-like icons and CSS components, pretty UI transitions, web components (through Angular directives for now), build tools, and an easy-to-use command-line interface.

With Ionic, I built the poker app I mentioned at the beginning. It installs on my phone, and I can use it at the tables:

Poker app

Poker app built with Ionic.

For the Mozfest session, I generated a sample app with Ionic (that simply just makes use of the camera), and put it on Github with instructions. To get started with a hybrid mobile app:

  • npm install -g ionic cordova
  • ionic start myApp tabs - creates a template app
  • cordova plugin add org.apache.cordova.camera - installs the Cordova camera plugin (there are many to choose from)
  • ionic platform add <PLATFORM> - where <PLATFORM> could be ios, android, or firefoxos. This enables the platform
  • ionic platform build <PLATFORM> - builds the project

To emulate it for iOS or Android:

  • ionic emulate <PLATFORM> - will open the app in XCode for ios or adbtools for android

To simulate it for FirefoxOS, open the project with WebIDE inside platforms/firefoxos/www.

How the Mozfest Session Went

It was difficult to plan since Mozfest is more of an unconference, where everything is meant to be hands-on and accessible. Mozfest isn't a deeply technical conference, so I tried to cater to those who don't have much development experience and to those who didn't bring a laptop.

Thus I set up three laptops (my MacBook, a ThinkPad, and a Vaio) and three devices (my iPhone, a Nexus 7, and a FirefoxOS Flame). My MacBook would help demonstrate the iOS side, whereas the other machines had Linux Mint running within VirtualBox. These VMs had adbtools and Firefox with WebIDE set up. All the mobile devices had the demo apps pre-installed so people could try them out.

I was as prepared as a boy scout. Well, until my iPhone was pickpocketed in London, stripping me of the iOS demonstration. Lugging around three laptops that probably amounted to 20 pounds back and forth between the hotel, subway, and venue wasn't fun. I didn't even know what day I was going to present at Mozfest. Then I didn't even use those meticulously prepared laptops at the session: everyone who showed up was pretty knowledgeable, had a laptop, and had an internet connection.

The session went well nonetheless. After a short speech about pushing hybrid mobile apps to the forefront, my Nexus 7 and Flame were passed around to demo the sample hybrid mobile app, which just had a simple camera button. That morning, everyone had received a free Firefox Flame for attending Mozfest, so it turned into more of a WebIDE session on how to get an app onto the Flame. My coworker who attended was able to get the accelerometer working with a "Shake Me / I was shaken." app, and I was able to get geolocation working with an app that displays longitude and latitude coordinates from the GPS.

What I Thought About Mozfest

There was a lot of energy in the building. Unfortunately, the energy didn't reach me, especially since I was heavily aircraft-latencied. Maybe conferences aren't my thing. The place was hectic, and it was hard to find out what was where. I tried to go to a session that was labeled as "The 6th Floor Hub", which turned out to be a small area of a big open room labelled with a hard-to-spot sign that said "The Hub". When I got there, no session was being held despite the schedule saying so, as the facilitator was MIA.

The sessions didn't connect with me. Perhaps I wanted something more technical and concrete that I could take away and use, but most sessions were abstract. There was a big push for Mozilla Webmaker and Appmaker, though those aren't tools I use often. They're great teaching tools, but I usually direct people to Codecademy when they want to learn to build stuff.

There was a lot of what I call "the web kool-aid". Don't get me wrong, I love the web, I've drank a lot of the kool-aid, but there was a lot of championing of the web in the keynotes. I guess "agency" is the new buzzword now. Promoting the web is great, though I've just heard it all before.

However, I was glad to add value to those who found it more inspiring and motivating than me. I believe my session went well and attendees took away something hard and practical. As for me, I was just happy to get back home after a long day of travel and go replace my phone.

Soledad Penades“Invest in the future, build for the web!”, take 2, at OSOM

I am right now in Cluj-Napoca, Romania, for OSOM.ro, a small, totally non-profit, volunteer-organised conference. I gave an updated, shorter, revised version of the talk I gave in Amsterdam last June. As usual, here are the slides and the source for the slides.

It is more or less the same, but better, and I also omitted some sections and spoke a bit about Firefox Developer Edition.

Also I was wearing this Fox-themed sweater which was imbuing me with special powers for sure:

fox sweater

(I found it at H & M last Saturday; there are more animals if foxes aren’t your thing).

There were some good discussions about open source per se, community building and growing. And no, talks were not recorded.

I feel a sort of strange emptiness now, as this has been my last talk for the year, but it won’t be long until other commitments fill that vacuum. Like MozLandia—by this time next week I’ll be travelling to, or already in, Portland for our work week. And when I’m back, I plan to gradually slide into a downward spiral of idleness. At least until 2015.

Looking forward to meeting some mozillians I haven’t met yet, and also visiting Ground Kontrol again and exploring new coffee shops when we have a break in Portland, though :-)


Kevin Ngo'Card Not Formatted' Error on Pentax Cameras with Mac OSX Card Reader

With some 64GB SDHC and SDXC cards on Pentax (and possibly other) cameras, you might get a 'Card Not Formatted' error. It may happen if you take some shots, plug the SD card into your Mac's card reader, upload the shots, and then unplug it. I've seen the error on my K30 and K3. It's not an issue with the camera or the card, though.

The issue is with how the card is unplugged. With some SD cards on OSX, the card has to be properly ejected rather than straight-up unplugged; otherwise it'll be left in some sort of weirdly formatted state. That may be obvious, but I had never run into issues unplugging cards before.

If you hit the error, you don't have to reformat the card. Simply plug it back into your machine and eject it properly; everything will then be torn down correctly and the card will be usable again.

Gregory SzorcTest Drive the New Headless Try Repository

Mercurial and Git both experience scaling pains as the number of heads in a repository approaches infinity. Operations like push and pull slow to a crawl and everyone gets frustrated.

This is the problem Mozilla's Try repository has been dealing with for years. We know the solution doesn't scale. But we've been content kicking the can by resetting the repository (blowing away data) to make the symptoms temporarily go away.

One of my official goals is to ship a scalable Try solution by the end of 2014.

Today, I believe I finally have enough code cobbled together to produce a working concept. And I could use your help testing it.

I would like people to push their Try, code review, and other miscellaneous heads to a special repository. To do this:

$ hg push -r . -f ssh://hg@hg.gregoryszorc.com/gecko-headless

That is:

  • Consider the changeset belonging to the working copy
  • Allow the creation of new heads
  • Send it to the gecko-headless repo on hg.gregoryszorc.com using SSH

Here's what's happening.

I have deployed a special repository to my personal server that I believe will behave very similarly to the final solution.

When you push to this repository, instead of your changesets being applied directly to the repository, it siphons them off to a Mercurial bundle. It then saves this bundle somewhere along with some metadata describing what is inside.

When you run hg pull -r on that repository and ask for a changeset that exists in the bundle, the server does some magic and returns data from the bundle file.

Things this repository doesn't do:

  • This repository will not actually send changesets to Try for you.
  • You cannot hg pull or hg clone the repository and get all of the commits from bundles. This isn't a goal. It will likely never be supported.
  • We do not yet record a pushlog entry for pushes to the repository.
  • The hgweb HTML interface does not yet handle commits that only exist in bundles. People want this to work. It will eventually work.
  • Pulling from the repository over HTTP with a vanilla Mercurial install may not preserve phase data.

The purpose of this experiment is to expose the repository to some actual traffic patterns so I can see what's going on and get a feel for real-world performance, variability, bugs, etc. I plan to do all of this in the testing environment. But I'd like some real-world use on the actual Firefox repository to give me peace of mind.

Please report any issues directly to me. Leave a comment here. Ping me on IRC. Send me an email. etc.

Update 2014-11-21: People discovered a bug with pushed changesets accidentally being advanced to the public phase, despite the repository being non-publishing. I have fixed the issue. But you must now push to the repository over SSH, not HTTP.

Asa DotzlerFlame Distribution Update

About three weeks ago, I ran out of Flame inventory for Mozilla employees and key volunteer contributors. The new order of Flames is arriving in Mountain View late today (Friday) and I’ll be working some over the weekend, but mostly Monday to deliver on the various orders you all have placed with me through email and other arrangements.

If you contacted me for a Flame or a batch of Flames, expect an email update in the next few days with information about shipping or pick-up locations and times. Thanks for your patience these last few weeks. We should not face any more Flame shortages like this going forward.

Jennie Rose HalperinTownhall, not Shopping Mall! Community, making, and the future of the Internet

I presented a version of this talk at the 2014 Futurebook Conference in London, England. They also kindly featured me in the program. Thank you to The Bookseller for a wonderful conference filled with innovation and intelligent people!

A few days ago, I was in the Bodleian Library at Oxford University, often considered the most beautiful library in the world. My enthusiastic guide told the following story:

After the Reformation (when all the books in Oxford were burned), Sir Thomas Bodley decided to create a place where people could go and access all the world’s information at their fingertips, for free.

“What does that sound like?” she asked. “…the Internet?”

While this is a lovely conceit, the part of the story that resonated with me for this talk is the other big change that Bodley made, which was to work with publishers, who were largely a monopoly at that point, to fill his library for free by turning the library into a copyright library. While this seemed antithetical to the ways that publishers worked, in giving a copy of their very expensive books away, they left an indelible and permanent mark on the face of human knowledge. It was not only preservation, but self-preservation.

Bodley was what people nowadays would probably call “an innovator” and maybe even in the parlance of my field, a “community manager.”

By thinking outside of the scheme of how publishing works, he joined together with a group of skeptics and created one of the greatest knowledge repositories in the world, one that still exists 700 years later. This speaks to a few issues:

Sharing economies, community, and publishing should and do go hand in hand and have since the birth of libraries. By stepping outside of traditional models, you are creating a world filled with limitless knowledge and crafting it in new and unexpected ways.

The bound manuscript is one of the most enduring technologies. This story remains relevant because books are still books and people are still reading them.

At the same time, things are definitely changing. For the most part, books and manuscripts were pretty much identifiable as books and manuscripts for the past 1000 years.

But what if I were to give Google Maps to a 16th Century Map Maker? Or what if I were to show Joseph Pulitzer Medium? Or what if I were to hand Gutenberg a Kindle? Or Project Gutenberg for that matter? What if I were to explain to Thomas Bodley how I shared the new Lena Dunham book with a friend by sending her the file instead of actually handing her the physical book? What if I were to try to explain Lena Dunham?

These innovations have all taken place within the last twenty years, and I would argue that we haven’t even scratched the surface in terms of the innovations that are to come.

We need to accept that the future of the printed word may vary from words on paper to an ereader or computer in 500 years, but I want to emphasize that in the 500 years to come, it will more likely vary from the ereader to a giant question mark.

International literacy rates have risen rapidly over the past 100 years and companies are scrambling to be the first to reach what they call “developing markets” in terms of connectivity. In the vein of Mark Surman’s talk at the Mozilla Festival this year, I will instead call these economies post-colonial economies.

Because we (as people of the book) are fundamentally idealists who believe that the printed word can change lives, we need to be engaged with rethinking the printed word in a way that recognizes power structures and does not settle for the limited choices that the corporate Internet provides (think Facebook vs WhatsApp). This is not a panacea to fix the world’s ills.

In the Atlantic last year, Phil Nichols wrote an excellent piece that paralleled Web literacy and early 20th century literacy movements. The dualities between “connected” and “non-connected,” he writes, impose the same kinds of binaries and blind cure-all for social ills that the “literacy” movement imposed in the early 20th century. In equating “connectedness” with opportunity, we are “hiding an ideology that is rooted in social control.”

Surman, who is director of the Mozilla Foundation, claims that the Web, which had so much potential to become a free and open virtual meeting place for communities, has started to resemble a shopping mall. While I can go there and meet with my friends, it’s still controlled by cameras that are watching my every move and its sole motive is to get me to buy things.

85 percent of North America is connected to the Internet and 40 percent of the world is connected. Connectivity increased at a rate of 676% in the past 13 years. Studies show that literacy and connectivity go hand in hand.

How do you envision a fully connected world? How do you envision a fully literate world? How can we empower a new generation of connected communities to become learners rather than consumers?

I’m not one of these technology nuts who’s going to argue that books are going to somehow leave their containers and become networked floating apparatuses, and I’m not going to argue that the ereader is a significantly different vessel than the physical book.

I’m also not going to argue that we’re going to have a world of people who are only Web literate and not reading books in twenty years. To make any kind of future prediction would be a false prophesy, elitist, and perhaps dangerous.

Although I don’t know what the printed word will look like in the next 500 years,

I want to take a moment to think outside the book,

to think outside traditional publishing models, and to embrace the instantaneousness, randomness, and spontaneity of the Internet as it could be, not as it is now.

One way I want you to embrace the wonderful wide Web is to try to at least partially decouple your social media followers from your community.

Twitter and other forms of social media are certainly a delightful and fun way for communities to communicate and get involved, but your viral campaign, if you have it, is not your community.

True communities of practice are groups of people who come together to think beyond traditional models and innovate within a domain. For a touchstone, a community of practice is something like the Penguin Labs internal innovation center that Tom Weldon spoke about this morning and not like Penguin’s 600,000 followers on Twitter. How can we bring people together to allow for innovation, communication, and creation?

The Internet provides new and unlimited opportunities for community and innovation, but we have to start managing communities and embracing the people we touch as makers rather than simply followers or consumers.

The maker economy is here— participatory content creation has become the norm rather than the exception. You have the potential to reach and mobilize 2.1 billion people and let them tell you what they want, but you have to identify leaders and early adopters and you have to empower them.

How do you recognize the people who create content for you? I don’t mean authors, but instead the ambassadors who want to get involved and stay involved with your brand.

I want to ask you, in the spirit of innovation from the edges

What is your next platform for radical participation? How are you enabling your community to bring you to the next level? How can you differentiate your brand and make every single person you touch psyched to read your content, together? How can you create a community of practice?

Community is conversation. Your users are not your community.

Ask yourself the question Rachel Fershleiser asked when building a community on Tumblr: Are you reaching out to the people who want to hear from you and encouraging them or are you just letting your community be unplanned and organic?

There comes a point where we reach the limit of unplanned organic growth. Know when you reach this limit.

Target, plan, be upbeat, and encourage people to talk to one another without your help and stretch the creativity of your work to the upper limit.

Does this model look different from when you started working in publishing? Good.

As the story of the Bodleian Library illustrated, sometimes a totally crazy idea can be the beginning of an enduring institution.

To repeat, the book is one of the most durable technologies and publishing is one of the most durable industries in history. Its durability has been put to the test more than once, and it will surely be put to the test again. Think of your current concerns as a minor stumbling block in a history filled with success, a history that has documented and shaped the world.

Don’t be afraid of the person who calls you up and says, “I have this crazy idea that may just change the way you work…” While the industry may shift, the printed word will always prevail.

Publishing has been around in some shape or form for 1000 years. Here’s hoping that it’s around for another 1000 more.

Mozilla Reps CommunityReps Weekly Call – November 20th 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

reps

Summary

  • FOSDEM update.
  • Post-event metrics and receipts (Important reminder)
  • Firefox Tiles Reps FAQ.
  • 10 days of Mozillians.
  • Yahoo agreement.
  • Community newsletter.
  • #fx10 Jakarta
  • Pending Reps applicants.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Pascal FinetteIntroduction to Exponential Thinking and Technology

A few weeks ago I had the privilege to deliver the closing keynote at GroupM’s What’s Next Illuminate conference in New York City. I gave a short introduction to exponential thinking (the stuff we teach at Singularity University) and then walked the audience through a whole bunch of examples (focussed on media).

The talk was a shortened and more media-related version of my “Technology Trends” talk I give here at SU to groups from all over the world quite often.

Here’s the video:

Mike Taylordocument.body.scrollTop vs document.documentElement.scrollTop

Here's a track from Web Compatibility's Greatest Hits Album (Volume I) that just doesn't want to go away—with the latest club remix titled "scrolling to sections from the menu in the mobile Google News site doesn't work due to setting scrollTop position on document.body in Firefox for Android".

Here's some background for those with less refined musical tastes.

(Why yes I can do this bad metaphor stuff all day long, why do you ask?)

If you want to get or set the vertical scroll position of a document, you can use element.scrollTop. According to the CSSOM View Module spec, if you're in standards mode you need to operate on the document's root element (the <html> element—or document.documentElement in DOM land). In quirks mode you would use the <body> element, via document.body.

This works in IE and Firefox and the late Presto Opera.

In Blink and WebKit browsers, it's the exact opposite. Both have attempted to implement the standard (safari, chrome), but both have had to back out their patches due to sites breaking (some Google properties and webkit.org among them, as luck would have it).

The bug that was filed against WebKit for Facebook breaking as a result of changing to the standard is especially interesting because it shows the tension between following standards (and other browsers) and breaking sites for their own users.

It's also a good example of how user-agent-string-based development can sometimes make it hard, if not impossible, to remove some of the crappier stuff from the web platform.

Here's some excerpts, but the whole bug is a good read.

Comment 15:

It really doesn't matter how faithfully you implemented the spec. If it causes a major backward compatibility with the Web, we can't have it.

Comment 31:

Yes, the regression doesn't reproduce if we fake the UA string as I mentioned in the comment #31.

Maybe sites will update one day and let other browsers do the right thing™. (Not that I'm holding my breath over here.)

Until then I guess we get to have fun writing stuff like this (found on apple.com a few weeks back):

(document.documentElement ||
 document.body.parentNode ||
 document.body).scrollTop;
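
Another sniff-free pattern you sometimes see (a generic sketch, not taken from any of the sites mentioned above) is to read whichever property is populated, and to write to both when setting:

// Read whichever element the engine actually scrolls
function getScrollTop() {
  return document.documentElement.scrollTop || document.body.scrollTop || 0;
}

// Writing to the non-scrolling element is harmless, so just set both
function setScrollTop(y) {
  document.documentElement.scrollTop = y;
  document.body.scrollTop = y;
}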

Yunier José Sosa VázquezNew Search Strategy for Firefox Promotes Choice and Innovation

Google will stop being the default search engine in Firefox for the United States starting in December, according to the official announcement published by Chris Beard on the Mozilla blog. In other regions of the world Google will also be replaced by other “competitors”, in order to promote choice on the Web.

Search is an essential part of the Internet experience for everyone; Firefox users alone perform more than 100 billion searches per year.

With Firefox, Mozilla popularized the integration of search into the browser, partnering with Internet companies such as Google, Yahoo and others to generate revenue and advance its mission. Google has been Firefox’s global default search engine since 2004, and with the contract expiring this year, Mozilla has taken this as an opportunity to review its strategy and explore other options.

According to Beard, when evaluating partners, Mozilla’s first consideration was to ensure a strategy aligned with its values of choice and independence, one capable of positioning the organization and advancing its mission to better serve users and the Web. In the end, every option on the table was strong, improving the economic terms and reflecting the value Firefox brings to the ecosystem. But one option stood out above the rest.

Mozilla has ended its practice of having a single global search provider in Firefox and has instead adopted a more flexible approach that allows different defaults per country:

United States

  • Yahoo will be the default search engine for the next 5 years.
  • Starting in December, Firefox users will be introduced to a new and improved Yahoo search experience featuring a modern interface.
  • As part of this partnership, Yahoo will support Do Not Track (DNT) in Firefox.
  • Google, Bing, DuckDuckGo, eBay, Amazon, Twitter and Wikipedia will continue to be available as alternative search options.

Russia

  • Yandex will be the default search engine.
  • Google, DuckDuckGo, OZON.ru, Price.ru, Mail.ru and Wikipedia will continue to be available as alternative search options.

China

  • Baidu will continue to be the default search engine.
  • Google, Bing, Youdao, Taobao and other local options will continue to be available as alternative search options.

Rest of the world

  • Firefox is a browser for everyone, regardless of search preference.
  • Firefox now offers more choice in search providers than any other browser, with 61 providers pre-installed across 88 language versions.
  • Although Mozilla decided not to renew the contract, Google will continue to be a pre-installed search option.
  • Google will continue to provide Geolocation and Safe Browsing services in Firefox.
  • Mozilla will focus on expanding its work with motivated partners to explore new and innovative search interfaces, content experiences, and privacy enhancements across desktop and mobile.

This is why independence matters. Not chasing profit allows us to create different choices. Choices that keep the Web open, everywhere, and independent. We think today marks a big step in that direction.

It is worth noting that Google will remain available as the default search engine in the remaining countries, but the offer is open for other interested partners to join this strategy.

Source: The Mozilla Blog

Source: Google System

Matthew RuttleyUsing Web Workers in Firefox Extensions

Web Workers allow you to run code in the background in browsers such as Firefox. This post shows how to build one into a Firefox extension, which is slightly different from creating one on a normal web page. The documentation for doing this is basically non-existent, so hopefully you’ll find this useful.

Please make sure you have a development environment set up similar to the one described here in my previous post.

How do workers work?

  • Workers in /data/ are not directly connected to scripts in /lib/
  • However, they can communicate by sending messages to each other
  • These messages are text only, so could contain serialized JSON, but nothing else
  • You’ll notice below that we are basically just slinging messages between two scripts

The code for the worker

Navigate to the /data/ directory and create a file called hello_world.js

> pwd
/Users/mruttley/Documents/test
> ls
data		lib		package.json	test
> cd data/
> vim hello_world.js

Now paste the following in there (new users of vim: press i to start typing, then Esc followed by :wq to save):

//Code for the worker

self.onmessage = function(messageFromClient) {
    self.postMessage("Hello " + messageFromClient.data);
};

This says that whenever the worker receives a message from the client, it sends a message back with the word “Hello” prepended.

One note here: in workers you can’t use the useful console.log("message") function; use dump("message") instead.

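As the list above notes, messages are plain text, so structured data is typically round-tripped through JSON on both sides. Here’s a small hedged sketch (the field names are invented, and worker refers to the ChromeWorker instance we’ll create from the main code below):

//Worker side: parse incoming JSON, reply with JSON
self.onmessage = function(event) {
    var request = JSON.parse(event.data);   // e.g. '{"name":"Matthew"}'
    self.postMessage(JSON.stringify({ greeting: "Hello " + request.name }));
};

//Main code side: serialize before sending, parse the reply
worker.postMessage(JSON.stringify({ name: "Matthew" }));
worker.onmessage = function(event) {
    console.log(JSON.parse(event.data).greeting);   // logs "Hello Matthew"
};
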
Let’s call the worker from the main code

Let’s navigate back to the /lib/ folder and edit the main.js file, which is the first thing that runs in the extension.

> cd ../lib/
> vim main.js

Paste in the following code:

var worker = new Worker("hello_world.js");

worker.onmessage = function(e) {
    console.log(e.data);
};

worker.postMessage("Matthew");

And run cfx run. You’ll notice a messy error:

> cfx run
Using binary at '/Applications/Firefox.app/Contents/MacOS/firefox-bin'.
Using profile at '/var/folders/p1/zzdzcrrx6pq96hgsmy5xjqmh0000gp/T/tmpgFixDP.mozrunner'.
console.error: test: 
  Message: ReferenceError: Worker is not defined
  Stack:
    @resource://jid1-zmowxggdley0aa-at-jetpack/test/lib/main.js:1:9
CuddlefishLoader/options<.load@resource://gre/modules/commonjs/sdk/loader/cuddlefish.js:129:18
run@resource://gre/modules/commonjs/sdk/addon/runner.js:138:19
startup/</<@resource://gre/modules/commonjs/sdk/addon/runner.js:81:7
Handler.prototype.process@resource://gre/modules/Promise-backend.js:865:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise-backend.js:744:7

*************************
A coding exception was thrown in a Promise resolution callback.
See https://developer.mozilla.org/Mozilla/JavaScript_code_modules/Promise.jsm/Promise

Full message: ReferenceError: Worker is not defined
Full stack: @resource://jid1-zmowxggdley0aa-at-jetpack/test/lib/main.js:1:9
CuddlefishLoader/options<.load@resource://gre/modules/commonjs/sdk/loader/cuddlefish.js:129:18
run@resource://gre/modules/commonjs/sdk/addon/runner.js:138:19
startup/</<@resource://gre/modules/commonjs/sdk/addon/runner.js:81:7
Handler.prototype.process@resource://gre/modules/Promise-backend.js:865:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise-backend.js:744:7

*************************
console.error: test: 
  Message: ReferenceError: Worker is not defined
  Stack:
    @resource://jid1-zmowxggdley0aa-at-jetpack/test/lib/main.js:1:9
CuddlefishLoader/options<.load@resource://gre/modules/commonjs/sdk/loader/cuddlefish.js:129:18
run@resource://gre/modules/commonjs/sdk/addon/runner.js:138:19
startup/</<@resource://gre/modules/commonjs/sdk/addon/runner.js:81:7
Handler.prototype.process@resource://gre/modules/Promise-backend.js:865:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise-backend.js:744:7

Aha! The key line here is ReferenceError: Worker is not defined. This is because Firefox extensions use something called a ChromeWorker instead. We need to import this in main.js by pasting this at the top:

var {ChromeWorker} = require("chrome")

and changing the line that references the hello_world.js file to call a ChromeWorker instead:

//var worker = new Worker("hello_world.js"); //remove this
var worker = new ChromeWorker("hello_world.js"); //add this instead

Ok, let’s try running it again with cfx run. Wtf, another error?!

> cfx run
Using binary at '/Applications/Firefox.app/Contents/MacOS/firefox-bin'.
Using profile at '/var/folders/p1/zzdzcrrx6pq96hgsmy5xjqmh0000gp/T/tmpJJXeC4.mozrunner'.
console.error: test: 
  Message: Error: Malformed script URI: hello_world.js
  Stack:
    @resource://jid1-zmowxggdley0aa-at-jetpack/test/lib/main.js:3:14
CuddlefishLoader/options<.load@resource://gre/modules/commonjs/sdk/loader/cuddlefish.js:129:18
run@resource://gre/modules/commonjs/sdk/addon/runner.js:138:19
startup/</<@resource://gre/modules/commonjs/sdk/addon/runner.js:81:7
Handler.prototype.process@resource://gre/modules/Promise-backend.js:865:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise-backend.js:744:7

console.error: test: 
  Message: Error: Malformed script URI: hello_world.js
  Stack:
    @resource://jid1-zmowxggdley0aa-at-jetpack/test/lib/main.js:3:14
CuddlefishLoader/options<.load@resource://gre/modules/commonjs/sdk/loader/cuddlefish.js:129:18
run@resource://gre/modules/commonjs/sdk/addon/runner.js:138:19
startup/</<@resource://gre/modules/commonjs/sdk/addon/runner.js:81:7
Handler.prototype.process@resource://gre/modules/Promise-backend.js:865:23
this.PromiseWalker.walkerLoop@resource://gre/modules/Promise-backend.js:744:7

The key line here is Malformed script URI: hello_world.js. This cryptic error is because Firefox can’t yet access anything in the /data/ folder. We have to use another part of the SDK to enable access to it.

Open main.js and put this at the top:

var self = require("sdk/self");

Now we can use the self.data.url() function. When you pass a filename as the first argument, it returns a string like resource://jid1-zmowxggdley0aa-at-jetpack/test/data/whatever_file.js, which properly refers to the file in the context of an extension. Modify the worker import line as follows:

//let worker = new Worker("hello_world.js"); //remove this
let worker = new ChromeWorker(self.data.url("hello_world.js")); //add this

Now let’s run the extension again using cfx run:

> cfx run
Using binary at '/Applications/Firefox.app/Contents/MacOS/firefox-bin'.
Using profile at '/var/folders/p1/zzdzcrrx6pq96hgsmy5xjqmh0000gp/T/tmppvMjZp.mozrunner'.
console.log: test: Hello Matthew

Yay, it works! The worker returned the message “Hello Matthew”.

FAQ

  • What does this {notation} mean?

It is shorthand for:

var chrome = require("chrome")
var ChromeWorker = chrome['ChromeWorker']

Basically this means that require("chrome") returns an Object, and we just need the value referenced by the key “ChromeWorker”. This is a very succinct way of extracting things from JavaScript Objects that will come in handy in the future.
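
For instance, the same shorthand can pull out several values at once; the SDK’s chrome module exposes a handful of other globals besides ChromeWorker (a hedged sketch, not something this tutorial needs):

var {Cc, Ci, Cu, ChromeWorker} = require("chrome");
// is equivalent to:
// var chrome = require("chrome");
// var Cc = chrome['Cc'], Ci = chrome['Ci'],
//     Cu = chrome['Cu'], ChromeWorker = chrome['ChromeWorker'];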

  • Why is Worker now called ChromeWorker? Are we doing something with Google Chrome?

This is a naming coincidence and has nothing to do with Chrome the browser. “Chrome” in this case refers to Firefox’s privileged add-on internals.

Mozilla FundraisingMozilla Now Accepts Bitcoin

For some time, Mozilla supporters have asked for the ability to donate using bitcoin. We are finally able to fulfill that request. Beginning today, we accept bitcoin as one of the many ways people can choose to support Mozilla.

Pascal FinetteCommunity Building (Lessons from Mozilla)

Today I had the great honor and pleasure to teach a class on Building (Online) Communities at Electronic Arts’ internal leadership development program. For my presentation I took a trip down memory lane and pulled out the key insights and learnings from my time at Mozilla.

Here’s the deck:

Lukas BlakkArtisanal Contributors

Part 1: Start In Person

Ascend had very few ‘rules’ but there was one which was non-negotiable: it’s an in-person program. We didn’t do distance learning, online coursework, or video-based classes. We did bring in a couple of speakers virtually to speak to the room of 20 participants but the opposite was never true.

This was super important to how we were going to build a strong cohort. Don’t get me wrong, I’m a fan of remote work and global contribution, and of people working from wherever they are. This was a 6-week intensive program though, and in order to build the inter-dependent cohort I was hoping for1, it had to be in person at first – especially in those crucial early stages, when someone is more likely to ‘disappear’ if things are hard or confusing, or if they can’t get someone’s attention to ask a question.

It’s been over 5 years since I graduated from my software development program and over 8 years since I started lurking in IRC channels2 and getting to know Mozillians in digital space first. I wouldn’t have stuck with it, or gotten so deeply involved, without my coursework with Dave Humphrey though. That was a once-a-week class, but it meant the world to be in the same room as other people who were learning and struggling with the same or similar problems. It was an all-important thread connecting what I was trying to do in my self-directed time with actual people who cared about me and my ability to participate.

Even as an experienced open source contributor I can jump into IRC channels for projects I’m trying to work on – most recently dd-wrt for my home server setup – and when I ask a question (with lots of evidence for what I’ve already tried and an awareness of what the manual has to say) I get no response, aka: crickets. There are a host of reasons for that, and I know more than a beginner might about what those could be: timezones, family commitments, no one with the expertise currently in the channel, and more. None of that matters when you’re new to this type of environment. Silence is interpreted as a big “GO AWAY, YOU DON’T BELONG HERE” despite the best intentions of any community.

In-person learning is the best way to counter that. Being able to turn to a colleague or a mentor and say what’s happening gets you both reassurance that it’s not you and someone who can help you get unstuck on what to do next: while you wait for a response, check out this other topic we’re studying; perhaps you can try other methods of communication too, like in a bug or an email.

Over the course of our first pilot I also discovered that removing myself from the primary workroom the Ascend participants were in helped the cohort rapidly build up the habit of helping each other first3. The workflow looked more like: have a question/problem, ask a cohort member (or several), if you still can’t figure it out ask on IRC, and if you’re still stuck find your course leader. This put me at the end of the escalation path4 and meant that people were learning to rely on both in-person communication and IRC, but more importantly they were building up the muscle of “don’t stop asking for help until you get it”, which is really where open source becomes such a great space to work in.

Back to my recent dd-wrt experience: I didn’t hear anything back in IRC and I felt I had exhausted the forums & wikis their community provided. I started asking in other IRC channels where tech-minded people hung out (thanks womenwhohack!) and then I tried yet another search with slightly different terms. In the end I found what I needed in a YouTube tutorial. I hope that sufficiently demonstrates that a combination of tactics is what it takes to stay persistent when learning in open source projects.

Never underestimate the importance of removing isolation for new contributors to a project. In person help, even just at first, can be huge.


  1. Because the ultimate goal of Ascend was to give people skills for long-term contribution and participation, and a local cohort of support and fellow learners seemed like a good bet for making that possible once the barrier-removing help of the 6-week intensive was no longer in place. 
  2. By the way, I’m such a huge fan of IRC that I wrote the tutorial for it at Mozilla in order to help get more non-engineering folks using it, in my perfect world everyone is in IRC all the time with scrollback options and logging. 
  3. Only after the first three weeks, that is, once we moved to the more independent stage of working on bugs. 
  4. Which was awesome, because I was always struggling to keep up with course creation as we were running it. I didn’t realize that teaching 9-5 was asking for disaster; next time we’ll do 10-4 for the participants to give the mentors pre- and post-class prep time. 

Matthew RuttleySetting up a development environment for Firefox Extensions.

This is the method I use to create simple Firefox extensions. This tutorial is a precursor to the next one, which is about using Web Workers (i.e. running code on background threads).

Setting up the environment

We’re going to need the Firefox Addon SDK. This is a collection of Python files that will let you run a test (optionally blank) version of Firefox. To download it:

mruttley13338:addon mruttley$ wget https://ftp.mozilla.org/pub/mozilla.org/labs/jetpack/jetpack-sdk-latest.tar.gz

Now extract it and remove the tarball:

mruttley13338:addon mruttley$ tar -zxvf jetpack-sdk-latest.tar.gz
mruttley13338:addon mruttley$ rm -f jetpack-sdk-latest.tar.gz

Go to the directory and startup the special shell:

mruttley13338:addon mruttley$ cd addon-sdk-1.17/
mruttley13338:addon-sdk-1.17 mruttley$ source bin/activate
Welcome to the Add-on SDK. For the docs, visit https://addons.mozilla.org/en-US/developers/docs/sdk/latest/
(addon-sdk-1.17)mruttley13338:addon-sdk-1.17 mruttley$

Now you can see that the shell has prepended (addon-sdk-1.17) in brackets to the prompt. That makes the prompt long enough to fill half the window, so let’s shorten it with the command:

(addon-sdk-1.17)mruttley13338:addon-sdk-1.17 mruttley$ PS1="> "
>

Much cleaner! :)

Setting up the extension template

Now that we have this special addon-sdk shell, navigate back to your documents and create a new folder for our extension.

> cd ~/Documents/
> mkdir tutorial
> cd tutorial

This special shell has various useful commands included, which all look like cfx xyz. For more about them see here. In this case we use cfx init:

> cfx init
* lib directory created
* data directory created
* test directory created
* generated jID automatically: jid1-FYWu7Ld5UwDbRw
* package.json written
* test/test-main.js written
* lib/main.js written

Your sample add-on is now ready.
Do "cfx test" to test it and "cfx run" to try it.  Have fun!
>

Let’s inspect what was created:

> ls -lah
total 8
drwxr-xr-x   6 mruttley  staff   204B 20 Nov 16:47 .
drwx------+ 29 mruttley  staff   986B 20 Nov 16:45 ..
drwxr-xr-x   2 mruttley  staff    68B 20 Nov 16:47 data
drwxr-xr-x   3 mruttley  staff   102B 20 Nov 16:47 lib
-rw-r--r--   1 mruttley  staff   174B 20 Nov 16:47 package.json
drwxr-xr-x   3 mruttley  staff   102B 20 Nov 16:47 test

  • lib contains a file called main.js, which is the main handler file for all extension code
  • data is empty, but can be used to store things like workers (which we will come to later) or large data files
  • test can contain unit tests (quite hard to set up, but useful for test-driven development later)
  • package.json contains metadata about the extension – version number, name of the creator, description, licensing, etc.

You can start writing code in main.js and it will run in the browser. Once finished, use cfx run to test it!
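
As a quick sanity check, a minimal main.js could be nothing more than a couple of log lines (a hypothetical placeholder, not the code that cfx init generates):

//lib/main.js - hypothetical minimal example
var self = require("sdk/self");
console.log("Add-on " + self.id + " (version " + self.version + ") is running");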

See the next tutorial on how to write a Firefox extension using Web Workers!

Lukas BlakkRelease Management Tooling: Past, Present, and Future

As I was interviewing a potential intern for the summer of 2015, I realized I had outlined all our major tools and the next enhancement each could use, but that none of this was well documented anywhere else yet.

Having come to Release Management from my beginnings as a Release Engineer, I’ve been part of seeing our overall release automation improve across the whole spectrum of what it takes to put out packaged software for multiple platforms. We’ve come a long way, so this post is intended to capture how the main tools we use got to their current state, as well as to share where they are heading.

Ship-It

Past: The Release Manager on point for a release sent an email to the Release-Drivers mailing list with an hg changeset, a version, and a build number, and this was the “go” to build for Release Engineering to take over and execute a combination of automated/manual steps (there was even a time when the “go” was only given in IRC; email became the constant when Joduinn pushed for consistency and a traceable trail of events). Release Engineers would update config files & locale changes, get them attached to a bug, approved, and uplifted, then go reconfigure the build machines so they could kick off the release build automation.

Present: Ship-It is an app developed by Release Engineering (bhearsum) that allows a Release Manager to input the configurations needed (changeset, version, build number, partials to be created, l10n changesets) all in one place, and on submit the build automation picks up this change from a db, reconfigures the build machine, and triggers builds. When all goes well, there are zero human hands between the “go” and the availability of builds to QA.

Future: In two parts:
1. To have a simple app that can take a list of bug numbers and check them for landing to {branch} (where branch is Beta, Release, or ESR); once all the bug numbers listed have landed, check Treeherder for green status on that last changeset and submit to Ship-It if builds are successful. Benefits: hands off even sooner, knowing that all the important fixes are on the branch in question and that the tree is totally green prior to build (sometimes we “go” without all the results because of human timing needs).
2. Complete End-To-End Release Checklist, dynamically updated to show what stage a release job is at and who’s got the ball in their court. This should track from buglist added (for the final landings a RM is waiting on) all the way until the release notes are live and QA signs off on updates for the general release being in the wild.

Nucleus (aka Release Note App)

Past: Oh dear, you probably don’t even want to know how our release notes used to be made. It’s worse than sausage. There was a sqlite db file, a script that pulled from that db and generated html based on templates and then the Release Manager had to manually re-order the html to get the desired appearance on final pages, all this was then committed to SVN and with that comes the power to completely break mozilla.org properties. Fun stuff. Really. Also once Release Management was more than just one person we shared this sqlite db over Dropbox which had some fun quirks, like clobbering your changes if two people had the file open at the same time. Nowhere to go but up from here!

Present: Thanks to the web production team (jgmize, hoosteeno, craigcook, jbertsch) we got a new Django app in place that gives us a proper database that’s redundant, production quality, and not in our hands. We add in release notes as well as releases and can publish notes to both staging and production without any more commits to SVN. There’s also an API that can be scripted against.

Future: The future’s so bright in this area, let me get my shades. We have a flag in Bugzilla for relnote-firefox where it can get set to ? when something is nominated and then when we decide to take on that bug as a release note we can set it to {versionNum}+. With a little tweaking on the Bugzilla side of things we could either have a dedicated field for “release-note text” or we could parse it out of a syntax in a comment (though that’s gonna be more prone to user error, so I prefer the former) and then automatically grab all the release notes for a version, create the release in Nucleus, add the notes, publish to staging, and email the link around for feedback without any manual interference. This also means we can dynamically adjust release notes using Bugzilla (and yes, this will need to be really cautiously done), and it makes sure that our recent convention of having every release note connect to a bug persist and become the standard.

Release Dash

Past: Our only way to visualize the work we were doing was a spreadsheet, and graphs generated from it, of how many crasher bugs were tracked for a version, how many bugs tracked/fixed over the course of 18 weeks for a version, and not much else. We also pay attention to the crash rate at ship time, whether we had to do a dot release or chemspill, and any other release-version-specific issues are sort of lost in the fray after we’re a couple of weeks out from a release. This means we don’t have a great sense of our own history, what we’re doing that works in generating a more stable/successful release, and whether a release is in fact ready to go out the door. It’s a gamble, and we take it every 6 weeks.

Present: We have in place a dashboard that is supposed to allow us to view current crash data, select Talos (performance) data, and custom bug queries, and to compare a current release coming down the pipe to previous releases. We do not use this dashboard yet because it’s been a side project for the past year and a half, primarily created and improved upon by fabulous – yet short-term – interns at Mozilla. The dashboard relies on Elasticsearch for Bugzilla data and the cluster it points to is not always up. The dash is written in PHP, which is no one’s strong suit on our current team; our last intern did his work by creating a Python Flask app that would plug into the current dash. The present situation is basically: we need to work on this.

Future: In the future, this dashboard will be robust, reliable, production-quality (and supported), and it will be able to go up on Mozilla office screens in the dashboard rotation where it will make clear to any viewer:
* Where we are in the current release cycle
* What blockers remain for release
* How our stability is (over/under acceptable rates)
* If we’re meeting performance expectations
And hopefully more. We have to find more ways to get visibility into issues a release might hit once it’s with the larger population. I’d love to see us get more of our Beta users’ feedback by asking for it on specific features/fixes, get a broader Beta audience that is more reflective of our overall release population (by hardware, location, language, user types), and then grow their ability to report issues well. Then we can find ways to get that front and center too – including to developers, because they are great at confirming if something unusual is happening.

What Else?

Well, we used to have an automated script that reminded teams of their open & tracked bugs on Beta/Aurora/Nightly in order to provide a priority order that was visible to devs & their managers. It’s a finicky script that breaks often. I’d like to see it replaced with something that’s not just a cronjob on my personal VPS. We’re also this close to not needing to update product-details (still in SVN) on every release. The fact that the Release Management team has the ability to accidentally take down all mozilla.org properties when a mistake is made submitting svn propedits is not desirable or necessary. We should get the heck away from that asap.

We’ll have more discussions of this in Portland, especially with the teams we work closely with. Sylvestre and I will also be talking up our process & future goals at FOSDEM in 2015, following that with a work week in Paris where we can put our heads down and code. Next summer we get an intern again, so we’ll have another set of skilled hands to put on tooling & web service improvements.

Always improving. Always automating. These are the things that make me excited for the next year of Release Management.

Mozilla Open Policy & Advocacy BlogSpotlight on the Open Technology Institute: A Ford-Mozilla Open Web Fellow Host

{This is the third installment in our series highlighting the 2015 Host Organizations for the Ford-Mozilla Open Web Fellows program. We are now accepting applications to be a 2015 fellow. We are thrilled to feature the New America Foundation’s Open Technology Institute as a host. Over the years, OTI has been a meaningful change agent, helping to protect the free and open Web. Working at OTI, the Open Web Fellow will be developing tools that lead to greater transparency, enabling all stakeholders to better understand how public policy and business practices impact the Web experience.}

Spotlight on the Open Technology Institute: A Ford-Mozilla Open Web Fellow Host Organization
By Kevin Bankston, Policy Director, and Georgia Bullen, Senior Data Analyst; Open Technology Institute

Last month’s MozFest 2014 provided us a welcome opportunity to think about what we at New America’s Open Technology Institute hope to do over the next year as one of the few organizations lucky enough to host a Ford-Mozilla Open Web Fellow during that fellowship program’s inaugural year. At OTI, we are committed to freedom and social justice in the digital age. To achieve these goals, we engage in policy debates, build technology, and work with communities to understand needs, test tools and build alternative models of infrastructure. And we are looking for a passionate maker to help us with our work in 2015. In particular, to help make more transparent the workings of the Internet and the companies that offer services over it.


So much of what impacts our online experience happens without us seeing it, making it easy to overlook.

For example, look at the Net Neutrality debate, where decisions made at interconnection points deep in the network have both business and policy implications. At OTI, we have tools that allow us to dig into the technical depths of the issue through our Measurement Lab platform, and we recently published a major report laying out much of that data.  But we need help figuring out how to make this information more available and more clear so that policy experts, advocates, industry professionals and everyday Internet users can understand what interconnection is, how it works, and how it affects the online experience. We’ve started on one of these efforts by working on a visualization tool that we’re calling the Measurement Lab Observatory, but there’s so much more we can do with the Measurement Lab data, as well as the platform and tools to make it more accessible to everyone–if only we can find the right fellow.

With the help of the participants at our MozFest usability workshop, we thought about other ways to get people involved in Internet measurement, such as building a network troubleshooting tool that could generate new M-Lab data while also testing your connection.  We also talked about developing out our Firefox Browser extension to have different themes depending on a user’s needs, such as a journalist or advocate dashboard which includes recent news about Internet policy issues, or a “notebook” app with which Internet citizen scientists can run and annotate tests as part of the M-Lab research team.

These are just the types of ideas that we’re hoping our incoming Ford-Mozilla Fellow can run with.

On the policy and governance side, there’s also a lot more that we could be doing to reveal what happens behind the scenes between governments and Internet companies. Many companies now publish “Transparency Reports” that include information about how and when governments ask for user’s data. However, there’s no standardization in how companies report, making it hard to meaningfully combine or compare the data from different companies — and hard for new companies to get into the reporting game. Building on some of our previous research and education efforts around transparency reporting, in 2015 we will be launching a project called the Transparency Reporting Toolkit.  We’re going to build a Web portal filled with best practices information and tools to help companies create and upload reports in a standardized way, and tools for others to mash up and visualize the data from multiple companies’ reports. OTI’s technologists and data visualization experts are gearing up to build those tools, but it’s a big project and we could use some help — possibly yours.

Ultimately, we can only make good policy with good information, and we can only get good information – and, crucially, understand that information – with good tools.  We’re ready to move forward on all of these projects in 2015, full steam ahead. All we need now is the right technologist to help us make those tools. If that sounds exciting to you, apply to be a 2015 Ford-Mozilla Open Web Fellow and work with us and the Mozilla community to help build new windows into the technical and political depths of the Internet.


Apply to be a 2015 Ford-Mozilla Open Web Fellow. Visit www.mozilla.org/advocacy.

Soledad PenadesUsing the Firefox Developer Edition dark theme with Nightly

With a recent version of Nightly, go to about:config and set browser.devedition.theme.enabled to true.

Open DevTools (I use alt + cmd + i, or you can also go to the Tools → Web Developer → Toggle tools menu). Then open DevTools preferences by clicking on the gear icon, and select “Dark Theme” on the top right, underneath the Themes.

Screenshot for clarification:

nightly with dev edition theme

Note: you might not get the full effect if there is “legacy stuff” in your profile. If it doesn’t look as you expect… your best option might be to just create a new profile when you start the browser.

Note 2: for some reason the tabs weren’t rendering correctly on my normal nightly profile because the about:config browser.tabs.drawInTitlebar entry was set to false instead of true—I set it to true and now everything looks fine for me.
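
If you prefer to keep such tweaks in a file, both preferences can also be set from a user.js in the profile directory (a small sketch using the pref names mentioned above):

// user.js in the Nightly profile directory (sketch)
user_pref("browser.devedition.theme.enabled", true);  // enable the Developer Edition theme
user_pref("browser.tabs.drawInTitlebar", true);       // avoids the tab rendering issue from Note 2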

Or just use the standard Firefox Developer Edition if you’re not an impatient person like me :-P


David Rajchenbach TellerRFC: We deserve better than runtime warnings

Consider the following scenario:

  1. Module A prints warnings when it’s used incorrectly;
  2. Module B uses module A correctly;
  3. Some future refactoring of module B starts using module A incorrectly, hence displaying the warnings;
  4. Nobody realises for months, because we have too many warnings;
  5. Eventually, something breaks.

How often has this happened to every one of us?

This scenario has many variants (e.g. module A changed and nobody realized that module B now misuses it), but they all boil down to the same thing: runtime warnings are designed to be lost, not fixed. To make things worse, many of our warnings are not actionable, simply because we have no way of knowing where they come from – I’m looking at you, Cu.reportError.

So how do we fix this?

We would certainly save considerable amounts of time if warnings caused immediate assertion failures, or alternatively test failures (i.e. fail, but only when running the unit tests). Unfortunately, we can do neither, as we have a number of tests that trigger the warnings either

  • by design (e.g. to check that we can recover from such misuses of A, or because we still need a now-considered-incorrect use of an API to keep working until we have ported all the clients to the better API);
  • or for historical reasons (e.g. the now incorrect use of A used to be correct, but we haven’t fixed all tests that depend on it yet).

However, I believe that causing test failures is still the solution. We just need a mechanism that supports a form of whitelisting to cope with the aforementioned cases.

Introducing RuntimeAssert

RuntimeAssert is an experiment at providing a standard mechanism to replace warnings. I have a prototype implemented as part of bug 1080457. Feedback would be appreciated.

The key features are the following:

  • when a test suite is running, a call to `RuntimeAssert` causes the test suite to fail;
  • when a test suite is running, a call to `RuntimeAssert` contains at least the filename/line number of where it was triggered, preferably a stack wherever available;
  • individual tests can whitelist families of calls to `RuntimeAssert` and mark them as expected;
  • individual tests can whitelist families of calls to `RuntimeAssert` and mark them as pending fix;
  • when a test suite is not running, a call to `RuntimeAssert` does nothing costly (it may default to PR_LOG or Cu.reportError).

Possible API:

  • in JS, we trigger a test failure by calling RuntimeAssert.fail(keyword, string or Error) from production code;
  • in C++, we likewise trigger a test failure by calling MOZ_RUNTIME_ASSERT(keyword, string);
  • in the testsuite, we may whitelist errors by calling Assert.whitelist.expected(keyword, regexp)  or Assert.whitelist.FIXME(keyword, regexp).

Examples:

//
// Module
//
let MyModule = {
  oldAPI: function(foo) {
    RuntimeAssert.fail("Deprecation", "Please use MyModule.newAPI instead of MyModule.oldAPI");
    // ...
  },
  newAPI: function(foo) {
    // ...
  },
};

let MyModule2 = {
  api: function() {
    return somePromise().then(null, error => {
      RuntimeAssert.fail("MyModule2.api", error);
      // Rather than leaving this error uncaught, let’s make it actionable.
    });
  },

  api2: function(date) {
    if (typeof date == "number") {
      RuntimeAssert.fail("MyModule2.api2", "Passing a number has been deprecated, please pass a Date");
      date = new Date(date);
    }
    // ...
   }
}


//
// Whitelisting a RuntimeAssert in a test.
//

// This entire test is about MyModule.oldAPI, warnings are normal.
Assert.whitelist.expected("Deprecation", /Please use MyModule.newAPI/);

// We haven’t fixed all calls to MyModule2.api2, so they should still warn, but not cause an orange.
Assert.whitelist.FIXME("MyModule2.api2", /please pass a Date/);

Assert.whitelist.expected("MyModule2.api", /TypeError/, function() {
  // In this test, we will trigger a TypeError in MyModule2.api, that’s entirely expected.
  // Ignore such errors within the (async) scope of this function.
});

Applications

In the long-term, I believe that RuntimeAssert (or some other mechanism) should replace almost all our calls to Cu.reportError.

In the short-term, I plan to use this for reporting

  • uncaught Promise rejections, which currently require a bit too much hacking for my tastes;
  • errors in XPCOM.lazyModuleGetter & co;
  • failures during AsyncShutdown;
  • deprecation warnings as part of Deprecated.jsm.


Doug BelshawFirefox Interest Dashboard: privacy-respecting analytics for your web browsing history

On a recent Mozilla project call I heard about the new Firefox Interest Dashboard. As someone who loves self-tracking, but stopped using my Fitbit due to privacy concerns, this is awesome.

My Firefox Interest Dashboard

Some of the numbers may be a bit off, and the categorisation certainly is in some cases, but it’s a promising start! The great thing is that if you use Firefox Sync it uses your data from other installations you use, too!

From the Content Services team:

This is an early version of interest categorization we’re working on. We invite you to test out this experimental beta add-on and help us out with the misclassified results. We would love to hear from you on suggestions on improvement or any feedback through the flag icon on the interest timeline.

Unlike other analytics services, the FAQ assures users that “all of the interest analysis and categorization is done on the client-side of your browser. No personal data is stored on Mozilla’s servers.”

Download the add-on (Firefox only)


Questions? Comments? Direct them to doug@mozillafoundation.org or discuss in the #TeachTheWeb discussion forum.

Julien VehentSSL/TLS for the Pragmatic

Tonight I had the pleasure to present "SSL/TLS for the Pragmatic" to the fine folks of Bucks County Devops. It was a fun evening, and I want to thank the organizers, Mike Smalley & Ben Krein, for the invitation.

It was a great opportunity to summarize 18 months of work at Mozilla on building the Server Side TLS Guidelines. By the feedback I received tonight, and on several other occasions, I think we've achieved the goal of building a document that is useful to operations people, and made TLS just a little easier to understand.

We are not, however, anywhere done with the process of teaching TLS to the Internet. Stats speak for themselves, with 70% of sites still supporting SSLv3, 86% enabling RC4, and about 32% still not preferring PFS over RSA handshakes. But things are getting better every day, and ongoing efforts may bring safe defaults in Linux servers as soon as Fedora 21. We live in exciting times!

The slides from my talk are below, and on github as well. I hope you enjoy them. Feel free to share your comments at julien[at]linuxwall.info.

Giorgio Maones/http(:\/\/(?:noscript|flashgot|hackademix)\.net)/https\1/

I'm glad to announce that noscript.net, flashgot.net and hackademix.net have finally been switched to full, permanent TLS with HSTS.

Please do expect a smörgåsbord of bugs and bunny funny stuff :)
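
For the curious, the title of this post is a sed-style substitution; applied in JavaScript it would look roughly like this (an illustration only):

// Rewrites http:// URLs for the three domains above to https://
var url = "http://noscript.net/faq";
url = url.replace(/http(:\/\/(?:noscript|flashgot|hackademix)\.net)/, "https$1");
// url === "https://noscript.net/faq"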

Andreas GalYahoo and Mozilla Form Strategic Partnership

SUNNYVALE, Calif. and MOUNTAIN VIEW, Calif., Wednesday, November 19, 2014 – Yahoo Inc. (NASDAQ: YHOO) and Mozilla Corporation today announced a strategic five-year partnership that makes Yahoo the default search experience for Firefox in the United States on mobile and desktop. The agreement also provides a framework for exploring future product integrations and distribution opportunities to other markets.

The deal represents the most significant partnership for Yahoo in five years. As part of this partnership, Yahoo will introduce an enhanced search experience for U.S. Firefox users which is scheduled to launch in December 2014. It features a clean, modern and immersive design that reflects input from the Mozilla team.

“We’re thrilled to partner with Mozilla. Mozilla is an inspirational industry leader who puts users first and focuses on building forward-leaning, compelling experiences. We’re so proud that they’ve chosen us as their long-term partner in search, and I can’t wait to see what innovations we build together,” said Marissa Mayer, Yahoo CEO. “At Yahoo, we believe deeply in search – it’s an area of investment, opportunity and growth for us. This partnership helps to expand our reach in search and also gives us an opportunity to work closely with Mozilla to find ways to innovate more broadly in search, communications, and digital content.”

“Search is a core part of the online experience for everyone, with Firefox users alone searching the Web more than 100 billion times per year globally,” said Chris Beard, Mozilla CEO. “Our new search strategy doubles down on our commitment to make Firefox a browser for everyone, with more choice and opportunity for innovation. We are excited to partner with Yahoo to bring a new, re-imagined Yahoo search experience to Firefox users in the U.S. featuring the best of the Web, and to explore new innovative search and content experiences together.”

To learn more about this, please visit the Yahoo Corporate Tumblr and the Mozilla blog.

About Yahoo

Yahoo is focused on making the world’s daily habits inspiring and entertaining. By creating highly personalized experiences for our users, we keep people connected to what matters most to them, across devices and around the world. In turn, we create value for advertisers by connecting them with the audiences that build their businesses. Yahoo is headquartered in Sunnyvale, California, and has offices located throughout the Americas, Asia Pacific (APAC) and the Europe, Middle East and Africa (EMEA) regions. For more information, visit the pressroom (pressroom.yahoo.net) or the Company’s blog (yahoo.tumblr.com).

About Mozilla

Mozilla has been a pioneer and advocate for the Web for more than a decade. We create and promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to discover, experience and connect to the Web on computers, tablets and mobile phones. For more information please visit https://www.mozilla.com/press

Yahoo is a registered trademark of Yahoo! Inc. All other names are trademarks and/or registered trademarks of their respective owners.


Filed under: Mozilla

Monty MontgomeryDaala Demo 6: Perceptual Vector Quantization (by J.M. Valin)

Jean-Marc has finished the sixth Daala demo page, this one about PVQ, the foundation of our encoding scheme in both Daala and Opus.

(I suppose this also means we've finally settled on what the acronym 'PVQ' stands for: Perceptual Vector Quantization. It's based on, and expanded from, an older technique called Pyramid Vector Quantization, and we'd kept using 'PVQ' for years even though our encoding space was actually spherical. I'd suggested we call it 'Pspherical Vector Quantization' with a silent P so that we could keep the acronym, and that name appears in some of my slide decks. Don't get confused, it's all the same thing!)

David DahlEncryptr: ‘zero knowledge’ essential information storage

Encryptr is one of the first “in production” applications built on top of Crypton. Encryptr can store short pieces of text like passwords, credit card numbers and other random pieces of information privately, in the cloud. Since it uses Crypton, all data that is saved to the server is encrypted first, making even a server compromise an exercise in futility for the attacker.

A key feature is that you can run Encryptr on your phone as well as your desktop and all data is available in each place immediately. Have a look:


The author of Encryptr, my colleague Tommy @therealdevgeeks, has recently blogged about building Encryptr. I hope you give it a try and send him feedback through the Github project page.


Christian HeilmannSimple things: styling ordered lists

This blog started as a scratch pad of simple solutions to problems I encountered. So why not go back to basics?

It is pretty easy to get an ordered list into a document. All you have to do is add an OL element with LI child elements:

<ol>
  <li>Collect underpants</li>
  <li>???</li>
  <li>Profit</li>
</ol>

But what if you want to style the text differently from the numbers? What if you don’t like that they end with a full stop? The generated numbers of the OL are part of that dark magic browsers do for us (something we’re working on dragging into the sunlight with ShadowDOM).

In order to make those more style-able in the past you had to add another element to get a hook:

<ol class="oldschool">
  <li><span>Collect underpants</span></li>
  <li><span>???</span></li>
  <li><span>Profit</span></li>
</ol>

.oldschool li {
  color: green;
}
.oldschool span {
  color: lime;
}

Which is kind of a terrible hack and doesn’t quite scale as you may never know who edits your list. With newer browsers we have a better way of doing that using CSS counters. The browser support is ridiculously good, so there should be no excuse for us not to use them:

counters

Using counter, you keep the HTML structure:

<ol class="counter">
  <li>Collect underpants</li>
  <li>???</li>
  <li>Profit</li>
</ol>

You then reset the counter for each of the lists with this class:

.counter { 
  counter-reset: list;
}

This means each list will start at 1 and not carry on through the document tree. You then get rid of the list style and style the list item however you want. In this case we give it a colour and position it relative. This allows us to position other, new content in there and contain it to the list item:

.counter li {
  list-style: none;
  position: relative;
  color: lime;
}

Once you’ve hidden the normal numbering with list-style: none; you can create your own numbers using counter and generated CSS content:

.counter li::before {
  counter-increment: list;
  content: counter(list) '.';
  position: absolute;
  top: 0px;
  left: -1.2em;
  color: green;
}

If you wanted to remove the full stop, all you need to do is remove it in the CSS. You now have full styling control over these numbers; for example, you can animate them slightly, moving from one colour to another and scaling them up a bit:

demo animation of the effect

.animated li::before {
  transition: 0.5s;
  color: green;
}
.animated li:hover::before {
  color: white;
  transform: scale(1.5);
}

Counters allow for a lot of different types of numbering. For example, you can add a leading zero by using counter(list, decimal-leading-zero), use Roman numerals with counter(list, lower-roman), or even Greek letters with counter(list, lower-greek).

If you want to see all of that in action, check out this Fiddle:

Pretty simple, and quite powerful. Here are some more places to read up on this:

Jonathan WattConverting Mozilla's SVG implementation to Moz2D - part 2

This is part 2 of a pair of posts describing my work to convert Mozilla's SVG implementation to directly use Moz2D. Part 1 provided some background information and details about the process. This post will discuss the performance benefits of the conversion of the SVG code and future work.

Benefits

For the most part the performance improvements from the conversion to Moz2D were gradual; as code was incrementally converted, little by little gfxContext overhead was avoided. On doing an audit of our open SVG performance bugs, it seems that painting performance is no longer one of the reasons we perform poorly, except when we use cairo-backed DrawTargets (Linux, Windows XP and other Windows versions with blacklisted drivers), and with the exception of one bug that needs further investigation. (See below for the issues that still do cause SVG performance problems.)

Besides the incremental improvements, there have been a couple of interesting perf bumps that are worth mentioning.

The biggest perf bump by far came when I converted the code that does the actual filling and stroking of SVG geometry to directly use a DrawTarget. The time taken to render this map then dropped from about 15s to 1-2s on my Mac. On the same machine Chrome Canary shows a blank window for about 5s, and then takes a further 20s to render. Now, to be honest, this improvement will be down to something pathological that has been removed rather than being down to avoiding Thebes overhead. (I haven't got to the bottom of exactly what that was yet.) The DrawTarget object being drawn to is ultimately the same object, and Thebes overhead isn't likely to be more than a few percent of any time spent in this code. Nevertheless, it's still a welcome win.

Another perf bump that came from the Moz2D conversion was that it enabled us to cache path objects. When using Thebes, paths are built up using gfxContext API calls and the consumer never gets to touch the resulting path. This prevents the consumer from keeping hold of the path and reusing it in future. This can be a disadvantage when the path is reused frequently, especially when D2D is being used where path creation is relatively expensive. Converting to Moz2D has allowed the SVG code to hold on to the path objects that it creates and reuse them. (For example, in addition to their obvious use during rasterization, paths might be reused for bounds calculations (think invalidation areas, objectBoundingBox content, getBBox() calls) and hit-testing.) Caching paths made us noticeably more responsive on this cool data visualization (temporarily mirrored here while the site is down) when mousing over the table rows, and gave us a +25% boost on this NYT article, for example.

For those of you that are interested in Talos, I did take a look at the SVG test data, but the unfortunately frequent up-and-down of unrelated regressions and wins makes it impossible to use that to show any overall impact of Moz2D conversion on the Talos tests. (Since the beginning of the year the times on Windows have improved slightly while on Mac they have regressed slightly.) The incremental nature of most of the work also unfortunately meant that the impact of individual patches couldn't usually be distinguished from the noise in Talos' results. One notable exception was the change to make SVG geometry use a path object directly which resulted in an improvement in the region of 6% for the svg_opacity suite on Windows 7 and 8.

Other than the performance benefits, several parts of the SVG implementation that were pretty messy and hard to get into and debug have become a lot more approachable. This has already allowed me to fix various SVG bugs that would otherwise have taken a lot longer to work out, and I hope it makes the code easier to approach for devs who aren't so familiar with it.

One final note on performance for any of you who will do your own testing to compare builds: note that the enabling of e10s and tiled layers has caused significant changes in performance characteristics. You might want to turn those off.

Future SVG work

As I noted above there are still SVG performance issues unrelated to graphics speed. There are three sources of significant SVG performance issues that can make Mozilla perform poorly on SVG relative to other implementations. There is our lack of hardware acceleration of SVG filters; there's the issue of display list overhead dwarfing painting on SVGs that contain huge numbers of elements (display lists being an implementation detail, and one that gave us very large wins in many other cases); and there are a whole bunch of "strange" bugs that I expect are related to our layers infrastructure that are causing us to over invalidate (and thus do work painting when we shouldn't need to).

Currently these three issues are not on a schedule, but as other higher priority Mozilla work gets ticked off I expect we’ll add them.

Future Moz2D work

The performance benefits from the Moz2D conversion on the SVG code do seem to have been positive enough that I expect that we will continue converting the rest of layout in the future. As usual, it will all depend on relative priorities though.

One thing that we should do is audit all the code that creates DrawTargets to check for backend type compatibility. Mixing hardware and software backed DrawTargets when we don't need to can cause us to unwittingly be taking big performance hits due to readback from and/or upload to the GPU. I fixed several instances of mismatch that I happened to notice during the conversion work, and in one case accidentally introduced one which fortunately was caught because it caused a 10-25% regression in a specific Talos test. We know that we still have outstanding bugs on this (such as bug 944571) and I'm sure there are a bunch of cases that we're unaware of.

I mentioned above that painting performance is still a significant issue on machines that fall back to using cairo backed DrawTargets. I believe that the Graphics team's plan to solve this is to finish the Skia backend for Moz2D and use that on the platforms that don't support D2D.

There are a few things that need to be added to Moz2D before we can completely get rid of gfxContext. The main thing we’re missing is a push-group API on DrawTarget. This is the main reason that gfxContext actually wraps a stack of DrawTargets, which has all sorts of irritating fallout. Most annoyingly, it makes it hazardous to set clip paths or transforms directly on DrawTargets that may be accessed via a wrapping gfxContext before the DrawTarget’s clip stack and transform have been restored, and it is why I had to continue passing gfxContexts to a lot of code that now only paints directly via the DrawTarget.

The only Moz2D design decision that I've found myself to be somewhat unhappy with is the decision to make patterns relative to user-space. This is what most other hardware accelerated libraries do, but I don't think it's a good fit for 2D browser rendering. Typically crisp rendering is very important to web content, so we render patterns assuming a specific user-space to device-space transform and device space pixel alignment. To maintain crisp rendering we have to make sure that patterns are used with the device-space transform that they were created for, and having to do this manually can be irksome. Anyway, it's a small detail, but something I'll be discussing with the Graphics guys when I see them face-to-face in a couple of weeks.

Modulo the two issues above (and all the changes that I and others had made to it over the last year) I've found the Moz2D API to be a pleasure to work with and I feel the SVG code is better performing and a lot cleaner for converting to it. Well done Graphics team!

Jonathan WattConverting Mozilla's SVG implementation to Moz2D - part 1

One of my main work items this year was the conversion of the graphics portions of Mozilla's SVG implementation to directly use Moz2D APIs instead of using the old gfxContext/gfxASurface Thebes APIs. This pair of posts will provide some information on that work. This post will give some background and information on the conversion process, while part 2 will provide some discussion about the benefits of the work and what steps we might want to carry out next.

For background on why Mozilla is building Moz2D (formerly called Azure) and how it can improve Mozilla's performance see some of the earlier posts by Joe, Bas and Robert.

Early Moz2D development

When Moz2D was first being put together it was initially developed and tested as an alternative rendering backend for Mozilla's implementation of HTML <canvas>. Canvas was chosen as the initial testbed because its drawing is largely self contained, it requires a relatively small number of features from any rendering backend, and because we knew from profiling that it was being particularly impacted by Thebes/cairo overhead.

As Moz2D started to become more stable, Thebes' gfxContext class was extended to allow it to wrap a Moz2D DrawTarget (prior to that it was backed only by an instance of a Thebes gfxASurface subclass, in turn backed by a cairo_surface_t). This might seem a bit strange since, after all, Moz2D is supposed to replace Thebes, not be wrapped by it adding yet another layer of abstraction and overhead. However, it was an important step to allow the Graphics team to start testing Moz2D on Mozilla's more complicated, non-canvas, rendering scenarios. It allowed many classes of Moz2D bugs and missing Moz2D features to be worked on/out before beginning a larger effort to convert the masses of non-canvas rendering code to Moz2D.

In order to switch any of the large number of instances of gfxContext to be backed by a DrawTarget, any code that might encounter that gfxContext and try to get a gfxASurface from it had to be updated to handle DrawTargets too. For example, lots of forks in the code had to be added to BasicLayerManager, and gfxFont required a new GlyphBufferAzure class to be written. As this work progressed some instances of Thebes gfxContexts were permanently flipped to being backed by a Moz2D DrawTarget, helping keep working Moz2D code paths from regressing.

SVG, the next Guinea pig

Towards the end of 2013 it was felt that Moz2D was sufficiently ready to start thinking about converting Mozilla's layout code to use Moz2D directly and eliminate its use of gfxContext API. (The layout code being the code that decides where and how most things are placed on the screen, and by far the biggest consumer of the graphics code.) Before committing a lot of engineering time and resources to a large scale conversion, Jet wanted to convert a specific part of the layout code to ensure that Moz2D could meet its needs and determine what performance benefits it could provide to layout. The SVG code was chosen for this purpose since it was considered to be the most complicated to convert (if Moz2D could work for SVG, it could work for the rest of layout).

Stage 1 - Converting all gfxContexts to wrap a DrawTarget

After drawing up a rough list of the work to convert the SVG code to Moz2D I got stuck in. The initial plan was to add code paths to the SVG code to check for and extract DrawTargets from gfxContexts that were passed in (if the gfxContext was backed by one) and operate directly on the DrawTarget in that case. (At some future point the Thebes forks could then be removed.) It soon became apparent that these forks were often not how we would want the code to be structured on completion of Moz2D conversion though. To leverage Moz2D more effectively I frequently found myself wanting to refactor the code quite substantially, and in ways that were not compatible with the existing Thebes code paths. Rather than spending months writing suboptimal Moz2D code paths only to have to rewrite things again when we got rid of the Thebes paths I decided to save time in the long run and first make sure that any gfxContexts that were passed into SVG code would be wrapping a DrawTarget. That way maintaining Thebes forks would be unnecessary.

It wasn't trivial to determine which gfxContexts might end up being passed to SVG code. The complexity of the code paths and the virtually limitless permutations in which Web content can be combined meant that I only identified about a dozen gfxContexts that could not end up in SVG code. As a result I ended up working to convert all gfxContexts in the Mozilla code. (The small amount of additional work to convert the instances that couldn't end up in SVG code allowed us to reduce a whole bunch of code complexity, remove a lot of then-dead code, and simplify things for other devs working with Thebes/Moz2D.)

Ensuring that all the gfxContexts that might be passed to SVG code would be backed by a DrawTarget turned out to be quite a task. I started this work when relatively few gfxContexts had been converted to wrap a DrawTarget, so unsurprisingly things were a bit rough, and I tripped over several Moz2D bugs at this point. Mostly, though, the headaches were caused by the amount of code that assumed gfxContexts wrapped, and could provide it with, a gfxASurface/cairo_surface_t/platform library object, possibly getting or passing those objects from/to seemingly far corners of the Mozilla code. Particularly challenging was converting the image code, where the sources and destinations of gfxASurfaces turned out to be so far reaching that the code had to be converted incrementally in 34 separate bugs. Doing this without temporary performance regressions was tricky.

Besides preparing the ground for the SVG conversion, this work resulted in a decent number of performance improvements in its own right.

Stage 2 - Converting the SVG code to Moz2D

Converting the SVG code to Moz2D was a lot more than a simple case of switching calls from one graphics API to another. The stateful context provided by an API like Thebes or cairo allows consumer code to set context state (for example, fill pattern or anti-alias mode) at points in the code that can seem far removed from the other code that takes an action (for example, filling a path) relying on that state having been set. The SVG code made use of this a lot, since in many cases (for example, when passing things through for callbacks) it simplified the code to only pass a context rather than a context plus some state to set.

This wouldn't have been all that bad if it weren't for another fundamental difference between Thebes/cairo and Moz2D -- in Moz2D, paths and patterns are relative to user-space, whereas in Thebes/cairo they are relative to device-space. With Thebes we could set a path/pattern and then change the transform before drawing (perhaps, say, to apply a clip in a different space) and the position of the path/pattern would be unaffected, whereas with Moz2D such a transform change would change (and thus break) the rendering. This, incidentally, was why the SVG code was expected to be the hardest area to switch to Moz2D. Partly for historic reasons, and partly because some of the features that SVG supports lend themselves to it, the SVG code did a lot of setting state, changing transforms, setting some more state and then drawing. Often the complexity of the code made it difficult to figure out which code could be setting relevant state before a transform change, requiring more involved refactoring. On the plus side, sorting this out has made parts of the code significantly easier to understand, and it is something I've wanted to find the time to do for years.

Benefits and next steps

To continue reading about the performance benefits of the conversion of the SVG code and some possible next steps continue to part 2.

Giorgio MaoneAvast, you're kidd... killing me - said NoScript >:(

If NoScript keeps disappearing from your Firefox, Avast! Antivirus is likely the culprit.
It's gone berserk and is mass-deleting add-ons without warning.
I'm currently receiving tons of reports from confused and angry users.
If the antivirus is dead (as I've been preaching for 7 years), it looks like it's not dead enough yet.

Christian HeilmannWhat I am looking for in a guest writer on this blog

Simple: go try guest writing someplace else. This is my personal blog and if I am interested in something, I come to you and do it interview style in order to point to your work or showcase something amazingly cool that you have done.

anteater-sound-of-music

Please, please, please with cherry on top, stop sending me emails like this one:

Hi,

I’m {NAME}, a freelance writer/education consultant. I found “Christian Heilmann” on a Google search and thought I would contact you to see if you would like to work with me. I own a website on Job Application Service that I’m currently promoting for myself. I thought we could benefit each other somehow? If you are interested, I’d be happy to write a very high-quality article for your site and get a couple permanent links from it? While your website is benefiting from my high-quality article, I’m getting links from your site, making this proposition mutually beneficial.
Shall I write an article that matches your niche and send it across for your review or do you need me to write on a particular topic that interests you and your readers, I’m open to any topic, thoughts please?
If this does not interest you, I am sorry to have bothered you. Have a good day! If this does great I hope we can build a long-term business relationship together! If you wish to have a chat on the phone please let me know your phone number and when a good time to call is :) If you’d like, I can share samples with you.
Regards,
{FIRSTNAME}

I am very happy you know how to enter a name in Google and find the blog of that person. That’s a good start. Nobody got hurt, you didn’t overdo it with the research or spend too much effort before asking for my phone number and pointing out just how much you would get out of this “mutually beneficial relationship”. Seriously, I would love to be a fly on the wall when you try dating.

I’ve worked hard on this blog, that’s why it has some success or is at least found. Go work on yours yourself. That’s how it should be. A blog is you. Just like this one is mine.

Gervase MarkhamBMO show_bug Load Times 2x Faster Since January

The load time for viewing bugs on bugzilla.mozilla.org has got 2x faster since January. See this tweet for graphical evidence.

If you are looking for a direction in which to send your bouquets, glob is your man.

David Rajchenbach TellerThe Future of Promise

If you are writing JavaScript in mozilla-central or in an add-on, or if you are writing WebIDL code, by now, you have probably made use of Promise. You may even have noticed that we now have several implementations of Promise in mozilla-central, and that things are moving fast, and sometimes breaking.
At the moment, we have two active implementations of Promise: Promise.jsm and DOM Promise (as well as a little code using an older, long-deprecated implementation of Promise).
This is somewhat confusing, but the good news is that we are working hard at making it simpler and moving everything to DOM Promise.

General Overview

Many components of mozilla-central have been using Promise for several years, way before a standard was adopted, or even discussed. So we had to come up with our implementation(s) of Promise. These implementations were progressively folded into Promise.jsm, which is now used pervasively in mozilla-central and add-ons.
In parallel, Promise was specified, submitted for standardisation, implemented in Firefox, and finally standardised. This second implementation is the one we call DOM Promise, and it is starting to be used in many places on the web.
Having two implementations of Promise with the same feature set doesn’t make sense. Fortunately, Promise.jsm was designed to match the API of Promise that we believed would be standardised, and was progressively refactored and extended to follow these developments, so both APIs are almost identical.
Our objective is to move entirely to DOM Promise. There are still a few things that need to happen before this is possible, but we are getting close. I hope that we can get there by the end of 2014.

Missing pieces

Debugging and testing

At the moment, Promise.jsm is much better than DOM Promise in two aspects:
  • it is easier to inspect a promise from Promise.jsm for debugging purposes (not anymore, things have been moving fast while I was writing this blog entry);
  • Promise.jsm integrates nicely in the test suite, to make sure that uncaught errors are reported and cause test failures.
On both fronts, we are hard at work bringing DOM Promise to feature parity with Promise.jsm and then some (bug 989960, bug 1083361). Most of the patches are in the pipeline already.

API differences

  • Promise.jsm offers an additional function Promise.defer, which didn’t make it to standardization.
This function may easily be written on top of DOM Promise (see the sketch below), so this is not a hard blocker. We are going to add this function to a new module, `PromiseUtils.jsm`.
  • Also, there is a bug in DOM Promise that gives it slightly unexpected behavior in a few edge cases. This should not hit developers who use DOM Promise as expected, but it might surprise people who know the exact scheduling algorithm and expect it to be consistent between Promise.jsm and DOM Promise.

Oh, wait, that’s fixed already.
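As for the first difference, here is a minimal sketch of how a defer()-style helper can be written on top of DOM Promise (illustrative only; this is not the actual PromiseUtils.jsm code):

// Minimal sketch: a defer() helper built on top of DOM Promise.
// Not the actual PromiseUtils.jsm implementation.
function defer() {
  let resolve, reject;
  let promise = new Promise(function(res, rej) {
    // The executor runs synchronously, so resolve/reject are
    // captured before defer() returns.
    resolve = res;
    reject = rej;
  });
  return { promise: promise, resolve: resolve, reject: reject };
}

// Usage: the deferred can be resolved from outside the executor.
let deferred = defer();
deferred.promise.then(function(value) { console.log(value); });
deferred.resolve(42);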

Wrapping it up

Once we have done all of this, we will be able to replace Promise.jsm with an empty shell that defers all implementations to DOM Promise. Eventually, we will deprecate and remove this module.

As a developer, what should I do?

For the moment, you should keep using Promise.jsm, because of the better testing/debugging support. However, please do not use Promise.defer. Rather, use PromiseUtils.defer, which is strictly equivalent but is not going away.
We will inform everyone once DOM Promise becomes the right choice for everything.
If your code doesn’t use Promise.defer, migrating to DOM Promise should be as simple as removing the line that imports Promise.jsm in your module.
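As a rough sketch (the exact import style varies between modules), that migration amounts to deleting a single line:

// Before: needed to get Promise from Promise.jsm.
Components.utils.import("resource://gre/modules/Promise.jsm");

// After: delete the line above. Code using new Promise(...), Promise.resolve(),
// Promise.all() and friends keeps working against DOM Promise, since the two
// APIs are almost identical.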

Doug BelshawNative apps, the open web, and web literacy

In a recent blog post, John Gruber argues that native apps are part of the web. This was in response to a WSJ article in which Christopher Mims stated his belief that the web is dying; apps are killing it. In this post, I want to explore the relationship between native apps and web literacy. This is important as we work towards a new version of Mozilla’s Web Literacy Map. It’s something that I explored preliminarily in a post earlier this year entitled What exactly is ‘the mobile web’? (and what does it mean for web literacy?). This, again, was in response to Gruber.

Native app

This blog focuses on new literacies, so I’ll not be diving too much into technical specifications, etc. I’m defining web literacy in the same way as we do with the Web Literacy Map v1.1: ‘the skills and competencies required to read, write and participate on the web’. If the main question we’re considering is are native apps part of the web? then the follow-up question is and what does this mean for web literacy?

Defining our terms

First of all, let’s make sure we’re clear about what we’re talking about here. It’s worth saying right away that 'app’ is almost always used as a shorthand for 'mobile app’. These apps are usually divided into three categories:

  1. Native app
  2. Hybrid app
  3. Web app

From this list, it’s probably easiest to describe a web app:

A web application or web app is any software that runs in a web browser. It is created in a browser-supported programming language (such as the combination of JavaScript, HTML and CSS) and relies on a web browser to render the application. (Wikipedia)

It’s trickier to define a native app, but the essence can be seen most concretely in Apple’s ecosystem, which includes iOS and the App Store. Developers use a specific programming language and work within constraints set by the owner of the ecosystem. In return, native apps get privileged access to all of the features of the mobile device.

A hybrid app is a native app that serves as a 'shell’ or 'wrapper’ for a web app. This is usually done for the sake of convenience and, in some cases, speed.

The boundary between a native app and a web app used to be much more clear and distinct. However, the lines are increasingly blurred. For example:

  • APK files (i.e. native apps) can be downloaded from the web and installed on Android devices.
  • Developments as part of Firefox OS mean that web technologies can securely access low-level functionality of mobile devices (e.g. camera, GPS, accelerometer).
  • The specifications for HTML5 and CSS3 allow beautiful and engaging web apps to be used offline.

Web literacy and native apps

As a result of all this, it’s probably easier these days to differentiate between a native app and a web app by talking about ecosystems and silos. Understanding it this way, a native app is one that is built specifically using the technologies and is subject to the constraints of a particular ecosystem. So a developer creating an app for Apple’s App Store would have to go through a different process and use a different programming language than if they were creating one for Google’s Play Store. And so on.

Does this mean that we need to talk of a separate 'literacy’ for each ecosystem? Should we define 'Google literacy’ as the skills and competencies required to read, write and participate in Google’s ecosystem? I don’t think so. While there may be variations in the way things are done within the different ecosystems, these procedural elements do not constitute 'literacy’.

What we’re aiming for with the Web Literacy Map is a holistic overview of the skills and competencies people require when using the web. I think at this juncture we’ve got a couple of options. The first would be to define 'the web’ more loosely to really mean 'the internet’.

This is John Gruber’s preferred option. He thinks we should focus less on web browsers (i.e. HTML) and more on the connections (i.e. HTTP). For example, in a 2010 talk he pointed out a difference between 'web apps’ and 'browser apps’. His argument rested on a technical point, which he illustrated with an example. When a user scrolls through their timeline using the Twitter app for iPhone, they’re not using a web browser, but they are using HTTP technologies. This, said Gruber, means that ecosystems such as Apple’s and the web are not in opposition to one another.

While this is technically correct, it’s a red herring. HTML does matter because the important thing here is the open web. Check out Gruber’s sleight of hand in this closing paragraph:

Arguments about “open” and “closed” often devolve into unresolvable cross-talk where the two sides have different definitions of what open and closed really mean. But the weird thing about a truly open platform is that its openness allows closed things to be built on top of it. In broad strokes, that’s why GNU/GPL software isn’t “open” in the way that BSD software is (and why Richard Stallman outright rejects the term “open source”). If you expand your view of “the web” from merely that which renders inside the confines of a web browser to instead encompass all network traffic sent over HTTP/S, the explosive growth of native mobile apps is just another stage in the growth of the web. Far from killing it, native apps have made the open web even stronger.

I think Gruber needs to read up on enclosure and the Commons. To use a 16th-century English agricultural metaphor, the important thing isn’t that the grass is growing in the field, it’s that it’s been fenced off and people are excluded.

A way forward

A second approach is to double down on what makes the web different and unique. Mozilla’s mission is to promote openness, innovation & opportunity on the web, and the Web Literacy Map is a platform for doing this. Even if we don’t tie development of the Web Literacy Map explicitly to the Mozilla manifesto, it’s still a key consideration. Therefore, when we’re talking about 'web literacy’ it’s probably more accurate to define it as 'the skills and competencies required to read, write and participate on the open web’.

What do we mean by the 'open web’? While Tantek Çelik approaches it from a technical standpoint, I like Brad Neuberg’s (2008) focus on the open web as a series of philosophies:

Decentralization - Rather than controlled by one entity or centralized, the web is decentralized – anyone can create a web site or web service. Browsers can work with millions of entities, rather than tying into one location. It’s not the Google or Microsoft Web, but rather simply the web, an open system that anyone can plug into and create information at the end-points.
Transparency - An Open Web should have transparency at all levels. This includes being able to view the source of web pages; having human-readable network identifiers, such as URLs; and having clear network entry points, such as HTTP and REST exposes.
Hackability - It should be easy to lash together and script the different portions of this web. MySpace, for example, allows users to embed components from all over the web; Google’s AdSense, another example, allows ads to be integrated onto arbitrary web pages. What would you like to hack together, using the web as a base?
Openness - Whether the protocols used are de facto or de-jure, they should either be documented with open specifications or open code. Any entity should be able to implement these standards or use this code to hook into the system, without penalty of patents, copyright of standards, etc.
From Gift Economies to Free Markets - The Open Web should support extreme gift economies, such as open source and Wikis, all the way to traditional free market entities, such as Amazon.com and Google. I call this Freedom of Social Forms; the tent is big enough to support many forms of social and economic organization, including ones we haven’t imagined yet.
Third-Party Integration - At all layers of the system third-parties should be able to hook into the system, whether creating web browsers, web servers, web services, etc.
Third-Party Innovation - Parties should be able to innovate and create without asking the powers-that-be for permission.
Civil Society and Discourse - An open web promotes both many-to-many and one-to-many communication, allowing for millions of conversations by millions of people, across a range of conversation modalities.
Two-Way Communication - An Open Web should allow anyone to assume three different roles: Readers, Writers, and Code Hackers. Readers read content, Writers write content, and Code Hackers hack new network services that empower the first two roles.
End-User Usability and Integration - One of the original insights of the web was to bind all of this together with an easy to use web browser that was integrated for ease of use, despite the highly decentralized nature of the web. The Open Web should continue to empower the mainstream rather than the tech elite with easy to use next generation browsers that are highly usable and integrated despite having an open infrastructure. Open should not mean hard to use. Why can’t we have the design brilliance of Steve Jobs coupled with the geek openness of Steve Wozniak? Making them an either/or is a false dichotomy.

Conclusion

The Web Literacy Map describes the skills and competencies required to read, write and participate on the open web. But it’s also prescriptive. It’s a way to develop an open attitude towards the world:

Open is a willingness to share, not only resources, but processes, ideas, thoughts, ways of thinking and operating. Open means working in spaces and places that are transparent and allow others to see what you are doing and how you are doing it, giving rise to opportunities for people who could help you to connect with you, jump in and offer that help. And where you can reciprocate and do the same.

Native apps can militate against the kind of reciprocity required for an open web. In many ways, they are the 21st-century enclosure of the commons. I believe that web literacy, as defined and promoted through the Web Literacy Map, should not consider native apps part of the open web. Such apps may be built on top of web technologies, and they may link to the open web, but they are something qualitatively different. Those who want to explore what reading, writing and participating mean in closed ecosystems have other vocabularies – provided by media literacy, information literacy, and digital literacy – with which to do so.


Comments? Questions? Direct them here: doug@mozillafoundation.org or discuss this post in the #TeachTheWeb discussion forum

Wladimir Palant"Unloading" frame scripts in restartless extensions

The big news is: e10s is coming to desktop Firefox after all, and it has even been enabled in the nightly builds already. And while most of the time add-ons continue working without any changes, this doesn’t always work correctly. Plus, using the compatibility shims to fake a single-process environment might not be the most efficient approach. So there is reason enough for add-on authors to look into the dreaded and underdocumented message manager and start working with frame scripts again.

I tried porting a simple add-on to this API. The good news: the API hasn’t changed since Firefox 17, so the changes will be backwards-compatible. And the bad news? Well, there are several.

  • Bug 1051238 means that frame scripts are cached — so when a restartless add-on updates, the old frame script code will still be used. You can work around that by randomizing the URL of your frame script (e.g. add "?" + Math.random() to it).
  • Bug 673569 means that all frame scripts run in the same shared scope prior to Firefox 29, so you should make sure there are no conflicting global variables. This can be worked around by wrapping your frame script in an anonymous function.
  • Duplicating the same script for each tab (originally there was only a single instance of that code) makes me wonder about the memory usage here. Sadly, I don’t see a way to figure that out. I assume that about:memory shows frame scripts under the outOfProcessTabChildGlobal entry. But due to the shared scope there is no way to see individual frame scripts there.
  • Finally, you cannot unload frame scripts if your restartless extension is uninstalled or disabled. messageManager.removeDelayedFrameScript() will merely make sure that the frame script won’t be injected into any new tabs. But what about tabs that are already open?

Interestingly, it seems that Mark Finkle was the only one to ask himself that question so far. The solution is: if you cannot unload the frame script, you should at least make sure it doesn’t have any effect. So when the extension unloads it should send a "myaddon@example.com:disable" message to the frame scripts and the frame scripts should stop doing anything.

So far so good. But isn’t there a race condition? Consider the following scenario:

  • An update is triggered for a restartless extension.
  • The old version is disabled and broadcasts “disable” message to the frame scripts.
  • The new version is installed and starts its frame scripts.
  • The “disable” message arrives and disables all frame scripts (including the ones belonging to the new extension version).

The feedback I got from Dave Townsend says that this race condition doesn’t actually happen and that loadFrameScript and broadcastAsyncMessage are guaranteed to affect frame scripts in the order called. It would be nice to see this documented somewhere; until then it is an implementation detail that cannot be relied on. The work-around I found: since the frame script URL is randomized anyway (due to bug 1051238), I can send it along with the “disable” message:

messageManager.broadcastAsyncMessage("myaddon@example.com:disable", frameScriptURL);

The frame script then processes the message only if the URL matches its own URL:

addMessageListener("myaddon@example.com:disable", function(message)
{
  if (message.data == Components.stack.filename)
  {
    ...
  }
});
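
For completeness, the loading side that produces the randomized frameScriptURL used above, together with the anonymous-function wrapper mentioned earlier, might look something like this (the chrome:// URL is a made-up example):

// In the add-on: randomize the URL to defeat the frame script cache (bug 1051238).
var frameScriptURL = "chrome://myaddon/content/frame-script.js?" + Math.random();
messageManager.loadFrameScript(frameScriptURL, true);

// In frame-script.js: wrap everything in an anonymous function so nothing leaks
// into the shared frame script scope (bug 673569, prior to Firefox 29).
(function() {
  // ... the actual frame script code goes here ...
})();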

Raniere SilvaMathML November Meeting

MathML November Meeting

Note

Sorry for the delay in writing this.

This is a report about the Mozilla MathML November IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

The next meeting will be on January 7th at 8pm UTC (check the time at your location here). Please add topics to the PAD.

Note

Yes. Our December meeting was cancelled. =(

Read more...

Christie KoehlerHappy 10th Birthday, MozillaWiki!

Last Monday, Firefox turned 10 years old. Thunderbird turns 10 on 7 December.

This week we celebrate another birthday: MozillaWiki turns 10 on Wednesday, 18 November!

I’m immensely proud of our wiki, its ten year history, and of all the work Mozillians do to make MozillaWiki a hub of collaboration and a living memory for the Mozilla Project.

To show our appreciation for your efforts over the last decade, the MozillaWiki team has created a 10th Birthday badge.

MozillaWiki 10th Birthday Badge

All you need to do to join in the celebration and claim the badge is log in to MozillaWiki. Once you’ve done that, you’ll see a link to claim the badge at the top of the page. Don’t have a MozillaWiki account? No worries! Create one during this Birthday celebration and you can claim the badge too.

A bit of MozillaWiki history

Before I talk about all the good work we’ve done, and what we have planned for the remainder of this year and beyond, let’s take a quick stroll through the last 10 years. Thank you Internet Archive for hosting these snapshots of the wiki!

July 2004

The earliest snapshot I could find of the domain wiki.mozilla.org was from July 2004. It looks like we were hosting separate wiki installations, which may or may not have been Mediawiki.

wiki.mozilla.org July 2004
wiki.mozilla.org/GeckoDev August 2004

November-December 2004

According to WikiApiary, the current installation of MozillaWiki was created on 18 November 2004. The closest snapshot to this date in the Internet Archive is 11 December 2004:

MozillaWiki December 2004

April 2005

By April 2005, the wiki had been upgraded, had a new theme (Cavendish), and had started using Apache rewrite rules to make the url pretty (e.g. no index.php).

Mozilla Wiki, April 2005

August 2008

Three years later, in April 2008, we were still rockin’ the Cavendish theme and the Main Page had some more content, including links to the weekly project call that continues to this day.

MozillaWiki August 2008

December 2010

We started tracking releases in December 2007 (see version). Here’s what the Releases page looked like in December 2010.

MozillaWiki December 2010 Releases page

May 2011

In May 2011, after 6 years of service, Cavendish was retired as the default skin and replaced with GMO.

MozillaWiki May 2011 – New GMO skin

July 2012

A year later, July 2012, MozillaWiki looked much the same.

MozillaWiki July 2012

July 2013

By July 2013, the Main Page was edited to include a few recent changes, but otherwise looked very similar.

MozillaWiki July 2013

August 2014

By August 2014, the revitalization of the MozillaWiki was in full swing and we were preparing for a major update to both the skin (GMO to Vector) as well as the underlying software (Mediawiki 1.19 to 1.23). We also had made significant changes to the content of the Main Page based on results of our recent user survey.

MozillaWiki August 2014

November 2014

Here’s what the wiki looks like today, 17 November, the day before its birthday. We’re running a slightly modified Vector skin and the Mediawiki 1.23.x branch.

MozillaWiki November 2014

MozillaWiki today

Pages, visitors and accounts

As of 16 November, MozillaWiki has 115,912 pages, all public, and nearly 10k uploaded files. About 630 people per month, on average, log in and make contributions to the wiki. These include both staff and volunteers. Want to track these stats yourself? Visit Special:Statistics.

The number of daily visitors ranges from 9k to 30k, with an average likely around 13-14k. Who are these visitors? According to our analytics software, we get visitors from all over the world, with the greatest concentration being from the US, Canada and the UK.

The wiki has over 330,000 registered user accounts. I estimate that about 300k of these are inactive spam accounts, so the real number for user accounts is probably closer to 30,000.

What kind of content is hosted on MozillaWiki?

All kinds of project activity is coordinated and recorded on the wiki. This includes activity related to our products: Firefox, Firefox OS, WebMaker, etc. It also includes community activities such as Reps, Firefox Student Ambassadors, etc. Most project activities have some representation on MozillaWiki. People also use the wiki to track projects and goals on an individual level. In this regard, it served as a place for Mozillians’ profiles long before we had mozillians.org.

The MozillaWiki isn’t set up for localized content right now, but this hasn’t stopped our localization communities from translating content. Every day a significant portion of account requests come from volunteers in regional communities and are often in a language other than English. In 2015, depending on the resources available, we plan to significantly improve support for localized content on MozillaWiki.

2014 Accomplishments

This year we’ve made significant progress towards revitalizing MozillaWiki.

Accomplishments include:

  • Forming a team of dedicated volunteers to lead a revitalization effort.
  • Creating an About page for MozillaWiki that clarifies its scope and role in the project, including what is appropriate content and how to report issues.
  • Fixing years-old bugs that cause significant usability problems (table sorting, unavailability of Wikieditor, etc.).
  • Identifying a product owner for MozillaWiki and creating a Module for it, led by a mix of staff and contributors.
  • Halting the creation of new spam and cleaning up significant amounts of spam content.
  • Upgrading Mediawiki from 1.19.x branch to 1.23.x branch AND changing the default theme without any significant downtime or disruptions to users.
  • Organizing a user survey and using those results to guide much of our roadmap, including the redesign of the Main Page and sidebar navigation.

Thank you everyone who has been a part of this work!

There’s still plenty to do, and many ways to contribute

We’ve made so much progress on the technical and infrastructure debt of MozillaWiki that we’re now ready to focus on improving content and collaboration mechanisms.

How can I help?

There are many ways you can help, and we have contribution opportunities for all kinds of skill levels and time commitments.

We’re working on documenting and organizing these contribution opportunities here: https://wiki.mozilla.org/MozillaWiki:Contribute so check that page often.

Join our mailing-list or community call

If you’d like to help us organize those opportunities, or have other ideas for improving the wiki, join one of our MozillaWiki Team communication channels or one of our community meetings. These meetings are held twice a month on Tuesday at 8:30 PST / 15:30 UTC. Our next meeting is 16 December. All who are interested in contributing to the wiki are welcome.

In the meantime, log in to MozillaWiki and celebrate its birthday with us by claiming the birthday badge!

Christian HeilmannWe have a massive recruitment problem

A few months ago, I flew over to see my parents for their 50th wedding anniversary. As some of you may know, I have a humble background. My dad was a coal miner and then factory worker and my mother has always been a home maker / housewife. I am the only one in my family that went to college and I skipped university as the thing to do was to make money in a job as soon as you are 18.

It was humbling and almost embarrassing to have conversations with my family. Half of them are either unemployed or worried about their jobs. The rest are unhappy in their jobs but see no way to change that, as they need the security. Finding joy in family life and leisure time is more important than enjoying the work. A job is a job; you got to do what you got to do, and all that.

Futurama: you gotta do what you gotta do

That’s why it feels surreal to come back into “our world” and get offers for jobs I don’t want. A lot of them. Some with ridiculous amounts of money offered and most with perks that would make my family blush and sense a trap.

Why are we not recruiter compatible and vice versa?

We’re lucky to be that sought after and yet it seems there is no happy symbiosis between us and recruiters. On the contrary, as soon as you even mention recruiting most of us techies start ranting.

I feel uneasy doing that. I feel like an arrogant ass and I feel that we should be more grateful about the opportunities we get. The relationship of recruiter and job seeker should be high-fives and unicorns. Instead there is a massive sense of dread: “Oh god, another job offer, how tiring”.

There are reasons for our dismay:

  • A lot of headhunters/recruiters work on a commission basis and are measured by how many contacts they had that day. This leads to a scattergun approach and you get offers that are not “you” at all.
  • Many recruiters seem to just look for keywords and then send the offer out to you. That’s why you get Java positions when you have a JavaScript background. Just like you would send car mechanic jobs to a carpet expert.
  • Others go for company names. A great example was this recruiter trying to hire someone’s dog as a Java/Python developer.
  • Many recruiting sites are very pushy to get you into their database to show potential hiring companies just how many job searchers they have. This leads to very old and outdated profiles and you get offers for jobs you’ve done years ago. Basically they don’t want to find you a job, they want you as an ad.
  • People write ridiculous job descriptions and send them to us. In the past I wrote up what kind of people I try to hire, and once it went through HR and recruitment review, something completely ridiculous ended up online. You’ve seen those: asking people for two degrees but to be no older than 20, seven years’ experience in a half-year-old technology, and similar confusing points.
  • There is probably nothing more intrusive to someone who feels at home online than being called on the phone. Recruiters, however, seem to see the “personal touch” as the most important thing.

On the other side of this issue, we are not innocent either:

  • Instead of telling people why we didn’t want their offer, we just ignore them. There is no learning on either side.
  • We love our own tools and are not too interested in changing that. Every recruitment department I worked with needed a CV in a document format for filing and keeping. Instead of having one of those at hand we love to create online CVs and portfolios or point people to our GitHub account as “real people who would hire me find all they need there”. This is navel gazing and arrogant. If I want to go on a bus, I need a bus ticket. A macaroni picture with glitter on it saying “most amazing responsive bus ticket” will not get me on there. Have the tool for the task.
  • We don’t keep our presence up-to-date. If you’re not seeking, say so on LinkedIn. Have a template to send back to recruiters telling them “thanks, but no.”
  • We also shouldn’t create profiles of our dog on LinkedIn. This is a professional tool, if we don’t use these in a professional manner we shouldn’t be surprised that they go to the dogs.
  • Keep your skills up-to-date. If you never ever want to work with a certain product any longer, remove it from your online presence. That way keyword searchers don’t find you.

We need to communicate better

I feel there is a massive waste going on and an accumulation of frustration on both sides. We need to get better at helping one another to make this the natural partnership it should be. I feel terrible hearing about friends outside our world who send out hundreds of applications and don’t get answers, whilst we complain about people trying to offer us jobs. It feels almost unreal.

There are a few good ideas around and there is a start to clean this mess up. Joblint is a tool that comes to mind. It is an analysing tool that takes job descriptions and allows you to

“Test tech job specs for issues with sexism, culture, expectations, and recruiter fails”.

A lot of miscommunication could be avoided simply by using that.

Considering giving a helping hand

Maybe I should do something about this and use my time off to reach out and try to change something. I wonder if a workshop for recruiters about issues to avoid would be of interest? In any case, let’s try to be more understanding. Recruiters do their jobs the same way we do ours. By understanding their drives and goals, we can make both of our lives easier. By being arrogant and coming across as divas, we shouldn’t be surprised if job descriptions start calling out for rockstars, ninjas, gurus and mavens.

Let’s highlight the great experiences we had, and share what worked. Maybe that could be the lever we need to crack this nut open.

Josh AasLet’s Encrypt

Today we announced a project that I’ve been working on for a while now – Let’s Encrypt. This is a new Certificate Authority (CA) that is intended to be free, fully automated, and transparent. We want to help make the dream of TLS everywhere a reality. See the official announcement blog post I wrote for more information.

Eric Rescorla and I decided to try to make this happen during the summer of 2012. We were trying to figure out how to increase SSL/TLS deployment, and felt that an innovative new CA would likely be the best way to do so. Mozilla agreed to help us out as our first major sponsor, and by May of 2013 we had incorporated Internet Security Research Group (ISRG). By September 2013 we had merged a similar project started by EFF and researchers from the University of Michigan into ISRG, and submitted our 501(c)(3) application. Since then we’ve put a lot of work into ISRG’s governance, found the right sponsors, and put together the plans for our CA, Let’s Encrypt.

I’ll be serving as ISRG’s Executive Director while we search for more permanent leadership. During this time I’ll remain with Mozilla.

Too many people to thank for their help here, many of whom work for our sponsors, but I want to call out Eric Rescorla (Mozilla) and Kevin Dick (Right Side Capital Management) in particular. Eric was my original co-conspirator, and Kevin has spent innumerable hours with me helping to create partnerships and the necessary legal infrastructure for ISRG. Both are incredible at what they do, and I’ve learned a lot from working with them.

Now it’s time to finish building the CA – lots of software to write, hardware to install, and auditing to complete. If you have relevant skills, we hope you’ll join us.


Jean-Marc ValinDaala: Perceptual Vector Quantization (PVQ)

Here's my new contribution to the Daala demo effort. Perceptual Vector Quantization has been one of the core ideas in Daala, so it was time for me to explain how it works. The details involve lots of maths, but hopefully this demo will make the general idea clear enough. I promise that the equations in the top banner are the only ones you will see!

Read more!

Andreas GalFirefox and Cisco’s Project Squared

Yesterday I was at Cisco’s Collaboration Summit where Cisco’s CTO for Collaboration Jonathan Rosenberg and I showed Cisco’s new WebRTC-based Project Squared collaboration service running in Firefox, talking to a Cisco Collaboration Desktop endpoint without requiring transcoding.

This demo is the culmination of a year long collaboration between Cisco and Mozilla in the WebRTC space. WebRTC enables voice and video communication directly from within the browser. This means that anyone can build a video conferencing service just using WebRTC and HTML5 standards, without the need for the user to download a plugin or a native application.

Cisco is not only developing WebRTC-based services that run on the Web. They have  also joined a growing number of organizations and companies helping Mozilla to build a better Web. Over the last year Cisco has contributed numerous technical improvements to Mozilla’s WebRTC implementation, including support for screen sharing and the H.264 video codec. These features are now shipping in Firefox. We intend to use them in the future in Mozilla’s own Hello communication service that we are bringing to Firefox.

Cisco’s contributions to the Web go beyond just advancing Firefox. For the last three years the IETF, the standards body defining the networking protocols for WebRTC, has been unable to agree on a mandatory video codec for WebRTC, putting ubiquitous interoperability in doubt.

One of the major blockers to coming to a consensus was that H.264 is subject to royalty-bearing patents, which made it problematic for open source projects such as Firefox to deploy it. To break this logjam, Cisco open-sourced its H.264 code base and made it available in plugin form. Any product  — not just Firefox — can download the plugin and use it to enable H.264 without paying any royalties.

This collaboration between Mozilla and Cisco enabled Firefox to add support for H.264 in WebRTC, and also played a significant role in the compromise reached at the last IETF meeting to adopt both H.264 and VP8 as mandatory video codecs for WebRTC in browsers. As a result of this compromise, in the future all browsers should match the capabilities already available in Firefox.

Mozilla will continue to work on advancing Firefox and the Web, and we are excited to have strong partners like Cisco who share our commitment to the open Web as a shared technology platform.


Filed under: Mozilla Tagged: Mozilla, WebRTC

Alexander SurkovAccessibility goes into DOM

The PWFG group has suggested two new methods for the DOM Element interface. These methods reflect the role and name accessibility concepts, and were named computedRole and computedLabel.
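
In author-facing terms the proposal boils down to something like this (a sketch of the proposed API; these methods are not something you can rely on in shipping browsers):

// Hypothetical usage of the proposed Element methods.
var el = document.querySelector("div[role='button'][aria-label='Close']");
console.log(el.computedRole);   // "button" -- the computed accessible role
console.log(el.computedLabel);  // "Close"  -- the computed accessible name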

I have a bunch of issues with the approach that I wanted to outline here, just to keep things in one place.

The purpose


I've been told that the primary reason is testing purposes, but having role and name only is not enough to run UAIG tests or any accessibility automation tool, since those would require other accessibility properties.

They also say that it might be used for non-accessibility purposes. I realize that the semantics ARIA adds can be used by non-assistive technologies. In Firefox we have a large number of non-AT consumers, but in most cases we don't have a good idea what they are for. So I don't really have the use case, and thus it's hard to say whether accessible role and name alone work well for non-a11y purposes.

As for assistive technologies, I think they also need a much larger API.

Blowing the DOM


As I said above, anything useful would require extra accessible properties: accessible description, states, relations, the ability to navigate the hierarchy, etc. That means that sooner or later the Element interface would have to be changed to a great extent. Check out AtkObject to get an idea of the possible changes.

In the beginning, accessibility interfaces were built on top of the DOM, and later they were turned into full APIs. Now we are facing the reverse process: accessibility APIs are moving back into the DOM. I'm not sure that's a good idea, because accessibility tasks are very specific, and an accessibility API might not be suitable for the common needs of web apps.

Restrictions


Not every semantically meaningful piece on the screen has a DOM node; for example, list bullets don't necessarily have DOM elements associated with them. So an Element-based accessibility API is too restrictive to fit the requirements of assistive technologies.

Performance


Last but not least is the performance issue. In most browsers the accessibility engine is kept separate and only gets running on demand. If accessibility is merged with the DOM, then nothing tells the author that these methods may trigger heavy accessibility computations and make their app slower. Surely browsers will learn how to get smarter, but the approach will have a perf hit either way.

What's it going to be then, eh?


The idea is to provide a separate accessibility interface. If you like, it can be done in parts: for example, introduce role and name only in the first round, the same as the original proposal says, and later think about adding all the other properties.

This idea was welcomed initially, but it was later rejected as being too complex and accessibility-centric. But - and that's the most important thing - it doesn't have the disadvantages the Element approach has.

Andreas GalLet’s Encrypt: One more step on the road to TLS Everywhere

Principle 4 of the Mozilla Manifesto states: Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional.

Unfortunately treating user security as optional is exactly what happens when sites let users connect over insecure HTTP rather than HTTP over TLS (HTTPS). What insecure means here is that your network traffic is totally unprotected and can be read and/or modified by anyone who shares a network with you, including random people sharing Starbucks or airport WiFi.

One of the biggest reasons that web sites don’t deploy TLS is the requirement to get a digital certificate — a cryptographic credential which allows a user’s browser to know it’s talking to the right site and not to an attacker. Certificates are issued by Certificate Authorities (CAs) often using a clumsy and error-prone manual process. A further disincentive to deployment is that  most CAs charge a fee for their certificates, which not only prices some people out of the market but also interferes with automatic issuance and renewal.

Mozilla, along with our partners Akamai, Cisco, EFF, and Identrust decided to do something about this situation. Together, we’ve formed a new consortium, the Internet Security Research Group, which is starting Let’s Encrypt, a new certificate authority designed to bring security to everyone. Let’s Encrypt is built around a few key principles:

  • Free: Certificates will be offered at no cost.
  • Automatic: Certificates will be issued via a public and published API, allowing Web server software to automatically obtain new certificates at installation time and without manual intervention.
  • Independent: No piece of infrastructure this important should be controlled by a single company. ISRG, the parent entity of Let’s Encrypt, is governed by a board drawn from industry, academia, and nonprofits, ensuring that it will be operated in the public interest.
  • Open: Let’s Encrypt will be publishing its source code and protocols, as well as submitting the protocols for standardization so that server software as well as other CAs can take advantage of them.

Let’s Encrypt will be issuing its first real certificates in Q2 2015. In the meantime, we have published some initial protocol drafts along with a demonstration client and server at: https://github.com/letsencrypt/node-acme and https://github.com/letsencrypt/heroku-acme. These are functional today and can be used to issue test certificates.

It’s been a long road getting here and we’re not done yet, but this is an important step towards a world with TLS Everywhere.


Filed under: Mozilla

Soledad PenadesTools for the 21st century musician—super abridged dotJS edition

I attended dotJS yesterday, where I gave a very short version of the talk I gave at Full Frontal the week before last (18 minutes versus 40).

The conference happened in a theatre and we were asked not to use bright background so I changed my slides to be darker and classier.

It didn’t really go as smoothly as I expected (a kernel panic a bit before the start of the talk, and I got nervous and distracted so I got more nervous and…), but I guess I can’t always WIN! It was fun to speak in French if only one line, though: Je suis très contente d’être parmi vous!–thanks to Thomas for the assistance in coming up with the perfect presentation line, and Guillaume and Sasha for listening to me repeat it until it resembled passable French!

While the video is being edited and released, here’s a sample in the form of slides: online, and their source code on GitHub.

It was fun to use CSS filters to invert the images so they would not be a big white block on top of a dark background. Yay CSS filters!

.filter-invert {
    filter: invert(100%) brightness(2);
}

Also, using them in transitions between slides. I discovered that I could blur between slides. Cinematic effects! (sorta, as I cannot get vertical/horizontal blur). But:

.bespoke-active.emphatic-text {
  filter: none;
}
.bespoke-inactive.emphatic-text {
  filter: blur(10px);
}

I use my custom plugin presentation-fullscreen for getting real fullscreen in my slides. It’s on npm:

npm install presentation-fullscreen --save

then just

require('presentation-fullscreen');

will add a new option to the contextual menu for making the whole body go fullscreen.

I shall soon write about this tip, how I use bespoke.js in general, and a couple of thoughts and ideas I had during the conference. Topics include (so I don’t forget): why a mandatory lack of anonymity is not the solution to doxxing, and the ideal talk length.


Mozilla Release Management TeamFirefox 34 release date moving to Dec 1/2

The Firefox 34 release date will move out one week from Nov 25 to Dec 1/2. This change impacts Firefox Desktop, Firefox for Android, Firefox ESR, and Thunderbird.
The purpose of this change is to allow for an additional week of stabilization during the 34 cycle.

Details of the change:

  • Release date change from Nov 25 to Dec 1/2 (need to determine the date that works best given the work week)
  • Merge date change from Tue, Nov 24 to Fri, Nov 28
  • Two additional desktop betas (10 and 11) will be added to the calendar this week on our usual beta build schedule (build Mon and Thu, release Tue and Fri)
  • One additional mobile beta (beta 11) will be added to the schedule.
    Note that mobile beta 10 will gtb on schedule on Mon.
    Mobile beta 11 will gtb on Thu with desktop in order to be ready early the following week.
  • RC builds will happen on Mon, Nov 24
Note that we are effectively moving into the 34 Beta cycle an extra week that we had previously added to the 35 Beta cycle. 35 will have a 7-week Aurora cycle instead of a 7-week Beta cycle.

Patrick FinchIs the Web dying?

This article may or may not be pay-walled, depending on how you arrive at it.   It is an exploration of the shift to apps.

The history of computing is companies trying to use their market power to shut out rivals, even when it’s bad for innovation and the consumer….That doesn’t mean the Web will disappear. Facebook and Google still rely on it to furnish a stream of content that can be accessed from within their apps. But even the Web of documents and news items could go away. Facebook has announced plans to host publishers’ work within Facebook itself, leaving the Web nothing but a curiosity, a relic haunted by hobbyists.

This is something I was getting at with my post yesterday: that advertising remains one of the Web’s unique selling points.  It is much more effective as an advertising platform than mobile apps are.  At the moment, the Internet giants extract an enormous amount of value from the content on the Web, using it to drive engagement with their services.  The Web has very low barriers to entry, but economic sustainability is difficult and the only proven revenue model appears to be advertising at scale.  The model needs liberating.

(Note: the source of this article, the Wall Street Journal, may appear to refute that, given it has a paywall, but I believe their model is essentially freemium and it isn’t clear to me what revenue they derive from subscription customers.)


Wil ClouserAltering large tables without bringing down your service

When we run ALTER statements on our big tables we have to plan ahead to keep from breaking whatever service is using the database. In MySQL, many times* a simple change to a column (say, from being a short varchar to being a text field) can read-lock the entire table for however long it takes to make the change. If you have a service using the table when you begin the query you'll start eating into your downtime budget.
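
As a concrete, hypothetical example of the kind of statement that can trigger this (the table and column names are made up):

-- widen a short varchar column into a text column; on a big table this
-- can read-lock the table for as long as the rebuild takes
ALTER TABLE reviews MODIFY COLUMN body TEXT;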

If you have a large enough site to have database slaves you'll have a double whammy: all reads will block on the master while it alters the table, and then, by default, the change will be replicated out to your slaves. Not only will they read-lock the table while they alter it, they will also pause any further replication until the change is done, potentially adding many more hours during which outdated data is returned to your service while replication catches up.

The good news is, in some situations, we can take advantage of having database slaves to keep the site at 100% uptime while we make time consuming changes to the table structure. The notes below assume a single master with multiple independent slaves (meaning, the slaves aren't replicating to each other).

Firstly, it should go without saying, but the client application needs to gracefully handle both the existing structure and the anticipated structure.

When you're ready to begin, pull a slave out of rotation and run your alter statement on it. When it completes, put the slave back into the cluster and let it catch up on replication. Repeat those steps for each slave. Then fail over one of the slaves to become the new master, pull the old master out of rotation, and run the alter statement on it. Once it has finished, put it back in the cluster as a slave. When the replication catches up you can promote it back to master and switch the temporary master back to a slave.
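
A minimal sketch of the per-slave step, assuming the host has already been pulled out of rotation at the application level. The table and column names are made up, and disabling binary logging for the session is an extra precaution so the statement isn't replicated again if this host is later promoted:

-- run on the de-rotated slave only
SET SESSION sql_log_bin = 0;  -- keep this ALTER out of the binary log
ALTER TABLE reviews MODIFY COLUMN body TEXT;
SET SESSION sql_log_bin = 1;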

At this point you should have the modified table structure everywhere and be back to your original cluster configuration.

Special thanks to Sheeri who explained how to do all the above and saved us from temporarily incapacitating our service.

*Which changes will lock a table varies depending on the version of MySQL. Look for "Allows concurrent DML?" in the table on this manual page.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1096565] GET REST calls should allow arbitrary URL parameters to be passed in addition to the values in the path
  • [1097813] Bug.search causes error when using simple token auth and specifying ‘token’ instead of ‘Bugzilla_token’
  • [1036802] Requests to the native rest/bzapi endpoints with gzip encoding always result in HTTP/200 responses
  • [1097382] OS sniffing should detect Windows 10 from “Windows NT 6.4” instead of detecting Windows NT
  • [1098956] remove autoland support
  • [1100368] css concatenation breaks data: urls

discuss these changes on mozilla.tools.bmo.


Alex VincentAnother 100K bug reports… and nobody noticed

Bugzilla bug report #1,100,000

We used to have a little cheer for every 100,000 bug reports filed. Bugzilla hasn’t keeled over yet!

But somehow, in the aftermath of the megabug, I think we forgot to plan for this one.  Oops!

It looks like it took us about 19 months to get here from the previous one.  I’ll leave it to Gervase Markham to dig up the appropriate statistics.

Jennifer BorissRedesign of reddit’s Login/Account Creation Window and reddit.com/login

We’ve just launched a cleanup of our login and account creation dialog and reddit.com/login. Here’s a comparison between the old version and new version: Props to new engineer aurora-73 for […]

Tim TaubertThe sad state of server-side TLS Session Resumption implementations

Probably the oldest complaint about TLS is that its handshake is slow and that, together with the transport encryption, it has a lot of CPU overhead. This certainly is not true anymore if configured correctly (even if some companies choose to ignore that).

One of the most important features to improve user experience for visitors accessing your site via TLS is session resumption. Session resumption is the general idea of avoiding a full TLS handshake by storing the secret information of previous sessions and reusing those when connecting to a host the next time. This drastically reduces latency and CPU usage.

Enabling session resumption in web servers and proxies can, however, easily compromise forward secrecy. To find out why having a de-facto standard TLS library (i.e. OpenSSL) can be a bad thing, and how to avoid botching PFS, let us take a closer look at forward secrecy and the current state of server-side implementations of session resumption features.

What is (Perfect) Forward Secrecy?

(Perfect) Forward Secrecy is an important part of modern TLS setups. The core of it is to use ephemeral (short-lived) keys for key exchange so that an attacker gaining access to a server cannot use any of the keys found there to decrypt past TLS sessions they may have recorded previously.

We must not use a server’s RSA key pair, whose public key is contained in the certificate, for key exchanges if we want PFS. This key pair is long-lived and will most likely outlive certificate expiration dates, as you would just use the same key pair to generate a new certificate after the current one expires. In case the server is compromised it would be far too easy to determine the location of the private key on disk or in memory and use it to decrypt recorded TLS sessions from the past.

Using Diffie-Hellman key exchanges, where key generation is a lot cheaper, we can use a key pair exactly once and discard it afterwards. An attacker with access to the server can still compromise the authentication part as shown above and {M,W}ITM everything from here on using the certificate’s private key, but past TLS sessions stay protected.
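
As a rough illustration only (the exact cipher list is not the point of this post), ephemeral key exchange is typically enforced by restricting the server to ECDHE/DHE suites; an Nginx-style sketch using OpenSSL cipher aliases:

# prefer ephemeral (EC)DHE suites so recorded sessions cannot be decrypted
# later with the certificate's RSA key
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:EECDH+AES256:EDH+AES256:!aNULL:!MD5';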

How can Session Resumption botch PFS?

TLS provides two session resumption features: Session IDs and Session Tickets. To better understand how those can be attacked it is worth looking at them in more detail.

Session IDs

In a full handshake the server sends a Session ID as part of the “hello” message. On a subsequent connection the client can use this session ID and pass it to the server when connecting. Because both server and client have saved the last session’s “secret state” under the session ID they can simply resume the TLS session where they left off.

To support session resumption via session IDs the server must maintain a cache that maps past session IDs to those sessions’ secret states. The cache itself is the main weak spot: stealing the cache contents allows an attacker to decrypt all sessions whose session IDs are contained in it.

The forward secrecy of a connection is thus bounded by how long the session information is retained on the server. Ideally, your server would use a medium-sized cache that is purged daily. Purging your cache might however not help if the cache itself lives on persistent storage, as it might be feasible to restore deleted data from it. An in-memory store should be more resistant to these kinds of attacks if it turns over about once a day and ensures old data is overwritten properly.

Session Tickets

The second mechanism to resume a TLS session is Session Tickets. This extension transmits the server’s secret state to the client, encrypted with a key only known to the server. That ticket key protects the TLS connection now and in the future and is the weak spot an attacker will target.

The client will store its secret information for a TLS session along with the ticket received from the server. By transmitting that ticket back to the server at the beginning of the next TLS connection, both parties can resume their previous session, given that the server can still access the secret key that was used to encrypt the ticket.

We ideally want the same secrecy bounds for Session Tickets as for Session IDs. To achieve this we need to ensure that the key used to encrypt tickets is rotated about daily. Just like the session cache, it should not live on persistent storage, so as not to leave any trace.

Apache configuration

Now that we have determined how we ideally want session resumption features to be configured, we should take a look at popular web servers and load balancers to see whether that is supported, starting with Apache.

Configuring the Session Cache

The Apache HTTP Server offers the SSLSessionCache directive to configure the cache that contains the session IDs of previous TLS sessions along with their secret state. You should use shmcb as the storage type, a high-performance cyclic buffer inside a shared memory segment in RAM. It will be shared between all threads or processes and allow session resumption no matter which of those handles the visitor’s request.

SSLSessionCache shmcb:/path/to/ssl_gcache_data(512000)

The example shown above establishes an in-memory cache via the path /path/to/ssl_gcache_data with a size of 512 KiB. Depending on the amount of daily visitors the cache size might be too small (i.e. have a high turnover rate) or too big (i.e. have a low turnover rate).

We ideally want a cache that turns over daily, and there is no really good way to determine the right session cache size. What we really need is a way to tell Apache the maximum time an entry is allowed to stay in the cache before it gets overwritten. This must happen regardless of whether the cyclic buffer has actually cycled around yet, and must be a periodic background job to ensure the cache is purged even when there have not been any requests in a while.

You might wonder whether the SSLSessionCacheTimeout directive can be of any help here - unfortunately no. The timeout is only checked when a session ID is given at the start of a TLS connection. It does not cause entries to be purged from the session cache.
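
For reference, the directive takes a lifetime in seconds; a one-day value caps how old a session a client may resume, but, as noted, it does not purge anything from the cache:

# limits how old a resumable session may be; stale entries still sit in
# the cache until the cyclic buffer overwrites them
SSLSessionCacheTimeout 86400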

Configuring Session Tickets

While Apache offers the SSLSessionTicketKeyFile directive to specify a key file that should contain 48 random bytes, it is recommended to not specify one at all. Apache will simply generate a random key on startup and use that to encrypt session tickets for as long as it is running.

The good thing about this is that the session ticket key will not touch persistent storage; the bad thing is that it will never be rotated. Generated once on startup, it is only discarded when Apache restarts. For most of the servers out there that means they use the same key for months, if not years.

To provide forward secrecy we need to rotate the session ticket key about daily, and current Apache versions provide no way of doing that. The only way to achieve it might be to use a cron job to gracefully restart Apache daily and ensure a new key is generated. That does not sound like a real solution though, and nothing ensures the old key is properly overwritten.
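
A sketch of that workaround, assuming a /etc/cron.d entry and the usual apachectl location; it restarts the whole server just to get a fresh key and still does not scrub the old key from memory:

# /etc/cron.d/apache-ticket-key (hypothetical): graceful restart at 03:00 daily
0 3 * * * root /usr/sbin/apachectl graceful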

Changing the key file while Apache is running does not do it either; you would still need to gracefully restart the service to apply the new key. And do not forget that if you use a key file it should be stored on a temporary file system like tmpfs.
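
If you do use a key file, here is a hedged sketch of generating the required 48 random bytes on a tmpfs-backed path (the path is an assumption; /run is usually tmpfs on current Linux distributions):

# generate a fresh 48-byte ticket key, readable only by root
dd if=/dev/urandom of=/run/apache2/ticket.key bs=48 count=1
chmod 600 /run/apache2/ticket.key

# httpd.conf
SSLSessionTicketKeyFile /run/apache2/ticket.key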

Disabling Session Tickets

Although disabling session tickets will undoubtedly have a negative performance impact, for the time being you will need to do that in order to provide forward secrecy:

SSLOpenSSLConfCmd Options -SessionTicket

Ivan Ristic adds that to disable session tickets for Apache using SSLOpenSSLConfCmd, you have to be running OpenSSL 1.0.2 which has not been released yet. If you want to disable session tickets with earlier OpenSSL versions, Ivan has a few patches for the Apache 2.2.x and Apache 2.4.x branches.

To securely support session resumption via tickets, Apache should provide a configuration directive to specify the maximum lifetime for session ticket keys, at least if they are auto-generated on startup. That would allow us to simply generate a new random key and overwrite the old one daily.

Nginx configuration

Another very popular web server is Nginx. Let us see how that compares to Apache when it comes to setting up session resumption.

Configuring the Session Cache

Nginx offers the ssl_session_cache directive to configure the TLS session cache. The type of the cache should be shared to share it between multiple workers:

ssl_session_cache shared:SSL:10m;

The above line establishes an in-memory cache with a size of 10 MB. We again have no real idea whether 10 MB is the right size for the cache to turn over daily. Just like Apache, Nginx should provide a configuration directive that allows cache entries to be purged automatically after a certain time. Any entries not purged properly could simply be read from memory by an attacker with full access to the server.

You guessed right: the ssl_session_timeout directive again only applies when trying to resume a session at the beginning of a connection. Stale entries will not be removed automatically after they time out.
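
For completeness, the directive accepts the usual Nginx time units; a one-day value matches the turnover we are aiming for, but it only bounds how old a resumable session may be:

# limits resumption age only; it does not purge stale cache entries
ssl_session_timeout 1d;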

Configuring Session Tickets

Nginx allows you to specify a session ticket key file using the ssl_session_ticket_key directive, and again you are probably better off not specifying one and having the service generate a random key on startup. The session ticket key will never be rotated and might be used to encrypt session tickets for months, if not years.

Nginx, too, provides no way to automatically rotate keys. Reloading its configuration daily using a cron job might work but does not come close to a real solution either.

Disabling Session Tickets

The best you can do to provide forward secrecy to visitors is thus again to switch off session ticket support until a proper solution is available.

ssl_session_tickets off;

HAproxy configuration

HAproxy, a popular load balancer, suffers from basically the same problems as Apache and Nginx. All of them rely on OpenSSL’s TLS implementation.

Configuring the Session Cache

The size of the session cache can be set using the tune.ssl.cachesize directive, which accepts a number of “blocks”. The HAproxy documentation tries to be helpful and explains how many blocks are needed per stored session, but we again cannot ensure an at least daily turnover. We would need a directive to automatically purge entries, just as for Apache and Nginx.

And yes, the tune.ssl.lifetime directive does not affect how long entries are persisted in the cache.

Configuring Session Tickets

HAproxy does not allow configuring session ticket parameters. It implicitly supports this feature because OpenSSL enables it by default. HAproxy will thus always generate a session ticket key on startup and use it to encrypt tickets for the whole lifetime of the process.

A graceful daily restart of HAproxy might be the only way to trigger key rotation. This is a pure assumption, though; please do your own testing before using that in production.

Disabling Session Tickets

You can disable session ticket support in HAproxy using the no-tls-tickets directive:

ssl-default-bind-options no-sslv3 no-tls-tickets

A previous version of the post said it would be impossible to deactivate session tickets. Thanks to the HAproxy team for correcting me!

Session Resumption with multiple servers

If you have multiple web servers that act as front-ends for a fleet of back-end servers, you will unfortunately not get away with leaving the session ticket key file unspecified and relying on a dirty hack that reloads the service configuration at midnight.

Sharing a session cache between multiple machines using memcached is possible, but with session tickets you “only” have to share one or more session ticket keys, not the whole cache. Clients take care of storing and discarding tickets for you.
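
One possible setup, assuming your Nginx version accepts multiple ssl_session_ticket_key directives (the first listed key encrypts new tickets, the remaining ones are only used to decrypt older tickets): push freshly generated 48-byte key files to every front-end, shift the previous key down, and reload each server. Generating, distributing, and scrubbing the keys is still entirely up to you.

# current key encrypts new tickets; the previous key still decrypts
# recently issued ones during the rotation window (paths are hypothetical)
ssl_session_ticket_key /run/nginx/ticket.current.key;
ssl_session_ticket_key /run/nginx/ticket.previous.key;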

Twitter wrote a great post about how they manage multiple web front-ends and distribute session ticket keys securely to each of their machines. I suggest reading that if you are planning to have a similar setup and support session tickets to improve response times.

Keep in mind though that Twitter had to write their own web server to handle forward secrecy in combination with session tickets properly and this might not be something you want to do yourselves.

It would be great if either OpenSSL or all of the popular web servers and load balancers started working towards providing forward secrecy by default, so that server admins could get rid of custom front-ends or dirty hacks to rotate keys.