Adrian Gaudebert: Spectateur, custom reports for crash-stats

The users of Socorro at Mozilla, the Stability team, have very specific needs that vary over time. They need specific reports on the data we have: new aggregations or views with particular sets of parameters. What we developers of Socorro used to do was build those reports for them. It's a long process that usually requires adding something to our database's schema, adding a middleware endpoint and creating a new page in our webapp. All those steps take a long time, and sometimes we understand the needs incorrectly, so it takes even longer. Not the best way to invest our time.

Nowadays, we have Super Search, a flexible interface to our data, that allows users to do a lot of those specific things they need. As it is highly configurable, it's easy to keep pace with new additions to the crash reports and to evolve the capabilities of this tool. Couple that with our public API and we can say that our users have pretty good tools to solve most of their problems. If Super Search's UI is not good enough, they can write a script that they run locally, hitting our API, and they can do pretty much anything we can do.
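As a sketch of what such a local script looks like, here's a minimal Python example that builds a Super Search query URL. The endpoint path and parameter names are written from memory of the public API and should be treated as assumptions, not a definitive reference:

```python
import datetime
import urllib.parse

# Base URL of the public Super Search endpoint (assumed; check the API docs).
API_BASE = "https://crash-stats.mozilla.com/api/SuperSearch/"

def build_query(product, days=7, facet="signature"):
    """Build a query URL for crashes of `product` over the last `days` days,
    aggregated on `facet`."""
    start = datetime.date.today() - datetime.timedelta(days=days)
    params = {
        "product": product,
        "date": ">=%s" % start.isoformat(),
        "_facets": facet,
        "_results_number": 0,  # only the aggregation, no individual crashes
    }
    return API_BASE + "?" + urllib.parse.urlencode(params)

url = build_query("Firefox")
# A real script would now fetch `url` (e.g. with urllib.request) and
# process the JSON facets it gets back.
```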

But that still has problems. Local scripts are not ideal: it's inconvenient to share them or to expose their results, it's hard to work on them collaboratively, they require writing rendering and API-querying code when one could just focus on processing the data, and they don't integrate with our Web site. I think we can do better. And to demonstrate that, I built a prototype. Introducing...


Spectateur is a service that takes care of querying the API and rendering the data for you. All you need to do is work on the data, make it what you want it to be, and share your custom report with the rest of the world. It uses a commonly known language, JavaScript, so that most people (at least at Mozilla) can understand and hack what you have done. It lets you easily save your report and gives you a URL to bookmark and to share. And that's about it, because it's just a prototype, but it's still pretty cool, isn't it?

To explain it a little more: Spectateur contains three parts. The Model lets you choose what data you want. It uses Super Search and gives you about the same capabilities that Socorro's UI has. Once you have set your filters and chosen the aggregations you need, we move to the Controller. That's a simple JavaScript editor (using Ace) and you can type almost anything in there. Just keep the transform function, the callback and the last lines that set the interface, otherwise it won't work at all. There are also some limitations for security: the code is executed in a Web Worker in an iframe, so you have no access to the main page's scope. Network requests are blocked, among other things. I'm using a wonderful library called jailed; if you want to know more, please read its documentation.

Once you are done writing your controller, and you have exposed your data, you can click the Run button to create the View. It will fetch the data, run your processor on that data and then render the results following the rules you have exposed. The data can currently be displayed as a table (using jsGrid) or as a chart (using Chart.js). For details, please read the documentation of Spectateur (there's a link at the top). When you are satisfied with your custom report, click the Save button. That will save the Model and the Controller and give you a URL (by updating the URL bar). Come back to that URL to reload your report. Note that if you make a change to your report and click Save again, a new URL will be generated; the previous report won't be overwritten.

As an example, here is a report that shows, for our B2G product, a graph of the top versions, a chart of the top signatures and a list of crash reports, all of that based on data from the last 7 days:

I hope this tool will be useful to our users. As usual, if you have comments, feedback, criticisms, if you feel this is a waste of time and we should not invest any more time in it, or on the contrary you think this is what you needed this whole time, please please please let us know!

Robert O'Callahan: Great Barrier Island

Last weekend a couple of Mozillians --- David Baron and Jean-Yves Avenard --- plus myself and my children flew to Great Barrier Island for the weekend. Great Barrier is in the outer Hauraki Gulf, not far from Auckland; it takes about 30 minutes to fly there from Auckland Airport in a very small plane. The kids and I camped on Friday night at "The Green" campsite at Whangaparapara, while David and Jean-Yves stayed at Great Barrier Lodge nearby. On Saturday we did the Aotea Track in clockwise direction, heading up the west side of the hills past Maungapiko, then turning east along the South Fork Track to Mt Heale Hut for the night. (The usual continuation past Kaiaraara Hut along the Kaiaraara track had been washed out by storms last year, and we saw evidence of storm damage in the form of slips almost everywhere we went.) Even the South Fork Track had been partially rerouted along the bed of the Kaiaraara Stream. We were the only people at Mt Heale Hut and had a good rest after a reasonably taxing walk. But inspired by Jean-Yves, we found the energy to do a side trip to Mt Hobson --- the highest point on the island --- before sunset.

On Sunday we walked south out of the hills to the Kaitoke hot springs and had a dip in the hot, sulphurous water --- very soothing. Then along the road to Claris and a well-earned lunch at the "Claris, Texas" cafe. We still had lots of time to kill before our flight so we dropped our bags at the airport (I use the term loosely) and walked out to Kaitoke Beach. A few of us swam there, carefully, since the surf felt very treacherous.

I'd never been tramping overnight at the Barrier before and really enjoyed this trip. There aren't many weekend-sized hut tramps near Auckland, so this is a great option if you don't mind paying to fly out there. The flight itself is a lot of fun.

Mozilla Thunderbird: Thunderbird Usage Continues to Grow

We’re happy to report that Thunderbird usage continues to expand.

Mozilla measures program usage by Active Daily Installations (ADI), which is the number of pings that Mozilla servers receive as installations do their daily plugin block-list update. This is not the same as the number of active users, since some users don’t access their program each day, and some installations are behind firewalls. An estimate of active monthly users is typically done by multiplying the ADI by a factor of 3.
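As a back-of-the-envelope sketch of that estimate, using the total ADI reported in the table below for February 24, 2015:

```python
# Rough estimate of monthly active users from ADI, using the rule of
# thumb above (multiply the ADI by a factor of 3).
adi = 9255280  # total ADI, 2015-02-24
estimated_monthly_users = adi * 3
print(estimated_monthly_users)  # 27765840
```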

To plot changes in Thunderbird usage over time, I’ve picked the peak ADI for each month for the last few years. Here’s the result:

Thunderbird Active Daily Installations, peak value per month.

Germany has long been our #1 country for usage, but in the 4th quarter of 2014, Japan overtook the United States to become our #2 country. Here are the top 10 countries, taken from the ADI count of February 24, 2015:

Rank Country ADI 2015-02-24
1 Germany 1,711,834
2 Japan 1,002,877
3 United States 927,477
4 France 777,478
5 Italy 514,771
6 Russian Federation 494,645
7 Poland 480,496
8 Spain 282,008
9 Brazil 265,820
10 United Kingdom 254,381
All Others 2,543,493
Total 9,255,280

Country Rankings for Thunderbird Usage, February 24, 2015

The Thunderbird team is now working hard preparing our next major release, which will be Thunderbird 38 in May 2015. We’ll be blogging more about that release in the next few weeks, including reporting on the many new features that we have added.

Liz Henry: A useful Bugzilla trick

At the beginning of February I changed teams within Mozilla and am now working as a release manager. It follows naturally from a lot of the work I’ve already been doing at Mozilla and I’m excited to join the team working with Lukas, Lawrence, and Sylvestre!

I just learned a cool trick for dealing with several bugs at once, on MacOS X.

1) Install Bugzilla Services.

2) Add a keyboard shortcut as Alex Keybl describes in the blog post above. (I am using Control-Command-B)

3) Install the BugzillaJS (Tweaks for Bugzilla) addon.

4) Install the Tree Style Tab addon.

Now, from any text, whether in email, a desktop text file, or anywhere in the browser, I can highlight a bunch of text and bug numbers will be parsed out of it. For example, from an email this morning:

Bug 1137050 - Startup up Crash - patch should land soon, potentially risky
David Major seems to think it is risky for the release.

Besides that, we are going to take:
Bug 1137469 - Loop exception - patch waiting for review
Bug 1136855 - print preferences - patch approved
Bug 1137141 - Fx account + hello - patch waiting for review
Bug 1136300 - Hello + share buttons - Mike  De Boer will work on a patch today

And maybe a fix for the ANY query (bug 1093983) if we have one...

I highlighted the entire email and hit the “open in bugzilla” keystroke. This resulted in a Bugzilla list view for the 6 bugs mentioned in the email.

Bugzilla list view example

With BugzillaJS installed, I have an extra option at the bottom of the page, “Open All in Tabs”, so if I wanted to triage these bugs, I can open them all at once. The tabs show up in my sidebar, indented from their parent tab. This is handy if I want to collapse this group of tabs, or close the parent tab and all its children at once (The original list view of these 6 bugs, and each of its individual tabs.) Tree Style Tab is my new favorite thing!

Tree style tabs bugzilla

In this case, after I had read each bug from this morning and closed the tabs, my coworker Sylvestre asked me to make sure I cc-ed myself into all of them to keep an eye on them later today and over the weekend so that when fixes are checked in, I can approve them for release.

Here I did not want to open up every bug in its own tab but instead went for “Change Several Bugs at Once” which is also at the bottom of the page.

Bugzilla batch edit

This batch edit view of bugs is a bit scarily powerful since it will result in bugmail to many people for each bug’s changes. When you need it, it’s a great feature. I added myself to the cc: field all in one swoop instead of having to click each tab open, click around several times in each bug to add myself and save and close the tab again.

It was a busy day yesterday at work but I had a nice time working from the office rather than at home. Here is the view from the SF Mozilla office 7th floor deck where I was working and eating cake in the sun. Cannot complain about life, really.
Mozilla bridge view


Chris AtLee: Diving into python logging

Python has a very rich logging system. It's very easy to add structured or unstructured log output to your python code, and have it written to a file, or output to the console, or sent to syslog, or to customize the output format.

We're in the middle of re-examining how logging works in mozharness to make it easier to factor out code and have fewer mixins.

Here are a few tips and tricks that have really helped me with python logging:

There can be only one

Well, there can be only one logger with a given name. There is a special "root" logger with no name. Multiple getLogger(name) calls with the same name will return the same logger object. This is an important property because it means you don't need to explicitly pass logger objects around in your code. You can retrieve them by name if you wish. The logging module maintains a global registry of logging objects.
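A quick way to see the registry at work (the logger name here is just an example):

```python
import logging

# Two getLogger() calls with the same name return the very same object,
# pulled from the logging module's global registry.
a = logging.getLogger("myapp.db")
b = logging.getLogger("myapp.db")
print(a is b)  # True
```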

You can have multiple loggers active, each specific to its own module or even class or instance.

Each logger has a name, typically the name of the module it's being used from. A common pattern you see in python modules is this:

# in foo.py
import logging
log = logging.getLogger(__name__)

This works because inside foo.py, __name__ is equal to "foo". So inside this module the log object is specific to this module.

Loggers are hierarchical

The names of the loggers form their own namespace, with "." separating levels. This means that if you have loggers called foo.bar and foo.baz, you can do things on logger foo that will impact both of the children. In particular, you can set the logging level of foo to show or ignore debug messages for both submodules.

# Let's enable all the debug logging for all the foo modules
import logging
logging.getLogger("foo").setLevel(logging.DEBUG)

Log messages are like events that flow up through the hierarchy

Let's say we have a module foo/widgets.py:

import logging
log = logging.getLogger(__name__)  # __name__ is "foo.widgets" here

def make_widget():
    log.debug("made a widget!")

When we call make_widget(), the code generates a debug log message. Each logger in the hierarchy has a chance to output something for the message, ignore it, or pass the message along to its parent.

The default configuration for loggers is to have their levels unset (or set to NOTSET). This means the logger will just pass the message on up to its parent. Rinse & repeat until you get up to the root logger.

So if the module's own logger hasn't specified a level, the message will continue up to the foo logger. If the foo logger hasn't specified a level, the message will continue up to the root logger.

This is why you typically configure the logging output on the root logger; it typically gets ALL THE MESSAGES!!! Because this is so common, there's a dedicated method for configuring the root logger: logging.basicConfig()

This also allows us to use mixed levels of log output depending on where the messages are coming from:

import logging
import foo

# Enable debug logging for all the foo modules
logging.getLogger("foo").setLevel(logging.DEBUG)

# Configure the root logger to log only INFO calls, and output to the console
# (the default)
logging.basicConfig(level=logging.INFO)

# This will output the debug message
foo.make_widget()
If you comment out the setLevel(logging.DEBUG) call, you won't see the message at all.

exc_info is teh awesome

All the built-in logging calls support a keyword called exc_info, which, if it isn't false, causes the current exception information to be logged in addition to the log message. e.g.:

import logging

log = logging.getLogger(__name__)

try:
    assert False
except AssertionError:
    log.info("surprise! got an exception!", exc_info=True)

There's a special case for this, log.exception(), which is equivalent to log.error(..., exc_info=True)

Python 3.2 introduced a new keyword, stack_info, which will output the stack trace leading up to the current point in the code. Very handy to figure out how you got to a certain point in the code, even if no exceptions have occurred!
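For instance (a small sketch; the logger name and the in-memory handler are just for the demo):

```python
import io
import logging

log = logging.getLogger("stackdemo")
log.setLevel(logging.INFO)
buf = io.StringIO()
log.addHandler(logging.StreamHandler(buf))

def deeply_nested():
    # stack_info=True appends the call stack that led here to the record
    log.info("how did we get here?", stack_info=True)

deeply_nested()
print(buf.getvalue())  # message followed by "Stack (most recent call last): ..."
```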

"No handlers found..."

You've probably come across this message, especially when working with 3rd party modules. What this means is that you don't have any logging handlers configured, and something is trying to log a message. The message has gone all the way up the logging hierarchy and fallen off of the chain (maybe I need a better metaphor).

import logging
log = logging.getLogger()
log.error("no log for you!")


No handlers could be found for logger "root"

There are two things that can be done here:

  1. Configure logging in your module with basicConfig() or similar

  2. Library authors should add a NullHandler at the root of their module to prevent this. See the cookbook and this blog post for more details.
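The library-author fix is tiny; it goes in your package's top-level module ("mylib" here is a placeholder name):

```python
import logging

# Attach a do-nothing handler so that if the application never configures
# logging, messages from this library are silently discarded instead of
# triggering the "No handlers could be found" warning.
log = logging.getLogger("mylib")
log.addHandler(logging.NullHandler())

log.error("discarded unless the application sets up its own handlers")
```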

Want more?

I really recommend that you read the logging documentation and cookbook which have a lot more great information (and are also very well written!) There's a lot more you can do, with custom log handlers, different output formats, outputting to many locations at once, etc. Have fun!

Blake Winton: A long time ago, on a computer far far away…

Six years ago, I started contributing to Mozilla.

Read more… (3 min remaining to read)

Mike Conley: The Joy of Coding (Episode 3)

The third episode is up! My machine was a little sluggish this time, since I had OBS chugging in the background attempting to do a hi-res screen recording simultaneously.

Richard Milewski and I are going to try an experiment where I try to stream with OBS next week, which should result in a much higher-resolution stream. We’re also thinking about having recording occur on a separate machine, so that it doesn’t bog me down while I’m working. Hopefully we’ll have that set up for next week.

So this third episode was pretty interesting. Probably the most interesting part was when I discovered in the last quarter that I’d accidentally shipped a regression in Firefox 36. Luckily, I’ve got a patch that fixes the problem that has been approved for uplift to Aurora and Beta. A point release is also planned for 36, so I’ve got approval to get the fix in there too. \o/

Here are the notes for the bug I was working on. The review feedback from karlt is in this bug, since I kinda screwed up where I posted the review request with MozReview.

Doug Belshaw: The final push for Web Literacy Map v1.5 (and how you can get involved!)

By the end of March 2015 we should have a new, localised iteration of Mozilla’s Web Literacy Map. We’re calling this ‘version 1.5’ and it’s important to note that this is a point release rather than a major new version.

Cat with skills

Right now we’re at the point where we’ve locked down the competencies and are now diving into the skills underpinning those competencies. To help, we’ve got a epic spreadsheet with a couple of tabs:


The REVIEW tab contains lots of comments about the suitability of the skills for v1.5. On this week’s community call we copied those skills that had no comments about them to the REFINE tab:


This is where we need your help. We’ve got skills in the REVIEW tab that, with some tweaking, can help round out those skills we’ve already transferred. It would be great if you could help us discuss and debate those. There’s also some new competencies that have no skills defined at present.

We’ve got weekly community calls where we work on this stuff, but not everyone can make these. That’s why we’re using GitHub issues to discuss and debate the skills asychronously.

Here’s how to get involved:

  1. Make sure you’ve got a (free) GitHub account
  2. Head to the meta-issue for all of the work we’re doing around v1.5 skills
  3. Have a look through the skills under the various competencies (e.g. Remixing)
  4. Suggest an addition, add a question, or point out overlaps
  5. Get email updates when people reply - and then continue the conversation!

We really do need as many eyes as possible on this. No matter whether you’re an old hand or a complete n00b, you’re welcome. The community is very inviting and tolerant, so please dive in!

Comments? Questions? These would be better in GitHub, but if you want to get in touch directly I’m @dajbelshaw on Twitter or you can email me:

Mozilla Reps Community: Reps Weekly Call – February 26th 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.



  • RepsIDMeetup
  • Alumni status and leaving SOP
  • New mentors coming soon
  • GMRT event Pune
  • Teach The Web Talks
  • FOSS Asia
  • BuddyUp
  • Say Hello Day

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Kaustav Das Modak: Firefox OS for everyone

While a good number of events around Firefox OS have focused on helping developers understand the technology behind it, there has been very little effort in helping people understand Firefox OS from a non-technical and non-developer perspective. I’ve started a series of events titled “Firefox OS for everyone” to fill this gap. The content has […]

Pierros Papadeas: FOSDEM 2015 Bug sprint tool

“Reclama” Experiment report


FOSDEM 2015 is a premier open source developers’ event in Europe. Mozilla has been heavily participating in the event for over 10 years now, with booths and dev-rooms. This year the Community Development team decided to run an experimental approach to recruiting new contributors by promoting good first bugs.


Given the highly technical nature of the audience at FOSDEM 2015, the approach decided on was straightforward promotion of bugs, with prizes attached, for people to track and get involved.

Following a sign-off meeting with the stakeholders (William Quiviger, Francisco Piccolini and Brian King), the specifications agreed for the first iteration of the experiment were as follows:

  1. A straight-forward interface displaying hand-selected good first bugs, ready to be worked on
  2. Next to bugs, prizes should be displayed.
  3. Viewing from mobile devices should also be accounted for.
  4. Public access to all content. Ability to edit and add events on selected accounts (hardcoded for v1)
  5. The interface has to be properly Mozilla branded (high visibility and promotion in an event)
  6. It needs to be future-proof for events to come (easily add new events and/or bugs after deployment)
  7. Solution URL should be short and easily memorable and digestible.
  8. It needs to be delivered before the start of FOSDEM 2015 ;)
  9. A report should be compiled with usage statistics and impact analysis for the experiment


Given the extremely short timeframe (less than 3 days), quick, ready-made or off-the-shelf solutions were evaluated, such as:

  • An online spreadsheet (gdocs)
    • Not meeting requirements #3, #5, #6, #7
  • A bugzilla query
    • Not meeting requirements #2, #7, #6
  • A scrappy, hand-coded plain HTML solution
    • Not meeting requirements #4, #6

Thus, given the expertise of the group, it was decided to create a Django application to meet all requirements in time.

Following 2 days of non-stop development (fantastic work from Nikos and Nemo!), testing and iteration, we met all requirements by developing the app, which is codenamed “reclama” (roughly Italian for “claim”).

Code can be found here:

Deployment

In order to meet requirement #7 (short and memorable URL) and given the timeframe, we decided to acquire a URL quickly ( ) and deploy the application.

For usage statistics, awstats was deployed on top of the app to track incoming traffic.

Usage statistics

During the weekend of FOSDEM 2015, 500 people visited the website, generating almost 5000 hits. That’s almost 10% of the event participants.

Saturday 31-Jan-2015 traffic analysis

Booths and promotion of the experiment started at 9:00 as expected, with a mid-day (noon) peak, which is consistent with increased traffic in the booth area of the event.

Traffic continued to flow in steadily even after the end of the event, which indicates that people keep the URL and interact with our experiment a substantial time after the face-to-face interaction with our booth. Browsing continues through the night, and help might be needed (on-call people/mentors?) during that too.

Sunday 1-Feb-2015 traffic analysis

The second day of FOSDEM 2015 included a Mozilla-dedicated dev-room. The assumption that promotion through the dev-room would increase the traffic in our experiment proved to be false, as the traffic continued at the same levels for day 2.
As expected there was a sharp cut-off after 16:00 (when the final keynote starts) and people do not seem to come back after the event. Thus a hyper-specific, event-focused (and branded) challenge seems important, as people relate to it, understanding its one-off nature.


32 coding bugs from different product areas were presented to people. 9 of them were edited (assigned, commented on, worked on) during or immediately (one day) after FOSDEM 2015. Out of those 9, 4 ended up in a first patch submission (new contributors) and 3 received no response from Mozilla staff (or core contributors), leaving those contributors blocked.


  • Re-do experiment in a different cultural environment, but still with related audience, so we can cross-compare.
  • Continue the experiment by implementing new functionality as suggested by the stakeholders (notes, descriptions etc).
  • Experiment with random sorting of bugs, as the current order seemed to affect what was worked on.
  • Submission of bugs to be featured on an event should be coordinated by event owner and related to event theme and topics.
  • A/B test how prize allocation affects what bugs are worked on.
  • Expand promotional opportunities throughout the event. (booth ideas?)
  • On-call developers for promoted bugs would eliminate the non-answered portion of our bugs.

PS. Special thanks to Elio for crafting an excellent visual identity and logo once again!

Gervase Markham: An Encounter with Ransomware

An organization which I am associated with (not Mozilla) recently had its network infected with the CryptoWall 3.0 ransomware, and I thought people might be interested in my experience with it.

The vector of infection is unknown but once the software ran, it encrypted most data files (chosen by extension) on the local hard drive and all accessible shares, left little notes everywhere explaining how to get the private key, and deleted itself. The notes were placed in each directory where files were encrypted, as HTML, TXT, PNG and as a URL file which takes you directly to their website.

Their website is accessible as either a TOR hidden service or over plain HTTP – both options are given. Presumably plain HTTP is for ease for less technical victims; Tor is for if their DNS registrations get attacked. However, as of today, that hasn’t happened – the site is still accessible either way (although it was down for a while earlier in the week). Access is protected by a CAPTCHA, presumably to prevent people writing automated tools that work against it. It’s even localised into 5 languages.

CryptoWall website CAPTCHA

The price for the private key was US$500. (I wonder if they set that based on GeoIP?) However, as soon as I accessed the custom URL, it started a 7-day clock, after which the price doubled to US$1000. Just like parking tickets, they incentivise you to pay up quickly, because argument and delay will just make it cost more. If you haven’t paid after a month, they delete your secret key and personal page.

While what these thieves do is illegal, immoral and sinful, they do run a very professional operation. The website had the following features:

  • A “decrypt one file” button, which allows them to prove they have the private key and re-establish trust. It is, of course, also protected by a CAPTCHA. (I didn’t investigate to see whether it was also protected by numerical limits.)
  • A “support” button, which allows you to send a message to the thieves in case you are having technical difficulties with payment or decryption.

The organization’s last backup was a point-in-time snapshot from July 2014. “Better backups” had been on the ToDo list for a while, but never made it to the top. After discussion with the organization, we decided that recreating the data would have taken much more time than the value of the ransom, and so we were going to pay. I tried out the “Decrypt One File” function and it worked, so I had some confidence that they were able to provide what they said they were.

I created a wallet, and used an exchange to buy exactly the right amount of Bitcoin. (The first exchange I tried had a ‘no ransomware’ policy, so I had to go elsewhere.) However, when I then went to pay, I discovered that there was a 0.0001 BTC transaction fee, so I didn’t have enough to pay them the full amount! I was concerned that they had automated validation and might not release the key if the amount was even a tiny bit short. So, I had to go on IRC and talk to friends to blag a tiny fraction of a Bitcoin in order to afford the transfer fee.

I made the payment, and pasted the transaction ID into the form on the ransomware site. It registered the ID and set status to “pending”. Ten or twenty minutes later, once the blockchain had moved on, it accepted the transaction and gave me a download link.

While others had suggested that there was no guarantee that we’d actually get the private key, it made sense to me. After all, word gets around – if they don’t provide the keys, people will stop paying. They have a strong incentive to provide good ‘customer’ service.

The download was a ZIP file containing a simple Windows GUI app which was a recursive decryptor, plus text files containing the public key and the private key. The app worked exactly as advertised and, after some time, we were able to decrypt all of the encrypted files. We are now putting in place a better backup solution, and better network security.

A friend who is a Bitcoin expert did do a little “following the money”, although we think it went into a mixer fairly quickly. However, before it did so, it was aggregated into an account with $80,000+ in it, so it seems that this little enterprise is fairly lucrative.

So, 10/10 for customer service, 0/10 for morality.

The last thing I did was send them a little message via the “Support” function of their website, in both English and Russian:

Such are the ways of everyone who is greedy for unjust gain; it takes away the life of its possessors.

Таковы пути всех, кто жаждет преступной добычи; она отнимает жизнь у завладевших ею.

‘The time has come,’ Jesus said. ‘The kingdom of God has come near. Repent and believe the good news!’

– Пришло время, – говорил Он, – Божье Царство уже близко! Покайтесь и верьте в Радостную Весть!

Mozilla Reps Community: Impact teams: a new approach for functional impact at Reps

When the new participation plan was forming, one of the first questions was: how can the Reps program enable more and deeper participation in Mozilla? We know that Reps are empowering local and regional communities and have been playing an important role in various projects like Firefox OS launches, but there wasn’t an organized and, more importantly, scalable way to provide support to functional teams at Mozilla. The early attempts of the program to help connect volunteers with functional areas were the Special Interest Groups (SIGs). Although in some cases and for some periods of time the SIGs worked very well and were impactful, they weren’t sustainable in the long run. We couldn’t provide a structure that ensured mutual benefit and commitment.

With the renewed focus on participation we’re trying to think differently about the way that Reps can connect to functional teams, align with their goals and participate in every part of Mozilla. And this is where the “Impact teams” come in. Instead of forming loose interest groups, we want to form teams that work well together and are defined by the impact they are having, as well as excited by the future opportunity not only for deeper participation but also for personal growth as part of a dedicated team whose colleagues include project staff.

The idea of these new impact teams is to make sure that the virtuous circle of mutual benefit is created. This means that we will work with functional teams to ensure that we find participation opportunities for volunteers that have direct impact on project goals, but at the same time we make sure that the volunteers will benefit from participating, widening their skills, learning new ones.


These teams will crystallize through the work on concrete projects, generating immediate impact for the team, but also furthering the skills of volunteers. That will allow the impact team to take on bigger challenges with time: both volunteers and functional teams will learn to collaborate and volunteers with new skills will be able to take the lead and mentor others.

We’re of course at the beginning and many questions are still open. How can we organize this in an agile way? How can we make this scalable? Will the scope of the role of Reps change if they are more integrated in functional activities? How can we make sure that all Mozillians, Reps and non Reps are part of the teams? Will we have functional mentors? And we think the only way to answer those questions is to start trying. That’s why we’re talking to different functional areas, trying to find new participation opportunities that provide value for volunteers. We want to learn by doing, being agile and adjusting as we learn.

The impact teams are therefore not set in stone, we’re working with different teams, trying loose structures and specially putting our energy into making this really beneficial for both functional teams and volunteers. Currently we are working to the Marketplace team, the Firefox OS Market research team and the developer relations team. And we’ll be soon reaching out to Mozillians and Reps who have a track record in those areas to ask them to help us build these impact teams.

We’re just at the beginning of a lot of pilots, tests, prototypes. But we’re excited to start moving fast and learn! We have plenty of work to do and many questions to answer, join us in shaping these new impact teams. Specially help us now how your participation at Mozilla can benefit your life, make you grow, learn, develop yourself. Emma Irwin is working on making education a centerpiece of participation, but do you have any other ideas? Share them with us!

Tantek Çelik#IndieWeb: Homebrew Website Club 2015-02-25 Summary

2015-02-25 Homebrew Website Club participants, seven of them, sit in two rows for a photograph

At last night's Homebrew Website Club we discussed, shared experiences, and traded how-tos about realtime indie readers, changing or choosing your webhost, indie RSVPs, moving from Blogger and Tumblr to your own site, the new IndieWebCamp Slack channel, and more.

See the full writeup for details.

Tantek ÇelikDisappointed in @W3C for Recommending Longdesc

W3C has advanced the longdesc attribute to a Recommendation, overruling objections from browser makers.

Not a single browser vendor supported advancing this specification to recommendation.

Apple formally objected when it was a Candidate Recommendation and provided lengthy research and documentation (better than anyone has before or since) on why longdesc is bad technology (in practice it has not solved, and does not solve, the problems it claims to).

Mozilla formally objected when it was a Proposed Recommendation, agreeing with Apple’s research and reasoning.

Both formal objections were overruled.

For all the detailed reasons noted in Apple’s formal objection, I also recommend avoiding longdesc, and instead:

  • Always provide good alt (text alternative) attributes for images that read well inline if and when the image does not load. Or, if there’s no semantic loss without the image, use an empty alt="".
  • For particularly rich or complex images, either provide longer descriptions in normal visible markup, or link them from an image caption or other visible affordance. See accessibility expert James Craig’s excellent Longdesc alternatives in HTML5 resource for even more and better techniques.

Perhaps the real tragedy is that many years have been wasted on a broken technology that could have been spent on actually improving accessibility of open web technologies. Not to mention the harassment that’s occurred in the name of longdesc.

Sometimes web standards go wrong. This is one of those times.

Planet Mozilla InternsMichael Sullivan: MicroKanren (μKanren) in Haskell

Our PL reading group read the paper “μKanren: A Minimal Functional Core for Relational Programming” this week. It presents a minimalist logic programming language in Scheme in 39 lines of code. Since none of us are really Schemers, a bunch of us quickly set about porting the code to our personal pet languages. Chris Martens produced this SML version. I hacked up a version in Haskell.

The most interesting part was the mistake I made in the initial version. To deal with recursion and potentially infinite search trees, the Scheme version allows some laziness: streams of results can be functions that delay the search until forced. When a Scheme μKanren program wants to create a recursive relation, it needs to wrap the recursive call in a dummy function (and plumb through the input state); the Scheme version wraps this in a macro called Zzz to make it more palatable. I originally thought that all of this could be dispensed with in Haskell: since Haskell is lazy, no special work needs to be done to prevent self-reference from causing an infinite loop. The delay served an important secondary purpose, though: providing a way to detect recursion so that we can switch which branch of the tree we are exploring. Without it, although the fives test below works, the fivesRev test loops forever without producing anything.
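The fairness problem is easy to illustrate outside Haskell too. Here is a small Python sketch of my own (the names `fives`, `sixes` and `interleave` are mine, not from the paper or the port) showing why alternating between branches lets answers from a second stream surface even when the first one is infinite:

```python
import itertools

# Two infinite "answer streams", standing in for two branches of a
# relational search (hypothetical stand-ins, not μKanren itself).
def fives():
    while True:
        yield 5

def sixes():
    while True:
        yield 6

def interleave(a, b):
    # Alternate between two possibly-infinite streams so that neither
    # branch can starve the other -- the fairness that the delay/Zzz
    # machinery buys in the Scheme and Haskell versions.
    while True:
        yield next(a)
        a, b = b, a

# Naive concatenation of fives() followed by sixes() would never
# reach a 6; interleaving surfaces answers from both branches.
answers = list(itertools.islice(interleave(fives(), sixes()), 6))
# → [5, 6, 5, 6, 5, 6]
```

This is the same trick the Scheme version performs when mplus swaps its arguments upon hitting a delayed (immature) stream.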

The initial version was also more generalized. The type signatures allowed operating over any MonadPlus, thus allowing pluggable search strategies, and KList was just a newtype wrapper around lists. When I had to add delay I could have defined a new MonadPlusDelay typeclass and parametrized over that, but it didn’t seem worthwhile.

A mildly golfed version that drops blank lines, type annotations, comments, aliases, and test code clocks in at 33 lines.


Doug BelshawAn important day for the Internet

As I’ve just been explaining to my son, when he’s my age and looks back at the history of the Internet, 26 February 2015 will be seen as a very important day.

Why? The Mozilla blog summarises it well:

We just accomplished something very important together. Today, the U.S. Federal Communications Commission voted for strong net neutrality protections. This happened because millions of people — including many hundreds of thousands in Mozilla’s community — joined together as citizens of the Web to demand those strong protections.

Net Neutrality can be a difficult thing to understand and, especially if you’re not based in the USA, it can feel like something that doesn’t affect you. However, it is extremely important, and it impacts everyone.

Last year we put together a Webmaker training module on Net Neutrality. I’d like to think it helped towards what was achieved today. As Mitchell Baker stated, this victory was far from inevitable, and the success will benefit all of humankind.

It’s worth finding out more about Net Neutrality, for the next time it’s threatened. Forewarned is forearmed.

Image CC BY Kendrick Erickson

Air MozillaBrown Bag, "Analysing Gaia with Semmle"

Brown Bag, "Analysing Gaia with Semmle" Title: Analysing Gaia with Semmle Abstract: Semmle has recently added support for JavaScript to its analysis platform. As one of our first major JavaScript analysis...

The Mozilla BlogA Major Victory for the Open Web

We just accomplished something very important together. Today, the U.S. Federal Communications Commission voted for strong net neutrality protections. This happened because millions of people — including many hundreds of thousands
in Mozilla’s community — joined together as citizens of the Web to demand those strong protections.

This is an important victory for the world’s largest public resource, the open Web. Net neutrality is a key aspect of enabling innovation from everywhere, and especially from new players and unexpected places. Net neutrality allows citizens and consumers to access new innovations and judge the merit for themselves. It allows individual citizens to make decisions, without gate-keepers who decide which possibilities can become real. Today’s net neutrality rules help us protect this open and innovative potential of the Internet.

Mozilla builds our products to put this openness and opportunity into the hands of individuals. We are organized as a non-profit so that the assets we create benefit everyone. Our products go hand-in-hand with net neutrality; they need net neutrality to bring the full potential of the Internet to all of us.

Today’s net neutrality rules are an important step in protecting opportunity for all. This victory was not inevitable. It occurred because so many people took action, so many people put their voice into the process. To each of you we say “Thank you.” Thank you for taking the time to understand the issue, for recognizing it’s important, and for taking action. Thank you for helping us build openness and opportunity into the very fabric of the Internet.

Video message from Mitchell Baker, Executive Chairwoman, Mozilla Foundation

Rosana ArdilaImpact teams: a new approach for functional impact at Reps

When the new participation plan was forming, one of the first questions was: how can the Reps program enable more and deeper participation in Mozilla? We know that Reps are empowering local and regional communities and have been playing an important role in various projects like the Firefox OS launches, but there wasn’t an organized and, more importantly, scalable way to provide support to functional teams at Mozilla. The program’s early attempts to connect volunteers with functional areas were the Special Interest Groups (SIGs). Although in some cases and for some periods of time the SIGs worked very well and were impactful, they weren’t sustainable in the long run. We couldn’t provide a structure that ensured mutual benefit and commitment.

With the renewed focus on participation we’re trying to think differently about the way that Reps can connect to functional teams, align with their goals and participate in every part of Mozilla. This is where the “Impact teams” come in. Instead of forming loose interest groups, we want to form teams that work well together and are defined by the impact they are having, as well as excited by the future opportunity not only for deeper participation but for personal growth as part of a dedicated team whose colleagues include project staff.

The idea of these new impact teams is to make sure that a virtuous circle of mutual benefit is created. This means that we will work with functional teams to find participation opportunities for volunteers that have direct impact on project goals, while at the same time making sure that the volunteers benefit from participating, widening their existing skills and learning new ones.


These teams will crystallize through work on concrete projects, generating immediate impact for the team but also furthering the skills of volunteers. That will allow the impact team to take on bigger challenges over time: both volunteers and functional teams will learn to collaborate, and volunteers with new skills will be able to take the lead and mentor others.

We’re of course at the beginning and many questions are still open. How can we organize this in an agile way? How can we make this scalable? Will the scope of the Reps role change if Reps are more integrated in functional activities? How can we make sure that all Mozillians, Reps and non-Reps alike, are part of the teams? Will we have functional mentors? We think the only way to answer those questions is to start trying. That’s why we’re talking to different functional areas, trying to find new participation opportunities that provide value for volunteers. We want to learn by doing, being agile and adjusting as we learn.

The impact teams are therefore not set in stone: we’re working with different teams, trying loose structures and especially putting our energy into making this genuinely beneficial for both functional teams and volunteers. Currently we are working with the Marketplace team, the Firefox OS market research team and the developer relations team. We’ll soon be reaching out to Mozillians and Reps who have a track record in those areas to ask them to help us build these impact teams.

We’re just at the beginning of a lot of pilots, tests and prototypes. But we’re excited to start moving fast and learning! We have plenty of work to do and many questions to answer; join us in shaping these new impact teams. In particular, help us know how your participation at Mozilla can benefit your life, make you grow, learn and develop yourself. Emma Irwin is working on making education a centerpiece of participation, but do you have any other ideas? Share them with us!

Pascal FinetteWhat Every Entrepreneur Should Know About Pitching

The following post is a summary of a series of earlier Heretic posts on the subject, brought together into one comprehensive list by the wonderful folks at

Your pitch deck MUST start with a description of what it is that you’re doing. Your second slide (after the cover slide) is titled “What is NAME-OF-YOUR-COMPANY” (e.g. “What is eBay”). Explain in simple English what you’re doing. This is not the place to be clever, show off your extraordinary grasp of the English language or think that your pitch deck is a novel where you build tension and excitement in the first half and surprise the reader in the end.

If I (or any investor, for that matter) don’t understand what you are doing in the first 10-15 seconds, you’ve already lost me. I know investors who don’t go past slide two if they don’t grasp what the company does.

Simple and obvious, eh? The crazy thing is, I get tons of decks which make me literally go “WTF?!” Let me illustrate the point. Here are two pitches which get it right (names and some product details have been changed):

What is ACME CORP?

  • A unique coffee subscription business…
    1) Rare, unique, single-origin coffee
    2) Freshly roasted to order each month
    3) A different coffee every month, curated by experts
    …delivered monthly through the letterbox.
  • A subscription-only business (sign up online), available for self-purchase or as a special gift

Moving on — pitch #2:

ACME CORP is an E-FASHION club that provides affordable trendy shoes, accessories and PERSONALIZED style recommendations to European Women!

Clear. They are a shopping club focused on shoes for European women. Crisp and clear. No fancy language. Just the facts.

And now for something different—pitch #3:

ACME CORP is an online collaboration hub and development environment for makers, hobbyist and engineers.

I mostly have no clue what they are doing. Worse — as I actually know the team and their product — this isn’t even an accurate statement of what they are up to. What they’ve built is GitHub plus a deployment system for Arduinos. Their statement is overly broad and unspecific.

So your first slide (not the cover — your first actual content slide) is the most important slide in your deck. It’s the slide where I decide if I want to see and hear more. It’s the slide which sets the tone for the rest of your interaction. And it’s the slide which forms my first image of you and your company. Spend time on it. Make it perfect. Pitch it to strangers who know nothing about your company and see if they get it. Show them just this one slide and then ask them to describe back to you what your company does. If they don’t get it, or it’s inaccurate, go back and revise the slide until you get it right.

The Team Slide

We all know that investors put their money into teams, not ideas. Ideas are cheap and plentiful. Incredible execution is rare. Ideas evolve (yeah — sometimes they “pivot”). Great teams excel at this, mediocre teams fall apart. So you invest in people.

Which means your team slide better be good.

I can’t tell you too much about the composition of your team — as this is highly dependent on your idea and the industry you’re in.

Teams of one are usually a bad sign. If you can’t bring a team to the table when you ask for funding it just doesn’t reflect well on your ability to recruit. Teams that have a bunch of people listed as “will come on board once we can pay her/him a salary” don’t work. People who are not willing to take the risk are employees, not cofounders.

Don’t bullshit when you talk about your team. Sentences such as “X has 10 years of Internet experience” make me cringe, then laugh, then delete your email. Every man and his dog has ten years of “Internet experience” by now. Be honest and tell me what your team did. If your team hasn’t done anything interesting, well, that’s something you should think about; you won’t be able to hide it anyway. “Y is a Ruby ninja”? I’ll let your teammate speak with one of our portfolio companies for three minutes and I’ll know if he’s a ninja or a backwater coder who still designs MySpace pages for his school chorus. Oh, and by the way: nobody is a fucking ninja, superstar or what-have-you. Cut the lingo.

Lastly—and this shows your attention to detail — make sure the pictures you use have a common look and feel and don’t look like a collection of randomly selected vacation snapshots. Unless you’re all totally wasted in the shots. That might be funny.

Let’s Talk About Advisors

Typically you add them to your team slide. And most of the time you see something along the lines of a list of names a la “John Doe, Founder ACME Corp.,” sometimes with a picture.

Here’s the deal—advisors provide two signals for an investor:

  1. Instant credibility if you have “brand name” advisors
  2. A strong support network

The first one only works if you have truly recognizable names which are relevant to your field. Sergey Brin works; a random director at a large company doesn’t. The second one is trickier. In pitch presentations I often wonder what those people actually do for you, as the entrepreneurs often either just rattle off the names of the advisors on their slide or gloss over them and say something to the tune of, “and we have a bunch of awesome advisors.”

If you want to make your pitch stronger I recommend you make sure that your advisors are relevant (no, your dad’s buddy from the plumbing shop down the road most likely doesn’t count) and that they are actual advisors and not only people with whom you emailed once. You can spend 15 seconds in your pitch describing the relationship you have with your advisors. (e.g. “We assembled a team of relevant advisors with deep expertise in X, Y and Z. To leverage their expertise and network we meet with them every month for a one-hour session and can also ask for advice via phone and email anytime in between.”)

By the way, there is something to be said about the celebrity advisor. As awesome as it might be that you got Tim Cook from Apple to be an advisor I instantly ask myself how much time you actually get out of him—he’s busy as hell. So you might want to anticipate that (silent) question and address it head on in your pitch.

The Dreaded Finance Slide

The one slide that is made up of one hundred percent pure fantasy. And yet it seems (and is) so important. Everybody knows that whatever you write down on your finance slide is not reality but (in the best case) wishful thinking. That includes your investor.

Why bother? Simple. The finance slide shows your investor that you understand the fundamentals of your business. That you have a clue. That he can trust you with his money.

So what do you need to know about your finance slide? As so often in life, the answer is: it depends. Here’s my personal take. For starters you want to show a one-year plan which covers month-by-month revenue and costs. Lump costs into broader buckets and don’t fall into the trap of feigned accuracy by showing off precise numbers. Nobody will believe that you really know your marketing costs will be precisely $6,786 in month eight. You’re much better off eyeballing these numbers. Also, don’t present endless lists of minutiae such as your telecommunication costs per month. Show your business literacy by presenting ballpark numbers that make sense (e.g. salaries should come as fully loaded head counts—not doing this is a strong indicator that you don’t know the fundamentals of running a business).

On the revenue side you want to do a couple of things. First of all explain your business model (this is something you might want to pull out on its own slide). Then give me your assumptions—sometimes it makes sense to work with scenarios (best, base and worst case). And then use this model to validate your revenue assumptions bottom-up. If you say you will have 5,000 customers in month three, what does that mean in terms of customer acquisition per day, how does that influence your cost model, etc.
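To make that bottom-up check concrete, here is a toy calculation using the hypothetical figure from the paragraph above (5,000 customers by month three); the 30-day month is a deliberate simplification:

```python
# Toy bottom-up validation of a revenue assumption.
target_customers = 5000
days = 3 * 30  # three rough 30-day months

signups_per_day = target_customers / days
# About 56 signups needed every single day, starting on day one --
# a number you can now sanity-check against your marketing budget.
```

If that daily number looks implausible next to your acquisition plan, the revenue line above it is fantasy, and this exercise just told you so before an investor did.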

This is probably the most useful part of putting together your financials. It allows you to make an educated guess about your model and will, if done right, feed tons of information back into your own thinking. Weirdly, a lot of entrepreneurs don’t do this and then fall on their faces when, in a pitch meeting, a savvy investor picks the numbers apart and points out the gaping holes (something I like to do—it really shows if someone has thought hard about their business or if they are only a wannabe entrepreneur).

And then you do the same for year two and three — this time on a quarterly basis.

Above all, do this for yourself, not because you need to do this for your investors. Use this exercise to validate your assumptions, to get a deep understanding of the underlying logic of your business. Your startup will be better off for it.

The Business Model Slide

You have a business model, right? Or at least you pretend to have one? If you don’t, and you believe you can get by with a variant of the “we’ll figure it out” phrase, you’d better have stratospheric growth and millions of users.

Here’s the thing about the business model slide in your pitch deck: if you spend time making it clear, concise and easy to grasp, I am much more likely to believe that you have at least a foggy clue about what you’re doing. If, in contrast, you do what I see so many startups do and give me a single bullet, hidden on one of your other slides, which reads something like “Freemium Model” and that’s it… well, that’s a strong indicator that you haven’t thought about this a whole lot, that you are essentially clueless about your business, and that I really shouldn’t trust you with my money.

With that being said, what does a great business model slide look like? It really depends on what your business model is (or what you believe it to be, to be precise—these things tend to change). What I look for is a clear expression of your model, the underlying assumptions and the way the model works out. Often this can be neatly expressed in an infographic showing where and how the money comes in, what the value chain looks like and what the margins are along the chain. Here’s an example: it’s not perfect, yet much better than simply expressing your model as “we take a 25% margin.”
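As a sketch of the kind of value-chain arithmetic such an infographic encodes (every number here is invented for illustration, including the payment fee; only the 25% take rate echoes the quote above):

```python
# Hypothetical unit economics for a marketplace taking a 25% margin.
gross_sale = 100.00                  # what the customer pays
seller_payout = gross_sale * 0.75    # passed down the value chain
payment_fees = gross_sale * 0.03     # e.g. card processing (assumed)

net_revenue = gross_sale - seller_payout - payment_fees
# The "25% margin" headline shrinks to 22% once fees enter the chain.
```

Walking an investor through each link like this beats the bare "25% margin" bullet precisely because it exposes where the headline number leaks.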

Spend some time on your business model slide. Make it clear and concise. The litmus test is: Show someone who doesn’t know anything about your company just this one slide and ask them to explain back to you your business model. If they get it and they get it in its entirety you are off to the races.

The Ask

You know that you always have to have an ask in your pitch, right? It might be for money, it might be for partnerships or just feedback—but never ever leave an audience without asking for something.

There’s a lot of art and science to asking. Here’s my view on the science piece. Let’s assume you pitch for money. Don’t start your pitch with your ask. By putting your ask first you A) rob yourself of the wonderful poetry of a well-delivered pitch (as everyone only thinks about the dollars), B) might lose a good chunk of your audience who don’t fall into your price bracket (believe me, more often than not they gladly invest if they just hear you out and get excited by your company) and C) will have every investor judge every single slide against the dollar amount you put up.

That said, don’t just close your pitch with a “We’re raising $500k. Thank you very much.” Give me a bit more detail on what you need the money for and how long it is projected to last (pro tip: make sure you budget for enough runway—raising for too short a period is a classic beginner’s mistake). You want to say something like: “We’re raising $500k, which will go towards building out our engineering team, building our Android app and getting the first wave of marketing out, which should get us to 500k users. The round is projected to last 12 months and will validate our milestones.”
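A quick back-of-the-envelope check on the ask itself, using the example figures above ($500k lasting 12 months); this is a sketch of the sanity check, not a budgeting method:

```python
# Does the ask match the plan? Implied burn from the example figures.
raise_amount = 500_000
runway_months = 12

implied_monthly_burn = raise_amount / runway_months
# Roughly $41,667/month: the engineering team, the Android app and
# the marketing wave all have to fit inside this number, or the
# 12-month runway claim doesn't hold.
```

Investors will do this division in their heads the moment the number goes up, so it pays to have done it first.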

A Word About Design

One question I get quite often about pitch decks is: How much text should be or need to be on the slides? This can be a tricky question—you will both present the deck to an audience (in which case you tend to want to have less text and more emphasis on your delivery) and you’ll send the deck to investors via email (in which case a slide with just an image on it won’t work—the recipient doesn’t have the context of your verbal presentation).

Guy Kawasaki famously formulated the 10/20/30 rule: 10 slides, 20 minutes, 30-point minimum font size. This is a great starting point, and what I would recommend for your in-person investor presentation. But it might not work for the deck you want to email.

Here’s what I would do (and have done): Start with a slide deck where each and every slide can stand on its own. Assume you give the slide deck to someone who has no context about you and your startup. And at the same time—treat your deck like a deck and not a word document. Keep text short, reduce it to the bare necessity, cut out unnecessary language. Keep the whole deck short—one should be able to flip through your deck in a few minutes and digest the whole thing in 10-15 minutes.

Once you have this deck, which is the one you send out to investors, take it and cut out all the words which are not necessary for an in-person presentation. This gives you the deck that you present. Keeping the two decks in sync with regard to slides, order and design makes it easier for someone who saw your deck to recognize it in your pitch.


The Flow

The best pitch deck fails to deliver if your overall flow isn’t right. Flow has as much to do with the logical order of your slides as with a certain level of theatrical drama (tension/release) and your personal delivery.

Guy recommends ten slides in the following order:

  • Problem
  • Your solution
  • Business model
  • Underlying magic/technology
  • Marketing and sales
  • Competition
  • Team
  • Projections and milestones
  • Status and timeline
  • Summary and call to action

Personally I think this is as good an order as most. Some people like to talk about the team earlier (as investors invest into people first and foremost), others have specific slides talking about some intricate issues specific to the business they are pitching.

For me, it comes down to a logical order: talk about the problem you’re solving first, then present the solution (the tension-and-release arc), and do what feels good for you. I prefer a slide deck that is a bit off but comes with an amazing in-person presence over a great slide deck with an uninspired presentation any day.

Note that you want to have a bit of drama in your deck—yet it’s not your school play where you try to outcompete Shakespeare. Don’t spend half an hour on building tension and then, TADA!, present your solution. In the end it’s all about balance.

And hey, send me your deck and I’ll provide honest, direct feedback. I won’t bite. Promise.

Cameron KaiserIonPower passes V8!

At least in Baseline-only mode, but check it out!

Starting program: /Volumes/BruceDeuce/src/mozilla-36t/obj-ff-dbg/dist/bin/js --no-ion --baseline-eager -f run.js
warning: Could not find malloc init callback function.
Make sure malloc is initialized before calling functions.
Reading symbols for shared libraries ....................................................................+++......... done
Richards: 144
DeltaBlue: 137
Crypto: 215
RayTrace: 230
EarleyBoyer: 193
RegExp: 157
Splay: 140
NavierStokes: 268
Score (version 7): 180

Program exited normally.

Please keep in mind this is a debugging version, and performance is impaired relative to PPCBC (and if I had to ship a Baseline-only compiler in TenFourFox 38, it would still be PPCBC, because it has the best track record). However, all of the code cleanup for IonPower and its enhanced debugging capabilities paid off: with one exception, all of the bugs I had to fix to get it passing V8 were immediately flagged by sanity checks during code generation, saving much laborious single-stepping through generated assembly to find problems.

I have a Master's program final I have to study for, so I'll be putting this aside for a few days, but after I thoroughly bomb it the next step is to mount phase 4, where IonPower can pass the test suite in Baseline mode. Then the real fun will begin -- true Ion-level compilation on big-endian PowerPC. We are definitely on target for 38, assuming all goes well.

I forgot to mention one other advance in IonPower, which Ben will particularly appreciate if he still follows this blog: full support for all eight bitfields of the condition register. Unfortunately, it's mostly irrelevant to generated code because Ion assumes, much to my disappointment, that the processor possesses only a single set of flags. However, some sections of code that we fully control can now do multiple comparisons in parallel over several condition registers, reducing our heavy dependence upon (and usually hopeless serialization of) cr0, and certain FPU operations that emit to cr1 (or require the FPSCR to dump to it) can now branch directly upon that bitfield instead of having to copy it. Also, emulation of mcrxr on G5/POWER4+ no longer has a hard-coded dependency upon cr7, simplifying much conditional branching code. It's a seemingly minor change that nevertheless greatly helps to further unlock the untapped Power in PowerPC.

Karl DubostBugs and spring cleaning

We have a load of bugs in Tech Evangelism which are not taken care of. Usually the pattern is:

  1. UNCONFIRMED: bug reported
  2. NEW: analysis done ➜ [contactready]
  3. ASSIGNED: attempt at a contact ➜ [sitewait]
  4. 🌱, ☀️, 🍂, ❄️… and 🌱
  5. one year later nothing has happened. The bug is all dusty.

A couple of scenarios might have happened during this time:

  • The bug has been silently fixed (Web dev didn't tell, the site changed, the site disappeared, etc.)
  • We didn't get a good contact
  • We got a good contact, but the person has forgotten to push internally
  • The contact/company decided it would be a WONTFIX (Insert here 👺 or 👹)

I set up for myself a whining email, available from the Bugzilla administration panel. The feature is described as:

Whining: Set queries which will be run at some specified date and time, and get the result of these queries directly per email. This is a good way to create reminders and to keep track of the activity in your installation.

The event is defined by creating a saved search and associating it with a recurring schedule.

Admin panel for setting the whining mail

Every Monday (Japanese time), I receive an email which lists the bugs that have not received comments for the last 30 weeks (6+ months). It's usually between 2 and 10 bugs to review each Monday; not a high workload.

Email sent by my whining schedule

Still, that doesn't seem very satisfying, and it relies entirely on my own setup. How do we help contributors, volunteers or even other employees at Mozilla?

All doomed?

There might be actions to improve. Maybe some low hanging fruits could help us.

  • To switch from [sitewait] to [needscontact] when there has been no comment on the bug for 𝓃 weeks. The rationale here is that the contact was probably not good, and we need to flag the bug again as something that someone can feel free to take action on.
  • To improve the automated testing of Web sites (bottom of the page) done by Hallvord. The Web site tests are collected on Github. And you can contribute new tests. For example, when the test reports that the site has been FIXED, a comment could be automatically posted on the bug itself with a needinfo on the ASSIGNED person. Though that might create an additional issue related to false positives.
  • To have a script for easily setting up the whining email that I set up for myself, to help contributors have this kind of reminder if they wish.
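The staleness check at the heart of both the whining search and the first suggestion above can be sketched in a few lines. This is a hypothetical helper of mine, not existing Socorro or webcompat tooling; it assumes bug records shaped like Bugzilla's REST API output, which exposes a `last_change_time` field:

```python
from datetime import datetime, timedelta

def stale_bugs(bugs, weeks=30, now=None):
    """Return the bugs whose last change is older than `weeks` weeks."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(weeks=weeks)
    return [
        b for b in bugs
        if datetime.strptime(b["last_change_time"], "%Y-%m-%dT%H:%M:%SZ") < cutoff
    ]

# Sample records mimicking Bugzilla REST API bug objects.
bugs = [
    {"id": 1, "last_change_time": "2015-02-01T00:00:00Z"},
    {"id": 2, "last_change_time": "2014-01-15T00:00:00Z"},
]
old = stale_bugs(bugs, weeks=30, now=datetime(2015, 2, 26))
# Only bug 2 is older than 30 weeks.
```

Wrapped around a fetch from the Bugzilla REST API, a cron job running something like this could post the weekly reminder for anyone, without each contributor configuring their own whining schedule.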

Other suggestions? This is important because, beyond Mozilla, we will slowly face the same issue on the 💡 webcompat site for the already-contacted Web sites. Please discuss on your own blog and/or send an email to the compatibility list on this thread.


PS: No emojis were harmed in this blog post. All emojis appearing in this work are fictitious. Any resemblance to real persons, living or dead, is purely coincidental.

The Mozilla BlogA Major Victory for the Open Web

We just accomplished something very important together. Today, the U.S. Federal Communications Commission voted for strong net neutrality protections. This happened because millions of people — including many hundreds of thousands
in Mozilla’s community — joined together as citizens of the Web to demand those strong protections.

This is an important victory for the world’s largest public resource, the open Web. Net neutrality is a key aspect of enabling innovation from everywhere, and especially from new players and unexpected places. Net neutrality allows citizens and consumers to access new innovations and judge the merit for themselves. It allows individual citizens to make decisions, without gate-keepers who decide which possibilities can become real. Today’s net neutrality rules help us protect this open and innovative potential of the Internet.

Mozilla builds our products to put this openness and opportunity into the hands of individuals. We are organized as a non-profit so that the assets we create benefit everyone. Our products go hand-in-hand with net neutrality; they need net neutrality to bring the full potential of the Internet to all of us.

Today’s net neutrality rules are an important step in protecting opportunity for all. This victory was not inevitable. It occurred because so many people took action, so many people put their voice into the process. To each of you we say “Thank you.” Thank you for taking the time to understand the issue, for recognizing it’s important, and for taking action. Thank you for helping us build openness and opportunity into the very fabric of the Internet.

Video message from Mitchell Baker, Executive Chairwoman, Mozilla Foundation

Planet Mozilla InternsMichael Sullivan: Parallelizing compiles without parallelizing linking – using make

I have to build LLVM and Clang a lot for my research. Clang/LLVM is quite large and takes a long time to build if I don’t use -j8 or so to parallelize the build; but I also quickly discovered that parallelizing the build didn’t work either! I work on a laptop with 8gb of RAM and while this can easily handle 8 parallel compiles, 8 parallel links plus Firefox and Emacs and everything else is a one way ticket to swap town.

So I set about finding a way to parallelize the compiles but not the links. Here I am focusing on building an existing project; there are probably nicer ways that someone writing the Makefile could use to make this easier, or even the default, but I haven’t really thought about that.

My first attempt was the hacky (while ! pgrep ld.bfd.real; do sleep 1; done; killall make ld.bfd.real) & make -j8; sleep 2; make. Here we wait until a linker has run, kill make, then rerun make without parallel execution. I expanded this into a more general script:


This approach is kind of terrible. It’s really hacky, it has a concurrency bug (that I would fix if the whole thing wasn’t already so bad), and it slows things down way more than necessary; as soon as one link has started, nothing more is done in parallel.

A better approach is to use locking to make sure only one link command can run at a time. There is a handy command, flock, that does just that: it uses a lock on a file to serialize execution of a command. We can just replace the Makefile’s linker command with a command that calls flock and everything will sort itself out. Unfortunately there is no totally standard way for Makefiles to represent how they do linking, so some Makefile source diving becomes necessary. (Many use $(LD); LLVM does not.) With LLVM, the following works: make -j8 'Link=flock /tmp/llvm-build $(Compile.Wrapper) $(CXX) $(CXXFLAGS) $(LD.Flags) $(LDFLAGS) $(TargetCommonOpts) $(Strip)'

That’s kind of nasty, and we can do a bit better. Many projects use $(CC) and/or $(CXX) as their underlying linking command; if we override that with something that uses flock then we’ll wind up serializing compiles as well as links. My hacky solution was to write a wrapper script that scans its arguments for “-c”; if it finds a “-c” it assumes it is a compile, otherwise it assumes it is a link and uses locking. We can then build LLVM with: make -j8 'CXX=lock-linking /tmp/llvm-build-lock clang++'.

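Since the embedded Gist isn’t reproduced here, this is my sketch of what such a wrapper could look like (a reconstruction of the idea, not the author’s code; the lock_linking name and lock-file path are placeholders), written as a shell function:

```shell
# Sketch of a lock-linking wrapper (reconstruction; names are placeholders).
# Usage: lock_linking LOCKFILE COMPILER [args...]
lock_linking() {
  lockfile=$1
  shift
  # Heuristic from the post: a "-c" argument means this is a compile,
  # which may run freely in parallel.
  for arg in "$@"; do
    if [ "$arg" = "-c" ]; then
      "$@"
      return
    fi
  done
  # No "-c": assume it is a link and serialize it with flock(1).
  flock "$lockfile" "$@"
}
```

In practice this would live in a standalone script on $PATH so that make can be told something like CXX=lock-linking /tmp/llvm-build-lock clang++.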

Is there a better way to do this sort of thing?

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.

Kim MoirRelease Engineering special issue now available

The release engineering special issue of IEEE software was published yesterday.  This issue focuses on the current state of release engineering, from both an industry and research perspective. Lots of exciting work happening in this field!

I'm interviewed in the roundtable article on the future of release engineering, along with Chuck Rossi of Facebook and Boris Debic of Google.  Interesting discussions on the current state of release engineering at organizations that scale large number of builds and tests, and release frequently.  As well,  the challenges with mobile releases versus web deployments are discussed. And finally, a discussion of how to find good release engineers, and what the future may hold.

Thanks to the other guest editors on this issue - Stephany Bellomo, Tamara Marshall-Klein, Bram Adams, Foutse Khomh and Christian Bird - for all their hard work that made this happen!

As an aside, when I opened the issue, the image on the front cover made me laugh.  It's reminiscent of the cover on a mid-century science fiction anthology.  I showed Mr. Releng and he said "Robot birds? That is EXACTLY how I pictured working in releng."  Maybe it's meant to represent that we let software fly free.  In any case, I must go back to tending the flock of robotic avian overlords.

David HumphreyRepeating old mistakes

This morning I've been reading various posts about the difficulty ahead for the Pointer Events spec, namely, Apple's (and by extension Google's) unwillingness to implement it. I'd encourage you to read both pieces, and whatever else gets written on this in the coming days.

I want to comment not on the spec as such, but on the process on display here, and the effect it has on the growth and development of the web. There was a time when the web's course was plotted by a single vendor (at the time, Microsoft), and decisions about what was and wasn't headed for the web got made by employees of that corporation. This story is so often retold that I won't pretend you need to read it again.

And yet here we are in 2015, where the web on mobile, and therefore the web in general, is effectively in the control of one vendor; a vendor who, despite its unmatched leadership and excellence in creating beautiful hardware, has shown none of the same abilities in its stewardship and development of the web.

If the only way to improve and innovate the web is to become an employee of Apple Inc., the web is in real danger.

I think that the larger web development community has become lax in its care for the platform on which it relies. While I agree with the trend toward writing JS to supplement and extend the browser, I think that it also tends to lull developers into thinking that their job is done. We can't simply write code on the platform, and neglect writing code for the platform. To ignore the latter, to neglect the very foundations of our work, is to set ourselves up for a time when everything collapses into the sand.

We need more of our developer talent and resources going into the web platform. We need more of our web developers to drop down a level in the stack and put their attention on the platform. We need more people in our community with the skills and resources to design, implement, and test new specs. We need to ensure that the web isn't something we simply use, but something we are active in maintaining and building.

Many web developers that I talk to don't think about the issues of the "web as platform" as being their problem. "If only Vendor X would fix their bugs or implement Spec Y--we'll have to wait and see." There is too often a view that the web is the problem of a small number of vendors, and that we're simply powerless to do anything other than complain.

In actual fact there is a lot that one can do even without the blessing or permission of the browser vendors. Because so much of the web is still open, and the code freely available, we can and should be experimenting and innovating as much as possible. While there's no guarantee that code you write will be landed and shipped, there is still a great deal of value in writing patches instead of just angry tweets: it is necessary to change peoples' minds about what the web is and what it can do, and there is no better way than with working code.

I would encourage the millions of web developers who are putting time into technologies and products on top of the web to also consider investing some time in the web itself. Write a patch, make some builds, post them somewhere, and blog about the results. Let data be the lever you use to shift the conversation. People will tell you that something isn't possible, or that one spec is better than another. Nothing will do more to convince people than a working build that proves the opposite.

There's no question that working on the web platform is harder than writing things for the web. The code is bigger, older, more complicated, and requires different tooling and knowledge. However, it's not impossible. I've been doing it for years with undergraduate students at Seneca, and if 3rd and 4th years can tackle it, so too can the millions of web developers who are betting their careers and companies on the web.

Having lived through and participated in every stage of the web's development, it's frustrating to see that we're repeating mistakes of the past, and allowing large vendors to control too great a stake of the web. The web is too important for that, and it needs the attention and care of a global community. There's something you can do, something I can do, and we need to get busy doing it.

Air MozillaBugzilla Development Meeting

Bugzilla Development Meeting Help define, plan, design, and implement Bugzilla's future!

Christian HeilmannSimple things: Storing a list of booleans as a single number

This blog started as a scratch pad of simple solutions to problems I encountered. So why not go back to basics?

Yesterday I was asked by someone if there is a possibility to store the state of a collection of checkboxes in a single value. The simplest way I could think of doing this is by using binary conversion.

Tron binary

You can see the result of my approach in this JSBin:

Storing a list of booleans as a single number

What’s going on here? The state of a checkbox is a Boolean, meaning it is checked or unchecked. This could be true or false, or, as JavaScript is a lenient language, 1 or 0. That’s why we can loop over all the checkboxes and assemble a string of their states that consists of zeros and ones:

var state = '';
var inputs = document.querySelectorAll('input');
var all = inputs.length;
for (var i = 0; i < all; i++){
  state += inputs[i].checked ? '1' : '0';
}

This results in a string like 1001010101. This could be our value to store, but looks pretty silly and with 50 or more checkboxes becomes very long to store, for example, as a URL parameter string.

That’s why we can use parseInt() to convert this binary number into a decimal one. That’s what the second parameter of parseInt() is for – not only to please Douglas Crockford and JSLint (as it is preset to decimal – 10 – people keep omitting it). The counterpart of parseInt() in this case is toString(), and that one also takes an optional parameter: the radix of the number system you convert to. That way you can convert this state back and forth:

x = parseInt('1001010101', 2);
// x -> 597
x.toString(2);
// "1001010101"

Once converted, you turn it back into a string and loop over the values to set the checked state of the checkboxes accordingly.

A small niggle: leading zeroes don’t work

One little problem here is that if the state results in a string with leading zeroes, you get a wrong result back as toString() doesn’t create them (there is no knowing how long the string needs to be, all it does is convert the number).

x = parseInt('00001010101', 2);
// x -> 85, but x.toString(2) is "1010101" – the leading zeroes are gone

You can avoid this in two ways: either pad the string by always starting it with a 1, or reverse the string and loop over the checkboxes in reverse. In the earlier example I did the padding part; in this JSBin you can see the reversing trick:

Storing a list of booleans as a single number (reversed)

Personally, I like the reversing method better, it just feels cleaner. It does rely a lot on falsy/truthy though as the size of the resulting arrays differs.


In any case, this only works when the amount of checkboxes doesn’t change in between the storing and re-storing, but that’s another issue.
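Putting the pieces together, here is a minimal sketch of the whole round trip using the pad-with-a-leading-1 approach (function and variable names are mine, not from the JSBins):

```javascript
// Encode an array of booleans as a single number. A constant '1' prefix
// protects leading zeroes from being lost by toString(2) later.
function encodeStates(states) {
  var bits = '1';
  for (var i = 0; i < states.length; i++) {
    bits += states[i] ? '1' : '0';
  }
  return parseInt(bits, 2);
}

// Decode the number back into an array of booleans, dropping the
// padding '1' that encodeStates() added.
function decodeStates(value) {
  var bits = value.toString(2).slice(1);
  var states = [];
  for (var i = 0; i < bits.length; i++) {
    states.push(bits.charAt(i) === '1');
  }
  return states;
}

// encodeStates([false, false, true]) -> 9  ('1001' in binary)
// decodeStates(9) -> [false, false, true]
```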

As pointed out by Matthias Reuter on Twitter this is also limited to 52 checkboxes, so if you need more, this is not the solution.

Karl DubostWeb Compatibility Summit 2015

The Web Compatibility Summit was organized in Mountain View (USA) on February 18, 2015. I summarize the talks that were given during the day. I encourage you to continue the discussion.

If you want to look at the talks:

Intro to Web Compatibility

Mike Taylor (Mozilla) introduced the Web Compatibility topic. The Web is a giant set of new and old things, and we need to care for a lot of different things that are sometimes incompatible. Features will disappear and new features will emerge, but you still need to make the Web usable for all users, whatever their devices, countries, or browsers.

Mike reminded us of the evolution of User Agent strings and how they grew with more and more terms. The reason is that the User Agent string became an ID for having the content rendered. So any new User Agent is trying to get access to the Web site content by not being filtered out.

Then he went through some traditional bugs (horrors), be it JavaScript, CSS, etc. The webcompat site has been created to give a space for users to report Web Compatibility issues they have on a Web site. But the space is also useful for browser vendors: it is relatively easy to tie the bug reporting of browsers directly to it.


Beyond Vendor Prefixes

Jacob Rossi (Microsoft) introduced the purpose and the caveats of vendor prefixes. Vendor prefixes were created to help people test new APIs safely without breaking other browsers. They shield against collisions with other implementations, but they also create Web Compatibility issues. The main issue is that Web developers use these on production sites and in articles, examples, and documentation.

Microsoft tried to contact Web sites with bogus code examples and articles. They also created new content with the correct way of writing things. The results were promising for the FullScreen API but the experiment was less successful for other properties. Basically, fix the problem before it happens.

So how do we keep the possibility of a large surface for feedback and at the same time limit the usage, so that it doesn't become a requirement to use a specific browser? The first idea is to put new features behind flags. Then the issue becomes that the feedback is shallow. So Jacob floated the idea of an API trial, where someone would register to get a key for enabling a feature. It would help developers test and, at the same time, make it possible to set deadlines for the test.

It would probably require a community effort to set up, and it has a cost. Where should this discussion happen? It could be a community group at W3C. Not all APIs need to be used at scale to get a real sense of the feedback. IndexedDB and appcache would have been good candidates for this. If there were a community site, it would be yet another good way to build awareness about the features.

A good discussion has been recorded on the video.

How CSS is being used in the wild

Alex McPherson (QuickLeft) introduced the wonderful work he did on CSS properties as they are currently (2014) used on the Web by Web devs. The report was initially done to understand what QuickLeft should recommend in terms of technology when they tackle a new project. For this report, they scraped the CSS of the top 15,000 Web sites, checking frequencies and values and doing graph distributions. The purpose was not to be exact academic research, so there are definitely caveats in the study, but it gives a rough overview of what is done. The data were collected through a series of HTTP requests. One of the consequences is that we probably miss everything which is set through JavaScript. A better study would include a browser crawler handling the DOM. There are probably variations with regard to the user agent too.

It would be good to have a temporal view of these data and to repeat the study continuously, so we can identify how the Web is evolving. Browser vendors seem to have more resources to do this type of study than a single person in an agency.

Engaging with Web Developers for Site Outreach

Colleen Williams (Microsoft) talked about what it takes to do daily Web Compatibility work. Contacting Web developers is really about trying to convince people that there could be a benefit for them in supporting a wider range of platforms.

Social networking, and LinkedIn in particular, are very useful for contacting the right persons in companies. It's very important to be very upfront and to tell developers:

  • Who are we?
  • What are we doing?
  • Why are we contacting them?

Microsoft's IE team has a list of top sites and systematically tests every new version of IE against these sites. It's an opportunity for contacting Web sites which are broken. When contacting Web sites, it's important to understand that you are building a long-term relationship. You might have to recontact the people working for this company and/or Web site in a couple of months. It's important to nurture a respectful and interesting relationship with the Web developers. You can't say "your code sucks". It's important to talk to technical people directly.

Do not share your contact list with business departments. We need a level of privacy with the persons we are contacting. They keep in contact with you because they are eager to help solve technical issues. Empathy in the relationship goes a long way in terms of creating mutual trust. The relationship should always go both ways.

Having part of, or the full, solution to the issue will help you a lot in getting it fixed. It's better to show code and help the developer demonstrate what could work.

Companies' user support channels are usually, unfortunately, not the best tool for reaching the company. There's a difference between having a contact and having the right contact.

Web Compatibility Data: A Shared Resource

Finally, Justin Crawford (Mozilla) introduced the project of building a better set of site compatibility data. I encourage you to read his own summary on his blog.


At the end of the day we had a discussion in an unconference format, moderated by Alexa Roman. We discussed User Agent sniffing and the format of the UA string, console.log for reporting Web Compatibility issues, API trials, documentation, etc.

More information

You can contact and interact with us:

  • IRC: #webcompat
  • Discuss issues on Twitter: @webcompat
  • Report bugs:

Planet Mozilla InternsMichael Sullivan: Forcing memory barriers on other CPUs with mprotect(2)

I have something of an unfortunate fondness for indefensible hacks.

Like I discussed in my last post, RCU is a synchronization mechanism that excels at protecting read-mostly data. It is a particularly useful technique in operating system kernels because full control of the scheduler permits many fairly simple and very efficient implementations of RCU.

In userspace, the situation is trickier, but still manageable. Mathieu Desnoyers and Paul E. McKenney have built a Userspace RCU library that contains a number of different implementations of userspace RCU. For reasons I won’t get into, efficient read side performance in userspace seems to depend on having a way for a writer to force all of the reader threads to issue a memory barrier. The URCU library has one version that does this using standard primitives: it sends signals to all other threads; in their signal handlers the other threads issue barriers and indicate so; the caller waits until every thread has done so. This is very heavyweight and inefficient because it requires running all of the threads in the process, even those that aren’t currently executing! Any thread that isn’t scheduled now has no reason to execute a barrier: it will execute one as part of getting rescheduled. Mathieu Desnoyers attempted to address this by adding a membarrier() system call to Linux that would force barriers in all other running threads in the process; after more than a dozen posted patches to LKML and a lot of back and forth, it got silently dropped.

While pondering this dilemma I thought of another way to force other threads to issue a barrier: by modifying the page table in a way that would force an invalidation of the Translation Lookaside Buffer (TLB) that caches page table entries! This can be done pretty easily with mprotect or munmap.

Full details in the patch commit message.

Planet Mozilla InternsMichael Sullivan: Why We Fight

Why We Fight, or

Why Your Language Needs A (Good) Memory Model, or

The Tragedy Of memory_order_consume’s Unimplementability

This, one of the most terrifying technical documents I’ve ever read, is why we fight:


For background, RCU is a mechanism used heavily in the Linux kernel for locking around read-mostly data structures; that is, data structures that are read frequently but fairly infrequently modified. It is a scheme that allows for blazingly fast read-side critical sections (no atomic operations, no memory barriers, not even any writing to cache lines that other CPUs may write to) at the expense of write-side critical sections being quite expensive.

The catch is that writers might be modifying the data structure as readers access it: writers are allowed to modify the data structure (often a linked list) as long as they do not free any memory removed until it is “safe”. Since writers can be modifying data structures as readers are reading from it, without any synchronization between them, we are now in danger of running afoul of memory reordering. In particular, if a writer initializes some structure (say, a routing table entry) and adds it to an RCU protected linked list, it is important that any reader that sees that the entry has been added to the list also sees the writes that initialized the entry! While this will always be the case on the well-behaved x86 processor, architectures like ARM and POWER don’t provide this guarantee.

The simple solution to make the memory order work out is to add barriers on both sides on platforms where it is needed: after initializing the object but before adding it to the list, and after reading a pointer from the list but before accessing its members (including the next pointer). This cost is totally acceptable on the write-side, but is probably more than we are willing to pay on the read-side. Fortunately, we have an out: essentially all architectures (except for the notoriously poorly behaved Alpha) will not reorder instructions that have a data dependency between them. This means that we can get away with only issuing a barrier on the write-side and taking advantage of the data dependency on the read-side (between loading a pointer to an entry and reading fields out of that entry). In Linux this is implemented with the macros “rcu_assign_pointer” (which issues a barrier if necessary, and then writes the pointer) on the write-side and “rcu_dereference” (which reads the value and then issues a barrier on Alpha) on the read-side.

There is a catch, though: the compiler. There is no guarantee that something that looks like a data dependency in your C source code will be compiled as a data dependency. The most obvious way to me that this could happen is by optimizing “r[i ^ i]” or the like into “r[0]”, but there are many other ways, some quite subtle. This document, linked above, is the Linux kernel team’s effort to list all of the ways a compiler might screw you when you are using rcu_dereference, so that you can avoid them.

This is no way to run a railway.

Language Memory Models

Programming by attempting to quantify over all possible optimizations a compiler might perform and avoiding them is a dangerous way to live. It’s easy to mess up, hard to educate people about, and fragile: compiler writers are feverishly working to invent new optimizations that will violate the blithe assumptions of kernel writers! The solution to this sort of problem is that the language needs to provide the set of concurrency primitives that are used as building blocks (so that the compiler can constrain its code transformations as needed) and a memory model describing how they work and how they interact with regular memory accesses (so that programmers can reason about their code). Hans Boehm makes this argument in the well-known paper Threads Cannot be Implemented as a Library.

One of the big new features of C++11 and C11 is a memory model which attempts to make precise what values can be read by threads in concurrent programs and to provide useful tools to programmers at various levels of abstraction and simplicity. It is complicated, and has a lot of moving parts, but overall it is definitely a step forward.

One place it falls short, however, is in its handling of “rcu_dereference” style code, as described above. One of the possible memory orders in C11 is “memory_order_consume”, which establishes an ordering relationship with all operations after it that are data dependent on it. There are two problems here: first, these operations deeply complicate the semantics; the C11 memory model relies heavily on a relation called “happens before” to determine what writes are visible to reads; with consume, this relation is no longer transitive. Yuck! Second, it seems to be nearly unimplementable; tracking down all the dependencies and maintaining them is difficult, and no compiler yet does it; clang and gcc both just emit barriers. So now we have a nasty semantics for our memory model and we’re still stuck trying to reason about all possible optimizations. (There is work being done to try to repair this situation; we will see how it turns out.)

Shameless Plug

My advisor, Karl Crary, and I are working on designing an alternate memory model (called RMC) for C and C++ based on explicitly specifying the execution and visibility constraints that you depend on. We have a paper on it and I gave a talk about it at POPL this year. The paper is mostly about the theory, but the talk tried to be more practical, and I’ll be posting more about RMC shortly. RMC is quite flexible. All of the C++11 model apart from consume can be implemented in terms of RMC (although that’s probably not the best way to use it) and consume style operations are done in a more explicit and more implementable (and implemented!) way.

Planet Mozilla InternsMichael Sullivan: The x86 Memory Model

Often I’ve found myself wanting to point someone to a description of the x86’s memory model, but there wasn’t any that quite laid it out the way I wanted. So this is my take on how shared memory works on multiprocessor x86 systems. The guts of this description is adapted/copied from “A Better x86 Memory Model: x86-TSO” by Scott Owens, Susmit Sarkar, and Peter Sewell; this presentation strips away most of the math and presents it in a more operational style. Any mistakes are almost certainly mine and not theirs.

Components of the System:

There is a memory subsystem that supports the following operations: store, load, fence, lock, unlock. The memory subsystem contains the following:

  1. Memory: A map from addresses to values
  2. Write buffers: Per-processor lists of (address, value) pairs; these are pending writes, waiting to be sent to memory
  3. “The Lock”: Which processor holds the lock, or None, if it is not held. Roughly speaking, while the lock is held, only the processor that holds it can perform memory operations.

There is a set of processors that execute instructions in program order, dispatching commands to the memory subsystem when they need to do memory operations. Atomic instructions are implemented by taking “the lock”, doing whatever reads and writes are necessary, and then dropping “the lock”. We abstract away from this.


A processor is “not blocked” if either the lock is unheld or it holds the lock.

Memory System Operation

Processors issue commands to the memory subsystem. The subsystem loops, processing commands; each iteration it can pick the command issued by any of the processors to execute. (Each will only have one.) Some of the commands issued by processors may not be eligible to execute because their preconditions do not hold.

  1. If a processor p wants to read from address a and p is not blocked:
    a. If there are no pending writes to a in p’s write buffer, return the value from memory
    b. If there is a pending write to a in p’s write buffer, return the most recent value in the write buffer
  2. If a processor p wants to write value v to address a, add (a, v) to the back of p’s write buffer
  3. At any time, if a processor p is not blocked, the memory subsystem can remove the oldest entry (a, v) from p’s write buffer and update memory so that a maps to v
  4. If a processor p wants to issue a barrier
    a. If the barrier is an MFENCE, p’s write buffer must be empty
    b. If the barrier is an LFENCE/SFENCE, there are no preconditions; these are no-ops **
  5. If a processor p wants to lock the lock, the lock must not be held and p’s write buffer must be empty; the lock is set to be p
  6. If a processor p wants to unlock the lock, the lock must be held by p and p’s write buffer must be empty; the lock is set to be None


So, the only funny business that can happen is that a load can happen before a prior store to a different location has been flushed from the write buffer into memory. This means that if CPU0 executes “x = 1; r0 = y” and CPU1 executes “y = 1; r1 = x”, with x and y both initially zero, we can get “r0 == r1 == 0”.
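That store-buffering outcome can be reproduced with a toy simulation of the memory subsystem described above (my own sketch for illustration; it implements rules 1–3 only, not the lock or fences):

```javascript
// Toy x86-TSO machine: shared memory plus one FIFO write buffer per CPU.
function makeMachine() {
  var memory = { x: 0, y: 0 };
  var buffers = [[], []];
  return {
    // rule 2: writes go to the back of the processor's own buffer
    write: function (p, addr, val) { buffers[p].push([addr, val]); },
    // rules 1a/1b: read own newest buffered write, else read memory
    read: function (p, addr) {
      for (var i = buffers[p].length - 1; i >= 0; i--) {
        if (buffers[p][i][0] === addr) return buffers[p][i][1];
      }
      return memory[addr];
    },
    // rule 3: flush the oldest buffered write out to memory
    flushOne: function (p) {
      var w = buffers[p].shift();
      if (w) memory[w[0]] = w[1];
    }
  };
}

// CPU0: x = 1; r0 = y        CPU1: y = 1; r1 = x
var m = makeMachine();
m.write(0, 'x', 1);
m.write(1, 'y', 1);
var r0 = m.read(0, 'y');  // 0: CPU0's write to x is still buffered
var r1 = m.read(1, 'x');  // 0: likewise for CPU1's write to y
// r0 === 0 && r1 === 0 – the outcome forbidden under sequential consistency
```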

The common intuition that atomic instructions act like there is an MFENCE before and after them is basically right; MFENCE requires the write buffer to empty before it can execute and so do lock and unlock.

x86 is a pleasure to compile atomics code for. The “release” and “acquire” operations in the C++11 memory model don’t require any fencing to work. Neither do the notions of “execution order” and “visibility order” in my advisor’s and my RMC memory model.

** The story about LFENCE/SFENCE is a little complicated. Some sources insist that they actually do things. The Cambridge model models them as no-ops. The guarantees that they are documented to provide are just true all the time, though. I think they are useful when using non-temporal memory accesses (which I’ve never done), but not in general.


Brad LasseyDetecting slow add-ons

As we have been progressing towards a multi-process Firefox we have seen many bugs filed about slowness that, after investigation, turned out to be related to add-ons interacting poorly with the way multi-process Firefox now operates. This doesn’t necessarily mean that the add-on author has done anything wrong. Instead, code that worked fine with single-process Firefox now behaves differently, as seemingly innocuous calls can cause synchronous calls from one process to the other. See my previous post on unsafe CPOW usage as an example of how this can occur.

This motivated us to start trying to measure add-ons and detect when they may be slowing down a user’s browser such that we can notify the user. As luck would have it, we recently made a key change that makes all of this possible. In bug 990729 we introduced “compartment-per-add-on” (though there may be multiple compartments per add-on), such that add-on code and Firefox’s js code run in separate compartments. This allowed us to then track the amount of time we spend running a particular add-on by measuring the time spent in the compartments associated with that add-on. We did that in bug 1096666.

That has allowed us to start notifying users when we notice add-ons running slowly on their system. The first cut of that recently landed from bug 1071880, but a more polished UX is being tracked by bug 1124073. If you’re using Nightly and testing e10s (multi-process Firefox) and are notified of a slow add-on, please try disabling it to see if it improves how Firefox is running and report your results to You can see the raw data of what we are measuring by having a look at about:compartments. Also note that all of this is experimental. In fact, we’re already rewriting how we measure the time we spend in compartments in bug 674779.

Rizky AriestiyansyahUndo git rm -r command in github

Well, when I tried to delete a remote folder in my GitHub repository, the command I used was completely wrong. I used the git rm -r folder/ command ;( and it also deleted the folder in my local repo.
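For the record, as long as the removal has not been committed yet, checking the path back out of HEAD undoes both the staged deletion and the working-tree deletion (a sketch; the repo and folder names here are examples):

```shell
# A throwaway repo to demonstrate the recovery.
git init demo && cd demo
mkdir folder && echo "hello" > folder/file.txt
git add . && git -c user.name=demo -c user.email=demo@example.com commit -m "initial"

# The mistake: this removes the folder from the index AND the working tree.
git rm -r folder/

# The undo, as long as the removal was not committed yet:
# restore both the index and the working tree from HEAD.
git checkout HEAD -- folder/

ls folder/   # file.txt is back
```

If the deletion was already committed, the same form works against the previous commit instead, e.g. `git checkout HEAD~1 -- folder/`.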

Armen Zambranomozci 0.2.4 released - Support for intermittent oranges + major bug fixes

Big thanks to vaibhav1994 for his intermittent orange script contribution.

Also thanks to adusca and valeriat for their many contributions in this release.

Release notes

The major feature is being able to analyze data about reported intermittent oranges on Bugzilla and give the user the ability to trigger jobs to spot where the regression started.

A lot of bugs fixed and optimizations. I'm only highlighting some from this list: 0.2.3...0.2.4

Highlighted fixes:
  • Fixed/improved issues with jobs completed today
  • Added builds-4hr support
  • allthethings.json gets clobbered after 24 hours
    • This prevents relying on an old file
  • Drop the need to use --repo-name for all scripts
    • This prevents the user from having to add a redundant option
Release notes:
PyPi package:

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Justin CrawfordReport: Web Compatibility Summit

Last week I spoke at Mozilla’s Compatibility Summit. I spoke about compatibility data. Here are my slides.

Compatibility data…

  • is information curated by web developers about the cross-browser compatibility of the web’s features.
  • documents browser implementation of standards, browser anomalies (both bugs and features), and workarounds for common cross-browser bugs. It theoretically covers hundreds of browser/version combinations, but naturally tends to focus on browsers/versions with the most use.
  • is partly structured (numbers and booleans can answer the question, “does Internet Explorer 10 support input type=email?”) and partly unstructured (numbers and booleans cannot say, “Safari Mobile for iOS applies a default style of opacity: 0.4 to disabled textual <input> elements.”).
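A concrete (and entirely hypothetical) sketch of such a record, mixing the structured and unstructured halves; this is not MDN's or caniuse's actual schema:

```python
# A hypothetical compat record: not MDN's or caniuse's real schema,
# just an illustration of structured vs. unstructured compat data.
input_type_email = {
    "feature": "input[type=email]",
    # structured: booleans and version numbers can answer
    # "does Internet Explorer 10 support input type=email?"
    "support": {
        "ie": {"version_added": "10", "supported": True},
        "safari_ios": {"version_added": "3.1", "supported": True},
    },
    # unstructured: browser anomalies that only prose can capture
    "notes": {
        "safari_ios": "Applies a default style of opacity: 0.4 to "
                      "disabled textual <input> elements.",
    },
}

print(input_type_email["support"]["ie"]["supported"])  # True
```

The structured half is what a read/write API can query and automate against; the notes are what still needs a human reader.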

Compatibility data changes all the time. As a colleague pointed out last week, “Every six weeks we have all new browsers” — each with an imperfect implementation of web standards, each introducing new incompatibilities into the web.

Web developers are often the first to discover and document cross-browser incompatibilities since their work product is immediately impacted. And they do a pretty good job of sharing compatibility data. But as I said in my talk: Compatibility data is an oral tradition. Web developers gather at the town pump and share secrets for making boxes with borders float in Browser X. Web developers sit at the feet of venerated elders and hear how to make cross-browser compatible CSS transitions. We post our discoveries and solutions in blogs; in answers on StackOverflow; in GitHub repositories. We find answers on old blogs; in countless abandoned PHP forums; on the third page of search results.

There are a small number of truly canonical sources for compatibility data. We surveyed MDN visitors in January and learned that they refer to two sources far more than any others: MDN and caniuse. MDN has more comprehensive information — more detailed, for more browser versions, accompanied by encyclopedic reference materials. caniuse has a much better interface and integrates with browser market share data. Both have good communities of contributors. Together, they represent the canon of compatibility data.

Respondents to our recent survey said they use the two sites differently: They use caniuse for planning new sites and debugging existing issues, and use MDN for exploring new features and when answering questions that come up when writing code.

On MDN, we’re working on a new database of compatibility data with a read/write API. This project solves some maintenance issues for us, and promises to create more opportunities for web developers to build automation around web compatibility data (for an example of this, see, which scans your CSS for features covered by caniuse and returns a browser coverage report).

Of course the information in MDN and caniuse is only one tool for improving web compatibility, which is why the different perspectives at the Compat Summit were so valuable.


If we think of web compatibility as a set of concentric circles, MDN and caniuse (and the entire, sprawling web compatibility data corpus) occupy a middle ring. In the illustration above, the rings get more diffuse as they get larger, representing the increasing challenge of finding and solving incompatibility as it moves from vendor to web developer to site.

  • By the time most developers encounter cross-browser compatibility issues, those issues have been deployed in browser versions. So browser vendors have a lot of responsibility to make web standards possible; to deploy as few standards-breaking features and bugs as possible. Jacob Rossi from Microsoft invited Compat Summit participants to collaborate on a framework that would allow browser vendors to innovate and push the web forward without creating durable incompatibility issues in deployed websites.
  • When incompatibilities land in browser releases, web developers find them, blog about them, and build their websites around them. At the Compat Summit, Alex McPherson from Quickleft presented his clever work quantifying some of these issues, and I invited all present to start treating compatibility data like an important public resource (as described above).
  • Once cross-browser incompatibilities are discussed in blog posts and deployed on the web, the only way to address incompatibilities is to politely ask web developers to fix them. Mike Taylor and Colleen Williams talked about Mozilla’s and Microsoft’s developer outreach activities — efforts like, “bug reporting for the internet”.

At the end of the Compat Summit, Mike Taylor asked the participants whether we should have another. My answer takes the form of two questions:

  1. Should someone work to keep the importance of cross-browser compatibility visible among browser vendors and web developers?
  2. If we do not, who will?

I think the answer is clear.

Special thanks to Jérémie Patonnier for helping me get up to speed on web compatibility data.

Armen ZambranoListing builder differences for a buildbot-configs patch improved

Up until now, we updated the buildbot-configs repository to the "default" branch instead of "production" since we normally write patches against that branch.

However, there is a problem with this: buildbot-configs always has to be on the same branch as buildbotcustom. Otherwise, we can have changes land in one repository which require changes in the other one.

The fix was to simply make sure that both repositories are either on default or their associated production branches.

Besides this fix, I have landed two more changes:

  1. Use the production branches instead of 'default'
    • Use -p
  2. Clobber our whole set up (e.g. ~/.mozilla/releng)
    • Use -c

Here are the two changes:

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Nathan Froydmeasuring power usage with power gadget and joulemeter

In the continuing evaluation of how Firefox’s energy usage might be measured and improved, I looked at two programs, Microsoft Research’s Joulemeter and Intel’s Power Gadget.

As you might expect, Joulemeter only works on Windows. Joulemeter is advertised as “a software tool that estimates the power consumption of your computer.” Estimates for the power usage of individual components (CPU/monitor/disk/”base”) are provided while you’re running the tool. (No, I’m not sure what “base” is, either. Perhaps things like wifi?) A calibration step is required for trying to measure anything. I’m not entirely sure what the calibration step does, but since you’re required to be running on battery, I presume that it somehow obtains statistics about how your battery drains, and then apportions power drain between the individual components. Desktop computers can use a WattsUp power meter in lieu of running off battery. Statistics about individual apps are also obtainable, though only power draw based on CPU usage is measured (estimated). CSV logfiles can be captured for later analysis, taking samples every second or so.

Power Gadget is cross-platform, and despite some dire comments on the download page above, I’ve had no trouble running it on Windows 7 and OS X Yosemite (though I do have older CPUs in both of those machines). It works by sampling energy counters maintained by the processor itself to estimate energy usage. As a side benefit, it also keeps track of the frequency and temperature of your CPU. While the default mode of operation is to draw pretty graphs detailing this information in a window, Power Gadget can also log detailed statistics to a CSV file of your choice, taking samples every ~100ms. The CSV file also logs the power consumption of all “packages” (i.e. CPU sockets) on your system.
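Those CSV logs lend themselves to simple post-processing: integrating the power samples over time gives an energy estimate in joules. A sketch (the column names here are placeholders, not Power Gadget's exact headers, which vary between versions):

```python
import csv
import io

# Hypothetical log in the spirit of Power Gadget's CSV output; the
# real column names differ by version, so treat these as placeholders.
log = io.StringIO(
    "Elapsed Time (sec),Package Power (W)\n"
    "0.1,11.2\n"
    "0.2,12.0\n"
    "0.3,11.6\n"
)

rows = list(csv.DictReader(log))
energy = 0.0
prev_t = 0.0
for row in rows:
    t = float(row["Elapsed Time (sec)"])
    p = float(row["Package Power (W)"])
    energy += p * (t - prev_t)  # joules = watts * seconds
    prev_t = t

print(round(energy, 2))  # 3.48
```

Comparing such totals between two browsing sessions (or two browsers, as energia does) is straightforward; attributing the joules to a cause is the hard part the post describes.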

I like Power Gadget more than Joulemeter: Power Gadget is cross-platform, captures more detailed statistics, and seems a little more straightforward in explaining how power usage is measured.

Roberto Vitillo and Joel Maher wrote a tool called energia that compares energy usage between different browsers on pre-selected sets of pages; Power Gadget is one of the tools that can be used for gathering energy statistics. I think this sort of tool is the primary use case of Power Gadget in diagnosing power problems: it helps you see whether you might be using too much power, but it doesn’t provide insight into why you’re using that power. Taking logs along with running a sampling-based stack profiler and then attempting to correlate the two might assist in providing insight, but it’s not obvious to me that stacks of where you’re spending CPU time are necessarily correlated with power usage. One might have turned on discrete graphics in a laptop, or high-resolution timers on Windows, for instance, but that wouldn’t necessarily be reflected in a CPU profile. Perhaps sampling something different (if that’s possible) would correlate better.

The Servo BlogThis Week In Servo 25

This week, we merged 60 pull requests.

Selector matching has been extracted into an independent library! You can see it here.

We now have automation set up for our Gonk (Firefox OS) port, and gate on it.

Notable additions


Parallel painting, visualized:

New contributors



  • Rust in Gecko: We’re slowly figuring out what we need to do (and have a tracking bug). Some more candidates for a Rust component in Gecko are an MP4 demultiplexer, and a replacement of safe browsing code.
  • Selector matching is now in a new library
  • ./mach build-gonk works. At the moment it needs a build of B2G, which is huge (and requires a device), but we’re working on packaging up the necessary bits which are relatively small.

Cameron KaiserTwo victories

Busted through my bug with stubs late last night (now that I've found the bug I am chagrined at how I could have been so dense) and today IonPower's Baseline implementation successfully computed π to an arbitrary number of iterations using the nice algorithm by Daniel Pepin:

% ../../../obj-ff-dbg/dist/bin/js --no-ion --baseline-eager -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,30);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
% ../../../obj-ff-dbg/dist/bin/js --no-ion --baseline-eager -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,30000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'

Still work to be done on the rudiments before attacking the test suite, but code of this complexity running correctly so far is a victory. And, in a metaphysical sense, speaking from my perspective as a Christian (and a physician aware of the nature of his illness), here is another victory: a Mozillian's last post from the end stages of his affliction. Even for those who do not share that religious perspective, it is a truly brave final statement and one I have not seen promoted enough.

Daniel Stenbergcurl, smiley-URLs and libc

Some interesting Unicode URLs have recently been seen used in the wild – like in this billboard ad campaign from Coca Cola, and a friend of mine asked me about curl in reference to these and how it deals with such URLs.


(Picture by stevencoleuk)

I ran some tests and decided to blog my observations since they are a bit curious. The exact URL I tried was ‘www.😃.ws’ (not the same smiley as shown on this billboard: 😂) – it is really hard to enter by hand so now is the time to appreciate your ability to cut and paste! It appears they registered several domains for a set of different smileys.

These smileys are not really allowed IDN (Internationalized Domain Name) symbols, which makes these domains a bit different. They should not (see below for details) be converted to punycode before getting resolved; instead, I assume the pure UTF-8 sequence should, or at least will, be fed into the name resolver function. Either way, the resolver ends up being handed either the punycode form or the raw UTF-8 string.

If curl was built to use libidn, it still won’t convert this to punycode and the verbose output says “Failed to convert www.😃.ws to ACE; String preparation failed”.

curl (exact version doesn’t matter) using the stock threaded resolver:

  • Debian Linux (glibc 2.19) – FAIL
  • Windows 7 – FAIL
  • Mac OS X 10.9 – SUCCESS

But then also perhaps to no surprise, the exact same results are shown if I try to ping those host names on these systems. It works on the mac, it fails on Linux and Windows. Wget 1.16 also fails on my Debian systems (just as a reference and I didn’t try it on any of the other platforms).

My curl build on Linux that uses c-ares for name resolving instead of glibc succeeds perfectly. host, nslookup and dig all work fine with it on Linux too (as well as nslookup on Windows):

$ host www.😃.ws
www.\240\159\152\ has address
$ ping www.😃.ws
ping: unknown host www.😃.ws

While the same command sequence on the mac shows:

$ host www.😃.ws
www.\240\159\152\ has address
$ ping www.😃.ws
PING www.😃.ws ( 56 data bytes
64 bytes from icmp_seq=0 ttl=44 time=191.689 ms
64 bytes from icmp_seq=1 ttl=44 time=191.124 ms

Slightly interesting additional tidbit: if I rebuild curl to use gethostbyname_r() instead of getaddrinfo() it works just like on the mac, so clearly this is glibc having an opinion on how this should work when given this UTF-8 hostname.

Pasting in the URL into Firefox and Chrome works just fine. They both convert the name to punycode and use “” which then resolves to the same IPv4 address.
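For the curious, the punycode conversion the browsers perform is easy to reproduce: Python ships the raw RFC 3492 bootstring codec, which does no IDNA validity checking and therefore happily encodes the disallowed smiley (a sketch):

```python
# Python's "punycode" codec is the raw RFC 3492 bootstring encoding,
# with no IDNA2003/IDNA2008 validity checks, so it encodes the smiley
# label even though IDNA2008 disallows the character.
label = "😃"
ace = b"xn--" + label.encode("punycode")
print(ace)  # the ASCII-compatible (ACE) form of the label

# The encoding is lossless: decoding restores the original label.
assert ace[len(b"xn--"):].decode("punycode") == label
```

This is the per-label transformation; a full IDNA implementation would first run the validity rules that are exactly what IDNA2003 and IDNA2008 disagree about here.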

Update: as was pointed out in a comment below, the “″ IP address is not the correct IP for the site. It is just the registrar’s landing page so it sends back that response to any host or domain name in the .ws domain that doesn’t exist!

What do the IDN specs say?

This is not my area of expertise. I had to consult Patrik Fältström here to get this straightened out (but please, if I got something wrong here, the mistake is still all mine). Apparently this smiley is allowed in RFC 3490 (IDNA2003), but that has been replaced by RFC 5890-5892 (IDNA2008), where it is DISALLOWED. If you read the spec, this is U+263A.

So, depending on which spec you follow, it either is a valid IDN character or it isn’t anymore.

What do the libc docs say?

The POSIX docs for getaddrinfo don’t contain enough info to tell who’s right, but they don’t forbid UTF-8 encoded strings. The regular glibc docs for getaddrinfo don’t say anything either and, interestingly, the Apple Mac OS X version of the docs says just as little.

With this complete lack of guidance, it is hardly any additional surprise that the glibc gethostbyname docs also don’t mention what happens in this case, but clearly it doesn’t do the same thing as getaddrinfo, in the glibc case at least.

What’s on the actual site?

A redirect to which shows a rather boring page.


Who’s right?

I don’t know. What do you think?

Michael KaplyWhat I Do

One of the trickier things about being self-employed, especially around an open source project like Firefox, is knowing where to draw the line between giving out free advice and asking people to pay for my services. I'm always hopeful that answering a question here and there will one day lead to folks hiring me or purchasing CCK2 support. That's why I try to be as helpful as I can.

That being said, I'm still surprised at the number of times a month I get an email from someone requesting that I completely customize Firefox to their requirements and deliver it to them for free. Or write an add-on for them. Or diagnose the problem they are having with Firefox.

While I appreciate their faith in me, somehow I think it's gotten lost somewhere that this is what I do for a living.

So I just wanted to take a moment and let people know that you can hire me. If you need help with customizing and deploying Firefox, or building Firefox add-ons or building a site-specific browser, that's what I do. And I'd be more than happy to help you do that.

But if all you have is a question, that's great too. The best place to ask is on my support site at

The Mozilla BlogFirefox for Android Gets a Simpler, Sleeker New Look for Tablets

We’re constantly working on ways to give you the best Firefox experience, everywhere you are. That’s why our goal with the new Firefox for Android look on tablets was to simplify the interaction with tabs and allow users to create, remove and switch tabs with a single tap.

We’ve also revamped the toolbar and added a new, full screen tab panel for a sleek, visual overview of tabs.

The Mozilla BlogUnreal Engine 4.7 Binary Release Includes HTML5 Export

With the launch of Unreal Engine 4.7, Epic Games has added the ability to export to HTML5 as a regular part of their Windows binary engine releases. One of the world’s most advanced game engines is one step closer to delivering truly amazing content right in your browser. With this addition, developers will be able to create content in Unreal Engine 4.7, immediately compile it to the Web, and launch it in their browser of choice with the click of a button.

Epic Games’ work with WebGL and HTML5 has been part of the Unreal Engine 4 source code distribution for many months and has been maturing over the past year. While still a pre-release version, the HTML5 output is robust enough for developers to use it with their content and give feedback on the technology. Mozilla is excited to support Epic Games in their continuing effort to bring this amazing engine to the Web.


Screenshot of output from UE4

In the leadup to GDC, Mozilla will be publishing a series of articles about different aspects of the technology that makes it possible to bring native engines to the Web. Mozilla will also be showcasing several next generation Web technologies at our booth including WebVR demos built in Unreal Engine 4. Output from the engine will also be used to showcase Firefox Developer Tools and demonstrate how they can be leveraged with this type of content.

Mozilla will be taking part in Epic’s HTML5 export presentation which will be broadcast live on Twitch at 2pm PT Thursday, March 5th, and viewed at

For more information on this news from Epic Games, visit their blog.

Come take a look at where the Web is heading at the Firefox Booth (South Hall Booth #2110) or learn more about Unreal Engine 4.7 at Epic Games’ Booth (South Hall Booth #1024).

Christian HeilmannMaking distributed team meetings work

Being in a distributed team can be tough. Here are a few tricks I learned over the years to make distributed meetings easier.
This is cross-posted on Medium, you can comment there.

Man on phone with 'shut up' on whiteboard behind him. Photo Credit: Tim Caynes

Working as distributed teams is a red flag for a lot of companies. Finding a job that allows you to work remote is becoming a rare opportunity. As someone who worked remote for the last few years I can say though that it is a much better way of working. And it results in more effective and above all, happier, teams.

What it needs is effort by all involved to succeed and often a few simple mistakes can derail the happy train.

The big issue, of course, is time difference. It is also a massive opportunity. A well organised team can cover a product in half the time. To achieve that, you need well organised and honest communication. Any promise you can’t keep means you lost a whole day instead of being able to deliver quicker.

There are many parts to this, but today, I want to talk about meetings with distributed teams. In the worst case, these are phone conferences. In the best case, you have a fancy video conferencing system all people in your team can use.

Quick aside: a lot of people will use this as an opportunity to tell me about their amazing software solutions for this problem. I am sure you can tell me a lot about your multi-media communication system with supercharged servers and affordable rates. Please, don’t. I don’t buy these things and I also have seen far too many fail. Connectivity and travel don’t go too well together. The crystal clear, uninterrupted, distraction-free meetings we see in advertisements are a product of fiction. You might as well add unicorns, at least that would make them fabulous. I’ve seen too many bad connections and terrible surroundings. What they taught me is that planning for failure beats any product you can buy. Hence these tips here.

Synchronous Asynchronicity?

Meetings are synchronous – you expect all people to be in one place at the same time. Distributed teams work asynchronously. While one part is doing the work, the others are sleeping or having a social life until it is time to switch over.
Thus, if you have a meeting at 10 in the morning California time and you talk to people in Europe, you have two kinds of people in the virtual room:

  • Those who already had a whole day of work with all the joys and annoyances it brings
  • Those who just got up, far too early for their own liking and got stuck in transit. Either on a packed motorway or involuntarily nestling in the armpit of a total stranger in an overcrowded train

In any case, both are kind of weary and annoyed. Without proper planning, this isn’t an opportunity for knowledge sharing. It is a recipe for disaster as both groups have wildly diverging agendas about this meeting.

  • One group wants to give an update what they did today, hand over and call it a day and
  • The other group wants to know what is important today and get on with it to fight off the cobwebs of commuting

As an expert remote worker you start to diffuse this issue a bit by shifting your daily agenda around. You allow for a few hours of overlap, either by staying longer on the job and starting later in the early afternoon. Or by getting up much earlier and communicating with your colleagues at home before leaving for the office. This works well as long as you don’t have a partner that works normal hours or you live in a country where shops close early.

In any case, you need to find a way to make both groups more efficient in this meeting, so here is the first trick:

Separate the meeting into remote updates and social interactions

Photo by Ron Dolette

This may sound weird, but you can’t have both. Having a meeting together in a room in the office is great for the locals:

  • You can brainstorm some ideas in an animated discussion where everyone talks
  • You can cover local happenings (“did you see the game last night? What a ludicrous display”)
  • You can have a chat about what’s bothering you (“damn, the office is cold today”) and
  • talk about location-specific issues and socialise.

This doesn’t work when the topic of discussion is current and about the location you are in. Telling someone eight hours ahead and far away from you that you will enjoy company-provided food right after the event is not helping their morale — on the contrary. It alienates people outside the group even more than they already are. It is great for the team in the room though. Don’t let one group’s perk become the other’s reason for jealousy.

By separating your meeting into four parts, you can focus better on giving the right people what they came for and get their updates. Make it a meeting sandwich:

  • Meet in the room, have a quick social chat,
  • Dial in the remote participants, ask them about a quick social update,
  • Have a focused info session with both groups,
  • Let the remote people disconnect and phase out the meeting with another local, social update.

This way everybody uses their time efficiently, feels listened to and not bothered by updates or benefits that don’t apply to them. I have seen this work well in the past. It also resulted in much shorter meetings and we all love those.

Have a clear agenda and turn it into live meeting notes

Sending out an agenda before the meeting makes things crystal clear. People know if it is worth joining and can choose not to. Make sure that all the resources you cover in the meeting are linked in the agenda, and that these resources are publicly available to all people in the meeting, not on someone’s internal IP or a file share only a few have access to. Having to ask “what is Margaret talking about, where can I find this?” is distracting and frustrating.

During the meeting you can add notes and findings to the agenda items. This has a lot of benefits:

  • People who can not attend the meeting or drop off half way through can look up what happened later.
  • You have an archive of your meetings without having the extra work of publishing meeting notes.
  • People who missed the meeting can scan the meeting results. This is much easier than listening to an hour long recording or watching a video of people in a room talking to a microphone. As beneficial as a video call is when you are all live, it gets tedious and hard to navigate to the items you care about when it is a recording.

Be aware of sound distractions

In essence, what you are dealing with here is a many-to-one conversation. If you have several people talking to you and you can see them, this is not an issue. If you can’t see them and you don’t even have a spatial sense of where the sound comes from, it all blurs together. That’s why it is important to have one person speak at any time and for the others to be aware of the fact that any noise they make is distracting. This means:

  • As someone remote, mute your microphone. There is nothing more annoying than the clatter of a keyboard magnified by the microphone just above it
  • As someone in the room with others, lean away from the microphone. Don’t cough into it, don’t shift things around on the table the mic is standing on. Coffee mugs, spoons and pens can be incredibly loud on those.
  • As the speaker, lean into the microphone and speak clearly – always assume there is static and sound loss. A mumbled remark in the back of the room followed by laughter by all could come across to a remote listener as an inside joke or even an insult. No need to risk such misunderstandings.
  • If you switch speakers, tell the new one to introduce themselves. This may feel silly in the room, but it helps avoid confusion on the other side.

Use a chat system as the lifeline of the meeting

Video and Audio chat will always go wrong in one way or another. The problem is that when you are presenting the system won’t tell you that. You are not aware that the crystal clear image of yourself with perfect sound is a pixelated mess with a robot voice that makes Daft Punk jealous on the other end.

Having a chat running at the same time covers a few scenarios:

  • People can tell you when something is wrong on their end or when you stopped being comprehensible
  • People can comment without interrupting your audio flow. Many systems switch the presenter as soon as someone speaks – which is unnecessary in this case.
  • People can post resources without having to interrupt you. “In case you missed it, Jessica is talking about”

Have a consumable backup for each live update

Photo by John Trainor

If you are presenting things in the meeting, send out screencasts or slide decks of what you present beforehand. Far too often, remote participants can’t see anything when someone shares their screen. And switching from presenter to presenter always ends up in a lot of time wasted waiting for the computer to do the thing we want it to.

For the presenter this allows for better explanations of what is happening. It is distracting when you present and press a button and nothing happens. The remote attendees might also have some lag and get a delayed display. This means you talk about something different than they are currently seeing on their screen.

Plan for travel and meeting as a whole team

Photo by Joris Louwes

Last, but not least, plan to have visits where you meet as a whole team. Do social things together, give people a chance to get to know one another. Once you have connected a real human with the flat, blurry face on the screen, things get much easier. Build up a cadence of this. Meet every quarter or twice a year. This might look expensive at first sight, but it pays off in team efficiency. My US colleagues had an “a-hah” moment when they were on a laptop late in the evening waiting for others to stop talking about local things they couldn’t care less about. I felt much more empathy for my colleagues once I had to get up much earlier than I wanted to be in the office in time to meet my European colleagues. Let the team walk in each other’s shoes for a while and future meetings will be much smoother.

Mike TaylorWebKitCSSMatrix and web compatibility stuff

Wow, early January late February 2015, time to blog about more exciting web compatibility bugs.

Lately we've been receiving a number of reports, both in Bugzilla and on, that isn't working in Firefox mobile browsers. With compelling articles like How to blog for cash and How to do your own SEO for your blog or website, this is not a great situation for my fellow Firefox Mobile-using-"how-to-get-rich-by-web-logging" friends.

And now for a brief message from our sponsor, go learn about Art and programming and Mary Cassatt. // TODO(mike): ask jenn for money or how to do SEO.

The biggest obstacle to getting the site to behave is: ReferenceError: WebKitCSSMatrix is not defined. There are a few minor CSS issues, but nothing that sprinkling a few unprefixed properties won't solve.

If you're not familiar with WebKitCSSMatrix, Apple has some docs. It's pretty cool (and much nicer than using regular expressions to dice up serialized transform matrices and then doing math and string wrangling yourself). Microsoft even has an equivalent called MSCSSMatrix (which WebKitCSSMatrix is mapped to for compat reasons).

Once upon a time, this thing was specced as CSSMatrix in the CSS3 2D Transforms spec, but eventually was removed (because I forget and am too lazy to search through www-style archives). It returned as a superset of CSSMatrix and SVGMatrix and got the sweet new name of DOMMatrix—which now allows it to be mutable or immutable, depending on your needs, i.e., do I want a billion new object instances or can I just modify this thing in place.

There's a handful of polyshims available on GitHub if you need them, but the simplest is to just map WebKitCSSMatrix to DOMMatrix, which is supported in Firefox since Gecko 33.

Impress your friends like so:

if (!('WebKitCSSMatrix' in window) && window.DOMMatrix) {
  window.WebKitCSSMatrix = window.DOMMatrix;
}
In theory that's cool and useful, but if you do a little more digging you'll find that it only gets you so far on the web. Back to, here's the code that powers the PageSlider class:

function r() {
  var e = document.defaultView.getComputedStyle(E[0], null),
      t = new WebKitCSSMatrix(e.webkitTransform);
  return t.m41;
}

// some method does W.At = r()
// then some other method calls d(W.At + o, 'none')

function d(e, t) {
  var n = 'none' != t ? 'all ' + t + 's ease-out' : 'none';
  E[0].style.webkitTransition = n;
  E[0].style.webkitTransform = 'translate3d(' + e + 'px,0,0)';
}

Unfortunately, WebKitCSSMatrix and webkitTransform aren't really possible to tease apart if you're trying to be compatible with the deployed web. So a straightforward WebKitCSSMatrix to DOMMatrix mapping won't get you very far.

Byron Joneshappy bmo push day!

the following changes have been pushed to

  • [1088086] Possible duplicate search doesn’t return any results if you input “a->b” (for any a/b)
  • [1102364] Add microdata to HTML bugmail so GMail can display a “View bug” button
  • [1130721] Implement support for the attachment author for pronoun substitution in Custom Search
  • [1123275] Changes to form.reps.mentorship

discuss these changes on

Filed under: bmo, mozilla

Yunier José Sosa VázquezNew interface for Firefox on tablets

It's update Tuesday for Firefox once again, and I'm pleased to share with you the new features you'll find in the latest version of the red panda, for desktops as well as mobile devices. The highlights of this release are the new interface for tablets and the first changes toward a multi-process architecture.

To improve the user experience, the new Firefox strikes a good balance between simplicity and power, offering a horizontal tab strip and a full-screen tab panel. Designed for both landscape and portrait use, the new interface takes advantage of the full screen size that tablets on the market offer.


The first changes toward a multi-process architecture will let Firefox handle tabs, add-ons and plugins outside the main process. Electrolysis (e10s) is the name of the project that aims to bring this capability to the browser's core; once finished, it will provide better security and stability, and if something fails, Firefox won't crash unexpectedly.

The modern search design now debuts in more locales of the world's most translated browser. When you visit a web page that offers to add a search engine to Firefox, a + icon will be shown in the search bar.


Sites "pinned" to the new tab page can now be synced through Firefox Accounts across all your devices. Full support has been added for version 2 of the HTTP protocol, which enables a faster, more scalable and more responsive web. The red panda has also been localized in Uzbek, thanks to the community in Uzbekistan.

For Android Lollipop, the Maithili (mai) language has been added.

Other news

If you would like to know more, you can read the release notes (in English).

You can get this version from our Downloads section, in Spanish and English, for Linux, Mac, Windows and Android. Remember that to browse through proxy servers you must set the network.negotiate-auth.allow-insecure-ntlm-v1 preference to true in about:config.

Nicholas NethercoteFix your damned data races

Nathan Froyd recently wrote about how he has been using ThreadSanitizer to find data races in Firefox, and how a number of Firefox developers — particularly in the networking and JS GC teams — have been fixing these.

This is great news. I want to re-emphasise and re-state one of the points from Nathan’s post, which is that data races are a class of bug that almost everybody underestimates. Unless you have, say, authored a specification of the memory model for a systems programming language, your intuition about the potential impact of many data races is probably wrong. And I’m going to give you three links to explain why.

Hans Boehm’s paper How to miscompile programs with “benign” data races explains very clearly that it’s possible to correctly conclude that a data race is benign at the level of machine code, but it’s almost impossible at the level of C or C++ code. And if you try to do the latter by inspecting the code generated by a C or C++ compiler, you are not allowing for the fact that other compilers (including future versions of the compiler you used) can and will generate different code, and so your conclusion is both incomplete and temporary.

Dmitry Vyukov’s blog post Benign data races: what could possibly go wrong? covers similar ground, giving more examples of how compilers can legitimately compile things in surprising ways. For example, at any point the storage used by a local variable can be temporarily used to hold a different variable’s value (think register spilling). If another thread reads this storage in a racy fashion, it could read the value of an unrelated variable.

Finally, John Regehr’s blog has many posts that show how C and C++ compilers take advantage of undefined behaviour to do surprising (even shocking) program transformations, and how the extent of these transformations has steadily increased over time. Compilers genuinely are getting smarter, and are increasingly pushing the envelope of what a language will let them get away with. And the behaviour of a C or C++ program is undefined in the presence of data races. (This is sometimes called “catch-fire” semantics, for reasons you should be able to guess.)

So, in summary: if you write C or C++ code that deals with threads in Firefox — and that probably includes everybody who writes C or C++ code in Firefox — you should have read at least the first two links I’ve given above. If you haven’t, please do so now. If you have read them and they haven’t made you at least slightly terrified, you should read them again. And if TSan identifies a data race in code that you are familiar with, please take it seriously, and fix it. Thank you.

Mozilla WebDev CommunityBeer and Tell – February 2015

Once a month, web developers from across the Mozilla Project get together to speedrun classic video games. Between runs, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Michael Kelly: Refract

Osmose (that’s me!) started off with Refract, a website that can turn any website into an installable application. It does this by generating an Open Web App on the fly that does nothing but redirect to the specified site as soon as it is opened. The name and icon of the generated app are auto-detected from the site, or they can be customized by the user.

Michael Kelly: Sphere Online Judge Utility

Next, Osmose shared spoj, a Python-based command line tool for working on problems from the Sphere Online Judge. The tool lets you list and read problems, as well as create solutions and test them against the expected input and output.

Adrian Gaudebert: Spectateur

Next up was adrian, who shared Spectateur, a tool to run reports against the Crash-Stats API. The webapp lets you set up a data model using attributes available from the API, and then process that data via JavaScript that the user provides. The JavaScript is executed in a sandbox, and the resulting view is displayed at the bottom of the page. Reports can also be saved and shared with others.

Peter Bengtsson: Autocompeter

Peterbe stopped by to share Autocompeter, which is a service for very fast auto-completion. Autocompeter builds upon peterbe’s previous work with fast autocomplete backed by Redis. The site is still not production-ready, but soon users will be able to request an API key to send data to the service for indexing, and Air Mozilla will be one of the first sites using it.
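To give a feel for what fast auto-completion involves, here is a minimal in-memory sketch of prefix lookup. This is an illustration only (Autocompeter's real backend is the Redis-based index mentioned above); the class name and titles are made up:

```python
import bisect


class PrefixIndex:
    """A minimal autocomplete index: sorted titles plus binary search."""

    def __init__(self, titles):
        self.titles = sorted(t.lower() for t in titles)

    def complete(self, prefix, limit=10):
        prefix = prefix.lower()
        # Jump to the first title >= prefix, then scan while the prefix matches.
        start = bisect.bisect_left(self.titles, prefix)
        results = []
        for title in self.titles[start:start + limit]:
            if not title.startswith(prefix):
                break
            results.append(title)
        return results


index = PrefixIndex(["Air Mozilla", "Airplane Mode", "Autocompeter", "Beer and Tell"])
print(index.complete("air"))  # → ['air mozilla', 'airplane mode']
```

A real service layers ranking, popularity weighting and an HTTP API on top, but the core operation is the same: jump straight to the first candidate with the typed prefix and scan forward.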

Pomax: inkdb

The ever-productive Pomax returns with inkdb, a combination of the many color- and ink-related tools he’s been sharing recently. Among other things, inkdb lets you browse fountain pen inks, map them on a graph based on similarity, and find inks that match the colors in an image. The website is also a useful example of the Mozilla Foundation Client-side Prototype in action.

Matthew Claypotch: rockbot

Lastly, potch shared a web interface for suggesting songs to a Rockbot station. Rockbot currently only has Android and iOS apps, and potch decided to create a web interface to allow people without Rockbot accounts or phones to suggest songs.

No one could’ve anticipated willkg’s incredible speedrun of Mario Paint. When interviewed after his blistering 15 hour and 24 minute run, he refused to answer any questions and instead handed out fliers for the grand opening of his cousin’s Inkjet Cartridge and Unlicensed Toilet Tissue Outlet opening next Tuesday at Shopper’s World on Worcester Road.

If you’re interested in attending the next Beer and Tell, sign up for the mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Nick CameronCreating a drop-in replacement for the Rust compiler

Many tools benefit from being a drop-in replacement for a compiler. By this, I mean that any user of the tool can use `mytool` in all the ways they would normally use `rustc` - whether manually compiling a single file or as part of a complex make project or Cargo build, etc. That could be a lot of work; rustc, like most compilers, takes a large number of command line arguments which can affect compilation in complex and interacting ways. Emulating all of this behaviour in your tool is annoying at best, especially if you are making many of the same calls into librustc that the compiler is.

The kind of things I have in mind are tools like rustdoc or a future rustfmt. These want to operate as closely as possible to real compilation, but have totally different outputs (documentation and formatted source code, respectively). Another use case is a customised compiler. Say you want to add a custom code generation phase after macro expansion, then creating a new tool should be easier than forking the compiler (and keeping it up to date as the compiler evolves).

I have gradually been trying to improve the API of librustc to make creating a drop-in tool easier (many others have also helped improve these interfaces over the same time frame). It is now pretty simple to make a tool which is as close to rustc as you want it to be. In this tutorial I'll show how.

Note/warning: everything I talk about in this tutorial is internal API for rustc. It is all extremely unstable and likely to change often and in unpredictable ways. Maintaining a tool which uses these APIs will be non-trivial, although hopefully easier than maintaining one that does similar things without using them.

This tutorial starts with a very high level view of the rustc compilation process and of some of the code that drives compilation. Then I'll describe how that process can be customised. In the final section of the tutorial, I'll go through an example - stupid-stats - which shows how to build a drop-in tool.

Continue reading on GitHub...

Andreas GalSearch works differently than you may think

Search is the main way we all navigate the Web, but it works very differently than you may think. In this blog post I will try to explain how it worked in the past, why it works differently today and what role you play in the process.

The services you use for searching, like Google, Yahoo and Bing, are called search engines. The very name suggests that they go through a huge index of Web pages to find every one that contains the words you are searching for. 20 years ago search engines indeed worked this way. They would “crawl” the Web and index it, making the content available for text searches.

As the Web grew larger, searches would often find the same word or phrase on more and more pages. This was starting to make search results less and less useful because humans don’t like to read through huge lists to manually find the page that best matches their search. A search for the word “door” on Google, for example, gives you more than 1.9 billion results. It’s impractical — even impossible — for anyone to look through all of them to find the most relevant page.


Google finds about 1.9 billion results for the search query “door”.

To help navigate the ever growing Web, search engines introduced algorithms to rank results by their relevance. In 1996, two Stanford graduate students, Larry Page and Sergey Brin, discovered a way to use the information available on the Web itself to rank results. They called it PageRank.

Pages on the Web are connected by links. Each link contains anchor text that explains to readers why they should follow the link. The link itself points to another page that the author of the source page felt was relevant to the anchor text. Page and Brin discovered that they could rank results by analyzing the incoming links to a page and treating each one as a vote for its quality. A result is more likely to be relevant if many links point to it using anchor text that is similar to the search terms. Page and Brin founded a search engine company in 1998 to commercialize the idea: Google.
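The link-as-vote idea can be sketched as a toy power iteration. This is a simplified illustration, not Google's implementation; the damping factor and the three-page graph are invented for the example:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: each page repeatedly shares its score along its outlinks.

    links maps each page to the list of pages it links to.
    """
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small base score and receives "votes"
        # from the pages that link to it.
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank


# "home" receives links from both other pages, so it should rank highest.
graph = {"home": ["about"], "about": ["home"], "blog": ["home"]}
ranks = pagerank(graph)
```

Each round, a page's score is redistributed along its outgoing links; pages with many (or highly ranked) incoming links accumulate the most score, so "home", which everyone links to, ends up on top.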

PageRank worked so well that it completely changed the way people interact with search results. Because PageRank correctly offered the most relevant results at the top of the page, users started to pay less attention to anything below that. This also meant that pages that didn’t appear on top of the results page essentially started to become “invisible”: users stopped finding and visiting them.

To experience the “invisible Web” for yourself, head over to Google and try to look through more than just the first page of results. So few users ever wander beyond the first page that Google doesn’t even bother displaying all the 1.9 billion search results it claims to have found for “door.” Instead, the list just stops at page 63, about 100 million pages short of what you would have expected.

Despite reporting over 1.9 billion results, in reality Google’s search results for “door” are quite finite and end at page 63.

With publishers and online commerce sites competing for that small number of top search results, a new business was born: search engine optimization (or SEO). There are many different methods of SEO, but the principal goal is to game the PageRank algorithm in your favor by increasing the number of incoming links to your own page and tuning the anchor text. With sites competing for visitors — and billions in online revenue at stake — PageRank eventually lost this arms race. Today, links and anchor text are no longer useful to determine the most relevant results and, as a result, the importance of PageRank has dramatically decreased.

Search engines have since evolved to use machine learning to rank results. People perform 1.2 trillion searches a year on Google alone — that’s about 3 billion a day and 40,000 a second. Each search becomes part of this massive query stream as the search engine simultaneously “sees” what billions of people are searching for all over the world. For each search, it offers a range of results and remembers which one you considered most relevant. It then uses these past searches to learn what’s most relevant to the average user to provide the most relevant results for future searches.

Machine learning has made text search all but obsolete. Search engines can answer 90% or so of searches by looking at previous search terms and results. They no longer search the Web in most cases — they instead search past searches and respond based on the preferred result of previous users.
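That "search past searches" idea can be caricatured in a few lines. The click log, queries and result names below are entirely made up for illustration:

```python
from collections import Counter

# Hypothetical click log: (query, result the user chose).
click_log = [
    ("door", "homedepot.example"),
    ("door", "wikipedia.example/Door"),
    ("door", "homedepot.example"),
    ("firefox", "mozilla.example"),
]


def rank_from_past_searches(query, log):
    """Order results for a query by how many past users chose each one."""
    clicks = Counter(result for q, result in log if q == query)
    return [result for result, _ in clicks.most_common()]


print(rank_from_past_searches("door", click_log))
# → ['homedepot.example', 'wikipedia.example/Door']
```

Note that no document is scanned at query time: the ranking comes purely from what previous users chose for the same query.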

This shift from PageRank to machine learning also changed your role in the process. Without your searches — and your choice of results — a search engine couldn’t learn and provide future answers to others. Every time you use a search engine, the search engine uses you to rank its results on a massive scale. That makes you its most important asset.

Filed under: Mozilla

David TenserUser Success – We’re hiring!

Just a quick +1 to Roland’s plug for the Senior Firefox Community Support Lead:

  • Ever loved a piece of software so much that you learned everything you
    could about it and helped others with it?
  • Ever coordinated an online community? Especially one around supporting users?
  • Ever measured and tweaked a website’s content so that more folks could find it and learn from it?

Got 2 out of 3 of the above?

Then work with me (since Firefox works closely with my area: Firefox for Android and in the future iOS via cloud services like Sync) and the rest of my colleagues on the fab Mozilla User Success team (especially my fantastic Firefox savvy colleagues over at User Advocacy).

And super extra bonus: you’ll also work with our fantastic community like all Mozilla employees AND Firefox product management, marketing and engineering.

Take a brief detour and head over to Roland’s blog to get a sense of one of the awesome people you’d get to work closely with in this exciting role (trust me, you’ll want to work with Roland!). After that, I hope you know what to do! :)

Ben KellyThat Event Is So Fetch

The Service Workers builds have been updated as of yesterday, February 22:

Firefox Service Worker Builds

Notable contributions this week were:

  • Josh Matthews landed Fetch Event support in Nightly. This is important, of course, because without the Fetch Event you cannot actually intercept any network requests with your Service Worker. | bug 1065216
  • Catalin Badea landed more of the Service Worker API in Nightly, including the ability to communicate with the Service Worker using postMessage(). | bug 982726
  • Nikhil Marathe landed some more of his spec implementations to handle unloading documents correctly and to treat activations atomically. | bug 1041340 | bug 1130065
  • Andrea Marchesini landed fixes for FirefoxOS discovered by the team in Paris. | bug 1133242
  • Jose Antonio Olivera Ortega contributed a work-in-progress patch to force Service Worker scripts to update when dom.serviceWorkers.test.enabled is set. | bug 1134329
  • I landed my implementation of the Fetch Request and Response clone() methods. | bug 1073231

As always, please let us know if you run into any problems. Thank you for testing!

Mozilla Release Management TeamFirefox 36 beta10 to rc

For the RC build, we landed a few last-minute changes. We disabled <meta referrer> because of a last-minute issue, we landed a compatibility fix for add-ons and, last but not least, some graphics crash fixes.

Note that an RC2 has been built from the same source code in order to tackle the AMD CPU bug (see comment #22).

  • 11 changesets
  • 32 files changed
  • 220 insertions
  • 48 deletions



List of changesets:

Robert StrongBug 945192 - Followup to support Older SDKs in loaddlls.cpp. r=bbondy a=Sylvestre - cce919848572
Armen Zambrano GasparnianBug 1110286 - Pin mozharness to 2264bffd89ca instead of production. r=rail a=testing - 948a2c2e31d4
Jared WeinBug 1115227 - Loop: Add part of the UITour PageID to the Hello tour URLs as a query parameter. r=MattN, a=sledru - 1a2baaf50371
Boris ZbarskyBug 1134606 - Disable <meta referrer> in Firefox 36 pending some loose ends being sorted out. r=sstamm, a=sledru - 521cf86d194b
Milan SreckovicBug 1126918 - NewShSurfaceHandle can return null. Guard against it. r=jgilbert, a=sledru - 89cfa8ff9fc5
Ryan VanderMeulenMerge beta to m-r. a=merge - 2f2abd6ffebb
Matt WoodrowBug 1127925 - Lazily open shared handles in DXGITextureHostD3D11 to avoid holding references to textures that might not be used. r=jrmuizel, a=sledru - 47ec64cc562f
Rob WuBug 1128478 - sdk/panel's show/hide events not emitted if contentScriptWhen != 'ready'. r=erikvold, a=sledru - c2a6bab25617
Matt WoodrowBug 1128170 - Use UniquePtr for TextureClient KeepAlive objects to make sure we don't leak them. r=jrmuizel, a=sledru - 67d9db36737e
Hector ZhaoBug 1129287 - Fix not rejecting partial name matches for plugin blocklist entries. r=gfritzsche, a=sledru - 7d4016a05dd3
Ryan VanderMeulenMerge beta to m-r. a=merge - a2ffa9047bf4

Adam LoftingThe week ahead 23 Feb 2015

First, I’ll note that writing even these short ‘note to self’ blog posts each week takes time, and is harder to do than I expected. Like so many priorities, the long-term important things often battle with the short-term urgent things. And that’s in a culture where working open is more than just acceptable, it’s encouraged.

Anyway, I have some time this morning sitting in an airport to write this, and I have some time on a plane to catch up on some other reading and writing that hasn’t made it to the top of the todo list for a few weeks. I may even get to post a blog post or two in the near future.

This week, I have face-to-face time with lots of colleagues in Toronto. Which means a combination of planning, meetings, running some training sessions, and working on tasks where timezone parity is helpful. It’s also the design team work week, and though I’m too far gone from design work to contribute anything pretty, I’m looking forward to seeing their work and getting glimpses of the future Webmaker product. Most importantly maybe, for a week like this, I expect unexpected opportunities to arise.

One of my objectives this week is working with Ops to decide where my time is best spent this year to have the most impact, and to set my goals for the year. That should get us closer to a metrics strategy this year, improving on last year’s ‘reactive’ style of work.

If you’re following along with the exciting story of my shed>to>office upgrades: I don’t have much to show today, but I’m building a new desk next, and insulation is my new favourite thing. This photo shows the visible difference in heat loss after fitting the first square of insulation material to the roof.

The Mozilla BlogMWC 2015: Experience the Latest Firefox OS Devices, Discover what Mozilla is Working on Next

Preview TVs and phones powered by Firefox OS and demos such as an NFC payment prototype at the Mozilla booth. Hear Mozilla speakers discuss privacy, innovation for inclusion and future of the internet.

Panasonic unveiled their new line of 4K Ultra HD TVs powered by Firefox OS at their convention in Frankfurt today. The Panasonic 2015 4k UHD (Ultra HD) LED VIERA TV, which will be shipping this spring, will also be showcased at Mozilla’s stand at Mobile World Congress 2015 in Barcelona. Like last year, Firefox OS will take its place in Hall 3, Stand 3C30, alongside major operators and device manufacturers.

Mozilla's stand at Mobile World Congress 2015, Hall 3, Stand 3C30

Mozilla’s stand at Mobile World Congress 2015, Hall 3, Stand 3C30

In addition to the Panasonic TV and the latest Firefox OS smartphones announced, visitors have the opportunity to learn more about Mozilla’s innovation projects during talks at the “Fox Den” at Mozilla’s stand, Hall 3, Stand 3C30. Just one example from the demo program:

Mozilla, in collaboration with its partners at Deutsche Telekom Innovation Labs (centers in Silicon Valley and Berlin) and T-Mobile Poland, developed the design and implementation of Firefox OS’s NFC infrastructure to enable several applications including mobile payments, transportation services, door access and media sharing. The mobile wallet demo, covering ‘MasterCard® Contactless’ technology together with a few non-payment functionalities, will be showcased in the “Fox Den” talks.

Visit for the full list of topics and schedule of “Fox Den” talks.

MAP: How to find Mozilla and Firefox OS at Mobile World Congress 2015 (Hall 3)

How to find Mozilla and Firefox OS at Mobile World Congress 2015 (Hall 3)

Schedule of Events and Speaking Appearances

Hear from Mozilla executives on trending topics in mobile at the following sessions:

‘Digital Inclusion: Connecting an additional one billion people to the mobile internet’ Seminar

Executive Director of the Mozilla Foundation Mark Surman will join a seminar that will explore the barriers and opportunities relating to the growth of mobile connectivity in developing markets, particularly in rural areas.
Date: Monday 2 March 12:00 – 13:30 CET
Location: GSMA Seminar Theatre CC1.1

‘Ensuring User-Centred Privacy in a Connected World’ Panel

Denelle Dixon-Thayer, SVP of business and legal affairs at Mozilla, will take part in a session that explores user-centric privacy in a connected world.
Date: Monday, 2 March 16:00 – 17:30 CET
Location: Hall 4, Auditorium 3

‘Innovation for Inclusion’ Keynote Panel

Mozilla Executive Chairwoman and Co-Founder Mitchell Baker will discuss how mobile will continue to empower individuals and societies.
Date: Tuesday, 3 March 11:15 – 12:45 CET
Location: Hall 4, Auditorium 1 (Main Conference Hall)

‘Connected Citizens, Managing Crisis’ Panel

Mark Surman, Executive Director of the Mozilla Foundation, will contribute to a panel on how mobile technology is playing an increasingly central role in shaping responses to some of the most critical humanitarian problems facing the global community today.
Date: Tuesday, 3 March 14:00 – 15:30 CET
Location: Hall 4, Auditorium 2

‘Defining the Future of the Internet’ Panel

Andreas Gal, CTO at Mozilla, will take part in a session that explores the future of the Internet, bringing together industry leaders to the forefront of the net neutrality debate.
Date: Wednesday, 4 March 15:15 – 16:15 CET
Location: Hall 4, Auditorium 5

More information:

  • Please visit Mozilla and experience Firefox OS in Hall 3, Stand 3C30, at the Fira Gran Via, Barcelona from March 2-5, 2015
  • To learn more about Mozilla at MWC, please visit:
  • For further details or to schedule a meeting at the show please contact
  • For additional resources, such as high-resolution images and b-roll video, visit:

Nick ThomasFileMerge bug

FileMerge is a nice diff and merge tool for OS X, and I use it a lot for larger code reviews where lots of context is helpful. It also supports intra-line diff, which comes in pretty handy.

filemerge screenshot

However, in recent releases, at least in v2.8 which comes as part of Xcode 6.1, it assumes you want to be merging and shows that bottom pane. Adjusting it away doesn’t persist to the next time you use it, *gnash gnash gnash*.

The solution is to open a terminal and offer this incantation:

defaults write com.apple.FileMerge MergeHeight 0

Unfortunately, if you use the merge pane then you’ll have to do that again. Dear Apple, pls fix!

Ahmed NefzaouiA Bit Of Consulting: Entering the RTL Web Market [Part 1]

As I’m writing this article, we (Mozilla and everyone else working together on Firefox OS) have never been closer to releasing a version of the Firefox Operating System with this much RTL (right-to-left UI) support built in. As the person who was/is responsible for most of it, I can tell you it is a pretty damn competitive implementation that *no* one else currently has.

v2.2 is the version we’re talking about here. And as you already know by now (assuming you know about Firefox OS, since you’re reading this), the OS is web-based, which means it relates a lot to the web; in fact it *is* the web we want!

So I decided to write a little blog post for anyone out there who wants to extend their web products/websites/services to the RTL market.
For starters, the RTL market is anywhere in the world where a country’s native language is written starting from the right. That means North Africa, the Middle East and parts of Asia.

RTL Is NOT Translating To Arabic

Localizing for RTL means more than translating your website into Arabic, Farsi, Urdu or Hebrew and calling it a day. It might sound harsh but here’s my advice: Do RTL right or don’t bother. If you half-arse it, it will be obvious, and you will lose money and credibility. Know who you’re talking to and how they use the web and you’ll be that much closer to a meaningful connection with your users.

RTL Is UI, Patterns And More

So there’s this one time, after a fresh Ubuntu install, when I used Gnome instead of Unity. While personalizing it, I chose “Arabic (Tunisia)” as the time & calendar setting, and suddenly 4:45 PM literally became PM 45:4.
Seriously, WTF ^^ I’m a native Arabic speaker (so a native RTL user) and I can’t read that. Sure, the time pattern is flipped and so on, but it’s still WRONG: we don’t read time that way.
So, moral of the story: RTL is not about flipping the UI wherever you see a horizontal list of items near each other.

So I ended up switching to English (UK) for my time and date format. Again: if you do RTL wrong, it will be obvious and people will not use it.

Back to the topic: in RTL there are exceptions you should know about before kicking off. Don’t be shy about asking the community, especially the open source community; basically any native speaker can give you valid advice about most of the UI.

Feel free to drop in any questions you’ve got :)

Daniel StenbergBug finding is slow in spite of many eyeballs

“given enough eyeballs, all bugs are shallow”

The saying (also known as Linus’ law) doesn’t say that the bugs are found fast and neither does it say who finds them. My version of the law would be much more cynical, something like: “eventually, bugs are found“, emphasizing the ‘eventually’ part.

(Jim Zemlin apparently said the other day that it can work the Linus way, if we just fund the eyeballs to watch. I don’t think that’s the way the saying originally intended.)

Because in reality, many many bugs are never really found by all those given “eyeballs” in the first place. They are found when someone trips over a problem and is annoyed enough to go searching for the culprit, the reason for the malfunction. Even if the code is open and has been around for years it doesn’t necessarily mean that any of all the people who casually read the code or single-stepped over it will actually ever discover the flaws in the logic. In the last few years, several world-shaking bugs turned out to have existed for decades before being discovered. In code that had been read by lots of people – over and over.

So sure, in the end the bugs were found and fixed. I would argue though that it wasn’t because the projects or problems were given enough eyeballs. Some of those problems were found in extremely popular and widely used projects. They were found because eventually someone accidentally ran into a problem and started digging for the reason.

Time until discovery in the curl project

I decided to see how it looks in the curl project. A project near and dear to me. To take it up a notch, we’ll look only at security flaws. Not only because they are probably the most important bugs we’ve had, but also because those are the ones we have the most carefully noted meta-data for. Like when they were reported, when they were introduced and when they were fixed.

We have no less than 30 logged vulnerabilities for curl and libcurl so far throughout our history, spread out over the past 16 years. I’ve spent some time going through them to see if there’s a pattern or something that sticks out that we should put some extra attention to in order to improve our processes and code. While doing this I gathered some random info about what we’ve found so far.

On average, each security problem had been present in the code for 2100 days when fixed – that’s more than five and a half years. On average! That means they survived about 30 releases each. If bugs truly are shallow, it is still certainly not a fast process.
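Those figures are easy to sanity-check; a minimal sketch, assuming a release roughly every 70 days (an illustrative cadence, not an exact figure from the project):

```rust
// Sanity-checking the averages above. The 70-day release cadence is an
// illustrative assumption, not an exact figure from the curl project.
fn days_to_years(days: f64) -> f64 {
    days / 365.25
}

fn releases_survived(days: u64, cadence_days: u64) -> u64 {
    days / cadence_days
}

fn main() {
    // 2100 days is indeed more than five and a half years...
    assert!(days_to_years(2100.0) > 5.5);
    // ...and about 30 releases at one release every ~70 days.
    assert_eq!(releases_survived(2100, 70), 30);
    println!("{:.2} years", days_to_years(2100.0));
}
```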

Perhaps you think these 30 bugs are really tricky, deeply hidden and complicated logic monsters, which would explain the time they took to get found? Nope, I would say that every single one of them is pretty obvious once you spot it, and none of them takes very long for a reviewer to understand.

Vulnerability ages

This first graph (click it for the large version) shows the period each problem remained in the code for the 30 different problems, in number of days. The leftmost bar is the most recent flaw and the bar on the right the oldest vulnerability. The red line shows the trend and the green is the average.

The trend is clearly that the bugs are around longer before they are found, but since the project is also growing older all the time it sort of comes naturally and isn’t necessarily a sign of us getting worse at finding them. The average age of the flaws is growing more slowly than the project itself.

Reports per year

How have the reports been distributed over the years? We have a fairly linear increase in number of lines of code, and yet the reports were submitted like this (now it goes from oldest on the left to most recent on the right – click for the large version):


Compare that to this chart below over lines of code added in the project (chart from openhub and shows blanks in green, comments in grey and code in blue, click it for the large version):

curl source code growth

We received twice as many security reports in 2014 as in 2013, and we got half of all our reports during the last two years. Clearly we have gotten more eyes on the code, or perhaps users pay more attention to problems, or are generally more likely to see the security angle of them. It is hard to say, but clearly the frequency of security reports has increased a lot lately. (Note that I count the report year here, not the year we announced the particular problems, as announcements sometimes happened the following year when the report came late in the year.)

On average, we publish information about a found flaw 19 days after it was reported to us. We seem to have become slightly worse at this over time; over the last two years the average has been 25 days.

Did people find the problems by reading code?

In general, no. Sure people read code but the typical pattern seems to be that people run into some sort of problem first, then dive in to investigate the root of it and then eventually they spot or learn about the security problem.

(This conclusion is based on my understanding from how people have reported the problems, I have not explicitly asked them about these details.)

Common patterns among the problems?

I went over the bugs and marked them with a bunch of descriptive keywords for each flaw, and then I wrote up a script to see how frequently the keywords are used. This turned out to describe the flaws more than how they ended up in the code. Out of the 30 flaws, the 10 most used keywords came out like this, showing number of flaws and the keyword:

8 cert-check
8 buffer-overflow
6 info-leak
3 URL-parsing
3 openssl
3 http-headers
3 cookie

I don’t think it is surprising that TLS, HTTP or certificate checking are common areas of security problems. TLS and certs are complicated, and HTTP is huge and not easy to get right. curl is mostly C, so buffer overflows are a mistake that sneaks in, and I don’t think their 27% share of the problems tells us that this is a problem we need to handle better. Also, only 2 of the last 15 flaws (13%) were buffer overflows.
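The tally script described above can be sketched as follows; the keyword data here is illustrative, not the real per-flaw metadata:

```rust
use std::collections::HashMap;

// A sketch of the tally script described above: count how often each
// descriptive keyword was attached to a flaw, most frequent first.
// The keyword lists here are illustrative, not the real metadata.
fn keyword_counts(flaws: &[Vec<&'static str>]) -> Vec<(usize, &'static str)> {
    let mut counts: HashMap<&'static str, usize> = HashMap::new();
    for keywords in flaws {
        for &kw in keywords {
            *counts.entry(kw).or_insert(0) += 1;
        }
    }
    let mut tally: Vec<(usize, &'static str)> =
        counts.into_iter().map(|(kw, n)| (n, kw)).collect();
    // Descending by count, ties broken alphabetically for stable output.
    tally.sort_by(|a, b| b.0.cmp(&a.0).then(a.1.cmp(b.1)));
    tally
}

fn main() {
    let flaws = vec![
        vec!["buffer-overflow", "http-headers"],
        vec!["cert-check", "openssl"],
        vec!["buffer-overflow", "cert-check"],
    ];
    for (n, kw) in keyword_counts(&flaws) {
        println!("{} {}", n, kw);
    }
}
```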

The discussion following this blog post is on hacker news.

This Week In RustThis Week in Rust 71

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

The big news

Rust 1.0.0-alpha.2 was released on Friday, but keep using nightlies. Six more weeks until the beta, which should become 1.0. Only six more weeks.

What's cooking on master?

157 pull requests were merged in the last week, and 15 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors

  • Adam Jacob
  • Alexander Bliskovsky
  • Brian Brooks
  • caipre
  • Darrell Hamilton
  • Dave Huseby
  • Denis Defreyne
  • Elantsev Serj
  • Henrik Schopmans
  • Ingo Blechschmidt
  • Jormundir
  • Lai Jiangshan
  • posixphreak
  • Ryan Riginding
  • Wesley Wiser
  • Will
  • wonyong kim

Approved RFCs

This covers two weeks since last week I wasn't able to review RFCs in time.

New RFCs

Friend of the Tree

The Rust Team likes to occasionally recognize people who have made outstanding contributions to The Rust Project, its ecosystem, and its community. These people are 'friends of the tree'.

This week's friend of the tree was ... Toby Scrace.

"Today I would like to nominate Toby Scrace as Friend of the Tree. Toby emailed me over the weekend about a login vulnerability where you could log in as whomever the previously logged-in user was, regardless of whether the GitHub authentication was successful or not. I very much appreciate Toby emailing me privately ahead of time, and I definitely feel that Toby has earned becoming Friend of the Tree."

Quote of the Week

<Manishearth> In other news, I have r+ on rust now :D
<Ms2ger> No good deed goes unpunished

From #servo. Thanks to SimonSapin for the tip.

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Rizky AriestiyansyahGithub Pages Auto Publication Based on Master Git Commits

A simple configuration to auto-publish the git commits on master to gh-pages: just add a few lines to the .git/config file. Here is the configuration to mirror the master branch to gh-pages.
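The snippet itself did not survive syndication. A common version of this trick (the remote URL below is a placeholder) adds push refspecs under the origin remote so that every `git push` also updates `gh-pages`:

```ini
[remote "origin"]
    url = git@github.com:user/repo.git
    fetch = +refs/heads/*:refs/remotes/origin/*
    # Push master normally, and also mirror it to gh-pages.
    push = +refs/heads/master:refs/heads/master
    push = +refs/heads/master:refs/heads/gh-pages
```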

Nick DesaulniersHidden in Plain Sight - Public Key Crypto

How is it possible for us to communicate securely when there’s the possibility of a third party eavesdropping on us? How can we communicate private secrets through public channels? How do such techniques enable us to bank online and carry out other sensitive transactions on the Internet while trusting numerous relays? In this post, I hope to explain public key cryptography, with actual code examples, so that the concepts are a little more concrete.

First, please check out this excellent video on public key crypto:

Hopefully that explains the gist of the technique, but what might it actually look like in code? Let’s take a look at example code in JavaScript using the Node.js crypto module. We’ll later compare the upcoming WebCrypto API and look at a TLS handshake.

Meet Alice. Meet Bob. Meet Eve. Alice would like to send Bob a secret message. Alice would not like Eve to view the message. Assume Eve can intercept, but not tamper with, everything Alice and Bob try to share with each other.

Alice chooses a modular exponential key group, such as modp4, then creates a public and private key.

var crypto = require("crypto");

var group = "modp4";
var aliceDH = crypto.getDiffieHellman(group);

A modular exponential key group is simply a “sufficiently large” prime number paired with a generator (a specific number), such as those defined in RFC2412 and RFC3526.

The public key is meant to be shared; it is ok for Eve to know the public key. The private key must never be shared, not even with the person you are communicating with.

Alice then shares her public key and group with Bob.

Public Key:
 <Buffer 96 33 c5 9e b9 07 3e f2 ec 56 6d f4 1a b4 f8 4c 77 e6 5f a0 93 cf 32 d3 22 42 c8 b4 7b 2b 1f a9 55 86 05 a4 60 17 ae f9 ee bf b3 c9 05 a9 31 31 94 0f ... >

Bob now creates a public and private key pair with the same group as Alice.

var bobDH = crypto.getDiffieHellman(group);

Bob shares his public key with Alice.

Public key:
 <Buffer ee d7 e2 00 e5 82 11 eb 67 ab 50 20 30 81 b1 74 7a 51 0d 7e 2a de b7 df db cf ac 57 de a4 f0 bd bc b5 7e ea df b0 3b c3 3a e2 fa 0e ed 22 90 31 01 67 ... >

Alice and Bob now compute a shared secret.

var aliceSecret = aliceDH.computeSecret(bobDH.getPublicKey(), null, "hex");
var bobSecret = bobDH.computeSecret(aliceDH.getPublicKey(), null, "hex");

Alice and Bob have now derived a shared secret from each others’ public keys.

aliceSecret === bobSecret; // => true

Meanwhile, Eve has intercepted Alice and Bob’s public keys and group. Eve tries to compute the same secret.

var eveDH = crypto.getDiffieHellman(group);
var eveSecret = eveDH.computeSecret(aliceDH.getPublicKey(), null, "hex");

eveSecret === aliceSecret; // => false

This is because Alice’s secret is derived from Alice and Bob’s private keys, which Eve does not have. Eve may not realize her secret is not the same as Alice and Bob’s until later.
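To make the math concrete, here is a toy Diffie-Hellman exchange with tiny numbers; real groups like modp4 use enormous primes, so p = 23 and g = 5 offer no security and are for illustration only:

```rust
// A toy Diffie-Hellman exchange. Real groups use primes hundreds of
// digits long; p = 23 and g = 5 here are illustrative only.
fn mod_pow(mut base: u64, mut exp: u64, modulus: u64) -> u64 {
    // Square-and-multiply modular exponentiation.
    let mut result = 1;
    base %= modulus;
    while exp > 0 {
        if exp % 2 == 1 {
            result = result * base % modulus;
        }
        exp /= 2;
        base = base * base % modulus;
    }
    result
}

fn main() {
    let (p, g) = (23, 5); // the "group": a prime and a generator
    let (alice_priv, bob_priv, eve_priv) = (6, 15, 13);

    // Public keys are g^private mod p, and are safe to share.
    let alice_pub = mod_pow(g, alice_priv, p);
    let bob_pub = mod_pow(g, bob_priv, p);

    // Each side raises the *other's* public key to its own private key.
    let alice_secret = mod_pow(bob_pub, alice_priv, p);
    let bob_secret = mod_pow(alice_pub, bob_priv, p);
    assert_eq!(alice_secret, bob_secret);

    // Eve only has the public values, so her derived secret differs.
    let eve_secret = mod_pow(alice_pub, eve_priv, p);
    assert_ne!(eve_secret, alice_secret);
    println!("shared secret: {}", alice_secret);
}
```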

That was asymmetric encryption; using different keys. The shared secret may now be used in symmetric encryption; using the same keys.

Alice creates a symmetric block cypher using her favorite algorithm, a hash of their secret as a key, and random bytes as an initialization vector.

var cypher = "aes-256-ctr";
var hash = "sha256";
var aliceIV = crypto.randomBytes(16);
var aliceHashedSecret = crypto.createHash(hash).update(aliceSecret).digest();
var aliceCypher = crypto.createCipheriv(cypher, aliceHashedSecret, aliceIV);

Alice then uses her cypher to encrypt her message to Bob.

var cypherText = aliceCypher.update("...");

Alice then sends the cypher text, cypher, hash, and initialization vector to Bob.

 <Buffer bd 29 96 83 fa a8 7d 9c ea 90 ab>

Bob now constructs a symmetric block cypher using the algorithm from Alice, and a hash of their shared secret.

var bobHashedSecret = crypto.createHash(hash).update(bobSecret).digest();
var bobCypher = crypto.createDecipheriv(cypher, bobHashedSecret, aliceIV);

Bob now decyphers the encrypted message (cypher text) from Alice.

var plainText = bobCypher.update(cypherText);
console.log(plainText); // => "I love you"

Eve has intercepted the cypher text, cypher, hash, and initialization vector, and tries to decrypt the message.

var eveHashedSecret = crypto.createHash(hash).update(eveSecret).digest();
var eveCypher = crypto.createDecipheriv(cypher, eveHashedSecret, aliceIV);

// => ��_r](�i)

Here’s where Eve realizes her secret is not correct.

This prevents passive eavesdropping, but not active man-in-the-middle (MITM) attacks. For example, how does Alice know that the messages she was supposedly receiving from Bob actually came from Bob, not Eve posing as Bob?

Today, we use a system of certificates to provide authentication. This system certainly has its flaws, but it is what we use today. It is a more advanced topic that won’t be covered here. Trust is a funny thing.

What’s interesting to note is that the prime and generator used to generate Diffie-Hellman public and private keys are referred to by strings naming the corresponding modular exponential key groups, ie “modp14”. Web Crypto’s API gives you finer-grained control, letting you specify the generator and large prime number in a Typed Array. I’m not sure why this is; perhaps it allows finer control, or lets you support newer groups before the implementation does? To me, it seems like a source for errors to be made; hopefully someone will make a library to provide these prime/generator pairs.

One issue with my approach is that I assumed Alice and Bob both support the same hashing algorithms, modular exponential key group, and symmetric block cypher. In the real world, this is not always the case. Instead, it is much more common for the client to broadcast publicly all of the algorithms it supports, and for the server to pick one. This list of algorithms is called a “suite,” ie a “cypher suite.” I learned this the hard way recently when trying to upgrade the cypher suite on my ssh server and finding out that my client did not support the latest cyphers. In this case, Alice and Bob might not have the same versions of Node.js, which statically link their own versions of OpenSSL. Thus, one should use crypto.getCiphers() and crypto.getHashes() before assuming the party they’re communicating with can do the math to decrypt. We’ll see “cypher suites” come up again in TLS handshakes. The NSA publishes a list of endorsed cryptographic components, for what it’s worth. There are also neat tricks we can do to prevent the message from being decrypted at a later time should the private key be compromised and the encrypted message recorded, called Perfect Forward Secrecy.
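The negotiation idea can be sketched like this, with made-up suite names rather than a real registry: the server walks its own preference list and picks the first suite the client also offered:

```rust
// A minimal sketch of suite negotiation: the client advertises every
// suite it supports, and the server picks the first entry from its own
// preference list that the client also offered. Suite names here are
// illustrative, not a real TLS registry.
fn negotiate(server_prefs: &[&'static str],
             client_offer: &[&'static str]) -> Option<&'static str> {
    server_prefs
        .iter()
        .find(|suite| client_offer.contains(*suite))
        .copied()
}

fn main() {
    let server = ["aes-256-gcm", "aes-256-ctr", "3des-cbc"];
    let client = ["rc4", "aes-256-ctr", "3des-cbc"];
    // The server prefers aes-256-gcm but the client didn't offer it,
    // so the next server preference the client supports wins.
    assert_eq!(negotiate(&server, &client), Some("aes-256-ctr"));
    // No overlap means the handshake fails.
    assert_eq!(negotiate(&server, &["rc4"]), None);
    println!("{:?}", negotiate(&server, &client));
}
```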

Let’s take a look now at how a browser does a TLS handshake. Here’s a capture from Wireshark of me navigating to an HTTPS site. First we have a TLSv1.2 Client Hello to start the handshake. Here we can see a list of the cypher suites.

Next is the response from the server, a TLSv1.2 Server Hello. Here you can see the server has picked a cypher to use.

The server then sends its certificate, which contains a copy of its public key.

Now that we’ve agreed on a cypher suite, the client now sends its public key. The server sets up a session, that way it may abbreviate the handshake in the future. Finally, the client may now start making requests to the server with encrypted application data.

For more information on TLS handshakes, you should read Ilya Grigorik’s High Performance Browser Networking book chapter TLS Handshake, Mozilla OpSec’s fantastic wiki, and this excellent Stack Exchange post. As you might imagine, all of these back and forth trips made during the TLS handshake add latency overhead when compared to unencrypted HTTP requests.

I hope this post helped you understand how we can use cryptography to exchange secret information through public channels. This isn’t enough information to implement a perfectly secure system; end to end security means one single mistake can compromise the entire system. Peer review and open source, battle tested implementations go a long way.

A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.

Kerckhoffs’s principle

I wanted to write this post because I believe abstinence-only crypto education isn’t working, and I can’t stand when anyone acts like part of a cabal, speaking from an ivory tower to those trying to learn new things. Someone will surely cite Javascript Cryptography Considered Harmful, which, while valid, misses my point of simply trying to show people the basics more concretely, with code examples. The first crypto system you implement will have its holes, but you can’t go from ignorance of crypto to perfect knowledge without implementing a few imperfect systems. Don’t be afraid to; just don’t start by trying to protect high-value data. Crypto is dangerous, because it can be difficult to impossible to tell when your system fails. Assembly is also akin to juggling knives, but at least you’ll usually segfault if you mess up and program execution will halt.

With upcoming APIs like Service Workers requiring TLS, protocols like HTTP2, pushes for all web traffic to be encrypted, and shitty things governments, politicians, and ISPs do, web developers are going to have to start boning up on their crypto knowledge.

What are your recommendations for correctly learning crypto? Leave me some thoughts in the comments below.

Nick FitzgeraldMemory Management In Oxischeme

I've recently been playing with the Rust programming language, and what better way to learn a language than to implement a second language in the language one wishes to learn?! It almost goes without saying that this second language being implemented should be Scheme. Thus, Oxischeme was born.

Why implement Scheme instead of some other language? Scheme is a dialect of LISP and inherits the simple parenthesized list syntax of its LISP-y origins, called s-expressions. Thus, writing a parser for Scheme syntax is rather easy compared to doing the same for most other languages' syntax. Furthermore, Scheme's semantics are also minimal. It is a small language designed for teaching, and writing a metacircular interpreter (ie a Scheme interpreter written in Scheme itself) takes only a few handfuls of lines of code. Finally, Scheme is a beautiful language: its design is rooted in the elegant λ-calculus.

Scheme is not "close to the metal" and doesn't provide direct access to the machine's hardware. Instead, it provides the illusion of infinite memory and its structures are automatically garbage collected rather than explicitly managed by the programmer. When writing a Scheme implementation in Scheme itself, or any other language with garbage collection, one can piggy-back on top of the host language's garbage collector, and use that to manage the Scheme's structures. This is not the situation I found myself in: Rust is close to the metal, and does not have a runtime with garbage collection built in (although it has some other cool ideas regarding lifetimes, ownership, and when it is safe to deallocate an object). Therefore, I had to implement garbage collection myself.

Faced with this task, I had a decision to make: tracing garbage collection or reference counting?

Reference counting is a technique where we keep track of the number of other things holding references to any given object or resource. When new references are taken, the count is incremented. When a reference is dropped, the count is decremented. When the count reaches zero, the resource is deallocated and it decrements the reference count of any other objects it holds a reference to. Reference counting is great because once an object becomes unreachable, it is reclaimed immediately and doesn't sit around consuming valuable memory space while waiting for the garbage collector to clean it up at some later point in time. Additionally, the reclamation process happens incrementally and program execution doesn't halt while every object in the heap is checked for liveness. On the negative side, reference counting runs into trouble when it encounters cycles. Consider the following situation:

A -> B
^    |
|    v
D <- C

A, B, C, and D form a cycle and all have a reference count of one. Nothing from outside of the cycle holds a reference to any of these objects, so it should be safe to collect them. However, because each reference count will never get decremented to zero, none of these objects will be deallocated. In practice, the programmer must explicitly use (potentially unsafe) weak references, or the runtime must provide a means for detecting and reclaiming cycles. The former defeats the general purpose, don't-worry-about-it style of managed memory environments. The latter is equivalent to implementing a tracing collector in addition to the existing reference counting memory management.
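Rust’s own reference-counted pointers make the cycle problem easy to demonstrate; a minimal sketch (not Oxischeme code) in which two nodes that reference each other are never reclaimed:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// The cycle above, reproduced with Rust's reference-counted pointers
// (a sketch, not Oxischeme code): two nodes that point at each other
// keep their counts above zero, so neither is ever reclaimed.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn cycle_survives_drop() -> bool {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
    *a.next.borrow_mut() = Some(b.clone());

    // Each node is now held by our handle *and* by the other node.
    assert_eq!(Rc::strong_count(&a), 2);

    let a_weak = Rc::downgrade(&a);
    drop(a);
    drop(b);
    // The weak pointer still upgrades: the cycle kept the node alive
    // even though nothing outside the cycle can reach it anymore.
    a_weak.upgrade().is_some()
}

fn main() {
    assert!(cycle_survives_drop());
    println!("cycle leaked, as predicted");
}
```

Replacing one edge of the cycle with a `Weak` reference would let the counts fall to zero, which is exactly the manual intervention the paragraph above describes.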

Tracing garbage collectors start from a set of roots and recursively traverse object references to discover the set of live objects in the heap graph. Any object that is not an element of the live set cannot be used again in the future, because the program has no way to refer to that object. Therefore, the object is available for reclaiming. This has the advantage of collecting dead cycles, because if the cycle is not reachable from the roots, then it won't be in the live set. The cyclic references don't matter because those edges are never traversed. The disadvantage is that, without a lot of hard work, when the collector is doing its bookkeeping, the program is halted until the collector is finished analyzing the whole heap. This can result in long, unpredictable GC pauses.

Reference counting is to tracing as yin is to yang. The former operates on dead, unreachable objects while the latter operates on live, reachable things. Fun fact: every high performance GC algorithm (such as generational GC or reference counting with "purple" nodes and trial deletion) uses a mixture of both, whether it appears so on the surface or not. "A Unified Theory of Garbage Collection" by Bacon, et al. discusses this in depth.

I opted to implement a tracing garbage collector for Oxischeme. In particular, I implemented one of the simplest GC algorithms: stop-the-world mark-and-sweep. The steps are as follows:

  1. Stop the Scheme program execution.
  2. Mark phase. Trace the live heap starting from the roots and add every reachable object to the marked set.
  3. Sweep phase. Iterate over each object x in the heap:
    • If x is an element of the marked set, continue.
    • If x is not an element of the marked set, reclaim it.
  4. Resume execution of the Scheme program.

Because the garbage collector needs to trace the complete heap graph, any structure that holds references to a garbage collected type must participate in garbage collection by tracing the GC things it is holding alive. In Oxischeme, this is implemented with the oxischeme::heap::Trace trait, whose implementation requires a trace function that returns an iterable of GcThings:

pub trait Trace {
    /// Return an iterable of all of the GC things referenced by
    /// this structure.
    fn trace(&self) -> IterGcThing;
}

Note that Oxischeme separates tracing (generic heap graph traversal) from marking (adding live nodes in the heap graph to a set). This enables using Trace to implement other graph algorithms on top of the heap graph. Examples include computing dominator trees and retained sizes of objects, or finding the set of retaining paths of an object that you expected to be reclaimed by the collector, but hasn't been.

If we were to introduce a Trio type that contained three cons cells, we would implement tracing like this:

struct Trio {
    first: ConsPtr,
    second: ConsPtr,
    third: ConsPtr,
}

impl Trace for Trio {
    fn trace(&self) -> IterGcThing {
        let refs = vec!(GcThing::from_cons_ptr(self.first),
                        GcThing::from_cons_ptr(self.second),
                        GcThing::from_cons_ptr(self.third));
        refs.into_iter()
    }
}

What causes a garbage collection? As we allocate GC things, GC pressure increases. Once that pressure crosses a threshold — BAM! — a collection is triggered. Oxischeme's pressure application and threshold are very naive at the moment: every N allocations a collection is triggered, regardless of size of the heap or size of individual allocations.
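That naive pressure scheme can be sketched as follows; the types and threshold here are illustrative, not Oxischeme's actual API:

```rust
// A sketch of the naive pressure scheme described above: every Nth
// allocation triggers a collection, regardless of heap size or the
// size of individual allocations. Illustrative types, not Oxischeme's.
struct Heap {
    allocations: usize,
    threshold: usize,
    collections: usize,
}

impl Heap {
    fn allocate(&mut self) {
        self.allocations += 1;
        // Pressure crosses the threshold -- BAM! -- collect.
        if self.allocations % self.threshold == 0 {
            self.collect_garbage();
        }
    }

    fn collect_garbage(&mut self) {
        self.collections += 1; // mark and sweep would run here
    }
}

fn collections_after(allocs: usize, threshold: usize) -> usize {
    let mut heap = Heap { allocations: 0, threshold, collections: 0 };
    for _ in 0..allocs {
        heap.allocate();
    }
    heap.collections
}

fn main() {
    assert_eq!(collections_after(250, 100), 2);
    println!("{} collections", collections_after(250, 100));
}
```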

A root is an object in the heap graph that is known to be live and reachable. When marking, we start tracing from the set of roots. For example, in Oxischeme, the global environment is a GC root.

In addition to permanent GC roots, like the global environment, sometimes it is necessary to temporarily root GC things referenced by pointers on the stack. Garbage collection can be triggered by any allocation, and it isn't always clear which Rust functions (or other functions called by those functions, or even other functions called by those functions called from the first function, and so on) might allocate a GC thing, triggering collection. The situation we want to avoid is a Rust function using a temporary variable that references a GC thing, then calling another function which triggers a collection and collects the GC thing that was referred to by the temporary variable. That results in the temporary variable becoming a dangling pointer. If the Rust function accesses it again, that is Undefined Behavior: it might still get the value it was pointing at, or it might be a segfault, or it might be a freshly allocated value being used by something else! Not good!

let a = pointer_to_some_gc_thing;
function_which_can_trigger_a_collection();
// Oops! A collection was triggered and dereferencing this
// pointer leads to Undefined Behavior!
use_gc_thing(a);

There are two possible solutions to this problem. The first is conservative garbage collection, where we walk the native stack and if any value on the stack looks like it might be a pointer and if coerced to a pointer happens to point to a GC thing in the heap, we assume that it is in fact a pointer. Under this assumption, it isn't safe to reclaim the object pointed to, and so we treat that GC thing as a root. Note that this strategy is simple and easy to retrofit because it doesn't involve changes in any code other than adding the stack scanning, but it results in false positives. The second solution is precise rooting. With precise rooting, it is the responsibility of the Rust function's author to explicitly root and unroot pointers to GC things used in variables on the stack. The advantage this provides is that there are no false positives: you know exactly which stack values are pointers to GC things. The disadvantage is the requirement of explicitly telling the GC about every pointer to a GC thing you ever reference on the stack.

Almost every modern, high performance tracing collector for managed runtimes uses precise rooting because it is a prerequisite* of a moving collector: a GC that relocates objects while performing collection. Moving collectors are desirable because they can compact the heap, creating a smaller memory footprint for programs and better cache locality. They can also implement pointer bumping allocation, that is both simpler and faster than maintaining a free list. Finally, they can split the heap into generations. Generational GCs gain performance wins from the empirical observation that most allocations are short lived, and those objects that are most recently allocated are most likely to be garbage, so we can focus the GC's efforts on them to get the most bang for our buck. Precise rooting is a requirement for a moving collector because it has to update all references to point to the new address of each moved GC thing. A conservative collector doesn't know for sure if a given value on the stack is a reference to a GC thing or not, and if the value just so happens not to be a reference to a GC thing (it is a false positive), and the collector "helpfully" updates that value to the moved address, then the collector is introducing migraine-inducing bugs into the program execution.

* Technically, there do exist some moving and generational collectors that are "mostly copying" and conservatively mark the stack but precisely mark the heap. These collectors only move objects which are not conservatively reachable.

Oxischeme uses precise rooting, but is not a moving GC (yet). Precise rooting is implemented with the oxischeme::heap::Rooted<T> smart pointer RAII type, which roots its referent upon construction and unroots it when the smart pointer goes out of scope and is dropped.

Using precise rooting and Rooted, we can solve the dangling stack pointer problem like this:

{
    // The pointed to GC thing gets rooted when wrapped
    // with `Rooted`.
    let a = Rooted::new(heap, pointer_to_some_gc_thing);
    function_which_can_trigger_a_collection();
    // Dereferencing `a` is now safe, because the referent is
    // a GC root, and can't be collected!
    use_gc_thing(*a);
}
// `a` is now out of scope, and its referent is unrooted.

With all of that out of the way, here is the implementation of our mark-and-sweep collector:

impl Heap {
    pub fn collect_garbage(&mut self) {

        // First, trace the heap graph and mark everything that
        // is reachable.

        let mut pending_trace = self.get_roots();

        while !pending_trace.is_empty() {
            let mut newly_pending_trace = vec!();

            for thing in pending_trace.drain() {
                if !thing.is_marked() {
                    thing.mark();
                    for referent in thing.trace() {
                        newly_pending_trace.push(referent);
                    }
                }
            }

            pending_trace.append(&mut newly_pending_trace);
        }

        // Second, sweep each `ArenaSet`.
        self.strings.sweep();
        self.activations.sweep();
        self.cons_cells.sweep();
        self.procedures.sweep();
    }
}

Why do we have four calls to sweep, one for each type that Oxischeme implements? To explain this, first we need to understand Oxischeme's allocation strategy.

Oxischeme does not allocate each individual object directly from the operating system. In fact, most Scheme "allocations" do not actually perform any allocation from the operating system (eg, call malloc or Box::new). Oxischeme uses a set of oxischeme::heap::Arenas, each of which has a preallocated object pool with each item in the pool either being used by live GC things, or waiting to be used in a future allocation. We keep track of an Arena's available objects with a "free list" of indices into its pool.

type FreeList = Vec<usize>;

/// An arena from which to allocate `T` objects.
pub struct Arena<T> {
    pool: Vec<T>,

    /// The set of free indices into `pool` that are available
    /// for allocating an object from.
    free: FreeList,

    /// During a GC, if the nth bit of `marked` is set, that
    /// means that the nth object in `pool` has been marked as
    /// reachable.
    marked: Bitv,
}

When the Scheme program allocates a new object, we remove the first entry from the free list and return a pointer to the object at that entry's index in the object pool. If every Arena is at capacity (ie, its free list is empty), a new Arena is allocated from the operating system and its object pool is used for the requested Scheme allocation.

impl<T: Default> ArenaSet<T> {
    pub fn allocate(&mut self) -> ArenaPtr<T> {
        for arena in self.arenas.iter_mut() {
            if !arena.is_full() {
                return arena.allocate();
            }
        }

        let mut new_arena = Arena::new(self.capacity);
        let result = new_arena.allocate();
        self.arenas.push(new_arena);
        result
    }
}

impl<T: Default> Arena<T> {
    pub fn allocate(&mut self) -> ArenaPtr<T> {
        match self.free.pop() {
            Some(idx) => {
                let self_ptr : *mut Arena<T> = self;
                ArenaPtr::new(self_ptr, idx)
            },
            None => panic!("Arena is at capacity!"),
        }
    }
}

For simplicity, Oxischeme has separate arenas for separate types of objects. This sidesteps the problem of finding an appropriately sized free block of memory when allocating different sized objects from the same pool, avoids the fragmentation that can occur because of that, and lets us use a plain old vector as the object pool. However, it also means that we need a separate ArenaSet<T> for each T object that a Scheme program can allocate, which is why oxischeme::heap::Heap::collect_garbage has four calls to sweep().

During the sweep phase of Oxischeme's garbage collector, we return the entries of any dead object back to the free list. If the Arena is empty (ie, the free list is full) then we return the Arena's memory to the operating system. This prevents retaining the peak amount of memory used for the rest of the program execution.

impl<T: Default> Arena<T> {
    pub fn sweep(&mut self) {
        // Rebuild the free list from every index whose object was
        // not marked as reachable.
        self.free = range(0, self.capacity())
            .filter(|&n| {
                !self.marked.get(n)
                    .expect("marked should have length == self.capacity()")
            })
            .collect();

        // Reset `marked` to all zero.
        self.marked.clear();
    }
}

impl<T: Default> ArenaSet<T> {
    pub fn sweep(&mut self) {
        for arena in self.arenas.iter_mut() {
            arena.sweep();
        }

        // Deallocate any arenas that do not contain any
        // reachable objects.
        self.arenas.retain(|a| !a.is_empty());
    }
}

This concludes our tour of Oxischeme's current implementation of memory management, allocation, and garbage collection for Scheme programs. In the future, I plan to make Oxischeme's collector a moving collector, which will pave the way for a compacting and generational GC. I might also experiment with incrementalizing marking for lower latency and shorter GC pauses, or making sweeping lazy. Additionally, I intend to declare to the Rust compiler that operations on un-rooted GC pointers are unsafe, but I haven't settled on an implementation strategy yet. I would also like to experiment with writing a syntax extension for the Rust compiler so that it can derive Trace implementations, so they don't need to be written by hand.

Thanks to Tom Tromey and Zach Carter for reviewing drafts.


Tantek Çelik: November Project Book Survey Answers #NP_Book

The November Project recently wrapped up a survey for a book project. I had the tab open and finally submitted my answers, but figured why not post them on my own site as well. Some of this I've blogged about before, some of it is new.

The basics

Tribe Location
San Francisco
Member Name
Tantek Çelik
Date of Birth
March 11th
Date and Location of First NP Workout
2013-10-30 Alamo Square, San Francisco, CA, USA
Contact Info

Pre-NP fitness

Describe your pre-NP fitness background and routine.

  • 2011 started mixed running/jogging/walking every week, short distances 0.5-3 miles.
  • 2008 started bicycling regularly around SF
  • 2007 started rock climbing, eventually 3x a week
  • 1998 started regular yoga and pilates as part of recovering from a back injury

First hear about NP

How did you first hear about the group?

I saw chalkmarks in Golden Gate Park for "NovemberProject 6:30am Kezar!" and thought what the heck is that? 6:30am? Sounds crazy. More: Learning About NP

First NP workout

Recount your first workout, along with the vibe, and how they may have differed from your expectations.

My first NovemberProject workout was a 2013 NPSF PR Wednesday workout, and it was the hardest physical workout I'd ever done. However before it destroyed me, I held my hand up as a newbie, and was warmly welcomed and hugged. My first NP made a strong positive impression. More: My First Year at NP: Newbie

Meeting BG and Bojan

For those who've crossed paths, what was your first impression of BG? Of Bojan?

I first met BG and Bojan at a traverbal Boston destination deck workout. BG and Bojan were (are!) larger than life, with voices to match. Yet their booming, matched with self-deprecating humor, got everyone laughing and feeling like they belonged.

First Harvard Stadium workout

Boston Only: If you had a particularly memorable newbie meeting and virgin workout at Harvard Stadium, I'd like to know about it for a possible separate section. If so, please describe.

My first Boston Harvard Stadium workout was one to remember. Two days after my traverbal Boston destination deck workout, I joined the newbie orientation since I hadn't done the stadium before. I couldn't believe how many newbies there were. By the time we got to the starting steps I was ready to bolt. I completed 26 sections, far more than I thought I would.

Elevated my fitness

How has NP elevated your fitness level? How have you measured this?

NP has made me a lot faster. After a little over 6 months of NPSF, I cut over 16 minutes from my Bay To Breakers 12km personal record.

Affected personal life

Give an example of how NP has affected your personal life and/or helped you overcome a challenge.

NP turned me from a night person to a morning person, with different activities, and different people. NP inspired me to push myself to overcome my inability to run hills, one house at a time until I could run half a block uphill, then I started running NPSF hills. More: My First Year at NP: Scared of Hills

Impacted relationship with my city

How has NP impacted your relationship with your city?

I would often run into NPSF regulars on my runs to and from the workout, so I teamed up with a couple of them and started an unofficial "rungang". We posted times and corners of our running routes, including to hills. NPSF founder Laura challenged our rungang to run ~4 miles (more than halfway across San Francisco) south to a destination hills workout at Bernal Heights and a few of us did. After similar pre-workout runs North to the Marina, and East to the Embarcadero, I feel like I can confidently run to anywhere in the city, which is an amazing feeling.

Why rapid traction?

Why do you think NP has gained such traction so rapidly?

Two words: community positivity. Yes there's a workout too, but there are lots of workout groups. What makes NP different (beyond that it's free), are the values of community and barrier-breaking positivity that the leaders instill into every single workout. More: My First Year at NP: Positive Community — Just Show Up

Most memorable moment

Describe your most memorable workout or a quintessential NP moment.

Catching the positivity award when it was thrown at me across half of NPSF. Tantek holding up the NPSF positivity award backlit by the rising sun.

Weirdest thing

Weirdest thing about NP?

That so many people get up before sunrise, never mind in sub-freezing temperatures in many cities, to go to a workout. Describe that to anyone who isn't in NP, and it sounds beyond weird.

NP and regular life

How has NP bled into your "regular" life? (Do you inadvertently go in for a hug when meeting a new client? Do you drop F-bombs at inopportune times? Have you gone from a cubicle brooder to the meeting goofball? Are you kinder to strangers?)

I was already a bit of a hugger, but NP has taught me to better recognize when people might quietly want (or be ok with) a hug, even outside of NP. #huglife

The Positivity Award

If you've ever won the Positivity Award, please describe that moment and what it meant to you.

It's hard to describe. I certainly was not expecting it. I couldn't believe how excited people were that I was getting it. There was a brief moment of fear when Braden tossed it at me over dozens of my friends, all the sound suddenly muted while I watched it flying, hands outstretched. Caught it solidly with both hands, and I could hear again. It was a beautiful day, the sun had just risen, and I could see everyone's smiling faces. More than the award itself, it meant a lot to me to see the joy in people's faces.

Non-NP friends and family

What do your non-NP friends and family think of your involvement?

My family is incredibly supportive and ecstatic with my increased fitness. My non-NP friends are anywhere from curious (at best), to wary or downright worried that it's a cult, which they only half-jokingly admit.

NP in one word

Describe NP in one word.


Additional Thoughts

Additional thoughts? Include them here.

You can follow my additional thoughts on NP, fitness, and other things on my site & blog:

Cameron Kaiser: Biggus diskus (plus: 31.5.0 and how to superphish in your copious spare time)

One of the many great advances that Mac OS 9 had over the later operating system was the extremely flexible (and persistent!) RAM disk feature, which I use on almost all of my OS 9 systems to this day as a cache store for Classilla and temporary work area. It's not just for laptops!

While OS X can configure and use RAM disks, of course, it's not as nicely integrated as the RAM Disk in Classic is and it isn't natively persistent, though the very nice Esperance DV prefpane comes pretty close to duplicating the earlier functionality. Esperance will let you create a RAM disk up to 2GB in size, which for most typical uses of a transient RAM disk (cache, scratch volume) would seem to be more than enough, and can back it up to disk when you exit. But there are some heavy duty tasks that 2GB just isn't enough for -- what if you, say, wanted to compile a PowerPC fork of Firefox in one, he asked nonchalantly, picking a purpose at random not at all intended to further this blog post?

The 2GB cap actually originates from two specific technical limitations. The first applies to G3 and G4 systems: they can't have more than 2GB total physical RAM anyway. Although OS X RAM disks are "sparse" and only actually occupy the amount of RAM needed to store their contents, if you filled up a RAM disk with 2GB of data even on a 2GB-equipped MDD G4 you'd start spilling memory pages to the real hard disk and thrashing so badly you'd be worse off than if you had just used the hard disk in the first place. The second limit applies to G5 systems too, even in Leopard -- the RAM disk is served by /System/Library/PrivateFrameworks/DiskImages.framework/Resources/diskimages-helper, a 32-bit process limited to a 4GB address space minus executable code and mapped-in libraries (it didn't become 64-bit until Snow Leopard). In practice this leaves exactly 4629672 512-byte disk blocks, or approximately 2.26GB, as the largest possible standalone RAM disk image on PowerPC. A full single-architecture build of TenFourFox takes about 6.5GB. Poop.

It dawned on me during one of my careful toilet thinking sessions that the way awound, er, around this pwoblem, er, problem was a speech pathology wefewwal to RAID volumes together. I am chagrined that others had independently come up with this idea before, but let's press on anyway. At this point I'm assuming you're going to do this on a G5, because doing this on a G4 (or, egad, G3) would be absolutely nuts, and that your G5 has at least 8GB of RAM. The performance improvement we can expect depends on how the RAM disk is constructed (10.4 gives me the choices of concatenated, i.e., you move from component volume process to component volume process as they fill up, or striped, i.e., the component volume processes are interleaved [RAID 0]), and how much the tasks being performed on it are limited by disk access time. Building TenFourFox is admittedly a rather CPU-bound task, but there is a non-trivial amount of disk access, so let's see how we go.

Since I need at least 6.5GB, I decided the easiest way to handle this was 4 2+GB images (roughly 8.3GB all told). Obviously, the 8GB of RAM I had in my Quad G5 wasn't going to be enough, so (an order to MemoryX and) a couple days later I had a 16GB memory kit (8 x 2GB) at my doorstep for installation. (As an aside, this means my quad is now pretty much maxed out: between the 16GB of RAM and the Quadro FX 4500, it's now the most powerful configuration of the most powerful Power Mac Apple ever made. That's the same kind of sheer bloodymindedness that puts 256MB of RAM into a Quadra 950.)

Now to configure the RAM disk array. I ripped off a script from someone on Mac OS X Hints and modified it to be somewhat more performant. Here it is (it's a shell script you run in the Terminal, or you could use Platypus or something to make it an app; works on 10.4 and 10.5):

% cat ~/bin/ramdisk

/bin/test -e /Volumes/BigRAM && exit

diskutil erasevolume HFS+ r1 \
`hdiutil attach -nomount ram://4629672` &
diskutil erasevolume HFS+ r2 \
`hdiutil attach -nomount ram://4629672` &
diskutil erasevolume HFS+ r3 \
`hdiutil attach -nomount ram://4629672` &
diskutil erasevolume HFS+ r4 \
`hdiutil attach -nomount ram://4629672` &
diskutil createRAID stripe BigRAM HFS+ \
/Volumes/r1 /Volumes/r2 /Volumes/r3 /Volumes/r4

Notice that I'm using stripe here -- you would substitute concat for stripe above if you wanted that mode, but read on first before you do that. Open Disk Utility prior to starting the script and watch the side pane as it runs if you want to understand what it's doing. You'll see the component volume processes start, reconfigure themselves, get aggregated, and then the main array come up. It's sort of a nerdily beautiful disk image ballet.

One complication, however, is you can't simply unmount the array and expect the component RAM volumes to go away by themselves; instead, you have to go seek and kill the component volumes first and then the array will go away by itself. If you fail to do that, you'll run out of memory verrrrry quickly because the RAM will not be reclaimed! Here's a script for that too. I haven't tested it on 10.5, but I don't see why it wouldn't work there either.

% cat ~/bin/noramdisk

/bin/test -e /Volumes/BigRAM || exit

diskutil unmountDisk /Volumes/BigRAM
diskutil checkRAID BigRAM | tail -5 | head -4 | \
cut -c 3-10 | grep -v 'Unknown' | \
sed 's/s3//' | xargs -n 1 diskutil eject

This script needs a little explanation. What it does is unmount the RAM disk array so it can be modified, then go through the list of its component processes, isolate the diskn device that backs each one, and eject those. When all the disk array's components are gone, OS X removes the array, and that's it. Naturally, shutting down or restarting will wipe the array away too.

(If you want to use these scripts for a different sized array, adjust the number of diskutil erasevolume lines in the mounter script, and make sure the last line has the right number of images [like /Volumes/r1 /Volumes/r2 by themselves for a 2-image array]. In the unmounter script, change the tail and head parameters to 1+images and images respectively [e.g., tail -3 | head -2 for a 2-image array].)

Since downloading the source code from Mozilla is network-bound (especially on my network), I just dumped it to the hard disk, and patched it on the disk as well so a problem with the array would not require downloading and patching everything again. Once that was done, I made a copy on the RAM disk with hg clone esr31g /Volumes/BigRAM/esr31g and started the build. My hard disk, for comparison, is a 7200rpm 64MB buffer Western Digital SATA drive; remember that all PowerPC OS X-compatible controllers only support SATA I. Here's the timings, with the Quad G5 in Highest performance mode:

hard disk: 2 hours 46 minutes
concatenated: 2 hours 15 minutes (18.7% improvement)
striped: 2 hours 8 minutes (22.9% improvement)

Considering how much of this is limited by the speed of the processors, this is a rather nice boost, and I bet it will be even faster with unified builds in 38ESR (these are somewhat more disk-bound, particularly during linking). Since I've just saved almost two hours of build time over all four CPU builds, this is the way I intend to build TenFourFox in the future.

The 5.2% delta observed here between striping and concatenation doesn't look very large, but it is statistically significant, and actually the difference is larger than this test would indicate -- if our task were primarily disk-bound, the gulf would be quite wide. The reason striping is faster here is because each 2GB slice of the RAM disk array is an independent instance of diskimages-helper, and since we have four slices, each slice can run on one of the Quad's cores. By spreading disk access equally among all the processes, we share it equally over all the processors and achieve lower latency and higher efficiencies. This would probably not be true if we had fewer cores, and indeed for dual G5s two slices (or concatenating four) may be better; the earliest single processor G5s should almost certainly use concatenation only.

Some of you will ask how this compares to an SSD, and frankly I don't know. Although I've done some test builds on an SSD, I've been using a Patriot Blaze SATA III drive connected to my FW800 drive toaster to avoid problems with interfacing, so I doubt any numbers I'd get off that setup would be particularly generalizable, and I'd rather use the RAM disk anyhow because I don't have to worry about TRIM, write cycles or cleaning up. However, I would be very surprised if an SSD in a G5 achieved speeds faster than RAM, especially given the (comparatively, mind you) lower SATA bandwidth.

And, with that, 31.5.0 is released for testing (release notes, hashes, downloads). This only contains ESR security/stability fixes; you'll notice the changesets hash the same as 31.4.0 because they are, in fact, the same. The build finalizes Monday PM Pacific as usual.

31.5.0 would have been out earlier (experiments with RAM disks notwithstanding) except that I was waiting to see what Mozilla would do about the Superfish/Komodia debacle: the fact that Lenovo was loading adware that MITM-ed HTTPS connections on their PCs ("Superfish") was bad enough, but the secret root certificate it possessed had an easily crackable private key password allowing a bad actor to create phony certificates, and now it looks like the company that developed the technology behind Superfish, Komodia, has by their willful bad faith actions caused the same problem to exist hidden in other kinds of adware they power.

Assuming you were not tricked into accepting their root certificate in some other fashion (their nastyware doesn't run on OS X and, near as I can tell, never has), your Power Mac is not at risk, but these kinds of malicious, malfeasant and incredibly ill-constructed root certificates need to be nuked from orbit (as well as the companies that try to sneak them onto users' machines; I suggest napalm, castration and feathers), and they will be marked as untrusted in future versions of TenFourFox and Classilla so that false certificates signed with them will not be honoured under any circumstances, even by mistake. Unfortunately, it's also yet another example of how the roots are the most vulnerable part of secure connections (previously, previously).

Development on IonPower continues. Right now I'm trying to work out a serious bug with Baseline stubs and not having a lot of luck; if I can't get this working by 38.0, we'll ship 38 with PPCBC (targeting a general release by 38.0.2 in that case). But I'm trying as hard as I can!

Marco Zehe: Social networks and accessibility: A not so sad picture

This post originally was written in December 2011 and had a slightly different title. Fortunately, the landscape has changed dramatically since then, so it is finally time to update it with more up to date information.

Social networks are part of many people’s lives nowadays. In fact, if you’re reading this, chances are pretty high that you came from Twitter, Facebook or some other social network. The majority of referrers to my blog posts now come from social networks; those who read me via an RSS feed seem to be getting fewer and fewer.

So let’s look at some of the well-known social networks and see what their state of accessibility is nowadays, both when considering the web interface as well as the native apps for mobile devices most of them have.

In recent years, several popular social networks moved from a fixed relaunch schedule of their services to a more agile, incremental development cycle. Also, most, if not all, of the social network providers we’ll look at below have added personnel dedicated to either implementing accessibility or training other engineers in accessibility skills. Those efforts show great results. There is overall less breakage of accessibility features, and if something breaks, the teams are usually very quick to react to reports, and the broken feature is fixed in a near-future update. So let’s have a look!


Twitter has come a long way since I wrote the initial version of this post. New Twitter was here to stay, but ever since a very skilled engineer boarded the ship, a huge improvement has taken place. One can nowadays use Twitter with keyboard shortcuts to navigate tweets, reply, favorite, retweet and do all sorts of other actions. Screen reader users might want to try turning off their virtual buffers and really use the web site like a desktop app. It works really quite well! I also recommend taking a look at the keyboard shortcut list, and memorizing them when you use Twitter more regularly. You’ll be much much more productive! I wrote something more about the Twitter accessibility team in 2013.


Fortunately, there are a lot of accessible clients out there that allow access to Twitter. The Twitter app for iOS is very accessible now for both iPhone and iPad. The Android client is very accessible, too. Yes, there is the occasional breakage of a feature, but as stated above, the team is very good at reacting to bug reports and fixing them. Twitter releases updates very frequently now, so one doesn’t have to wait long for a fix.

There’s also a web client called Easy Chirp (formerly Accessible Twitter) by Mr. Web Axe Dennis Lembree. It’s now in incarnation 2. This one is marvellous: it offers all the features one would expect from a Twitter client, in your browser, and it’s all accessible to people with varying disabilities! It uses all the good modern web standard stuff like WAI-ARIA to make sure even advanced interaction is done accessibly. I even know many non-disabled people using it for its straightforward interface and simplicity. One cool feature it has is that you can post images and provide an alternative description for visually impaired readers, without having to spoil the tweet where the picture might be the punch line. You just provide the alternative description in an extra field, and when the link to the picture is opened, the description is provided right there. How fantastic is that!

For iOS, there are two more Apps I usually recommend to people. For the iPhone, my Twitter client of choice was, for a long time, TweetList Pro, an advanced Twitter client that has full VoiceOver support, and they’re not even too shy to say it in their app description! They have such things as muting users, hash tags or clients, making it THE Twitter client of choice for many for all intents and purposes. The reason why I no longer use it as my main Twitter client is the steep decline of updates. It’s now February 2015, and as far as I know, it hasn’t even been updated to iOS 8 yet. The last update was some time in October 2013, so it lags behind terribly in recent Twitter API support changes, doesn’t support the iPhone 6 and 6 Plus screens natively, etc.

Another one, which I use on the iPhone and iPad, is Twitterrific by The Icon Factory. Their iPhone and iPad app is fully accessible, the Mac version, on the other hand, is totally inaccessible and outdated. On the Mac, I use the client Yorufukurou (night owl).

Oh yes, and if you’re blind and on Windows, there are two main clients available, TheQube and Chicken Nugget. TheQube is designed specifically for the blind, with hardly any visual UI, and it requires a screen reader or at least an installed speech synthesizer to talk. Chicken Nugget can be run in UI or non-UI mode, and in non-UI mode it definitely also requires a screen reader. Both are updated frequently, so it’s a matter of taste which one you choose.

In short, for Twitter there is a range of clients, one of which, the EasyChirp web application, is truly cross-platform and usable anywhere; the others are for specific platforms. But you have accessible means to get to Twitter services without having to use their web site.


Facebook has come a long long way since my original post as well. When I wrote about the web site originally, it had just relaunched and completely broken accessibility. I’m happy to report that nowadays, the FB desktop and mobile sites both are largely accessible, and Facebook also has a dedicated team that responds to bug reports quickly. They also have a training program in place where they teach other Facebook engineers the skills to make new features accessible and keep existing ones that way when they get updated. I wrote more about the Facebook accessibility changes here, and things constantly got better since then.


Like the web interfaces, the iOS and Android clients for Facebook and Messenger have come a long way and frequently receive updates to fix remaining accessibility problems. Yes, here too, there’s the occasional breakage, but the team is very responsive to bug reports in this area, too, and since FB updates their apps on a two week basis, sometimes even more often if critical issues are discovered, waiting for fixes usually doesn’t take long. If you’re doing messaging on the desktop, you can also integrate FaceBook Messenger/Chat with Skype, which is very accessible on both Mac and Windows. Some features like group chats are, however, reserved for the Messenger clients and web interface.

Google Plus

Google Plus anyone? :) It was THE most hyped thing of the summer of 2011, and as fast as summer went, so did people lose interest in it. Even Google seems to be slowly but surely abandoning it, cutting back bit by bit on the requirement to have a Google+ account for certain activities. But in terms of accessibility, it is actually quite OK nowadays. As with many of their widgets, Google+ profits from reused components that are found in Gmail and elsewhere, giving both keyboard accessibility and screen reader information exposure. Their Android app was also quite accessible the last time I tried it, in the summer of 2014. Their iOS app still seems to be in pretty bad shape, which is surprising considering how well Gmail, Hangouts, and even the Google Docs apps work nowadays. I don’t use it much, even though I recreated an account some time in 2013. But whenever I happen to stumble in, I’m not as dismayed as I was when I wrote the original version of this post.


Yammer is an enterprise social network that we at Mozilla, and a lot of other companies, use for some internal communication. It was bought by Microsoft some time in 2012, and since then, a lot of its accessibility issues have been fixed. When you tweet at them, you usually get a response pointing to a bug entry form, and issues are dealt with satisfactorily.

iOS client

The iOS client is updated quite frequently. It has problems on and off, but the experience has become more stable in recent versions, so one can actually use it.

The last candidate is a microblogging service similar to Twitter. And unlike Twitter, it’s accessible out of the box! This is good, since it does not have a wealth of clients supporting it like Twitter does, so with its own interface being accessible right away, this is a big help! It is, by the way, the only open-source social network in these tests. Mean anything? Probably!


All social networks I tested either made significant improvements over the last three years or remained accessible (in the case of the last candidate).

In looking for reasons why this is, two come to mind immediately. For one, the introduction of skilled and dedicated personnel versed in accessibility matters, or willing to dive in deep and really get the hang of it. These big companies finally understood the social responsibility they have when providing a social network, and leveraged the fact that there is a wealth of information out there on accessible web design. And there’s a community that is willing to help if pinged!

Another reason is that these companies realized that putting in accessibility up-front, making inclusive design decisions, and increasing the test coverage to include accessibility right away not only reduces the cost as opposed to making it bolt-on, but also helps to make a better product for everybody.

A suggestion remains: Look at what others are doing! Learn from them! Don’t be shy to ask questions! If you look at what others! have been doing, you can draw from it! They’ll do that with stuff you put out there, too! And don’t be shy to talk about the good things you do! The Facebook accessibility team does this in monthly updates where they highlight stuff they fixed in the various product lines. I’ve seen signs of that from Twitter engineers, but not as consistent as with Facebook. Talking about the successes in accessibility also serves as an encouragement to others to put inclusive design patterns in their work flows.