Planet Thunderbird

February 28, 2015

Calendar

Strings are Frozen for the Next Major Lightning Release

Together with Thunderbird 38, we will be releasing Lightning 4.0. Neither of these releases is a beta version; both are major releases, just like Lightning 3.3, Lightning 2.6 and their respective Thunderbird counterparts.

We have about 11 weeks left until the release is final, and while the developers are doing their best to make sure features are stable and there are no regressions, it’s time to do some translation work.

If your language has been missing from Lightning in the past, maybe this is a good time to contact the l10n team for your language and express interest in translating Lightning. While the initial hurdle may be large, there are usually not many string changes between Lightning releases. If you are lucky, someone has already translated part of Lightning in the past and all you have to do is update your locale. The translation process is fairly simple and can be done using your favorite browser.

If you are already part of a localization team, this is the time to head over to mozilla.locamotion.org and translate the remaining Lightning strings. Once you are done translating and the changes have been pushed to the localization repositories, please head over to the Thunderbird l10n dashboard (not the Calendar dashboard) and sign off on the latest change. Make sure you are signing off on the latest changesets for both Thunderbird and Lightning, as only the newest sign-off will be used.

Should you have any questions, please feel free to send me an email or comment on this post and I will get back to you as soon as possible.

February 28, 2015 02:49 PM

Google Summer of Code 2015 Projects

The one thing I like best about the Google Summer of Code is that it gives us an opportunity to work on cool new features I never have time for on my own. It’s also a great opportunity for students to learn about working on a large-scale project and prepare for real-life work, which is very different from the smaller projects I remember from my university. Students who have stayed with us even after the Summer of Code have proven themselves invaluable, and seeing that kind of spirit and enthusiasm for an open source project like the Mozilla Calendar Project gives me a warm feeling in my heart.

This year, we have proposed two projects: Introducing Calendar Accounts and Resource Booking Improvements. As the projects have been available on the wiki for a while (sorry for not blogging about this earlier!), we’ve already had one or two students interested in applying. However, that doesn’t mean there isn’t any room left for a fine candidate like you!

In the first project, Introducing Calendar Accounts, the goal is to improve our backend layer to move from a flat list of calendars to a hierarchical list with calendars grouped by the accounts they belong to. Aside from the benefits this gives us with respect to avoiding code duplication and ugly hacks, it will open Lightning up to a load of new account-related features, for example notifications when a new calendar is added to the account, or improved support for authenticating to calendars on one server with different credentials.
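To make the idea concrete, here is a minimal sketch of the kind of grouping this project is after. All of the names here (CalendarAccount and its fields) are hypothetical illustrations, not existing Lightning interfaces:

// Hypothetical sketch only: today Lightning keeps a flat list of calendars;
// the project would hang calendars off account objects along these lines.
function CalendarAccount(server, credentials) {
  this.server = server;           // e.g. the CalDAV server URL
  this.credentials = credentials; // per-account, so two accounts can talk to
                                  // the same server with different logins
  this.calendars = [];            // calendars now belong to their account
}

CalendarAccount.prototype.addCalendar = function(calendar) {
  this.calendars.push(calendar);
  // With an account layer in place, features like notifying the user when a
  // new calendar appears on the account become natural to implement here.
};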

Second, we have proposed a project on Resource Booking Improvements. Right now, our invite-attendees dialog is fairly simple and only allows entering email addresses and seeing their free/busy status. What is missing is an easy way to invite resources and rooms, for example when you want to book a conference room for your meeting. There is an inconspicuous feature that allows changing an attendee to a resource entry, although there is no real value in doing this aside from sending more correct data to the calendar server; the user still has to remember the virtual email address associated with the conference room. With this Summer of Code project we want to allow any kind of calendar provider to specify how to search for rooms and resources. Certain CalDAV servers support searching for these entries using custom queries, and the goal for this project is mostly to support those servers.
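As a rough illustration of the provider hook we have in mind, consider the sketch below. The method name and shape are hypothetical, not an existing Lightning API:

// Hypothetical sketch: a calendar provider advertises how to search for
// rooms and resources, so the attendees dialog can query it directly.
var exampleCaldavProvider = {
  // query: a user-typed string like "conference room"
  // callback: receives an array of { name, email } entries, where email is
  // the virtual address the server associates with the room
  searchForResources: function(query, callback) {
    // A CalDAV provider could translate the query into the server's custom
    // resource search and map the results back into this shape.
    callback([{ name: "Conference Room 1", email: "room1@example.com" }]);
  }
};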

If you are interested, please do get in touch with me, either via email or on irc.mozilla.org, where my nickname is Fallen and I usually hang around in #calendar. Should I not be around, redDragon (a former GSoC Student, by the way!) will be there to help you.

February 28, 2015 02:32 PM

Provider for Google Calendar Postmortem

First of all, I’d like to apologize for not posting to this blog once in a while. There have been a few topics I could have written about, but I never got around to it. The consequence is that there will be a few posts in succession now; I hope to be better about this in the future.

In this post, I’d like to tell you a little bit about the changes to the Provider for Google Calendar that have taken place in the last few months. With due prior notice, Google has shut down versions 1 and 2 of the Google Calendar API. The previous version of the Provider for Google Calendar, version 0.32, was still using API v1.

The changes to the API were fairly substantial, so I took the opportunity to rewrite large parts of the Provider to use new JavaScript features and generally make the code more readable. I also added some new features, including:

As such drastic changes are a common source of regressions, I went through 10 rounds of pre-release testing and got some very helpful input from those who commented on the bug or sent me an email. There would have been substantially more issues without these folks, so thank you very much! In the last round the number of issues was down to a level where I felt comfortable releasing the Provider to the world.

When I released version 1.0, something inevitable happened: nearly 300,000 users find more issues than 140 testers do, so I had to do a few additional releases to fix further major issues. The new API version imposes limits on the number of requests being made, so one of the first issues I had to overcome was gaining more quota. Thanks to the fantastic folks at Google I was able to solve this using a combination of code changes to reduce the number of requests and higher quota limits.
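On the request-reduction side, one standard technique with the Calendar API v3 is incremental synchronization using sync tokens, so that a sync only transfers what changed since the last run. Here is a simplified sketch of that idea; it is not the Provider’s actual code, and listEvents() and applyChange() are placeholders:

// Sketch of incremental sync against the Calendar API v3. listEvents()
// stands in for a GET on the events collection; applyChange() stands in
// for updating the locally cached calendar.
function syncCalendar(calendarId, savedSyncToken, listEvents, applyChange) {
  // With a saved token the server returns only events changed since the
  // last sync; without one, this is the initial full fetch.
  var params = savedSyncToken ? { syncToken: savedSyncToken } : {};
  var response = listEvents(calendarId, params);
  response.items.forEach(applyChange);
  return response.nextSyncToken; // persist this for the next run
}

Here is a roundup of the other issues: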

In retrospect, there have been a lot of complaints, but on the other hand a lot of people have noticed how important this addon has become for them. Many have shown their gratitude by sending a donation via the addons page. I hope that version 1.0.4 fixes most of the issues; only a few more have been reported since. If you continue to experience difficulties, please send me an email or visit the support forum.


February 28, 2015 01:42 PM

February 27, 2015

Thunderbird Blog

Thunderbird Usage Continues to Grow

We’re happy to report that Thunderbird usage continues to expand.

Mozilla measures program usage by Active Daily Installations (ADI), which is the number of pings that Mozilla servers receive as installations do their daily plugin block-list update. This is not the same as the number of active users, since some users don’t access their program each day, and some installations are behind firewalls. An estimate of active monthly users is typically done by multiplying the ADI by a factor of 3.
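Applied to the February 24 figures below, that rule of thumb gives roughly 9,255,280 × 3 ≈ 27.8 million active monthly users, though given the firewall and daily-use caveats above this should be read as a rough estimate.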

To plot changes in Thunderbird usage over time, I’ve picked the peak ADI for each month for the last few years. Here’s the result:

Thunderbird Active Daily Installations, peak value per month.

Germany has long been our #1 country for usage, but in the fourth quarter of 2014, Japan exceeded the US as the #2 country. Here are the top 10 countries, taken from the ADI count of February 24, 2015:

Rank Country ADI 2015-02-24
1 Germany 1,711,834
2 Japan 1,002,877
3 United States 927,477
4 France 777,478
5 Italy 514,771
6 Russian Federation 494,645
7 Poland 480,496
8 Spain 282,008
9 Brazil 265,820
10 United Kingdom 254,381
All Others 2,543,493
Total 9,255,280

Country Rankings for Thunderbird Usage, February 24, 2015

The Thunderbird team is now working hard preparing our next major release, which will be Thunderbird 38 in May 2015. We’ll be blogging more about that release in the next few weeks, including reporting on the many new features that we have added.

February 27, 2015 10:44 PM

Mike Conley

The Joy of Coding (Episode 3)

The third episode is up! My machine was a little sluggish this time, since I had OBS chugging in the background attempting to do a hi-res screen recording simultaneously.

Richard Milewski and I are going to try an experiment where I try to stream with OBS next week, which should result in a much higher-resolution stream. We’re also thinking about having recording occur on a separate machine, so that it doesn’t bog me down while I’m working. Hopefully we’ll have that set up for next week.

So this third episode was pretty interesting. Probably the most interesting part was when I discovered, during the last quarter of the episode, that I’d accidentally shipped a regression in Firefox 36. Luckily, I’ve got a patch that fixes the problem, and it has been approved for uplift to Aurora and Beta. A point release is also planned for 36, so I’ve got approval to get the fix in there too. \o/

Here are the notes for the bug I was working on. The review feedback from karlt is in this bug, since I kinda screwed up where I posted the review request with MozReview.

February 27, 2015 03:27 PM

February 19, 2015

Mike Conley

The Joy of Coding (Episode 2)

The second episode is up! We seem to have solved the resolution problem this time around – big thanks to Richard Milewski for his work there. This time, however, my microphone levels were just a bit low for the first half-hour. That’s my bad – I’ll make sure my gain is at the right level next time before I air.

Here are the notes for the bug I was working on.

And let me know if there’s anything else I can do to make these episodes more useful or interesting.

February 19, 2015 03:54 AM

February 18, 2015

Meeting Notes

Thunderbird: 2015-02-17

Thunderbird meeting notes 2015-02-17. NOON PST. Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

fallen, wsmwk, rkent, aceman, paenglab, makemyday, magnus, jorgk

Current status and discussions

  • 36.0 beta is out

Critical Issues

Critical bugs. Please leave these here until they’re confirmed fixed.

  • Auto-complete improvements – some of those could go into esr31
  • ldap crasher
  • certificate crasher
  • Lightning integration
  • AB all-account search bug 170270
  • maildir UI
  • video chat – The initial set of patches, with IB UI, may land this week (they’re up for final review). We’re considering also landing a set of matching strings for TB so that uplifting a port of the UI becomes possible. I’m not sure the feature will be ready to ship in TB 38 as it has not undergone much real-world testing yet, but you never know, there may not be any nasty surprises ;)

Release Issues

Upcoming

  • Thunderbird 38 moves to Earlybird ~ February 24, 2015
    • string freeze

Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!), we need to rewrite / heavily modify the Lightning articles on support.mozilla.org. Let me know on IRC (rolandtanglao on #tb-support-crew) or via rtanglao AT mozilla.com, or simply start editing the articles.

Round Table

Paenglab

  • I’ve nominated bug 1096006 “Add AccountManager to the prefs in tab” for Tracking_TB38.
    • Is this bug desired for TB 38? It would be needed to enable PrefsInTab.
    • If yes, I have a string-only patch to land before the string freeze.
  • I’ve also nominated Hiro’s bug 1087233 “Create about:downloads to migrate to Downloads.jsm” for Tracking_TB38.
    • I’ve needinfoed him to ask if he has time to finish it, but no answer so far.
    • It also has strings in it. I could make a string-only patch if needed.

sshagarwal

  • Plan to land AB fix bug 170270 for TB 38.
  • Bundled chat desktop notifications bug 1127802 waiting for final review.
  • Discussing schema design and an appropriate db backend for the next-gen address book with mconley. We plan to get an approximate idea of the average number of contacts in users’ address books (bug 1132588) as a required minimum performance measure.

wsmwk

  • 36.0 beta QA organized
  • triage topcrashes
  • working on the HWA question – bug 1131879, Disable hardware acceleration (HWA)

aceman

  • Having an active week fixing smaller backend bugs (landing right now) and polishing for the release. Proud to have fixed long-standing dataloss bug 840418.

Question Time

Other

  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

Action Items

  • organize 36 beta postmortem meeting (wsmwk)
  • lightning integration meeting (rkent/fallen)

February 18, 2015 04:00 AM

February 17, 2015

Mike Conley

On unsafe CPOW usage, and “why is my Nightly so sluggish with e10s enabled?”

If you’ve opened the Browser Console lately while running Nightly with e10s enabled, you might have noticed a warning message – “unsafe CPOW usage” – showing up periodically.

I wanted to talk a little bit about what that means, and what’s being done about it. Brad Lassey already wrote a bit about this, but I wanted to expand upon it (especially since one of my goals this quarter is to get a handle on unsafe CPOW usage in core browser code).

I also wanted to talk about sluggishness that some of our brave Nightly testers with e10s enabled have been experiencing, and where that sluggishness is coming from, and what can be done about it.

What is a CPOW?

“CPOW” stands for “Cross-process Object Wrapper”1, and is part of the glue that has allowed e10s to be enabled on Nightly without requiring a full re-write of the front-end code. It’s also part of the magic that’s allowing a good number of our most popular add-ons to continue working (albeit slowly).

In sum, a CPOW is a way for one process to synchronously access and manipulate something in another process, as if they were running in the same process. Anything that can be considered a JavaScript Object can be represented as a CPOW.

Let me give you an example.

In single-process Firefox, easy and synchronous access to the DOM of web content was more or less assumed. For example, in browser code, one could do this from the scope of a browser window:

let doc = gBrowser.selectedBrowser.contentDocument;
let contentBody = doc.body;

Here contentBody corresponds to the <body> element of the document in the currently selected browser. In single-process Firefox, querying for and manipulating web content like this is quick and easy.

In multi-process Firefox, where content is processed and rendered in a completely separate process, how does something like this work? This is where CPOWs come in2.

With a CPOW, one can synchronously access and manipulate these items, just as if they were in the same process. We expose a CPOW for the content document in a remote browser with contentDocumentAsCPOW, so the above could be rewritten as:

let doc = gBrowser.selectedBrowser.contentDocumentAsCPOW;
let contentBody = doc.body;

I should point out that contentDocumentAsCPOW and contentWindowAsCPOW are exposed on <xul:browser> objects, and that we don’t make every accessor of a CPOW have the “AsCPOW” suffix. This is just our way of making sure that consumers of the contentWindow and contentDocument on the main process side know that they’re probably working with CPOWs3. contentBody.firstChild would also be a CPOW, since CPOWs can only beget more CPOWs.

So for the most part, with CPOWs, we can continue to query and manipulate the <body> of the document loaded in the current browser just like we used to. It’s like an invisible compatibility layer that hops us right over that process barrier.

Great, right?

Well, not really.

CPOWs are really a crutch to help add-ons and browser code exist in this multi-process world, but they’ve got some drawbacks. Most noticeably, there are performance drawbacks.

Why is my Nightly so sluggish with e10s enabled?

Have you been noticing sluggish performance on Nightly with e10s? Chances are this is caused by an add-on making use of CPOWs (either knowingly or unknowingly). Because CPOWs are used for synchronous reading and manipulation of objects in other processes, they send messages to other processes to do that work, and block the main process while they wait for a response. We call this “CPOW traffic”, and if you’re experiencing a sluggish Nightly, this is probably where the sluggishness is coming from.

Instead of using CPOWs, add-ons and browser code should be updated to use frame scripts sent over the message manager. Frame scripts cannot block the main process, and can be optimized to send only the bare minimum of information required to perform an action in content and return a result.
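As a rough sketch of what that looks like in practice, here is the CPOW example from above restated with a frame script. The message names are made up for illustration, and error handling is omitted:

// Frame script, running in the content process. It does the DOM work there
// and ships back only the small result, asynchronously.
addMessageListener("MyAddon:GetBodyText", function() {
  sendAsyncMessage("MyAddon:GetBodyText:Result", {
    text: content.document.body.textContent
  });
});

// Main process side: load the frame script, then communicate without
// blocking, instead of reaching into content with a CPOW.
let mm = gBrowser.selectedBrowser.messageManager;
mm.loadFrameScript("chrome://myaddon/content/frame-script.js", false);
mm.addMessageListener("MyAddon:GetBodyText:Result", function(message) {
  console.log("body text length: " + message.data.text.length);
});
mm.sendAsyncMessage("MyAddon:GetBodyText");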

Add-ons built with the Add-on SDK should already be using “content scripts” to manipulate content, and therefore should inherit a bunch of fixes from the SDK as e10s gets closer to shipping. These add-ons should not require too many changes. Old-style add-ons, however, will need to be updated to use frame scripts unless they want to be super-sluggish and bog the browser down with CPOW traffic.

And what constitutes “unsafe CPOW usage”?

“unsafe” might be too strong a word. “unexpected” might be a better term. Brad Lassey laid this out in his blog post already, but I’ll quickly rehash it.

There are two main cases to consider when working with CPOWs:

  1. The content process is already blocked sending up a synchronous message to the parent process
  2. The content process is not blocked

The first case is what we consider “the good case”. The content process is in a known good state, and it’s primed to receive IPC traffic (since it’s otherwise just idling). The only bad part about this is the IPC traffic.

The second case is what we consider the bad case. This is when the parent is sending down CPOW messages to the child (by reading or manipulating objects in the content process) when the child process might be off processing other things. This case is far more likely than the first case to cause noticeable performance problems, as the main thread of the content process might be bogged down doing other things before it can handle the CPOW traffic – and the parent will be blocked waiting for the messages to be responded to!

There’s also a more speculative fear that the parent might send down CPOW traffic at a time when it’s “unsafe” to communicate with the content process. There are potentially times when it’s not safe to run JS code in the content process, but CPOW traffic requires both processes to execute JS. This is a concern that was expressed to me by someone over IRC, and I don’t exactly understand what the implications are – but if somebody wants to comment and let me know, I’ll happily update this post.

So, anyhow, to sum – unsafe CPOW usage is when CPOW traffic is initiated on the parent process side while the content process is not blocked. When this unsafe CPOW usage occurs, we log an “unsafe CPOW usage” message to the Browser Console, along with the script and line number where the CPOW traffic was initiated from.

Measuring

We need to measure and understand CPOW usage in Firefox, as well as in add-ons running in Firefox, and over time we need to reduce this CPOW usage. The priority should be on reducing unsafe CPOW usage in core browser code.

If there’s anything that working on the Australis project taught me, it’s that in order to change something, you need to know how to measure it first. That way, you can make sure your efforts are having an effect.

We now have a way of measuring the amount of time that Firefox code and add-ons spend processing CPOW messages. You can look at it yourself – just go to about:compartments.

It’s not the prettiest interface, but it’s a start. The second column is the time processing CPOW traffic, and the higher the number, the longer it’s been doing it. Naturally, we’ll be working to bring those numbers down over time.

A possibly quick-fix for a slow Nightly with e10s

As I mentioned, we also list add-ons in about:compartments, so if you’re experiencing a slow Nightly, check out about:compartments and see if there’s an add-on with a high number in the second column. Then, try disabling that add-on to see if your performance problem is reduced.

If so, great! Please file a bug on Bugzilla in this component for the add-on, mention the name of the add-on4, describe the performance problem, and mark it blocking e10s-addons if you can.

We’re hoping to automate this process by exposing some UI that informs the user when an add-on is causing too much CPOW traffic. This will be landing in Nightly near you very soon.

PKE Meter, a CPOW Geiger Counter

Logging “unsafe CPOW usage” is all fine and dandy if you’re constantly looking at the Browser Console… but who is constantly looking at the Browser Console? Certainly not me.

Instead, I whipped up a quick and dirty add-on that plays a click, like a Geiger counter, any time “unsafe CPOW usage” is put into the Browser Console (a sketch of the mechanism follows the list below). This has already highlighted some places where we can reduce unsafe CPOW usage in core Firefox code – particularly:

  1. The Page Info dialog. This is probably the worst offender I’ve found so far – humongous unsafe CPOW traffic just from opening the dialog, and it’s really sluggish.
  2. Closing tabs. SessionStore synchronously communicates with the content process in order to read the tab state before the tab is closed.
  3. Back / forward gestures, at least on my MacBook
  4. Typing into an editable HTML element after the Findbar has been opened.
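The mechanism behind the add-on is tiny. Roughly, it is a console-service listener along these lines (a sketch assuming a privileged add-on scope; playClick() is a hypothetical helper that produces the click sound):

// Listen to the console service and click on each "unsafe CPOW usage"
// message. playClick() is a hypothetical sound helper.
Components.utils.import("resource://gre/modules/Services.jsm");

let cpowGeiger = {
  observe: function(message) {
    if (message.message &&
        message.message.indexOf("unsafe CPOW usage") !== -1) {
      playClick();
    }
  }
};
Services.console.registerListener(cpowGeiger);
// Remember to call Services.console.unregisterListener(cpowGeiger) on
// shutdown, or the listener will leak.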

If you’re interested in helping me find more, install this add-on5, and listen for clicks. At this point, I’m only interested in unsafe CPOW usage caused by core Firefox code, so you might want to disable any other add-ons that might try to synchronously communicate with content.

If you find an “unsafe CPOW usage” that’s not already blocking this bug, please file a new one! And cc me on it! I’m mconley at mozilla dot com.


  1. I pronounce CPOW as “kah-POW”, although I’ve also heard people use “SEE-pow”. To each his or her own. 

  2. For further reading, Bill McCloskey discusses CPOWs in greater detail in this blog post. There’s also this handy documentation

  3. I say probably, because in the single-process case, they’re not working with CPOWs – they’re accessing the objects directly as they used to. 

  4. And say where to get it from, especially if it’s not on AMO. 

  5. Source code is here 

February 17, 2015 04:47 PM

February 16, 2015

Mike Conley

The Joy of Coding (Episode 1)

Here’s the first episode! I streamed it last Wednesday, and it was mostly concerned with bug 1090439, which is about making the print dialog and progress calls from the child process asynchronous.

Here are the notes for that bug. I still haven’t closed it yet, so perhaps I’ll keep pressing on this next Wednesday when I stream Episode 2. We’ll see!

A note that I did struggle with some resolution issues in this episode. I’m working with Richard Milewski from the Air Mozilla team to make this better for the next episode. Sorry about that!

February 16, 2015 07:29 PM

Rumbling Edge - Thunderbird

2015-02-15 Calendar builds

Common (excluding Website bugs)-specific: (36)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

February 16, 2015 07:22 AM

2015-02-15 Thunderbird comm-central builds

Thunderbird-specific: (27)

MailNews Core-specific: (22)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

February 16, 2015 07:21 AM

February 11, 2015

Meeting Notes

Thunderbird: 2015-02-10

Thunderbird meeting notes 2015-02-10. Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

aceman, clokep, florian, jcranmer, Jorg K, mkmelin, Paenglab, rkent, Roland, sshagarwal, wsmwk, MakeMyDay

Action items from last meetings

  • wsmwk to get in touch with Standard8 re: beta.
    • done – bkerensa and sylvestre are on it
  • rkent to work with Standard8 (and Fallen) on issues of 1. management of tracking flags and 2. pushing into aurora and beta for TB 38. (The meeting generally agreed that mkmelin and rkent would be appropriate to manage pushing patches forward into aurora and beta.)

Critical Issues

Critical bugs. Please leave these here until they’re confirmed fixed.

  • Auto-complete improvements – some of those could go into esr31

Release Issues

  • Current beta blocked due to Windows XP failures. rkent has a try server configuration that can test a beta build, and will try Standard8’s suggestions.

Upcoming

  • Thunderbird 38 moves to Earlybird ~ February 24, 2015

We need people to commit to being mentors.

Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!), we need to rewrite / heavily modify the Lightning articles on support.mozilla.org. Let me know on IRC (rolandtanglao on #tb-support-crew) or via rtanglao AT mozilla.com, or simply start editing the articles.

Round Table

JosiahOne

  • So I started a new job recently, but because of that plus school, my time for TB stuff is very, very low. I will continue doing ui-reviews and reviews, but implementing anything has pretty much come to an end until summer break.

wsmwk

  • release management https://etherpad.mozilla.org/XxBwrpMHKz
  • disable HWA for 38? It has been suggested by someone in support to disable it because “3d acceleration … does little or nothing for Thunderbird but messes menus, font and causes crashes (the kind with no crash reporter reports).” bug 1131879

rkent

  • Hot bugs
    • bug 1125577 – startup crash in NSSCryptoContext_FindCertificateByEncodedCertificate (and similar bug 1128614)
    • bug 1124015 – Add UI to select maildir for storage when creating accounts
    • bug 1119529 – Sending message succeeds but shows the error “error while running message filters on it.”
  • Unfortunately a long review queue that I will be looking at for the next few days.
  • I now have access to Thunderbird ADI data. Our ADI reached a new peak last month (in spite of Slashdot assuming “Thunderbird usage is dropping”) and Japan has now surpassed the US as the #2 country (after Germany).

jcranmer

  • Hopefully going to work down my review queue by this weekend
  • Main jsmime perf regression fixes are r? rkent
  • I have a non-promisified version of OAuth2, but still no UI hookup
  • Mozharness-based mozmill tests: I’ve updated the runner, need to make updates to three or four repositories to make it work
    • Trying to get this in progress for Thunderbird 38, so we don’t need to maintain the old mozmill buildbot stuff for ESR
  • I’ve been doing some work with the emailjs team to add functionality to their SMTP libraries (specifically with regards to SASL) that we could share between TB/Whiteout.io/Gaia email teams.

TheOne

Jorg K

I have an XP machine (32 bit); I could run (not build) and debug (with WinDbg) the beta, if that’s of any help. I’d need to know where to download it … and the mentioned suggestions to try. (Contact via e-mail to start off.)

mkmelin

  • autocomplete:
    • the critical regressions fixed
    • 3 prominent complaints still not done: the “tab too quickly doesn’t complete”, “show as red even if found”, “insert link missing paste url in context menu”
    • ordering: now landed on esr, some complaints still, need to investigate

Question Time

I’d like to know what happened to the “Thunderbird Discussions with Mozilla”, i.e. the letter that was meant to be sent to Mozilla management re: funding, donations, staffing, etc. There was a lively discussion on the tb-planning mailing list in early January 2015.

  • won’t happen before 38 branching

Support team

  • As underpass has pointed out, we need to rewrite all the Lightning articles; they are out of date whether or not we finish the integration for TB 38. Email me, ping roland on IRC, or just edit the articles (see above under “Lightning to Thunderbird Integration”). Tonnes, do you have time to write some of these Lightning articles in English?

Other

  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

(Extra) Meeting next Tuesday, Feb 17.

Action Items

-none-

February 11, 2015 04:00 AM

February 10, 2015

Bryan Clark

If writing is a muscle

I haven’t been to the gym in a long time.

David Eaves, a person I have immense amounts of respect for, has been using a tag line related to this title/intro on his blog for quite a while, probably longer than I've known him. And I honestly never gave much thought to the idea that writing really is a muscle until recently. I've taken a break from being a designer (or a programmer) to work as a product manager for over a year now. Designing and coding require a set of skills I'm very familiar with: code is an interpretive language that people use to communicate with each other about the details of the commands they issue to a computer, while design is a more visual language of storytelling, heavily using imagery and some text to convey a user's journey to the team intent on correctly interacting with that user. Both pursuits are about communication, but each uses written language in a very different way. As a product manager I'm forced to lean on my skills as a writer, and I don't think I had much in the way of those skills previously; but whatever bedridden muscles have lain dormant are reawakening as I realize how young and foolish I really was to ignore this essential form of communication.

I’m hoping there is more to come, perhaps starting with some tech posts about recent projects while I try to grapple with this idea of writing more than a tweet.

February 10, 2015 07:07 AM

February 09, 2015

Mark Banner

Firefox Hello Desktop: Behind the Scenes – Flux and React

This is the first of a few posts I’m planning about how we implement and work on the desktop and standalone parts of Firefox Hello. We’ve been doing some things in different ways, which we have found to be advantageous. We know other teams are interested in what we do, so it’s time to share!

Content Processes

First, a little bit of architecture: The panels and conversation window run in content processes (just like regular web pages). The conversation window shares code with the link-clicker pages that are on hello.firefox.com.

Hence those parts run very much in a web style, and for various reasons, we decided to build them in a web-style manner too. As a result, we’ve ended up using React and Flux to aid our development.

I’ll detail more about the architecture in future posts.

The Flux Pattern

Flux is a recommended pattern for use alongside React, although I think you could use it with other frameworks as well. I’ll detail here how we use Flux specifically for Hello. As Flux is a pattern, there’s no one set standard, and methods of implementation vary.

Flux Overview

The main parts of a flux system are stores, components and actions. Some of this is a bit like an MVC system, but I find there’s better definition about what does what.
Diagram: example flow in a Flux pattern.

An action is effectively the result of an event that changes the system. For example, in Loop, we use actions for user events, but we also use them for any data incoming from the server.

A store contains the business logic. It listens for actions; when it receives one, it does something based on the action and updates its state appropriately.

A component is a view. The view has a set of properties (passed-in values) and/or state (obtained from the store’s state). For a given set of properties and state, you always get the same layout. The components listen for updates to the state in the stores and update appropriately.

We also have a dispatcher. The dispatcher dispatches actions to interested stores. Only one action can be processed at any one time. If a new action comes in, then the dispatcher queues it.

Actions are always synchronous – if changes would happen due to external stimuli then these will be new actions. For example, this prevents actions from blocking other actions whilst waiting for a response from the server.
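To make those roles concrete, here is a stripped-down sketch of a dispatcher and a store. This is illustrative code in the spirit of the pattern, not Loop’s actual classes; names like RoomStore and handleAction are made up:

// A dispatcher delivers actions to interested stores, one at a time.
function Dispatcher() {
  this.stores = [];
  this.queue = [];
  this.active = false;
}

Dispatcher.prototype = {
  register: function(store) {
    this.stores.push(store);
  },

  dispatch: function(action) {
    // Only one action is processed at any one time; anything arriving
    // while a dispatch is in flight gets queued behind it.
    this.queue.push(action);
    if (this.active) {
      return;
    }
    this.active = true;
    while (this.queue.length) {
      let next = this.queue.shift();
      this.stores.forEach(function(store) {
        store.handleAction(next);
      });
    }
    this.active = false;
  }
};

// A store owns the business logic: it reacts to actions, updates its state,
// and components re-render from that state.
function RoomStore(dispatcher) {
  dispatcher.register(this);
  this.state = { roomState: "INIT" };
}

RoomStore.prototype.handleAction = function(action) {
  if (action.name === "setupRoomInfo") {
    this.state.roomState = "READY";
    // ...then notify listening components so they re-render.
  }
};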

What advantages do we get?

For Hello, we find the flux pattern fits very nicely. Before, we used a traditional MVC model; however, we kept getting into a mess, with events being all over the place and application logic wrapped up amongst the views as well as the models.

Now, we have a much more defined structure:

React provides the component structure; it has defined ways of tracking state and properties, and the re-rendering on state change gives us a lot of automation. Since it encourages the separation of immutable properties, a whole class of inadvertent errors is eliminated.
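For example, a tiny view in this style might look like the following sketch; the store’s getState() and its “change” event are assumed emitter conveniences for illustration, not Loop’s actual API:

// Illustrative only: a component renders from store state and re-renders
// whenever the store broadcasts a change.
var RoomStatusView = React.createClass({
  getInitialState: function() {
    // Initial state is obtained from the store.
    return this.props.store.getState();
  },

  componentDidMount: function() {
    // Listen for store updates and re-render from the new state.
    this.props.store.on("change", function() {
      this.setState(this.props.store.getState());
    }.bind(this));
  },

  render: function() {
    // Same properties and state in, same layout out.
    return React.createElement("div", null, "Room is " + this.state.roomState);
  }
});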

There are also many advantages for debugging – we have a flag that lets us watch all the actions going through the system, so it’s much easier to track what events are taking place and the data passed with them. This, combined with the fact that actions have limited scope, helps with debugging the data flows.

Simple Unit Testing

For testing, we’re able to do unit testing in a much simpler fashion:

it("should render a muted local audio button", function() {
  var comp = TestUtils.renderIntoDocument(
    React.createElement(sharedViews.MediaControlButton, {
      scope: "local",
      type: "audio",
      action: function(){},
      enabled: false
    }));

  expect(comp.getDOMNode().classList.contains("muted")).eql(true);
});
it("should set the state to READY", function() {
  store.setupRoomInfo(new sharedActions.SetupRoomInfo(fakeRoomInfo));

  expect(store._storeState.roomState).eql(ROOM_STATES.READY);
});

We therefore have many tests written at the unit test level. Many times we’ve found and prevented issues whilst writing these tests, and yet, because these are all content based, we can run the tests in a few seconds. I’ll go more into testing in a future post.

References

Here are a few references to some of the areas in our code base that are good examples of our flux implementation. Note that behind the scenes, everything is known as Loop – the codename for the project.

Conclusion and more coming…

We’ve found that using the flux model is much more organised than we were with MVC; possibly it’s just a better-defined methodology, but it gave us the structure we were badly missing. In future posts, I’ll discuss our development facilities, more about the desktop architecture and whatever else comes up, so please do leave questions in the comments and I’ll try to answer them either directly or with more posts.

February 09, 2015 08:49 PM

February 08, 2015

Mike Conley

The Joy of Coding (or, Firefox Hacking Live!)

A few months back, I started publishing my bug notes online, as a way of showing people what goes on inside a Firefox engineer’s head while fixing a bug.

This week, I’m upping the ante a bit: I’m going to live-hack on Firefox for an hour and a half on each of the next few Wednesdays on Air Mozilla. I’m calling it The Joy of Coding1. I’ll be working on real Firefox bugs2 – not some toy exercise-bug where I’ve pre-planned where I’m going. It will be unscripted, unedited, and uncensored. But hopefully not uninteresting3!

Anyhow, the first episode airs this Wednesday. I’ll be using #livehacking on irc.mozilla.org as a backchannel. Not sure what bug(s) I’ll be hacking on – I guess it depends on what I get done on Monday and Tuesday.

Anyhow, we’ll try it for a few weeks to see if folks are interested in watching. Who knows, maybe we can get a few more developers doing this too – I’d enjoy seeing what other folks do to fix their bugs!

Anyhow, I hope to see you there!


  1. Maybe I’ll wear an afro wig while I stream 

  2. Specifically, I’ll be working on Electrolysis bugs, since that’s what my focus is on these days. 

  3. I’ve actually piloted this for the past few weeks, streaming on YouTube Live. Here’s a playlist of the pilot episodes

February 08, 2015 07:58 PM

January 15, 2015

Rumbling Edge - Thunderbird

2015-01-14 Calendar builds

Common (excluding Website bugs)-specific: (16)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

January 15, 2015 07:52 AM

2015-01-14 Thunderbird comm-central builds

Thunderbird-specific: (30)

MailNews Core-specific: (14)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

January 15, 2015 07:51 AM

January 13, 2015

Joshua Cranmer

Why email is hard, part 8: why email security failed

This post is part 8 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. Part 7 discusses the problem of trust. This part discusses why email security has largely failed.

At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.

Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others failed?

DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].
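For a flavor of how little there is to it, here is what a typical DKIM-Signature header looks like; this is an illustrative example with abbreviated values, not one taken from a real message:

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=example.com;
        s=mail2015; h=from:to:subject:date; bh=g3zLY...=; b=dzdVy...=

The d= and s= values tell the verifier where in DNS to find the public key (at mail2015._domainkey.example.com in this example), h= lists the signed headers, bh= carries the body hash, and b= the signature itself.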

The failure of S/MIME and PGP to see large deployment is certainly a large topic of discussion on myriad cryptography-enthusiast mailing lists, which often like to partake in propositions of new end-to-end email encryption paradigms, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I see it as unlikely that a protocol will come about that fixes these weaknesses well enough to become successful.

The first weakness, and one I've harped on many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it may seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that it inherently requires users to take other actions out-of-band of email to work properly.

Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly based on the fact that other people in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).

Key discovery is another weakness, although it's arguably the easiest one to solve. If you try to keep discovery independent of trust, the problem of key discovery is merely picking a protocol to publish and another one to find keys. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.

Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: people who lose their private keys or key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.

Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser-known one is server-side search and filtration. While there exist some mechanisms to do search on encrypted text, those mechanisms rely on the fact that you can manipulate the text to change the message, destroying the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to have to pay the transfer rates to download all the messages in their folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 messages in a single folder—would still prefer not to have to transfer all their mail even on desktops.

The more well-known feature that would disappear is spam filtration. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam: per my calculations, a 10% reduction in efficiency would double the amount of server storage (if 90% of mail is spam, a perfect filter stores only the 10% that is ham, while letting a tenth of the spam through adds another 9% of total volume, roughly doubling what must be stored). Client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine having your email take 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.

When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.

This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!

[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former GMail spam employee, if it takes you as long as three minutes to calculate reputation, the spammer wins.

January 13, 2015 04:38 AM

January 10, 2015

Joshua Cranmer

A unified history for comm-central

Several years back, Ehsan and Jeff Muizelaar attempted to build a unified history of mozilla-central across the Mercurial era and the CVS era. Their result is now used in the gecko-dev repository. While being distracted on yet another side project, I thought that I might want to do the same for comm-central. It turns out that building a unified history for comm-central makes mozilla-central look easy: mozilla-central merely had one import from CVS. In contrast, comm-central imported twice from CVS (the calendar code came later), four times from mozilla-central (once with converted history), and twice from Instantbird's repository (once with converted history). Three of those conversions also involved moving paths. But I've worked through all of those issues to provide a nice snapshot of the repository [1]. And since I've been frustrated by failing to find good documentation on how this sort of process went for mozilla-central, I'll provide details on the process for comm-central.

The first step, and probably the hardest, is getting the CVS history in DVCS form (I use hg because I'm more comfortable with it, but there's effectively no difference between hg, git, or bzr here). There is a git version of Mozilla's CVS tree available, but I noticed after doing research that its last revision is about a month before the revision I need for Calendar's import. The documentation for how that repo was built is no longer on the web, although we eventually found a copy after I wrote this post on git.mozilla.org. I tried doing another conversion using hg convert to get CVS tags, but that rudely blew up in my face. For now, I've filed a bug on getting an official, branchy-and-tag-filled version of this repository, while using the current lack of history as a base. Calendar people will have to suffer missing a month of history.

CVS is famously hard to convert to more modern repositories, and, from my research, Mozilla's CVS looks like it uses the features that make conversion difficult. In particular, both the calendar CVS import and the initial comm-central CVS import used a CVS tag HG_COMM_INITIAL_IMPORT. That tagging was done, on only a small portion of the tree, twice, about two months apart. Fortunately, mailnews code was never touched on CVS trunk after the import (there appears to be one commit on calendar after the tagging), so it is probably possible to salvage a repository-wide consistent tag.

The start of my script for conversion looks like this:

#!/bin/bash

set -e

WORKDIR=/tmp
HGCVS=$WORKDIR/mozilla-cvs-history
MC=/src/trunk/mozilla-central
CC=/src/trunk/comm-central
OUTPUT=$WORKDIR/full-c-c

# Bug 445146: m-c/editor/ui -> c-c/editor/ui
MC_EDITOR_IMPORT=d8064eff0a17372c50014ee305271af8e577a204

# Bug 669040: m-c/db/mork -> c-c/db/mork
MC_MORK_IMPORT=f2a50910befcf29eaa1a29dc088a8a33e64a609a

# Bug 1027241, bug 611752 m-c/security/manager/ssl/** -> c-c/mailnews/mime/src/*
MC_SMIME_IMPORT=e74c19c18f01a5340e00ecfbc44c774c9a71d11d

# Step 0: Grab the mozilla CVS history.
if [ ! -e $HGCVS ]; then
  hg clone git+https://github.com/jrmuizel/mozilla-cvs-history.git $HGCVS
fi

Since I don't want to include the changesets useless to comm-central history, I trimmed the history using hg convert to eliminate changesets that don't change the necessary files. Most of the includes are simple directory-wide entries, but S/MIME only moved a few files over, so it requires a more complex way to grab the file list. In addition, I also replaced the % in the usernames with the @ that they customarily appear with in hg. The relevant code is here:

# Step 1: Trim mozilla CVS history to include only the files we are ultimately
# interested in.
cat >$WORKDIR/convert-filemap.txt <<EOF
# Revision e4f4569d451a
include directory/xpcom
include mail
include mailnews
include other-licenses/branding/thunderbird
include suite
# Revision 7c0bfdcda673
include calendar
include other-licenses/branding/sunbird
# Revision ee719a0502491fc663bda942dcfc52c0825938d3
include editor/ui
# Revision 52efa9789800829c6f0ee6a005f83ed45a250396
include db/mork/
include db/mdb/
EOF

# Add the S/MIME import files
hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'include {file}\n'}" >>$WORKDIR/convert-filemap.txt

if [ ! -e $WORKDIR/convert-authormap.txt ]; then
hg -R $HGCVS log --template "{email(author)}={sub('%', '@', email(author))}\n" \
  | sort -u > $WORKDIR/convert-authormap.txt
fi

cd $WORKDIR
hg convert $HGCVS $OUTPUT --filemap convert-filemap.txt -A convert-authormap.txt

That last command provides us the subset of the CVS history that we need for unified history. Strictly speaking, I should be pulling a specific revision, but I happen to know that there's no need to (we're cloning the only head) in this case. At this point, we now need to pull in the mozilla-central changes before we pull in comm-central. Order is key; hg convert will only apply the graft points when converting the child changeset (which it does but once), and it needs the parents to exist before it can do that. We also need to ensure that the mozilla-central graft point is included before continuing, so we do that, and then pull mozilla-central:

CC_CVS_BASE=$(hg log -R $HGCVS -r 'tip' --template '{node}')
CC_CVS_BASE=$(grep $CC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)
MC_CVS_BASE=$(hg log -R $HGCVS -r 'gitnode(215f52d06f4260fdcca797eebd78266524ea3d2c)' --template '{node}')
MC_CVS_BASE=$(grep $MC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)

# Okay, now we need to build the map of revisions.
cat >$WORKDIR/convert-revmap.txt <<EOF
e4f4569d451a5e0d12a6aa33ebd916f979dd8faa $CC_CVS_BASE # Thunderbird / Suite
7c0bfdcda6731e77303f3c47b01736aaa93d5534 d4b728dc9da418f8d5601ed6735e9a00ac963c4e, $CC_CVS_BASE # Calendar
9b2a99adc05e53cd4010de512f50118594756650 $MC_CVS_BASE # Mozilla graft point
ee719a0502491fc663bda942dcfc52c0825938d3 78b3d6c649f71eff41fe3f486c6cc4f4b899fd35, $MC_EDITOR_IMPORT # Editor
8cdfed92867f885fda98664395236b7829947a1d 4b5da7e5d0680c6617ec743109e6efc88ca413da, e4e612fcae9d0e5181a5543ed17f705a83a3de71 # Chat
EOF

# Next, import mozilla-central revisions
for rev in $MC_MORK_IMPORT $MC_EDITOR_IMPORT $MC_SMIME_IMPORT; do
  hg convert $MC $OUTPUT -r $rev --splicemap $WORKDIR/convert-revmap.txt \
    --filemap $WORKDIR/convert-filemap.txt
done

Some notes about all of the revision ids in the script. The splicemap requires the full 40-character SHA ids; anything less and the thing complains. I also need to specify the parents of the revisions that deleted the code for the mozilla-central import, so if you go hunting for those revisions and are surprised that they don't remove the code in question, that's why.

I mentioned complications about the merges earlier. The Mork and S/MIME import codes here moved files, so that what was db/mdb in mozilla-central became db/mork. There's no support for causing the generated splice to record these as a move, so I have to manually construct those renamings:

# We need to execute a few hg move commands due to renamings.
pushd $OUTPUT
hg update -r $(grep $MC_MORK_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_MORK_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"db/mdb\", \"db/mork\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move Mork files' -d '2011-08-06 17:25:21 +0200'
MC_MORK_IMPORT=$(hg log -r tip --template '{node}')

hg update -r $(grep $MC_SMIME_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"security/manager/ssl\", \"mailnews/mime\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move S/MIME files' -d '2014-06-15 20:51:51 -0700'
MC_SMIME_IMPORT=$(hg log -r tip --template '{node}')
popd

# Echo the new move commands to the changeset conversion map.
cat >>$WORKDIR/convert-revmap.txt <<EOF
52efa9789800829c6f0ee6a005f83ed45a250396 abfd23d7c5042bc87502506c9f34c965fb9a09d1, $MC_MORK_IMPORT # Mork
50f5b5fc3f53c680dba4f237856e530e2097adfd 97253b3cca68f1c287eb5729647ba6f9a5dab08a, $MC_SMIME_IMPORT # S/MIME
EOF

Now that we have all of the graft points defined, and all of the external code ready, we can pull comm-central and do the conversion. That's not quite it, though—when we graft the S/MIME history to the original mozilla-central history, we have a small segment of abandoned converted history. A call to hg strip removes that.

# Now, import comm-central revisions that we need
hg convert $CC $OUTPUT --splicemap $WORKDIR/convert-revmap.txt
hg strip 2f69e0a3a05a

[1] I left out one of the graft points because I just didn't want to deal with it. I'll leave it as an exercise to the reader to figure out which one it was. Hint: it's the only one I didn't know about before I searched for the archive points [2].
[2] Since I wasn't sure I knew all of the graft points, I decided to try to comb through all of the changesets to figure out who imported code. It turns out that hg log -r 'adds("**")' narrows it down nicely (1667 changesets to look at instead of 17547), and using the {file_adds} template helps winnow it down more easily.

January 10, 2015 05:55 PM

January 03, 2015

Mike Conley

DocShell in a Nutshell – Part 3: Maturation (2005 – 2010)

Whoops

First off, an apology. I’ve fallen behind on these posts, and that’s not good – the iron has cooled, and I was taught to strike it while it was hot. I was hit with classic blogcrastination.

Secondly, another apology – I made a few errors in my last post, and I’d like to correct them:

  1. It’s come to my attention that I played a little fast and loose with the notions of “global history” and “session history”. They’re really two completely different things. Specifically, global history is what populates the AwesomeBar. Global history’s job is to remember every site you visit, regardless of browser window or tab. Session history is a different beast altogether – session history is the history inside the back-forward buttons. Every time you click on a link from one page, and travel to the next, you create a little nugget of session history. And when you click the back button, you move backwards in that session history. That’s the difference between the two – “like chalk and cheese”, as NeilAway said when he brought this to my attention.
  2. I also said that the docshell/ folder was created on Travis’s first landing on October 15th, 1998. This is not true – the docshell/ folder was created several months earlier, in this commit by “kipp”, dated July 18, 1998.

I’ve altered my last post to contain the above information, along with details on what I found between the time of that commit and Travis’s first landing. Maybe go back and give that a quick skim while I wait. Look for the string “correction” to see what I’ve changed.

I also got some confirmation from Travis himself over Twitter regarding my last post:

@mike_conley Looks like general right flow as far as 14 years ago memory can aid. :) Many context points surround…
@mike_conley 1) At that time, Mozilla was still largely in walls of Netscape, so many reviews/ alignment happened in person vs public docs.
@mike_conley 2) XPCOM ideas were new and many parts of system were straddling C++ objects and Interface models.
@mike_conley 3) XUL was also new and boundaries of what rendering belonged in browser shell vs. general rendering we’re [sic] being defined.
@mike_conley 4) JS access to XPCOM was also new driving rethinking of JS control vs embedding control.
@mike_conley There was a massive unwinding of the monolith and (re)defining of what it meant to build a browser inside a rendered chrome.

It’s cool to hear from the guy who really got the ball rolling here. The web is wonderful!

Finally, one last apology – this is a long-ass blog post. I’ve been working on it off and on for about 3 months, and it’s gotten pretty massive. Strap yourself into whatever chair you’re near, grab a thermos, cancel any appointments, and flip on your answering machine. This is going to be a long ride.

Oh come on, it’s not that bad, right? … right?

OK, let’s get back down to it. Where were we?

2005

A frame spoofing bug

Ah, yes – 2005 had just started. This was just a few weeks after a community-driven effort put a full-page ad for Firefox in the New York Times. Only a month earlier, a New York Times article highlighted Firefox, and how it was starting to eat into Internet Explorer’s market share.

So what was going on in DocShell? Here are the bits I found most interesting. They’re kinda few and far between, since DocShell appears to have stabilized quite a bit by now. Mostly tiny bugfixes are landed, with the occasional interesting blip showing up. I guess this is a sign of a “mature” section of the codebase.

I found this commit on January 11th, 2005 pretty interesting. This patch fixes bug 103638 (and bug 273699 while it’s at it). What was going on was that if you had two Firefox windows open, both with <frameset>’s, where two <frames> had the same name attribute, it was possible for links clicked in one to sometimes open in the other. Youch! That’s a pretty serious security vulnerability. jst’s patch added a bunch of checks and smarter selection for link targets.

One of those new checks involved adding a new static function called CanAccessItem to nsDocShell.cpp, and having FindItemWithName (an nsDocShell instance method used to find some child nsIDocShellTreeItem with a particular name) take a new “original requestor” parameter, so that whichever nsIDocShellTreeItem we eventually land on with the requested name must pass the CanAccessItem test against that original requestor.

DocShell and session history

There are two commits, one on January 20th, 2005, and one on January 30th, 2005, both of which fix different bugs, but are interrelated and I want to talk about them for a second.

The first commit, for bug 277224, fixes a problem where changing the location from within a <script> tag to an anchor inside the current document would stop the page content from loading, because the browser thought we were about to start loading a document at a different location. bz fixed the more common case of location change via setting document.location.href in bug 233963. Bug 277224 is about the case where the location is modified with the .replace() method.

The solution that bz uses is to add new flags for nsIDocShellLoadInfo, which gives more power in how to stop loading a page. Specifically, it adds a LOAD_FLAGS_STOP_CONTENT flag which allows the caller to stop the rendering of content and all network activity (where the default was just to stop network activity). I believe what happens is that replace() causes an InternalLoad to kick off, and we need content rendering to be stopped in order for this new load to take over properly. That’s my reading on the situation, anyhow. If bz or anybody else examining that patch has another interpretation, please let me know!
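
To make that concrete, here’s the web-facing difference between the two ways of changing the location (an illustrative sketch):

// Both of these navigate to the same place, but they differ in session
// history: href pushes a new entry, while replace() swaps out the current
// one, so the Back button skips over it.
document.location.href = "#details";
document.location.replace("#details");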

So what about the commit on January 30th? Well that one also involves anchors. What was happening was that if we browsed to some page, and then clicked a link that scrolled us to an anchor in that page, clicking back would reload the entire document off the cache again, when we really just need to restore the old scroll position.

The patch to fix this basically detected the case where we were going back from an anchor to a non-anchor but had the same URL, and allowed a scroll in that case.

So how is this related to the commit for bug 277224? Well, what it shows is that at this time, DocShell was responsible for not just knowing how to load a document and subdocuments, but also about the user’s state in that document – specifically, their scroll position. It also more firmly establishes the link between DocShell and Session History – as DocShell traverses pages, it communicates with Session History to let it know about those transitions, and refers to it when traveling backwards and forwards, and when restoring state for those session history entries.

I just thought that was kinda neat to know.

Window pains

On February 8th, 2005, danm landed a patch to fix bug 278143, which was a bug that caused windows opened with window.open to open in a new window if they had no target specified. This wouldn’t normally be a problem, except that this could override a user preference to open those new windows in new tabs instead. So that was bad.

This was simply a matter of adding a check for the null case for the target window name in nsWindowWatcher. No big deal.

The reason I bring this code up, is because I find it familiar – I brushed by it somewhat when I was working on making it possible to open new windows for multi-process Firefox.

Semi-related (because of the “popup” nature of things) is a commit on February 23rd, 2005. This one is for bug 277574, which makes it so that modal HTTP auth prompts focus the tabs that spawn them. This patch works by making sure HTTP auth prompts fire the same DOMWillOpenModalDialog and DOMModalDialogClosed events that tabbrowser listens for to focus tabs.

The copy and the cache

On March 11, 2005, NeilAway landed a commit to add the “Copy Image” command item to the context menu. This was for bug 135300.

What’s interesting here is that “Copy Image Location” was already in the context menu, and in the bug, it looks like there’s some contention over whether or not to keep it. It seems that right around here, the solution they go with is to copy both the image and the image location to the clipboard, and mark each copy with the right “flavours”, so that if you were to paste to a program or field that accepted an “image” flavour, like Photoshop, you’d get the image. If you pasted to a program or field that accepted a “text” flavour, like Notepad, you’d get the image URL.

That’s the solution that was landed, anyhow. Notice that nowadays, Firefox has context menu items that allow users to copy just the image, and just the URL – so at some point, this approach was deemed wanting. I’ll keep my eye out to see if I can find out where that happened, and why. If anybody knows, please comment!

On April 28, 2005, roc landed a commit for bug 240276, which splits up something called “nsGfxScrollFrame” into two things – nsHTMLScrollFrame and nsXULScrollFrame. It seems like, up until this point, layout for both XUL and HTML scrollable frames was handled by the same code. This meant that we were using XUL box-model style layout for HTML, and XUL layout is… well… kind of tricky to work with. This patch helped to further distance our HTML rendering from our XUL rendering. As for how this affected DocShell, the patch removed some scroll calculations from DocShell, where they probably didn’t belong in the first place.

On May 4, 2005, Brian Ryner landed a patch which made it possible to move back and forward across web pages much more quickly. This was for bug 274784, and a key part of a project called “fastback”. When you view a web page, a DocShell is put in charge of requesting network activity to retrieve the document source, and then passing that source onto an appropriate nsIContentViewer. Up until Brian’s patch, it looks like every nsIContentViewer was just getting thrown away after browsing away from a page. His patch made it possible to store a certain number of these nsIContentViewers in the session history of the window, and then retrieve it when we browse back or forward to the associated page. This is a textbook trade-off between speed (the time to instantiate and initialize an nsIContentViewer) and space (stored nsIContentViewers consume memory). And it looks like the trade-off paid-off! We still cache nsIContentViewers to this day. What’s interesting about Brian’s patch is that it exposes an about:config preference1 for setting how many content viewers are allowed to be cached2. As DocShell seems to go hand in hand with session history, it’s not surprising that Brian’s patch touches DocShell code.

about:neterror arrives, Inner and Outer windows appear, and then Session History gets all snuggly with DocShell

On July 14th of 2005, bsmedberg landed a patch to add about:neterror pages, and close a privilege-escalation security vulnerability. Up until this point, network error pages were shown by browsing the DocShell to chrome URLs3, but this allowed certain types of attacks which load iframes resolving to network error pages to potentially gain chrome privileges4.

So instead of going to a chrome URL, the patch causes DocShell to internally load about:neterror5. The great news about this about:neterror page is that it has restricted permissions, so that security hole got plugged.

On July 30th, 2005, jst landed a patch to introduce the notion of inner and outer windows for bug 296639. Inner and outer windows have confused me for a while, but I think I’ve somewhat wrapped my head around the idea. This document helped.

The idea goes something like this:

The thing that is showing you web content can be considered the outer window – so that could be a browser tab, or an iframe, for example. The inner window is the content being displayed – it’s transitory, and goes away as you browse the web via the outer window.

The outer window then has a notion of all of the inner windows it might contain, and the inner window (via Javascript) gets a handle on the outer window via the window global.

So, for example, if you call window.open, the returned value is an outer window. Methods that you call on that outer window are then forwarded to the inner window.
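
A small sketch of what that means from script (the page names here are made up):

// The value returned by window.open is (a proxy to) the *outer* window,
// and it stays valid as the popup navigates:
var handle = window.open("page1.html");
handle.location.href = "page2.html"; // a fresh *inner* window is created
// Property accesses on "handle" get forwarded to whichever inner window
// the outer window currently contains.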

I hope I got that right. I was originally trying to piece together the meaning of all of it by reading this WHATWG spec describing browsing context, and that was pretty slow going. The MDN page seemed much more clear.

Please comment with corrections if I got any of that wrong.

I’m not entirely sure, but based entirely on instinct and experience, I’m inclined to believe there are interesting security effects of this split. It seems to add a bit more of a membrane6 between web content and the physical window.

Anyhow, jst’s change was pretty monumental. It’s for bug 296639 if you want to read up more about it.

A semi-related change was landed on August 12th by mrbkap, where the entire inner window is stashed in the bfcache (as opposed to what we were doing before, which looks like serialization and deserialization of window state). That was for bug 303267, and sounds related to the fast back and forward caching work that Brian Ryner was working on back in May.

On August 18th of 2005, radha landed the first in the series of patches to session history. Unfortunately, the commit message for this patch doesn’t have a bug number, so I had some trouble tracking down what this work is for. I think this work is for bug 230363, and is actually a copy of interfaces from xpfe/components/shistory/public to docshell/shistory/public. Like I mentioned earlier, DocShell and session history are closely linked, so I suppose it makes sense to put the session history code under docshell/. Later that day, another patch copies the nsISHistoryListener interfaces over as well. Finally, a patch landed to build those interfaces from their new locations, and removes xpfe/components/shistory from Makefile.in. The bug for that last change is bug 305090.

Last bit of 2005

On August 22 of 2005, mrbkap landed a patch that changed how content viewer caching worked. There’s a special page in Firefox called about:blank – if you go to that page right now, you’re going to get a blank page. Some people like to set that as their home page or new tab page, as it is (or should be) very lightweight to load. That page is also special because, from what I can tell, when a new tab or window opens, it’s initially pointing at about:blank before it goes to the requested destination. Before this patch, we used to cache that about:blank content viewer in session history. We didn’t put an entry in the back-forward cache for about:blank though7, so that was a useless cache and a waste of memory. mrbkap’s patch made DocShell check to see if the page it was traveling to was going to re-use the current inner window, and if so, it’d skip caching it. Memory win!

That was the last thing I found interesting in 2005. On to 2006!

Preferences and threads…

On February 7th, 2006, bz landed a patch that made it possible for embedders to override where popup windows get opened.

There are preferences in Firefox that allow you to tweak how web content is able to open new windows8. Those preferences are browser.link.open_newwindow and browser.link.open_newwindow.restriction. If a page is attempting to open a new window, these preferences allow a user to control what actually occurs. The default (in most cases) is to open a new tab instead – but these preferences allow you to open that new window, or to open the content in the same window that the link is executed in. These are the kind of tweaks that power-users love.
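
For the curious, here’s roughly how those tweaks look in a user.js file. The value meanings below are my reading of the prefs, so double-check them before relying on this:

// 1 = open the load in the current tab/window, 2 = allow the new window,
// 3 = divert the load into a new tab instead.
user_pref("browser.link.open_newwindow", 3);
// 0 = apply the setting above to every window.open() call, 1 = never
// apply it, 2 = apply it unless the call asks for specific window features.
user_pref("browser.link.open_newwindow.restriction", 2);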

Up until this point, only Firefox had these tweaking capabilities. bz’s patch moved that tweaking logic “up the chain”, so to speak, which means that applications that embedded Gecko could also be tweaked like this. Pretty handy.

For the Gecko hackers reading this, this patch also introduced the nsIWindowProvider interface9.

On May 10th, 2006, “darin” landed a patch for bug 326273 that put the nsIThreadManager interface and implementation into the tree. It’s a big commit, and affected many parts of the codebase. nsIThreadManager is, not surprisingly, used to implement multi-threading and thread manipulation in Gecko. From my look at the patch, it looks like it replaces something called nsIEventQueue / nsIEventQueueService. It looks like Gecko already had some facility for multi-threading10, but nsIThreadManager looks like a different model for multi-threading.

For DocShell, this change meant modifying the way that restoring PresShell’s from history would work. Before, DocShell had a RestorePresentationEvent that extended PLEvent, which allowed it to be posted to an nsIEventQueue. Now, instead, we define an inner-class that implements nsRunnable11, and also define a weak pointer to that runnable on a DocShell.

So the way things would go is this: DocShell::RestorePresentation would get called, and this would cancel any pending RestorePresentationRunnable that the DocShell is weak-pointing to. Next, we’d instantiate a new RestorePresentationRunnable that we’d then dispatch to the main thread. This isn’t really different to what we were doing before, but it makes use of the nsIThreadManager and nsRunnable class instead of nsIEventQueue and nsIEventQueueService.

What’s interesting about this patch, DocShell-wise, is that it shows the usage of FavorPerformanceHint, which looks like a way of trading-off UI interactivity with page-to-screen time. Basically, it looks like the FavorPerformanceHint is used when restoring PresShell’s to tell the nsIAppShell, “hey – we want you to favor native events over other events for a small pocket of time so we can get this stuff to the screen ASAP”. If I’m interpreting that right, it’s a tradeoff between total time to execute and responsiveness here. “Do you want it fast, or do you want it smooth?”.

I was probably wrong about the name

In one of my past posts, I made some guesses about why DocShell was called DocShell. I thought:

I think nsDocShell was given the “shell” monicker because it did the job of taking over most of nsWebShell’s duties. However, since nsWebBrowser was now the touch-point between the embedder and embedee… maybe shell makes less sense. I wonder if we missed an opportunity to name nsDocShell something better.

But now that I look at nsIAppShell, and nsIDocShell, and nsIPresShell… I think I’m starting to understand. A while back, when I first started planning these posts, I asked blassey why he thought nsIDocShell was named the way it was, and he said he thought it might be related to the notion of a command shell – like a terminal input. From my understanding, a shell is a command interface with which one can manipulate and control something pretty complex – like the file-system or processes of a computer. I think blassey is right – I think that’s the “Shell” in nsIDocShell. I think the idea is that this interface would be the one to control and manipulate the process of loading and displaying a document. It seems obvious now, but it sure wasn’t when I started looking into this stuff.

DOM Storage (session and global), KungFuDeathGrip, friendlier search…

On May 19th, 2006, jst picked up, finished and landed a patch originally by Enn that implemented DOM Storage for bug 335540.

This patch adds two new methods to nsIDocShell – getSessionStorageForDomain and addSessionStorage. The first method is accessed in a number of cases, but most importantly when some caller reads sessionStorage or globalStorage off of the window object12.
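
In other words, ordinary page script like this minimal sketch ends up routed through the docshell:

// Reading sessionStorage off the window is one of the paths that calls
// into nsIDocShell's getSessionStorageForDomain under the hood:
window.sessionStorage.setItem("draft", "some unsent text");
window.sessionStorage.getItem("draft"); // per-tab, gone when the session ends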

The relationship between nsGlobalWindow and nsDocShell is brought to my attention with this patch. Here’s a fragment from an old chat I had with Ms2ger, smaug and bz, which started when I asked Ms2ger what he’d rename DocShell to.

14:11 (Ms2ger) mconley, I would call it WindowProxy :)
14:12 (smaug) outer window? yes, WindowProxy please
14:12 (mconley) Ms2ger: wait, outer window = docshell currently?
14:12 (khuey) what are we doing with WindowProxy?
14:13 (Ms2ger) mconley, well, no, there’s nsDocShell and nsGlobalWindow (with IsOuterWindow() true)
14:13 (Ms2ger) mconley, those are pretty much isomorphic
14:13 (mconley) I see
14:13 (bz) nsDocShell and outer nsGlobalWindow are in a 1-1 relationship
14:14 (bz) The fact that they are two separate objects is sort of a historical accident that we may want to rectify sometime
14:14 (mconley) this sounds like another post to write – how nsDocShell and nsGlobalWindow are related…

So I think an nsGlobalWindow (instances of which can be either “inner” or “outer”) with IsOuterWindow() returning true works in tandem with nsDocShell to “be” the outer window. That’s really imprecise, hand-wavey language. I’ll probably need to tighten this up in a follow-up post once somebody reads this and gives me better words to describe things13.

On May 24, 2006, smaug landed a patch to fix bug 336978. Bug 336978 was a crash caused by loading the following code in an iframe:

<html>
<head></head>
<body>
  <script>
    window.addEventListener("pagehide", doe, true);
    function doe(e) {
      var x = parent.document.getElementsByTagName('iframe')[0];
      x.parentNode.removeChild(x);
    }
    setTimeout(doe2,500);
    function doe2() {
      window.location = 'about:blank';
    }
  </script>
</body>
</html>

What this code does is wait 500ms, and then change the location to about:blank. Changing the location causes the pagehide event to fire while we’re unloading the original page, and when we hear it, we reach into the parent document and remove the very iframe the script is running in.

smaug’s solution to this bug is for nsDocShell to hold a reference via an nsCOMPtr to the nsIContentViewer for the document while the pagehide event is fired. This ensures that the nsIContentViewer doesn’t get destructed before we’re truly done with it. The name we give this nsCOMPtr is “kungFuDeathGrip”. This isn’t the only place where a hold on an object is maintained with a variable called kungFuDeathGrip – check out dxr for some more uses.

I’d seen kungFuDeathGrip over the years, and I never looked closely at what it was doing. I always thought kungFuDeathGrip was some magical global function that destroyed things unequivocally, but on closer inspection, I’m pretty sure it’s really just a way of saying “this variable’s sole purpose is to hold a reference to this thing until I’m done with it.”

I think the phrase “kung fu” distracted me. I thought it did this:

Black Dynamite layin' the smack down.

Woooooo!

But it’s really more like this:

Spock taking out Kirk with the Vulcan nerve pinch thing.

Kkkg….*gurgle*…ngahh….

On June 15th, 2006, “brettw” landed a patch for bug 245597 to make it so that anything that gets put into the AwesomeBar that isn’t parse-able as a URI automatically turns into a keyword search. That’s great! This made both the search input and the AwesomeBar useful for more users. This change occurred in docshell/base/nsDefaultURIFixup.cpp, which is, as I understand it, the central location for code that turns erroneous URIs into what the user probably intended.

nsIMutationObserver, some new about: pages…

On July 2nd, 2006, Jonas Sicking added nsIMutationObserver to the tree for bug 342062, making it possible to observe changes to the DOM within a subtree. It’s a pretty big patch, but it looks like a good chunk of it is just swapping in usage of nsIMutationObserver to replace old usage of nsIDocumentObserver (which supplied the same observations, but for an entire document instead of a subtree). Note that it’d still be a few years before DOM3 Mutation Events would be exposed for web developers to use, and after a few more years, those events were deprecated in favour of the Mutation Observer API.
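
For comparison, here’s a minimal sketch of the web-facing descendant of this idea, the modern Mutation Observer API (which, as noted, arrived much later):

// Watch a subtree for attribute changes, reporting old and new values.
var observer = new MutationObserver(function(mutations) {
  mutations.forEach(function(m) {
    console.log(m.attributeName, "was:", m.oldValue, "now:",
                m.target.getAttribute(m.attributeName));
  });
});
observer.observe(document.body, {
  attributes: true,
  attributeOldValue: true,
  subtree: true
});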

On September 15th, 2006, several new Gecko-wide about: pages landed, which means they got put into the redirection map in docshell/base/nsAboutRedirector.cpp. These pages were about:buildconfig (bug 140034), about:about (bug 56061), and about:license (bug 256945). That same day, bz landed a patch to make it so that new about: pages didn’t have to have special rules hardcoded into nsScriptSecurityManager::CanExecuteScripts to execute script even if the user has script disabled. Instead, the nsAboutRedirector mapping was extended to allow a boolean for indicating that an about: page required script execution.

Simplifying DocShell, and then some spoofing and malware protection

The next interesting thing (according to me, anyhow) didn’t occur until May 6th of the following year, 2007. That day, bz landed a patch for bug 377303 which simplified the structure of things inside the DocShell tree.

Up until that point, both nsIDocShellTreeItem and nsIDocShellTreeNode had existed as interfaces for interacting with nodes within a DocShell tree. I’ll quote myself from my previous post:

The (somewhat nebulous) distinction of DocShell “treeItems” and “treeNodes” is made. At this point, the difference between the two is that nsIDocShellTreeItem must be implemented by anything that wishes to be a leaf or middle node of the DocShell tree. The interface itself provides accessors to various attributes on the tree item. nsIDocShellTreeNode, on the other hand, is for manipulating one of these items in the tree – for example, finding, adding or removing children. I’m not entirely sure this distinction is useful, but there you have it.

It looks like enough was finally enough. bz didn’t go so far as to fully merge the two interfaces (though he makes a note in his patch about doing so), but instead made the (arguably more complex) nsIDocShellTreeItem interface inherit from nsIDocShellTreeNode14.

Later, on May 17th, 2007, Mats Palmgren landed a patch for bug 376562 to remove a childOffset attribute from nsIDocShellTreeItem, and instead move a setter for the childOffset to the nsIDocShell interface instead.

Reading this bug comment as well as one of Mats’ comments in the patch, it sounds as if childOffset never really worked as advertised, and was a bit of a foot-gun.15

On June 14th, 2007, bz landed a patch for bug 371360, which prevents onUnload handlers from starting any page loads. Before this, it seems that it was possible for a page to do something like this:

<html>
  <body onunload="location.href = 'http://www.somesite.com';">
    <a href="http://slashdot.org/">http://slashdot.org/</a>
  </body>
</html>

With the result being that you could (potentially) phish a user. For example, suppose you’re a member of MySafeBank, which has a site at mysafebank.com. Suppose you’re at my seemingly innocent site totallyevil.com, and also suppose that I’ve registered a domain at rnysafebank.com (that’s an r and an n, which, if you’re not paying attention, look pretty close to a m). If you’re at my site, and I notice that you’re trying to head to mysafebank.com, I could redirect you to rnysafebank.com, which has a very similar user interface and favicon. Yadda yadda yadda, your bank info is now mine.

So bz stopped that one in its tracks by just preventing a DocShell from attempting any kind of load if we’re in the middle of an unload.

On July 3rd, 2007, johnath16 landed a patch for bug 380932 to add a new mode for about:neterror for pages suspected of serving up malware.

If you haven’t seen that page before, count yourself lucky – you’ve been surfing in safe places! This is what the page looks like (or, used to look like, anyhow):

Minefield reporting a Suspected Attack Site

Old version of Firefox showing a Suspected Attack Site

johnath’s patch allowed an about:neterror page to have a specific CSS class associated with it as part of its URL. This allowed for the dramatic styling in the image above17.

showModalDialog

showModalDialog was a non-standard function that Microsoft introduced in Internet Explorer 418. This allows a web page to create a modal dialog that contains web content. jst landed a patch on July 26th, 2007 to implement it in Firefox as part of bug 194404. This patch made it possible for a DocShell to have a modal dialog be its parent.
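
Usage looked something like the following sketch (the file name and arguments are made up):

// Blocks script in the opener until the dialog window is closed:
var choice = window.showModalDialog("chooser.html", { items: ["a", "b"] });
// Inside chooser.html, script reads window.dialogArguments, sets
// window.returnValue, and calls window.close(); "choice" then receives
// that returnValue here.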

showModalDialog has since been marked as deprecated on MDN, and Google Chrome has announced that it will no longer be supported after May of 2015. Firefox will support it until sometime after Firefox 39 on the release channel19.

about:crashes, Larry, and tab tearing

There’s a long gap in time here where nothing really interesting happens under docshell/.

Finally, on January 24th, 2008, Mossop landed a patch for bug 411490 that exposes about:crashes as a handy way of getting at the list of crash reports that have been collected.

As about:crashes is a Gecko-wide about: page, this meant once again adding an entry to the docshell/base/nsAboutRedirector.cpp map, as had been done with about:buildconfig and about:about.

On April 28th, 2008, gavin landed a patch originally by ehsan that adds a friendlier set of icons for reporting SSL errors for bug 430904.

That icon is Larry. Have you met Larry? This is Larry:

Larry, the SSL dude.

This is Larry.

Larry is the name for a series of icons that were developed to describe how secure your communications are with a particular site. You might recognize him from the airport, because he looks a lot like a customs agent or border patrol.

You can read up on Larry here on johnath’s blog.

On August 7th, 2008, bz landed a patch for bug 113934 to lay the foundation for letting users drag tabs out from a window, or drag tabs between windows. This introduced a new method to nsIFrameLoader, “swapFrameLoaders”. This method is the real key to moving tabs between windows – each <xul:browser> implements nsIFrameLoaderOwner, and the nsIFrameLoader is (yet another) thing that can load web content. This is essentially a brain transplant between two <xul:browser>’s. You can see the real guts of the brain transplant in this method of the patch.
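
From chrome JavaScript, the transplant looked something like the sketch below. I’m hand-waving here – the exact interface dance has shifted over the years, so treat this as illustrative:

// Swap the content of two <xul:browser> elements:
var Ci = Components.interfaces;
var ownerA = browserA.QueryInterface(Ci.nsIFrameLoaderOwner);
var ownerB = browserB.QueryInterface(Ci.nsIFrameLoaderOwner);
ownerA.frameLoader.swapFrameLoaders(ownerB);
// browserA now shows what browserB was showing, and vice versa.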

On January 13th, in a semi-related patch, bz landed code for bug 449780 to flush the bfcache20 when swapping frameloaders. Apparently, we were storing information in the bfcache that was simply incorrect after a frameloader swap. The best way to avoid internal confusion in such a case was to just invalidate the cache.

Big gaps…

Lots of big gaps between the next few changes.

On March 18th, 2009, Honza Bambas landed a patch for bug 422526 to implement window.localStorage. localStorage was a replacement for globalStorage21 that persisted across browser restarts (unlike sessionStorage).

Note that both localStorage and sessionStorage were synchronous storage APIs. It’d take until around June 24th of 2010 before an asynchronous storage mechanism became available.
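
A quick sketch of the difference from page script:

// Same synchronous API, different lifetimes:
localStorage.setItem("theme", "dark");     // per-origin, survives restarts
sessionStorage.setItem("wizardStep", "3"); // per-tab, dies with the session
localStorage.getItem("theme");             // "dark", even after a restart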

On May 7th, 2009, bz landed a patch for bug 490957 to finally get rid of nsWebShell.cpp. If you recall from my earlier blog post, that was one of Travis Bogard’s goals at the start of this whole adventure. bz’s patch essentially folds the functionality of nsWebShell into nsDocShell. The webshell/ folder remained, but just contained interfaces.

Curiously, a good chunk of nsWebShell’s functionality seemed to revolve around anchor pings, a massively unpopular “feature” that allows a website to get your browser to send a request every time you click on a link. Thankfully, this “feature” is disabled by default in Firefox22. Here’s Jorge Villalobos’s post on anchor pings. Correction (Jan 4th, 2015) - I’ve since changed my tune about anchor pings. See these three comments.

On June 29th, 2009, dbolter landed a patch for bug 467144 so that nsIMutationObservers, when they observe an attribute being changed, also get the old value of the attribute as well as the new one. To be specific, it adds an “AttributeWillChange” callback that includes the old value, and fires before the “AttributeChanged” callback.

A day later, bsmedberg landed a massive patch to implement remote tabs. There’s no bug number in the commit message, but this is clearly part of the Electrolysis efforts that were just starting up around this time. Remote tabs means browsers that run in different processes, which is the overall goal of Electrolysis, and (possibly unbeknownst to bsmedberg at the time) a foundational piece for Fennec (Firefox for Android)23.

On October 3rd, 2009, vvuk landed about:memory for bug 515354, a key piece of the war against high-memory consumption in Firefox (a.k.a. MemShrink). This is very similar to the about:crashes page that Mossop landed back in 2008.

And finally…

On January 7th, 2010, smaug landed a patch for bug 534226 to remove support for multiple PresShell’s. PresShell stands for “Presentation Shell”, and as I understand it, is the primary interface to the “frame tree”24.

It looks like, up until this point, Gecko had the ability to have multiple frame trees per content tree. I’m not entirely sure what the point of that was, but the capability was there. smaug’s patch simplifies everything by making sure a document has only a single, primary PresShell. This removes a lot of iteration and management code for those multiple PresShell’s, which is nice.

And last, but not least, on June 30th, 2010, Benjamin Stover landed a patch for bug 556400 which made it so that visits to webpages are recorded asynchronously in Places. It looks like this patch takes I/O off the main-thread, so it gets a big thumbs-up from me.

Essentially, this patch adds new asynchronous write methods to the History service, and then makes nsDocShell use those methods on webpage visits. nsDocShell falls back to the synchronous methods of nsIGlobalHistory2 if, for some reason, it can’t get at the History service and its asynchronous methods.

Phew!

Did you make it? Are you still with me? I know it might feel like this:

qwop

everyone is a winner

but I think we’re making real progress here. I think we’re learning important stuff about the history of Firefox, and changes that have occurred in some of its core functionality over time.

So to sum up: that, to me, was the most interesting stuff to happen in and around docshell/ from 2005-2010. There might have been other neat stuff in there, but it didn’t catch my eye when I was browsing commit messages.

There’s still much to do – I have to look at commits from 2011 to 2014. After that, I’m planning on doing a line-by-line code review / walkthrough of nsDocShell.cpp, and then I’d like to try to summarize my findings and any recommendations I’ve put together from my time studying this stuff.

Hold tight!


  1. This pref was browser.sessionhistory.max_viewers, if you’re interested – though that preference appears to have been superseded by browser.sessionhistory.max_total_viewers. The default value for that pref is -1, meaning that the number of allowed cached viewers is adjusted based on how much memory is available. If you’re looking to reduce how much memory Firefox consumes, it’s possible that setting this to some low integer will let you reverse that trade-off between space and speed. 

  2. I assume per session history 

  3. You wouldn’t notice that you were at a chrome URL though, because DocShell loads this URL internally, while pretending to be at the URL that caused the error. The end result is the user going to http://www.sitethatcausesnetworkerror.com still sees that URL in their AwesomeBar, despite the fact that their web content shows the appropriate network error page hosted at a chrome URL. 

  4. “chrome privileges” means that a web page now essentially has the same permissions that Firefox, the program on your computer, has – meaning it can potentially read and write files, and communicate with anybody on your network. Yikes! 

  5. You can visit this page in Firefox right now and see a generic network error. It’s showing a generic error because it hasn’t the foggiest idea how you’ve arrived at about:neterror, since it wasn’t passed any error information. 

  6. Or the infrastructure to create such a membrane. 

  7. So you couldn’t go back to about:blank in the cases I described, where a tab or window was initialized at about:blank before going to a new page. 

  8. I’m actually quite familiar with this stuff because I worked on opening new windows for Electrolysis not too long ago. 

  9. From the header of that interface:

    /**
     * The nsIWindowProvider interface exists so that the window watcher's default
     * behavior of opening a new window can be easly modified.  When the window
     * watcher needs to open a new window, it will first check with the
     * nsIWindowProvider it gets from the parent window.  If there is no provider
     * or the provider does not provide a window, the window watcher will proceed
     * to actually open a new window.
     */

     

  10. The nsIEventQueueService service mentions that it is used to manage event queues for a particular thread, and makes use of nsIThread – so multi-threading must have already been a thing. 

  11. Still called RestorePresentationEvent though… strange that the opportunity wasn’t taken to rename this to RestorePresentationRunnable. 

  12. Those two properties are part of a new nsIDOMStorageWindow interface that nsGlobalWindow implements after this patch. That interface is later removed in bug 670331, and the two accessors are moved directly into nsIDOMWindow instead. 

  13. I have a feeling the real answer lies somewhere in the comments in bug 296639

  14. It wasn’t immediately clear to me why the inheritance didn’t go the other way around – especially since bz himself had a comment in nsIDocShellTreeNode suggesting that arrangement. Look at his first comment in the bug though:

    This would allow consumers to start using just nsIDocShellTreeItem in their code, until we can just merge nsIDocShellTreeNode into nsIDocShellTreeItem.

    Basically, it sounds like nsIDocShellTreeNode was being deprecated, and that callers who used to use nsIDocShellTreeNode should migrate to use nsIDocShellTreeItem instead (which inherits nsIDocShellTreeNode’s methods). Then the two interfaces could be merged. 

  15. Mats’ warning was removed on August 17, 2010 as part of bug 462076. It looks like SetChildOffset is still only ever used when adding the child though, so it’s still probably valid. 

  16. The same johnath who is currently the VP of Firefox! 

  17. Later on in August, dcamp would land this patch as part of bug 384941 which prevents suspected malware sites from even loading, instead of just not displaying them. 

  18. To quote Douglas Adams:

    This has made a lot of people very angry and has been widely regarded as a bad move.

     

  19. And the 38 ESR will continue to be supported until mid-2016. If you maintain a site that uses showModalDialog, you’d best get rid of it. 

  20. The bfcache, or “back-forward cache” is a collection of “frozen” pages that are stored in memory for fast back/forward action – see this page for more detail. 

  21. globalStorage, I believe, allowed all web properties read and write access to the same storage – so clearly it was a good idea to replace it. globalStorage was removed on October 9th, 2011 by Honza as part of bug 687579

  22. But according to this, it is enabled by default in both Chrome and Opera. Lovely.

  23. Remote tabs are also hugely important for Boot2Gecko / Firefox OS

  24. From this document:

    …the frame tree…is the visual representation of the document. Each frame can be thought of as a rectangular area on the page. The content nodes for XML elements are usually associated with one or more frames which display the element — one frame if the element is rectangular, more if the element is something more complex (like a chunk of bolded text that happens to be word-wrapped)…

    And from the nsIPresShell.h header:

    /**
    * Presentation shell interface. Presentation shells are the
    * controlling point for managing the presentation of a document. The
    * presentation shell holds a live reference to the document, the
    * presentation context, the style manager, the style set and the root
    * frame. <p>
    *
    * When this object is Release’d, it will release the document, the
    * presentation context, the style manager, the style set and the root
    * frame.

     

January 03, 2015 08:34 PM

December 21, 2014

Rumbling Edge - Thunderbird

2014-12-21 Calendar builds

Common (excluding Website bugs)-specific: (37)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

December 21, 2014 02:10 PM

2014-12-21 Thunderbird comm-central builds

Thunderbird-specific: (43)

MailNews Core-specific: (28)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

December 21, 2014 02:08 PM

November 25, 2014

Thunderbird Blog

Thunderbird Reorganizes at 2014 Toronto Summit

In October 2014, 22 active contributors to Thunderbird gathered at the Mozilla office in Toronto to discuss the status of Thunderbird, and plan for the future.

Toronto Contributors at 2014 Toronto Summit

Thunderbird contributors gather in Toronto to plan the future.

As background, Mitchell Baker, Chair of the Mozilla Foundation, posted in July 2012 that Mozilla would significantly reduce paid staff dedicated to Thunderbird, and asked community volunteers to move Thunderbird forward. Mozilla at that time committed several paid staff to maintain Thunderbird, each working part-time on Thunderbird but with a main commitment to other Mozilla projects. The staff commitment in total was approximately one full-time equivalent.

Over the last two years, those individuals had slowly reduced their commitment to Thunderbird, yet the formal leadership of Thunderbird remained with these staff. By 2014 Thunderbird had reached the point where nobody was effectively in charge, and it was difficult to make important decisions. By gathering the key active contributors in one place, we were able to make real decisions, plan our future governance, and move to complete the transition from being staff-led to community-led.

At the Summit, we made a number of key decisions:

There is a lot of new energy in Thunderbird since the Summit; a number of people are stepping forward to take on some critical roles, and we are looking forward to a great next release. More help is always welcome though!

November 25, 2014 06:15 PM

November 19, 2014

Meeting Notes

Thunderbird: 2014-11-18

   Thunderbird meeting notes 2014-11-18

Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

(partial list)
rkent
florian
wsmwk
jcranmer
mmelin
aceman
theone
clokep
roland

Action items from last meetings

  • wsmwk: Get the Thunderbird 38 bugzilla flag created
    • not heard from standard8

Critical Issues

  • Several critical bugs keeping us from moving from 24 to 31. Please leave these here until they’re confirmed fixed.
    • Frequent POP3 crasher: bug 902158. On current aurora and beta?
      • we won’t have data/insight till next week, assuming the relevant builds are built. crash-stats for nightly builds is not useful – direct user testing of the fixed build is required
    • Self-signed certs are having difficulty: bug 1036338. SOLVED! REOPENED according to the bug?
    • Auto-complete bugs? bug 1045753 waiting for ESR approval; bug 1043310 still waiting for review
    • Auto-complete improvements (bug 970456, bug 1091675, bug 118624) – some of those could go into esr31
    • bug 1045958 – TB 31 jp does not display folder pane with OS X

Why are we throttled? Because 1) we are waiting for TB 31.3, 2) we are still hoping for bug 1045958, and 3) we need the auto-complete bug approved. Dec 1/2 is now the release date.

Upcoming

Round Table

wsmwk

  • got Penelope (pre-OSE eudora) removed from AMO
  • shepherding bug 1045958 – TB 31 jp does not display folder pane with OS X
  • secured potential release drivers
  • “Get Involved” is broken for Thunderbird. TB isn’t offered at https://www.mozilla.org/en-US/contribute/signup/, and it’s unclear who in Thunderbird gets notified. In contact with Larissa Shapiro.

jcranmer

  • Looking into eliminating the comm-central build system bug 1099430
  • Trying to see if I can get some whines set up to listen for potential TB compat changes (e.g., checking the addon-compat keyword)
  • We have telemetry on Nightly for the first time since TB 28!
  • Irving is working through reviews of bug 998191 \o/

clokep

  • Google Talk does not work on comm-central due to disabling of RC4 SSL ciphers, see bug 1092701
    • Some of the security guys have contacted Googlers, apparently an internal ticket was opened with the XMPP team
  • Filed a bug (with a patch) to have firefoxtree work with comm-central (bug 1100692); this will automatically add a tag (called “comm”) to the tip of comm-central, and is useful if you’re playing with bookmarks instead of mq
  • Still haven’t fully been able to get Additional Chat Protocols to compile…(Windows still failing)
  • WebRTC work is waiting for reviews from Florian

mkmelin

  • lots of reviews, queue almost empty, yay!
  • finally landed bug 970456
  • bug 1074125 fixed, plan to handle some of the m-c encoding removals next
  • bug 1074793 fixed; for TB we need to set a pref for it to take effect (bug 1095893 awaiting review)

aceman

  • revived effort on 4GB+ folders together with rkent
  • landed a rewrite of the attachment reminder engine: bug 938829 (NOT for ESR). Please mark regressions against that bug. First one is bug 1099866.

Support team

  • [roland] working on Thunderbird profile article because a new volunteer contributor from the SUMO buddy program rewrote it! will review changes!

Action Items

  • wsmwk: Thunderbird start page for anniversary, with localizations
  • wsmwk: Get Involved, get the “Thunderbird path” operating

November 19, 2014 04:00 AM

November 16, 2014

Rumbling Edge - Thunderbird

2014-11-16 Calendar builds

Common (excluding Website bugs)-specific: (11)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

November 16, 2014 05:20 PM

2014-11-16 Thunderbird comm-central builds

Thunderbird-specific: (58)

MailNews Core-specific: (37)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

November 16, 2014 05:18 PM

October 19, 2014

Rumbling Edge - Thunderbird

2014-10-17 Calendar builds

Common (excluding Website bugs)-specific: (2)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

October 19, 2014 09:34 AM

2014-10-17 Thunderbird comm-central builds

Thunderbird-specific: (21)

MailNews Core-specific: (11)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

October 19, 2014 09:33 AM

October 15, 2014

Kent James

Thunderbird Summit in Toronto to Plan a Viable Future

On Wednesday, October 15 through Saturday, October 19, 2014, the Thunderbird core contributors (about 20 people in total) are gathering at the Mozilla offices in Toronto, Ontario for a key summit to plan a viable future for Thunderbird. The first two days are project work days, but on Friday, October 18 we will be meeting all day as a group to discuss how we can overcome various obstacles that threaten the continuing viability of Thunderbird as a project. This is an open Summit for all interested parties. Remote participation or viewing of Friday group sessions is possible, beginning at 9:30 AM EDT (6:30 AM Pacific Daylight Time)  using the same channels as the regular weekly Thunderbird status meetings.

Video Instructions: See https://wiki.mozilla.org/Thunderbird/StatusMeetings for details.

Overall Summit Description and Agenda: See https://wiki.mozilla.org/Thunderbird:Summit_2014

Feel free to join in if you are interested in the future of Thunderbird.

October 15, 2014 04:17 AM

October 02, 2014

Philipp Kewisch

Monitor all http(s) network requests using the Mozilla Platform

In an xpcshell test, I recently needed a way to monitor all network requests and access both request and response data so I could save them for later use. This required a little bit of digging in Mozilla’s devtools code, so I thought I’d write a short blog post about it.

This code will be used in a testcase that ensures that calendar providers in Lightning function properly. In the case of the CalDAV provider, we would need to access a real server for testing. We can’t just set up a few servers and use them for testing; it would end in an unreasonable amount of server maintenance. Given that non-local connections are not allowed when running the tests on the Mozilla build infrastructure, it wouldn’t work anyway. The solution is to create a fakeserver that is able to replay the requests in the same way. Instead of manually making the requests and figuring out how the server replies, we can use this code to quickly collect all the requests we need.

Without further delay, here is the code you have been waiting for:
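
As a rough sketch of the shape such monitoring code takes, here is how one might log all HTTP(S) traffic using the platform’s observer notifications. This is only an illustration of the general idea – the actual snippet digs into the devtools network monitor instead:

// Log every HTTP(S) request and response (chrome/xpcshell privileges assumed).
var Cc = Components.classes, Ci = Components.interfaces;
var obs = Cc["@mozilla.org/observer-service;1"]
            .getService(Ci.nsIObserverService);

var httpListener = {
  observe: function(subject, topic) {
    var channel = subject.QueryInterface(Ci.nsIHttpChannel);
    if (topic == "http-on-modify-request") {
      dump("REQUEST: " + channel.requestMethod + " " + channel.URI.spec + "\n");
    } else if (topic == "http-on-examine-response") {
      dump("RESPONSE: " + channel.responseStatus + " " + channel.URI.spec + "\n");
    }
  }
};
obs.addObserver(httpListener, "http-on-modify-request", false);
obs.addObserver(httpListener, "http-on-examine-response", false);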


Tagged: Mozilla, network, xpcshell

October 02, 2014 02:38 PM

October 01, 2014

Calendar

Calconnect XXXI Interop Testing

Thanks to the wonderful folks at Linagora, I was able to spend the last three days at the Calconnect XXXI meeting in Bedford, England. The goal of this meeting is to get server and client vendors together in a room both for ad-hoc testing and discussions on calendaring standards.

Before arriving, I set myself the goal of going through a big list of bugs that had been sitting around in our CalDAV component to see if they could be resolved. It turns out that I was able to close 48 of the 76 bugs I had picked out:

report-2014-10-01

A good number of the bugs I resolved had been sitting and waiting for one of our contributors to reproduce them with a specific server. This is often a problem because setting up and configuring the servers is time consuming. The great thing about being here at Calconnect is having a testing instance of most of the reported servers readily set up. Not only that, but engineers from the respective servers are sitting together at a table and can answer any questions that may arise, or comment on potential bugs that have been fixed in later versions.

The other category of bugs consists of support issues, duplicates, and bugs that haven’t received an answer from the reporter. These could have been handled outside of Calconnect, but it’s still a good opportunity to take the time to deal with them.

Eight of the remaining bugs already have a patch attached, four of them were created while I was here. There is also a new feature coming up that makes it easy to share calendars with other users directly from Lightning. This requires the server to support caldav-sharing, for example the Apple Calendar and Contacts Server and fruux.com.

Screenshot 2014-10-01 11.02.11
Note: The ugly add button will be replaced by an icon. The email addresses are editable.

 

In the next few days we will be moving on to the standards discussions. I am actively involved as the chair of TC-API, a technical committee dedicated to producing an abstract calendaring model that ensures that vendors integrating calendaring into their products are aware of the implications of calendaring and scheduling, hopefully resulting in better interoperability in the future. Another goal we have is to find a common understanding for a REST API that is geared towards webpages, which may become a standards document some day.

October 01, 2014 10:13 AM

September 29, 2014

Ludovic Hirlimann

Tips on organizing a pgp key signing party

Over the years I’ve organized or tried to organize pgp key signing parties every time I go somewhere. In the last year I’ve organized 3 that were successful (e.g. with more than 10 attendees).

1. Have a venue

I’ve tried a bunch of times to have people show up in the morning at the hotel I was staying at – that doesn’t work. Having catering at the venue is even better; it will encourage people to come from far away (or commute a long distance). Try to mark the path inside the venue with signs (paper signs reading “PGP key signing party”, with arrows, help).

2. Date and time

Meeting in the evening after work is best (starting after 18:00 or 18:30).

Let people know how long it will take (count 1 hour per 30 participants).

3. Make people sign up

That makes people think twice before saying they will attend. It’s also an easy way for you to know how much beer/cola/etc. you’ll need to provide if you cater food.

I’ve been using Eventbrite to manage attendance at my last three meetings; it lets me:

4. Reach out

For such a party you need people to attend, so you need to reach out.

I always start with a search on biglumber.com to find the people using gpg who are registered on that site in the area I’m visiting (see below for what I send).

Then I look for local Linux user groups / *BSD groups and send an announcement to them with:

For my last announcement it looked like this:

Subject: GnuPG / PGP key signing party September 26 2014
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello my name is ludovic,

I'm a sysadmins at mozilla working remote from europe. I've been
involved with Thunderbird a lot (and still am). I'm organizing a pgp Key
signing party in the Mozilla san francisco office on September the 26th
2014 from 6PM to 8PM.

For security and assurances reasons I need to count how many people will
attend. I'v setup a eventbrite for that at
https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-o=
f-trust-stronger-tickets-12867542165
(please take one ticket if you think about attending - If you change you
mind cancel so more people can come).

I will use the eventbrite tool to send reminders and I will try to make
a list with keys and fingerprint before the event to make things more
manageable (but I don't promise).

for those using lanyrd you will be able to use http://lanyrd.com/ccckzw.

Ludovic
ps sent to buug.org,nblug.org end penlug.org - please feel free to post
where appropriate ( the more the meerier, the stronger the web of trust).=

ps2 I have contacted people listed on biglumber to have more gpg related
people show up.

--=20
[:Usul] MOC Team at Mozilla
QA Lead fof Thunderbird
http://sietch-tabr.tumblr.com/ - http://weusepgp.info/

5. Make it easy to attend

As noted above, making a list of participants to hand out helps a lot (I’ve used http://www.phildev.net/pius/ and my own stuff to make a list). It makes things easier for you and for attendees. Tell people what they need to bring (IDs, a pen, and printed fingerprints if you don’t provide a list).

6. Send reminders

Send people reminders and let them know how many people intend to show up. It boosts turnout.

September 29, 2014 11:03 AM

September 25, 2014

Rumbling Edge - Thunderbird

2014-09-22 Calendar builds

Common (excluding Website bugs)-specific: (19)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

September 25, 2014 04:31 AM

2014-09-22 Thunderbird comm-central builds

Thunderbird-specific: (31)

MailNews Core-specific: (35)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

September 25, 2014 04:30 AM

September 22, 2014

Meeting Notes

Thunderbird: 2014-09-11

Thunderbird meeting notes 2014-09-11

Attendees

  • rkent, florian queze, irving, rolandtanglao, magnus

Action items from last meetings

  • rkent and wsmwk to unblock / deal with bienvenu bugs

Current status and discussions

  • no progress on “bienvenu” bugs
  • comm-central changes on hold pending mercurial changes and signoff by smedberg
  • discussion about kent’s modification to Thunderbird governance proposal is positive
  • much work on summit and video from mconley and rkent
  • summit agenda planning, rkent to ask in tb-planning

Round Table

jcranmer

  • Likely won’t be at meeting today, one of my meetings changed time slots to start 30m before this one
  • according to gps, hg partial checkouts are targeted to land in 3.2, but more likely to land in 3.3 (release dates of Nov 1/Feb 1, respectively)
    • bsmedberg has said this is his blocker for letting c-c merge into m-c
    • gps plans to make it a high priority to update to newer hg quickly, and now has a new role to make that more possible
  • Played with putting OpenLDAP server in a Docker container

JosiahOne (Not at meeting)

  • Have done reviews and have been giving feedback in bugs, but personal development time has slowed. My development machine has to have a part replaced on Saturday plus I have papers and college applications to finish this month, so availability will be limited until around the time of the Summit.
  • OS X Codesign V2 status is coming along, I’m mostly just waiting for a finished version of the Fx implementation.

rkent

  • Summit: we really need a group of people to work on the agenda. Volunteers?
  • Discussion of reorg plan?

mconley

  • Now acting as interface between TB community and ProTravel Inc
    • The attendees list has been garnered from the wiki, along with 2 extras from Fallen and Florian for Calendar / Chat. Waiting to hear from a few more.
  • Re-connected with Aaron Mandel about a fundraising video. Quote is approximately $600 (+tax), since he likes Mozilla and wants to give us a deal
    • We need to find a good variety (accents, countries of origin, languages) of members of the TB community who are comfortable / articulate in front of the camera (about 5-7 people), and come up with “our story”. I suggested our audience be current TB users who don’t actually know what’s happening with Thunderbird.
    • Quickly talk about where Thunderbird came from, the transition to community development, and where we are now – and why we need help. Instead of just talking heads, these interviews will be interspersed with shots of people checking their email, doing calendaring, chatting, etc.
    • We need the quick and punchy stuff: “Put users in control of their email”, “Your email is yours, even when you’re offline”, “Lots of tweaks and add-ons for power users”
    • We also need to send Aaron TB art / assets for graphical work.
    • Once we have our story, we’ll put together a really basic script, so we know who to put in front of the camera, and what questions to ask.
    • Then, I’ll meet with Aaron face-to-face 2 weeks before we shoot to make sure we have everything we need
    • During the summit (probably the Friday), we’ll pull our 5-7 people aside, ask them the questions we’ve scripted (maybe several times to get the sound bites we want), and then Aaron will cut together the video.

clokep (Not attending)

  • Been in contact with some of the DarkMail developers, have been trying to get them to ask questions in #maildev, but they seem to like to ask through me.
    • They’ve said things like “Random people don’t get answers there”
    • More likely they’re working during times when “we’re” not online.

Support team

Action Items

  • mconley: Come up with a short-list of volunteers to go on camera, send out emails with questions on them to get responses, find the responses that resonate, and from that, assemble our script.

September 22, 2014 08:19 PM

September 17, 2014

Ludovic Hirlimann

GnuPG / PGP key signing party in Mozilla's San Francisco space

I’m organizing a PGP keysigning party in the Mozilla San Francisco office on September 26th, 2014, from 6PM to 8PM.

For security and assurance reasons I need to count how many people will attend. I’ve set up an Eventbrite page for that at https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-of-trust-stronger-tickets-12867542165 (please take a ticket if you’re thinking about attending - if you change your mind, cancel so more people can come).

I will use the Eventbrite tool to send reminders, and I will try to make a list with keys and fingerprints before the event to make things more manageable (but I don’t promise).

For those using Lanyrd, you will be able to use http://lanyrd.com/ccckzw. (Please tweet the event to get more people in.)

September 17, 2014 12:35 AM

August 22, 2014

Robert Kaiser

Mirror, Mirror: Trek Convention and FLOSS Conferences

It's been a while since I did any blogging, but that doesn't mean I haven't been doing anything - on the contrary, I have been too busy to blog, basically. We had a few Firefox releases where I scrambled until the last day of the beta phase to make sure we keep our crash rates as low as our users probably expect by now, I did some prototyping work on QA dashboards (with already-helpful results and more to come) and helped in other process improvements on the Firefox Quality team, worked with different teams to improve stability of our blocklist ping "ADI" data, and finally even was at a QA work week and a vacation in the US. So plenty of stuff done, and I hope to get to blog about at least some pieces of that in the next weeks and months.

That said, one major part of my recent vacation was the Star Trek Las Vegas Convention, which I attended for the second time after last year. Ever since then, I've wanted to blog about some interesting parallels I found between that event (I can't compare to other conventions, as I've never been to any of those) and some Free, Libre and Open Source Software (FLOSS) conferences I've been to, most notably FOSDEM, but also the larger Mozilla events.
Of course, there's the big events in the big rooms and the official schedule - on the conferences it's the keynotes and presentations of developers about what's new in their software, what they learned or where we should go, on the convention it's actors and other guests talking about their experiences, what's new in their lives, and entertaining the crowd - both with questions from the audience. Of course, the topics are wildly different. And there's booths at both, also quite a bit different, as it's autograph and sales booths on one side, and mainly info booths on the other, though there are geeky T-shirts sold at both types of events. ;-)

The largest parallels I found, though, are about the mass of people that are there:
For one thing, the "hallway track" of talking to and meeting other attendees is definitely a main attraction and a big piece of the life of the events on both "sides". Old friendships are revived, new ones formed, and the somewhat geeky commonalities are celebrated, leading to tons of fun and involved conversations - not just the old fun bickering between vi and emacs or Kirk and Picard fans (or different desktop environments / different series and movies). :)
For the other, I learned that both types of events are in the end more about the "regular" attendees than the speakers, even if the latter end up being featured at both. Especially the recurring attendees go there because they want to meet and interact with all the other people going there, with the official schedule being the icing on the cake, really. Not that it would be unimportant or unneeded, but it's not as much the main attraction as people on the outside, and possibly even the organizers, might think. Also, going there means that for a few days you don't have to hide your "geekiness" from your surroundings and can actively show and celebrate it. There's also some amount of a "do good" atmosphere in both those communities.
And both events, esp. the Trek and Mozilla ones, tend to have a very inclusive atmosphere of embracing everyone else, no matter their physical appearance, gender or other social attributes. And actually, given how deeply that inclusive spirit was anchored into the Star Trek productions by Gene Roddenberry himself, it might even run deeper in the fans there than it does in the FLOSS world. Notably, I saw far more women and people of color at the Star Trek Convention than I see at FLOSS conferences - my guess is that at least a third of the Trek fans in Las Vegas were female, for example. I guess we need some more role models in the style of Nichelle Nichols and others in the FLOSS scene.

All in all, there are a lot of similarities and still quite some differences, but it's quite a twist on an alternate universe like the one depicted in Mirror, Mirror and other episodes - here it's a different crowd with a similar spirit, rather than the same people with different mindsets and behaviors.
As a very social person, I love attending and immersing myself in both types of events, and I somewhat wonder if and how we should have some more cross-pollination between those communities.
I for sure will be seen on more FLOSS and Mozilla events as well as more Star Trek conventions! :)

August 22, 2014 03:09 PM

August 19, 2014

Rumbling Edge - Thunderbird

2014-08-18 Calendar builds

Common (excluding Website bugs)-specific: (17)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds No binaries since July 23, 2014.

August 19, 2014 05:14 AM

2014-08-18 Thunderbird comm-central builds

Thunderbird-specific: (30)

MailNews Core-specific: (34)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds No binaries since July 23, 2014.

August 19, 2014 05:11 AM

August 10, 2014

Mike Conley

DocShell in a Nutshell – Part 2: The Wonder Years (1999 – 2004)

When I first announced that I was going to be looking at the roots of DocShell, and how it has changed over the years, I thought I was going to be leafing through a lot of old CVS commits in order to see what went on before the switch to Mercurial.

I thought it’d be so, and indeed it was so. And it was painful. Having worked with DVCS’s like Mercurial and Git so much over the past couple of years, my brain was just not prepared to deal with CVS.

My solution? Take the old CVS tree, and attempt to make a Git repo out of it. Git I can handle.

And so I spent a number of hours trying to use cvs2git to convert my rsync’d mirror of the Mozilla CVS repository into something that I could browse with GitX.

“But why isn’t the CVS history in the Mercurial tree?” I hear you ask. And that’s a good question. It might have to do with the fact that converting the CVS history over is bloody hard – or at least that was my experience. cvs2git has the unfortunate habit of analyzing the entire repository / history and spitting out any errors or corruptions it found at the end.1 This is fine for small repositories, but the Mozilla CVS repo (even back in 1999) was quite substantial, and had quite a bit of history.

So my experience became: run cvs2git, wait 25 minutes, glare at an error message about corruption, scour the web for solutions to the issue, make random stabs at a solution, and repeat.

Not the greatest situation. I did what most people in my position would do, and cast my frustration into the cold, unfeeling void that is Twitter.

But, lo and behold, somebody on the other side of the planet was listening. Unfocused informed me that whoever created the gecko-dev Github mirror somehow managed to type in the black-magic incantations that would import all of the old CVS history into the Git mirror. I simply had to clone gecko-dev, and I was golden.

Thanks Unfocused. :)

So I had my tree. I cracked open GitX, put some tea on, and started poring through the commits from the initial creation of the docshell folder (October 15, 1999) to the last change in that folder just before the switch over to 2005 (December 15, 2004)2.

The following are my notes as I peered through those commits.

Artist’s rendering of me reading some old commit messages. I’m not claiming to have magic powers.

“First landing”

That’s the message for the first commit when the docshell/ folder was first created by Travis Bogard. Correction (Jan 3rd, 2015) - this is actually not true at all. The commit I’ve linked to here is the first commit for nsDocShell.cpp, but the docshell/ folder was actually created over a year earlier. It looks like the “docshell” moniker was selected before any of Travis’ refactoring had even begun3.

Without even looking at the code, that’s a pretty strange commit just by the message alone. No bug number, no reviewer, no approval, nothing even approximating a vague description of what was going on.

Leafing through these early commits, I was surprised to find that quite common. In fact, I learned that it was about a year after this work started that code review suddenly became mandatory for commits.

So, for these first bits of code, nobody seems to be reviewing it – or at least, nobody is signing off on it in commit messages.

Like I mentioned, the date for this commit is October 15, 1999. If the timeline in this Wikipedia article about the history of the Mozilla Suite is reliable, that puts us somewhere between Milestone 10 and Milestone 11 of the first 1.0 Mozilla Suite release.4

Travis Bogard

Before we continue, who is this intrepid Travis Bogard who is spearheading this embedding initiative and the DocShell / WebShell rewrite?

At the time, according to his LinkedIn page, he worked for America Online (which at this point in time owned Netscape.5) He’d been working for AOL since 1996, working his way up the ranks from lowly intern all the way to Principal Software Engineer.

Travis was the one who originally wrote the wiki page about how painful it was embedding the web engine, and how unwieldy nsWebShell was.6 It was Travis who led the charge to strip away all of the complexity and mess inside of WebShell, and create smaller, more specialized interfaces for the behind the scenes DocShell class, which would carry out most of the work that WebShell had been doing up until that point.7

So, for these first few months, it was Travis who would be doing most of the work on DocShell.

Parallel development

These first few months, Travis puts the pedal to the metal moving things out of WebShell and into DocShell. Remember – the idea was to have a thin, simple nsWebBrowser that embedders could touch, and a fat, complex DocShell that was privately used within Gecko that was accessible via many small, specialized interfaces.
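To make that split concrete, here is a toy Python sketch of the pattern (entirely hypothetical names – the real code was C++ and XPCOM interfaces, not Python): a thin facade that embedders touch, delegating to a fat internal object that is only reached through small, focused interfaces.

    from abc import ABC, abstractmethod

    class WebNavigation(ABC):
        """Small, specialized interface: navigation only."""
        @abstractmethod
        def load_uri(self, uri: str) -> None: ...

    class Scrollable(ABC):
        """Small, specialized interface: scrolling only."""
        @abstractmethod
        def set_scroll_position(self, x: int, y: int) -> None: ...

    class FatShell(WebNavigation, Scrollable):
        """The fat, private implementation used inside the engine."""
        def load_uri(self, uri: str) -> None:
            print("loading", uri)
        def set_scroll_position(self, x: int, y: int) -> None:
            print("scrolling to", x, y)
        # ...plus editing, printing, history, and so on.

    class ThinBrowser:
        """The thin facade that embedders are allowed to touch."""
        def __init__(self) -> None:
            self._shell = FatShell()
        def load_uri(self, uri: str) -> None:
            self._shell.load_uri(uri)

    ThinBrowser().load_uri("https://example.org")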

Wholesale replacing or refactoring a major part of the engine is no easy task, however – and since WebShell was core to the very function of the browser (and the mail/news client, and a bunch of other things), there were two copies of WebShell made.

The original WebShell existed in webshell/ under the root directory. The second WebShell, the one that would eventually replace it, existed under docshell/base. The one under docshell/base is the one that Travis was stripping down, but nobody was using it until it was stable. They’d continue using the one under webshell/, until they were satisfied with their implementation by both manual and automated testing.

When they were satisfied, they’d branch off of the main development line, and start removing occurrences of WebShell where they didn’t need to be, replacing them with nsWebBrowser or DocShell as appropriate. When that was done, they’d merge into the main line, and celebrate!

At least, that was the plan.

That plan is spelled out here in the Plan of Attack for the redesign. It also sketches out a rough schedule, targeting November 30th, 1999 as the completion point of the project.

This parallel development means that any bugs discovered in WebShell during the redesign need to get fixed in two places – both under webshell/ and docshell/base.

Breaking up is so hard to do

So what was actually changing in the code? In Travis’ first commit, he adds the following interfaces – nsIDocShell, nsIDocShellEdit, nsIDocShellFile, nsIGenericWindow, nsIGlobalHistory, nsIHTMLDocShell and nsIScrollable – along with some build files. Something interesting here is this nsIHTMLDocShell – where it looked like at this point, the plan was to have different DocShell interfaces depending on the type of document being displayed. Later on, we see this idea dissipate.

If DocShell was a person, these are its baby pictures. At this point, nsIDocShell has just two methods – LoadDocument and LoadDocumentVia – and a single nsIDOMDocument attribute for the document.

And here’s the interface for WebShell, which Travis was basing these new interfaces off of. Note that there’s no LoadDocument, or LoadDocumentVia, or an attribute for an nsIDOMDocument. So it seems this wasn’t just a straightforward breakup into smaller interfaces – this was a rewrite, with new interfaces to replace the functionality of the old one.8

This is consistent with the remarks in this wikipage where it was mentioned that the new DocShell interfaces should have an API for the caller to supply a document, instead of a URI – thus taking the responsibility of calling into the networking library away from DocShell and putting it on the caller.

nsIDocShellEdit seems to be a replacement for some of the functions of the old nsIClipboardCommands methods that WebShell relied upon. Specifically, this interface was concerned with cutting, copying and pasting selections within the document. There is also a method for searching. These methods are all just stubbed out, and don’t do much at this point.

nsIDocShellFile seems to be the interface used for printing and saving documents.

nsIGenericWindow (which I believe is the ancestor of nsIBaseWindow), seems to be an interface that some embedding window must implement in order for the new nsWebBrowser / DocShell to be embedded in it. I think. I’m not too clear on this. At the very least, I think it’s supposed to be a generic interface for windows supplied by the underlying operating system.

nsIGlobalHistory is an interface for, well, browsing history. This was before tabs, so we had just a single, linear global history to maintain, and I guess that’s what this interface was for. Correction (Jan 3rd, 2015) - global history is really just a recording of everywhere you’ve ever been, ever. It looks like I was getting confused between global history (all the places I’ve been) and session history (all the places I’ve been this session).

nsIScrollable is an interface for manipulating the scroll position of a document.

So these magnificent seven new interfaces were the first steps in breaking up WebShell… what was next?

Enter the Container

nsIDocShellContainer was created so that the DocShells could be formed into a tree and enumerated, and so that child DocShells could be named. It was introduced in this commit.

about:face

In this commit, only five days after the first landing, Travis appears to reverse the decision to pass the responsibility of loading the document onto the caller of DocShell. LoadDocument and LoadDocumentVia are replaced by LoadURI and LoadURIVia. Steve Clark (aka “buster”) is also added to the authors list of the nsIDocShell interface. It’s not clear to me why this decision was reversed, but if I had to guess, I’d say it proved to be too much of a burden on the callers to load all of the documents themselves. Perhaps they punted on that goal, and decided to tackle it again later (though I will point out that today’s nsIDocShell still has LoadURI defined in it).

First implementor

The first implementation of nsIDocShell showed up on October 25, 1999. It was nsHTMLDocShell, and with the exception of nsIGlobalHistory, it implemented all of the other interfaces that I listed in Travis’ first landing.

The base implementation

On October 25th, the stubs of a DocShell base implementation showed up in the repository. The idea, I would imagine, is that for each of the document types that Gecko can display, we’d have a DocShell implementation, and each of these DocShell implementations would inherit from this DocShell base class, and only override the things that they need specialized for their particular document type.

Later on, when the idea of having specialized DocShell implementations evaporates, this base class will end up being nsDocShell.cpp.

That same day, most of the methods were removed from the nsHTMLDocShell implementation, and nsHTMLDocShell was made to inherit from nsDocShellBase.

“Does not compile yet”

The message for this commit on October 27th, 1999 is pretty interesting. It reads:

added a bunch of initial implementation. does not compile yet, but that’s ok because docshell isn’t part of the build yet.

So not only are these patches not being reviewed (as far as I can tell) or mapped to any bugs in the bug tracker, but they just straight-up do not build. They are not building on tinderbox.

This is in pretty stark contrast to today’s code conventions. While it’s true that we might land code that is not entered for most Nightly users, we usually hide such code behind an about:config pref so that developers can flip it on to test it. And I think it’s pretty rare (if it ever occurs) for us to land code in mozilla-central that’s not immediately put into the build system.

Perhaps the WebShell tests that were part of the Plan of Attack were being written in parallel and just haven’t landed, but I suspect that they haven’t been written at all at this point. I suspect that the team was trying to stand something up and make it partially work, and then write tests for WebShell and try to make them pass for both old WebShell and DocShell. Or maybe just the latter.

These days, I think that’s probably how we’d go about such a major re-architecture / redesign / refactor; we’d write tests for the old component, land code that builds but is only entered via an about:config pref, and then work on porting the tests over to the new component. Once the tests pass for both, flip the pref for our Nightly users and let people test the new stuff. Once it feels stable, take it up the trains. And once it ships and it doesn’t look like anything is critically wrong with the new component, begin the process of removing the old component / tests and getting rid of the about:config pref.

Note that I’m not at all bashing Travis or the other developers who were working on this stuff back then – I’m simply remarking on how far we’ve come in terms of development practices.

Remember AOL keywords?

Tangentially, I’m seeing some patches go by that have to do with hooking up some kind of “Keyword” support to WebShell.

Remember those keywords? This was the pre-Google era where there were only a few simplistic search engines around, and people were still trying to solve discoverability of things on the web. Keywords was, I believe, AOL’s attempt at a solution.

You can read up on AOL Keywords here. I just thought it was interesting to find some Keywords support being written in here.

One DocShell to rule them all

Now that we have decided that there is only one docshell for all content types, we needed to get rid of the base class/ content type implementation. This checkin takes and moves the nsDocShellBase to be nsDocShell. It now holds the nsIHTMLDocShell stuff. This will be going away. nsCDocShell was created to replace the previous nsCHTMLDocShell.

This commit lands on November 12th (almost a month from the first landing), and is the point where the DocShell-implementation-per-document-type plan breaks down. nsDocShellBase gets renamed to nsDocShell, and the nsIHTMLDocShell interface gets moved into nsIDocShell.idl, where a comment above it indicates that the interface will soon go away.

We have nsCDocShell.idl, but this interface will eventually disappear as well.

[PORKJOCKEY]

So, this commit message on November 13th caught my eye:

pork jockey paint fixes. bug=18140, r=kmcclusk,pavlov

What the hell is a “pork jockey”? A quick search around, and I see yet another reference to it in Bugzilla on bug 14928. It seems to be some kind of project… or code name…

I eventually found this ancient wiki page that documents some of the language used in bugs on Bugzilla, and it has an entry for “pork jockey”: a pork jockey bug is a “bug for work needed on infrastructure/architecture”.

I mentioned this in #developers, and dmose (who was hacking on Mozilla code at the time), explained:

16:52 (dmose) mconley: so, porkjockeys
16:52 (mconley) let’s hear it
16:52 (dmose) mconley: at some point long ago, there was some infrastrcture work that needed to happen
16:52 mconley flips on tape recorder
16:52 (dmose) and when people we’re talking about it, it seemed very hard to carry off
16:52 (dmose) somebody said that that would happen on the same day he saw pigs fly
16:53 (mconley) ah hah
16:53 (dmose) so ultimately the group of people in charge of trying to make that happen were…
16:53 (dmose) the porkjockeys
16:53 (dmose) which was the name of the mailing list too

Here’s the e-mail that Brendan Eich sent out to get the Porkjockeys flying.

Development play-by-play

On November 17th, the nsIGenericWindow interface was removed because it was being implemented in widget/base/nsIBaseWindow.idl.

On November 27th, nsWebShell started to implement nsIBaseWindow, which helped pull a bunch of methods out of the WebShell implementations.

On November 29th, nsWebShell now implements nsIDocShell – so this seems to be the first point where the DocShell work gets brought into code paths that might be hit. This is particularly interesting, because he makes this change both to the WebShell implementation under the docshell/ folder and to the WebShell implementation under the webshell/ folder. This means that some of the DocShell code is now actually being used.

On November 30th, Travis lands a patch to remove some old commented out code. The commit message mentions that the nsIDocShellEdit and nsIDocShellFile interfaces introduced in the first landing are now defunct. It doesn’t look like anything is diving in to replace these interfaces straight away, so it looks like he’s just not worrying about it just yet. The defunct interfaces are removed in this commit one day later.

On December 1st, WebShell (both the fork and the “live” version) is made to implement nsIDocShellContainer.

One day later, the nsIDocShellTreeNode interface is added to replace nsIDocShellContainer. The interface is almost identical to nsIDocShellContainer, except that it allows the caller to access child DocShells at particular indices instead of just returning an enumerator.

December 3rd was a very big day! Highlights include:

Noticing something? A lot of these changes are getting dumped straight into the live version of WebShell (under the webshell/ directory). That’s not really what the Plan of Attack had spelled out, but that’s what appears to be happening. Perhaps all of this stuff was trivial enough that it didn’t warrant waiting for the WebShell fork to switch over.

On December 12th, nsIDocShellTreeOwner is introduced.

On December 15th, buster re-lands the nsIDocShellEdit and nsIDocShellFile interfaces that were removed on November 30th, but they’re called nsIContentViewerEdit and nsIContentViewerFile, respectively. Otherwise, they’re identical.

On December 17th, WebShell becomes a subclass of DocShell. This means that a bunch of things can get removed from WebShell, since they’re being taken care of by the parent DocShell class. This is a pretty significant move in the whole “replacing WebShell” strategy.

Similar work occurs on December 20th, where even more methods inside WebShell start to forward to the base DocShell class.

That’s the last bit of notable work during 1999. The next bits show up in the new year, and provide further proof that we didn’t all blow up during Y2K.

On February 2nd, 2000, a new interface called nsIWebNavigation shows up. This interface is still used to this day to navigate a browser, and to get information about whether it can go “forwards” or “backwards”.

On February 8th, a patch lands that makes nsGlobalWindow deal entirely in DocShells instead of WebShells. nsIScriptGlobalObject also now deals entirely with DocShells. This is a pretty big move, and the patch is sizeable.

On February 11th, more methods are removed from WebShell, since the refactorings and rearchitecture have made them obsolete.

On February 14th, for Valentine’s day, Travis lands a patch to have DocShell implement the nsIWebNavigation interface. Later on, he lands a patch that relinquishes further control from WebShell, and puts the DocShell in control of providing the script environment and providing the nsIScriptGlobalObjectOwner interface. Not much later, he lands a patch that implements the Stop method from the nsIWebNavigation interface for DocShell. It’s not being used yet, but it won’t be long now. Valentine’s day was busy!

On February 24th, more stuff (like the old Stop implementation) gets gutted from WebShell. Some, if not all, of those methods get forwarded to the underlying DocShell, unsurprisingly.

Similar story on February 29th, where a bunch of the scroll methods are gutted from WebShell, and redirected to the underlying DocShell. This one actually has a bug and some reviewers!9 Travis also landed a patch that gets DocShell set up to be able to create its own content viewers for various documents.

March 10th saw Travis gut plenty of methods from WebShell and redirect to DocShell instead. These include Init, SetDocLoaderObserver, GetDocLoaderObserver, SetParent, GetParent, GetChildCount, AddChild, RemoveChild, ChildAt, GetName, SetName, FindChildWithName, SetChromeEventHandler, GetContentViewer, IsBusy, SetDocument, StopBeforeRequestingURL, StopAfterURLAvailable, GetMarginWidth, SetMarginWidth, GetMarginHeight, SetMarginHeight, SetZoom, GetZoom. A few follow-up patches did something similar. That must have been super satisfying.

March 11th, Travis removes the Back, Forward, CanBack and CanForward methods from WebShell. Consumers of those can use the nsIWebNavigation interface on the DocShell instead.

March 30th sees the nsIDocShellLoadInfo interface show up. This interface is for “specifying information used in a nsIDocShell::loadURI call”. I guess this is certainly better than adding a huge number of arguments to ::loadURI.
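The idea is just the classic parameter-object pattern. A rough Python sketch (hypothetical field names, not the actual nsIDocShellLoadInfo members):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LoadInfo:
        """All the optional knobs for a load, bundled into one object."""
        referrer: Optional[str] = None
        load_type: str = "normal"  # e.g. "normal", "reload", "history"
        post_data: Optional[bytes] = None

    def load_uri(uri: str, info: Optional[LoadInfo] = None) -> None:
        info = info or LoadInfo()
        print("loading", uri, "as", info.load_type)

    # Callers set only what they need, and adding a new option later
    # doesn't change the function signature.
    load_uri("https://example.org", LoadInfo(load_type="reload"))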

During all of this, I’m seeing references to a “new session history” being worked on. I’m not really exploring session history (at this point), so I’m not highlighting those commits, but I do want to point out that a replacement for the old Netscape session history stuff was underway during all of this DocShell business, and the work intersected quite a bit.

On April 16th, Travis lands a commit that takes yet another big chunk out of WebShell in terms of loading documents and navigation. The new session history is now being used instead of the old.

The last 10% is the hardest part

We’re approaching what appears to be the end of the DocShell work. According to his LinkedIn profile, Travis left AOL in May 2000. His last commit to the repository before he left was on April 24th. Big props to Travis for all of the work he put in on this project – by the time he left, WebShell was quite a bit simpler than when he started. I somehow don’t think he reached the end state he had envisioned when he wrote the original redesign document – the work doesn’t appear to be done. WebShell is still around (in fact, parts of it were around until only recently!10). Still, it was a hell of a chunk of work he put in.

And if I’m interpreting the rest of the commits after this correctly, there is a slow but steady drop off in large architectural changes, and a more concerted effort to stabilize DocShell, nsWebBrowser and nsWebShell. I suspect this is because everybody was buckling down trying to ship the first version of the Mozilla Suite (which finally occurred June 5th, 2002 – still more than 2 years down the road).

There are still some notable commits though. I’ll keep listing them off.

On June 22nd, a developer called “rpotts” lands a patch to remove the SetDocument method from DocShell, and to give the implementation / responsibility of setting the document on implementations of nsIContentViewer.

July 5th sees rpotts move the new session history methods from nsIWebNavigation to a new interface called nsIDocShellHistory. It’s starting to feel like the new session history is really heating up.

On July 18th, a developer named Judson Valeski lands a large patch with the commit message “webshell-docshell consolodation changes”. Paraphrasing from the bug, the point of this patch is to move WebShell into the DocShell lib to reduce the memory footprint. This also appears to be a lot of cleanup of the DocShell code. Declarations are moved into header files. The nsDocShellModule is greatly simplified with some macros. It looks like some dead code is removed as well.

On November 9th, a developer named “disttsc” moves the nsIContentViewer interface from the webshell folder to the docshell folder, and converts it from a manually created .h to an .idl. The commit message states that this work is necessary to fix bug 46200, which was filed to remove nsIBrowserInstance (according to that bug, nsIBrowserInstance must die).

That’s probably the last big, notable change to DocShell during 2000.

2001: A DocShell Odyssey

On March 8th, a developer named “Dan M” moves the GetPersistence and SetPersistence methods from nsIWebBrowserChrome to nsIDocShellTreeOwner. He sounds like he didn’t really want to do it, or didn’t want to be pegged with the responsibility of the decision – the commit message states “embedding API review meeting made me do it.” This work was tracked in bug 69918.

On April 16th, Judson Valeski makes it so that the mimetypes that a DocShell can handle are not hardcoded into the implementation. Instead, handlers can be registered via the CategoryManager. This work was tracked in bug 40772.

On April 26th, a developer named Simon Fraser adds an implementation of nsISimpleEnumerator for DocShells. This implementation is called, unsurprisingly, nsDocShellEnumerator. This was for bug 76758. A method for retrieving an enumerator is added one day later in a patch that fixes a number of bugs, all related to the page find feature.

April 27th saw the first of the NSPR logging for DocShell get added to the code by a developer named Chris Waterson. Work for that was tracked in bug 76898.

On May 16th, for bug 79608, Brian Stell landed a getter and setter for the character set for a particular DocShell.

There’s a big gap here, where the majority of the landings are relatively minor bug fixes, cleanup, or only slightly related to DocShell’s purpose, and not worth mentioning.11

And beyond the infinite…

On January 8th, 2002, for bug 113970, Stephen Walker lands a patch that takes yet another big chunk out of WebShell, and added this to the nsIWebShell.h header:

/**
 * WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING !!!!
 *
 * THIS INTERFACE IS DEPRECATED. DO NOT ADD STUFF OR CODE TO IT!!!!
 */

I’m actually surprised it took so long for something like this to get added to the nsIWebShell interface – though perhaps there was a shared understanding that nsIWebShell was shrinking, and such a notice wasn’t really needed.

On January 9th, 2003 (yes, a whole year later – I didn’t find much worth mentioning in the intervening time), I see the first reference to “deCOMtamination”, which is an effort to reduce the amount of XPCOM-style code being used. You can read up more on deCOMtamination here.

On January 13th, 2003, Nisheeth Ranjan lands a patch to use “machine learning” in order to order results in the urlbar autocomplete list. I guess this was the precursor to the frecency algorithm that the AwesomeBar uses today? Interestingly, this code was backed out again on February 21st, 2003 for reasons that aren’t immediately clear – but it looks like, according to this comment, the code was meant to be temporary in order to gather “weights” from participants in the Mozilla 1.3 beta, which could then be hard-coded into the product. The machine-learning bug got closed on June 25th, 2009 due to AwesomeBar and frecency.

On February 11th, 2004, the onbeforeunload event is introduced.

On April 17, 2004, gerv lands the first of several patches to switch the licensing of the code over to the MPL/LGPL/GPL tri-license. That’d be MPL 1.1.

On July 7th, 2004, timeless lands a patch that makes it so that embedders can disable all plugins in a document.

On November 23rd, 2004, the parent and treeOwner attributes for nsIDocShellTreeItem are made scriptable. They are read-only for script.

On December 8th, 2004, bz lands a patch that makes DocShell inherit from DocLoader, and starts a move to eliminate nsIWebShell, nsIWebShellContainer, and nsIDocumentLoader.

And that’s probably the last notable patch related to DocShell in 2004.

Lord of the rings

Reading through all of these commits in the docshell/ and webshell/ folders is a bit like taking a core sample of a very mature tree, and reading its rings. I can see some really important events occurring as I flip through these commits – from the very birth of Mozilla, to the birth of XPCOM and XUL, to porkjockeys, and the start of the embedding efforts… all the way to the splitting out of Thunderbird, deCOMtamination and introduction of the MPL. I really got a sense of the history of the Mozilla project reading through these commits.

I feel like I’m getting a better and better sense of where DocShell came from, and why it does what it does. I hope if you’re reading this, you’re getting that same sense.

Stay tuned for the next bit, where I look at years 2005 to 2010.


  1. Messages like: ERROR: A CVS repository cannot contain both cvsroot/mozilla/browser/base/content/metadata.js,v and cvsroot/mozilla/browser/base/content/Attic/metadata.js,v, for example 

  2. Those commits have hashes 575e116287f99fbe26f54ca3f3dbda377002c5e7 and  60567bb185eb8eea80b9ab08d8695bb27ba74e15 if you want to follow along at home. 

  3. Strangely enough, that docshell folder wasn’t even being built. It was just sitting there, containing a copy of nsWebShell.cpp. That copy mirrored the one under webshell/, and the two copies were kept in sync. Maybe DocShell was something that somebody had always wanted to do but never got around to until Travis came along.

  4. Mozilla Suite 1.0 eventually ships on June 5th, 2002 

  5. According to Wikipedia, Netscape was purchased by AOL on March 17, 1999 

  6. That wiki page was first posted on October 8th, 1999 – exactly a week before the first docshell work had started. 

  7. I should point out that there is no mention of this embedding work in the first roadmap that was published for the Mozilla project. It was only in the second roadmap, published almost a year after the DocShell work began, that embedding was mentioned, albeit briefly. 

  8. More clues as to what WebShell was originally intended for are also in that interface file:

    /**
     * The web shell is a container for implementations of nsIContentViewer.
     * It is a content-viewer-container and also knows how to delegate certain
     * behavior to an nsIWebShellContainer.
     *
     * Web shells can be arranged in a tree.
     *
     * Web shells are also nsIWebShellContainer's because they can contain
     * other web shells.
     */
    

    So that helps clear things up a bit. I think. 

  9. travis reviewed it, and the patch was approved by rickg. Not sure what approval meant back then… was that like a super review? 

  10. This is the last commit to the webshell folder before it got removed on September 30th, 2010. 

  11. Hopefully I didn’t miss anything interesting in there! 

August 10, 2014 11:04 PM

August 06, 2014

Mike Conley

Bugnotes

Over the past few weeks, I’ve been experimenting with taking notes in Evernote on the bugs I’ve been fixing.

I’ve always taken notes on my bugs, but usually in some disposable text file that gets tossed away once the bug is completed.

Evernote gives me more powers, like embedded images, checkboxes, etc. It’s really quite nice, and it lets me export to HTML.

Now that I have these notes, I thought it might be interesting to share them. If I have notes on a bug, here’s what I’m going to aim to do when the bug is closed:

I’ve just posted my first bugnote. It’s raw, unedited, and probably a little hard to follow if you don’t know what I’m working on. I thought maybe it’d be interesting to see what’s going on in my head as I fix a bug. And who knows, if somebody needs to re-investigate a bug or a related bug down the line, these notes might save some people some time.

Anyhow, here are my bugnotes. And if you’re interested in doing something similar, you can fork it (I’m using Jekyll for static site construction).


  1. I didn’t want to pollute this blog with them (plus, dumping these files into WordPress seemed to be a bit heavy)  

August 06, 2014 05:33 PM

Joshua Cranmer

Why email is hard, part 7: email security and trust

This post is part 7 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. This part discusses the problem of trust.

At a technical level, S/MIME and PGP (or at least PGP/MIME) use cryptography essentially identically. Yet the two are treated as radically different models of email security because they diverge on the most important question of public key cryptography: how do you trust the identity of a public key? Trust is critical, as it is the only way to stop an active, man-in-the-middle (MITM) attack. MITM attacks are actually easier to pull off in email, since all email messages effectively have to pass through both the sender's and the recipients' email servers [1], allowing attackers to pull off permanent, long-lasting MITM attacks [2].

S/MIME uses the same trust model that SSL uses, based on X.509 certificates and certificate authorities. X.509 certificates effectively work by providing a certificate that says who you are which is signed by another authority. In the original concept (as you might guess from the name "X.509"), the trusted authority was your telecom provider, and the certificates were furthermore intended to be a part of the global X.500 directory—a natural extension of the OSI internet model. The OSI model of the internet never gained traction, and the trusted telecom providers were replaced with trusted root CAs.
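In miniature, that delegation works like the following toy sketch (fake names, and a lookup table standing in for real signature verification): a certificate is trusted if following its chain of issuers reaches a root you already trust.

    TRUSTED_ROOTS = {"RootCA"}

    # Maps a certificate to the issuer that signed it.
    ISSUER = {"example.org": "IntermediateCA", "IntermediateCA": "RootCA"}

    def chain_is_trusted(cert: str, max_depth: int = 5) -> bool:
        for _ in range(max_depth):
            if cert in TRUSTED_ROOTS:
                return True
            if cert not in ISSUER:
                return False  # chain ends without reaching a trusted root
            cert = ISSUER[cert]
        return False

    print(chain_is_trusted("example.org"))  # True: leaf -> intermediate -> root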

PGP, by contrast, uses a trust model that's generally known as the Web of Trust. Every user has a PGP key (containing their identity and their public key), and users can sign others' public keys. Trust generally flows from these signatures: if you trust a user, you know the keys that they sign are correct. The name "Web of Trust" comes from the vision that trust flows along the paths of signatures, building a tight web of trust.

And now for the controversial part of the post, the comparisons and critiques of these trust models. A disclaimer: I am not a security expert, although I am a programmer who revels in dreaming up arcane edge cases. I also don't use PGP at all, and use S/MIME to a very limited extent for some Mozilla work [3], although I did make a few abortive attempts to dogfood it in the past. I've attempted to replace personal experience with comprehensive research [4], but most existing critiques and comparisons of these two trust models are about 10-15 years old and predate several changes to CA certificate practices.

A basic tenet of development that I have found is that the average user is fairly ignorant. At the same time, a lot of the defense of trust models, both CAs and the Web of Trust, tends to hinge on configurability. How many people, for example, know how to add or remove a CA root from Firefox, Windows, or Android? Even among the subgroup of Mozilla developers, I suspect the number of people who know how to do so is rather few. Or in the case of PGP, how many people know how to change the maximum path length? Or even understand the security implications of doing so?

Seen in the light of ignorant users, the Web of Trust is a UX disaster. Its entire security model is predicated on having users precisely specify how much they trust other people to trust others (ultimate, full, marginal, none, unknown) and also on having them continually do out-of-band verification procedures and publicly reporting those steps. In 1998, a seminal paper on the usability of a GUI for PGP encryption came to the conclusion that the UI was effectively unusable for users, to the point that only a third of the users were able to send an encrypted email (and even then, only with significant help from the test administrators), and a quarter managed to publicly announce their private keys at some point, which is pretty much the worst thing you can do. They also noted that the complex trust UI was never used by participants, although the failure of many users to get that far makes generalization dangerous [5]. While newer versions of security UI have undoubtedly fixed many of the original issues found (in no small part due to the paper, one of the first to argue that usability is integral, not orthogonal, to security), I have yet to find an actual study on the usability of the trust model itself.

The Web of Trust has other faults. The notion of "marginal" trust, it turns out, is rather broken: if you marginally trust a user who has two keys which both signed another person's key, that's the same as fully trusting a user with one key who signed that key. There are several proposals for different trust formulas [6], but none of them have caught on in practice to my knowledge.
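To illustrate the quirk, here is a minimal sketch assuming GnuPG's classic defaults (a key is considered valid if signed by one fully trusted key or by three marginally trusted keys). Because the tally is per key rather than per person, one marginally trusted person holding several keys carries far more weight than intended:

    FULLS_NEEDED = 1      # GnuPG's classic --completes-needed default
    MARGINALS_NEEDED = 3  # GnuPG's classic --marginals-needed default

    def key_is_valid(signer_keys, owner_trust):
        """signer_keys: the keys that signed the target key.
        owner_trust: maps a key to 'full' or 'marginal' trust in its owner."""
        fulls = sum(owner_trust.get(k) == "full" for k in signer_keys)
        marginals = sum(owner_trust.get(k) == "marginal" for k in signer_keys)
        return fulls >= FULLS_NEEDED or marginals >= MARGINALS_NEEDED

    # Alice is only marginally trusted, but she controls three keys and
    # signs Carol's key with each of them -- that alone validates Carol's key.
    trust = {"alice-1": "marginal", "alice-2": "marginal", "alice-3": "marginal"}
    print(key_is_valid({"alice-1", "alice-2", "alice-3"}, trust))  # True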

A hidden fault is associated with its manner of presentation: in sharp contrast to CAs, the Web of Trust appears to not delegate trust, but any practical widespread deployment needs to solve the problem of contacting people who have had no prior contact. Combined with the need to bootstrap new users, this implies that there needs to be some keys that have signed a lot of other keys that are essentially default-trusted—in other words, a CA, a fact sometimes lost on advocates of the Web of Trust.

That said, a valid point in favor of the Web of Trust is that it more easily allows people to distrust CAs if they wish to. While I'm skeptical of its utility to a broader audience, the ability to do so is crucial for a not-insignificant portion of the population, and it's important enough to be explicitly called out.

X.509 certificates are most commonly discussed in the context of SSL/TLS connections, so I'll discuss them in that context as well, as the implications for S/MIME are mostly the same. Almost all criticism of this trust model essentially boils down to a single complaint: certificate authorities aren't trustworthy. A historical criticism is that the addition of CAs to the main root trust stores was ad-hoc. Since then, however, the main oligopoly of these root stores (Microsoft, Apple, Google, and Mozilla) have made their policies public and clear [7]. The introduction of the CA/Browser Forum in 2005, with a collection of major CAs and the major browsers as members [8], also helps in articulating common policies. These policies, simplified immensely, boil down to:

  1. You must verify information (depending on certificate type). This information must be relatively recent.
  2. You must not use weak algorithms in your certificates (e.g., no MD5).
  3. You must not make certificates that are valid for too long.
  4. You must maintain revocation checking services.
  5. You must have fairly stringent physical and digital security practices and intrusion detection mechanisms.
  6. You must be [externally] audited every year that you follow the above rules.
  7. If you screw up, we can kick you out.

I'm not going to claim that this is necessarily the best policy or even that any policy can feasibly stop intrusions from happening. But it's a policy, so CAs must abide by some set of rules.

Another CA criticism is the fear that they may be suborned by national government spy agencies. I find this claim underwhelming, considering that the number of certificates acquired by intrusions that were used in the wild is larger than the number of certificates acquired by national governments that were used in the wild: 1 and 0, respectively. Yet no one complains about the untrustworthiness of CAs due to their ability to be hacked by outsiders. Another attack is that CAs are controlled by profit-seeking corporations, which misses the point because the business of CAs is not selling certificates but selling their access to the root databases. As we will see shortly, jeopardizing that access is a great way for a CA to go out of business.

To understand issues involving CAs in greater detail, there are two CAs that are particularly useful to look at. The first is CACert. CACert is favored by many for its attempt to handle X.509 certificates in a Web of Trust model, so invariably every public discussion about CACert ends up devolving into an attack on other CAs for their perceived capture by national governments or corporate interests. Yet what many of the proponents for inclusion of CACert miss (or dismiss) is the fact that CACert actually failed the required audit, and it is unlikely to ever pass an audit. This shows a central failure of both CAs and the Web of Trust: different people have different definitions of "trust," and in the case of CACert, some people are favoring a subjective definition (I trust their owners because they're not evil) when an objective definition fails (in this case, that the root signing key is securely kept).

The other CA of note here is DigiNotar. In July 2011, some hackers managed to acquire a few fraudulent certificates by hacking into DigiNotar's systems. By late August, people had become aware of these certificates being used in practice [9] to intercept communications, mostly in Iran. The use appears to have been caught after Chromium updates failed due to invalid certificate fingerprints. After it became clear that the fraudulent certificates were not limited to a single fake Google certificate, and that DigiNotar had failed to notify potentially affected companies of its breach, DigiNotar was swiftly removed from all of the trust databases. It ended up declaring bankruptcy within two weeks.

DigiNotar indicates several things. One, SSL MITM attacks are not theoretical (I have seen at least two or three security experts advising pre-DigiNotar that SSL MITM attacks are "theoretical" and therefore the wrong target for security mechanisms). Two, keeping the trust of browsers is necessary for commercial operation of CAs. Three, the notion that a CA is "too big to fail" is false: DigiNotar played an important role in the Dutch community as a major CA and the operator of the Staat der Nederlanden CA. Yet when DigiNotar screwed up and lost its trust, it was swiftly kicked out despite this role. I suspect that even Verisign could be kicked out if it manages to screw up badly enough.

This isn't to say that the CA model isn't problematic. But the source of its problems is that delegating trust isn't a feasible model in the first place, a problem that it shares with the Web of Trust as well. Different notions of what "trust" actually means and the uncertainty that gets introduced as chains of trust get longer make delegated trust vulnerable to both social and technical engineering attacks. There appears to be an increasing consensus that the best way forward is some variant of key pinning, much akin to how SSH works: once you know someone's public key, you complain if that public key appears to change, even if it appears to be "trusted." This does leave people open to attacks on first use, and the question of what to do when you need to legitimately re-key is not easy to solve.
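A minimal sketch of that pinning idea, assuming we identify a peer's key by its SHA-256 fingerprint and keep pins in a local JSON file (a real deployment would also need a story for legitimate re-keying):

    import hashlib
    import json
    import os

    PIN_STORE = "pins.json"  # hypothetical local pin database

    def fingerprint(public_key: bytes) -> str:
        return hashlib.sha256(public_key).hexdigest()

    def check_pin(peer: str, public_key: bytes) -> bool:
        pins = {}
        if os.path.exists(PIN_STORE):
            with open(PIN_STORE) as f:
                pins = json.load(f)
        fp = fingerprint(public_key)
        if peer not in pins:
            # Trust on first use: remember the key (open to attack this once).
            pins[peer] = fp
            with open(PIN_STORE, "w") as f:
                json.dump(pins, f)
            return True
        if pins[peer] != fp:
            # The key changed, even if it still appears "trusted" -- complain.
            print("WARNING: key for", peer, "changed; possible MITM")
            return False
        return True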

In short, both CAs and the Web of Trust have issues. Whether or not you should prefer S/MIME or PGP ultimately comes down to the very conscious question of how you want to deal with trust—a question without a clear, obvious answer. If I appear to be painting CAs and S/MIME in a positive light and the Web of Trust and PGP in a negative one in this post, it is more because I am trying to focus on the positions less commonly taken to balance perspective on the internet. In my next post, I'll round out the discussion on email security by explaining why email security has seen poor uptake and answering the question as to which email security protocol is most popular. The answer may surprise you!

[1] Strictly speaking, you can bypass the sender's SMTP server. In practice, this is considered a hole in the SMTP system that email providers are trying to plug.
[2] I've had 13 different connections to the internet in the time that I've had my main email address, not counting all the public wifis that I have used. Whereas an attacker would find it extraordinarily difficult to intercept all of my SSH sessions for a MITM attack, intercepting all of my email sessions is clearly far easier if the attacker were my email provider.
[3] Before you read too much into this personal choice of S/MIME over PGP, it's entirely motivated by a simple concern: S/MIME is built into Thunderbird; PGP is not. As someone who does a lot of Thunderbird development work that could easily break the Enigmail extension locally, needing to use an extension would be disruptive to workflow.
[4] This is not to say that I don't heavily research many of my other posts, but I did go so far for this one as to actually start going through a lot of published journals in an attempt to find information.
[5] It's questionable how well the usability of a trust model UI can be measured in a lab setting, since the observer effect is particularly strong for all metrics of trust.
[6] The web of trust makes a nice graph, and graphs invite lots of interesting mathematical metrics. I've always been partial to eigenvectors of the graph, myself.
[7] Mozilla's policy for addition to NSS is basically the standard policy adopted by all open-source Linux or BSD distributions, seeing as OpenSSL never attempted to produce a root database.
[8] It looks to me that it's the browsers who are more in charge in this forum than the CAs.
[9] To my knowledge, this is the first—and so far only—attempt to actively MITM an SSL connection.

August 06, 2014 03:39 AM

July 31, 2014

Kent James

Thunderbird’s Future: the TL;DR Version

In the next few months I hope to do a series of blog posts that talk about Mozilla’s Thunderbird email client and its future. Here’s the TL;DR version (though still pretty long). These are my personal views, I have no authority to speak for Mozilla or for the Thunderbird project.

Current Status

Where We Need to Go

How We Get There

The Thunderbird team is currently planning to get together in Toronto in October 2014, and Mozilla staff are trying to plan an all-hands meeting sometime soon. Let’s discuss the future in conjunction with those events, to make sure that in 2015 we have a sustainable plan for the future.

July 31, 2014 08:16 PM

July 25, 2014

Instantbird

Linux nightly builds back!

Back in March, we posted that we had started building nightly builds from mozilla-central/comm-central, but because the version of CentOS we had been using was too old, we were unable to continue providing Linux nightly builds. That has now changed and (as of today) we have both 32-bit and 64-bit Linux nightlies! Since this involved installing a new operating system (CentOS 6.2) and tweaking some of the build configuration for Linux, please let us know if you see any issues! Additionally, some more up-to-date features that have been available in Mozilla Firefox for a while should now be available in Instantbird (e.g. D-Bus and PulseAudio support), and even some minor bugs were fixed!

Sorry that this took so long, but go grab your updated copy now!

July 25, 2014 01:07 PM

July 22, 2014

Calendar

Lightning 3.3 is Out the Door

I am happy to announce that Lightning 3.3, a new major release, is out of the door. Here are a few release highlights:

There have also been a lot of changes in the backend that are not visible to the user. This includes better testing framework support, which will help avoid regressions in the future. A total of 103 bugs have been fixed since Lightning 2.6.

When installing or updating to Thunderbird 31, you should automatically receive the upgrade to Lightning 3.3. If something goes wrong, you can get the new versions here:

Should you be using SeaMonkey, you will have to wait for the 2.28 release, which is postponed as per this thread.

If you encounter any major issues, please comment on this blog post. Support issues are handled on support.mozilla.org. Feature requests and bug reports can be made on bugzilla.mozilla.org in the product Calendar. Be sure to search for existing bugs before you file them.

Addons Update:

There are a number of addons that have compatibility issues with Lightning 3.3. The authors have been notified and a few first fixes are available:

Other Updates:

July 22, 2014 05:43 PM

July 17, 2014

Kent James

Following Wikipedia, Thunderbird Could Raise $1,600,000 in annual donations

What will it take to keep Thunderbird stable and vibrant? Although there is a dedicated, hard-working team of volunteers trying to keep Thunderbird alive, there has been very little progress on improvements since Mozilla drastically reduced its funding. I’ve been an advocate for some time that Thunderbird needs income to fulfill its potential, and that the best way to generate that income would be to appeal directly to its users for donations.

One internet organization that has done this successfully is Wikipedia. How much income could Thunderbird generate if it received the same income per user as Wikipedia? Surely our users, who rely on Thunderbird for critical daily communications, are at least as willing to donate as Wikipedia users.

Estimates of income from Wikipedia's annual fundraising appeal to users are around $20,000,000 per year. Recently Wikipedia reported 11,824 million pageviews per month, at about 5 pageviews per user per day. That works out to roughly 78 million daily users. Thunderbird by contrast has about 6 million daily users (estimated from daily hits to the update-check servers), or about 8% of the daily users of Wikipedia.

If Thunderbird were willing to directly engage users asking for donations, at the same rate per user as Wikipedia, there is a potential to raise $1,600,000 per year. That would certainly be enough income to maintain a serious team to move forward.
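For the curious, here is the back-of-the-envelope arithmetic behind that figure as a small JavaScript snippet (all inputs are the rough estimates quoted above):

    // All figures are the rough estimates from this post.
    var wikipediaAnnualDonations = 20e6;         // ~$20M per year
    var wikipediaDailyUsers = 11824e6 / 5 / 30;  // pageviews/month ÷ views/user/day ÷ days ≈ 78.8M
    var thunderbirdDailyUsers = 6e6;             // from update-check hits

    var userRatio = thunderbirdDailyUsers / wikipediaDailyUsers;  // ≈ 0.077, rounded to 8% above
    var estimate = userRatio * wikipediaAnnualDonations;          // ≈ $1.5M–$1.6M per year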

Wikipedia's donation requests were fairly intrusive, with large banners at the top of all Wikipedia pages. When Firefox did a direct appeal to users early this year, the appeal was very subtle (did you even notice it?). I tried to scale the Firefox results to Thunderbird, and estimated that a similar subtle appeal might raise $50,000 – $100,000 per year for Thunderbird. That is not sufficient to make a significant impact. We would have to be willing to be a little intrusive, like Wikipedia, if we are going to be successful. This will generate pushback, as Wikipedia's campaign has, so we would have to be willing to live with the pushback.

But is it really in the best interest of our users to spare them an annual, slightly intrusive appeal for donations, while letting the product that they depend on each day slowly wither away? I believe that if we truly care about our users, we will take the necessary steps to ensure that we give them the best product possible, including undertaking fundraising to keep the product stable and vibrant.

July 17, 2014 06:31 AM

July 11, 2014

Kent James

The Thunderbird Tree is Green!

For the first time in a while, the Thunderbird build tree is all green. That means that all platforms are building, and passing all tests:

The Thunderbird build tree is green!

Many thanks to Joshua Cranmer for all of his hard work to make it so!

July 11, 2014 07:05 PM

Ludovic Hirlimann

Thunderbird 31 coming soon to you and needs testing love

We just released the second beta of Thunderbird 31. Please help us improve Thunderbird quality by uncovering bugs now in Thunderbird 31 beta so that developers have time to fix them.

There are two ways you can help:

- Use Thunderbird 31 beta in your daily activities. For problems that you find, file a bug report that blocks our tracking bug 1008543.

- Use Thunderbird 31 beta to do formal testing. Use the MozTrap testing system to run tests: choose "Run Tests", find the Thunderbird product, and choose the 31 test run.

Visit https://etherpad.mozilla.org/tbird31testing for additional information, and to post your testing questions and results.

Thanks for contributing and helping!

Ludo for the QA team

Updated links

July 11, 2014 10:39 AM

July 09, 2014

Rumbling Edge - Thunderbird

2014-07-08 Calendar builds

Common (excluding Website bugs)-specific: (2)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

July 09, 2014 07:21 AM

2014-07-08 Thunderbird comm-central builds

Thunderbird-specific: (14)

MailNews Core-specific: (17)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

July 09, 2014 07:20 AM

July 07, 2014

Mike Conley

DocShell in a Nutshell – Part 1: Original Intents

I think in order to truly understand what the DocShell currently is, we have to find out where the idea of creating it came from. That means going way, way back to its inception, and figuring out what its original purpose was.

So I've gone back, peered through various archived wiki pages, newsgroup and mailing list posts, and I think I've figured out that original purpose.[1]

The original purpose can be, I believe, summed up in a single word: embedding.

Embedding

Back in the late '90s, sometime after the Mozilla codebase was open-sourced, it became clear to some folks that the web was "going places". It was the "bee's knees". It was the "cat's pajamas". As such, it was likely that more and more desktop applications were going to need to be able to access and render web content.

The thing is, accessing and rendering web content is hard. Really hard. One does not simply write web browsing capabilities into their application from scratch hoping for the best. Heartbreak is in that direction.

Instead, the idea was that pre-existing web engines could be embedded into other applications. For example, Steam, Valve's game distribution platform, displays a ton of web content in its user interface. All of those Steam store pages? Those are web pages! They're using an embedded web engine in order to display that stuff.[2]

So making Gecko easily embeddable was, at the time, a real goal, and a real project.

nsWebShell

The problem was that embedding Gecko was painful. The top-level component that embedders needed to instantiate and communicate with was called "nsWebShell", and it was pretty unwieldy. Lots of knowledge about the internal workings of Gecko leaked through the nsWebShell component, and its interface changed far too often.

It was also inefficient – the nsWebShell didn't just represent the top-level "thing that loads web content". Instances of nsWebShell were also used recursively for subdocuments within those documents – for example, (i)frames within a webpage. These nested nsWebShells formed a tree. That's all well and good, except for the fact that there were things that the nsWebShell loaded or did that only the top-level nsWebShell really needed to load or do. So there was definitely room for some performance improvement.

In order to correct all of these issues, a plan was concocted to retire nsWebShell in favour of several new components and a slew of new interfaces. Two of those new components were nsDocShell and nsWebBrowser.

nsWebBrowser

nsWebBrowser would be the thing that embedders would drop into their applications – it would be the browser, and would do all of the loading / doing of things that only the top-level web browser needed to do.

The interface for nsWebBrowser would be minimal, just exposing enough so that an embedder could drop one into their application with little fuss, point it at a URL, set up some listeners, and watch it dance.

nsDocShell

nsDocShell would be… well, everything else that nsWebBrowser wasn’t. So that dumping ground that was nsWebShell would get dumped into nsDocShell instead. However, a number of new, logically separated interfaces would be created for nsDocShell.

Examples of those interfaces were:

So instead of a gigantic, ever changing interface, you had lots of smaller interfaces, many of which could eventually be frozen over time (which is good for embedders).

These interfaces also made it possible to shield embedders from various internals of the nsDocShell component that embedders shouldn’t have to worry about.
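To make that concrete, here is a small sketch of what this looks like from chrome JavaScript, given a docShell object. nsIWebNavigation, nsIWebProgress and nsIDocShellTreeItem are real interfaces that nsDocShell implements today, though I'm not claiming they match the original list exactly:

    // One object, many small contracts - callers ask only for the piece they need.
    var Ci = Components.interfaces;
    var webNav = docShell.QueryInterface(Ci.nsIWebNavigation);      // loading and session history
    var progress = docShell.QueryInterface(Ci.nsIWebProgress);      // load progress notifications
    var treeItem = docShell.QueryInterface(Ci.nsIDocShellTreeItem); // position in the docshell tree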

Ok, but… what was it?

But I still haven't answered the question – what was the DocShell at this point? What was it supposed to do now that it had been created?

This ancient wiki page spells it out nicely:

This class is responsible for initiating the loading and viewing of a document.

This document also does a good job of describing what a DocShell is and does.

Basically, any time a document is to be viewed, a DocShell needs to be created to view it. We create the DocShell, and then we point that DocShell at the URL, and it does the job of kicking off communications via the network layer, and dealing with the content once it comes back.
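In chrome JavaScript, that "point it at the URL" step looks roughly like this – a minimal sketch using nsIWebNavigation, with the flags and optional arguments simplified:

    var webNav = docShell.QueryInterface(Components.interfaces.nsIWebNavigation);
    // Kicks off the network load; the docshell deals with the content once it comes back.
    webNav.loadURI("https://example.org/",
                   Components.interfaces.nsIWebNavigation.LOAD_FLAGS_NONE,
                   null,   // referrer
                   null,   // postData
                   null);  // extra headers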

So it’s no wonder that it was (and still is!) a dumping ground – when it comes to loading and displaying content, nsDocShell is the central nexus point of communications for all components that make that stuff happen.

I believe that was the original purpose of nsDocShell, anyhow.

And why “shell”?

This is a simple question that has been on my mind since I started this. What does the “shell” mean in nsDocShell?

Y'know, I think it's actually a fragment left over from the embedding work, and that it really has no meaning anymore. Originally, nsWebShell was the separation point between an embedder and the Gecko web engine – so I think I can understand what "shell" means in that context – it's the touch-point between the embedder and the embedee.

I think nsDocShell was given the "shell" moniker because it did the job of taking over most of nsWebShell's duties. However, since nsWebBrowser was now the touch-point between the embedder and embedee… maybe shell makes less sense. I wonder if we missed an opportunity to name nsDocShell something better.

In some ways, “shell” might make some sense because it is the separation between various documents (the root document, any sibling documents, and child documents)… but I think that’s a bit of a stretch.

But now I’m racking my brain for a better name (even though a rename is certainly not worth it at this point), and I can’t think of one.

What would you rename it, if you had the chance?

What is nsDocShell doing now?

I’m not sure what’s happened to nsDocShell over the years, and that’s the point of the next few posts in this series. I’m going to be going through the commits hitting nsDocShell from 1999 until the present day to see how nsDocShell has changed and evolved.

Hold on to your butts.

Further reading

The above was gleaned from the following sources:


[1] I'm very much prepared to be wrong about any / all of this. I'm making assertions and drawing conclusions by reading and interpreting things that other people have written about DocShell – and if the telephone game is any indication, this indirect analysis can be lossy. If I have misinterpreted, misunderstood, or completely missed the point in any of the above, please don't hesitate to comment, and I will correct it forthwith.

[2] They happen to be using WebKit, the same web engine that powers Safari, and (until recently) Chromium. According to this, they're using the Chromium Embedding Framework to display this web content. There are a number of applications that embed Gecko. Firefox is the primary consumer of Gecko. Thunderbird is another obvious one – when you display HTML email, it's using the Gecko web engine in order to lay it out and display it. WINE uses Gecko to allow Windows binaries to browse the web. Making your web engine embeddable, however, has a development cost, and over the years, making Gecko embeddable seems to have become less of a priority. Servo is a next-generation web browser engine from Mozilla Research that aims to be embeddable.

July 07, 2014 05:12 PM

Blake Winton

Figuring out where things are in an image.

People love heatmaps.

They're a great way to show how much various UI elements are used in relation to each other, and are much easier to read at a glance than a table of click-counts would be. They can also reveal hidden patterns of usage based on the locations of elements, let us know if we're focusing our efforts on the correct elements, and tell us how effective our communication about new features is. Because they're so useful, one of the things I am doing in my new role is setting up the framework to provide our UX team with automatically updating heatmaps for both Desktop and Android Firefox.

Unfortunately, we can’t just wave our wands and have a heatmap magically appear. Creating them takes work, and one of the most tedious processes is figuring out where each element starts and stops. Even worse, we need to repeat the process for each platform we’re planning on displaying. This is one of the primary reasons we haven’t run a heatmap study since 2012.

In order to not spend all my time generating the heatmaps, I had to reduce the effort involved in producing these visualizations.

Being a programmer, my first inclination was to write a program to calculate them, and that sort of worked for the first version of the heatmap, but there were some difficulties. To collect locations for all the elements, we had to display all the elements.

Firefox in the process of being customized

Customize mode (as shown above) was an obvious choice since it shows everything you could click on almost by definition, but it led people to think that we weren’t showing which elements were being clicked the most, but instead which elements people customized the most. So that was out.

Next we tried putting everything in the toolbar, or the menu, but those were a little too cluttered even without leaving room for labels, and too wide (or too tall, in the case of the menu).

A shockingly busy toolbar

Similarly, I couldn’t fit everything into the menu panel either. The only solution was to resort to some Photoshop-trickery to fit all the buttons in, but that ended up breaking the script I was using to locate the various elements in the UI.

A surprisingly tall menu panel

Since I couldn’t automatically figure out where everything was, I figured we might as well use a nicely-laid out, partially generated image, and calculate the positions (mostly-)manually.

The current version of the heatmap (Note: This is not the real data.)

I had foreseen the need for different positions for the widgets when the project started, and so I put the widget locations in their own file from the start. This meant that I could update them without changing the code, which made it a little nicer to see what’s changed between versions, but still required me to reload the whole page every time I changed a position or size, which would just have taken way too long. I needed something that could give me much more immediate feedback.
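The positions file holds data along these lines – the format and names here are hypothetical, just to illustrate the data/code split:

    // widget-positions.js - data only, no logic, so tweaking a rectangle never touches the code.
    var widgetPositions = {
      "back-button": { x: 8,   y: 4, width: 30,  height: 30 },
      "urlbar":      { x: 120, y: 4, width: 400, height: 30 },
      "menu-button": { x: 530, y: 4, width: 30,  height: 30 }
    };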

Fortunately, I had recently finished watching a series of videos from Ian Johnson (@enjalot on twitter) where he used a tool he made called Tributary to do some rapid prototyping of data visualization code. It seemed like a good fit for the quick moving around of elements I was trying to do, and so I copied a bunch of the code and data in, and got to work moving things around.

I did encounter a few problems: Tributary wasn't functional in Firefox Nightly (but I could use Chrome as a workaround), and occasionally trying to move the cursor would change a value slider instead. Even with these glitches it only took me an hour or two to get from set-up to having all the numbers for the final result! And the best part is that since it's all open source, you can take a look at the final result, or fork it yourself!

July 07, 2014 03:53 PM

June 30, 2014

Mike Conley

Alice in DocShell Land

I've been reading a book called The Annotated Alice. In this book, the late and great Martin Gardner shows us the stories of Alice's Adventures in Wonderland and Through the Looking-Glass but supplies copious footnotes to illustrate the puns, wordplay, allusions, logic problems and satire going on beneath the text. Some of these footnotes delve into pure conjecture (there are still people to this day who theorize about various aspects of the stories), and other footnotes show quite clearly that Carroll wrote these stories with a sophisticated wit and whimsy that isn't immediately obvious at first glance.

And it’s clear that Gardner (and others like him) have spent hours upon hours thinking and theorizing about these stories. A purposeful misspelling gets awarded a two page footnote here, and a mention of a mirror sends us off talking about matter and anti-matter and other matters (ha) of quantum physics.

So much thinking and effort to interpret these stories, and what you get out of it is a fascinating tapestry of ideas and history.

Needless to say, I’ve been finding the whole thing fascinating. It’s a hell of a read.

While reading it, I’ve wondered what it’d be like to apply the same practice to source code. Take some relatively mysterious piece of source code that only a few people feel comfortable with, and explode it out. Go through the source control history, and all of the old bugs, and see where this code came from. What was its purpose to begin with? What is its purpose now? What are the battle scars?

After much thinking, I’ve decided to try this, and I’m going to try it on a piece of Gecko called “DocShell”.

I think I just heard Ms2ger laughing somewhere.

It’s become pretty clear having talked to a few seasoned Mozilla hackers that DocShell is not well understood. The wiki page on it makes that even more clear – it starts:

The goal of this page is to serve as a dumping/organization ground for docshell docs. When someone finds out something, it should be added here in a reasonable way. By the time this gets unwieldy, hopefully we will have enough material for several actual docs on what docshell does and why.

So, I’m going to attempt to figure out what DocShell was supposed to do, and figure out what it currently does. I’m going to dig through source code, old bugs, and old CVS commits, back to the point where Netscape first open-sourced the Mozilla code-base.

It’s not going to be easy. It’s definitely going to be a multiple month, multiple post effort. I’m likely to get things wrong, or only partially correct. I’ll need help in those cases, so please comment.

And I might not succeed in figuring out what DocShell was supposed to do. But I’m pretty confident I can get a grasp on what it currently does.

So in the end, if I’m lucky, we’ll end up with a few things:

  1. A greater shared understanding of DocShell
  2. Materials that can be used to flesh out the DocShell wiki
  3. Better inline documentation for DocShell maybe?

I’ve also asked bz to forward me feedback requests for DocShell patches, so that way I get another angle of attack on understanding the code.

So, deep breath. Here goes. Watch this space.

June 30, 2014 02:41 AM

June 29, 2014

Mike Conley

Australis Performance Post-mortem Summary

Over the last few months, I’ve been talking about all of the work we put into making Australis feel fast when it shipped in Firefox 29.

I talked about where we started with our performance work, and how we grappled with the ts_paint and tpaint performance (“talos”) tests. After that, I talked a bit about the excellent tools we have (and ones we developed ourselves) to make finding our performance bottlenecks easier.

After a brief delay, I rounded out the series by talking about our tab animation performance work, and the customization transition performance work.

I think over the course of working on these things, I’ve learned quite a bit about performance work in general. If I had to distill it down to a few tidbits, it’d be:

So that’s it on the series. Enjoy your zippy Firefox!

June 29, 2014 02:26 AM

June 19, 2014

Mike Conley

Australis Performance Post-mortem Part 5: The Customize Mode Transition

Another new thing that came out with Firefox 29 is a sexy new customization interface. We wanted to make UI customization something that anybody would feel comfortable doing, instead of something only a few mighty power users might do.

The new customization mode is accessible by pressing the Menu Button (☰), and then clicking Customize.

Give it a shot now if you’ve never done it.

BAM, did you see that? What you just saw was a full-on mode switch in the browser chrome to indicate to you that you’ve put the browser into a different state – specifically, a state where much of the browser UI is malleable. You can drag and drop things in your toolbars and the menu, enable toolbars, etc.

Going in and out of customization mode is not something that most people will do frequently, but we still invested a bunch of time trying to make it smooth. I still think there is more we can do on that front, and I’ll get into that at the end of this post.

Anyhow, as with any performance-related project, we started with a way of measuring. In this case, our trusty performance team re-purposed the TART test that we used for tab animation, and pointed it at the customize mode transition. We called this new test CART (customization animation regression test, natch).

This is slightly different from the tab animation stuff, because we didn’t really have a baseline measurement to compare against – there was no old customization mode transition to try to match or beat. So we just had to do our best to do all of our processing under 16ms in order to draw the transition at 60fps.

That means that instead of comparing two sets of data over time, we’ll only be looking at a single series of points, and how they change over time.

So let’s see where we started, and where we finished, and how we got there.

CART results – pre-optimizations

So in an effort to get this blog post done, I’ve cut some corners and combined the data from a number of different platforms into a single graph. I apologize if this makes it difficult to interpret for some of you – but I’ll do my best to explain what the graph means.

I chose a representative sample of the platforms that we measured – we’ve got OS X (10.6, 10.8), Ubuntu 12.04 (32-bit), Windows XP, Windows 7, and Windows 8.

The X-axis is obviously plotting the time (we started gathering CART data right at the end of January 2014). The Y-axis plots the “final CART value”. CART, like TART, measures a number of things, and we “boil that number down” into a single value that we can plot. Each of the subtests are measured in milliseconds, so I guess you could say the Y-axis is also in milliseconds – but as it’s an aggregate, that’s not too meaningful. It’s not the greatest for detecting small shifts in the subtests (we use Datazilla for that), but it allows me to illustrate the changes over time in a pretty simple way.
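Purely for illustration – the real Talos aggregation may well differ – "boiling down" the subtest times into one plottable number can be as simple as averaging them:

    // Each subtest reports a median time in milliseconds.
    function finalValue(subtestMedians) {
      var sum = subtestMedians.reduce(function (a, b) { return a + b; }, 0);
      return sum / subtestMedians.length; // one aggregate number per test run
    }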

So it looks like my sampled platforms are all floating around the 40s and 50s. Is that good or bad? Well, it's neither really. It's just our starting point, and we wanted to try to improve on that.

And so we set out trying to move those dots downwards (for this data, without going into grand detail, lower milliseconds = faster and smoother animation).

As with TART, we did a lot of profiling, and used a lot of the tools I mentioned in this post. Here are the bugs that seemed to move the needle the most.

The Rogue’s Gallery

As before, this is not a complete list, but captures some of the more interesting bugs we worked on.

Bug 932963 – Break customize mode transition into several phases

This change allowed us to break the transition in and out of customization mode into some phases:

  1. Not in customization mode (default)
  2. Entering customization mode
  3. Entered customization mode
  4. Exiting customization mode

Phases 2, 3 and 4 have attributes on the main browser window that we can target with CSS selectors. We were then able to “lighten” the CSS during phases 2 and 4, in order to optimize the frames during the transition. For example, we don’t display the semi-transparent grid texture on the main window unless we’re in phase 3.
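A simplified sketch of the idea follows – the attribute names are illustrative rather than the literal shipped code. CSS can then scope the expensive rules to phase 3 only, with selectors along the lines of #main-window[customizing]:

    function enterCustomizeMode(window) {
      var docEl = window.document.documentElement;
      docEl.setAttribute("customize-entering", "true");   // phase 2: cheap styles only
      docEl.addEventListener("transitionend", function onDone() {
        docEl.removeEventListener("transitionend", onDone);
        docEl.removeAttribute("customize-entering");
        docEl.setAttribute("customizing", "true");        // phase 3: full styles,
      });                                                 // e.g. the grid texture
    }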

Bug 972485 – Find out why we’re doing a bunch of synchronous file reading at the start of the customize mode transition

This was a rather surprising one – using the Gecko Profiler, it looked like we were doing sync file IO as the blank about:customizing document was loading.

For context, we have a page registered at about:customizing that’s a blank document. When we detect that about:customizing has been loaded or switched to, we enter customize mode.

So this sync file IO was causing some jank at the start of the customization mode transition.

Strangely, it turned out that the XHTML file we were loading for the blank about:customizing document was synchronously loading a bunch of MathML localization stuff.

We switched the document from XHTML to XUL, and the sync IO load went away. We filed a follow-up bug to investigate why exactly we were doing sync file IO on loading XHTML files, because that’s a bad thing to do.

Anyhow, this was a small but significant win for almost all platforms.

Bug 975552 – Preload about:customizing like we do with about:newtab

I think this was the biggest win we achieved during the performance work. That browser that we load the blank about:customizing page in is not free – a bunch of stuff gets instantiated in order for a working browser to be created, and all of that expense is wasted on a blank document. That expense also causes enough main thread thinking that it reduces the smoothness of the customization entering transition.

So the solution was a hack that takes after the same strategy we use for about:newtab. Essentially, we preload the about:customizing browser and document in the background at what seems like a logical time (right after the user has opened the menu). This allows us to front-load all of the expense in creating the browser, and we end up with something much smoother.
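In spirit, the hack looks something like this – the names here are hypothetical, and the real patch lives in bug 975552:

    var gPreloadedCustomizeBrowser = null;

    function onMenuPanelOpened(window) {
      if (gPreloadedCustomizeBrowser)
        return;
      // Pay the browser-creation cost now, in the background,
      // instead of during the transition animation.
      var browser = window.document.createElement("browser");
      browser.setAttribute("type", "content");
      browser.collapsed = true;
      window.document.documentElement.appendChild(browser);
      browser.setAttribute("src", "about:customizing");
      gPreloadedCustomizeBrowser = browser;
    }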

Here’s a before video and an after video from my Windows 7 machine, for reference.

This was a pretty bodacious hack, and in the future, I’d like to remove it completely, and find some way of sidestepping all of the browser internal loading for the about:customizing document (or, even better, find a way of not using a browser element at all!). That’s filed here.

Bug 977796 – Disable subpixel anti-aliasing during customize mode transition.

A profile on Windows 8 showed that we were spending an inordinate amount of time rendering text during the customize mode transition, and this has to do with subpixel anti-aliasing in the menu that animates in.

So I landed a patch that temporarily disables subpixel anti-aliasing on that element during the customize mode transition, and that bought us a huge win for Windows 8 (about 20%). Not much of a win for any of the other platforms.

So where did we end up?

CART results – post-optimizations

Those numbers are indeed lower!

Now, before you lose your mind about that epic cross-platform win around February 20th, I investigated that changeset range and didn't find anything we worked on directly in there.

There are some platform patches in there (specifically in Graphics) that might account for this win, but I’m pretty certain it’s because of bug 974621, which updated our test runners to use a different version of Talos that included the patch in bug 967186. 967186 altered the CART test to be more accurate in its measurements, so actually a lot of our initial data was erroneous. That blows a bit because it adds some unnecessary noise to our graph, but it’s also good because accuracy is a thing we definitely want.

Future work

I still think there’s more wins to be made in the customize mode transition. Finding a way of getting rid of the about:customizing preloading hack and replace it with something smarter (like a thin browser that doesn’t need to instantiate much of its backend, or no browser at all) is probably the first step. More profiling might find more wins. I’m learning more and more about platform as I work on Electrolysis, so maybe I’ll come back to this problem with some new skills and information, and I can get it performing reliably on all platforms as it really should: 60fps, smooth as silk.

June 19, 2014 05:51 PM

June 16, 2014

Calendar

Only 5 Weeks Until Thunderbird 31 – Testing Needed

Thunderbird and Lightning do major releases about every 42 weeks, and by now there are only 5 weeks left until Thunderbird 31 will be released. Your matching calendaring extension version is Lightning 3.3. Betas for both products are available and now it's up to you: we need all the manpower we can get to make sure there are no regressions and everything still works as expected. Over a million active daily users will thank you for this!

To get started, download the beta builds and follow the standard installation procedure.

Although I doubt there will be any issues, it's always a good idea to back up your profile. I myself have been using each of the beta builds and haven't run into any data loss issues.

Here is a list of areas that have changed and that you should especially look into:

There is one known issue if you use a localized version of Thunderbird and Lightning. Certain locales have not been fully translated, but are included in the beta builds nevertheless. We will fix this issue for the second beta build soon.

If you notice any issues please leave a comment here as soon as possible. The more time we have to fix the issue the better. If you have a bugzilla account (or don’t mind creating one for free) please comment on an existing bug or file a new one if needed.

June 16, 2014 07:26 PM

June 13, 2014

Rumbling Edge - Thunderbird

2014-06-11 Calendar builds

Common (excluding Website bugs)-specific: (3)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

June 13, 2014 12:12 AM

2014-06-11 Thunderbird comm-central builds

Thunderbird-specific: (33)

MailNews Core-specific: (19)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

June 13, 2014 12:11 AM

May 27, 2014

Joshua Cranmer

Why email is hard, part 6: today's email security

This post is part 6 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. This part discusses how email security works in practice.

Email security is a rather wide-ranging topic, and one that I've wanted to cover for some time, well before several recent events brought it into wider public awareness. There is no way I can hope to cover it in a single post (I think it would outpace even the length of my internationalization discussion), and there are definitely parts for which I am underqualified, as I am by no means an expert in cryptography. Instead, I will be discussing this over the course of several posts of which this is but the first; to ease up on the amount of background explanation, I will assume passing familiarity with cryptographic concepts like public keys and hash functions, as well as knowing what SSL and SSH are (though not necessarily how they work). If you don't have that knowledge, ask Wikipedia.

Before discussing how email security works, it is first necessary to ask what email security actually means. Unfortunately, the layman's interpretation is likely going to differ from the actual precise definition. Security is often treated by laymen as a boolean: something is either secure or insecure. The most prevalent model of security to people is SSL connections: these allow the establishment of a communication channel whose contents are secret to outside observers while also guaranteeing to the client the authenticity of the server. The server often then gets authenticity of the client via a more normal authentication scheme (i.e., the client sends a username and password). Thus there is, at the end, a channel that has both secrecy and authenticity [1]: channels with both of these are considered secure and channels without these are considered insecure [2].

In email, the situation becomes more difficult. Whereas an SSL connection is between a client and a server, the architecture of email is such that email providers must be considered as distinct entities from end users. In addition, messages can be sent from one person to multiple parties. Thus secure email is a more complex undertaking than just porting relevant details of SSL. There are two major cryptographic implementations of secure email [3]: S/MIME and PGP. In terms of implementation, they are basically the same [4], although PGP has an extra mode which wraps general ASCII (known as "ASCII-armor"), which I have been led to believe is less recommended these days. Since I know the S/MIME specifications better, I'll refer specifically to how S/MIME works.

S/MIME defines two main MIME types: multipart/signed, which contains the message text as a subpart followed by data indicating the cryptographic signature, and application/pkcs7-mime, which contains an encrypted MIME part. The important things to note about this delineation are that only the body data is encrypted [5], that it's theoretically possible to encrypt only part of a message's body, and that the signing and encryption constitute different steps. These factors combine to make for a potentially infuriating UI setup.
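For reference, a skeletal multipart/signed message looks like this (boundary and payloads abbreviated):

    Content-Type: multipart/signed; protocol="application/pkcs7-signature";
        micalg=sha-256; boundary="bnd"

    --bnd
    Content-Type: text/plain

    The message text: signed, but still readable in transit.
    --bnd
    Content-Type: application/pkcs7-signature; name="smime.p7s"
    Content-Transfer-Encoding: base64

    MIIC... (the PKCS#7 signature blob)
    --bnd--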

How does S/MIME tackle the challenges of encrypting email? First, rather than encrypting using recipients' public keys, the message is encrypted with a symmetric key. This symmetric key is then encrypted with each of the recipients' keys and then attached to the message. Second, by only signing or encrypting the body of the message, the transit headers are kept intact for the mail system to retain its ability to route, process, and deliver the message. The body is supposed to be prepared in the "safest" form before transit to avoid intermediate routers munging the contents. Finally, to actually ascertain what the recipients' public keys are, clients typically passively pull the information from signed emails. LDAP, unsurprisingly, contains an entry for a user's public key certificate, which could be useful in large enterprise deployments. There is also work ongoing right now to publish keys via DNS and DANE.
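That first point – one symmetric key, wrapped once per recipient – is the classic hybrid scheme. Here is a minimal sketch of its shape in JavaScript using Node's crypto module (which is emphatically not what S/MIME implementations actually use; real messages are CMS/PKCS#7 structures):

    var crypto = require("crypto");

    function encryptForRecipients(body, recipientPublicKeys) {
      // Encrypt the body once, with a fresh symmetric key...
      var key = crypto.randomBytes(32);
      var iv = crypto.randomBytes(16);
      var cipher = crypto.createCipheriv("aes-256-cbc", key, iv);
      var encryptedBody = Buffer.concat([cipher.update(body, "utf8"), cipher.final()]);

      // ...then wrap that one key separately for each recipient.
      var wrappedKeys = recipientPublicKeys.map(function (publicKeyPem) {
        return crypto.publicEncrypt(publicKeyPem, key);
      });

      // The message carries one encrypted body plus one wrapped key per recipient.
      return { iv: iv, encryptedBody: encryptedBody, wrappedKeys: wrappedKeys };
    }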

I mentioned before that S/MIME's use can present some interesting UI design decisions. I ended up actually testing some common email clients on how they handled S/MIME messages: Thunderbird, Apple Mail, Outlook [6], and Evolution. In my attempts to create a surreptitious signed part to confuse the UI, Outlook decided that the message had no body at all, and Thunderbird decided to ignore all indication of the existence of said part. Apple Mail managed to claim the message was signed in one of these scenarios, and Evolution took the cake by always agreeing that the message was signed [7]. It didn't even bother questioning the signature if the certificate's identity disagreed with the easily-spoofable From address. I was actually surprised by how well people did in my tests—I expected far more confusion among clients, particularly since the will to maintain S/MIME has clearly been relatively low, judging by poor support for "new" features such as triple-wrapping or header protection.

Another fault of S/MIME's design is that it makes the mistaken belief that composing a signing step and an encryption step is equivalent in strength to a simultaneous sign-and-encrypt. Another page describes this in far better detail than I have room to; note that this flaw is fixed via triple-wrapping (which has relatively poor support). This creates yet more UI burden into how to adequately describe in UI all the various minutiae in differing security guarantees. Considering that users already have a hard time even understanding that just because a message says it's from example@isp.invalid doesn't actually mean it's from example@isp.invalid, trying to develop UI that both adequately expresses the security issues and is understandable to end-users is an extreme challenge.

What we have in S/MIME (and PGP) is a system that allows for strong guarantees, if certain conditions are met, yet is also vulnerable to breaches of security if the message handling subsystems are poorly designed. Hopefully this is a sufficient guide to the technical impacts of secure email in the email world. My next post will discuss the most critical component of secure email: the trust model. After that, I will discuss why secure email has seen poor uptake and other relevant concerns on the future of email security.

[1] This is a bit of a lie: a channel that does secrecy and authentication at different times isn't as secure as one that does them at the same time.
[2] It is worth noting that authenticity is, in many respects, necessary to achieve secrecy.
[3] This, too, is a bit of a lie. More on this in a subsequent post.
[4] I'm very aware that S/MIME and PGP use radically different trust models. Trust models will be covered later.
[5] S/MIME 3.0 did add a provision stating that if the signed/encrypted part is a message/rfc822 part, the headers of that part should override the outer message's headers. However, I am not aware of a major email client that actually handles these kind of messages gracefully.
[6] Actually, I tested Windows Live Mail instead of Outlook, but given the presence of an official MIME-to-Microsoft's-internal-message-format document which seems to agree with what Windows Live Mail was doing, I figure their output would be identical.
[7] On a more careful examination after the fact, it appears that Evolution may have tried to indicate signedness on a part-by-part basis, but the UI was sufficiently confusing that ordinary users are going to be easily confused.

May 27, 2014 12:32 AM

May 17, 2014

Rumbling Edge - Thunderbird

2014-05-17 Calendar builds

Common (excluding Website bugs)-specific: (15)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

May 17, 2014 09:47 PM

2014-05-17 Thunderbird comm-central builds

Thunderbird-specific: (45)

MailNews Core-specific: (19)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

May 17, 2014 09:46 PM

May 15, 2014

Mike Conley

Australis Performance Post-mortem Part 4: On Tab Animation

Whoops, forgot that I had a blog post series going on here where I talk about the stuff we did to make Australis blazing fast. In that time, we’ve shipped the thing (Firefox 29 represent!), so folks are actually feeling the results of this performance work, which is pretty excellent.

I ended the last post on an ominous note – something about how we were clear to land Australis on Nightly, or so we thought.

This next bit is all about timing.

There is a Performance Team at Mozilla, who are charged with making our products crazy-fast and crazy-smooth. These folks are geniuses at wringing out every last millisecond possible from a computation. It’s what they do all day long.

A pretty basic principle that I’ve learned over the years is that if you want to change something, you first have to develop a system for measuring the thing you’re trying to change. That way, you can determine if what you’re doing is actually changing things in the way you want them to.

If you've been reading these Australis Performance Post-mortem posts, you'll know that we have some performance tests like this, and they're called Talos tests.

Just about as we were finishing off the last of the tpaint and ts_paint regressions, the front-end team was suddenly made aware of a new Talos test that was being developed by the Performance team. This test was called TART, which stands for Tab Animation Regression Test. The purpose of this test is to exercise various tab animation scenarios and measure the time it takes to paint each frame and to proceed through the entire animation of a tab open and a tab close.

The good news was that this new test was almost ready for running on our Nightly builds!

The bad news was that the UX branch, which Australis was still on at the time, was regressing this test. And since we cannot land if we regress performance like this… it meant we couldn’t land.

Bad news indeed.

Or was it? At the time, the lot of us front-end engineers were groaning because we’d just slogged through a ton of other performance regressions. Investigating and fixing performance regressions is exhausting work, and we weren’t too jazzed that another regression had just shown up.

But thinking back, I’m somewhat glad this happened. The test showed that Australis was regressing tab animation performance, and tabs are opened every day by almost every Firefox user. Regressing tab performance is simply not a thing one does lightly. And this test caught us before we landed something that regressed those tabs mightily!

That was a good thing. We wouldn’t have known otherwise until people started complaining that their tabs were feeling sluggish when we released it (since most of us run pretty beefy development machines).

And so began the long process of investigating and fixing the TART regressions.

So how bad were things?

Let’s take a look at the UX branch in comparison with mozilla-central at the time that we heard about the TART regressions.

Here are the regressing platforms. I’ll start with Windows XP:

Where we started with TART on Windows XP

Forgive me, I couldn’t get the Graph Server to swap the colours of these two datasets, so my original silent pattern of “red is the regressor” has to be dropped. I could probably spend some time trying to swap the colours through various tricky methods, but I honestly don’t think anybody reading this will care too much.

So here we can see the TART scores for Windows XP, and the UX branch (green) is floating steadily over mozilla-central (red). Higher scores are bad. So here’s the regression.

Now let’s see OS X 10.6.

Where we started with TART on OS X 10.6

Same problem here – the UX nodes (tan) are clearly riding higher than mozilla-central. This was pretty similar to OS X 10.7 too, so I didn’t include the graph.

On OS X 10.8, things were a little bit better, but not too much:

Where we started with TART on OS X 10.8

Here, the regression was still easily visible, but not as large in magnitude.

Ubuntu was in the same boat as OS X 10.6/10.7 and Windows XP:

Where we started with TART on Ubuntu

But what about Windows 7 and Windows 8? Well, interesting story – believe it or not, on those platforms, UX seemed to perform better than mozilla-central:

Windows 7 (the blue nodes are mozilla-central, the tan nodes are UX)

Where we started with TART on Windows 7

Windows 8 (the green nodes are mozilla-central, the red nodes are UX)

Where we started with TART on Windows 8

So what the hell was going on?

Well, we eventually figured it out. I’ll lay it out in the next few paragraphs. The following is my “rogue’s gallery” of regressions. This list does not include many false starts and red herrings that we followed during the months working on these regressions. Think of this as “getting to the good parts”.

Backfilling

The problem with having a new test, and having mozilla-central better than UX, is not knowing where UX got worse; there were no historical measurements that we could look at to see where the regression got introduced.

MattN, smart guy that he is, got us a few talos loaner machines, and wrote some scripts to download the Nightlies for both mozilla-central and UX going back to the point where UX split off. Then, he was able to run TART on these builds, and supply the results to his own custom graph server.

So basically, we were able to backfill our missing TART data, and that helped us find a few points of regression.

With that data, now it was time to focus in on each platform, and figure out what we could do with it.

Windows XP

We started with XP, since on the regressing platforms, that’s where most of our users are.

Here’s what we found and fixed:

Bug 916946 – Stop animating the back-button when enabling or disabling it.

During some of the TART tests, we start with a single tab, open a new tab, and then close the new tab, and repeat. That first tab has some history, so the back button is enabled. The new tab has no history, so the back button is disabled.

Apparently, we had some CSS that was causing us to animate the back-button when we were flipping back and forth between the enabled / disabled states. That CSS got introduced in the patch that bound the back/forward/stop/reload buttons to the URL bar. It seemed to affect Windows only. Fixing that CSS gave us our first big win on Windows XP, and gave us more of a lead with Windows 7 and Windows 8!

Bug 907544 - Pass the D3DSurface9 down into Cairo so that it can release the DC and LockRect to get at the bits

I don’t really remember how this one went down (and I don’t want to really spend the time swapping it back in by reading the bug), but from my notes it looks like the Graphics team identified this possible performance bottleneck when I showed them some profiles I gathered when running TART.

The good news on this one was that it definitely gave the UX branch a win on Windows XP. The bad news was that it gave the same win to mozilla-central. This meant that while overall performance got better on Windows XP, we still had the same regression preventing us from landing.

Bug 919541 - Consider not animating the opacity for Australis tabs

Jeff Muizelaar helped me figure this out while we were using paint flashing to analyze paint activity while opening and closing tabs. When we slowed down the transitions, we noticed that the closing tabs were causing paints even though they weren't visible. Closing tabs aren't visible because with Australis, we don't show the tab shape around tabs when they're not selected – and closing a tab automatically deselects it in favour of another tab.

For some reason, our layout and graphics code still wanted to paint this transition even though the element was not visible. We quickly nipped that in the bud, and got ourselves a nice win on tab close measurements for all platforms!

Bug 921038 – Move selected tab curve clip-paths into SVG-as-an-image so it is cached.

This was the final nail in the coffin for the TART regression on Windows XP. Before this bug, we were drawing the linear-gradients in the tab shape using CSS, and the clipping for the curve background colour was being pulled off using clip-path and an SVG curve defined in the browser.xul document.

In this bug, we moved from clipping a background to create the curve, to simply drawing a filled curve using SVG, and putting the linear-gradient for the texture in the “stroke” image (the image that overlays the border on the tab curve).

That by itself was not enough to win back the regression – but thankfully, Seth Fowler had been working on SVG caching, and with that cache backend, our patch here knocked the XP and Ubuntu regressions out! It also took out a chunk from OS X. Things were looking good!

OS X

Bug 924415 - Find out why setting chromemargin to 0,-1,-1,-1 is so expensive for TART on UX branch on OS X.

I don’t think I’ll ever forget this bug.

I had gotten my hands on a Mac Mini that (after some hardware modifications) matched the specs of our 10.6 Talos test machines. That would prove to be super useful, as I was easily able to reproduce the regression on that machine, and we could debug and investigate locally, without having to remote in to some loaner device.

With this machine, it didn’t take us long to identify the drawing of the tabs in the titlebar as the main culprit in the OS X regression. But the “why” eluded us for weeks.

It was clear I wasn’t going to be able to solve it on my own, so Jeff Muizelaar from the Graphics team joined in to help me.

We looked at OpenGL profiles, we looked at apitraces, we looked at profiles using the Gecko Profiler, and we looked at profiles from Instruments – the profiler that comes included with Xcode.

It seemed like the performance bottleneck was coming from the operating system, but we needed to prove it.

Jeff and I dug and dug. I remember going home one day, feeling pretty deflated by another day of getting nowhere with this bug, when as I was walking into my apartment, I got a phone call.

It was Jeff. He told me he’d found something rather interesting – when the titlebar of the browser overlapped the titlebar of another window, he was able to reproduce the regression. When it did not overlap, the regression went away.

Talos tests open a small window before they open any test browser windows. That little test window stays in the background, and is (from my understanding) a dispatch point for making talos tests occur. That little window has a titlebar, and when we opened new browser windows, the titlebars would overlap.

Jeff suggested I try modifying the TART test to move the browser windows down approximately 22px (the height of a standard OS X titlebar) so that the titlebars would no longer overlap. I set that up, triggered a bunch of test runs, and went to bed.

I wasn’t able to sleep. Around 4AM, I got out of bed to look at the results – SUCCESS! The regression had gone away! Jeff was right!

I slept like a baby the rest of the night.

We closed this bug as a WONTFIX due to it being way outside our control.

Comment 31 and onward in that bug are the ones that describe our findings.

Eat it TART, your tears are delicious

Those were the big regressions we fixed for TART. It was a long haul, but we got there – and in the end, it means faster and smoother tab animation for our users, which means a better experience – and it’s totally worth it.

I’m particularly proud of the work we did here, and I’m also really happy with the cross-team support and collaboration we had – from Performance, to Layout, to Graphics, to Front-end – it was textbook teamwork.

Here’s an e-mail I wrote about us beating TART.

Where did we end up?

After the TART regression was fixed, we were set to land on mozilla-central! We didn’t just land a more beautiful browser, we also landed a more performant one.

Noice.

Stay tuned for Part 5 where I talk about CART.

May 15, 2014 01:46 AM

May 12, 2014

Robert Kaiser

Hacking Day And Linuxwochen

In the last few weeks, I spent some time at developer events, which I always embrace because the "hallway tracks" are a constant flow of great information and discussions - and of course the official talks are often quite interesting as well! :)

First up was the Mozilla Hacking Day in Berlin on Saturday, April 26. I met Arpad Borsos, a fellow Mozillian from Vienna, at Vienna airport on Friday already, as we were incidentally hopping over from the Austrian to the German capital on the same plane. Later in Berlin, we met more Mozillians at the hotel and went out for dinner together.
At the actual Hacking Day, we had ~70 people streaming in for some introductory talks, and then people split up into hacking sessions and some more in-depth talks (like the one in the picture, where fellow "Desktop QA" team member Henrik talked about automated testing), which were followed by even more hands-on hacking. I spent most of the time talking to various people about different topics, showing off Firefox OS a bit (after I had reflashed my Geeksphone Peak and gotten it into a working state again during the talks) and working on my actual "hacking" achievement: together with Georg Fritzsche, I found out that an issue with my Lantea Maps app freezing was a WebGL-related bug in Gecko, found a stack, reported it as a new bug and set a needinfo flag for a gfx developer. (Yesterday, I found a hacky workaround so Lantea Maps will soon work again despite that bug.)
On Sunday, I also was able to squeeze in some sightseeing and photo-taking, marching through Berlin for ~3.5h before returning to the airport.

And last week, May 8-10, it was time for the yearly Linuxwochen conference in Vienna. As we do not have an organized group of Mozillians here, we didn't have a Mozilla booth, but I usually share the OpenStreetMap booth (that's why you'll spot that in the pictures), as I'm pretty active in the local community there, and just run around with Mozilla T-shirts and talk about Mozilla stuff in addition to OSM things. I also had submitted talks about Firefox OS and the Makey Makey, but as you'll see in my speaker profile, I was additionally "coerced" into a privacy and security panel discussion last-minute as well. I was put there more or less as a representative of Mozilla and, more generally, of those doing end-user software, and of course tried to drive home a few points about how strongly Mozilla works to respect user privacy, but also pointed out that it is an interesting field with tradeoffs against enabling features that people out there really want, and that finding a balance can be hard.
The Firefox OS talk was quite appreciated by people attending, and we had quite a few good questions asked there as well, and interest from people afterwards to actually see and try my FxOS phones - I remember how some had a hard time believing how well the ZTE Open worked with just 256MB of RAM and an all-web UI. :)
In the "hallway track", I did get a lot of questions about Mozilla on all three days, given my T-shirts clearly pointed out my affiliation with the project. Some of those were Firefox OS, mostly about strategy and availability, where I had to explain a few times how we target features phone converts in emerging markets for now. A number of the concerns and questions were around Australis, with quite a few of the very technical folks there not liking how we messed up their customizations and preferences, including tops-on-bottom or the add-ons bar, or how esp. our tabs looked "just like Chrome" now, but I even heard one or two comments on how awesome the new design was. Some of those I could ease with explanation and pointing to the Classic Theme Restorer add-on, but some of them are just unhappy (and it's not helpful that the very helpful Tour does not run when NoScript is installed and active). And then there were questions about our finances (the "Google dependency" issue) and about the fact that new Sync clashes with Master Password (which also protects casual "friends" from reading through passwords in your password manager), among others. Overall, a lot of talk about Mozilla, and I hope I could make most of those people feel better about us than before, wherever they stood then.

Both events cost me a lot of sleep and energy, but they feel like they were worth the investment for sure, and meeting and talking to people also gives me energy of a different form. ;-)

May 12, 2014 04:42 PM

May 10, 2014

Ludovic Hirlimann

The next major release of Thunderbird is around the corner and needs some love

We just released the first beta of Thunderbird 30. There will be two betas for 30 and probably 2 or more for 31. We need to start uncovering bugs now so that developers have time to fix things.

Now is the time to get the betas and use them as you do the current release, and file bugs. Make these bugs block our tracking bug: 1008543.

For the next beta we will need more people to do formal testing - we will use moztrap and eventbrite to track this. The more participants in this (and others during the 31 beta period), the higher the quality. Follow this blog or subscribe to the Thunderbird-tester mailing list if you wish to make 31 a great release.

Ludo for the QA team

May 10, 2014 10:16 AM

May 02, 2014

Jonathan Protzenko

Shutting down my (now redundant) comm-central clone on github

This is just a public service announcement: I'm shutting down the comm-central repository on github I've been maintaining since 2011. Please use the official version: they share the same hashes so it's just a matter of modifying the url in your .git/config. Plus, the official repo has all the cool tags baked in.

(Turns out that due to yet another hg repository corruption, the repo has been broken since early March. Since no one complained, I suspect no one's really using this anymore. I'm pretty sure the folks at Mozilla are much better at avoiding hg repository corruptions than I am, so here's the end of it!)

May 02, 2014 07:51 PM

April 30, 2014

Robert Kaiser

Finding an Openly Licensed JS Graph Library

As I mentioned at the end of the recent post on stability graphs, I was looking into doing JS-based, dynamic versions of those graphs. Those would always have current data without manually copying stuff around (as I've done previously) and would be available to more people in the Mozilla community.

That said, even though I tried previously to do some canvas magic to create graphs in JS, it's tedious to create all that code by oneself, so I set out to find a library that can help me.

My set of ideal requirements was:

So, I went on a web search and tried to find libraries that would fulfill my requirements. Interestingly, I saw a number of commercially licensed JS libraries floating around; I guess there's some business in creating graphs out there.

The list below is the libraries I took a closer look at (before you ask, flot is not on the list because it requires jQuery and therefore violates my "graphs only" and "lightweight" goals right away):

So none of those turned out to be completely ideal in light of my goals/preferences. That said, dygraphs looked decent enough, so I went with it in the end and have dynamic versions of my graphs implemented.

(Ping me on IRC if you want a link; I don't want to link them on the blog as the uninformed could come to wrong or hasty conclusions looking at the data.)

Of course, if you are looking for a graphs library for your project, you might choose completely differently, but maybe the list is helpful to find what fits your needs. ;-)

April 30, 2014 11:12 PM

Mike Conley

Electrolysis Code Spelunking: How links open new windows in Firefox

Hey. I’ve started hacking on Electrolysis bugs. I’m normally a front-end engineer working on Firefox desktop, but I’ve been temporarily loaned out to help get Electrolysis ready to be enabled by default on Nightly.

I’m working on bug 989501. Basically, when you click on a link that targets “_blank” or uses window.open, we open a new tab instead. That’s no good – assuming the user’s profile is set to allow it, we should open the link in a new window.

In order to fix this, I need a clearer picture of what happens in the Firefox platform when we click on one of these links.

This isn’t really a tutorial – I’m not going to go out of my way to explain much here. Think of this more as a public posting of my notes during my exploration.

So, here goes.

(Note that the code in this post was current as of revision 400a31da59a9 of mozilla-central, so if you’re reading this in the future, it’s possible that some stuff has greatly changed).

I know for a fact that once the link is clicked, we eventually call mozilla::dom::TabChild::ProvideWindow. I know this because of conversations I’ve had with smaug, billm and jdm in and out of Bugzilla, IRC, and meatspace.

Because I know this, I can hook up gdb to see how I get to that call. I have some notes here on how to hook up gdb to the content process of an e10s window.

Once that’s hooked up, I set a breakpoint on mozilla::dom::TabChild::ProvideWindow, and click on a link somewhere with target=”_blank”.
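In gdb terms that’s nothing fancy – just the standard breakpoint command, using the symbol we’re about to see at the top of the backtrace:

(gdb) b mozilla::dom::TabChild::ProvideWindow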

I hit my breakpoint, and I get a backtrace. Ready for it? Here we go:

#0  mozilla::dom::TabChild::ProvideWindow (this=0x109afb400, aParent=0x10b098820, aChromeFlags=4094, aCalledFromJS=false, aPositionSpecified=false, aSizeSpecified=false, aURI=0xffe, aName=@0x0, aFeatures=@0x0, aWindowIsNew=0x10b098820, aReturn=0x7fff5fbfb648) at TabChild.cpp:1201
#1  0x00000001018682e4 in nsWindowWatcher::OpenWindowInternal (this=0x10b05b540, aParent=0x10b098820, aUrl=<value temporarily unavailable, due to optimizations>, aName=<value temporarily unavailable, due to optimizations>, aFeatures=<value temporarily unavailable, due to optimizations>, aCalledFromJS=false, aDialog=<value temporarily unavailable, due to optimizations>, aNavigate=<value temporarily unavailable, due to optimizations>, _retval=<value temporarily unavailable, due to optimizations>) at nsWindowWatcher.cpp:601
#2  0x0000000101869544 in non-virtual thunk to nsWindowWatcher::OpenWindow2(nsIDOMWindow*, char const*, char const*, char const*, bool, bool, bool, nsISupports*, nsIDOMWindow**) () at nsWindowWatcher.cpp:417
#3  0x0000000100e5dc63 in nsGlobalWindow::OpenInternal (this=0x10b098800, aUrl=@0x7fff5fbfbf90, aName=@0x7fff5fbfc038, aOptions=@0x103d77320, aDialog=false, aContentModal=false, aCalleePrincipal=<value temporarily unavailable, due to optimizations>, aJSCallerContext=<value temporarily unavailable, due to optimizations>, aReturn=<value temporarily unavailable, due to optimizations>) at /Users/mikeconley/Projects/mozilla-central/dom/base/nsGlobalWindow.cpp:11498
#4  0x0000000100e5e3a4 in non-virtual thunk to nsGlobalWindow::OpenNoNavigate(nsAString_internal const&, nsAString_internal const&, nsAString_internal const&, nsIDOMWindow**) () at /Users/mikeconley/Projects/mozilla-central/dom/base/nsGlobalWindow.cpp:7463
#5  0x000000010184d99d in nsDocShell::InternalLoad (this=<value temporarily unavailable, due to optimizations>, aURI=0x113eed200, aReferrer=0x1134c0fe0, aOwner=0x114a69070, aFlags=0, aWindowTarget=0x10b098820, aLoadType=<value temporarily unavailable, due to optimizations>, aSHEntry=<value temporarily unavailable, due to optimizations>, aSourceDocShell=<value temporarily unavailable, due to optimizations>, aDocShell=<value temporarily unavailable, due to optimizations>, aRequest=<value temporarily unavailable, due to optimizations>) at /Users/mikeconley/Projects/mozilla-central/docshell/base/nsDocShell.cpp:9079
#6  0x0000000101855758 in nsDocShell::OnLinkClickSync (this=0x10b075000, aContent=0x112865eb0, aURI=0x113eed3c0, aTargetSpec=<value temporarily unavailable, due to optimizations>, aFileName=@0x106f27f10, aPostDataStream=0x0, aDocShell=<value temporarily unavailable, due to optimizations>, aRequest=<value temporarily unavailable, due to optimizations>) at /Users/mikeconley/Projects/mozilla-central/docshell/base/nsDocShell.cpp:12699
#7  0x0000000101857f85 in mozilla::Maybe<mozilla::AutoCxPusher>::~Maybe () at /Users/mikeconley/Projects/mozilla-central/obj-x86_64-apple-darwin12.5.0/dist/include/nsCxPusher.h:12499
#8  0x0000000101857f85 in nsCxPusher::~nsCxPusher () at /Users/mikeconley/Projects/mozilla-central/docshell/base/nsDocShell.cpp:41
#9  0x0000000101857f85 in nsCxPusher::~nsCxPusher () at /Users/mikeconley/Projects/mozilla-central/obj-x86_64-apple-darwin12.5.0/dist/include/nsCxPusher.h:66
#10 0x0000000101857f85 in OnLinkClickEvent::Run (this=<value temporarily unavailable, due to optimizations>) at /Users/mikeconley/Projects/mozilla-central/docshell/base/nsDocShell.cpp:12502
#11 0x0000000100084f60 in nsThread::ProcessNextEvent (this=0x106f245e0, mayWait=false, result=0x7fff5fbfc947) at nsThread.cpp:715
#12 0x0000000100023241 in NS_ProcessPendingEvents (thread=<value temporarily unavailable, due to optimizations>, timeout=20) at nsThreadUtils.cpp:210
#13 0x0000000100d41c47 in nsBaseAppShell::NativeEventCallback (this=0x1096e8660) at nsBaseAppShell.cpp:98
#14 0x0000000100cfdba1 in nsAppShell::ProcessGeckoEvents (aInfo=0x1096e8660) at nsAppShell.mm:388
#15 0x00007fff86adeb31 in __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ ()
#16 0x00007fff86ade455 in __CFRunLoopDoSources0 ()
#17 0x00007fff86b017f5 in __CFRunLoopRun ()
#18 0x00007fff86b010e2 in CFRunLoopRunSpecific ()
#19 0x00007fff8ad65eb4 in RunCurrentEventLoopInMode ()
#20 0x00007fff8ad65c52 in ReceiveNextEventCommon ()
#21 0x00007fff8ad65ae3 in BlockUntilNextEventMatchingListInMode ()
#22 0x00007fff8cce1533 in _DPSNextEvent ()
#23 0x00007fff8cce0df2 in -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] ()
#24 0x0000000100cfd266 in -[GeckoNSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] (self=0x106f801a0, _cmd=<value temporarily unavailable, due to optimizations>, mask=18446744073709551615, expiration=0x422d63c37f00000d, mode=0x7fff7205e1c0, flag=1 '\001') at nsAppShell.mm:165
#25 0x00007fff8ccd81a3 in -[NSApplication run] ()
#26 0x0000000100cfe32b in nsAppShell::Run (this=<value temporarily unavailable, due to optimizations>) at nsAppShell.mm:746
#27 0x000000010199b3dc in XRE_RunAppShell () at /Users/mikeconley/Projects/mozilla-central/toolkit/xre/nsEmbedFunctions.cpp:679
#28 0x00000001002a0dae in MessageLoop::AutoRunState::~AutoRunState () at message_loop.cc:229
#29 0x00000001002a0dae in MessageLoop::AutoRunState::~AutoRunState () at /Users/mikeconley/Projects/mozilla-central/ipc/chromium/src/base/message_loop.h:197
#30 0x00000001002a0dae in MessageLoop::Run (this=0x0) at message_loop.cc:503
#31 0x000000010199b0cd in XRE_InitChildProcess (aArgc=<value temporarily unavailable, due to optimizations>, aArgv=<value temporarily unavailable, due to optimizations>, aProcess=<value temporarily unavailable, due to optimizations>) at /Users/mikeconley/Projects/mozilla-central/toolkit/xre/nsEmbedFunctions.cpp:516
#32 0x0000000100000f1d in main (argc=<value temporarily unavailable, due to optimizations>, argv=0x7fff5fbff4d8) at /Users/mikeconley/Projects/mozilla-central/ipc/app/MozillaRuntimeMain.cpp:149

Oh my. Well, the good news is, we can chop off a good chunk of the lower half because that’s all message / event loop stuff. That’s going to be in every single backtrace ever, pretty much, so I can just ignore it. Here’s the more important stuff:

#0  mozilla::dom::TabChild::ProvideWindow (this=0x109afb400, aParent=0x10b098820, aChromeFlags=4094, aCalledFromJS=false, aPositionSpecified=false, aSizeSpecified=false, aURI=0xffe, aName=@0x0, aFeatures=@0x0, aWindowIsNew=0x10b098820, aReturn=0x7fff5fbfb648) at TabChild.cpp:1201
#1  0x00000001018682e4 in nsWindowWatcher::OpenWindowInternal (this=0x10b05b540, aParent=0x10b098820, aUrl=<value temporarily unavailable, due to optimizations>, aName=<value temporarily unavailable, due to optimizations>, aFeatures=<value temporarily unavailable, due to optimizations>, aCalledFromJS=false, aDialog=<value temporarily unavailable, due to optimizations>, aNavigate=<value temporarily unavailable, due to optimizations>, _retval=<value temporarily unavailable, due to optimizations>) at nsWindowWatcher.cpp:601
#2  0x0000000101869544 in non-virtual thunk to nsWindowWatcher::OpenWindow2(nsIDOMWindow*, char const*, char const*, char const*, bool, bool, bool, nsISupports*, nsIDOMWindow**) () at nsWindowWatcher.cpp:417
#3  0x0000000100e5dc63 in nsGlobalWindow::OpenInternal (this=0x10b098800, aUrl=@0x7fff5fbfbf90, aName=@0x7fff5fbfc038, aOptions=@0x103d77320, aDialog=false, aContentModal=false, aCalleePrincipal=<value temporarily unavailable, due to optimizations>, aJSCallerContext=<value temporarily unavailable, due to optimizations>, aReturn=<value temporarily unavailable, due to optimizations>) at /Users/mikeconley/Projects/mozilla-central/dom/base/nsGlobalWindow.cpp:11498
#4  0x0000000100e5e3a4 in non-virtual thunk to nsGlobalWindow::OpenNoNavigate(nsAString_internal const&, nsAString_internal const&, nsAString_internal const&, nsIDOMWindow**) () at /Users/mikeconley/Projects/mozilla-central/dom/base/nsGlobalWindow.cpp:7463
#5  0x000000010184d99d in nsDocShell::InternalLoad (this=<value temporarily unavailable, due to optimizations>, aURI=0x113eed200, aReferrer=0x1134c0fe0, aOwner=0x114a69070, aFlags=0, aWindowTarget=0x10b098820, aLoadType=<value temporarily unavailable, due to optimizations>, aSHEntry=<value temporarily unavailable, due to optimizations>, aSourceDocShell=<value temporarily unavailable, due to optimizations>, aDocShell=<value temporarily unavailable, due to optimizations>, aRequest=<value temporarily unavailable, due to optimizations>) at /Users/mikeconley/Projects/mozilla-central/docshell/base/nsDocShell.cpp:9079
#6  0x0000000101855758 in nsDocShell::OnLinkClickSync (this=0x10b075000, aContent=0x112865eb0, aURI=0x113eed3c0, aTargetSpec=<value temporarily unavailable, due to optimizations>, aFileName=@0x106f27f10, aPostDataStream=0x0, aDocShell=<value temporarily unavailable, due to optimizations>, aRequest=<value temporarily unavailable, due to optimizations>) at /Users/mikeconley/Projects/mozilla-central/docshell/base/nsDocShell.cpp:12699
#7  0x0000000101857f85 in mozilla::Maybe<mozilla::AutoCxPusher>::~Maybe () at /Users/mikeconley/Projects/mozilla-central/obj-x86_64-apple-darwin12.5.0/dist/include/nsCxPusher.h:12499
#8  0x0000000101857f85 in nsCxPusher::~nsCxPusher () at /Users/mikeconley/Projects/mozilla-central/docshell/base/nsDocShell.cpp:41
#9  0x0000000101857f85 in nsCxPusher::~nsCxPusher () at /Users/mikeconley/Projects/mozilla-central/obj-x86_64-apple-darwin12.5.0/dist/include/nsCxPusher.h:66
#10 0x0000000101857f85 in OnLinkClickEvent::Run (this=<value temporarily unavailable, due to optimizations>) at /Users/mikeconley/Projects/mozilla-central/docshell/base/nsDocShell.cpp:12502

That’s a bit more manageable.

So we start inside something called a docshell. I’ve heard that term bandied about a lot, and I can’t say I’ve ever been too sure what it means, or what a docshell does, or why I should care.

I found some documents that make things a little bit clearer.

Basically, my understanding is that a docshell is the thing that takes incoming stuff from some URI (this could be web content, or it might be a XUL document that’s loading the browser UI…) and connects it to the things that make stuff show up on your screen.

So, pretty important.

It seems to be a place where some utility methods and functions go as well, so it’s kind of this abstract thing that seems to have multiple purposes.

But the most important thing for the purposes of this post is this: every time you load a document, you have a docshell taking care of it. All of these docshells are structured in a tree which is rooted with a docshell owner. This will come into play later.
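To picture it, here’s roughly my mental model of that tree for a single browser window – a sketch, not an exhaustive diagram:

root docshell (the chrome docshell for the XUL document driving the browser window)
  └─ content docshell (the document loaded in a tab)
      └─ iframe docshell (one per frame inside that document, and so on)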

So one thing that a docshell does is notice when a link was clicked inside of its content. That’s nsDocShell.cpp’s OnLinkClickEvent::Run, and that eventually makes its way over to nsDocShell::OnLinkClickSync.

After some initial checks and balances to ensure that this thing really is a link we want to travel to, we get sent off to nsDocShell::InternalLoad.

Inside there, there’s some more checking… there’s a policy check to make sure we’re allowed to open a link. Lots of security going on. Eventually I see this:

if (aWindowTarget && *aWindowTarget)

That’s good. aWindowTarget maps to the target=”_blank” attribute in the anchor. So we’ll be entering this block.

    if (aWindowTarget && *aWindowTarget) {
        // Locate the target DocShell.
        nsCOMPtr<nsIDocShellTreeItem> targetItem;
        rv = FindItemWithName(aWindowTarget, nullptr, this,
                              getter_AddRefs(targetItem));

So now we’re looking for the right docshell to load this new document in. That makes sense – if you have a link where target=”foo”, subsequent links from the same origin targeted at “foo” will open in the same window or tab or what have you. So we’re checking to see if we’ve opened something with the name inside aWindowTarget already.

So now we’re in nsDocShell::FindItemWithName, and I see this:

        else if (name.LowerCaseEqualsLiteral("_blank"))
        {
            // Just return null.  Caller must handle creating a new window with
            // a blank name himself.
            return NS_OK;
        }

Ah hah, so target=”_blank”, as we already knew, is special-cased – and this is where it happens. There’s no existing docshell for _blank because we know we’re going to be opening a new window (or tab if the user has preffed it that way). So we don’t return a pre-existing docshell.

So we’re back in nsDocShell::InternalLoad.

        rv = FindItemWithName(aWindowTarget, nullptr, this,
                              getter_AddRefs(targetItem));
        NS_ENSURE_SUCCESS(rv, rv);

        targetDocShell = do_QueryInterface(targetItem);
        // If the targetDocShell doesn't exist, then this is a new docShell
        // and we should consider this a TYPE_DOCUMENT load
        isNewDocShell = !targetDocShell;

Ok, so now targetItem is nullptr, targetDocShell is also nullptr, and so isNewDocShell is true.

There seems to be more policy checking going on in InternalLoad after this… but eventually, I see this:

   if (aWindowTarget && *aWindowTarget) {
        // We've already done our owner-inheriting.  Mask out that bit, so we
        // don't try inheriting an owner from the target window if we came up
        // with a null owner above.
        aFlags = aFlags & ~INTERNAL_LOAD_FLAGS_INHERIT_OWNER;
        
        bool isNewWindow = false;
        if (!targetDocShell) {
            // If the docshell's document is sandboxed, only open a new window
            // if the document's SANDBOXED_AUXILLARY_NAVIGATION flag is not set.
            // (i.e. if allow-popups is specified)
            NS_ENSURE_TRUE(mContentViewer, NS_ERROR_FAILURE);
            nsIDocument* doc = mContentViewer->GetDocument();
            uint32_t sandboxFlags = 0;

            if (doc) {
                sandboxFlags = doc->GetSandboxFlags();
                if (sandboxFlags & SANDBOXED_AUXILIARY_NAVIGATION) {
                    return NS_ERROR_DOM_INVALID_ACCESS_ERR;
                }
            }

            nsCOMPtr<nsPIDOMWindow> win =
                do_GetInterface(GetAsSupports(this));
            NS_ENSURE_TRUE(win, NS_ERROR_NOT_AVAILABLE);

            nsDependentString name(aWindowTarget);
            nsCOMPtr<nsIDOMWindow> newWin;
            nsAutoCString spec;
            if (aURI)
                aURI->GetSpec(spec);
            rv = win->OpenNoNavigate(NS_ConvertUTF8toUTF16(spec),
                                     name,          // window name
                                     EmptyString(), // Features
                                     getter_AddRefs(newWin));

So we check again to see if we’re targeted at something, and check if we’ve found a target docshell for it. We hadn’t, so we do some security checks, and then … what the hell is nsPIDOMWindow? I’m used to things being called nsIBlahBlah, but now nsPIBlahBlah… what does the P mean?

It took some asking around, but I eventually found out that the P is supposed to be for Private – as in, this is a private XPIDL interface, and non-core embedders should stay away from it.

Ok, and we also see do_GetInterface. This is not the same as QueryInterface, believe it or not. The difference is subtle, but basically it’s this: QueryInterface says “you implement X, but I think you also implement Y. If you do, please return a pointer to yourself that makes you seem like a Y.” GetInterface is different – GetInterface says “I know you know about something that implements Y. It might be you, or more likely, it’s something you’re holding a reference to. Can I get a reference to that please?”. And if successful, it returns it. Here’s more documentation about GetInterface.

It’s a subtle but important difference.
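To make the difference concrete, here’s a minimal sketch – someSupports is a made-up variable for illustration, but both helpers are the standard XPCOM idioms in question:

// QueryInterface: "you're an nsISupports - are you *also* a docshell?"
// If so, this returns the very same object, now viewed as an nsIDocShell.
nsCOMPtr<nsIDocShell> docShell = do_QueryInterface(someSupports);

// GetInterface: "get me a window you know about" - possibly the object
// itself, but more likely something it holds a reference to.
nsCOMPtr<nsPIDOMWindow> window = do_GetInterface(someSupports);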

So this docshell knows about a window, and we’ve now got a handle on that window using the private interface nsPIDOMWindow. Neat.

So eventually, we call OpenNoNavigate on that nsPIDOMWindow. That method is pretty much like nsIDOMWindow::Open, except that OpenNoNavigate doesn’t send the window anywhere – it just returns it so that the caller can send it to a URI.

Through the magic of do_GetInterface, nsDocShell::GetInterface, EnsureScriptEnvironment, and NS_NewScriptGlobalObject, I know that the nsPIDOMWindow is being implemented by nsGlobalWindow, and that’s where I should go to find the OpenNoNavigate implementation.

So off we go!

nsGlobalWindow::OpenNoNavigate just seems to forward the call, after some argument setting, to nsGlobalWindow::OpenInternal, like this:

  return OpenInternal(aUrl, aName, aOptions,
                      false,          // aDialog
                      false,          // aContentModal
                      true,           // aCalledNoScript
                      false,          // aDoJSFixups
                      false,          // aNavigate
                      nullptr, nullptr,  // No args
                      GetPrincipal(),    // aCalleePrincipal
                      nullptr,           // aJSCallerContext
                      _retval);

Having a glance around at the rest of the nsGlobalWindow::Open[foo] methods, it looks like they all call into OpenInternal. It’s the big-mamma opening method.

This method does a few things, including making sure that we’re not being abused by web content that’s trying to spam the user with popups.

Eventually, we get to this:

      rv = pwwatch->OpenWindow2(this, url.get(), name_ptr, options_ptr,
                                /* aCalledFromScript = */ false,
                                aDialog, aNavigate, aExtraArgument,
                                getter_AddRefs(domReturn));

and return the domReturn pointer back to our caller after a few more checks. Remember that the caller is going to take this new window and navigate it to some URI.

Ok, so, pwwatch. What is that? Well, that appears to be a private interface to nsWindowWatcher, which gives us access to the OpenWindow2 method.
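For the curious, getting hold of that private interface looks roughly like this – paraphrased from memory rather than quoted, but both the contract ID and the QI dance are the standard ones:

// Grab the window watcher service, then ask for its private sibling interface.
nsCOMPtr<nsIWindowWatcher> wwatch = do_GetService(NS_WINDOWWATCHER_CONTRACTID);
nsCOMPtr<nsPIWindowWatcher> pwwatch = do_QueryInterface(wwatch);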

After prepping some arguments, much like nsGlobalWindow::OpenNoNavigate did, we forward the call over to nsWindowWatcher::OpenWindowInternal.

And now we’re almost done – we’re almost at the point where we’re actually going to open a window!

Some key things need to happen though. First, we do this:

nsCOMPtr<nsIDocShellTreeOwner>  parentTreeOwner;  // from the parent window, if any
...
GetWindowTreeOwner(aParent, getter_AddRefs(parentTreeOwner));

So what that does is try to get the docshell owner of the docshell that’s attempting to open the window (and that’d be the docshell that we clicked the link in).

After a few more things, we check to see if there’s an existing window with that target name which we can re-use:

  // try to find an extant window with the given name
  nsCOMPtr<nsIDOMWindow> foundWindow = SafeGetWindowByName(name, aParent);
  GetWindowTreeItem(foundWindow, getter_AddRefs(newDocShellItem));

And if so, we set it to newDocShellItem.

After some more security stuff, we check to see if newDocShellItem exists. Because our target was “_blank” – which, as we saw back in nsDocShell::FindItemWithName, is special-cased to never match an existing window – nothing was found, and newDocShellItem is null.

Because it doesn’t exist, we know we’re opening a brand new window!

More security things seem to happen, and then we get to the part that I’m starting to focus on:

      nsCOMPtr<nsIWindowProvider> provider = do_GetInterface(parentTreeOwner);
      if (provider) {
        NS_ASSERTION(aParent, "We've _got_ to have a parent here!");

        nsCOMPtr<nsIDOMWindow> newWindow;
        rv = provider->ProvideWindow(aParent, chromeFlags, aCalledFromJS,
                                     sizeSpec.PositionSpecified(),
                                     sizeSpec.SizeSpecified(),
                                     uriToLoad, name, features, &windowIsNew,
                                     getter_AddRefs(newWindow));

We ask the parentTreeOwner to get us something that it knows about that implements nsIWindowProvider. In the Electrolysis / content process case, that’d be TabChild. In the normal, non-Electrolysis case, that’s nsContentTreeOwner.

The nsIWindowProvider is the thing that we’ll use to get a new window from! So we call ProvideWindow on it to give us a pointer to a new nsIDOMWindow, assigned to newWindow.

Here’s TabChild::ProvideWindow:

NS_IMETHODIMP
TabChild::ProvideWindow(nsIDOMWindow* aParent, uint32_t aChromeFlags,
                        bool aCalledFromJS,
                        bool aPositionSpecified, bool aSizeSpecified,
                        nsIURI* aURI, const nsAString& aName,
                        const nsACString& aFeatures, bool* aWindowIsNew,
                        nsIDOMWindow** aReturn)
{
    *aReturn = nullptr;

    // If aParent is inside an <iframe mozbrowser> or <iframe mozapp> and this
    // isn't a request to open a modal-type window, we're going to create a new
    // <iframe mozbrowser/mozapp> and return its window here.
    nsCOMPtr<nsIDocShell> docshell = do_GetInterface(aParent);
    if (docshell && docshell->GetIsInBrowserOrApp() &&
        !(aChromeFlags & (nsIWebBrowserChrome::CHROME_MODAL |
                          nsIWebBrowserChrome::CHROME_OPENAS_DIALOG |
                          nsIWebBrowserChrome::CHROME_OPENAS_CHROME))) {

      // Note that BrowserFrameProvideWindow may return NS_ERROR_ABORT if the
      // open window call was canceled.  It's important that we pass this error
      // code back to our caller.
      return BrowserFrameProvideWindow(aParent, aURI, aName, aFeatures,
                                       aWindowIsNew, aReturn);
    }

    // Otherwise, create a new top-level window.
    PBrowserChild* newChild;
    if (!CallCreateWindow(&newChild)) {
        return NS_ERROR_NOT_AVAILABLE;
    }

    *aWindowIsNew = true;
    nsCOMPtr<nsIDOMWindow> win =
        do_GetInterface(static_cast<TabChild*>(newChild)->WebNavigation());
    win.forget(aReturn);
    return NS_OK;
}

The docshell->GetIsInBrowserOrApp() is basically asking “are we b2g?”, to which the answer is “no”, so we skip that block, and go right for CallCreateWindow.

CallCreateWindow is using the IPC library to communicate with TabParent in the UI process, which has a corresponding function called AnswerCreateWindow. Here it is:

bool
TabParent::AnswerCreateWindow(PBrowserParent** retval)
{
    if (!mBrowserDOMWindow) {
        return false;
    }

    // Only non-app, non-browser processes may call CreateWindow.
    if (IsBrowserOrApp()) {
        return false;
    }

    // Get a new rendering area from the browserDOMWin.  We don't want
    // to be starting any loads here, so get it with a null URI.
    nsCOMPtr<nsIFrameLoaderOwner> frameLoaderOwner;
    mBrowserDOMWindow->OpenURIInFrame(nullptr, nullptr,
                                      nsIBrowserDOMWindow::OPEN_NEWTAB,
                                      nsIBrowserDOMWindow::OPEN_NEW,
                                      getter_AddRefs(frameLoaderOwner));
    if (!frameLoaderOwner) {
        return false;
    }

    nsRefPtr<nsFrameLoader> frameLoader = frameLoaderOwner->GetFrameLoader();
    if (!frameLoader) {
        return false;
    }

    *retval = frameLoader->GetRemoteBrowser();
    return true;
}

So after some checks, we call mBrowserDOMWindow’s OpenURIInFrame with (among other things) nsIBrowserDOMWindow::OPEN_NEWTAB. So that’s why we’ve got a new tab opening instead of a new window.

mBrowserDOMWindow is a reference to this thing implemented in browser.js:

function nsBrowserAccess() { }

nsBrowserAccess.prototype = {
  QueryInterface: XPCOMUtils.generateQI([Ci.nsIBrowserDOMWindow, Ci.nsISupports]),

  _openURIInNewTab: function(aURI, aOpener, aIsExternal) {
    let win, needToFocusWin;

    // try the current window.  if we're in a popup, fall back on the most recent browser window
    if (window.toolbar.visible)
      win = window;
    else {
      let isPrivate = PrivateBrowsingUtils.isWindowPrivate(aOpener || window);
      win = RecentWindow.getMostRecentBrowserWindow({private: isPrivate});
      needToFocusWin = true;
    }

    if (!win) {
      // we couldn't find a suitable window, a new one needs to be opened.
      return null;
    }

    if (aIsExternal && (!aURI || aURI.spec == "about:blank")) {
      win.BrowserOpenTab(); // this also focuses the location bar
      win.focus();
      return win.gBrowser.selectedBrowser;
    }

    let loadInBackground = gPrefService.getBoolPref("browser.tabs.loadDivertedInBackground");
    let referrer = aOpener ? makeURI(aOpener.location.href) : null;

    let tab = win.gBrowser.loadOneTab(aURI ? aURI.spec : "about:blank", {
                                      referrerURI: referrer,
                                      fromExternal: aIsExternal,
                                      inBackground: loadInBackground});
    let browser = win.gBrowser.getBrowserForTab(tab);

    if (needToFocusWin || (!loadInBackground && aIsExternal))
      win.focus();

    return browser;
  },

  openURI: function (aURI, aOpener, aWhere, aContext) {
    ... (removed for brevity)
  },

  openURIInFrame: function browser_openURIInFrame(aURI, aOpener, aWhere, aContext) {
    if (aWhere != Ci.nsIBrowserDOMWindow.OPEN_NEWTAB) {
      dump("Error: openURIInFrame can only open in new tabs");
      return null;
    }

    var isExternal = (aContext == Ci.nsIBrowserDOMWindow.OPEN_EXTERNAL);
    let browser = this._openURIInNewTab(aURI, aOpener, isExternal);
    if (browser)
      return browser.QueryInterface(Ci.nsIFrameLoaderOwner);

    return null;
  },

  isTabContentWindow: function (aWindow) {
    return gBrowser.browsers.some(function (browser) browser.contentWindow == aWindow);
  },

  get contentWindow() {
    return gBrowser.contentWindow;
  }
}

So nsBrowserAccess’s openURIInFrame only supports opening things in new tabs, and then it just calls _openURIInNewTab on itself, which does the job of returning the tab’s remote browser after the tab is opened.

I might follow this up with a post about how nsContentTreeOwner opens a window in the non-Electrolysis case, and how we might abstract some of that out for re-use here. We’ll see.

And that’s about it. Hopefully this is useful to future spelunkers.

April 30, 2014 08:14 PM

Calendar

Icedove and Iceowl-extension compatibility

I have noticed a few reports that Lightning 2.6.4 is not working with the Debian variant of Thunderbird, called Icedove. Together with one of our users (thanks Arie!) this was reported to the Debian package maintainers.

There are slight differences in how Icedove and Thunderbird are compiled; the result is that the required Linux symbol versions differ. The maintainers are working on fixing this. In the meantime, all you need to do is make sure you are using only one source for the packages:
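The gist – my paraphrase, not the package maintainers' exact wording – is that the mail client and the calendar add-on should come from the same place. If your mail client is Debian's icedove package, install the calendar add-on from the same Debian archive, for example:

apt-get install icedove iceowl-extension

Conversely, if you run Thunderbird as downloaded from mozilla.org, install Lightning from addons.mozilla.org rather than the Debian package.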

Unfortunately, this difference in compilation has cost us some negative reviews. If everything is working for you or this post has helped you to use the right versions, please show some love and write a review ♥.


April 30, 2014 09:49 AM

April 25, 2014

Mike Conley

Electrolysis: Debugging Child Processes of Content for Make Benefit Glorious Browser of Firefox

Here’s how I’m currently debugging Electrolysis stuff on OS X using gdb. It involves multiple terminal windows. I live with that.

# In Terminal Window 1, I execute my Firefox build with MOZ_DEBUG_CHILD_PROCESS=1.
# That environment variable makes it so that the parent process spits out the child
# process ID as soon as it forks out. I also use my e10s profile so as to not muck up
# my default profile.

MOZ_DEBUG_CHILD_PROCESS=1 ./mach run -P e10s

# So, now my Firefox is spawned up and ready to go. I have
# browser.tabs.remote.autostart set to "true" in my about:config, which means I'm
# using out-of-process tabs by default. That means that right away, I see the
# child process ID dumped into the console. Maybe you get the same thing if
# browser.tabs.remote.autostart is false. I haven't checked.

CHILDCHILDCHILDCHILD
  debug me @ 45326

# ^-- so, this is what comes out in Terminal Window 1.

So, the next step is to open another terminal window. This one will connect to the parent process.

# Maybe there are smarter ways to find the firefox process ID, but this is what I
# use in my new Terminal Window 2.
ps aux | grep firefox

# And this is what I get back:

mikeconley     45391  17.2  5.3  3985032 883932   ??  S     2:39pm   1:58.71 /Applications/FirefoxAurora.app/Contents/MacOS/firefox
mikeconley     45322   0.0  0.4  3135172  69748 s000  S+    2:36pm   0:06.48 /Users/mikeconley/Projects/mozilla-central/obj-x86_64-apple-darwin12.5.0/dist/Nightly.app/Contents/MacOS/firefox -no-remote -foreground -P e10s
mikeconley     45430   0.0  0.0  2432768    612 s002  R+    2:44pm   0:00.00 grep firefox
mikeconley     44878   0.0  0.0        0      0 s000  Z    11:46am   0:00.00 (firefox)

# That second one is what I want to attach to. I can tell, because the executable
# path lies within my local build's objdir. The first row is my main Firefox I just
# use for work browsing. I definitely don't want to attach to that. The third line
# is just me looking for the process with grep. Not sure what that last one is
# (though the Z in its state column suggests a zombie left over from an earlier run).

# I use sudo to attach to the parent because otherwise, OS X complains about permissions
# for process attachment. I attach to the parent like this:

sudo gdb firefox 45322

# And now I have a gdb for the parent process. Easy peasy.

And finally, to debug the child, I open yet another terminal window.

# That process ID that I got from Terminal Window 1 comes into play now.

sudo gdb firefox 45326

# Boom - attached to child process now.

Setting breakpoints for things like TabChild::foo or TabParent::bar can be done like this:

# In Terminal Window 3, attached to the child:

b mozilla::dom::TabChild::foo

# In Terminal Window 2, attached to the parent:

b mozilla::dom::TabParent::bar
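From there it’s standard gdb fare – continue, trigger the code path in the browser, and inspect. An illustrative session (the foo symbol and the output shape are just for flavour; your output will differ):

(gdb) c
...now click a link, open a tab, or whatever exercises your breakpoint...
Breakpoint 1, mozilla::dom::TabChild::foo (...) at TabChild.cpp:...
(gdb) bt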

And now we’re cookin’.

April 25, 2014 07:26 PM

April 24, 2014

Rumbling Edge - Thunderbird

2014-04-23 Calendar builds

Common (excluding Website bugs)-specific: (34)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

April 24, 2014 07:00 AM

2014-04-23 Thunderbird comm-central builds

Thunderbird-specific: (37)

MailNews Core-specific: (18)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

April 24, 2014 06:59 AM

April 23, 2014

Calendar

The 2014 Google Summer of Code Students are here!

I am particularly excited to announce that this year the Calendar Project has received two slots in Google Summer of Code 2014. Both projects target our backend code. This means users won’t have a chance to complain about user interface changes and instead will be blown away by performance and interoperability improvements.

I would like to take a few minutes to introduce our awesome new students to the community, please join me in giving them a warm welcome!

Reid Anderson: Improve Calendar Provider Backends

This project is about performance and stability for our calendar storage. Here is what Reid has to say:

I have been a student at the University of Minnesota since 2012 studying Chemistry and Computer Science. Outside of the classroom, I spend a lot of my time both watching and playing a variety of sports. I also enjoy reading, talking to friends, or playing a quick game of Civilization. I heard about Google Summer of Code before I entered college, and participating in the program had always been a goal of mine.

I was introduced to the Mozilla community when I started submitting patches to Songbird, a desktop media manager built on the Mozilla framework. Throughout the entire community I saw a consistent message of an open web powered by open technology and open software. This is something that I am excited to be a part of, and I am looking forward to contributing. My project is to improve the cached mode for online calendar providers to the point where it can be used as the default setting. This should allow Lightning to function effectively in an offline environment, while also bringing significant performance improvements. Hopefully these will be useful contributions to the community, and I’m looking forward to getting started.

Isn’t that wonderful? I’m particularly excited about the performance improvements this will bring. Further project updates will be posted on his blog, and we will of course post major ones here too.

Malintha Fernando: Update Invitations to the latest Specification

This project will not only improve our invitations support, it will also guard us against future regressions through more tests. Here is an excerpt from Malintha’s blog post:

My name is Malintha Fernando and I am a student developer from Sri Lanka, currently studying at the University of Moratuwa. I started contributing to Mozilla some months back (still got a lot to learn) as my first contribution to open source, and I am glad to be a part of the Lightning project in GSoC 2014.

The objective of this project is to improve Lightning’s scheduling system by updating the available features to the latest RFC specifications. As we know, most of Lightning’s implementation was done referring to draft version 4 of RFC 6638, so there are some features lagging behind the final RFC document.

Do you remember the mess we had when 2.6.x was released? At least one of the bugs we had to fix quickly was a regression in the invitations code. With Malintha’s help this won’t happen again!

What’s next?

According to Google, we are currently in the “Community Bonding Period”. This means we have a little time to set things up and make preparations. Coding officially begins on May 19th. You can follow progress on the projects as mentioned above, and I will also blog about major updates here as we get closer to completion. Let’s have some fun with this and continue to make Lightning better. It’s about time!

April 23, 2014 07:46 PM

April 21, 2014

Instantbird

Google Summer of Code 2014 has Commenced!

Instantbird again has the pleasure of participating in Google Summer of Code under the Mozilla umbrella. In the past we’ve had a variety of exciting projects and this year is no different. Three students will be working with us this summer:

Saurabh Anand (sawrubh), mentored by Patrick Cloke (clokep), will aim to add support for reliable file transfer to Instantbird using FileLink as a fallback to standard file transfer.

Mayank Kumar (mayanktg), mentored by Benedikt P. (Mic), will be adding voice and video support to Instantbird by integrating WebRTC for XMPP. WebRTC makes it easy for us to have real-time communication without the use of additional plugins.

Nihanth Subramanya (nhnt11), who last year added the “awesometab”, will be looking to improve loading of conversations and history under the guidance of aleth. He will work on adding the ability to search across all logs of a contact and on loading the previous context of a conversation when scrolling (“infinite scroll”).

Please feel free to stop by #instantbird on irc.mozilla.org to say hello and congratulate our students! Thanks again to Mozilla for allowing us to participate in Google Summer of Code with them!

April 21, 2014 07:00 PM

April 07, 2014

Mike Conley

Much Ado About Brendan (or As I’ve Seen It)

Since Brendan Eich’s resignation, I’ve been struggling to articulate what I think and feel about the matter. It’s been difficult. I haven’t been able to find what I wanted to say. Many other better, smarter, and more qualified Mozillians have written things about this, and I was about to let it go. I didn’t just want to say “me too”.

I felt I had nothing of substance to contribute. I feebly wrote something about Brendan Eich and the Kobayashi Maru, but it became a rambling mess, and the analogy fell apart quite quickly. I was about to call it quits on contributing my thoughts.

And then this post happened.

Don’t ask me where this came from. A muse woke me up in the night to write it (it’s just past 4AM for crying out loud – muse, let me sleep). Maybe through the lens of this nonsense, some real sense will prevail. I’m not hopeful, but this muse is nodding emphatically (and grinning like a lunatic).

Please believe that I’m not at all trying to trivialize, oversimplify, or make light of the events of the past few weeks by writing this. I’m just trying to understand it, and view it with a looking glass I have at least a little familiarity with.

And maybe it’s mostly catharsis.

I also apologize that it’s not really told like a story from the Bard. I think that’d be too long winded (no offense, Shakey). I’m pretty sure the narrator / stage directions have the most lines. It’s actually quite criminal.

I also want to point out that the only “real world names” in this little travesty are Brendan Eich’s, Jay Sullivan’s, and Mitchell Baker’s. The rest are from the world of Shakespeare.

And I also apologize that it’s not in iambic pentameter – that’d probably be more appropriate, but I have neither the wit nor the patience to pull this off with that much verisimilitude.

Oooh! Verisimilitude! Fancy words! Enough apologies, let’s get started.

Much Ado About Brendan (or As I’ve Seen It)

Prologue

Venice, Italy. Sometime during the Renaissance. This glorious city is composed of many families – the Montagues, the Capulets, the Macbeths, the MacDuffs, the Aguecheeks, the Fortinbrases, the Whitmores, and many, many more. Too many to name or count.

Many of these families argue and disagree about things. There’s almost always one thing that one family does or thinks that another family just cannot abide by.

It is in this turbulent city of families that we find The Merchant’s Building. The Merchants of Venice are selling their wares, lending or selling books, playing music, and much more – and people are constantly streaming in and out. It’s a marketplace of endless possibility.

In one section of The Merchant’s Building, is the Mozilla booth. Mozilla does and makes many things – but it’s probably best known for its Firefox jewelry. Mozilla is one of a small number of merchants giving away jewelry – and jewelry, in this building, is special: the more people wear your jewelry, the more of a voice you have at the Merchant’s Weekly Meeting, where the rules of the building are written and refined.

So what is special about this Mozilla merchant? Why should we wear their jewelry? There are certainly other merchants giving away jewelry a few booths down. What does Mozilla bring to the table?

For one thing, the jewelry is beautiful. And it makes you walk faster. And it’s got the latest features. And it makes it harder for sketchy people to follow you. And it doesn’t have a built-in tracking device recording which merchants you’re visiting. And you can add cool charms to it, and make it look exactly how you want it.

And another thing that’s unique to the Mozilla booth is that they’re composed of members of every single family in Venice. Every single family has at least one member working in the Mozilla booth. And what’s more – a bunch of these workers are volunteering their time and efforts to make this stuff!

Why? Why do they volunteer? And why do these family members work side by side with people their families might balk at, or sneer at?

Well, in the very center of the Mozilla booth, overhanging the whole thing, is… The Mission. The Mission is the set of guiding principles upon which the Mozilla booth operates. This is what these family members bury their gauntlets for. They work, sweat and bleed side by side for this mission. This is their connective tissue. This is what guides them when they vote and argue for things at the Merchant’s Weekly Meeting.

The other truly unique thing about the Mozilla booth is that there are no walls to it! You can walk right in, and watch the craftspeople make jewelry! Heck, you can sit right down at a bench and somebody will show you how to make some yourself. They’ll guide you, and they’ll critique you, and soon, somebody will be wearing a piece of jewelry that you made.

The greatest debates also occur within the Mozilla booth. People stand on soap boxes and give their opinions about jewelry, or other merchandise – or merchandise practices. People say what they think out loud, and perhaps print it on a t-shirt and wear it. Sometimes, discussions get heated, but level thinking usually prevails because these Mozillians are an unusually bright bunch.

ACT I

There is a leadership selection underway. Someone needs to be the Chief of Business Affairs (or CBA) in the Mozilla booth. The current chief, Jay, has been holding the position as an interim chief, and the Board of Business Affairs is trying to select someone to take the position permanently.

Two members of this board already have their bags packed – for a while now, they’ve been neglecting other interests of theirs, and after this chief is selected, they feel they need to do other things.

Enter Brendan Eich. Brendan Eich is chief craftsperson of the makers of jewelry in the Mozilla booth. He’s a brilliant and widely respected craftsperson himself, having invented some of the amazing techniques that are used by all serious jewelry makers. He is also one of the founders of the Mozilla booth, having set it up with Mitchell Baker.

The Board of Business Affairs selects Brendan to be the next Chief of Business Affairs.

They announce this, and there is much applause! People clap Brendan on the back. Many craftspeople are pleased that one of their own will be in charge.

The two board members, as they’ve agreed to, take their bags, salute, and walk off out of the booth and on to other things.

A third board member leaves as well, but for reasons not related to what I describe below.

Suddenly, several Montagues and Montague supporters in the Mozilla booth grow concerned. They recall that several years ago, Brendan had donated $1000 to a law that supported Capulet values – a law which impacted their rights. The Montagues and Montague supporters grow concerned that someone who supports this Capulet law is not fit to be Chief of a booth that houses all of the families, Montagues included.

Several of these Montagues raise these concerns out loud. This is not unusual in the Mozilla booth, as most concerns are raised out loud – and, as usual, debate begins. Brendan states that he will 100% abide by the Mozilla participation guidelines and, what’s more, begins supporting a project that a Montague in the Mozilla booth has been working on – to bring more Montagues into the booth.

Vigorous debate continues, as is the Mozilla booth custom.

However, as the booth lets anybody in, and the debate can be heard outside of the booth, several Montagues and Montague supporters hear these concerns and start passing the message along to one another – a Capulet has been selected to be the CBA!

Many of these Montagues are reasonable, and say and write reasonable arguments about why they are concerned, and why Brendan may not be the right choice as CBA.

ACT II

A few meters away, the Cupid booth overhears all of this concern from the Montagues. Perhaps they really are Montague supporters (or, more likely, they just wanted to perk up business), but they suddenly decide to take a stand. People who try to come into their booth wearing Firefox jewelry have to read a big sign that tells them why the Cupid booth believes that restricting the rights of Montagues is terrible, and that the Mozilla booth is terrible for making a Capulet the CBA. They tell the people wearing Firefox jewelry that they should probably wear other things.

And so some people start to take off their Firefox jewelry. Some Montagues take it off angrily, and smash it into the ground – stomping it with their feet, creating a big dust cloud.

Enter Iago and his team of writers. There are many writers and story-sellers in the Merchant’s Building, but Iago is one of those writers who just wants people to listen to him. He likes to twist words and make things up, or to insinuate things that are not true. He sees the board members leaving the Mozilla booth and concocts some headlines, insinuating that they left in protest of Brendan’s support of the Capulet laws. He also writes about how all of the Mozillians in the booth were not supporting Brendan’s appointment as CBA (which is not true – it’s true that some were concerned and questioned the wisdom of his appointment, but certainly not all). He writes and he writes, and his messengers pass copies and leaflets around. Montagues and Montague supporters read these leaflets, or hear people talking about them, and they grow very concerned. More Montagues start to take off their Firefox jewelry.

Some Montagues start to engage with Mozillians and try to figure out what is happening. As always, each family has calm and reasonable people to converse with – and that’s always welcome in the Mozilla booth.

However, every family also has their groundlings. The groundlings are the members of a family who are always looking for a fight. Always looking for blood. Always hoping an actor will forget their lines, and will shout distracting things at them to make it happen. They always have a bag of rotten fruit and vegetables with them to throw. Some of them just like to make trouble.

Every family has their groundlings. You’ve probably met some yourself.

The groundlings start to hear these rumors that Iago has been spreading around, copied and recopied, distorted and mutilated – and they see the signs at the Cupid booth.

And they rush the Mozilla booth! They start throwing rotten fruit and vegetables, and they tear off their Firefox jewelry, and swear to never wear it again! They gnash their teeth, and they rip out their own hair in a rage, and they scream and yell and make so much noise – it’s almost impossible for the craftspeople in the Mozilla booth to work!

A tempest of Montague rage was upon the Mozilla booth.

ACT III

After several hours of this, Brendan addresses the crowd outside, and speaks to some storytellers (Iago and his team are among them – he always is).

They ask him if he renounces Capulet ways, or if he will apologize for the Montague rights that were impacted by the Capulet law that he helped fund.

And Brendan says something along the lines of “I don’t think that’s helpful to discuss. I don’t think that’s relevant here. I’m not going to run this booth as if everybody in here were Capulets – I helped make this booth, I know that it’s composed of many families, and I know how it operates.”

But Iago and the groundlings were not satisfied. They put up signs and placards claiming that anybody wearing Firefox jewelry is supporting the Capulets!

The Mozillians look at all of the broken and stomped-on jewelry on the market ground. All their work, being trampled. If this continues, their ability to improve things for all families at the Merchant’s Weekly Meeting will fade. Their ability to enact their Mission will fade. They are agitated, discouraged, upset, angry, sad, anxious, confused – a cocktail of emotion playing pretty much the entire spectrum.

Brendan’s speech had not done anything to quell the groundlings. And Iago could smell blood, and was not going to stop writing about Brendan or Mozilla.

The other leaders look to Brendan. What will we do?

And Brendan said, “This noise is getting absurdly loud. How are we supposed to work under these conditions? There’s no way we can enact the mission like this.”

And Brendan steps onto the proscenium, and says:

To leave, or not to leave, that is the question—
Whether ’tis Nobler in the mind to suffer
The Slings and Arrows of outrageous Fortune,
Or to take Arms against a Sea of troubles,
And by opposing end them?

And so, after much thought, he takes arms. He sacrifices, and he chooses to leave the booth – the booth he helped plant into the ground over 15 years ago. The booth he helped build, the jewelry and techniques he helped craft.

“I think if I leave, you folks might have a chance to keep the mission going.”

And so he leaves, to the heartbreak of many Mozillians, and to the cheering of the Montague groundlings outside.

ACT IV

Several of the more sensible Montagues watch Brendan leave and wonder if perhaps the groundlings in their family have made them look petty and vindictive. Some of them are also sad that Brendan left the Mozilla booth – all they wanted from him was an apology, they say. That would have sufficed, they say. They didn’t expect or want him to leave the whole booth.

But the damage is done, and Brendan has left. There is no chief craftsperson, and there is no CBA. Holy shit.

The Mozillians in the booth start to get back to work, since the cheers of the Montagues outside are much easier to work against as a backdrop than the booing, hissing and food-throwing. A bunch of Montagues dust off their stomped Firefox jewelry (or grab new copies!) and put them back on proudly. Others are happy with the new jewelry they got, and don’t care about the Mission. Still others never took off the Firefox jewelry, but said they did. And now they wear it publicly again, proudly.

But suddenly, the Capulets and Capulet supporters in and around the Mozilla booth look at this gaping void where Brendan was and sense injustice. This was wrong, they cry! This man should not have been chased out of here!

Vigorous debate begins, as is the Mozilla booth custom.

And reasonable Capulets say and write reasonable things about why they think it was wrong for Brendan to have left.

And Iago, who never really left the area, hears all of this and smells more blood in the air. He takes his poison pen and writes stories about how Brendan was forcibly removed from the Mozilla booth by an angry mob of Montagues. He writes that, like Julius Caesar, Brendan was heard gasping “Et tu, Brute?” as he was stabbed by his fellow senators – or that, like King Hamlet, he was poisoned and betrayed by the people closest to him.

But as usual, Iago gets this completely wrong. Not that he cares or bothers to check. What a douche. And LOUD too, holy smokes. And people listen to Iago, and read what he writes, and hear what he says, and the rumours abound!

And a second tempest starts to brew.

ACT V

Many reasonable Capulets, both inside and outside of the Mozilla booth, are concerned about what this means for them. Does this mean that Capulets aren’t allowed to become CBAs? That’s certainly against the inclusiveness guidelines, is it not? And much debate resonated, as is the Mozilla way.

But, as you recall, every family has their groundlings, and the Capulets are no exception. The Capulet groundlings heard the rumours that Iago and his ilk were slinging, and they gnashed their teeth, and they pulled out their hair.

“YOU KILLED BRENDAN”, the groundlings howled at the Mozilla booth.

“No, he left of his own accord to save us and the mission,” some Mozillians said with sadness.

“NO HE DIDN’T, HE WAS BETRAYED AND MURDERED BY HIS CLOSEST ALLIES!” the groundlings yelled back.

“No, that’s simply not true. He left of his own accord in an attempt to save the booth and the mission.”

And the reasonable Capulets understood this, and they understood the mindblowing complexities of this whole clusterfuck. And they spoke with reason and passion.

The Mozillian craftspeople got up from their work making jewelry to talk to these Capulets and the supporters of the Capulets. And many were very reasonable and calm – but the groundlings among them were vicious and yelled and made so much noise. In some ways, their rage was indistinguishable from the Montague groundling rage, which I believe is some kind of irony.

And, as you’d expect, the Capulet groundlings, like all groundlings, love blood. They love a fight. And they tore off their Firefox jewelry, and they stamped it into the ground. Vegetables and rotten fruit started to be thrown at the Mozilla booth. Again.

And the Mozillians in the booth looked at each other. They looked at the gaping void where Brendan used to stand. They all hugged one another, and comforted one another, as the jeers and boos of the groundlings got louder and louder, and as rotten fruit and vegetables slammed into them and their works.

And this is where we currently are, I believe.

Epilogue

If these ramblings have offended,
Think but this, and all is mended,
That you have but slumber’d here
While these visions did appear.
And this weak and idle theme,
No more yielding but a dream,
Gentles, do not reprehend:
if you pardon, we will mend:
And, as I am an honest Mike,
I do yet miss this Brendan Eich.
Now to ‘scape the serpent’s tongue,
We will make amends ere long;
Else the Mike a liar call;
So, good night unto you all.
Give me your hands, if we be friends,
And Robin shall restore amends.

For a less silly and more sober analysis of what happened, I suggest reading this next.

April 07, 2014 08:30 AM

April 06, 2014

Jonathan Protzenko

Freedom of speech in the Mozilla Community

This morning I read Andrew Truong's blog post on Planet:

I guess it's okay to speak out about how we truly feel when somebody resigns over a controversial topic but not to speak out during the controversy? We should ALWAYS speak out. Freedom of Speech.

The reason I didn't speak up is, I suspect just like many other Mozillians, fear of retribution. Seeing the attacks perpetrated on Brendan, a well-respected member of the Mozilla community, I can only imagine the torrent of hatemail that I'm going to get for publishing this blog post. (Update: so far, I haven't gotten any hatemail. It seems the fears were unjustified.) The other reason is that I didn't want to further encourage a debate which I hoped would fade away after a few days. I somehow hoped we could go back to "business as usual". We were unable to, and thus I see no reason to hold back this blog post anymore.

I live in Europe. We tend to have a different opinion on these matters. But in any case, before you go any further down, I should just mention that I do support the right for anyone to marry the one they love, and I voted according to that belief for the last few elections in my country. I am a straight person, though.

I'm going to quote a few blog posts that resonate with me, and add a few comments.

Daniel wrote:

Who said "Mozilla Community"? Who said Openness? Pfffff. I've been a Mozillian for fourteen years and I'm not even sure I still recognize myself in today's Mozilla Community. Well done guys, well done. What's the next step? 100% political correctness? Is it still possible to have a legally valid personal opinion while being at Mozilla and express it in public?

From private conversations I had with other (mostly European) Mozillians, I know for certain that people have opinions that they are afraid to express in the Mozilla Community. Some people are religious, and will take great care _not_ to reveal that fact. Some people may have other beliefs that do not align with the dominant Silicon Valley progressive ideology. They also make sure that these are not apparent. Andrew Truong mentions freedom of speech. I believe there is freedom of speech in the Mozilla community as long as you happen to have the right opinions.

In my personal case, I fortunately happen to side with the prevalent ideology on most points, but I am now very afraid of slipping and expressing an opinion that is not considered progressive enough. I am now afraid of what will happen to me then: will I be kicked out? Will people call for my name to be removed from about:credits? Will people call on Twitter for me to be ousted from Mozillians.org?

Ben Moskowitz wrote:

For the record, I don’t believe Brendan Eich is a bigot. He’s stubborn, not hateful. He has an opinion. It’s certainly not my opinion, but it was the opinion of 52% of people who voted on Prop 8 just six years ago, and the world is changing fast.

Again, my European point of view on the matter is simple: I interact in my daily life with many people who do not support same-sex marriage. I'm pretty sure some bosses up my chain of command do not support it either. I believe that, ultimately, there will be fewer and fewer such people, and that society will change enough that being against same-sex marriage will be a thing of the past. I also believe that as long as my bosses do their jobs, I'm fine with that. I do not care about the personal life of my president; I just care that he runs the country according to the values his party adheres to. Same thing goes for Brendan.

Christie, who is in a much better position than I am to speak about this, says:

Certainly it would be problematic if Brendan’s behavior within Mozilla was explicitly discriminatory, or implicitly so in the form of repeated microagressions. I haven’t personally seen this (although to be clear, I was not part of Brendan’s reporting structure until today). To the contrary, over the years I have watched Brendan be an ally in many areas and bring clarity and leadership when needed. Furthermore, I trust the oversight Mozilla has in place in the form of our chairperson, Mitchell Baker, and our board of directors.

The way people demanded a public apology reminded me of the glorious times of Soviet Russia and Communist China. What next? Should Brendan be photoshopped out of all the pictures? If we leave the matter as is, the only reasonable thing left to do is to add an extra round of interviews when hiring people: the political interview. There, we should make sure that the people we hire share the "right" political opinion. Otherwise, it seems there is no place for them in the Mozilla Community.

Andrew Sullivan wrote:

If this is the gay rights movement today – hounding our opponents with a fanaticism more like the religious right than anyone else – then count me out. If we are about intimidating the free speech of others, we are no better than the anti-gay bullies who came before us.

While I am a strong supporter of the movement, I really feel sad today: what legitimacy is there now for people who've dragged an opponent through the mud instead of treating them like a decent human being? Is this what Mozilla is about?

(Comments are disabled on this post.)

April 06, 2014 08:25 PM