Planet Thunderbird

April 28, 2015

Calendar

The Third Beta on the way to Lightning 4.0

It’s that time of year again: we have a new major release of Lightning on the horizon. About every 42 weeks, Thunderbird prepares for a major release, and we follow up with a matching major version. You may know these as Lightning 2.6 or 3.3. To avoid disappointment, we do a series of beta releases before each such major release. This is where we need you: please help make Lightning 4.0 a great success.

Time flies when you are preparing for releases, so we are already at Thunderbird 38.0b3 and Lightning 4.0b3. The final release will be on May 12th, and there will be at least one more beta. Please download these betas and take a moment to go through all the actions you normally do on a daily basis: create an event, accept an invitation, complete a task. You probably have your own workflow; these are of course just examples.

Here is how to get the builds. If you have found an issue, you can either leave a comment here or file a bug on bugzilla.

You may wonder what is new. I’ve gone through the bugs fixed since 3.3 and found that most issues are backend fixes that won’t be very visible. We do however have a great new feature to save copies of invitations to your calendar. This helps in case you don’t care about replying to the invitation but would still like to see it in your calendar. We also have more general improvements in invitation compatibility, performance and stability and some slight visual enhancements. The full list of changes can be found on bugzilla.

Although it’s highly unlikely that severe problems will arise, you are encouraged to make a backup before switching to beta. If it comforts you, I am using beta builds for my production profile and I don’t recall a time when I lost events or had to start over.

If you have questions or have found a bug, feel free to leave a comment here.

April 28, 2015 10:21 AM

April 26, 2015

Robert Kaiser

"Nothing to Hide"?

I've been bothered for quite a while with people telling me they "have nothing to hide anyhow" when the topic of Internet privacy comes up.

I guess that mostly comes from the impression that the whole story is our government watching (over) us and that the worst thing that can happen is incrimination. While that might threaten some things, most people do nothing that is really interesting enough for a government to go into attack mode over (or so they believe, and very firmly so). I even agree that most governments (including the US and EU countries) actively seek out what they call "terrorist activities" (even though they often stretch that term in crazy ways) and/or child abuse and similar topics that the vast majority of citizens agree are a bad thing and are not involved in. The vast majority of politicians and government workers believe they act in the best interest of their citizens when "obviously fighting that" via their various programs of privacy-undermining surveillance. That said, most people seem to be OK with their government collecting data about them as long as it's not used to incriminate them (and when that happens, it's too late to protest the practice anyhow).

A lot has been said about that since the "Snowden leaks", but I think the more obvious short-term and direct threat is in corporate surveillance, which has been swept under the rug in most recent discussions - to the joy of Facebook, Google and other major players in that area. I have also seen that when depicting some obvious scenarios resulting from that, people start to think about it much more promptly and realize the effect on their daily lives (even if those are minor issues compared to a government starting a manhunt against you with terror allegations or similar).

So what I start asking is:

There are probably more examples; those are the ones that came to my mind so far. Even if those are smaller things, people can relate to them because they affect things in their own lives, not scenarios that feel very theoretical to them.

And, of course, they are true to a degree even now. Banks are already buying data from Facebook, probably including "private" messages, for determining credit scores, insurances base rates on anything they can find out about you, flight rates as well as prices for some Amazon and other web shop products vary based on what you searched before - and ads both on your screen and even on postal mail get tailored to a profile built on all kinds of your online behavior. My questions above just take all of those another step forward - but a pretty realistic one in my opinion.

I hope thinking about questions like that makes people realize they might actually want to evade some of that and in the end they actually have something to hide.

And then, of course, that a non-profit like Mozilla, which doesn't seek to maximize money, can believably be on their side and help them regain some privacy where they - now - want to.

April 26, 2015 10:38 PM

April 25, 2015

Mike Conley

Things I’ve Learned This Week (April 20 – April 24, 2015)

Short one this week. I must not have learned much! 😀

If you’re using Sublime Text to hack on Firefox or Gecko, make sure it’s not indexing your objdir.

Sublime has this wicked cool feature that lets you quickly search for files within your project folders. On my MBP, the shortcut is Cmd-P. It’s probably something like Ctrl-P on Windows and Linux.

That feature is awesome, because when I need to get to a file, instead of searching the folder hierarchy, I just hit Cmd-P, jam in a few of the characters (they can even be out of order – Sublime does fuzzy matching), and then as soon as my desired file is the top entry, just hit Enter, and BLAM – opened file. It really saves time!

At least, it saves time in theory. I noticed that sometimes, I’d hit Cmd-P, and the UI to enter my search string would take ages to show up. I had no idea why.

Then I noticed that this slowness seemed to show up after I had done a build. My objdir resides beneath my srcdir (as is the default with a mozilla-central checkout), so I figured perhaps Sublime was trying to index all of those binaries and choking on them.

I went to Project > Edit Project, and added this to the configuration file that opened:

{
    "folders":
    [
        {
            "path": "/Users/mikeconley/Projects/mozilla-central",
      "folder_exclude_patterns": ["*.sublime-workspace", "obj-*"]
        }
    ]
}

I added the workspace thing too1, because I figure it’s unlikely I’ll ever want to open that thing.

Anyhow, after setting that, I restarted Sublime, and everything was crazy-fast. \o/

If you’re using Sublime, and your objdir is under your srcdir, maybe consider adding the same thing. Even if you’re not using Cmd-P, it’ll probably save your machine from needlessly burning cycles indexing stuff.


  1. That’s where Sublime holds my session state for my project. 

April 25, 2015 09:40 PM

The Joy of Coding (Ep. 11): Cleaning up the View Source Patch

For this episode, Richard Milewski and I figured out the syncing issue I’d been having in Episode 9, so I had my head floating in the bottom right corner while I hacked. Now you can see what I do with my face while hacking, if that’s a thing you had been interested in.

I’ve also started mirroring the episodes to YouTube, if YouTube is your platform of choice for video consumption.

So, like last week, I was under a bit of time pressure because of a meeting scheduled for 2:30PM (actually the meeting I was supposed to have the week before – it just got postponed), so that gave me 1.5 hours to move forward with the View Source work we’d started back in Episode 8.

I started the episode by explaining that the cache key stuff we’d figured out in Episode 9 was really important, and that a bug had been filed by the Necko team to get the issue fixed. At the time of the video, there was a patch up for review in that bug, and when we applied it, we were able to retrieve source code out of the network cache after POST requests! Success!

Now that we had verified that our technique was going to work, I spent the rest of the episode cleaning up the patches we’d written. I started by doing a brief self-code-review to smoke out any glaring problems, and then started to fix those problems.

We got a good chunk of the way before I had to cut off the camera.

I know back when I started working on this particular bug, I had said that I wanted to take you through right to the end on camera – but the truth of the matter is, the priority of the bug went up, and I was moving too slowly on it, since I was restricting myself to a few hours on Wednesdays. So unfortunately, after my meeting, I went back to hacking on the bug off-camera, and yesterday I put up a patch for review. Here’s the review request, if you’re interested in seeing where I got to!

I felt good about the continuity experiment, and I think I’ll try it again for the next few episodes – but I think I’ll choose a lower-priority bug; that way, I think it’s more likely that I can keep the work contained within the episodes.

How did you feel about the continuity between episodes? Did it help to engage you, or did it not matter? I’d love to hear your comments!

Episode Agenda

References

Bug 1025146 – [e10s] Never load the source off of the network when viewing source
Notes

April 25, 2015 09:22 PM

April 22, 2015

Meeting Notes

Thunderbird: 2015-04-21

Thunderbird meeting notes 2015-04-21. NOON PT (Pacific). Check https://wiki.mozilla.org/Thunderbird/StatusMeetings for meeting time conversion, previous meeting notes and call-in details

Attendees

ATTENDEES – put your nick 1. below 2. in comments unless explicit under round table 3. top right of etherpad next to your color

mkmelin, rolandt, pegasus, makemyday, jorgk, rkent, gneandr, aceman, merike, Paenglab, wsmwk

Action items from last meetings

  • (rkent, Fallen) AMO addon compat: TheOne said that this late it is probably not worth doing at all. With so many other things for me to do, that sounds like a plan.

Friends of the tree

  • glandium, for fixing the various packager bugs that will help package Lightning (nominated by Fallen, who won’t be at the meeting)

Critical Issues

Critical bugs. Leave these here until they’re confirmed fixed. If confirmed, then remove.

  • (rkent) I am enormously frustrated by the inability to get two critical features landed in tb 38: OAuth and Lightning integration. Can we please give this very high priority?
    • OAuth integration: partial landing for beta 2, really REALLY critical that we get this finished.
  • In general, the tracking-tb38 flag shows what are critical issues. In the next week or so, that list will be culled to only include true blockers for the Thunderbird 38 release. There will still be many.
  • I don’t think we have a reasonable chance of shipping a quality release on May 12. More realistic is June 2.
  • We need to decide on how to do release branching. I am uncertain whether Lightning integration requires this or not.
  • Auto-complete improvements – some could go into esr31 (bug 1042561 included in TB38)
  • Lightning integration (below) really REALLY critical that we get this finished.
  • maildir UI: nothing more to do for UI, still want to land a patch for letting IMAP set this.
  • gloda IM search regressions: mostly fixed, some db cleanup necessary for users of TB33+ that nhnt11 will hopefully have ready to land soon.
    • aleth landed a fix to stop duplicated entries from appearing; nhnt11 will take care of cleaning up the databases of Aurora/Beta/Daily users this weekend and keep us updated
  • bug 1140884, might need late-l10n

removing from critical list/fixed:

  • ldap crash bug 1063829: a patch in beta 37, beta results are unclear – not seen in 38
  • bug 1064230 crashes during LDAP search made worse by Search All Addressbooks bug 170270, needs tracking 38+ and review?rkent/jcranmer – not seen in 38
  • everyone should probably skim http://mzl.la/1DaLo0t version 31-38 regressions for items they can help fix or direct to the right people

Releases

  • Past
    • 31.6.0 shipped
    • 38.0b1 shipped 2015-04-03
    • 38.0b2 shipped 2015-04-20
  • Upcoming
    • 38.0b3 (when?)

Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!), we need to rewrite / heavily modify the Lightning articles on support.mozilla.org. Let me know via IRC (rolandtanglao on #tb-support-crew) or rtanglao AT mozilla.com, or simply start editing the articles.

Unfortunately not much progress because I was away. I hope to have the packaging bits done by the weekend. Glandium did a great job on the packager.py changes, hence I nominated him for Friends of the Tree. (fallen)

MakeMyDay should comment on the opt-out dialog; I think we should get it landed ASAP. Bug 1130852 – Opt-Out dialog had some discussion on prefs.

Round Table

wsmwk

  • managed shipping of 31.6.0, 38.0b1, 38.0b2

Jorg K

rkent

  • We have the beginnings of a business development group (rkent, wsmwk, magnus) that after signing NDAs will be given access to Thunderbird business documentation.

mkmelin

  • bug 1134986 autocomplete bug investigated and landed on trunk +++

aceman

Question Time

– PLEASE INCLUDE YOUR NICK with your bullet item —

  • What happened to the Avocet branding? (Jorg K)
    • won’t be pursued
  • Info about the meeting with Mitchell Baker on 20th March 2015, funding issues (Jorg K)
  • http://mzl.la/1O9khi4 can we get hiro’s bugs reassigned so the patches contained can get landed, and not lost? (wsmwk)
  • It would be great if some jetpack add-on support were available in thunderbird to share functionality with firefox and fennec. See also bug 1100644. No useful jetpack add-ons seem to exist for thunderbird (earlybird would be fine to use jpm over cfx). Are there any jetpack add-ons available to prove me wrong?

(pegasus) Is it worth looking at going to a 6-week release schedule to avoid the conundrum of getting not-quite-ready features in vs. delaying?

Support team

  • Reminder: Roland is leaving Thunderbird May 12, 2015 after the release of Thunderbird 38: working on Thunderbird 38 plan and finally kickstarting Thunderbird User Success Council
    • looking for 3 people: English KB Article Editor, L10N Coordinator and Forum Lead. Is that you we’re looking for? If so email rtanglao AT mozilla.com or ping  :rolandtanglao in #sumo or #tb-support-crew

Other

  • PLEASE PUT THE NEXT MEETINGS IN YOUR (LIGHTNING) CALENDAR :)
  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

Action Items

  • wsmwk to pat glandium
  • wsmwk to email hiro’s bug list to tb-planning
  • rkent to review tracking list

April 22, 2015 03:00 AM

Rumbling Edge - Thunderbird

2015-04-20 Calendar builds

Common (excluding Website bugs)-specific: (6)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

April 22, 2015 02:17 AM

2015-04-20 Thunderbird comm-central builds

Thunderbird-specific: (27)

MailNews Core-specific: (30)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

April 22, 2015 02:16 AM

April 18, 2015

Mike Conley

Things I’ve Learned This Week (April 13 – April 17, 2015)

When you send a sync message from a frame script to the parent, the return value is always an array

Example:

// Some contrived code in the browser
let browser = gBrowser.selectedBrowser;
browser.messageManager.addMessageListener("GIMMEFUE,GIMMEFAI", function onMessage(message) {
  return "GIMMEDABAJABAZA";
});

// Frame script that runs in the browser
let result = sendSyncMessage("GIMMEFUE,GIMMEFAI");
console.log(result[0]);
// Writes to the console: GIMMEDABAJABAZA

From the documentation:

Because a single message can be received by more than one listener, the return value of sendSyncMessage() is an array of all the values returned from every listener, even if it only contains a single value.

I don’t use sync messages from frame scripts a lot, so this was news to me.

You can use [cocoaEvent hasPreciseScrollingDeltas] to differentiate between scrollWheel events from a mouse and a trackpad

scrollWheel events can come from a standard mouse or a trackpad1. According to this Stack Overflow post, one potential way of differentiating between the scrollWheel events coming from a mouse, and the scrollWheel events coming from a trackpad is by calling:

bool isTrackpad = [theEvent hasPreciseScrollingDeltas];

since mouse scrollWheel is usually line-scroll, whereas trackpads (and Magic Mouse) are pixel scroll.

The srcdoc attribute for iframes lets you easily load content into an iframe via a string

It’s been a while since I’ve done web development, so I hadn’t heard of srcdoc before. It was introduced as part of the HTML5 standard, and is defined as:

The content of the page that the embedded context is to contain. This attribute is expected to be used together with the sandbox and seamless attributes. If a browser supports the srcdoc attribute, it will override the content specified in the src attribute (if present). If a browser does NOT support the srcdoc attribute, it will show the file specified in the src attribute instead (if present).

So that’s an easy way to inject some string-ified HTML content into an iframe.
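
For example, something along these lines (my own sketch, not code from the post):

// Inject string-ified HTML into a brand new iframe via srcdoc:
let iframe = document.createElement("iframe");
iframe.setAttribute("sandbox", ""); // srcdoc is meant to be paired with sandbox
iframe.srcdoc = "<h1>Hello!</h1><p>This markup came in as a string.</p>";
document.body.appendChild(iframe);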

Primitives on IPDL structs are not initialized automatically

I believe this is true for structs in C and C++ (and probably some other languages) in general, but primitives on IPDL structs do not get initialized automatically when the struct is instantiated. That means that things like booleans carry random memory values in them until they’re set. Having spent most of my time in JavaScript, I found that a bit surprising, but I’ve gotten used to it. I’m slowly getting more comfortable working lower-level.

This was the ultimate cause of this crasher bug that dbaron was running into while exercising the e10s printing code on a debug Nightly build on Linux.

This bug was opened to investigate initializing the primitives on IPDL structs automatically.

Networking is ultimately done in the parent process in multi-process Firefox

All network requests are proxied to the parent, which serializes the results back down to the child. Here’s the IPDL protocol for the proxy.

On bi-directional text and RTL

gw280 and I noticed that in single-process Firefox, a <select> dropdown set with dir="rtl", containing an <option> with the value "A)" would render the option as "(A".

If the value was “A) Something else”, the string would come out unchanged.
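
Here's a quick way to reproduce what we were seeing (a sketch using the values from above):

// Build the <select> described above and watch how the options render:
let select = document.createElement("select");
select.setAttribute("dir", "rtl");
for (let value of ["A)", "A) Something else"]) {
  let option = document.createElement("option");
  option.textContent = value;
  select.appendChild(option);
}
document.body.appendChild(select);
// The first option renders as "(A"; the second comes out unchanged.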

We were curious to know why this flipping around was happening. It turned out that this is called “BiDi”, and some documentation for it is here.

If you want to see an interesting demonstration of BiDi, click this link, and then resize the browser window to reflow the text. Interesting to see where the period on that last line goes, no?

It might look strange to someone coming from a LTR language, but apparently it makes sense if you’re used to RTL.

I had not known that.

Some terminal spew


Now what’s all this?

My friend and colleague Mike Hoye showed me the above screenshot upon coming into work earlier this week. He had apparently launched Nightly from the terminal, and at some point, all that stuff just showed up.

“What is all of that?”, he had asked me.

I hadn’t the foggiest idea – but a quick DXR showed basic_code_modules.cc inside Breakpad, the tool used to generate crash reports when things go wrong.

I referred him to bsmedberg, since that fellow knows tons about crash reporting.

Later that day, mhoye got back to me, and told me that apparently this was output spew from Firefox’s plugin hang detection code. Mystery solved!

So if you’re running Firefox from the terminal, and suddenly see some basic_code_modules.cc stuff show up… a plugin you’re running probably locked up, and Firefox shanked it.


  1. And probably a bunch of other peripherals as well 

April 18, 2015 10:33 PM

The Joy of Coding (Ep. 10): The Mystery of the Cache Key

In this episode, I kept my camera off, since I was having some audio-sync issues1.

I was also under some time-pressure, because I had a meeting scheduled for 2:30 ET2, giving me exactly 1.5 hours to do what I needed to do.

And what did I need to do?

I needed to figure out why an nsISHEntry, when passed to nsIWebPageDescriptor’s loadPage, was not enough to get the document out from the HTTP cache in some cases. 1.5 hours to figure it out – the pressure was on!
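
For context, the call in question looks roughly like this (a sketch of the API shape; "docShell" and "shEntry" stand in for the view source browser's docshell and the session history entry we were handed):

// Load a document from a session history entry instead of the network.
let pageDescriptor = shEntry; // the nsISHEntry for the page whose source we want
let webPageDescriptor = docShell.QueryInterface(Ci.nsIWebPageDescriptor);
webPageDescriptor.loadPage(pageDescriptor,
                           Ci.nsIWebPageDescriptor.DISPLAY_AS_SOURCE);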

I don’t recall writing a single line of code. Instead, I spent most of my time inside XCode, walking through various scenarios in the debugger, trying to figure out what was going on. And I eventually figured it out! Read this footnote for the TL;DR:3

Episode Agenda

References

Bug 1025146 – [e10s] Never load the source off of the network when viewing source
Notes


  1. I should have those resolved for Episode 11! 

  2. And when the stream finished, I found out the meeting had been postponed to next week, meaning that next week will also be a short episode. :( 

  3. Basically, the nsIChannel used to retrieve data over the network is implemented by HttpChannelChild in the content process. HttpChannelChild is really just a proxy to a proper nsIChannel on the parent-side. On the child side, HttpChannelChild does not implement nsICachingChannel, which means we cannot get a cache key from it when creating a session history entry. With no cache key, comes no ability to retrieve the document from the network cache via nsIWebDescriptor’s loadPage. 
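
To make that footnote a bit more concrete, here's the shape of the problem as I understand it (a sketch, not the actual docshell code):

// When creating the session history entry, we want to stash a cache key:
try {
  let cachingChannel = channel.QueryInterface(Ci.nsICachingChannel);
  shEntry.cacheKey = cachingChannel.cacheKey;
} catch (e) {
  // In the content process the channel is an HttpChannelChild, which does not
  // implement nsICachingChannel - the QI throws, the entry gets no cache key,
  // and loadPage() can't find the document in the cache later.
}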

April 18, 2015 09:40 PM

April 12, 2015

Mike Conley

Things I’ve Learned This Week (April 6 – April 10, 2015)

It’s possible to synthesize native Cocoa events and dispatch them to your own app

For example, here is where we synthesize native mouse events for OS X. I think this is mostly used for testing when we want to simulate mouse activity.

Note that if you attempt to replay a queue of synthesized (or cached) native Cocoa events to trackSwipeEventWithOptions, those events might get coalesced and not behave the way you want. mstange and I ran into this while working on this bug to get some basic gesture support working with Nightly+e10s (Specifically, the history swiping gesture on OS X).

We were able to determine that OS X was coalescing the events because we grabbed the section of code that implements trackSwipeEventWithOptions, and used the Hopper Disassembler to decompile the assembly into some pseudocode. After reading it through, we found some logging messages in there referring to coalescing. We noticed that those log messages were only sent when NSDebugSwipeTrackingLogic was set to true, so we executed this:

defaults write org.mozilla.nightlydebug NSDebugSwipeTrackingLogic -bool YES

in the console, and then re-ran our swiping test in a debug build of Nightly to see what messages came out. Sure enough, this is what we saw:

2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 coalescing scrollevents
2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 cumulativeDelta:-2.000 progress:-0.002
2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 cumulativeDelta:-2.000 progress:-0.002 adjusted:-0.002
2015-04-09 15:11:55.396 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 call trackingHandler(NSEventPhaseChanged, gestureAmount:-0.002)

This coalescing means that trackSwipeEventWithOptions is only getting a subset of the events that we’re sending, which is not what we had intended. It’s still not clear what triggers the coalescing – I suspect it might have to do with how rapidly we flush our native event queue, but mstange suspects it might be more sophisticated than that. Unfortunately, the pseudocode doesn’t make it too clear.

String templates and toSource might run the risk of higher memory use?

I’m not sure I “learned” this so much, but I saw it in passing this week in this bug. Apparently, there was some section of the Marionette testing framework that was doing request / response logging with toSource and some string templates, and this caused a 20MB regression on AWSY. Doing away with those in favour of old-school string concatenation and JSON.stringify seems to have addressed the issue.
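
For illustration, the change was along these lines (my reconstruction; the response object and strings are made up, not Marionette's actual code):

// An illustrative response object:
let response = { from: "marionette-listener", status: 0 };

// Before: template string + toSource() (the pattern that regressed memory).
let before = `Got response: ${response.toSource()}`;

// After: old-school concatenation + JSON.stringify.
let after = "Got response: " + JSON.stringify(response);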

When you change the remote attribute on a <xul:browser> you need to re-add the <xul:browser> to the DOM tree

I think I knew this a while back, but I’d forgotten it. I actually re-figured it out during the last episode of The Joy of Coding. When you change the remoteness of a <xul:browser>, you can’t just flip the remote attribute and call it a day. You actually have to remove it from the DOM and re-add it in order for the change to manifest properly.

You also have to re-add any frame scripts you had specially loaded into the previous incarnation of the browser before you flipped the remoteness attribute.1
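
Here's a rough sketch of what that dance looks like (the element ID and frame script URL are made up):

let browser = document.getElementById("my-browser");
let parent = browser.parentNode;
parent.removeChild(browser);            // take the browser out of the DOM
browser.setAttribute("remote", "true"); // flip remoteness
parent.appendChild(browser);            // re-add it so the change takes effect

// The new incarnation knows nothing about our frame scripts - load them again:
browser.messageManager.loadFrameScript(
  "chrome://my-addon/content/frame-script.js", true);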

Using Mercurial, and want to re-land a patch that got backed out? hg graft is your friend!

Suppose you got backed out, and want to reland your patch(es) with some small changes. Try this:

hg update -r tip
hg graft --force BASEREV:ENDREV

This will re-land your changes on top of tip. Note that you need --force, otherwise Mercurial will skip over changes it notices have already landed in the commit ancestry.

These re-landed changes are in the draft stage, so you can update to them, and assuming you are using the evolve extension2, commit --amend them before pushing. Voila!

Here’s the documentation for hg graft.


  1. We sidestep this with browser tabs by putting those browsers into “groups”, and having any new browsers, remote or otherwise, immediately load a particular set of framescripts. 

  2. And if you’re using Mercurial, you probably should be. 

April 12, 2015 02:50 PM

April 10, 2015

Mike Conley

The Joy of Coding (Ep. 9): More View Source Hacking!

In this episode1, I continued the work we had started in Episode 8, by trying to make it so that we don’t hit the network when viewing the source of a page in multi-process Firefox.

It was a little bit of a slog – after some thinking, I decided to undo some of the work we had done in the previous episode, and then I set up the messaging infrastructure for talking to the remote browser in the view source window.

I also rebased and landed a patch that we had written in the previous episode, after fixing up some nits2.

Then, I (re)-learned that flipping the “remote” attribute of a browser is not enough in order for it to run out-of-process; I have to remove it from the DOM, and then re-add it. And once it’s been re-added, I have to reload any frame scripts that I had loaded in the previous incarnation of the browser.

Anyhow, by the end of the episode, we were able to view the source from a remote browser inside a remote view source browser!3 That’s a pretty big deal!

Episode Agenda

References

Bug 1025146 – [e10s] Never load the source off of the network when viewing source
Notes


  1. A note that I also tried an experiment where I keep my camera running during the entire session, and place the feed into the bottom right-hand corner of the recording. It looks like there were some synchronization issues between audio and video, which are a bit irritating. Sorry about that! I’ll see what I can do about that. 

  2. and dropping a nit having conversed with :gabor about it 

  3. We were still loading it off the network though, so I need to figure out what’s going on there in the next episode. 

April 10, 2015 05:00 PM

April 04, 2015

Mike Conley

Things I’ve Learned This Week (March 30 – April 3, 2015)

This is my second post in a weekly series, where I attempt to distill my week down into some lessons or facts that I’ve picked up. Let’s get to it!

ES6 – what’s safe to use in browser development?

As of March 27, 2015, ES6 classes are still not yet safe for use in production browser code. There’s code to support them in Firefox, but they’re Nightly-only behind a build-time pref.

Array.prototype.includes and ArrayBuffer.transfer are also Nightly only at this time.

However, any of the rest of the ES6 Harmony work currently implemented by Nightly is fair game for use, according to jorendorff. The JS team is also working on a wiki page to tell us Firefox developers what ES6 stuff is safe for use and what is not.

Getting a profile from a hung process

According to mstange, it is possible to get profiles from hung Firefox processes using lldb1.

  1. After the process has hung, attach lldb.
  2. Type in2:
    p (void)mozilla_sampler_save_profile_to_file("somepath/profile.txt")
  3. Clone mstange’s handy profile analysis repository.
  4. Run:
    python symbolicate_profile.py somepath/profile.txt

    to graft symbols into the profile. mstange’s scripts do some fairly clever things to get those symbols – if your Firefox was built by Mozilla, then it will retrieve the symbols from the Mozilla symbol server. If you built Firefox yourself, it will attempt to use some cleverness3 to grab the symbols from your binary.

    Your profile will now, hopefully, be updated with symbols.

    Then, load up Cleopatra, and upload the profile.

    I haven’t yet had the opportunity to try this, but I hope to next week. I’d be eager to hear people’s experience giving this a go – it might be a great tool in determining what’s going on in Firefox when it’s hung4!

Parameter vs. Argument

I noticed that when I talked about “things that I passed to functions5”, I would use “arguments” and “parameters” interchangeably. I recently learned that there is more to those terms than I had originally thought.

According to this MSDN article, an argument is what is passed in to a function by a caller. To the function, it has received parameters. It’s like two sides of a coin. Or, as the article puts it, like cars and parking spaces:

You can think of the parameter as a parking space and the argument as an automobile. Just as different automobiles can park in a parking space at different times, the calling code can pass a different argument to the same parameter every time that it calls the procedure.6
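
Or, to put the analogy into code (my own illustrative example):

// "message" and "count" are the parameters - the parking spaces.
function repeat(message, count) {
  for (let i = 0; i < count; ++i) {
    console.log(message);
  }
}

// "vroom" and 3 are the arguments - the automobiles parked in those
// spaces for this particular call.
repeat("vroom", 3);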

Not that it really makes much difference, but I like knowing the details.


  1. Unfortunately, this technique will not work for Windows. :(  

  2. Assuming you’re running a build after this revision landed. 

  3. A binary called dump_syms_mac in mstange’s toolkit, and nm on Linux 

  4. I’m particularly interested in knowing if we can get Javascript stacks via this technique – I can see that being particularly useful with hung content processes. 

  5. Or methods. 

  6. Source 

April 04, 2015 04:00 PM

April 03, 2015

Thunderbird Blog

Thunderbird 38 goes to beta!

The next major release of Thunderbird, version 38, is now in beta and available for testing. You may download Thunderbird 38.0b1 here.

This version of Thunderbird is the first that is mostly managed by volunteer community members rather than by Mozilla staff. We have many new features, including:

Release notes are available here.

There are still a couple of features missing from this beta that we hope to ship in the final version of Thunderbird 38. Those are:

April 03, 2015 09:13 AM

Mike Conley

The Joy of Coding (Ep. 8): View Source Hacking

In this episode, I again started with some code review. I reviewed this patch for this bug by fellow Firefox hacker Gijs, and refreshed my memory on var hoisting. I’ve been using let for so long that it was really, really weird to see how var worked.
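
In case you've been spoiled by let for as long as I have, here's a quick refresher (my example, not code from the patch):

function hoistingDemo() {
  console.log(a); // undefined - the declaration of "a" is hoisted, the assignment is not
  var a = 1;

  try {
    console.log(b); // throws - "b" is in the temporal dead zone until its declaration
  } catch (e) {
    console.log(e.name); // "ReferenceError"
  }
  let b = 2;
}
hoistingDemo();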

After that, I quickly gave an update on my plugin crash UI bug I had been working on the last episode – the patches are up, and are currently undergoing review, so there wasn’t much to do there.

Next, I started on a brand new bug1, explained the bug2, and then laid out my plan for attacking it.

Specifically, I’m going to try an experiment: I will only be working on that bug during Joy of Coding sessions. That way, there is continuity from video to video, and you won’t miss any of the development that goes on between episodes.

We sliced off a chunk to get done, and hit some minor roadblocks (as expected). The View Source code is old and crufty, and I have to do my best to make sure I don’t break any of the other applications that depend on it (like Thunderbird and SeaMonkey).

So that was the name of the game – looking to see how other applications use View Source, and trying to come up with a plan for making sure we don’t break them, while at the same time refactoring View Source to be easier to code against (and work with a frame script and messages).

It was a long slog3, but we got to a good point by the end. Let’s see how far we get next week!

Episode Agenda

References

Bug 1148807 – Method moveToAlertPosition in dialog.xml should check if opener is not null

Bug 1110887 – With e10s, plugin crash submit UI is broken
Notes

Bug 1025146 – [e10s] Never load the source off of the network when viewing source
Notes


  1. I say brand new, except that, as I explain in the video, I had already attacked this bug early on in my e10s work, and had only recently come back to it. 

  2. The View Source tool sometimes re-retrieves the source off of the network when opened from an e10s-browser 

  3. My longest episode ever, clocking in at over 2.5 hours. 

April 03, 2015 02:17 AM

April 01, 2015

Joshua Cranmer

Breaking news

It was brought to my attention recently by reputable sources that the recent announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."

April 01, 2015 07:00 AM

March 29, 2015

Rumbling Edge - Thunderbird

2015-03-28 Calendar builds

Common (excluding Website bugs)-specific: (29)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

March 29, 2015 03:58 PM

2015-03-28 Thunderbird comm-central builds

Thunderbird-specific: (67)

MailNews Core-specific: (54)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

March 29, 2015 03:57 PM

March 27, 2015

Mike Conley

Things I’ve Learned This Week (March 23 – 27, 2015)

This is the first post in a weekly series, where I’m going to attempt to distill down my week into some lessons or facts I’ve picked up. Maybe they’ll be interesting to others. We’ll see.

  1.  Gecko Media Plugins are used both for WebRTC (the Open H.264 encoding stuff runs inside a GMP) and to hold CDMs for EME. That’s a lot of TLAs!1
  2. This little notch I saw on the caret on my development build was because I had bidi.browser.ui set to true for some reason. It’s the “bidi caret”:
    Bidi Caret
  3. People hacking on platform are supposed to avoid using the NS_ENSURE_* macros, according to this.2 I originally learned this by reading cpearce’s review of a patch.

So let’s see if I can keep this up for a few weeks. Maybe I’ll get a collection of useful stuff by the end of the experiment!


  1. Three Letter Acronyms 

  2. It says:

    Previously the NS_ENSURE_* macros were used for this purpose, but those macros hide return statements and should not be used in new code.

     

March 27, 2015 03:15 PM

The Joy of Coding (Episode 7): Code review, and a Regression

In this episode, I started with some code review. I was reviewing a patch to make the Findbar (particularly, the Find As You Type feature) e10s-friendly.

With that review out of the way, I had to swap a bunch of information about the plugin crash UI for e10s in my head – and in particular, some non-determinism that we have to handle. I explained that stuff (and hopefully didn’t spend too much time on it).

Then, I showed how far I’d gotten with the plugin crash UI for e10s. I was able to submit a crash report, but I found I wasn’t able to type into the comment text area.

After a while, I noticed that I couldn’t type into the comment text area on Nightly, even without my patch. And then I reproduced it in Aurora. And then in Beta. Luckily, I couldn’t reproduce it in Release – but with Beta transitioning to Release in only a few days, I didn’t have a lot of time to get a bug on file to shine some light on it.

Luckily, our brilliant Steven Michaud was on the case, and has just landed a patch to fix this. Talk about fast work!

Episode Agenda

References:
Bug 1133981 – [e10s] Stop sending unsafe CPOWs after the findbar has been closed in a remote browser

Bug 1110887 – With e10s, plugin crash submit UI is broken
Notes

Bug 1147521 – Cannot type into comment area of plugin crash UI

March 27, 2015 03:03 PM

March 19, 2015

Mike Conley

The Joy of Coding (Episode 6): Plugins!

In this episode, I took the feedback of my audience, and did a bit of code review, but also a little bit of work on a bug. Specifically, I was figuring out the relationship between NPAPI plugins and Gecko Media Plugins, and how to crash the latter type (which is necessary for me in order to work on the crash report submission UI).

A minor goof – for the first few minutes, I forgot to switch my camera to my desktop, so you get prolonged exposure to my mug as I figure out how I’m going to review a patch. I eventually figured it out though. Phew!

Episode Agenda

References:
Bug 1134222 – [e10s] “Save Link As…”/”Bookmark This Link” in remote browser causes unsafe CPOW usage warning

Bug 1110887 – With e10s, plugin crash submit UI is broken
Notes

March 19, 2015 03:13 PM

March 12, 2015

Mike Conley

The Joy of Coding (Episode 5): Much Code Review

In this fifth episode, I didn’t work on any bugs. Instead, I did a bunch of code review. Ever wanted to know what a Firefox engineer does to review a patch? If so, then this episode is for you!

Episode Agenda

References:
Bug 1140898 – [e10s] “View” > “Switch Page Direction” doesn’t work in e10s

Bug 1140878 – [e10s] “Switch Page Direction” in remote browser causes unsafe CPOW usage warnings

Bug 1066531 – [e10s] Switching tabs can result in old content being displayed for a split second after the tab bar is updated

March 12, 2015 01:36 AM

March 05, 2015

Mike Conley

The Joy of Coding (Episode 4)

The fourth episode is up! Richard Milewski and I found the right settings to get OBS working properly on my machine, so this week’s episode is super-readable! If you’ve been annoyed with the poor resolution of past episodes, rejoice!

In this fourth episode, I solve a few things – I clean up a busted rebase, I figure out how I’d accidentally broken Linux printing, I think through a patch to make sure it does what I need it to do, and I review some code!

Episode Agenda

References:
Bug 1136855 – Print settings are not saved from print job to print job
Notes

Bug 1088070 – Instantiate print settings from the content process instead of the parent
Notes

Bug 1090448 – Make e10s printing work on Linux
Notes

Bug 1133577 – [e10s] “Open Link in New Tab” in remote browser causes unsafe CPOW usage warning
Notes

Bug 1133981 – [e10s] Stop sending unsafe CPOWs after the findbar has been closed in a remote browser
Notes

March 05, 2015 03:19 PM

March 03, 2015

Calendar

We are now on Twitter

In the spirit of Twitter I will keep this blog post down to 140 characters. Check out @mozcalendar for more frequent updates on the project.

March 03, 2015 12:39 AM

February 28, 2015

Calendar

Strings are Frozen for the Next Major Lightning Release

Together with Thunderbird 38, we will be releasing Lightning 4.0. Neither of these is a beta version; they are major releases just like Lightning 3.3, Lightning 2.6 and their respective Thunderbird counterparts.

We have about 11 weeks left until the release is final, and while the developers are doing their best to make sure features are stable and there are no regressions, it’s time to do some translation work.

If you have been missing your language in Lightning in the past, maybe this is a good time to contact the l10n team for your language and express interest in translating Lightning. While the initial hurdle may be large, there are usually not many string changes between Lightning releases. If you are lucky, someone has already translated part of Lightning in the past and all you have to do is update your locale. The translation process is fairly simple and can be done using your favorite browser.

If you are already part of a localization team, this is the time to head over to mozilla.locamotion.org and translate the remaining Lightning strings. Once you are done translating and the changes have been pushed to the localization repositories, please head over to the Thunderbird l10n dashboard (not the Calendar dashboard) and sign off on the latest change. Make sure you sign off on the latest changesets for both Thunderbird and Lightning, as only the newest sign-off will be used.

Should you have any questions, please feel free to send me an email or comment on this post and I will get back to you as soon as possible.

February 28, 2015 02:49 PM

Google Summer of Code 2015 Projects

The one thing I like best about the Google Summer of Code is that it gives us an opportunity to work on cool new features I never have time for on my own. It’s also a great opportunity for students to learn about working on a large-scale project and to prepare for real-life work, which is very different from the smaller projects I remember from my university days. Students who have stayed with us even after the Summer of Code have proven themselves invaluable; seeing such spirit and enthusiasm for an open source project like the Mozilla Calendar Project gives me a warm feeling in my heart.

This year, we have proposed two projects: Introducing Calendar Accounts and Resource Booking Improvements. As the projects have been available on the wiki for a while (sorry for not blogging about this earlier!), a few students have already expressed interest in applying. However, that doesn’t mean there isn’t any room left for a fine candidate like you!

In the first project, Introducing Calendar Accounts, the goal is to improve our backend layer to move from a flat list of calendars to a hierarchical list with calendars grouped by the accounts they belong to. Aside from the benefits this gives us w.r.t. avoiding code duplication and ugly hacks, it will open Lightning to a load of new account-related features, for example notifications when a new calendar is added to the account, or improved support for authenticating to calendars on one server with different credentials.

Second, we have proposed a project on Resource Booking Improvements. Right now, our invite attendees dialog is fairly simple and only allows entering email addresses and seeing their free/busy status. What is missing is an easy way to invite resources and rooms, for example when you want to book a conference room for your meeting. There is an inconspicuous feature that allows changing an attendee to a resource entry, although there is no real value in doing this aside from sending more correct data to the calendar server; the user still has to remember the virtual email address associated with the conference room. With this Summer of Code project we want to allow any kind of calendar provider to specify how to search for rooms and resources. Certain CalDAV servers support searching for these entries using custom queries; the goal for this project is mostly to support those servers.

If you are interested, please do get in touch with me, either via email or on irc.mozilla.org, where my nickname is Fallen and I usually hang around in #calendar. Should I not be around, redDragon (a former GSoC Student, by the way!) will be there to help you.

February 28, 2015 02:32 PM

Provider for Google Calendar Postmortem

First of all, I’d like to apologize for not adding new blog posts once in a while. There have been a few topics I could have written about, but I never got around to it. The consequence is that there will be a few posts in succession now; I hope to be better about this in the future.

In this post, I’d like to tell you a little bit about the changes to the Provider for Google Calendar that have taken place in the last months. With due prior notice, Google has shut down version 1 and version 2 of the Google Calendar API. The previous version of the Provider for Google Calendar, version 0.32, was still using the API v1.

The changes to the API were fairly substantial, so I took the opportunity to rewrite large parts of the Provider to use new JavaScript features and generally make the code more readable. I also added some new features, including:

As such drastic changes are a common source of regressions, I went through 10 rounds of pre-release testing and got some very helpful input from those who commented on the bug or sent me an email. There would have been substantially more issues without these folks, so thank you very much! In the last round the number of issues was down to a level where I felt comfortable releasing the Provider to the world.

When I released version 1.0, something inevitable happened: nearly 300,000 users find more issues than 140 testers do, so I had to do a few additional releases to fix more major issues. The new API version imposes limits on the number of requests being made, so one of the first issues I had to overcome was gaining more quota. Thanks to the fantastic folks at Google I was able to solve this issue using a combination of code changes to reduce the number of requests and higher quota limits. Here is a roundup of the other issues:

In retrospect, there have been a lot of complaints, but on the other hand a lot of people have noticed how important this addon has become for them. Many have shown their gratitude by sending a donation via the addons page. I hope that version 1.0.4 fixes most of the issues; only a few more have been reported since. If you continue to experience difficulties, please send me an email or visit the support forum.

February 28, 2015 01:42 PM

February 27, 2015

Thunderbird Blog

Thunderbird Usage Continues to Grow

We’re happy to report that Thunderbird usage continues to expand.

Mozilla measures program usage by Active Daily Installations (ADI), which is the number of pings that Mozilla servers receive as installations do their daily plugin block-list update. This is not the same as the number of active users, since some users don’t access their program each day, and some installations are behind firewalls. An estimate of active monthly users is typically made by multiplying the ADI by a factor of 3; the February 24 total of 9,255,280 ADI shown below would thus correspond to roughly 28 million active monthly users.

To plot changes in Thunderbird usage over time, I’ve picked the peak ADI for each month for the last few years. Here’s the result:

Thunderbird Active Daily Installations, peak value per month.

Germany has long been our #1 country for usage, but in the 4th quarter of 2014, Japan overtook the United States to become the #2 country. Here are the top 10 countries, taken from the ADI count of February 24, 2015:

Rank  Country             ADI 2015-02-24
   1  Germany                  1,711,834
   2  Japan                    1,002,877
   3  United States              927,477
   4  France                     777,478
   5  Italy                      514,771
   6  Russian Federation         494,645
   7  Poland                     480,496
   8  Spain                      282,008
   9  Brazil                     265,820
  10  United Kingdom             254,381
      All Others               2,543,493
      Total                    9,255,280

Country Rankings for Thunderbird Usage, February 24, 2015

The Thunderbird team is now working hard preparing our next major release, which will be Thunderbird 38 in May 2015. We’ll be blogging more about that release in the next few weeks, including reporting on the many new features that we have added.

February 27, 2015 10:44 PM

Mike Conley

The Joy of Coding (Episode 3)

The third episode is up! My machine was a little sluggish this time, since I had OBS chugging in the background attempting to do a hi-res screen recording simultaneously.

Richard Milewski and I are going to try an experiment where I try to stream with OBS next week, which should result in a much higher-resolution stream. We’re also thinking about having recording occur on a separate machine, so that it doesn’t bog me down while I’m working. Hopefully we’ll have that set up for next week.

So this third episode was pretty interesting. Probably the most interesting part was when I discovered in the last quarter that I’d accidentally shipped a regression in Firefox 36. Luckily, I’ve got a patch that fixes the problem that has been approved for uplift to Aurora and Beta. A point release is also planned for 36, so I’ve got approval to get the fix in there too. \o/

Here are the notes for the bug I was working on. The review feedback from karlt is in this bug, since I kinda screwed up where I posted the review request with MozReview.

February 27, 2015 03:27 PM

February 19, 2015

Mike Conley

The Joy of Coding (Episode 2)

The second episode is up! We seem to have solved the resolution problem this time around – big thanks to Richard Milewski for his work there. This time, however, my microphone levels were just a bit low for the first half-hour. That’s my bad – I’ll make sure my gain is at the right level next time before I air.

Here are the notes for the bug I was working on.

And let me know if there’s anything else I can do to make these episodes more useful or interesting.

February 19, 2015 03:54 AM

February 18, 2015

Meeting Notes

Thunderbird: 2015-02-17

Thunderbird meeting notes 2015-02-17. NOON PST. Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

fallen, wsmwk, rkent, aceman, paenglab, makemyday, magnus, jorgk

Current status and discussions

  • 36.0 beta is out

Critical Issues

Critical bugs. Please leave these here until they’re confirmed fixed.

  • Auto-complete improvements – some of those could go into esr31
  • ldap crasher
  • certificate crasher
  • Lightning integration
  • AB all-account search bug 170270
  • maildir UI
  • video chat: The initial set of patches, with IB UI, may land this week (they’re up for final review). We’re considering also landing a set of matching strings for TB so uplifting a port of the UI becomes possible. I’m not sure the feature will be ready to ship in TB 38 as it has not undergone much real-world testing yet, but you never know, there may not be any nasty surprises ;)

Release Issues

Upcoming

  • Thunderbird 38 moves to Earlybird ~ February 24, 2015
    • string freeze

Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!), we need to rewrite / heavily modify the Lightning articles on support.mozilla.org. Let me know via IRC (rolandtanglao on #tb-support-crew) or rtanglao AT mozilla.com, or simply start editing the articles.

Round Table

Paenglab

  • I’ve requested bug 1096006 “Add AccountManager to the prefs in tab” for Tracking_TB38.
    • Is this bug desired for TB 38? It would be needed to enable PrefsInTab.
    • If yes, I have a string only patch to land before string freeze.
  • I’ve also requested Hiro’s bug 1087233 “Create about:downloads to migrate to Downloads.jsm” for Tracking_TB38.
    • I’ve needinfoed him to ask if he has time to finish, but no answer so far.
    • It has also strings in it. I could make a strings only patch if needed.

sshagarwal

  • Plan to land AB fix bug 170270 for TB 38.
  • Bundled chat desktop notifications bug 1127802 waiting for final review.
  • Discussing schema design and an appropriate db backend for the next-gen address book with mconley. We plan to get an approximate idea of the average number of contacts in users’ address books (bug 1132588) as a required minimum performance measure.

wsmwk

  • 36.0 beta QA organized
  • triage topcrashes
  • working on HWA question bug 1131879 Disable hardware acceleration (HWA)

aceman

  • having an active week with fixing smaller backend bugs (landing right now), polishing for the release. Proud to fix long-standing dataloss bug 840418.

Question Time

Other

  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

Action Items

  • organize 36 beta postmortem meeting (wsmwk)
  • lightning integration meeting (rkent/fallen)

February 18, 2015 04:00 AM

February 17, 2015

Mike Conley

On unsafe CPOW usage, and “why is my Nightly so sluggish with e10s enabled?”

If you’ve opened the Browser Console lately while running Nightly with e10s enabled, you might have noticed a warning message – “unsafe CPOW usage” – showing up periodically.

I wanted to talk a little bit about what that means, and what’s being done about it. Brad Lassey already wrote a bit about this, but I wanted to expand upon it (especially since one of my goals this quarter is to get a handle on unsafe CPOW usage in core browser code).

I also wanted to talk about sluggishness that some of our brave Nightly testers with e10s enabled have been experiencing, and where that sluggishness is coming from, and what can be done about it.

What is a CPOW?

“CPOW” stands for “Cross-process Object Wrapper”1, and is part of the glue that has allowed e10s to be enabled on Nightly without requiring a full re-write of the front-end code. It’s also part of the magic that’s allowing a good number of our most popular add-ons to continue working (albeit slowly).

In sum, a CPOW is a way for one process to synchronously access and manipulate something in another process, as if they were running in the same process. Anything that can be considered a JavaScript Object can be represented as a CPOW.

Let me give you an example.

In single-process Firefox, easy and synchronous access to the DOM of web content was more or less assumed. For example, in browser code, one could do this from the scope of a browser window:

let doc = gBrowser.selectedBrowser.contentDocument;
let contentBody = doc.body;

Here contentBody corresponds to the <body> element of the document in the currently selected browser. In single-process Firefox, querying for and manipulating web content like this is quick and easy.

In multi-process Firefox, where content is processed and rendered in a completely separate process, how does something like this work? This is where CPOWs come in2.

With a CPOW, one can synchronously access and manipulate these items, just as if they were in the same process. We expose a CPOW for the content document in a remote browser with contentDocumentAsCPOW, so the above could be rewritten as:

let doc = gBrowser.selectedBrowser.contentDocumentAsCPOW;
let contentBody = doc.body;

I should point out that contentDocumentAsCPOW and contentWindowAsCPOW are exposed on <xul:browser> objects, and that we don’t make every accessor of a CPOW have the “AsCPOW” suffix. This is just our way of making sure that consumers of the contentWindow and contentDocument on the main process side know that they’re probably working with CPOWs3. contentBody.firstChild would also be a CPOW, since CPOWs can only beget more CPOWs.

So for the most part, with CPOWs, we can continue to query and manipulate the <body> of the document loaded in the current browser just like we used to. It’s like an invisible compatibility layer that hops us right over that process barrier.

Great, right?

Well, not really.

CPOWs are really a crutch to help add-ons and browser code exist in this multi-process world, but they’ve got some drawbacks. Most noticeably, there are performance drawbacks.

Why is my Nightly so sluggish with e10s enabled?

Have you been noticing sluggish performance on Nightly with e10s? Chances are this is caused by an add-on making use of CPOWs (either knowingly or unknowingly). Because CPOWs are used for synchronous reading and manipulation of objects in other processes, they send messages to other processes to do that work, and block the main process while they wait for a response. We call this “CPOW traffic”, and if you’re experiencing a sluggish Nightly, this is probably where the sluggishness is coming from.

Instead of using CPOWs, add-ons and browser code should be updated to use frame scripts sent over the message manager. Frame scripts cannot block the main process, and can be optimized to send only the bare minimum of information required to perform an action in content and return a result.

Add-ons built with the Add-on SDK should already be using “content scripts” to manipulate content, and therefore should inherit a bunch of fixes from the SDK as e10s gets closer to shipping. These add-ons should not require too many changes. Old-style add-ons, however, will need to be updated to use frame scripts unless they want to be super-sluggish and bog the browser down with CPOW traffic.
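
Here’s a minimal sketch of the frame-script pattern being described. The message names and the frame script URL are hypothetical, and error handling is omitted:

// Parent process (browser code): load a frame script into the selected
// browser and ask it for the page title asynchronously.
let mm = gBrowser.selectedBrowser.messageManager;
mm.loadFrameScript("chrome://myaddon/content/frame-script.js", false);
mm.addMessageListener("MyAddon:Title", function onTitle(message) {
  mm.removeMessageListener("MyAddon:Title", onTitle);
  console.log("Page title is: " + message.data.title);
});
mm.sendAsyncMessage("MyAddon:GetTitle");

// Frame script (content process): send back only the bare minimum of data.
addMessageListener("MyAddon:GetTitle", function() {
  sendAsyncMessage("MyAddon:Title", { title: content.document.title });
});

Neither side blocks the other – the parent carries on, and the reply arrives later as an ordinary message.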

And what constitutes “unsafe CPOW usage”?

“unsafe” might be too strong a word. “unexpected” might be a better term. Brad Lassey laid this out in his blog post already, but I’ll quickly rehash it.

There are two main cases to consider when working with CPOWs:

  1. The content process is already blocked sending up a synchronous message to the parent process
  2. The content process is not blocked

The first case is what we consider “the good case”. The content process is in a known good state, and it’s primed to receive IPC traffic (since it’s otherwise just idling). The only bad part about this is the IPC traffic.

The second case is what we consider the bad case. This is when the parent is sending down CPOW messages to the child (by reading or manipulating objects in the content process) when the child process might be off processing other things. This case is far more likely than the first case to cause noticeable performance problems, as the main thread of the content process might be bogged down doing other things before it can handle the CPOW traffic – and the parent will be blocked waiting for the messages to be responded to!

There’s also a more speculative fear that the parent might send down CPOW traffic at a time when it’s “unsafe” to communicate with the content process. There are potentially times when it’s not safe to run JS code in the content process, but CPOW traffic requires both processes to execute JS. This is a concern that was expressed to me by someone over IRC, and I don’t exactly understand what the implications are – but if somebody wants to comment and let me know, I’ll happily update this post.

So, anyhow, to sum – unsafe CPOW usage is when CPOW traffic is initiated on the parent process side while the content process is not blocked. When this unsafe CPOW usage occurs, we log an “unsafe CPOW usage” message to the Browser Console, along with the script and line number where the CPOW traffic was initiated from.
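
As a hypothetical illustration (this isn’t from the codebase), something like this in browser code would log the warning, since nothing guarantees the content process is blocked when the timer fires:

setTimeout(function() {
  // Parent-initiated CPOW traffic: a synchronous read into the content
  // process, which may be busy doing other things.
  let doc = gBrowser.selectedBrowser.contentDocumentAsCPOW;
  console.log(doc.title);
}, 1000);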

Measuring

We need to measure and understand CPOW usage in Firefox, as well as add-ons running in Firefox, and over time we need to reduce this CPOW usage. The priority should be on reducing unsafe CPOW usage in core browser code.

If there’s anything that working on the Australis project taught me, it’s that in order to change something, you need to know how to measure it first. That way, you can make sure your efforts are having an effect.

We now have a way of measuring the amount of time that Firefox code and add-ons spend processing CPOW messages. You can look at it yourself – just go to about:compartments.

It’s not the prettiest interface, but it’s a start. The second column is the time processing CPOW traffic, and the higher the number, the longer it’s been doing it. Naturally, we’ll be working to bring those numbers down over time.

A possible quick fix for a slow Nightly with e10s

As I mentioned, we also list add-ons in about:compartments, so if you’re experiencing a slow Nightly, check out about:compartments and see if there’s an add-on with a high number in the second column. Then, try disabling that add-on to see if your performance problem is reduced.

If so, great! Please file a bug on Bugzilla in this component for the add-on, mention the name of the add-on4, describe the performance problem, and mark it blocking e10s-addons if you can.

We’re hoping to automate this process by exposing some UI that informs the user when an add-on is causing too much CPOW traffic. This will be landing in a Nightly near you very soon.

PKE Meter, a CPOW Geiger Counter

Logging “unsafe CPOW usage” is all fine and dandy if you’re constantly looking at the Browser Console… but who is constantly looking at the Browser Console? Certainly not me.

Instead, I whipped up a quick and dirty add-on that plays a click, like a Geiger Counter, anytime “unsafe CPOW usage” is put into the Browser Console. This has already highlighted some places where we can reduce unsafe CPOW usage in core Firefox code – particularly:

  1. The Page Info dialog. This is probably the worst offender I’ve found so far – humongous unsafe CPOW traffic just by opening the dialog, and it’s really sluggish.
  2. Closing tabs. SessionStore synchronously communicates with the content process in order to read the tab state before the tab is closed.
  3. Back / forward gestures, at least on my MacBook
  4. Typing into an editable HTML element after the Findbar has been opened.

If you’re interested in helping me find more, install this add-on5, and listen for clicks. At this point, I’m only interested in unsafe CPOW usage caused by core Firefox code, so you might want to disable any other add-ons that might try to synchronously communicate with content.

If you find an “unsafe CPOW usage” that’s not already blocking this bug, please file a new one! And cc me on it! I’m mconley at mozilla dot com.


  1. I pronounce CPOW as “kah-POW”, although I’ve also heard people use “SEE-pow”. To each his or her own. 

  2. For further reading, Bill McCloskey discusses CPOWs in greater detail in this blog post. There’s also this handy documentation

  3. I say probably, because in the single-process case, they’re not working with CPOWs – they’re accessing the objects directly as they used to. 

  4. And say where to get it from, especially if it’s not on AMO. 

  5. Source code is here 

February 17, 2015 04:47 PM

February 16, 2015

Mike Conley

The Joy of Coding (Episode 1)

Here’s the first episode! I streamed it last Wednesday, and it was mostly concerned with bug 1090439, which is about making the print dialog and progress calls from the child process asynchronous.

Here are the notes for that bug. I haven’t closed it yet, so perhaps I’ll keep pressing on it next Wednesday when I stream Episode 2. We’ll see!

Note that I struggled with some resolution issues in this episode. I’m working with Richard Milewski from the Air Mozilla team to make this better for the next episode. Sorry about that!

February 16, 2015 07:29 PM

Rumbling Edge - Thunderbird

2015-02-15 Calendar builds

Common (excluding Website bugs)-specific: (36)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

February 16, 2015 07:22 AM

2015-02-15 Thunderbird comm-central builds

Thunderbird-specific: (27)

MailNews Core-specific: (22)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

February 16, 2015 07:21 AM

February 11, 2015

Meeting Notes

Thunderbird: 2015-02-10

Thunderbird meeting notes 2015-02-10. Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

aceman, cloep, florian, jcranmer, Jorg K, mkmelin, Paneglab, rkent, Roland, sshagarwal, wsmwk, MakeMyDay

Action items from last meetings

  • wsmwk to get in touch with Standard8 re: beta.
    • done – bkerensa and sylvestre are on it
  • rkent to work with Standard8 (and Fallen) on issues of (1) management of tracking flags and (2) pushing into aurora and beta for TB 38. (The meeting generally agreed that mkmelin and rkent would be appropriate to manage pushing patches forward into aurora and beta.)

Critical Issues

Critical bugs. Please leave these here until they’re confirmed fixed.

  • Auto-complete improvements – some of those could go into esr31

Release Issues

  • Current beta blocked due to Windows XP failures. rkent has a try server configuration that can test a beta build, and will try Standard8’s suggestions.

Upcoming

  • Thunderbird 38 moves to Earlybird ~ February 24, 2015

We need people to commit to being mentors.

Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!), we need to rewrite / heavily modify the Lightning articles on support.mozilla.org. Let me know on irc: rolandtanglao on #tb-support-crew or rtanglao AT mozilla.com, or simply start editing the articles.

Round Table

JosiahOne

  • So I started a new job recently, but because of that plus school, my time for TB stuff is very, very low. I will continue doing ui-reviews and reviews, but implementing anything has pretty much come to an end until summer break.

wsmwk

  • release management https://etherpad.mozilla.org/XxBwrpMHKz
  • disable HWA for 38? It has been suggested by someone in support to disable it because “3d acceleration … does little or nothing for Thunderbird but messes menus, font and causes crashes (the kind with no crash reporter reports).” bug 1131879

rkent

  • Hot bugs
    • bug 1125577 – startup crash in NSSCryptoContext_FindCertificateByEncodedCertificate (and similar bug 1128614)
    • bug 1124015 – Add UI to select maildir for storage when creating accounts
    • bug 1119529 – Sending message succeeds but Error “error while running message filters on it.”
  • Unfortunately a long review queue that I will be looking at for the next few days.
  • I now have access to Thunderbird ADI data. Our ADI reached a new peak last month (in spite of SlashDot assuming “Thunderbird usage is dropping”) and Japan has now surpassed the US as the #2 country (after Germany).

jcranmer

  • Hopefully going to work down my review queue by this weekend
  • Main jsmime perf regression fixes are r? rkent
  • I have a non-promisified version of OAuth2, but still no UI hookup
  • Mozharness-based mozmill tests: I’ve updated the runner, need to make updates to three or four repositories to make it work
    • Trying to get this in progress for Thunderbird 38, so we don’t need to maintain the old mozmill buildbot stuff for ESR
  • I’ve been doing some work with the emailjs team to add functionality to their SMTP libraries (specifically with regards to SASL) that we could share between TB/Whiteout.io/Gaia email teams.

TheOne

Jorg K

I have an XP machine (32 bit); I could run (not build) and debug (with WinDbg) the beta, if that’s of any help. I’d need to know where to download it … and the mentioned suggestions to try. (Contact via e-mail to start off.)

mkmelin

  • autocomplete:
    • the critical regressions fixed
    • 3 prominent complaints still not done: the “tab too quickly doesn’t complete”, “show as red even if found”, “insert link missing paste url in context menu”
    • ordering: now landed on esr, some complaints still, need to investigate

Question Time

I’d like to know what happened to the “Thunderbird Discussions with Mozilla”, i.e. the letter that was meant to be sent to Mozilla management re: funding, donations, staffing, etc. There was a lively discussion on the tb-planning mailing list in early January 2015.

  • won’t happen before 38 branching

Support team

  • As underpass has pointed out, we need to rewrite all the Lightning articles; they are out of date whether or not we finish the integration for TB 38. Email me, or irc roland, or just edit the articles (see above under “Lightning to Thunderbird Integration”). Tonnes, do you have time to write some of these Lightning articles in English?

Other

  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

(Extra) Meeting next Tuesday, Feb 17.

Action Items

-none-

February 11, 2015 04:00 AM

February 10, 2015

Bryan Clark

If writing is a muscle

I haven’t been to the gym in a long time.

David Eaves, a person I have immense amounts of respect for, has been using a tag line related to this title/intro on his blog for quite a while, probably longer than I’ve known him. And I honestly never gave much thought to the idea that writing really is a muscle until recently. I’ve taken a break from being a designer (or a programmer) to work as a product manager for over a year now. Designing and coding require a set of skills I’m very familiar with: code is an interpretive language that people use to communicate with each other about the details of the commands they issue to a computer, while design is a more visual language of storytelling, heavily using imagery and some text to convey a user’s journey to the team intent on correctly interacting with that user. Both pursuits are about communication, but each uses written language in a very different way. As a product manager I’m forced to lean on my skills as a writer, and I don’t think I had much in the way of skills previously, but whatever bedridden muscles have been dormant are reawakening as I realize how young and foolish I really was to ignore this essential form of communication.

I’m hoping there is more to come, perhaps starting with some tech posts about recent projects while I try to grapple with this idea of writing more than a tweet.

February 10, 2015 07:07 AM

February 09, 2015

Mark Banner

Firefox Hello Desktop: Behind the Scenes – Flux and React

This is the first of a few posts I’m planning about how we implement and work on the desktop and standalone parts of Firefox Hello. We’ve been doing some things in different ways, which we have found to be advantageous. We know other teams are interested in what we do, so it’s time to share!

Content Processes

First, a little bit of architecture: The panels and conversation window run in content processes (just like regular web pages). The conversation window shares code with the link-clicker pages that are on hello.firefox.com.

Hence those parts run very much in a web style, and for various reasons, we decided to build them the way we would a web app. As a result, we’ve ended up using React and Flux to aid our development.

I’ll detail more about the architecture in future posts.

The Flux Pattern

Flux is a recommended pattern for use alongside React, although I think you could use it with other frameworks as well. I’ll detail here about how we use Flux specifically for Hello. As Flux is a pattern, there’s no one set standard and the methods of implementation vary.

Flux Overview

The main parts of a flux system are stores, components and actions. Some of this is a bit like an MVC system, but I find there’s better definition about what does what.
[Diagram: example flow in a Flux pattern]

An action is effectively the result of an event that changes the system. For example, in Loop, we use actions for user events, but we also use them for any data incoming from the server.

A store contains the business logic. It listens for actions; when it receives one, it does something based on the action and updates its state appropriately.

A component is a view. The view has a set of properties (passed in values) and/or state (the state is obtained from the store’s state). For a given set of properties and state, you always get the same layout. The components listen for updates to the state in the stores and update appropriately.

We also have a dispatcher. The dispatcher dispatches actions to interested stores. Only one action can be processed at any one time. If a new action comes in, then the dispatcher queues it.

Actions are always synchronous – if changes happen as a result of external stimuli, those arrive as new actions. This prevents actions from blocking other actions whilst waiting for, say, a response from the server.
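
To make the pattern concrete, here’s a minimal sketch of a dispatcher and a store – illustrative only, not Loop’s actual implementation:

function Dispatcher() {
  this._stores = [];
  this._queue = [];
  this._dispatching = false;
}

Dispatcher.prototype.register = function(store) {
  this._stores.push(store);
};

Dispatcher.prototype.dispatch = function(action) {
  // Only one action is processed at any one time; anything that arrives
  // in the meantime is queued.
  this._queue.push(action);
  if (this._dispatching) {
    return;
  }
  this._dispatching = true;
  while (this._queue.length) {
    var next = this._queue.shift();
    this._stores.forEach(function(store) {
      store.handleAction(next);
    });
  }
  this._dispatching = false;
};

// A store holds the business logic and state; components listen for its
// state changes and re-render from them.
function RoomStore(dispatcher) {
  this._storeState = { roomState: "INIT" };
  dispatcher.register(this);
}

RoomStore.prototype.handleAction = function(action) {
  if (action.name === "setupRoomInfo") {
    this._storeState.roomState = "READY";
    // A real store would notify listening components here.
  }
};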

What advantages do we get?

For Hello, we find the flux pattern fits very nicely. Before, we used a traditional MVC model, however, we kept on getting in a mess with events being all over the place, and application logic being wrapped in amongst the views as well as the models.

Now, we have a much more defined structure:

React provides the component structure: it has defined ways of tracking state and properties, and the re-rendering on state change automates much of the view updating. Since it encourages the separation of immutable properties, a whole class of inadvertent errors is eliminated.

There are also many advantages for debugging – we have a flag that lets us watch all the actions going through the system, so it’s much easier to track what events are taking place and the data passed with them. This, combined with the fact that actions have limited scope, helps with debugging the data flows.

Simple Unit Testing

For testing, we’re able to do unit testing in a much simpler fashion:

it("should render a muted local audio button", function() {
  var comp = TestUtils.renderIntoDocument(
    React.createElement(sharedViews.MediaControlButton, {
      scope: "local",
      type: "audio",
      action: function(){},
      enabled: false
    }));

  expect(comp.getDOMNode().classList.contains("muted")).eql(true);
});

it("should set the state to READY", function() {
  store.setupRoomInfo(new sharedActions.SetupRoomInfo(fakeRoomInfo));

  expect(store._storeState.roomState).eql(ROOM_STATES.READY);
});

We therefore have many tests written at the unit test level. Many times we’ve found and prevented issues whilst writing these tests, and yet, because these are all content based, we can run the tests in a few seconds. I’ll go more into testing in a future post.

References

Here are a few references to some of the areas in our code base that are good examples of our flux implementation. Note that behind the scenes, everything is known as Loop – the codename for the project.

Conclusion and more coming…

We’ve found the flux model much more organised than our MVC approach was; possibly it’s just a better-defined methodology, but it gave us the structure we were badly missing. In future posts, I’ll discuss our development facilities, more about the desktop architecture, and whatever else comes up, so please do leave questions in the comments and I’ll try to answer them either directly or with more posts.

February 09, 2015 08:49 PM

February 08, 2015

Mike Conley

The Joy of Coding (or, Firefox Hacking Live!)

A few months back, I started publishing my bug notes online, as a way of showing people what goes on inside a Firefox engineer’s head while fixing a bug.

This week, I’m upping the ante a bit: I’m going to live-hack on Firefox for an hour and a half on each of the next few Wednesdays on Air Mozilla. I’m calling it The Joy of Coding1. I’ll be working on real Firefox bugs2 – not some toy exercise-bug where I’ve pre-planned where I’m going. It will be unscripted, unedited, and uncensored. But hopefully not uninteresting3!

Anyhow, the first episode airs this Wednesday. I’ll be using #livehacking on irc.mozilla.org as a backchannel. Not sure what bug(s) I’ll be hacking on – I guess it depends on what I get done on Monday and Tuesday.

Anyhow, we’ll try it for a few weeks to see if folks are interested in watching. Who knows, maybe we can get a few more developers doing this too – I’d enjoy seeing what other folks do to fix their bugs!

Anyhow, I hope to see you there!


  1. Maybe I’ll wear an afro wig while I stream 

  2. Specifically, I’ll be working on Electrolysis bugs, since that’s what my focus is on these days. 

  3. I’ve actually piloted this for the past few weeks, streaming on YouTube Live. Here’s a playlist of the pilot episodes

February 08, 2015 07:58 PM

January 15, 2015

Rumbling Edge - Thunderbird

2015-01-14 Calendar builds

Common (excluding Website bugs)-specific: (16)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

January 15, 2015 07:52 AM

2015-01-14 Thunderbird comm-central builds

Thunderbird-specific: (30)

MailNews Core-specific: (14)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

January 15, 2015 07:51 AM

January 13, 2015

Joshua Cranmer

Why email is hard, part 8: why email security failed

This post is part 8 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. Part 7 discusses the problem of trust. This part discusses why email security has largely failed.

At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.

Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others failed?

DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].
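
For illustration, a DKIM-Signature header looks roughly like this (the domain, selector, and values here are made up):

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=example.com;
  s=mail2015; h=from:to:subject:date;
  bh=<base64 hash of the message body>;
  b=<base64 signature over the listed headers and body hash>

The verifier fetches the public key from DNS at mail2015._domainkey.example.com and checks the signature – no client is ever involved.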

The failure of S/MIME and PGP to see large deployment is certainly a large topic of discussion on myriad cryptography-enthusiast mailing lists, which often like to propose new end-to-end email encryption paradigms, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I think it unlikely that a protocol will come about that can fix these weaknesses well enough to become successful.

The first weakness, and one I've harped about many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it may seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that it inherently requires users to take actions out-of-band of email for it to work properly.

Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly based on the fact that other people in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).

Key discovery is another weakness, although it's arguably the easiest one to solve. If you try to keep discovery independent of trust, the problem of key discovery is merely picking a protocol to publish and another one to find keys. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.

Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: lost private keys and key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.

Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser known is server-side search and filtration. While there exist some mechanisms to do search on encrypted text, those mechanisms rely on the fact that you can manipulate the text to change the message, destroying the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to have to pay the transfer rates to download all the messages in their folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 in a single folder—would still prefer to not have to transfer all their mail even on desktops.

The more well-known feature that would disappear is spam filtration. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam (a 10% reduction in efficiency would double the amount of server storage, per my calculations), and client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine having your email take 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.
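
To unpack that storage claim, here is my reading of the back-of-the-envelope arithmetic, using the numbers above:

100 incoming messages = 90 spam + 10 legitimate
filter catching ~100% of spam: store the 10 legitimate messages
efficiency drops 10 points (catching only 90%): 9 spam leak through,
so storage is now 10 + 9 = 19 messages – roughly double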

When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.

This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!

[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former GMail spam employee, if it takes you as long as three minutes to calculate reputation, the spammer wins.

January 13, 2015 04:38 AM

January 10, 2015

Joshua Cranmer

A unified history for comm-central

Several years back, Ehsan and Jeff Muizelaar attempted to build a unified history of mozilla-central across the Mercurial era and the CVS era. Their result is now used in the gecko-dev repository. While being distracted on yet another side project, I thought that I might want to do the same for comm-central. It turns out that building a unified history for comm-central makes mozilla-central look easy: mozilla-central merely had one import from CVS. In contrast, comm-central imported twice from CVS (the calendar code came later), four times from mozilla-central (once with converted history), and imported twice from Instantbird's repository (once with converted history). Three of those conversions also involved moving paths. But I've worked through all of those issues to provide a nice snapshot of the repository [1]. And since I've been frustrated by failing to find good documentation on how this sort of process went for mozilla-central, I'll provide details on the process for comm-central.

The first step and probably the hardest is getting the CVS history in DVCS form (I use hg because I'm more comfortable with it, but there's effectively no difference between hg, git, or bzr here). There is a git version of mozilla's CVS tree available, but I've noticed after doing research that its last revision is about a month before the revision I need for Calendar's import. The documentation for how that repo was built is no longer on the web, although we eventually found a copy on git.mozilla.org after I wrote this post. I tried doing another conversion using hg convert to get CVS tags, but that rudely blew up in my face. For now, I've filed a bug on getting an official, branchy-and-tag-filled version of this repository, while using the current lack of history as a base. Calendar people will have to suffer missing a month of history.

CVS is famously hard to convert to more modern repositories, and, as I've done my research, Mozilla's CVS looks like it uses those features which make it difficult. In particular, both the calendar CVS import and the comm-central initial CVS import used a CVS tag HG_COMM_INITIAL_IMPORT. That tagging was done, on only a small portion of the tree, twice, about two months apart. Fortunately, mailnews code was never touched on CVS trunk after the import (there appears to be one commit on calendar after the tagging), so it is probably possible to salvage a repository-wide consistent tag.

The start of my script for conversion looks like this:

#!/bin/bash

set -e

WORKDIR=/tmp
HGCVS=$WORKDIR/mozilla-cvs-history
MC=/src/trunk/mozilla-central
CC=/src/trunk/comm-central
OUTPUT=$WORKDIR/full-c-c

# Bug 445146: m-c/editor/ui -> c-c/editor/ui
MC_EDITOR_IMPORT=d8064eff0a17372c50014ee305271af8e577a204

# Bug 669040: m-c/db/mork -> c-c/db/mork
MC_MORK_IMPORT=f2a50910befcf29eaa1a29dc088a8a33e64a609a

# Bug 1027241, bug 611752 m-c/security/manager/ssl/** -> c-c/mailnews/mime/src/*
MC_SMIME_IMPORT=e74c19c18f01a5340e00ecfbc44c774c9a71d11d

# Step 0: Grab the mozilla CVS history.
if [ ! -e $HGCVS ]; then
  hg clone git+https://github.com/jrmuizel/mozilla-cvs-history.git $HGCVS
fi

Since I don't want to include the changesets useless to comm-central history, I trimmed the history by using hg convert to eliminate changesets that don't change the necessary files. Most of the files are simple directory-wide changes, but S/MIME only moved a few files over, so it requires a more complex way to grab the file list. In addition, I also replaced the % in the usernames with the @ they are used to appearing with in hg. The relevant code is here:

# Step 1: Trim mozilla CVS history to include only the files we are ultimately
# interested in.
cat >$WORKDIR/convert-filemap.txt <<EOF
# Revision e4f4569d451a
include directory/xpcom
include mail
include mailnews
include other-licenses/branding/thunderbird
include suite
# Revision 7c0bfdcda673
include calendar
include other-licenses/branding/sunbird
# Revision ee719a0502491fc663bda942dcfc52c0825938d3
include editor/ui
# Revision 52efa9789800829c6f0ee6a005f83ed45a250396
include db/mork/
include db/mdb/
EOF

# Add the S/MIME import files
hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'include {file}\n'}" >>$WORKDIR/convert-filemap.txt

if [ ! -e $WORKDIR/convert-authormap.txt ]; then
hg -R $HGCVS log --template "{email(author)}={sub('%', '@', email(author))}\n" \
  | sort -u > $WORKDIR/convert-authormap.txt
fi

cd $WORKDIR
hg convert $HGCVS $OUTPUT --filemap convert-filemap.txt -A convert-authormap.txt

That last command provides us the subset of the CVS history that we need for unified history. Strictly speaking, I should be pulling a specific revision, but I happen to know that there's no need to (we're cloning the only head) in this case. At this point, we now need to pull in the mozilla-central changes before we pull in comm-central. Order is key; hg convert will only apply the graft points when converting the child changeset (which it does but once), and it needs the parents to exist before it can do that. We also need to ensure that the mozilla-central graft point is included before continuing, so we do that, and then pull mozilla-central:

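# Resolve the graft-point parents: take each original revision's hash and
# look up its converted counterpart in the shamap that hg convert wrote
# into the output repository.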
CC_CVS_BASE=$(hg log -R $HGCVS -r 'tip' --template '{node}')
CC_CVS_BASE=$(grep $CC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)
MC_CVS_BASE=$(hg log -R $HGCVS -r 'gitnode(215f52d06f4260fdcca797eebd78266524ea3d2c)' --template '{node}')
MC_CVS_BASE=$(grep $MC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)

# Okay, now we need to build the map of revisions.
cat >$WORKDIR/convert-revmap.txt <<EOF
e4f4569d451a5e0d12a6aa33ebd916f979dd8faa $CC_CVS_BASE # Thunderbird / Suite
7c0bfdcda6731e77303f3c47b01736aaa93d5534 d4b728dc9da418f8d5601ed6735e9a00ac963c4e, $CC_CVS_BASE # Calendar
9b2a99adc05e53cd4010de512f50118594756650 $MC_CVS_BASE # Mozilla graft point
ee719a0502491fc663bda942dcfc52c0825938d3 78b3d6c649f71eff41fe3f486c6cc4f4b899fd35, $MC_EDITOR_IMPORT # Editor
8cdfed92867f885fda98664395236b7829947a1d 4b5da7e5d0680c6617ec743109e6efc88ca413da, e4e612fcae9d0e5181a5543ed17f705a83a3de71 # Chat
EOF

# Next, import mozilla-central revisions
for rev in $MC_MORK_IMPORT $MC_EDITOR_IMPORT $MC_SMIME_IMPORT; do
  hg convert $MC $OUTPUT -r $rev --splicemap $WORKDIR/convert-revmap.txt \
    --filemap $WORKDIR/convert-filemap.txt
done

Some notes about all of the revision ids in the script. The splicemap requires the full 40-character SHA ids; anything less and the thing complains. I also need to specify the parents of the revisions that deleted the code for the mozilla-central import, so if you go hunting for those revisions and are surprised that they don't remove the code in question, that's why.

I mentioned complications about the merges earlier. The Mork and S/MIME import codes here moved files, so that what was db/mdb in mozilla-central became db/mork. There's no support for causing the generated splice to record these as a move, so I have to manually construct those renamings:

# We need to execute a few hg move commands due to renamings.
pushd $OUTPUT
hg update -r $(grep $MC_MORK_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_MORK_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"db/mdb\", \"db/mork\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move Mork files' -d '2011-08-06 17:25:21 +0200'
MC_MORK_IMPORT=$(hg log -r tip --template '{node}')

hg update -r $(grep $MC_SMIME_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"security/manager/ssl\", \"mailnews/mime\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move S/MIME files' -d '2014-06-15 20:51:51 -0700'
MC_SMIME_IMPORT=$(hg log -r tip --template '{node}')
popd

# Echo the new move commands to the changeset conversion map.
cat >>$WORKDIR/convert-revmap.txt <<EOF
52efa9789800829c6f0ee6a005f83ed45a250396 abfd23d7c5042bc87502506c9f34c965fb9a09d1, $MC_MORK_IMPORT # Mork
50f5b5fc3f53c680dba4f237856e530e2097adfd 97253b3cca68f1c287eb5729647ba6f9a5dab08a, $MC_SMIME_IMPORT # S/MIME
EOF

Now that we have all of the graft points defined, and all of the external code ready, we can pull comm-central and do the conversion. That's not quite it, though—when we graft the S/MIME history to the original mozilla-central history, we have a small segment of abandoned converted history. A call to hg strip removes that.

# Now, import comm-central revisions that we need
hg convert $CC $OUTPUT --splicemap $WORKDIR/convert-revmap.txt
hg strip 2f69e0a3a05a

[1] I left out one of the graft points because I just didn't want to deal with it. I'll leave it as an exercise to the reader to figure out which one it was. Hint: it's the only one I didn't know about before I searched for the archive points [2].
[2] Since I wasn't sure I knew all of the graft points, I decided to try to comb through all of the changesets to figure out who imported code. It turns out that hg log -r 'adds("**")' narrows it down nicely (1667 changesets to look at instead of 17547), and using the {file_adds} template helps winnow it down more easily.

January 10, 2015 05:55 PM

January 03, 2015

Mike Conley

DocShell in a Nutshell – Part 3: Maturation (2005 – 2010)

Whoops

First off, an apology. I’ve fallen behind on these posts, and that’s not good – the iron has cooled, and I was taught to strike it while it was hot. I was hit with classic blogcrastination.

Secondly, another apology – I made a few errors in my last post, and I’d like to correct them:

  1. It’s come to my attention that I played a little fast and loose with the notions of “global history” and “session history”. They’re really two completely different things. Specifically, global history is what populates the AwesomeBar. Global history’s job is to remember every site you visit, regardless of browser window or tab. Session history is a different beast altogether – session history is the history inside the back-forward buttons. Every time you click on a link from one page, and travel to the next, you create a little nugget of session history. And when you click the back button, you move backwards in that session history. That’s the difference between the two – “like chalk and cheese”, as NeilAway said when he brought this to my attention.
  2. I also said that the docshell/ folder was created on Travis’s first landing on October 15th, 1998. This is not true – the docshell/ folder was created several months earlier, in this commit by “kipp”, dated July 18, 1998.

I’ve altered my last post to contain the above information, along with details on what I found in the time of that commit to Travis’s first landing. Maybe go back and give that a quick skim while I wait. Look for the string “correction” to see what I’ve changed.

I also got some confirmation from Travis himself over Twitter regarding my last post:

@mike_conley Looks like general right flow as far as 14 years ago memory can aid. :) Many context points surround…
@mike_conley 1) At that time, Mozilla was still largely in walls of Netscape, so many reviews/ alignment happened in person vs public docs.
@mike_conley 2) XPCOM ideas were new and many parts of system were straddling C++ objects and Interface models.
@mike_conley 3) XUL was also new and boundaries of what rendering belonged in browser shell vs. general rendering we’re [sic] being defined.
@mike_conley 4) JS access to XPCOM was also new driving rethinking of JS control vs embedding control.
@mike_conley There was a massive unwinding of the monolith and (re)defining of what it meant to build a browser inside a rendered chrome.

It’s cool to hear from the guy who really got the ball rolling here. The web is wonderful!

Finally, one last apology – this is a long-ass blog post. I’ve been working on it off and on for about 3 months, and it’s gotten pretty massive. Strap yourself into whatever chair you’re near, grab a thermos, cancel any appointments, and flip on your answering machine. This is going to be a long ride.

Oh come on, it’s not that bad, right? … right?

OK, let’s get back down to it. Where were we?

2005

A frame spoofing bug

Ah, yes – 2005 had just started. This was just a few weeks after a community-driven effort put a full-page ad for Firefox in the New York Times. Only a month earlier, a New York Times article highlighted Firefox, and how it was starting to eat into Internet Explorer’s market share.

So what was going on in DocShell? Here are the bits I found most interesting. They’re kinda few and far between, since DocShell appears to have stabilized quite a bit by now. Mostly tiny bugfixes are landed, with the occasional interesting blip showing up. I guess this is a sign of a “mature” section of the codebase.

I found this commit on January 11th, 2005 pretty interesting. This patch fixes bug 103638 (and bug 273699 while it’s at it). What was going on was that if you had two Firefox windows open, both with <frameset>’s, where two <frames> had the same name attribute, it was possible for links clicked in one to sometimes open in the other. Youch! That’s a pretty serious security vulnerability. jst’s patch added a bunch of checks and smarter selection for link targets.

One of those new checks involved adding a new static function called CanAccessItem to nsDocShell.cpp, and having FindItemWithName (an nsDocShell instance method used to find some child nsIDocShellTreeItem with a particular name) take a new parameter of the “original requestor”, and ensuring that whichever nsIDocShellTreeItem we eventually landed on with the name that was requested passes the CanAccessItem test with the original requestor.

DocShell and session history

There are two commits, one on January 20th, 2005, and one on January 30th, 2005, both of which fix different bugs, but are interrelated and I want to talk about them for a second.

The first commit, for bug 277224, fixes a problem where if we change location to an anchor located within a document from within a <script> tag, we stop loading the page content because the browser thinks we’re about to start loading a document at a different location. bz fixed the more common case of location change via setting document.location.href in bug 233963. Bug 277224 is interested in the case where document.location.href is modified with the .replace() method.

The solution that bz uses is to add new flags for nsIDocShellLoadInfo, which gives more power in how to stop loading a page. Specifically, it adds a LOAD_FLAGS_STOP_CONTENT flag which allows the caller to stop the rendering of content and all network activity (where the default was just to stop network activity). I believe what happens is that replace() causes an InternalLoad to kick off, and we need content rendering to be stopped in order for this new load to take over properly. That’s my reading on the situation, anyhow. If bz or anybody else examining that patch has another interpretation, please let me know!
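
For what it’s worth, the same distinction is still visible from chrome-privileged JavaScript via nsIWebNavigation – a hedged sketch, not the patch’s own code:

// Stop only network activity (the old default)...
webNavigation.stop(Ci.nsIWebNavigation.STOP_NETWORK);
// ...or stop content rendering as well, which is what the new
// LOAD_FLAGS_STOP_CONTENT behaviour amounts to.
webNavigation.stop(Ci.nsIWebNavigation.STOP_ALL);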

So what about the commit on January 30th? Well that one also involves anchors. What was happening was that if we browsed to some page, and then clicked a link that scrolled us to an anchor in that page, clicking back would reload the entire document off the cache again, when we really just need to restore the old scroll position.

The patch to fix this basically detected the case where we were going back from an anchor to a non-anchor but had the same URL, and allowed a scroll in that case.

So how is this related to the commit for bug 277224? Well, what it shows is that at this time, DocShell was responsible for not just knowing how to load a document and subdocuments, but also about the user’s state in that document – specifically, their scroll position. It also more firmly establishes the link between DocShell and Session History – as DocShell traverses pages, it communicates with Session History to let it know about those transitions, and refers to it when traveling backwards and forwards, and when restoring state for those session history entries.

I just thought that was kinda neat to know.

Window pains

On February 8th, 2005, danm landed a patch to fix bug 278143, which was a bug that caused windows opened with window.open to open in a new window if they had no target specified. This wouldn’t normally be a problem, except that this could override a user preference to open those new windows in new tabs instead. So that was bad.

This was simply a matter of adding a check for the null case for the target window name in nsWindowWatcher. No big deal.

The reason I bring this code up, is because I find it familiar – I brushed by it somewhat when I was working on making it possible to open new windows for multi-process Firefox.

Semi-related (because of the “popup” nature of things), is a commit on February 23rd, 2005. This one is for bug 277574, which makes it so that modal HTTP auth prompts focus the tabs that spawn them. This patch works by making sure HTTP auth prompts fire the same DOMWillOpenModalDialog and DOMModalDialogClosed events that tabbrowser listens for to focus tabs.

The copy and the cache

On March 11, 2005, NeilAway landed a commit to add the “Copy Image” command item to the context menu. This was for bug 135300.

What’s interesting here is that “Copy Image Location” was already in the context menu, and in the bug, it looks like there’s some contention over whether or not to keep it. It seems that right around here, the solution they go with is to copy both the image and the image location to the clipboard, and mark each copy with the right “flavours”, so that if you were to paste to a program or field that accepted an “image” flavour, like Photoshop, you’d get the image. If you pasted to a program or field that accepted a “text” flavour, like Notepad, you’d get the image URL.

That’s the solution that was landed, anyhow. Notice that nowadays, Firefox has context menu items that allow users to copy just the image, and just the URL – so at some point, this approach was deemed wanting. I’ll keep my eye out to see if I can find out where that happened, and why. If anybody knows, please comment!

On April 28, 2005, roc landed a commit for bug 240276, which splits up something called “nsGfxScrollFrame” into two things – nsHTMLScrollFrame and nsXULScrollFrame. It seems like, up until this point, layout for both XUL and HTML scrollable frames were handled by the same code. This meant that we were using XUL box-model style layout for HTML, and XUL layout is… well… kind of tricky to work with. This patch helped to further distance our HTML rendering from our XUL rendering. As for how this affected DocShell, the patch removed some scroll calculations from DocShell, where they probably didn’t belong in the first place.

On May 4, 2005, Brian Ryner landed a patch which made it possible to move back and forward across web pages much more quickly. This was for bug 274784, and a key part of a project called “fastback”. When you view a web page, a DocShell is put in charge of requesting network activity to retrieve the document source, and then passing that source onto an appropriate nsIContentViewer. Up until Brian’s patch, it looks like every nsIContentViewer was just getting thrown away after browsing away from a page. His patch made it possible to store a certain number of these nsIContentViewers in the session history of the window, and then retrieve it when we browse back or forward to the associated page. This is a textbook trade-off between speed (the time to instantiate and initialize an nsIContentViewer) and space (stored nsIContentViewers consume memory). And it looks like the trade-off paid-off! We still cache nsIContentViewers to this day. What’s interesting about Brian’s patch is that it exposes an about:config preference1 for setting how many content viewers are allowed to be cached2. As DocShell seems to go hand in hand with session history, it’s not surprising that Brian’s patch touches DocShell code.

about:neterror arrives, Inner and Outer windows appear, and then Session History gets all snuggly with DocShell

On July 14th of 2005, bsmedberg landed a patch to add about:neterror pages, and close a privilege-escalation security vulnerability. Up until this point, network error pages were shown by browsing the DocShell to chrome URLs3, but this allowed certain types of attacks which load iframes resolving to network error pages to potentially gain chrome privileges4.

So instead of going to a chrome URL, the patch causes DocShell to internally load about:neterror5. The great news about this about:neterror page is that it has restricted permissions, so that security hole got plugged.

On July 30th, 2005, jst landed a patch to introduce the notion of inner and outer windows for bug 296639. Inner and outer windows has confused me for a while, but I think I’ve somewhat wrapped my head around it. This document helped.

The idea goes something like this:

The thing that is showing you web content can be considered the outer window – so that could be a browser tab, or an iframe, for example. The inner window is the content being displayed – it’s transitory, and goes away as you browse the web via the outer window.

The outer window then has a notion of all of the inner windows it might contain, and the inner window (via Javascript) gets a handle on the outer window via the window global.

So, for example, if you call window.open, the returned value is an outer window. Methods that you call on that outer window are then forwarded to the inner window.
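
A quick sketch of what that means from script (URLs hypothetical):

// win is an outer window; it stays the same object while the content
// inside it comes and goes.
var win = window.open("http://example.com/a");
// Navigating swaps in a new inner window underneath...
win.location.href = "http://example.com/b";
// ...but calls on win keep working: they are forwarded to whichever
// inner window is current.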

I hope I got that right. I was originally trying to piece together the meaning of all of it by reading this WHATWG spec describing browsing context, and that was pretty slow going. The MDN page seemed much more clear.

Please comment with corrections if I got any of that wrong.

I’m not entirely sure, but based entirely on instinct and experience, I’m inclined to believe there are interesting security effects of this split. It seems to add a bit more of a membrane6 between web content and the physical window.

Anyhow, jst’s change was pretty monumental. It’s for bug 296639 if you want to read up more about it.

A semi-related change was landed on August 12th by mrbkap, where the entire inner window is stashed in the bfcache (as opposed to what we were doing before, which looks like serialization and deserialization of window state). That was for bug 303267, and sounds related to the fast back and forward caching work that Brian Ryner was working on back in May.

On August 18th of 2005, radha landed the first in the series of patches to session history. Unfortunately, the commit message for this patch doesn’t have a bug number, so I had some trouble tracking down what this work is for. I think this work is for bug 230363, and is actually a copy of interfaces from xpfe/components/shistory/public to docshell/shistory/public. Like I mentioned earlier, DocShell and session history are closely linked, so I suppose it makes sense to put the session history code under docshell/. Later that day, another patch copies the nsISHistoryListener interfaces over as well. Finally, a patch landed to build those interfaces from their new locations, and removes xpfe/components/shistory from Makefile.in. The bug for that last change is bug 305090.

Last bit of 2005

On August 22 of 2005, mrbkap landed a patch that changed how content viewer caching worked. There’s a special page in Firefox called about:blank – if you go to that page right now, you’re going to get a blank page. Some people like to set that as their home page or new tab page, as it is (or should be) very lightweight to load. That page is also special because, from what I can tell, when a new tab or window opens, it’s initially pointing at about:blank before it goes to the requested destination. Before this patch, we used to cache that about:blank content viewer in session history. We didn’t put an entry in the back-forward cache for about:blank though7, so that was a useless cache and a waste of memory. mrbkap’s patch made DocShell check to see if the page it was traveling to was going to re-use the current inner window, and if so, it’d skip caching it. Memory win!

That was the last thing I found interesting in 2005. On to 2006!

Preferences and threads…

On February 7th, 2006, bz landed a patch that made it possible for embedders to override where popup windows get opened.

There are preferences in Firefox that allow you to tweak how web content is able to open new windows8. Those preferences are browser.link.open_newwindow and browser.link.open_newwindow.restriction. If a page is attempting to open a new window, these preferences allow a user to control what actually occurs. The default (in most cases) is to open a new tab instead – but these preferences allow you to open that new window, or to open the content in the same window that the link is executed in. These are the kind of tweaks that power-users love.
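
As a sketch, the tweaks look like this in a user.js – the values are as I understand them (1 opens in the current window, 2 in a new window, 3 in a new tab):

// Divert window.open calls to a new tab...
user_pref("browser.link.open_newwindow", 3);
// ...unless the call asks for specific window features like size or
// position (0 = always divert, 1 = never divert, 2 = divert unless
// features are specified).
user_pref("browser.link.open_newwindow.restriction", 2);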

Up until this point, only Firefox had these tweaking capabilities. bz’s patch moved that tweaking logic “up the chain”, so to speak, which means that applications that embedded Gecko could also be tweaked like this. Pretty handy.

For the Gecko hackers reading this, this patch also introduced the nsIWindowProvider interface9.

On May 10th, 2006, “darin” landed a patch for bug 326273 that put the nsIThreadManager interface and implementation into the tree. It’s a big commit, and affected many parts of the codebase. nsIThreadManager is, not surprisingly, used to implement multi-threading and thread manipulation in Gecko. From my look at the patch, it looks like it replaces something called nsIEventQueue / nsIEventQueueService. It looks like Gecko already had some facility for multi-threading10, but nsIThreadManager looks like a different model for multi-threading.

For DocShell, this change meant modifying the way that restoring PresShell’s from history would work. Before, DocShell had a RestorePresentationEvent that extended PLEvent, which allowed it to be posted to an nsIEventQueue. Now, instead, we define an inner-class that implements nsRunnable11, and also define a weak pointer to that runnable on a DocShell.

So the way things would go is this: DocShell::RestorePresentation would get called, and this would cancel any pending RestorePresentationRunnable that the DocShell is weak-pointing to. Next, we’d instantiate a new RestorePresentationRunnable that we’d then dispatch to the main thread. This isn’t really different to what we were doing before, but it makes use of the nsIThreadManager and nsRunnable class instead of nsIEventQueue and nsIEventQueueService.

What’s interesting about this patch, DocShell-wise, is that it shows the usage of FavorPerformanceHint, which looks like a way of trading off UI interactivity against page-to-screen time. Basically, it looks like the FavorPerformanceHint is used when restoring PresShells to tell the nsIAppShell, “hey – we want you to favor native events over other events for a small pocket of time so we can get this stuff to the screen ASAP”. If I’m interpreting that right, it’s a tradeoff between total time to execute and responsiveness. “Do you want it fast, or do you want it smooth?”
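
If my reading is right, the usage would look something like this sketch – treat the exact signature and the delay value as my assumptions rather than what the patch literally does:

#include "nsIAppShell.h"

// Hedged sketch: bracket the restoration work with the hint, so native
// events get favored just long enough to get pixels on the screen.
void RestoreWithPerfHint(nsIAppShell* aAppShell)
{
  aAppShell->FavorPerformanceHint(PR_TRUE, 2000);  // favor native events...
  // ... do the PresShell-restoring work here ...
  aAppShell->FavorPerformanceHint(PR_FALSE, 2000); // ...then back to normal
}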

I was probably wrong about the name

In one of my past posts, I made some guesses about why DocShell was called DocShell. I thought:

I think nsDocShell was given the “shell” monicker because it did the job of taking over most of nsWebShell’s duties. However, since nsWebBrowser was now the touch-point between the embedder and embedee… maybe shell makes less sense. I wonder if we missed an opportunity to name nsDocShell something better.

But now that I look at nsIAppShell, and nsIDocShell, and nsIPresShell… I think I’m starting to understand. A while back, when I first started planning these posts, I asked blassey why he thought nsIDocShell was named the way it was, and he said he thought it might be related to the notion of a command shell – like a terminal input. From my understanding, a shell is a command interface with which one can manipulate and control something pretty complex – like the file-system or processes of a computer. I think blassey is right – I think that’s the “Shell” in nsIDocShell. I think the idea is that this interface would be the one to control and manipulate the process of loading and displaying a document. It seems obvious now, but it sure wasn’t when I started looking into this stuff.

DOM Storage (session and global), KungFuDeathGrip, friendlier search…

On May 19th, 2006, jst picked up, finished and landed a patch originally by Enn that implemented DOM Storage for bug 335540.

This patch adds two new methods to nsIDocShell – getSessionStorageForDomain and addSessionStorage. The first method is accessed in a number of cases, but most importantly when some caller reads sessionStorage or globalStorage off of the window object12.
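
As a hedged sketch of that path (the member and parameter names here are my approximations, not the patch’s actual code), reading window.sessionStorage bottoms out in the new DocShell method:

// Roughly: the window object delegates to its DocShell for session storage.
NS_IMETHODIMP
nsGlobalWindow::GetSessionStorage(nsIDOMStorage** aSessionStorage)
{
  if (!mDocShell) {
    return NS_ERROR_FAILURE;
  }
  // Hand back (possibly lazily creating) the storage for this window's
  // domain. "mDomain" is a stand-in for however the domain is computed.
  return mDocShell->GetSessionStorageForDomain(mDomain, aSessionStorage);
}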

The relationship between nsGlobalWindow and nsDocShell is brought to my attention with this patch. Here’s a fragment from an old chat I had with Ms2ger, smaug and bz, which started when I asked Ms2ger what he’d rename DocShell to.

14:11 (Ms2ger) mconley, I would call it WindowProxy :)
14:12 (smaug) outer window? yes, WindowProxy please
14:12 (mconley) Ms2ger: wait, outer window = docshell currently?
14:12 (khuey) what are we doing with WindowProxy?
14:13 (Ms2ger) mconley, well, no, there’s nsDocShell and nsGlobalWindow (with IsOuterWindow() true)
14:13 (Ms2ger) mconley, those are pretty much isomorphic
14:13 (mconley) I see
14:13 (bz) nsDocShell and outer nsGlobalWindow are in a 1-1 relationship
14:14 (bz) The fact that they are two separate objects is sort of a historical accident that we may want to rectify sometime
14:14 (mconley) this sounds like another post to write – how nsDocShell and nsGlobalWindow are related…

So I think nsGlobalWindow (instances of which are either “inner” or “outer”), when IsOuterWindow() is true, works in tandem with nsDocShell to “be” the outer window. That’s really imprecise, hand-wavey language. I’ll probably need to tighten this up in a follow-up post once somebody reads this and gives me better words to describe things13.

On May 24, 2006, smaug landed a patch to fix bug 336978. Bug 336978 was a crash caused by loading the following code in an iframe:

<html>
<head></head>
<body>
  <script>
    window.addEventListener("pagehide", doe, true);
    function doe(e) {
      var x = parent.document.getElementsByTagName('iframe')[0];
      x.parentNode.removeChild(x);
    }
    setTimeout(doe2,500);
    function doe2() {
      window.location = 'about:blank';
    }
  </script>
</body>
</html>

What this code does is wait 500ms, and then change the location to about:blank. Changing the location causes the pagehide event to fire while we’re unloading the original page, and the pagehide handler reaches into the parent document and removes the iframe that hosts the page being unloaded.

smaug’s solution to this bug was for nsDocShell to hold a reference via an nsCOMPtr to the nsIContentViewer for the document while the pagehide event is fired. This ensures that the nsIContentViewer doesn’t get destructed before we’re truly done with it. The name we give this nsCOMPtr is “kungFuDeathGrip”. This isn’t the only place where a hold on an object is maintained with a variable called kungFuDeathGrip – check out dxr for some more uses.

I’d seen kungFuDeathGrip over the years, and I never looked closely at what it was doing. I always thought kungFuDeathGrip was some magical global function that destroyed things unequivocally, but on closer inspection, I’m pretty sure it’s really just a way of saying “this variable’s sole purpose is to hold a reference to this thing until I’m done with it.”
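
Here’s the shape of the pattern as a sketch – simplified from what smaug’s patch actually does inside nsDocShell:

#include "nsCOMPtr.h"
#include "nsIContentViewer.h"

void FirePageHide(nsIContentViewer* aContentViewer, PRBool aIsUnload)
{
  // A pagehide handler might tear down the iframe that owns the viewer,
  // dropping the last "real" reference mid-call. This stack reference
  // keeps the viewer alive until we return.
  nsCOMPtr<nsIContentViewer> kungFuDeathGrip(aContentViewer);
  aContentViewer->PageHide(aIsUnload);
  // kungFuDeathGrip goes out of scope here and releases its reference.
}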

I think the phrase “kung fu” distracted me. I thought it did this:

Black Dynamite layin' the smack down.

Woooooo!

But it’s really more like this:

Spock taking out Kirk with the Vulcan nerve pinch thing.

Kkkg….*gurgle*…ngahh….

On June 15th, 2006, “brettw” landed a patch for bug 245597 to make it so that anything that gets put into the AwesomeBar that isn’t parseable as a URI automatically turns into a keyword search. That’s great! This made both the search input and the AwesomeBar useful for more users. This change occurred in docshell/base/nsDefaultURIFixup.cpp, which is, as I understand it, the central location for code that turns erroneous URIs into what the user probably intended.
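
The calling side, roughly sketched (CreateFixupURI and the keyword-lookup flag are real nsIURIFixup pieces as I understand them; the glue function is mine):

#include "nsIURIFixup.h"

// Hand user-typed input to the fixup service; with the keyword-lookup flag
// set, a string like "how do i docshell" comes back as a search-engine URI
// instead of an error.
nsresult FixupUserInput(nsIURIFixup* aFixup, const nsACString& aTyped,
                        nsIURI** aResult)
{
  return aFixup->CreateFixupURI(aTyped,
                                nsIURIFixup::FIXUP_FLAG_ALLOW_KEYWORD_LOOKUP,
                                aResult);
}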

nsIMutationObserver, some new about: pages…

On July 2nd, 2006, Jonas Sicking added nsIMutationObserver to the tree for bug 342062, making it possible to observe changes to the DOM within a subtree. It’s a pretty big patch, but it looks like a good chunk of it is just swapping in usage of nsIMutationObserver to replace old usage of nsIDocumentObserver (which supplied the same observations, but for an entire document instead of a subtree). Note that it’d still be a few years before DOM3 Mutation Events would be exposed for web developers to use, and after a few more years, those events were deprecated in favour of the Mutation Observer API.

On September 15th, 2006, several new Gecko-wide about: pages landed, which meant they got put into the redirection map in docshell/base/nsAboutRedirector.cpp. These pages were about:buildconfig (bug 140034), about:about (bug 56061), and about:license (bug 256945). That same day, bz landed a patch so that new about: pages no longer needed special rules hardcoded into nsScriptSecurityManager::CanExecuteScripts in order to execute script even when the user has script disabled. Instead, the nsAboutRedirector mapping was extended with a boolean indicating that an about: page requires script execution.
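
For flavor, here’s an approximation of what that map ends up looking like – the field names, flag semantics and chrome URLs below are my guesses, not a copy of nsAboutRedirector.cpp:

// An id-to-URL table, now with a boolean for "this page may run script even
// when the user has script disabled" (all names hypothetical).
struct RedirEntry
{
  const char* id;    // the part after "about:"
  const char* url;   // the chrome URL that actually gets loaded
  bool allowScript;  // the new boolean from bz's patch
};

static const RedirEntry kRedirMap[] = {
  { "about",       "chrome://global/content/aboutAbout.html",  false },
  { "buildconfig", "chrome://global/content/buildconfig.html", false },
  { "license",     "chrome://global/content/license.html",     false },
};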

Simplifying DocShell, and then some spoofing and malware protection

The next interesting thing (according to me, anyhow) didn’t occur until May 6th, 2007. That day, bz landed a patch for bug 377303 which simplified the structure of things inside the DocShell tree.

Up until that point, both nsIDocShellTreeItem and nsIDocShellTreeNode had existed as interfaces for interacting with nodes within a DocShell tree. I’ll quote myself from my previous post:

The (somewhat nebulous) distinction of DocShell “treeItems” and “treeNodes” is made. At this point, the difference between the two is that nsIDocShellTreeItem must be implemented by anything that wishes to be a leaf or middle node of the DocShell tree. The interface itself provides accessors to various attributes on the tree item. nsIDocShellTreeNode, on the other hand, is for manipulating one of these items in the tree – for example, finding, adding or removing children. I’m not entirely sure this distinction is useful, but there you have it.

It looks like enough was finally enough. bz didn’t go so far as to fully merge the two interfaces (though he makes a note in his patch about doing so), but instead made the (arguably more complex) nsIDocShellTreeItem interface inherit from nsIDocShellTreeNode14.

Later, on May 17th, 2007, Mats Palmgren landed a patch for bug 376562 to remove the childOffset attribute from nsIDocShellTreeItem, and to move a setter for the childOffset onto the nsIDocShell interface instead.

Reading this bug comment as well as one of Mats’ comments in the patch, it sounds as if childOffset never really worked as advertised, and was a bit of a foot-gun.15

On June 14th, 2007, bz landed a patch for bug 371360, which prevents onUnload handlers from starting any page loads. Before this, it seems that it was possible for a page to do something like this:

<html>
  <body onunload="location.href = 'http://www.somesite.com';">
    <a href="http://slashdot.org/">http://slashdot.org/</a>
  </body>
</html>

The result being that you could (potentially) phish a user. For example, suppose you’re a member of MySafeBank, which has a site at mysafebank.com. Suppose you’re at my seemingly innocent site totallyevil.com, and also suppose that I’ve registered a domain at rnysafebank.com (that’s an r and an n, which, if you’re not paying attention, look pretty close to an m). If you’re at my site, and I notice that you’re trying to head to mysafebank.com, I could redirect you to rnysafebank.com, which has a very similar user interface and favicon. Yadda yadda yadda, your bank info is now mine.

So bz stopped that one in its tracks by just preventing a DocShell from attempting any kind of load if we’re in the middle of an unload.

On July 3rd, 2007, johnath16 landed a patch for bug 380932 to add a new mode for about:neterror for pages suspected of serving up malware.

If you haven’t seen that page before, count yourself lucky – you’ve been surfing in safe places! This is what the page looks like (or, used to look like, anyhow):

Minefield (an old pre-release version of Firefox) reporting a Suspected Attack Site

johnath’s patch allowed an about:neterror page to have a specific CSS class associated with it as part of its URL. This allowed for the dramatic styling in the image above17.

showModalDialog

showModalDialog was a non-standard function that Microsoft introduced in Internet Explorer 418. It allowed a web page to create a modal dialog that contains web content. jst landed a patch on July 26th, 2007 to implement it in Firefox as part of bug 194404. This patch made it possible for a DocShell to have a modal dialog as its parent.

showModalDialog has since been marked as deprecated on MDN, and Google Chrome has announced that it will no longer support it after May of 2015. Firefox will support it until sometime after Firefox 39 on the release channel19.

about:crashes, Larry, and tab tearing

There’s a long gap in time here where nothing really interesting happens under docshell/.

Finally, on January 24th, 2008, Mossop landed a patch for bug 411490 that exposes about:crashes as a handy way of getting at the list of crash reports that have been collected.

As about:crashes is a Gecko-wide about: page, this meant once again adding an entry to the docshell/base/nsAboutRedirector.cpp map, as had been done with about:buildconfig and about:about.

On April 28th, 2008, gavin landed a patch for bug 430904, originally written by ehsan, that adds a friendlier set of icons for reporting SSL errors.

That icon is Larry. Have you met Larry? This is Larry:

Larry, the SSL dude.

Larry is the name for a series of icons that were developed to describe how secure your communications are with a particular site. You might recognize him from the airport, because he looks a lot like a customs agent or border patrol.

You can read up on Larry here on johnath’s blog.

On August 7th, 2008, bz landed a patch for bug 113934 to lay the foundation for letting users drag tabs out from a window, or drag tabs between windows. This introduced a new method on nsIFrameLoader, “swapFrameLoaders”. This method is the real key to moving tabs between windows – each <xul:browser> implements nsIFrameLoaderOwner, and the nsIFrameLoader is (yet another) thing that can load web content. Swapping frameloaders is essentially a brain transplant between two <xul:browser>s. You can see the real guts of the transplant in this method of the patch.

On January 13th, 2009, in a semi-related patch, bz landed code for bug 449780 to flush the bfcache20 when swapping frameloaders. Apparently, we were storing information in the bfcache that was simply incorrect after a frameloader swap. The best way to avoid internal confusion in such a case was to just invalidate the cache.

Big gaps…

Lots of big gaps between the next few changes.

On March 18th, 2009, Honza Bambas landed a patch for bug 422526 to implement window.localStorage. localStorage was a replacement for globalStorage21 that persisted across browser restarts (unlike sessionStorage).

Note that both localStorage and sessionStorage were synchronous storage APIs. It’d take until around June 24th of 2010 before an asynchronous storage mechanism became available.

On May 7th, 2009, bz landed a patch for bug 490957 to finally get rid of nsWebShell.cpp. If you recall from my earlier blog post, that was one of Travis Bogard’s goals at the start of this whole adventure. bz’s patch essentially folds the functionality of nsWebShell into nsDocShell. The webshell/ folder remained, but just contained interfaces.

Curiously, a good chunk of nsWebShell’s functionality seemed to revolve around anchor pings, a massively unpopular “feature” that allows a website to get your browser to send a request every time you click on a link. Thankfully, this “feature” is disabled by default in Firefox22. Here’s Jorge Villalobos’s post on anchor pings. Correction (Jan 4th, 2015) - I’ve since changed my tune about anchor pings. See these three comments.

On June 29th, 2009, dbolter landed a patch for bug 467144 so that nsIMutationObservers, when they observe an attribute being changed, also get access to the old attribute value as well as the new one. To be specific, it adds an “AttributeWillChange” callback, which fires before the “AttributeChanged” callback, while the old value is still in place.
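
In observer terms, the ordering looks roughly like this – a sketch, with signatures approximated from my memory of nsIMutationObserver.h rather than copied from it:

#include "nsStubMutationObserver.h"

class MyAttributeWatcher : public nsStubMutationObserver
{
public:
  // Fires *before* the attribute changes, so the old value is still
  // readable from the content node at this point.
  virtual void AttributeWillChange(nsIDocument* aDocument,
                                   nsIContent* aContent,
                                   PRInt32 aNameSpaceID,
                                   nsIAtom* aAttribute,
                                   PRInt32 aModType)
  {
    // Capture the old value here.
  }

  // Fires *after* the change, just like before the patch.
  virtual void AttributeChanged(nsIDocument* aDocument,
                                nsIContent* aContent,
                                PRInt32 aNameSpaceID,
                                nsIAtom* aAttribute,
                                PRInt32 aModType)
  {
    // React to the new value here.
  }
};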

A day later, bsmedberg landed a massive patch to implement remote tabs. There’s no bug number in the commit message, but this is clearly part of the Electrolysis efforts that were just starting up around this time. Remote tabs means browsers that run in different processes, which is the overall goal of Electrolysis, and (possibly unbeknownst to bsmedberg at the time) a foundational piece for Fennec (Firefox for Android)23.

On October 3rd, 2009, vvuk landed about:memory for bug 515354, a key piece of the war against high memory consumption in Firefox (a.k.a. MemShrink). This is very similar to the about:crashes page that Mossop landed back in 2008.

And finally…

On January 7th, 2010, smaug landed a patch for bug 534226 to remove support for multiple PresShells. PresShell stands for “Presentation Shell”, and as I understand it, is the primary interface to the “frame tree”24.

It looks like, up until this point, Gecko had the ability to have multiple frame trees per content tree. I’m not entirely sure what the point of that was, but the capability was there. smaug’s patch simplifies everything by making sure a document has only a single, primary PresShell. This removes a lot of iteration and management code for those multiple PresShells, which is nice.

And last, but not least, on June 30th, 2010, Benjamin Stover landed a patch for bug 556400 which made it so that visits to webpages are recorded asynchronously in Places. It looks like this patch takes I/O off the main thread, so it gets a big thumbs-up from me.

Essentially, this patch adds new asynchronous write methods to the History service, and then makes nsDocShell use those methods on webpage visits. nsDocShell falls back to the synchronous methods of nsIGlobalHistory2 if, for some reason, it can’t get at the History service and its asynchronous methods.
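
Sketched out, that fallback looks something like this (using the mozilla::IHistory API from around that era as I understand it; the real call sites in nsDocShell are more involved):

#include "mozilla/IHistory.h"
#include "mozilla/Services.h"
#include "nsIGlobalHistory2.h"

void RecordVisit(nsIURI* aURI, nsIURI* aPreviousURI, PRUint32 aFlags,
                 nsIGlobalHistory2* aSyncFallback)
{
  nsCOMPtr<mozilla::IHistory> history = mozilla::services::GetHistoryService();
  if (history) {
    // Preferred path: asynchronous write, no main-thread I/O.
    history->VisitURI(aURI, aPreviousURI, aFlags);
  } else if (aSyncFallback) {
    // Fallback: the old synchronous nsIGlobalHistory2 API.
    aSyncFallback->AddURI(aURI, PR_FALSE /* redirect */,
                          PR_TRUE /* top-level */, aPreviousURI);
  }
}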

Phew!

Did you make it? Are you still with me? I know it might feel like this:

qwop

everyone is a winner

but I think we’re making real progress here. I think we’re learning important stuff about the history of Firefox, and changes that have occurred in some of its core functionality over time.

So to sum up: that, to me, was the most interesting stuff to happen in and around docshell/ from 2005-2010. There might have been other neat stuff in there, but it didn’t catch my eye when I was browsing commit messages.

There’s still much to do – I have to look at commits from 2011 to 2014. After that, I’m planning on doing a line-by-line code review / walkthrough of nsDocShell.cpp, and then I’d like to try to summarize my findings and any recommendations I’ve put together from my time studying this stuff.

Hold tight!


  1. This pref was browser.sessionhistory.max_viewers, if you’re interested – though that preference appears to have been superseded by browser.sessionhistory.max_total_viewers. The default value for that pref is -1, meaning that the number of allowed cached viewers is adjusted based on how much memory is available. If you’re looking to reduce how much memory Firefox consumes, it’s possible that setting this to some low integer will let you reverse that trade-off between space and speed.

  2. I assume per session history 

  3. You wouldn’t notice that you were at a chrome URL though, because DocShell loads this URL internally, while pretending to be at the URL that caused the error. The end result is that a user going to http://www.sitethatcausesnetworkerror.com still sees that URL in their AwesomeBar, despite the fact that their web content shows the appropriate network error page hosted at a chrome URL.

  4. “chrome privileges” means that a web page now essentially has the same permissions that Firefox, the program on your computer, has – meaning it can potentially read and write files, and communicate with anybody on your network. Yikes! 

  5. You can visit this page in Firefox right now and see a generic network error. It’s showing a generic error because it hasn’t the foggiest idea how you’ve arrived at about:neterror, since it wasn’t passed any error information. 

  6. Or the infrastructure to create such a membrane. 

  7. So you couldn’t go back to about:blank in cases I described, where a tab or window was initialized at about:blank before going to a new page. 

  8. I’m actually quite familiar with this stuff because I worked on opening new windows for Electrolysis not too long ago. 

  9. From the header of that interface:

    /**
     * The nsIWindowProvider interface exists so that the window watcher's default
     * behavior of opening a new window can be easly modified.  When the window
     * watcher needs to open a new window, it will first check with the
     * nsIWindowProvider it gets from the parent window.  If there is no provider
     * or the provider does not provide a window, the window watcher will proceed
     * to actually open a new window.
     */

     

  10. The nsIEventQueueService service mentions that it is used to manage event queues for a particular thread, and makes use of nsIThread – so multi-threading must have already been a thing. 

  11. Still called RestorePresentationEvent though… strange that the opportunity wasn’t taken to rename this to RestorePresentationRunnable. 

  12. Those two properties are part of a new nsIDOMStorageWindow interface that nsGlobalWindow implements after this patch. That interface is later removed in bug 670331, and the two accessors are moved directly into nsIDOMWindow instead. 

  13. I have a feeling the real answer lies somewhere in the comments in bug 296639

  14. It wasn’t immediately clear to me why the inheritance didn’t go the other way around – especially since bz himself had a comment in nsIDocShellTreeNode suggesting that arrangement. Look at his first comment in the bug though:

    This would allow consumers to start using just nsIDocShellTreeItem in their code, until we can just merge nsIDocShellTreeNode into nsIDocShellTreeItem.

    Basically, it sounds like nsIDocShellTreeNode was being deprecated, and that callers who used to use nsIDocShellTreeNode should migrate to use nsIDocShellTreeItem instead (which inherits nsIDocShellTreeNode’s methods). Then the two interfaces could be merged. 

  15. Mats’ warning was removed on August 17, 2010 as part of bug 462076. It looks like SetChildOffset is still only ever used when adding the child though, so it’s still probably valid. 

  16. The same johnath who is currently the VP of Firefox! 

  17. Later on in August, dcamp would land this patch as part of bug 384941 which prevents suspected malware sites from even loading, instead of just not displaying them. 

  18. To quote Douglas Adams:

    This has made a lot of people very angry and has been widely regarded as a bad move.

     

  19. And the 38 ESR will continue to be supported until mid-2016. If you maintain a site that uses showModalDialog, you’d best get rid of it. 

  20. The bfcache, or “back-forward cache” is a collection of “frozen” pages that are stored in memory for fast back/forward action – see this page for more detail. 

  21. globalStorage, I believe, allowed all web properties read and write access to the same storage – so clearly it was a good idea to replace it. globalStorage was removed on October 9th, 2011 by Honza as part of bug 687579

  22. But according to this, it is enabled by default in both Chrome and Opera. Lovely.

  23. Remote tabs are also hugely important for Boot2Gecko / Firefox OS

  24. From this document:

    …the frame tree…is the visual representation of the document. Each frame can be thought of as a rectangular area on the page. The content nodes for XML elements are usually associated with one or more frames which display the element — one frame if the element is rectangular, more if the element is something more complex (like a chunk of bolded text that happens to be word-wrapped)…

    And from the nsIPresShell.h header:

    /**
    * Presentation shell interface. Presentation shells are the
    * controlling point for managing the presentation of a document. The
    * presentation shell holds a live reference to the document, the
    * presentation context, the style manager, the style set and the root
    * frame. <p>
    *
    * When this object is Release’d, it will release the document, the
    * presentation context, the style manager, the style set and the root
    * frame.

     

January 03, 2015 08:34 PM

December 21, 2014

Rumbling Edge - Thunderbird

2014-12-21 Calendar builds

Common (excluding Website bugs)-specific: (37)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

December 21, 2014 02:10 PM

2014-12-21 Thunderbird comm-central builds

Thunderbird-specific: (43)

MailNews Core-specific: (28)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

December 21, 2014 02:08 PM

November 25, 2014

Thunderbird Blog

Thunderbird Reorganizes at 2014 Toronto Summit

In October 2014, 22 active contributors to Thunderbird gathered at the Mozilla office in Toronto to discuss the status of Thunderbird, and plan for the future.

Toronto Contributors at 2014 Toronto Summit

Thunderbird contributors gather in Toronto to plan the future.

As background, Mitchell Baker, Chair of the Mozilla Foundation, posted in July 2012 that Mozilla would significantly reduce paid staff dedicated to Thunderbird, and asked community volunteers to move Thunderbird forward. Mozilla at that time committed several paid staff to maintain Thunderbird, each working part-time on Thunderbird but with a main commitment to other Mozilla projects. The staff commitment in total was approximately one full-time equivalent.

Over the last two years, those individuals had slowly reduced their commitment to Thunderbird, yet the formal leadership of Thunderbird remained with these staff. By 2014 Thunderbird had reached the point where nobody was effectively in charge, and it was difficult to make important decisions. By gathering the key active contributors in one place, we were able to make real decisions, plan our future governance, and move to complete the transition from being staff-led to community-led.

At the Summit, we made a number of key decisions:

There is a lot of new energy in Thunderbird since the Summit, a number of people are stepping forward to take on some critical roles, and we are looking forward to a great next release. More help is always welcome though!

November 25, 2014 06:15 PM

November 19, 2014

Meeting Notes

Thunderbird: 2014-11-18

   Thunderbird meeting notes 2014-11-18

Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

(partial list)
rkent
florian
wsmwk
jcranmer
mmelin
aceman
theone
clokep
roland

Action items from last meetings

  • wsmwk: Get the Thunderbird 38 bugzilla flag created
    • not heard from standard8

Critical Issues

  • Several critical bugs keeping us from moving from 24 to 31. Please leave these here until they’re confirmed fixed.
    • Frequent POP3 crasher bug 902158 On current aurora and beta?
      • we won’t have data/insight till next week, assuming the relevant builds are built. crash-stats for nightly builds is not useful – direct user testing of the fixed build is required
    • Self-signed certs are having difficulty bug 1036338 SOLVED! REOPENED according to the bug?
    • Auto-complete bugs? bug 1045753 waiting for esr approval; bug 1043310 still waiting for review
    • Auto-complete improvements (bug 970456, bug 1091675, bug 118624) – some of those could go into esr31
    • bug 1045958 TB 31 jp does not display folder pane with OS X

Why are we throttled? Because 1) we are waiting for TB 31.3, 2) we are still hoping for bug 1045958, and 3) we need the auto-complete bug approved. Dec 1/2 is now the release date.

Upcoming

Round Table

wsmwk

  • got Penelope (pre-OSE eudora) removed from AMO
  • shepherding bug 1045958, TB 31 jp does not display folder pane with OS X
  • secured potential release drivers
  • “Get Involved” is broken for Thunderbird. TB isn’t offered at https://www.mozilla.org/en-US/contribute/signup/, and it’s unclear who in Thunderbird gets notified. In contact with Larissa Shapiro

jcranmer

  • Looking into eliminating the comm-central build system bug 1099430
  • Trying to see if I can get some whines set up to listen for potential TB compat changes (e.g., checking the addon-compat keyword)
  • We have telemetry on Nightly for the first time since TB 28!
  • Irving is working through reviews of bug 998191 \o/

clokep

  • Google Talk does not work on comm-central due to disabling of RC4 SSL ciphers, see bug 1092701
    • Some of the security guys have contacted Googlers, apparently an internal ticket was opened with the XMPP team
  • Filed a bug (with a patch) to have firefoxtree work with comm-central, this will automatically add a tag (called “comm”) to the tip of comm-central bug 1100692 and is useful if you’re playing with bookmarks instead of mq
  • Still haven’t fully been able to get Additional Chat Protocols to compile…(Windows still failing)
  • WebRTC work is waiting for reviews from Florian

mkmelin

  • lots of reviews, queue almost empty, yay!
  • finally landed bug 970456
  • bug 1074125 fixed, plan to handle some of the m-c encoding removals next
  • bug 1074793 fixed; for TB we need to set a pref for it to take effect (bug 1095893 awaiting review)

aceman

  • revived effort on 4GB+ folders together with rkent
  • landed a rewrite of the attachment reminder engine: bug 938829 (NOT for ESR). Please mark regressions against that bug. First one is bug 1099866.

Support team

  • [roland] working on Thunderbird profile article because a new volunteer contributor from the SUMO buddy program rewrote it! will review changes!

Action Items

  • wsmwk: Thunderbird start page for anniversary, with localizations
  • wsmwk: Get Involved, get the “Thunderbird path” operating

November 19, 2014 04:00 AM

November 16, 2014

Rumbling Edge - Thunderbird

2014-11-16 Calendar builds

Common (excluding Website bugs)-specific: (11)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

November 16, 2014 05:20 PM

2014-11-16 Thunderbird comm-central builds

Thunderbird-specific: (58)

MailNews Core-specific: (37)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

November 16, 2014 05:18 PM

October 19, 2014

Rumbling Edge - Thunderbird

2014-10-17 Calendar builds

Common (excluding Website bugs)-specific: (2)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

October 19, 2014 09:34 AM

2014-10-17 Thunderbird comm-central builds

Thunderbird-specific: (21)

MailNews Core-specific: (11)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

October 19, 2014 09:33 AM

October 15, 2014

Kent James

Thunderbird Summit in Toronto to Plan a Viable Future

On Wednesday, October 15 through Saturday, October 18, 2014, the Thunderbird core contributors (about 20 people in total) are gathering at the Mozilla offices in Toronto, Ontario for a key summit to plan a viable future for Thunderbird. The first two days are project work days, but on Friday, October 17 we will be meeting all day as a group to discuss how we can overcome various obstacles that threaten the continuing viability of Thunderbird as a project. This is an open Summit for all interested parties. Remote participation or viewing of the Friday group sessions is possible, beginning at 9:30 AM EDT (6:30 AM Pacific Daylight Time), using the same channels as the regular weekly Thunderbird status meetings.

Video Instructions: See https://wiki.mozilla.org/Thunderbird/StatusMeetings for details.

Overall Summit Description and Agenda: See https://wiki.mozilla.org/Thunderbird:Summit_2014

Feel free to join in if you are interested in the future of Thunderbird.

October 15, 2014 04:17 AM

October 02, 2014

Philipp Kewisch

Monitor all http(s) network requests using the Mozilla Platform

In an xpcshell test, I recently needed a way to monitor all network requests and access both request and response data so I can save them for later use. This required a little bit of digging in Mozilla’s devtools code so I thought I’d write a short blog post about it.

This code will be used in a testcase that ensures that calendar providers in Lightning function properly. In the case of the CalDAV provider, we would need to access a real server for testing. We can’t just set up a few servers and use them for testing; it would end in an unreasonable amount of server maintenance. And given that non-local connections are not allowed when running the tests on the Mozilla build infrastructure, it wouldn’t work anyway. The solution is to create a fakeserver that is able to replay the requests in the same way. Instead of manually making the requests and figuring out how the server replies, we can use this code to quickly collect all the requests we need.

Without further delay, here is the code you have been waiting for:


Tagged: Mozilla, network, xpcshell

October 02, 2014 02:38 PM

October 01, 2014

Calendar

Calconnect XXXI Interop Testing

Thanks to the wonderful folks at Linagora, I was able to spend the last three days at the Calconnect XXXI meeting in Bedford, England. The goal of this meeting is to get server and client vendors together in a room both for ad-hoc testing and discussions on calendaring standards.

Before arriving, I had set myself the goal of going through a big list of bugs that have been sitting around in our CalDAV component to see if they could be resolved. It turns out that I was able to close 48 of the 76 bugs I had picked out:


A good amount of the bugs I resolved were sitting and waiting for one of our contributors to reproduce them with a specific server. This is often a problem because setting up and configuring the servers is time consuming. The great thing about being here at Calconnect is having a testing instance of most of the reported servers readily set up. Not only that, but engineers from the respective servers are sitting together at a table and can answer any questions that may arise, or comment on potential bugs that have been fixed in later versions.

The other category of bugs are support issues, duplicates and bugs that haven’t received an answer from the reporter. These could have been handled outside of Calconnect, but it’s still a good opportunity to take the time to deal with them.

Eight of the remaining bugs already have a patch attached, four of them were created while I was here. There is also a new feature coming up that makes it easy to share calendars with other users directly from Lightning. This requires the server to support caldav-sharing, for example the Apple Calendar and Contacts Server and fruux.com.

Note: The ugly add button will be replaced by an icon. The email addresses are editable.

In the next few days we will be moving on to the standards discussions. I am actively involved as the chair of TC-API, a technical committee dedicated to producing an abstract calendaring model that ensures that vendors integrating calendaring into their products are aware of the implications of calendaring and scheduling, hopefully resulting in better interoperability in the future. Another goal we have is to find a common understanding for a REST API geared towards webpages, which may become a standards document some day.

October 01, 2014 10:13 AM

September 29, 2014

Ludovic Hirlimann

Tips on organizing a pgp key signing party

Over the years I’ve organized or tried to organize PGP key signing parties every time I go somewhere. In the last year I’ve organized three that were successful (i.e. with more than 10 attendees).

1. Have a venue

I’ve tried a bunch of times to have people show up in the morning at the hotel I was staying at – that doesn’t work. Having catering at the venue is even better; it will encourage people to come from far away (or make a long commute). Try to mark the path inside the venue with signs (sheets of paper reading “PGP key signing party”, with arrows, help).

2. Date and time

Meeting in the evening after work works best (starting at 18:00 or 18:30).

Let people know how long it will take (count on 1 hour per 30 participants).

3. Make people sign up

That makes people think twice before saying they will attend. It’s also an easy way for you to know how much beer/cola/etc. you’ll need to provide if you cater food.

I’ve been using Eventbrite to manage attendance at my last three meetings; it lets me:

4. Reach out

For such a party you need people to attend, so you need to reach out.

I always start with a search on biglumber.com to find the GPG users registered on that site in the area I’m visiting (see below for what I send).

Then I look for local Linux user groups / *BSD groups and send an announcement to them with:

For my last announcement it looked like this:

Subject: GnuPG / PGP key signing party September 26 2014
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello my name is ludovic,

I'm a sysadmins at mozilla working remote from europe. I've been
involved with Thunderbird a lot (and still am). I'm organizing a pgp Key
signing party in the Mozilla san francisco office on September the 26th
2014 from 6PM to 8PM.

For security and assurances reasons I need to count how many people will
attend. I'v setup a eventbrite for that at
https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-of-trust-stronger-tickets-12867542165
(please take one ticket if you think about attending - If you change you
mind cancel so more people can come).

I will use the eventbrite tool to send reminders and I will try to make
a list with keys and fingerprint before the event to make things more
manageable (but I don't promise).

for those using lanyrd you will be able to use http://lanyrd.com/ccckzw.

Ludovic
ps sent to buug.org, nblug.org and penlug.org - please feel free to post
where appropriate (the more the merrier, the stronger the web of trust).

ps2 I have contacted people listed on biglumber to have more gpg related
people show up.

--
[:Usul] MOC Team at Mozilla
QA Lead fof Thunderbird
http://sietch-tabr.tumblr.com/ - http://weusepgp.info/

5. Make it easy to attend

As noted above, making a list of participants to hand out helps a lot (I’ve used http://www.phildev.net/pius/ and my own stuff to make a list). It makes things easier for you and for attendees. Tell people what they need to bring (IDs, a pen, printed fingerprints if you don’t provide a list).

6. Send reminders

Send people reminders and let them know how many people intend to show up. It boosts attendance.

September 29, 2014 11:03 AM

September 25, 2014

Rumbling Edge - Thunderbird

2014-09-22 Calendar builds

Common (excluding Website bugs)-specific: (19)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

September 25, 2014 04:31 AM

2014-09-22 Thunderbird comm-central builds

Thunderbird-specific: (31)

MailNews Core-specific: (35)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

September 25, 2014 04:30 AM

September 22, 2014

Meeting Notes

Thunderbird: 2014-09-11

Thunderbird meeting notes 2014-09-11
Today’s minutes taker, please don’t forget to :
– save etherpad before clearing for this meeting, if etherpad wasn’t previously cleared (but copy “Action items” to the top before clearing)
– after end of meeting, copy entire etherpad contents to a new dated wiki page on http://mzl.la/tbstatus so that they will go public in the meeting notes blog.
– save etherpad, copy “Action items” to “Action items from last meetings”, and clear the rest of etherpad comments

Attendees

  • rkent, florian queze, irving, rolandtanglao, magnus

Action items from last meetings

  • rkent and wsmwk to unblock / deal with bienvenu bugs

Current status and discussions

  • no progress on “bienvenu” bugs
  • comm-central changes on hold pending mercurial changes and signoff by smedberg
  • discussion about kent’s modification to Thunderbird governance proposal is positive
  • much work on summit and video from mconley and rkent
  • summit agenda planning, rkent to ask in tb-planning

Round Table

jcranmer

  • Likely won’t be at meeting today, one of my meetings changed time slots to start 30m before this one
  • according to gps, hg partial checkouts are targeted to land in 3.2, more likely in 3.3 (release dates of Nov 1/Feb 1, respectively)
    • bsmedberg has said this is his blocker for letting c-c merge into m-c
    • gps plans to make it a high priority to update to newer hg quickly, and now has a new role to make that more possible
  • Played with putting OpenLDAP server in a Docker container

JosiahOne (Not at meeting)

  • Have done reviews and have been giving feedback in bugs, but personal development time has slowed. My development machine has to have a part replaced on Saturday plus I have papers and college applications to finish this month, so availability will be limited until around the time of the Summit.
  • OS X Codesign V2 status is coming along, I’m mostly just waiting for a finished version of the Fx implementation.

rkent

  • Summit: we really need a group of people to work on the agenda. Volunteers?
  • Discussion of reorg plan?

mconley

  • Now acting as interface between TB community and ProTravel Inc
    • The attendees list has been garnered from the wiki, along with 2 extras from Fallen and Florian for Calendar / Chat. Waiting to hear from a few more.
  • Re-connected with Aaron Mandel about a fundraising video. Quote is approximately $600 (+tax), since he likes Mozilla and wants to give us a deal
    • We need to find a good variety (accents, countries of origin, languages) of members of the TB community who are comfortable / articulate in front of the camera (about 5-7 people), and come up with “our story”. I suggested our audience be current TB users who don’t actually know what’s happening with Thunderbird.
    • Quickly talk about where Thunderbird came from, the transition to community development, and where we are now – and why we need help. Instead of just talking heads, these interviews will be interspersed with shots of people checking their email, doing calendaring, chatting, etc.
    • We need the quick and punchy stuff: “Put users in control of their email”, “Your email is yours, even when you’re offline”, “Lots of tweaks and add-ons for power users”
    • We also need to send Aaron TB art / assets for graphical work.
    • Once we have our story, we’ll put together a really basic script, so we know who to put in front of the camera, and what questions to ask.
    • Then, I’ll meet with Aaron face-to-face 2 weeks before we shoot to make sure we have everything we need
    • During the summit (probably the Friday), we’ll pull our 5-7 people aside, ask them the questions we’ve scripted (maybe several times to get the sound bites we want), and then Aaron will cut together the video.

clokep (Not attending)

  • Been in contact with some of the DarkMail developers, have been trying to get them to ask questions in #maildev, but they seem to like to ask through me.
    • They’ve said things like “Random people don’t get answers there”
    • More likely they’re working during times when “we’re” not online.

Support team

Action Items

  • mconley: Come up with a short-list of volunteers to go on camera, send out emails with questions on them to get responses, find the responses that resonate, and from that, assemble our script.

September 22, 2014 08:19 PM

September 17, 2014

Ludovic Hirlimann

Gnupg / PGP key signing party in mozilla's San francisco space

I’m organizing a PGP key signing party in the Mozilla San Francisco office on September the 26th 2014 from 6PM to 8PM.

For security and assurance reasons I need to count how many people will attend. I’ve set up an Eventbrite page for that at https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-of-trust-stronger-tickets-12867542165 (please take one ticket if you think about attending – if you change your mind, cancel so more people can come).

I will use the eventbrite tool to send reminders and I will try to make a list with keys and fingerprint before the event to make things more manageable (but I don’t promise).

For those using Lanyrd, you will be able to use http://lanyrd.com/ccckzw. (Please tweet the event to get more people in.)

September 17, 2014 12:35 AM

August 22, 2014

Robert Kaiser

Mirror, Mirror: Trek Convention and FLOSS Conferences

It's been a while since I did any blogging, but that doesn't mean I haven't been doing anything - on the contrary, I have been too busy to blog, basically. We had a few Firefox releases where I scrambled until the last day of the beta phase to make sure we keep our crash rates as low as our users probably expect by now, I did some prototyping work on QA dashboards (with already-helpful results and more to come) and helped in other process improvements on the Firefox Quality team, worked with different teams to improve stability of our blocklist ping "ADI" data, and finally even was at a QA work week and a vacation in the US. So plenty of stuff done, and I hope to get to blog about at least some pieces of that in the next weeks and months.

That said, one major part of my recent vacation was the Star Trek Las Vegas Convention, which I attended the second time after last year. Since back then, I wanted to blog about some interesting parallels I found between that event (I can't compare to other conventions, as I've never been to any of those) and some Free, Libre and Open Source Software (FLOSS) conferences I've been to, most notably FOSDEM, but also the larger Mozilla events.
Of course, there's the big events in the big rooms and the official schedule - on the conferences it's the keynotes and presentations of developers about what's new in their software, what they learned or where we should go, on the convention it's actors and other guests talking about their experiences, what's new in their lives, and entertaining the crowd - both with questions from the audience. Of course, the topics are wildly different. And there's booths at both, also quite a bit different, as it's autograph and sales booths on one side, and mainly info booths on the other, though there are geeky T-shirts sold at both types of events. ;-)

The largest parallels I found, though, are about the mass of people that are there:
For one thing, the "hallway track" of talking to and meeting other attendees is definitely a main attraction and big piece of the life of the events on both "sides" there. Old friendships are being revived, new found, and the somewhat geeky commonalities are being celebrated and lead to tons of fun and involved conversations - not just the old fun bickering between vi and emacs or Kirk and Picard fans (or different desktop environments / different series and movies). :)
For the other, I learned that both types of events are in the end more about the "regular" attendees than the speakers, even if the latter end up being featured at both. Especially the recurring attendees go there because they want to meet and interact with all the other people going there, with the official schedule being the icing on the cake, really. Not that it would be unimportant or unneeded, but it's not as much the main attraction as people on the outside, and possibly even the organizers, might think. Also, going there means you do for a few days not have to hide your "geekiness" from your surroundings and can actively show and celebrate it. There's also some amount of a "do good" atmosphere in both those communities.
And both events, esp. the Trek and Mozilla ones, tend to have a very inclusive atmosphere of embracing everyone else, no matter what their physical appearance, gender or other social components. And actually, given how deeply that inclusive spirit has been anchored into the Star Trek productions by Gene Roddenberry himself, this might even run deeper in the fans there than it does in the FLOSS world. Notably, I saw a much larger share of women and people of color at the Star Trek Conventions than I see at FLOSS conferences - my guess is that at least a third of the Trek fans in Las Vegas were female, for example. I guess we need some more role models in the style of Nichelle Nichols and others in the FLOSS scene.

All in all, there's a lot of similarities and still quite some differences, but quite a twist on an alternate universe like it's depicted in Mirror, Mirror and other episodes - here it's a different crowd with a similar spirit and not the same people with different mindsets and behaviors.
As a very social person, I love attending and immersing myself in both types of events, and I somewhat wonder if and how we should have some more cross-pollination between those communities.
I for sure will be seen on more FLOSS and Mozilla events as well as more Star Trek conventions! :)

August 22, 2014 03:09 PM

August 19, 2014

Rumbling Edge - Thunderbird

2014-08-18 Calendar builds

Common (excluding Website bugs)-specific: (17)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds No binaries since July 23, 2014.

August 19, 2014 05:14 AM

2014-08-18 Thunderbird comm-central builds

Thunderbird-specific: (30)

MailNews Core-specific: (34)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds No binaries since July 23, 2014.

August 19, 2014 05:11 AM

August 10, 2014

Mike Conley

DocShell in a Nutshell – Part 2: The Wonder Years (1999 – 2004)

When I first announced that I was going to be looking at the roots of DocShell, and how it has changed over the years, I thought I was going to be leafing through a lot of old CVS commits in order to see what went on before the switch to Mercurial.

I thought it’d be so, and indeed it was so. And it was painful. Having worked with DVCS’s like Mercurial and Git so much over the past couple of years, my brain was just not prepared to deal with CVS.

My solution? Take the old CVS tree, and attempt to make a Git repo out of it. Git I can handle.

And so I spent a number of hours trying to use cvs2git to convert my rsync’d mirror of the Mozilla CVS repository into something that I could browse with GitX.

“But why isn’t the CVS history in the Mercurial tree?” I hear you ask. And that’s a good question. It might have to do with the fact that converting the CVS history over is bloody hard – or at least that was my experience. cvs2git has the unfortunate habit of analyzing the entire repository / history and only spitting out any errors or corruption it finds at the end.1 This is fine for small repositories, but the Mozilla CVS repo (even back in 1999) was quite substantial, and had quite a bit of history.

So my experience became: run cvs2git, wait 25 minutes, glare at an error message about corruption, scour the web for solutions to the issue, make random stabs at a solution, and repeat.

Not the greatest situation. I did what most people in my position would do, and cast my frustration into the cold, unfeeling void that is Twitter.

But, lo and behold, somebody on the other side of the planet was listening. Unfocused informed me that whoever created the gecko-dev Github mirror somehow managed to type in the black-magic incantations that would import all of the old CVS history into the Git mirror. I simply had to clone gecko-dev, and I was golden.

Thanks Unfocused. :)

So I had my tree. I cracked open GitX, put some tea on, and started poring through the commits from the initial creation of the docshell folder (October 15, 1999) to the last change in that folder just before the switch over to 2005 (December 15, 2004)2.

The following are my notes as I peered through those commits.

Artist’s rendering of me reading some old commit messages. I’m not claiming to have magic powers.

“First landing”

That’s the message for the first commit when the docshell/ folder was first created by Travis Bogard. Correction (Jan 3rd, 2015) - this is actually not true at all. The commit I’ve linked to here is the first commit for nsDocShell.cpp, but the docshell/ folder was actually created over a year earlier. It looks like the “docshell” monicker was selected before any of Travis’ refactoring had even begun3.

Without even looking at the code, that’s a pretty strange commit just by the message alone. No bug number, no reviewer, no approval, nothing even approximating a vague description of what was going on.

Leafing through these early commits, I was surprised to find that quite common. In fact, I learned that it was about a year after this work started that code review suddenly became mandatory for commits.

So, for these first bits of code, nobody seems to be reviewing it – or at least, nobody is signing off on it in commit messages.

Like I mentioned, the date for this commit is October 15, 1999. If the timeline in this Wikipedia article about the history of the Mozilla Suite is reliable, that puts us somewhere between Milestone 10 and Milestone 11 of the first 1.0 Mozilla Suite release.4

Travis Bogard

Before we continue, who is this intrepid Travis Bogard who is spearheading this embedding initiative and the DocShell / WebShell rewrite?

At the time, according to his LinkedIn page, he worked for America Online (which at this point in time owned Netscape.5) He’d been working for AOL since 1996, working his way up the ranks from lowly intern all the way to Principal Software Engineer.

Travis was the one who originally wrote the wiki page about how painful it was embedding the web engine, and how unwieldy nsWebShell was.6 It was Travis who led the charge to strip away all of the complexity and mess inside of WebShell, and create smaller, more specialized interfaces for the behind the scenes DocShell class, which would carry out most of the work that WebShell had been doing up until that point.7

So, for these first few months, it was Travis who would be doing most of the work on DocShell.

Parallel development

These first few months, Travis puts the pedal to the metal moving things out of WebShell and into DocShell. Remember – the idea was to have a thin, simple nsWebBrowser that embedders could touch, and a fat, complex DocShell that was privately used within Gecko that was accessible via many small, specialized interfaces.

Wholesale replacing or refactoring a major part of the engine is no easy task, however – and since WebShell was core to the very function of the browser (and the mail/news client, and a bunch of other things), there were two copies of WebShell made.

The original WebShell existed in webshell/ under the root directory. The second WebShell, the one that would eventually replace it, existed under docshell/base. The one under docshell/base is the one that Travis was stripping down, but nobody would use it until it was stable. Everyone would keep using the one under webshell/ until the new implementation was deemed satisfactory through both manual and automated testing.

When they were satisfied, they’d branch off of the main development line, and start removing occurrences of WebShell where they didn’t need to be, replacing them with nsWebBrowser or DocShell where appropriate. When they were done with that, they’d merge into the main line, and celebrate!

At least, that was the plan.

That plan is spelled out here in the Plan of Attack for the redesign. That plan sketches out a rough schedule as well, and targets November 30th, 1999 as the completion point of the project.

This parallel development meant that any bug discovered in WebShell during the redesign needed to be fixed in two places – both under webshell/ and docshell/base.

Breaking up is so hard to do

So what was actually changing in the code? In Travis’ first commit, he adds the following interfaces:

nsIDocShell
nsIDocShellEdit
nsIDocShellFile
nsIGenericWindow
nsIGlobalHistory
nsIHTMLDocShell
nsIScrollable

along with some build files. Something interesting here is nsIHTMLDocShell – it looks like, at this point, the plan was to have different DocShell interfaces depending on the type of document being displayed. Later on, we see this idea dissipate.

If DocShell was a person, these are its baby pictures. At this point, nsIDocShell has just two methods – LoadDocument and LoadDocumentVia – plus a single nsIDOMDocument attribute for the document.

And here’s the interface for WebShell, which Travis was basing these new interfaces off of. Note that there’s no LoadDocument, or LoadDocumentVia, or an attribute for an nsIDOMDocument. So it seems this wasn’t just a straightforward breakup into smaller interfaces – this was a rewrite, with new interfaces to replace the functionality of the old one.8

This is consistent with the remarks in this wikipage where it was mentioned that the new DocShell interfaces should have an API for the caller to supply a document, instead of a URI – thus taking the responsibility of calling into the networking library away from DocShell and putting it on the caller.

nsIDocShellEdit seems to be a replacement for some of the functions of the old nsIClipboardCommands methods that WebShell relied upon. Specifically, this interface was concerned with cutting, copying and pasting selections within the document. There is also a method for searching. These methods are all just stubbed out, and don’t do much at this point.

nsIDocShellFile seems to be the interface used for printing and saving documents.

nsIGenericWindow (which I believe is the ancestor of nsIBaseWindow) seems to be an interface that an embedding window must implement in order for the new nsWebBrowser / DocShell to be embedded in it. I think. I’m not too clear on this. At the very least, I think it’s supposed to be a generic interface for windows supplied by the underlying operating system.

nsIGlobalHistory is an interface for, well, browsing history. This was before tabs, so we had just a single, linear global history to maintain, and I guess that’s what this interface was for. Correction (Jan 3rd, 2015) - global history is really just a recording of everywhere you’ve ever been, ever. It looks like I was getting confused between global history (all the places I’ve been) and session history (all the places I’ve been this session).

nsIScrollable is an interface for manipulating the scroll position of a document.

So these magnificent seven new interfaces were the first steps in breaking up WebShell… what was next?

Enter the Container

nsIDocShellContainer was created so that the DocShells could be formed into a tree and enumerated, and so that child DocShells could be named. It was introduced in this commit.

about:face

In this commit, only five days after the first landing, Travis appears to reverse the decision to pass the responsibility of loading the document onto the caller of DocShell. LoadDocument and LoadDocumentVia are replaced by LoadURI and LoadURIVia. Steve Clark (aka “buster”) is also added to the authors list of the nsIDocShell interface. It’s not clear to me why this decision was reversed, but if I had to guess, I’d say it proved to be too much of a burden on the callers to load all of the documents themselves. Perhaps they punted on that goal, and decided to tackle it again later (though I will point out that today’s nsIDocShell still has LoadURI defined in it).

First implementor

The first implementation of nsIDocShell showed up on October 25, 1999. It was nsHTMLDocShell, and with the exception of nsIGlobalHistory, it implemented all of the other interfaces that I listed in Travis’ first landing.

The base implementation

On October 25th, the stubs of a DocShell base implementation showed up in the repository. The idea, I would imagine, is that for each of the document types that Gecko can display, we’d have a DocShell implementation, each inheriting from this DocShell base class and overriding only the things that needed specializing for their particular document type.

Later on, when the idea of having specialized DocShell implementations evaporates, this base class will end up being nsDocShell.cpp.

That same day, most of the methods were removed from the nsHTMLDocShell implementation, and nsHTMLDocShell was made to inherit from nsDocShellBase.
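Conceptually, the hierarchy being set up looked something like the sketch below – again, my illustration of the pattern, not the real code:

    // The per-document-type plan: a fat base class implementing common
    // DocShell behaviour, with thin subclasses overriding only what their
    // document type needs. (Illustrative; the real methods differed.)
    class nsDocShellBase : public nsIDocShell {
    public:
      NS_IMETHOD LoadURI(const PRUnichar* aURI) { /* shared loading logic */ return NS_OK; }
    };

    class nsHTMLDocShell : public nsDocShellBase {
    public:
      // Only the HTML-specific pieces get overridden here.
      NS_IMETHOD LoadURI(const PRUnichar* aURI) { /* HTML-specific handling */ return NS_OK; }
    };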

“Does not compile yet”

The message for this commit on October 27th, 1999 is pretty interesting. It reads:

added a bunch of initial implementation. does not compile yet, but that’s ok because docshell isn’t part of the build yet.

So not only are these patches (as far as I can tell) going unreviewed and not mapped to any bugs in the bug tracker, but they just straight-up do not build. They are not building on tinderbox.

This is in pretty stark contrast to today’s code conventions. While it’s true that we might land code whose code paths aren’t entered for most Nightly users, we usually hide such code behind an about:config pref so that developers can flip it on to test it. And I think it’s pretty rare (if it ever occurs) for us to land code in mozilla-central that’s not immediately put into the build system.

Perhaps the WebShell tests that were part of the Plan of Attack were being written in parallel and just hadn’t landed, but I suspect that they hadn’t been written at all at this point. I suspect that the team was trying to stand something up and make it partially work, and then write tests for WebShell and try to make them pass for both the old WebShell and DocShell. Or maybe just the latter.

These days, I think that’s probably how we’d go about such a major re-architecture / redesign / refactor; we’d write tests for the old component, land code that builds but is only entered via an about:config pref, and then work on porting the tests over to the new component. Once the tests pass for both, flip the pref for our Nightly users and let people test the new stuff. Once it feels stable, take it up the trains. And once it ships and it doesn’t look like anything is critically wrong with the new component, begin the process of removing the old component / tests and getting rid of the about:config pref.
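As a concrete illustration of that pref-gating pattern (the pref name here is entirely made up):

    #include "mozilla/Preferences.h"

    // Gate the new component behind an about:config pref. The pref name is
    // hypothetical -- any real landing would pick its own.
    static bool UseNewComponent() {
      // Defaults to false: Nightly users only enter the new code path
      // once they (or we) flip the pref.
      return mozilla::Preferences::GetBool("browser.newcomponent.enabled", false);
    }

    void DoThing() {
      if (UseNewComponent()) {
        // new, in-development code path
      } else {
        // old, battle-tested code path
      }
    }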

Note that I’m not at all bashing Travis or the other developers who were working on this stuff back then – I’m simply remarking on how far we’ve come in terms of development practices.

Remember AOL keywords?

Tangentially, I’m seeing some patches go by that have to do with hooking up some kind of “Keyword” support to WebShell.

Remember those keywords? This was the pre-Google era, when there were only a few simplistic search engines around, and people were still trying to solve discoverability of things on the web. Keywords was, I believe, AOL’s attempt at a solution.

You can read up on AOL Keywords here. I just thought it was interesting to find some Keywords support being written in here.

One DocShell to rule them all

The commit message for this checkin tells the story:

“Now that we have decided that there is only one docshell for all content types, we needed to get rid of the base class/ content type implementation. This checkin takes and moves the nsDocShellBase to be nsDocShell. It now holds the nsIHTMLDocShell stuff. This will be going away. nsCDocShell was created to replace the previous nsCHTMLDocShell.”

This commit lands on November 12th (almost a month from the first landing), and is the point where the DocShell-implementation-per-document-type plan breaks down. nsDocShellBase gets renamed to nsDocShell, and the nsIHTMLDocShell interface gets moved into nsIDocShell.idl, where a comment above it indicates that the interface will soon go away.

We have nsCDocShell.idl, but this interface will eventually disappear as well.

[PORKJOCKEY]

So, this commit message on November 13th caught my eye:

pork jockey paint fixes. bug=18140, r=kmcclusk,pavlov

What the hell is a “pork jockey”? A quick search around, and I see yet another reference to it in Bugzilla on bug 14928. It seems to be some kind of project… or code name…

I eventually found this ancient wiki page that documents some of the language used in bugs on Bugzilla, and it has an entry for “pork jockey”: a pork jockey bug is a “bug for work needed on infrastructure/architecture”.

I mentioned this in #developers, and dmose (who was hacking on Mozilla code at the time), explained:

16:52 (dmose) mconley: so, porkjockeys
16:52 (mconley) let’s hear it
16:52 (dmose) mconley: at some point long ago, there was some infrastrcture work that needed to happen
16:52 mconley flips on tape recorder
16:52 (dmose) and when people we’re talking about it, it seemed very hard to carry off
16:52 (dmose) somebody said that that would happen on the same day he saw pigs fly
16:53 (mconley) ah hah
16:53 (dmose) so ultimately the group of people in charge of trying to make that happen were…
16:53 (dmose) the porkjockeys
16:53 (dmose) which was the name of the mailing list too

Here’s the e-mail that Brendan Eich sent out to get the porkjockeys flying.

Development play-by-play

On November 17th, the nsIGenericWindow interface was removed because it was being implemented in widget/base/nsIBaseWindow.idl.

On November 27th, nsWebShell started to implement nsIBaseWindow, which helped pull a bunch of methods out of the WebShell implementations.

On November 29th, nsWebShell now implements nsIDocShell – so this seems to be the first point where the DocShell work gets brought into code paths that might be hit. This is particularly interesting, because he makes this change both to the WebShell implementation under the docshell/ folder and to the one under the webshell/ folder. This means that some of the DocShell code is now actually being used.

On November 30th, Travis lands a patch to remove some old commented-out code. The commit message mentions that the nsIDocShellEdit and nsIDocShellFile interfaces introduced in the first landing are now defunct. Nothing dives in to replace these interfaces straight away, so it seems he simply wasn’t worrying about that yet. The defunct interfaces are removed in this commit one day later.

On December 1st, WebShell (both the fork and the “live” version) is made to implement nsIDocShellContainer.

One day later, the nsIDocShellTreeNode interface is added to replace nsIDocShellContainer. The interface is almost identical to nsIDocShellContainer, except that it allows the caller to access child DocShells at particular indices, as opposed to just returning an enumerator.
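Paraphrased into C++ terms, the difference looks something like this (these are not the literal signatures from either interface):

    // nsIDocShellContainer: children only reachable by walking an enumerator.
    class nsIDocShellContainer : public nsISupports {
    public:
      NS_IMETHOD GetChildEnumerator(nsIEnumerator** aEnumerator) = 0;
    };

    // nsIDocShellTreeNode: nearly identical, but adds indexed access.
    // The exact child type here is a guess on my part.
    class nsIDocShellTreeNode : public nsISupports {
    public:
      NS_IMETHOD GetChildCount(PRInt32* aChildCount) = 0;
      NS_IMETHOD GetChildAt(PRInt32 aIndex, nsIDocShell** aChild) = 0;
    };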

December 3rd was a very big day! Highlights include:

Noticing something? A lot of these changes are getting dumped straight into the live version of WebShell (under the webshell/ directory). That’s not really what the Plan of Attack had spelled out, but that’s what appears to be happening. Perhaps all of this stuff was trivial enough that it didn’t warrant waiting for the WebShell fork to switch over.

On December 12th, nsIDocShellTreeOwner is introduced.

On December 15th, buster re-lands the nsIDocShellEdit and nsIDocShellFile interfaces that were removed on November 30th, but they’re called nsIContentViewerEdit and nsIContentViewerFile, respectively. Otherwise, they’re identical.

On December 17th, WebShell becomes a subclass of DocShell. This means that a bunch of things can get removed from WebShell, since they’re being taken care of by the parent DocShell class. This is a pretty significant move in the whole “replacing WebShell” strategy.

Similar work occurs on December 20th, where even more methods inside WebShell start to forward to the base DocShell class.

That’s the last bit of notable work during 1999. These next bits show up in the new year, and provide further proof that we didn’t all blow up during Y2K.

On February 2nd, 2000, a new interface called nsIWebNavigation shows up. This interface is still used to this day to navigate a browser, and to get information about whether it can go “forwards” or “backwards”.
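To give a flavour of why it has survived, here’s a trimmed, paraphrased C++ view of the interface’s core – the real IDL has more members, and some signatures have changed over the years:

    // The essentials of nsIWebNavigation, paraphrased and trimmed.
    class nsIWebNavigation : public nsISupports {
    public:
      NS_IMETHOD GetCanGoBack(bool* aCanGoBack) = 0;
      NS_IMETHOD GetCanGoForward(bool* aCanGoForward) = 0;
      NS_IMETHOD GoBack() = 0;
      NS_IMETHOD GoForward() = 0;
      NS_IMETHOD Reload(uint32_t aReloadFlags) = 0;
      NS_IMETHOD Stop(uint32_t aStopFlags) = 0;
    };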

On February 8th, a patch lands that makes nsGlobalWindow deal entirely in DocShells instead of WebShells. nsIScriptGlobalObject also now deals entirely with DocShells. This is a pretty big move, and the patch is sizeable.

On February 11th, more methods are removed from WebShell, since the refactorings and rearchitecture have made them obsolete.

On February 14th, for Valentine’s day, Travis lands a patch to have DocShell implement the nsIWebNavigation interface. Later on, he lands a patch that has WebShell relinquish further control, putting DocShell in charge of providing the script environment and the nsIScriptGlobalObjectOwner interface. Not much later, he lands a patch that implements the Stop method from the nsIWebNavigation interface for DocShell. It’s not being used yet, but it won’t be long now. Valentine’s day was busy!

On February 24th, more stuff (like the old Stop implementation) gets gutted from WebShell. Some, if not all, of those methods get forwarded to the underlying DocShell, unsurprisingly.

Similar story on February 29th, where a bunch of the scroll methods are gutted from WebShell, and redirected to the underlying DocShell. This one actually has a bug and some reviewers!9 Travis also landed a patch that gets DocShell set up to be able to create its own content viewers for various documents.

March 10th saw Travis gut plenty of methods from WebShell and redirect to DocShell instead. These include Init, SetDocLoaderObserver, GetDocLoaderObserver, SetParent, GetParent, GetChildCount, AddChild, RemoveChild, ChildAt, GetName, SetName, FindChildWithName, SetChromeEventHandler, GetContentViewer, IsBusy, SetDocument, StopBeforeRequestingURL, StopAfterURLAvailable, GetMarginWidth, SetMarginWidth, GetMarginHeight, SetMarginHeight, SetZoom, GetZoom. A few follow-up patches did something similar. That must have been super satisfying.

On March 11th, Travis removes the Back, Forward, CanBack and CanForward methods from WebShell. Consumers of those can use the nsIWebNavigation interface on the DocShell instead.

March 30th sees the nsIDocShellLoadInfo interface show up. This interface is for “specifying information used in a nsIDocShell::loadURI call”. This is certainly better than piling a huge number of arguments onto ::loadURI.
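This is the classic “parameter object” pattern: rather than growing ::loadURI’s signature one argument at a time, callers fill in a load-info object and pass it along. Roughly like this – a sketch based on how the interface was used in later years, so the details may not match the 2000-era code exactly:

    // Sketch of the parameter-object pattern nsIDocShellLoadInfo enables.
    nsCOMPtr<nsIDocShellLoadInfo> loadInfo;
    docShell->CreateLoadInfo(getter_AddRefs(loadInfo));
    loadInfo->SetReferrer(referrerURI);                    // one knob...
    loadInfo->SetLoadType(nsIDocShellLoadInfo::loadLink);  // ...another knob
    docShell->LoadURI(uri, loadInfo,
                      nsIWebNavigation::LOAD_FLAGS_NONE, PR_TRUE);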

During all of this, I’m seeing references to a “new session history” being worked on. I’m not really exploring session history (at this point), so I’m not highlighting those commits, but I do want to point out that a replacement for the old Netscape session history stuff was underway during all of this DocShell business, and the work intersected quite a bit.

On April 16th, Travis lands a commit that takes yet another big chunk out of WebShell in terms of loading documents and navigation. The new session history is now being used instead of the old.

The last 10% is the hardest part

We’re approaching what appears to be the end of the DocShell work. According to his LinkedIn profile, Travis left AOL in May 2000. His last commit to the repository before he left was on April 24th. Big props to Travis for all of the work he put in on this project – by the time he left, WebShell was quite a bit simpler than when he started. I somehow don’t think he reached the end state that he had envisioned when he’d written the original redesign document – the work doesn’t appear to be done. WebShell is still around (in fact, parts of it were around until only recently!10 ). Still, it was a hell of a chunk of work he put in.

And if I’m interpreting the rest of the commits after this correctly, there is a slow but steady drop off in large architectural changes, and a more concerted effort to stabilize DocShell, nsWebBrowser and nsWebShell. I suspect this is because everybody was buckling down trying to ship the first version of the Mozilla Suite (which finally occurred June 5th, 2002 – still more than 2 years down the road).

There are still some notable commits though. I’ll keep listing them off.

On June 22nd, a developer called “rpotts” lands a patch to remove the SetDocument method from DocShell, and to give the implementation / responsibility of setting the document on implementations of nsIContentViewer.

July 5th sees rpotts move the new session history methods from nsIWebNavigation to a new interface called nsIDocShellHistory. It’s starting to feel like the new session history is really heating up.

On July 18th, a developer named Judson Valeski lands a large patch with the commit message “webshell-docshell consolodation changes”. Paraphrasing from the bug, the point of this patch is to move WebShell into the DocShell lib to reduce the memory footprint. The patch also does a lot of cleanup of the DocShell code: declarations are moved into header files, the nsDocShellModule is greatly simplified with some macros, and it looks like some dead code is removed as well.

On November 9th, a developer named “disttsc” moves the nsIContentViewer interface from the webshell folder to the docshell folder, and converts it from a manually created .h to an .idl. The commit message states that this work is necessary to fix bug 46200, which was filed to remove nsIBrowserInstance (according to that bug, nsIBrowserInstance must die).

That’s probably the last big, notable change to DocShell during 2000.

2001: A DocShell Odyssey

On March 8th, a developer named “Dan M” moves the GetPersistence and SetPersistence methods from nsIWebBrowserChrome to nsIDocShellTreeOwner. He sounds like he didn’t really want to do it, or didn’t want to be pegged with the responsibility of the decision – the commit message states “embedding API review meeting made me do it.” This work was tracked in bug 69918.

On April 16th, Judson Valeski makes it so that the mimetypes that a DocShell can handle are not hardcoded into the implementation. Instead, handlers can be registered via the CategoryManager. This work was tracked in bug 40772.

On April 26th, a developer named Simon Fraser adds an implementation of nsISimpleEnumerator for DocShells. This implementation is called, unsurprisingly, nsDocShellEnumerator. This was for bug 76758. A method for retrieving an enumerator is added one day later in a patch that fixes a number of bugs, all related to the page find feature.

April 27th saw the first of the NSPR logging for DocShell get added to the code by a developer named Chris Waterson. Work for that was tracked in bug 76898.

On May 16th, for bug 79608, Brian Stell landed a getter and setter for the character set for a particular DocShell.

There’s a big gap here, where the majority of the landings are relatively minor bug fixes, cleanup, or only slightly related to DocShell’s purpose, and not worth mentioning.11

And beyond the infinite…

On January 8th, 2002, for bug 113970, Stephen Walker lands a patch that takes yet another big chunk out of WebShell, and adds this to the nsIWebShell.h header:

/**
 * WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING !!!!
 *
 * THIS INTERFACE IS DEPRECATED. DO NOT ADD STUFF OR CODE TO IT!!!!
 */

I’m actually surprised it took so long for something like this to get added to the nsIWebShell interface – though perhaps there was a shared understanding that nsIWebShell was shrinking, and such a notice wasn’t really needed.

On January 9th, 2003 (yes, a whole year later – I didn’t find much worth mentioning in the intervening time), I see the first reference to “deCOMtamination”, which is an effort to reduce the amount of XPCOM-style code being used. You can read up more on deCOMtamination here.

On January 13th, 2003, Nisheeth Ranjan lands a patch to use “machine learning” in order to order results in the urlbar autocomplete list. I guess this was the precursor to the frecency algorithm that the AwesomeBar uses today? Interestingly, this code was backed out again on February 21st, 2003 for reasons that aren’t immediately clear – but it looks like, according to this comment, the code was meant to be temporary in order to gather “weights” from participants in the Mozilla 1.3 beta, which could then be hard-coded into the product. The machine-learning bug got closed on June 25th, 2009 due to AwesomeBar and frecency.

On February 11th, 2004, the onbeforeunload event is introduced.

On April 17, 2004, gerv lands the first of several patches to switch the licensing of the code over to the MPL/LGPL/GPL tri-license. That’d be MPL 1.1.

On July 7th, 2004, timeless lands a patch that makes it so that embedders can disable all plugins in a document.

On November 23rd, 2004, the parent and treeOwner attributes for nsIDocShellTreeItem are made scriptable. They are read-only for script.

On December 8th, 2004, bz lands a patch that makes DocShell inherit from DocLoader, and starts a move to eliminate nsIWebShell, nsIWebShellContainer, and nsIDocumentLoader.

And that’s probably the last notable patch related to DocShell in 2004.

Lord of the rings

Reading through all of these commits in the docshell/ and webshell/ folders is a bit like taking a core sample of a very mature tree, and reading its rings. I can see some really important events occurring as I flip through these commits – from the very birth of Mozilla, to the birth of XPCOM and XUL, to porkjockeys, and the start of the embedding efforts… all the way to the splitting out of Thunderbird, deCOMtamination and introduction of the MPL. I really got a sense of the history of the Mozilla project reading through these commits.

I feel like I’m getting a better and better sense of where DocShell came from, and why it does what it does. I hope if you’re reading this, you’re getting that same sense.

Stay tuned for the next bit, where I look at years 2005 to 2010.


  1. Messages like: ERROR: A CVS repository cannot contain both cvsroot/mozilla/browser/base/content/metadata.js,v and cvsroot/mozilla/browser/base/content/Attic/metadata.js,v, for example 

  2. Those commits have hashes 575e116287f99fbe26f54ca3f3dbda377002c5e7 and 60567bb185eb8eea80b9ab08d8695bb27ba74e15 if you want to follow along at home. 

  3. Strangely enough, that docshell folder wasn’t even being built. It was just sitting there, containing a copy of nsWebShell.cpp. That copy mirrored the one under webshell/, and the two copies were kept in sync. Maybe DocShell was a thing that somebody had always wanted to do but never got around to until Travis came along. 

  4. Mozilla Suite 1.0 eventually ships on June 5th, 2002 

  5. According to Wikipedia, Netscape was purchased by AOL on March 17, 1999 

  6. That wiki page was first posted on October 8th, 1999 – exactly a week before the first docshell work had started. 

  7. I should point out that there is no mention of this embedding work in the first roadmap that was published for the Mozilla project. It was only in the second roadmap, published almost a year after the DocShell work began, that embedding was mentioned, albeit briefly. 

  8. More clues as to what WebShell was originally intended for are also in that interface file:

    /**
     * The web shell is a container for implementations of nsIContentViewer.
     * It is a content-viewer-container and also knows how to delegate certain
     * behavior to an nsIWebShellContainer.
     *
     * Web shells can be arranged in a tree.
     *
     * Web shells are also nsIWebShellContainer's because they can contain
     * other web shells.
     */
    

    So that helps clear things up a bit. I think. 

  9. travis reviewed it, and the patch was approved by rickg. Not sure what approval meant back then… was that like a super review? 

  10. This is the last commit to the webshell folder before it got removed on September 30th, 2010. 

  11. Hopefully I didn’t miss anything interesting in there! 

August 10, 2014 11:04 PM

August 06, 2014

Mike Conley

Bugnotes

Over the past few weeks, I’ve been experimenting with taking notes on the bugs I’ve been fixing in Evernote.

I’ve always taken notes on my bugs, but usually in some disposable text file that gets tossed away once the bug is completed.

Evernote gives me more powers, like embedded images, checkboxes, etc. It’s really quite nice, and it lets me export to HTML.

Now that I have these notes, I thought it might be interesting to share them. If I have notes on a bug, here’s what I’m going to aim to do when the bug is closed:

I’ve just posted my first bugnote. It’s raw, unedited, and probably a little hard to follow if you don’t know what I’m working on. I thought maybe it’d be interesting to see what’s going on in my head as I fix a bug. And who knows, if somebody needs to re-investigate a bug or a related bug down the line, these notes might save some people some time.

Anyhow, here are my bugnotes. And if you’re interested in doing something similar, you can fork it (I’m using Jekyll for static site construction).


  1. I didn’t want to pollute this blog with them (plus, dumping these files into WordPress seemed to be a bit heavy)  

August 06, 2014 05:33 PM

Joshua Cranmer

Why email is hard, part 7: email security and trust

This post is part 7 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. This part discusses the problem of trust.

At a technical level, S/MIME and PGP (or at least PGP/MIME) use cryptography essentially identically. Yet the two are treated as radically different models of email security because they diverge on the most important question of public key cryptography: how do you trust the identity of a public key? Trust is critical, as it is the only way to stop an active, man-in-the-middle (MITM) attack. MITM attacks are actually easier to pull off in email, since all email messages effectively have to pass through both the sender's and the recipients' email servers [1], allowing attackers to pull off permanent, long-lasting MITM attacks [2].

S/MIME uses the same trust model that SSL uses, based on X.509 certificates and certificate authorities. An X.509 certificate effectively says who you are, and is signed by another authority that vouches for that claim. In the original concept (as you might guess from the name "X.509"), the trusted authority was your telecom provider, and the certificates were furthermore intended to be a part of the global X.500 directory—a natural extension of the OSI internet model. The OSI model of the internet never gained traction, and the trusted telecom providers were replaced with trusted root CAs.

PGP, by contrast, uses a trust model that's generally known as the Web of Trust. Every user has a PGP key (containing their identity and their public key), and users can sign others' public keys. Trust generally flows from these signatures: if you trust a user, you know the keys that they sign are correct. The name "Web of Trust" comes from the vision that trust flows along the paths of signatures, building a tight web of trust.

And now for the controversial part of the post, the comparisons and critiques of these trust models. A disclaimer: I am not a security expert, although I am a programmer who revels in dreaming up arcane edge cases. I also don't use PGP at all, and use S/MIME to a very limited extent for some Mozilla work [3], although I made a few abortive attempts to dogfood it in the past. I've attempted to replace personal experience with comprehensive research [4], but most existing critiques and comparisons of these two trust models are about 10-15 years old and predate several changes to CA certificate practices.

A basic tenet of development that I have found is that the average user is fairly ignorant. At the same time, a lot of the defense of trust models, both CAs and Web of Trust, tends to hinge on configurability. How many people, for example, know how to add or remove a CA root from Firefox, Windows, or Android? Even among the subgroup of Mozilla developers, I suspect the number of people who know how to do so are rather few. Or in the case of PGP, how many people know how to change the maximum path length? Or even understand the security implications of doing so?

Seen in the light of ignorant users, the Web of Trust is a UX disaster. Its entire security model is predicated on having users precisely specify how much they trust other people to trust others (ultimate, full, marginal, none, unknown) and also on having them continually do out-of-band verification procedures and publicly reporting those steps. In 1998, a seminal paper on the usability of a GUI for PGP encryption came to the conclusion that the UI was effectively unusable for users, to the point that only a third of the users were able to send an encrypted email (and even then, only with significant help from the test administrators), and a quarter managed to publicly announce their private keys at some point, which is pretty much the worst thing you can do. They also noted that the complex trust UI was never used by participants, although the failure of many users to get that far makes generalization dangerous [5]. While newer versions of security UI have undoubtedly fixed many of the original issues found (in no small part due to the paper, one of the first to argue that usability is integral, not orthogonal, to security), I have yet to find an actual study on the usability of the trust model itself.

The Web of Trust has other faults. The notion of "marginal" trust, it turns out, is rather broken: if you marginally trust a user who has two keys which both signed another person's key, that's the same as fully trusting a user with one key who signed that key. There are several proposals for different trust formulas [6], but none of them have caught on in practice to my knowledge.
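To make that flaw concrete, here is a sketch of the classic validity computation. The thresholds are configurable – classic PGP defaulted to 1 complete or 2 marginal signatures, GnuPG defaults to 1 or 3 – and the bug falls out of counting signing keys rather than humans:

    #include <vector>

    enum class OwnerTrust { None, Marginal, Full };

    // A key is considered valid if it carries enough signatures from
    // fully- or marginally-trusted keys. Note that we count *keys*: one
    // person holding two marginally-trusted keys contributes two marginal
    // signatures, which is exactly the flaw described above.
    bool IsKeyValid(const std::vector<OwnerTrust>& signerTrusts,
                    int completesNeeded = 1, int marginalsNeeded = 2) {
      int full = 0, marginal = 0;
      for (OwnerTrust t : signerTrusts) {
        if (t == OwnerTrust::Full)     full++;
        if (t == OwnerTrust::Marginal) marginal++;
      }
      return full >= completesNeeded || marginal >= marginalsNeeded;
    }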

A hidden fault is associated with its manner of presentation: in sharp contrast to CAs, the Web of Trust appears to not delegate trust, but any practical widespread deployment needs to solve the problem of contacting people who have had no prior contact. Combined with the need to bootstrap new users, this implies that there needs to be some keys that have signed a lot of other keys that are essentially default-trusted—in other words, a CA, a fact sometimes lost on advocates of the Web of Trust.

That said, a valid point in favor of the Web of Trust is that it more easily allows people to distrust CAs if they wish to. While I'm skeptical of its utility to a broader audience, the ability to do so is crucial for a not-insignificant portion of the population, and it's important enough to be explicitly called out.

X.509 certificates are most commonly discussed in the context of SSL/TLS connections, so I'll discuss them in that context as well, as the implications for S/MIME are mostly the same. Almost all criticism of this trust model essentially boils down to a single complaint: certificate authorities aren't trustworthy. A historical criticism is that the addition of CAs to the main root trust stores was ad-hoc. Since then, however, the main oligopoly of these root stores (Microsoft, Apple, Google, and Mozilla) have made their policies public and clear [7]. The introduction of the CA/Browser Forum in 2005, with a collection of major CAs and the major browsers as members [8], helps in articulating common policies. These policies, simplified immensely, boil down to:

  1. You must verify information (depending on certificate type). This information must be relatively recent.
  2. You must not use weak algorithms in your certificates (e.g., no MD5).
  3. You must not make certificates that are valid for too long.
  4. You must maintain revocation checking services.
  5. You must have fairly stringent physical and digital security practices and intrusion detection mechanisms.
  6. You must be [externally] audited every year that you follow the above rules.
  7. If you screw up, we can kick you out.

I'm not going to claim that this is necessarily the best policy or even that any policy can feasibly stop intrusions from happening. But it's a policy, so CAs must abide by some set of rules.

Another CA criticism is the fear that they may be suborned by national government spy agencies. I find this claim underwhelming, considering that the number of certificates acquired by intrusions that were used in the wild is larger than the number of certificates acquired by national governments that were used in the wild: 1 and 0, respectively. Yet no one complains about the untrustworthiness of CAs due to their ability to be hacked by outsiders. Another attack is that CAs are controlled by profit-seeking corporations, which misses the point because the business of CAs is not selling certificates but selling their access to the root databases. As we will see shortly, jeopardizing that access is a great way for a CA to go out of business.

To understand issues involving CAs in greater detail, there are two CAs that are particularly useful to look at. The first is CACert. CACert is favored by many for its attempt to handle X.509 certificates in a Web of Trust model, so invariably every public discussion about CACert ends up devolving into an attack on other CAs for their perceived capture by national governments or corporate interests. Yet what many of the proponents for inclusion of CACert miss (or dismiss) is the fact that CACert actually failed the required audit, and it is unlikely to ever pass one. This shows a central failure of both CAs and the Web of Trust: different people have different definitions of "trust," and in the case of CACert, some people are favoring a subjective definition (I trust their owners because they're not evil) when an objective definition fails (in this case, that the root signing key is securely kept).

The other CA of note here is DigiNotar. In July 2011, some hackers managed to acquire a few fraudulent certificates by hacking into DigiNotar's systems. By late August, people had become aware of these certificates being used in practice [9] to intercept communications, mostly in Iran. The use appears to have been caught after Chromium updates failed due to invalid certificate fingerprints. After it became clear that the fraudulent certificates were not limited to a single fake Google certificate, and that DigiNotar had failed to notify potentially affected companies of its breach, DigiNotar was swiftly removed from all of the trust databases. It ended up declaring bankruptcy within two weeks.

DigiNotar indicates several things. One, SSL MITM attacks are not theoretical (I have seen at least two or three security experts advising pre-DigiNotar that SSL MITM attacks are "theoretical" and therefore the wrong target for security mechanisms). Two, keeping the trust of browsers is necessary for commercial operation of CAs. Three, the notion that a CA is "too big to fail" is false: DigiNotar played an important role in the Dutch community as a major CA and the operator of Staat der Nederlanden. Yet when DigiNotar screwed up and lost its trust, it was swiftly kicked out despite this role. I suspect that even Verisign could be kicked out if it manages to screw up badly enough.

This isn't to say that the CA model isn't problematic. But the source of its problems is that delegating trust isn't a feasible model in the first place, a problem that it shares with the Web of Trust. Different notions of what "trust" actually means and the uncertainty that gets introduced as chains of trust get longer both make delegating trust weak to both social engineering and technical engineering attacks. There appears to be an increasing consensus that the best way forward is some variant of key pinning, much akin to how SSH works: once you know someone's public key, you complain if that public key appears to change, even if it appears to be "trusted." This does leave people open to attacks on first use, and the question of what to do when you need to legitimately re-key is not easy to solve.
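Key pinning in miniature looks something like the sketch below – trust-on-first-use, as SSH does it. This is an illustration of the scheme, not any particular implementation:

    #include <map>
    #include <string>

    enum class PinCheck { FirstUse, Match, Mismatch };

    // Remember the first fingerprint seen for a peer; complain if it ever
    // changes, even if the new key chains to a "trusted" authority.
    PinCheck CheckPin(std::map<std::string, std::string>& pinStore,
                      const std::string& peer,
                      const std::string& fingerprint) {
      auto it = pinStore.find(peer);
      if (it == pinStore.end()) {
        pinStore[peer] = fingerprint;  // the attack-on-first-use window
        return PinCheck::FirstUse;
      }
      if (it->second == fingerprint) {
        return PinCheck::Match;
      }
      // Possible MITM -- or a legitimate re-key, which is the hard case.
      return PinCheck::Mismatch;
    }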

In short, both CAs and the Web of Trust have issues. Whether or not you should prefer S/MIME or PGP ultimately comes down to the very conscious question of how you want to deal with trust—a question without a clear, obvious answer. If I appear to be painting CAs and S/MIME in a positive light and the Web of Trust and PGP in a negative one in this post, it is more because I am trying to focus on the positions less commonly taken to balance perspective on the internet. In my next post, I'll round out the discussion on email security by explaining why email security has seen poor uptake and answering the question as to which email security protocol is most popular. The answer may surprise you!

[1] Strictly speaking, you can bypass the sender's SMTP server. In practice, this is considered a hole in the SMTP system that email providers are trying to plug.
[2] I've had 13 different connections to the internet in the same time as I've had my main email address, not counting all the public wifis that I have used. Whereas an attacker would find it extraordinarily difficult to intercept all of my SSH sessions for a MITM attack, intercepting all of my email sessions is clearly far easier if the attacker were my email provider.
[3] Before you read too much into this personal choice of S/MIME over PGP, it's entirely motivated by a simple concern: S/MIME is built into Thunderbird; PGP is not. As someone who does a lot of Thunderbird development work that could easily break the Enigmail extension locally, needing to use an extension would be disruptive to workflow.
[4] This is not to say that I don't heavily research many of my other posts, but I did go so far for this one as to actually start going through a lot of published journals in an attempt to find information.
[5] It's questionable how well the usability of a trust model UI can be measured in a lab setting, since the observer effect is particularly strong for all metrics of trust.
[6] The web of trust makes a nice graph, and graphs invite lots of interesting mathematical metrics. I've always been partial to eigenvectors of the graph, myself.
[7] Mozilla's policy for addition to NSS is basically the standard policy adopted by all open-source Linux or BSD distributions, seeing as OpenSSL never attempted to produce a root database.
[8] It looks to me that it's the browsers who are more in charge in this forum than the CAs.
[9] To my knowledge, this is the first—and so far only—attempt to actively MITM an SSL connection.

August 06, 2014 03:39 AM

July 31, 2014

Kent James

Thunderbird’s Future: the TL;DR Version

In the next few months I hope to do a series of blog posts that talk about Mozilla’s Thunderbird email client and its future. Here’s the TL;DR version (though still pretty long). These are my personal views; I have no authority to speak for Mozilla or for the Thunderbird project.

Current Status

Where We Need to Go

How We Get There

The Thunderbird team is currently planning to get together in Toronto in October 2014, and Mozilla staff are trying to plan an all-hands meeting sometime soon. Let’s discuss the future in conjunction with those events, to make sure that in 2015 we have a sustainable plan for the future.


July 31, 2014 08:16 PM

July 25, 2014

Instantbird

Linux nightly builds back!

Back in March, we posted that we had started building nightly builds from mozilla-central/comm-central, but because the version of CentOS we had been using was too old, we were unable to continue providing Linux nightly builds. That has now changed and (as of today) we have both 32-bit and 64-bit Linux nightlies! Since this involved us installing a new operating system (CentOS 6.2) and tweaking some of the build configuration for Linux, please let us know if you see any issues! Additionally, some more up-to-date features that have been available in Mozilla Firefox for a while should now be available in Instantbird (e.g. DBus and PulseAudio support), and even some minor bugs were fixed!

Sorry that this took so long, but go grab your updated copy now!

July 25, 2014 01:07 PM

July 22, 2014

Calendar

Lightning 3.3 is Out the Door

I am happy to announce that Lightning 3.3, a new major release, is out the door. Here are a few release highlights:

There have also been a lot of changes in the backend that are not visible to the user. This includes better testing framework support, which will help avoid regressions in the future. A total of 103 bugs have been fixed since Lightning 2.6.

When installing or updating to Thunderbird 31, you should automatically receive the upgrade to Lightning 3.3. If something goes wrong, you can get the new versions here:

Should you be using SeaMonkey, you will have to wait for the 2.28 release, which is postponed as per this thread.

If you encounter any major issues, please comment on this blog post. Support issues are handled on support.mozilla.org. Feature requests and bug reports can be made on bugzilla.mozilla.org in the product Calendar. Be sure to search for existing bugs before you file them.

Addons Update:

There are a number of addons that have compatibility issues with Lightning 3.3. The authors have been notified and a few first fixes are available:

Other Updates:

July 22, 2014 05:43 PM

July 17, 2014

Kent James

Following Wikipedia, Thunderbird Could Raise $1,600,000 in annual donations

What will it take to keep Thunderbird stable and vibrant? Although there is a dedicated, hard-working team of volunteers trying hard to keep Thunderbird alive, there has been very little progress on improvements since Mozilla drastically reduced their funding. I’ve been an advocate for some time that Thunderbird needs income to fulfill its potential, and that the best way to generate that income would be to appeal directly to its users for donations.

One internet organization that has done this successfully is Wikipedia. How much income could Thunderbird generate if it received the same income per user as Wikipedia? Surely our users, who rely on Thunderbird for critical daily communications, are at least as willing to donate as Wikipedia users.

Estimates of income from Wikipedia’s annual fundraising drive to users are around $20,000,000 per year. Recently Wikipedia reported 11,824 million pageviews per month and 5 pageviews per user per day. That works out to roughly 78 million daily users (11,824M pageviews ÷ 30 days ≈ 394M daily pageviews; ÷ 5 pageviews per user ≈ 78M). Thunderbird by contrast has about 6 million daily users (using hits per day to update checks), or about 8% of the daily users of Wikipedia.

If Thunderbird were willing to directly engage users asking for donations, at the same rate per user as Wikipedia, there is the potential to raise $1,600,000 per year (8% of Wikipedia’s $20,000,000). That would certainly be enough income to maintain a serious team to move forward.

Wikipedia’s donation requests were fairly intrusive, with large banners at the top of all Wikipedia pages. When Firefox did a direct appeal to users early this year, the appeal was very subtle (did you even notice it?). I tried to scale the Firefox results to Thunderbird, and estimated that a similar subtle appeal might raise $50,000 – $100,000 per year for Thunderbird. That is not sufficient to make a significant impact. We would have to be willing to be a little intrusive, like Wikipedia, if we are going to be successful. This will generate pushback, as Wikipedia’s campaign has, so we would have to be willing to live with it.

But is it really in the best interest of our users to spare them an annual, slightly intrusive appeal for donations, while letting the product that they depend on each day slowly wither away? I believe that if we truly care about our users, we will take the necessary steps to ensure that we give them the best product possible, including undertaking fundraising to keep the product stable and vibrant.

July 17, 2014 06:31 AM

July 11, 2014

Kent James

The Thunderbird Tree is Green!

For the first time in a while, the Thunderbird build tree is all green. That means that all platforms are building, and passing all tests:

The Thunderbird build tree is green!

Many thanks to Joshua Cranmer for all of his hard work to make it so!

July 11, 2014 07:05 PM

Ludovic Hirlimann

Thunderbird 31 coming soon to you and needs testing love

We just released the second beta of Thunderbird 31. Please help us improve Thunderbird quality by uncovering bugs now in Thunderbird 31 beta so that developers have time to fix them.

There are two ways you can help:

- Use Thunderbird 31 beta in your daily activities. For problems that you find, file a bug report that blocks our tracking bug 1008543.

- Use Thunderbird 31 beta to do formal testing. Use the moztrap testing system to run tests: choose “run tests”, find the Thunderbird product, and choose the 31 test run.

Visit https://etherpad.mozilla.org/tbird31testing for additional information, and to post your testing questions and results.

Thanks for contributing and helping!

Ludo for the QA team

Updated links

July 11, 2014 10:39 AM

July 09, 2014

Rumbling Edge - Thunderbird

2014-07-08 Calendar builds

Common (excluding Website bugs)-specific: (2)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

July 09, 2014 07:21 AM

2014-07-08 Thunderbird comm-central builds

Thunderbird-specific: (14)

MailNews Core-specific: (17)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

July 09, 2014 07:20 AM

July 07, 2014

Mike Conley

DocShell in a Nutshell – Part 1: Original Intents

I think in order to truly understand what the DocShell currently is, we have to find out where the idea of creating it came from. That means going way, way back to its inception, and figuring out what its original purpose was.

So I’ve gone back, peered through various archived wiki pages, newsgroup and mailing list posts, and I think I’ve figured out that original purpose.1

The original purpose can be, I believe, summed up in a single word: embedding.

Embedding

Back in the late 90’s, sometime after the Mozilla codebase was open-sourced, it became clear to some folks that the web was “going places”. It was the “bee’s knees”. It was the “cat’s pajamas”. As such, it was likely that more and more desktop applications were going to need to be able to access and render web content.

The thing is, accessing and rendering web content is hard. Really hard. One does not simply write web browsing capabilities into their application from scratch hoping for the best. Heartbreak is in that direction.

Instead, the idea was that pre-existing web engines could be embedded into other applications. For example, Steam, Valve’s game distribution platform, displays a ton of web content in its user interface. All of those Steam store pages? Those are web pages! They’re using an embedded web engine in order to display that stuff.2

So making Gecko easily embeddable was, at the time, a real goal, and a real project.

nsWebShell

The problem was that embedding Gecko was painful. The top-level component that embedders needed to instantiate and communicate with was called “nsWebShell”, and it was pretty unwieldy. Lots of knowledge about the internal workings of Gecko leaked through the nsWebShell component, and its interface changed far too often.

It was also inefficient – the nsWebShell didn’t just represent the top-level “thing that loads web content”. Instances of nsWebShell were also used recursively for subdocuments within those documents – for example, (i)frames within a webpage. These nested nsWebShells formed a tree. That’s all well and good, except for the fact that there were things that the nsWebShell loaded or did that only the top-level nsWebShell really needed to load or do. So there was definitely room for some performance improvement.

In order to correct all of these issues, a plan was concocted to retire nsWebShell in favour of several new components and a slew of new interfaces. Two of those new components were nsDocShell and nsWebBrowser.

nsWebBrowser

nsWebBrowser would be the thing that embedders would drop into their applications – it would be the browser, and would do all of the loading / doing of things that only the top-level web browser needed to do.

The interface for nsWebBrowser would be minimal, just exposing enough so that an embedder could drop one into their application with little fuss, point it at a URL, set up some listeners, and watch it dance.
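In rough C++ terms, the dream looked something like the sketch below. This is paraphrased from era-appropriate embedding samples, so don’t treat the exact signatures as gospel:

    // Sketch of the intended embedding story (signatures approximate).
    nsCOMPtr<nsIWebBrowser> browser =
        do_CreateInstance("@mozilla.org/embedding/browser/nsWebBrowser;1");

    // Parent the browser inside a native widget supplied by the host app.
    nsCOMPtr<nsIBaseWindow> window = do_QueryInterface(browser);
    window->InitWindow(nativeParentWindow, nullptr, 0, 0, 800, 600);
    window->Create();
    window->SetVisibility(PR_TRUE);

    // Point it at a URL, and watch it dance.
    nsCOMPtr<nsIWebNavigation> nav = do_QueryInterface(browser);
    nav->LoadURI(u"https://example.org/",
                 nsIWebNavigation::LOAD_FLAGS_NONE,
                 nullptr,   // referrer
                 nullptr,   // POST data
                 nullptr);  // extra headers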

nsDocShell

nsDocShell would be… well, everything else that nsWebBrowser wasn’t. So that dumping ground that was nsWebShell would get dumped into nsDocShell instead. However, a number of new, logically separated interfaces would be created for nsDocShell.

Examples of those interfaces were:

So instead of a gigantic, ever changing interface, you had lots of smaller interfaces, many of which could eventually be frozen over time (which is good for embedders).

These interfaces also made it possible to shield embedders from various internals of the nsDocShell component that embedders shouldn’t have to worry about.

Ok, but… what was it?

But I still haven’t answered the question – what was the DocShell at this point? What was it supposed to do now that it had been created?

This ancient wiki page spells it out nicely:

This class is responsible for initiating the loading and viewing of a document.

This document also does a good job of describing what a DocShell is and does.

Basically, any time a document is to be viewed, a DocShell needs to be created to view it. We create the DocShell, and then we point that DocShell at the URL, and it does the job of kicking off communications via the network layer, and dealing with the content once it comes back.

So it’s no wonder that it was (and still is!) a dumping ground – when it comes to loading and displaying content, nsDocShell is the central nexus point of communications for all components that make that stuff happen.

I believe that was the original purpose of nsDocShell, anyhow.

And why “shell”?

This is a simple question that has been on my mind since I started this. What does the “shell” mean in nsDocShell?

Y’know, I think it’s actually a fragment left over from the embedding work, and that it really has no meaning anymore. Originally, nsWebShell was the separation point between an embedder and the Gecko web engine – so I think I can understand what “shell” means in that context – it’s the touch-point between the embedder, and the embedee.

I think nsDocShell was given the “shell” moniker because it did the job of taking over most of nsWebShell’s duties. However, since nsWebBrowser was now the touch-point between the embedder and embedee… maybe shell makes less sense. I wonder if we missed an opportunity to name nsDocShell something better.

In some ways, “shell” might make some sense because it is the separation between various documents (the root document, any sibling documents, and child documents)… but I think that’s a bit of a stretch.

But now I’m racking my brain for a better name (even though a rename is certainly not worth it at this point), and I can’t think of one.

What would you rename it, if you had the chance?

What is nsDocShell doing now?

I’m not sure what’s happened to nsDocShell over the years, and that’s the point of the next few posts in this series. I’m going to be going through the commits hitting nsDocShell from 1999 until the present day to see how nsDocShell has changed and evolved.

Hold on to your butts.

Further reading

The above was gleaned from the following sources:


  1. I’m very much prepared to be wrong about any / all of this. I’m making assertions and drawing conclusions by reading and interpreting things that other people have written about DocShell – and if the telephone game is any indication, this indirect analysis can be lossy. If I have misinterpreted, misunderstood, or completely missed the point in any of the above, please don’t hesitate to comment, and I will correct it forthwith. 

  2. They happen to be using WebKit, the same web engine that powers Safari, and (until recently) Chromium. According to this, they’re using the Chromium Embedding Framework to display this web content. There are a number of applications that embed Gecko. Firefox is the primary consumer of Gecko. Thunderbird is another obvious one – when you display HTML email, it’s using the Gecko web engine in order to lay it out and display it. WINE uses Gecko to allow Windows-binaries to browse the web. Making your web engine embeddable, however, has a development cost, and over the years, making Gecko embeddable seems to have become less of a priority. Servo is a next-generation web browser engine from Mozilla Research that aims to be embeddable. 

July 07, 2014 05:12 PM

Blake Winton

Figuring out where things are in an image.

People love heatmaps.

They’re a great way to show how much various UI elements are used in relation to each other, and are much easier to read at a glance than a table of click-counts would be. They can also reveal hidden patterns of usage based on the locations of elements, let us know if we’re focusing our efforts on the correct elements, and tell us how effective our communication about new features is. Because they’re so useful, one of the things I am doing in my new role is setting up the framework to provide our UX team with automatically updating heatmaps for both Desktop and Android Firefox.

Unfortunately, we can’t just wave our wands and have a heatmap magically appear. Creating them takes work, and one of the most tedious parts is figuring out where each element starts and stops. Even worse, we need to repeat the process for each platform we want to display a heatmap for. This is one of the primary reasons we haven’t run a heatmap study since 2012.

In order to not spend all my time generating the heatmaps, I had to reduce the effort involved in producing these visualizations.

Being a programmer, my first inclination was to write a program to calculate them, and that sort of worked for the first version of the heatmap, but there were some difficulties. To collect locations for all the elements, we had to display all the elements.

Firefox in the process of being customized

Customize mode (as shown above) was an obvious choice since it shows everything you could click on almost by definition, but it led people to think that we weren’t showing which elements were being clicked the most, but instead which elements people customized the most. So that was out.

Next we tried putting everything in the toolbar, or the menu, but those were a little too cluttered even without leaving room for labels, and too wide (or too tall, in the case of the menu).

A shockingly busy toolbar

Similarly, I couldn’t fit everything into the menu panel either. The only solution was to resort to some Photoshop-trickery to fit all the buttons in, but that ended up breaking the script I was using to locate the various elements in the UI.

A surprisingly tall menu panel

Since I couldn’t automatically figure out where everything was, I figured we might as well use a nicely-laid out, partially generated image, and calculate the positions (mostly-)manually.

The current version of the heatmap (Note: This is not the real data.)

I had foreseen the need for different positions for the widgets when the project started, and so I put the widget locations in their own file from the start. This meant that I could update them without changing the code, which made it a little nicer to see what’s changed between versions, but still required me to reload the whole page every time I changed a position or size, which would just have taken way too long. I needed something that could give me much more immediate feedback.

Fortunately, I had recently finished watching a series of videos from Ian Johnson (@enjalot on twitter) where he used a tool he made called Tributary to do some rapid prototyping of data visualization code. It seemed like a good fit for the quick moving around of elements I was trying to do, and so I copied a bunch of the code and data in, and got to work moving things around.

I did encounter a few problems: Tributary wasn’t functional in Firefox Nightly (but I could use Chrome as a workaround), and occasionally trying to move the cursor would change the value slider instead. Even with these glitches it only took me an hour or two to get from set-up to having all the numbers for the final result! And the best part is that since it's all open source, you can take a look at the final result, or fork it yourself!

July 07, 2014 03:53 PM