Planet Thunderbird

May 23, 2015

Mike Conley

Things I’ve Learned This Week (May 18 – May 22, 2015)

You might have noticed that I had no “Things I’ve Learned This Week” post last week. Sorry about that – by the end of the week, I looked at my Evernote of “lessons from the week”, and it was empty. I’m certain I’d learned stuff, but I just failed to write it down. So I guess the lesson I learned last week was, always write down what you learn.

How to make your mozilla-central Mercurial clone work faster

I like Mercurial. I also like Git, but recently, I’ve gotten pretty used to Mercurial.

One complaint I hear over and over (and I’m guilty of it myself sometimes) is that “Mercurial is slow”. I’ve even experienced that slowness during some of my Joy of Coding episodes.

This past week, I was helping my awesome new intern get set up to tear into some e10s bugs, and at some point we went through this document to get her .hgrc all set up.

This document did not exist when I first started working with Mercurial – back then, I was using mq or sometimes pbranch, and grumbling about how I missed Git.

But there is some gold in this document.

gps has been doing some killer work documenting best practices with Mercurial, and this document is one of the results of his labour.

The part that’s really made the difference for me is the hgwatchman bit.

watchman is a tool that some folks at Facebook wrote to monitor changes in a folder. hgwatchman is an extension for Mercurial that takes advantage of watchman for a repository, smartly precomputing a bunch of stuff when the folder changes so that when you fire a command, like

hg status

it takes a fraction of the time it’d take without hgwatchman. A fraction.

Here’s how I set hgwatchman up on my MacBook (though you should probably go by the Mercurial for Mozillians doc as the official reference):

  1. Install watchman with brew:
    brew install watchman
  2. Clone the hgwatchman extension to some folder that you can easily remember and build it:
    hg clone https://bitbucket.org/facebook/hgwatchman
    cd hgwatchman
    make local
  3. Add the following lines to my user .hgrc:
    [extensions]
    hgwatchman = cloned-in-dir/hgwatchman/hgwatchman
  4. Make sure the extension is properly installed by running:
    hg help extensions
  5. hgwatchman should be listed under “enabled extensions”. If it didn’t work, keep in mind that the path in your .hgrc needs to point at the hgwatchman directory inside the clone, not the top of the clone itself
  6. And then in my mozilla-central clone’s .hg/hgrc:
    [watchman]
    mode = on
  7. Boom, you’re done!

Congratulations, hg should feel snappier now!
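
If you want a rough, unscientific sense of the difference, time a status call before and after enabling the extension (the numbers vary a lot by machine and repository state – this is just the shape of the comparison):

time hg status    # before: often several seconds on a mozilla-central clone
time hg status    # after: typically well under a second once watchman has warmed up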

Next step is to try out this chg thing, though I’m having some issues still.

May 23, 2015 09:54 PM

The Joy of Coding (Ep. 15): OS X Printing Returns

In Episode 15, we kept working on the same bug as the last two episodes – proxying the printing dialog on OS X to the parent process from the content process. At the end of Episode 14, we’d finished the serialization bits, and put in the infrastructure for deserialization. In this episode, we did the rest of the deserialization work.

And then we attempted to print a test page. And it worked!

We did it!

Then, we cleaned up the patches and posted them up for review. I had a lot of questions about my Objective-C++ stuff, specifically with regards to memory management (it seems as if some things in Objective-C++ are memory managed, and it’s not immediately obvious what that applies to). So I’ve requested review, and I hope to hear back from someone more experienced soon!

I also plugged a new show that’s starting up! If you’re a designer, and want to see how a designer at Mozilla does their work, you’ll love The Design Hour, by Ricardo Vazquez. His design chops are formidable, and he shows you exactly how he operates. It’s great!

Finally, I failed to mention that I’m on holiday next week, so I can’t stream live. I have, however, pre-recorded a shorter Episode 16, which should air at the right time slot next week. The show must go on!

Episode Agenda

References

Bug 1091112 – Print dialog doesn’t get focus automatically, if e10s is enabled – Notes

May 23, 2015 03:26 PM

May 20, 2015

Mike Conley

Lost in Data!

Keeping Firefox zippy involves running performance tests on each push to make sure we’re not making Firefox slower.

How does that even work? This used to be a mystery. NO LONGER. jmaher lets you peek behind the curtain here in the first episode of Lost in Data!

May 20, 2015 01:36 AM

May 17, 2015

Mike Conley

The Joy of Coding (Ep. 14): More OS X Printing

In this episode, I kept working on the same bug as last week – proxying the print dialog from the content process on OS X. We actually finished the serialization bit, and started doing deserialization!

Hopefully, next episode we can polish off the deserialization and we’ll be done. Fingers crossed!

Note that this episode was about 2 hours and 10 minutes, but the standard-definition recording up on Air Mozilla only plays for about 13 minutes and 5 seconds. Not too sure what’s going on there – we’ve filed a bug with the people who’ve encoded it. Hopefully, we’ll have the full episode up for standard-definition soon.

In the meantime, if you’d like to watch the whole episode, you can go to the Air Mozilla page and watch it in HD, or you can go to the YouTube mirror.

Episode Agenda

References

Bug 1091112 – Print dialog doesn’t get focus automatically, if e10s is enabled – Notes

May 17, 2015 11:09 PM

May 13, 2015

Mark Banner

Using eslint alongside the Firefox Hello code base to help productivity

On Firefox Hello, we recently added the eslint linter to be run against the Hello code base. We started off with a minimal set of rules, just enough to get us something running. Now we’re working on enabling more rules.

Since we enabled it, I feel like I’m able to iterate faster on patches. For example, if just as I finish typing I see something like:

eslint syntax error in Sublime

I know almost immediately that I’ve forgotten a closing bracket and I don’t have to run anything to find out – fewer run-edit-run cycles.

Now that I think about it, I’m realising it has also helped reduce the number of review nits on my patches – due to trivial formatting mistakes being caught automatically, e.g. trailing white-space or missing semi-colons.

Talking about reviews, as we’re running eslint on the Hello code, we just have to apply the patch, and run our tests, and we automatically get eslint output:

eslint output – no trailing spaces

Hopefully our patch authors will be running eslint before uploading the patch anyway, but this is an additional test, and it means there are fewer things that we need to look at during review, which helps speed up that cycle as well.

I’ve also put together a global config file for eslint (see below) that I use outside of the Hello code, on the rest of the Firefox code base (and other projects). This is enough that, when using it in my editor, it gives me a reasonable amount of information about bad syntax without complaining about everything.

I would definitely recommend giving it a try. My patches feel faster overall, and my test runs are for testing, not stupid-mistake catching!

Want more specific details about the setup and advantages? Read on…

My Setup

For my setup, I’ve recently switched to using Sublime. I used to use Aquamacs (an emacs variant), but when eslint came along, the UI for real-time linting within emacs didn’t really seem great.

I use sublime with the SublimeLinter and SublimeLinter-contrib-eslint packages. I’m told other editors have eslint integration as well, but I’ve not looked at any of them.

You need to have eslint installed globally, or at least in your path; other than that, just follow the installation instructions given on the SublimeLinter page.
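
In case it helps, getting eslint onto your path is a one-liner with npm (assuming you already have node and npm installed):

npm install -g eslint
eslint --version    # sanity check that the binary is reachable from your shell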

One configuration change I did have to make to the global configuration:

{
  "extensions":
  [
    "jsm",
    "jsx",
    "sjs"
  ]
}

This makes sure Sublime treats .jsm, .jsx and .sjs files as JavaScript files, which amongst other things turns on eslint for those files.

Global Configuration

I’ve uploaded my global configuration to a gist; if it changes I’ll update it there. It isn’t intended to catch everything – there are too many inconsistencies across the code base for that to be sensible at the moment. However, it does at least allow general syntax issues to be highlighted for most files – which is obviously useful in itself.
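
To give a rough idea of the shape of such a file (this is a minimal sketch, not the contents of the actual gist), a global .eslintrc looks something like:

{
  "env": {
    "browser": true
  },
  "rules": {
    "semi": 2,
    "no-trailing-spaces": 2
  }
}

Here 2 means “treat violations as errors”; the real configuration enables rather more than just these two rules.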

I haven’t yet tried running it across the whole code base via eslint on the command line – there seems to be some sort of configuration issue that is messing it up and I’ve not tracked it down yet.

Firefox Hello’s Configuration

The configuration files for Hello can be found in the mozilla-central source. There are a few of these because we have both content and chrome code, and some of the content code is shared with a website that can be viewed by most browsers and hence isn’t currently able to use all the ES6 features, whereas the chrome code can. This is another thing that eslint is good for enforcing.

Our eslint configuration is evolving at the moment, as we enable more rules, which we’re tracking in this bug.

Any Questions?

Feel free to ask any questions about eslint or the setup in the comments, or come and visit us in #loop on irc.mozilla.org (IRC info here).

May 13, 2015 07:19 PM

May 11, 2015

Mike Conley

The Joy of Coding (Ep. 13): Printing. Again!

Had to deal with some network issues during this video – sorry if people were getting dropped frames during the live show! I have personally checked this recording, and almost all frames are there.

The only frames that are missing are the ones where I scramble around to connect to the wired network, which was boring anyhow.

In this episode, I worked on proxying the print dialog from the content process on OS X. It was a wild ride, and I learned quite a bit about Cocoa stuff. It was also a throwback to my very first episode, where I essentially did the same thing for Linux!

We’ll probably polish this off in the next episode, or in the episode after.

Episode Agenda

References

Bug 1091112 – Print dialog doesn’t get focus automatically, if e10s is enabled – Notes

May 11, 2015 07:10 PM

May 10, 2015

Mike Conley

Things I’ve Learned This Week (May 4 – May 8, 2015)

How to convert an NSString to a Gecko nsAString

I actually discovered this during my most recent Joy of Coding episode – there is a static utility method to convert between native Cocoa NSStrings and Gecko nsAStrings – nsCocoaUtils::GetStringForNSString. Very handy, and works exactly as advertised.
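
For illustration, a call site looks roughly like this (a minimal sketch – check nsCocoaUtils.h for the exact signature):

NSString* cocoaString = @"Hello from Cocoa";
nsAutoString geckoString;
nsCocoaUtils::GetStringForNSString(cocoaString, geckoString);
// geckoString now holds the same characters, usable anywhere an nsAString is expected.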

An “Attach to Process by pid” Keyboard Shortcut for XCode

I actually have colleague Garvan Keeley to thank for this one, and technically I learned this on April 24th. It was only this week that I remembered I had learned it!

When I’m debugging Firefox on OS X, I tend to use XCode, and I usually attach to Firefox after it has started running. I have to navigate some menus in order to bring up the dialog to attach to a process by pid, and I was getting tired of doing that over and over again.

So, as usual, I tweeted my frustration:

AND LO, THE INTERNET SPOKE BACK:

It seems small, but the savings in time for something that I do so frequently quickly adds up. And it always feels good to go faster!

May 10, 2015 08:00 PM

May 07, 2015

Ludovic Hirlimann

My geeking plans for this summer

During July I’ll be visiting family in Mongolia but I’ve also a few things that are very geeky that I want to do.

The first thing I want to do is plug in the RIPE Atlas probes I have. They’re little devices that look like this:

Hello @ripe #Atlas !

They enable anybody with a RIPE Atlas or RIPE account to make measurements such as DNS queries and others. This helps make the Internet better globally. I have three of these probes I’d like to install. It’s good because, last time I checked, Mongolia didn’t have any active probe. These probes will also help the Internet become better in Mongolia. I’ll need to buy some network cables before leaving, because finding them in Mongolia is going to be challenging. More on Atlas at https://atlas.ripe.net/.

The second thing I intend to do is map Mongolia a bit better on two projects. The first is related to Mozilla and maps GPS coordinates with wifi access points. Only a small part of the capital Ulaanbaatar is covered, as per https://location.services.mozilla.com/map#11/47.8740/106.9485, and I want this to be much more, because having an open data source for this is important for the future. As mapping is my new thing, I’ll probably also edit OpenStreetMap in order to make the urban parts of Mongolia that I’ll visit much more usable on all the services that use OSM as a source of truth. There is already a project to map the capital city at http://hotosm.org/projects/mongolia_mapping_ulaanbaatar, but I believe OSM can serve more than just 50% of Mongolia’s population.

I was inspired to write this post by my son this morning; look what he is doing at 17 months:

Geeking on a Sun keyboard at 17 months

May 07, 2015 08:39 AM

May 06, 2015

Meeting Notes

Thunderbird: 2015-05-05

Thunderbird meeting notes 2015-05-05. NOON PT (Pacific). Check https://wiki.mozilla.org/Thunderbird/StatusMeetings for meeting time conversion, previous meeting notes and call-in details

Attendees

aceman, aleth, Jorg K, merike, rkent, roland, wsmwk, MakeMyDay, rolandtanglao

Action items from last meetings

  • (done) wsmwk to pat glandium
  • (done) wsmwk to email hiro’s bug list to tb-planning
  • (done) rkent to review tracking list http://mzl.la/1EOx9Tm

Critical Issues

Critical bugs. Leave these here until they’re confirmed fixed. If confirmed, then remove.

  • AMO compatibility bump! (is not going to happen)
  • In general, the tracking-tb38 flag shows what are critical issues. In the next week or so, that list will be culled to only include true blockers for the Thunderbird 38 release. There will still be many.
  • maildir UI: nothing more to do for UI, still want to land a patch for letting IMAP set this.
  • gloda IM search regressions: mostly fixed, some db cleanup necessary for users of TB33+
    • aleth landed a fix to stop duplicated entries from appearing, nhnt11 patch to clean up the databases of Aurora/Beta/Daily has landed and is awaiting uplift

removing from critical list/fixed:

  • We need to decide on how to do release branching. I am uncertain whether Lightning integration requires this or not.
    • –> We’ve created THUNDERBIRD_38_VERBRANCH on mozilla-release
  • Lightning integration (below) really REALLY critical that we get this finished.
    • –> Patches landed, testing beta 2015-04-30

Releases

  • Past
    • 31.6.0 shipped
    • 38.0b3 shipped 2015-04-26 Sunday
    • 38.0b4 shipped 2015-05-03 Sunday
  • Upcoming
    • 38.0b5 (build Fri 5/8? when?)
    • 38.0b6?
    • 38.0 on May 26?
    • 31.7.0 2015-05-12+ Tues+

Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!), we need to rewrite / heavily modify the lightning articles on support.mozilla.org. Let me know irc: rolandtanglao on #tb-support-crew or rtanglao AT mozilla.com OR simply start editing the articles
  • We need to fill the “Learn More” page with content, possibly point it to something more specific bug 1159682
  • Opt-out dialog: change “disable” to “remove” bug 1159698
  • tracking bug for lightning 4.0 bug 1153752

Round Table

wsmwk

Jorg K

  • Mail composition/spelling: bug 967494, bug 717292 (inline spell dictionary inconsistent), waiting for review by M Conley.
  • Editor losing style after image paste: bug 1140617
  • bug 1141446 – JSMIME regression, still awaiting final review
  • Today looked at “double Trash” issue 1156669

rkent

  • rail in releng seems to believe that we cannot overlap tb 31 and tb 38 and claims that was the previous practice, but Standard8 does not remember this the same way. At the moment I have been told that we cannot start building on the esr38 repo without disabling builds on esr31. This is still a developing story … until resolved I think we need to keep using comm-beta for TB 38 betas.
  • we have a backlog of jsmime issues, jcranmer has been quite tied up with real life. Several of us have been trying to fix these, but we need reviews.
  • Let’s review the critical tracking-38 bugs (29 at last count) http://mzl.la/1EOx9Tm

mkmelin

  • reviews
  • thunderbird hotfix support – bug 914225 ready to land

Question Time

– PLEASE INCLUDE YOUR NICK with your bullet item —

aleth

Some testing/verification that chat logs are now being properly and completely indexed by gloda would be helpful, cf bug 1146698 (landed on c-c, a possible candidate for uplift).

Support team

  • Roland owes sumo kb article links for release notes. Hope to have stub articles ready today

Other

  • PLEASE PUT THE NEXT MEETING IN YOUR (LIGHTNING) CALENDAR
  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

May 06, 2015 03:00 AM

May 04, 2015

Mike Conley

Electrolysis and the Big Tab Spinner of Doom

Have you been using Firefox Nightly and seen this big annoying spinner?

Big Tab Spinner of Doom in an e10s tab

Aw, crap. You again.

I hate that thing. I hate it.

Me, internally, when I see the spinner.

And while we’re working on making the spinner itself less ugly, I’d like to eliminate it, or at least reduce its presence to the absolute minimum.

How do I do that? Well, first, know your enemy.

What does it even mean?

That big spinner means that the graphics part of Gecko hasn’t given us a frame yet to paint for this browser tab. That means we have nothing yet to show for the tab you’ve selected.

In the single-process Firefox that we ship today, this graphics operation of preparing a frame is something that Firefox will block on, so the tab will just not switch until the frame is ready. In fact, I’m pretty sure the whole browser will become unresponsive until the frame is ready.

With Electrolysis / multi-process Firefox, things are a bit different. The main browser process tells the content process, “Hey, I want to show the content associated with the tab that the user just selected”, and the content process computes what should be shown, and when the frame is ready, the parent process hears about it and the switch is complete. During that waiting time, the rest of the browser is still responsive – we do not block on it.

So there’s this window of time where the tab switch has been requested, and when the frame is ready.

During that window of time, we keep showing the currently selected tab. If, however, 300ms passes, and we still haven’t gotten a frame to paint, that’s when we show the big spinner.

So that’s what the big spinner means – we waited 300ms, and we still have no frame to draw to the screen.

How bad is it?

I suspect it varies. I see the spinner a lot less on my Windows machine than on my MacBook, so I suspect that performance is somehow worse on OS X than on Windows. But that’s purely subjective. We’ve recently landed some Telemetry probes to try to get a better sense of how often the spinner is showing up, and how laggy our tab switching really is. Hopefully we’ll get some useful data out of that, and as we work to improve tab switch times, we’ll see improvement in our Telemetry numbers as well.

Where is the badness coming from?

This is still unclear. And I don’t think it’s a single thing – many things might be causing this problem. Anything that blocks up the main thread of the content process, like slow JavaScript running on a web-site, can cause the spinner.

I also seem to see the spinner when I have “many” tabs open (~30), and have a build going on in the background (so my machine is under heavy load).

Maybe we’re just doing things inefficiently in the multi-process case. I recently landed profile markers for the Gecko Profiler for async tab switching, to help figure out what’s going on when I experience slow tab switch. Maybe there are optimizations we can make there.

One thing I’ve noticed is that there’s this function in the graphics layer, “ClientTiledLayerBuffer::ValidateTile”, that takes much, much longer in the content process than in the single-process case. I’ve filed a bug on that, and I’ll ask folks from the Graphics Team this week.

How you can help

UPDATE (May 12, 2015): Getting profiles from Windows is currently broken because the symbol server appears to be busted. Any profiles from Windows machines will be useless until this bug is fixed. Similarly, this bug recently changed the format of profiles1, so a different Gecko Profiler add-on will need to be installed until these patches land in the main repository for the add-on. You will also need to set profiler.url to https://people.mozilla.org/~sguo/cleopatra in about:config until those patches land.

If you’d like to help me find more potential causes, Profiles are very useful! NOTE – I don’t mean “user profiles”, as in, your bookmarks / customizations / history, etc, in the profile folder. I don’t mean this thing. I mean a performance profile.

A performance profile is a read-out of everything that Firefox / Gecko is doing over a particular span of time. When the profiler is running, Firefox / Gecko will record where the process is in the stack every 1ms or so. It’ll also record information about how long it has been since it serviced the event loop, which helps us find jank.

To help, grab the Gecko Profiler add-on, make sure it’s enabled, and then dump a profile when you see the big spinner of doom. The interesting part will be between two markers, “AsyncTabSwitch:Start” and “AsyncTabSwitch:Finish”. There are also markers for when the parent process displays the spinner – “AsyncTabSwitch:SpinnerShown” and “AsyncTabSwitch:SpinnerHidden”. The interesting stuff, I believe, will be in the “Content” section of the profile between those markers. Here are more comprehensive instructions on using the Gecko Profiler add-on.

And here’s a video of me demonstrating how to use the profiler, and how to attach a profile to the bug where we’re working on improving tab switch times:

And here’s the link I refer you to in the video for getting the add-on.

So hopefully we’ll get some useful data, and we can drive instances of this spinner into the ground.

I’d really like that.


  1. See this mailing list post for details. 

May 04, 2015 02:28 PM

May 02, 2015

Mike Conley

Things I’ve Learned This Week (April 27 – May 1, 2015)

Another short one this week.

You can pass DOM Promises back through XPIDL

XPIDL is what we use to define XPCOM interfaces in Gecko. I think we’re trying to avoid XPCOM where we can, but sometimes you have to work with pre-existing XPCOM interfaces, and, well, you’re just stuck using it unless you want to rewrite what you’re working on.

What I’m working on lately is nsIProfiler, which is the interface to “SPS”, AKA the Gecko Profiler. nsIProfiler allows me to turn profiling on and off with various features, and then retrieve those profiles to send to a file, or to Cleopatra1.

What I’ve been working on recently is Bug 1116188 – [e10s] Stop using sync messages for Gecko profiler, which will probably have me adding new methods to nsIProfiler for async retrieval of profiles.

In the past, doing async stuff through XPCOM / XPIDL has meant using (or defining a new) callback interface which can be passed as an argument to the async method.

I was just about to go down that road, when ehsan (or was it jrmuizel? One of them, anyhow) suggested that I just pass a DOM Promise back.

I find that Promises are excellent. I really like them, and if I could pass a Promise back, that’d be incredible. But I had no idea how to do it.

It turns out that if I can ensure that the async methods are called such that there is a JS context on the stack, I can generate a DOM Promise, and pass it back to the caller as an “nsISupports”. According to ehsan, XPConnect will do the necessary magic so that the caller, upon receiving the return value, doesn’t just get this opaque nsISupports thing, but an actual DOM Promise. This is because, I believe, that DOM Promise is something that is defined via WebIDL. I think. I can’t say I fully understand the mechanics of XPConnect2, but this all sounded wonderful.
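
On the XPIDL side, there’s nothing Promise-specific to declare – the method just returns nsISupports. Something along these lines (a hypothetical sketch, not the real nsIProfiler.idl, and the uuid is a placeholder):

[scriptable, uuid(00000000-0000-0000-0000-000000000000)]
interface nsIProfilerExample : nsISupports
{
  // Script callers will actually receive a DOM Promise here.
  nsISupports getProfileDataAsync();
};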

I even found an example in our new Service Worker code:

From dom/workers/ServiceWorkerManager.cpp (I’ve edited the method to highlight the Promise stuff):

// If we return an error code here, the ServiceWorkerContainer will
// automatically reject the Promise.
NS_IMETHODIMP
ServiceWorkerManager::Register(nsIDOMWindow* aWindow,
                               nsIURI* aScopeURI,
                               nsIURI* aScriptURI,
                               nsISupports** aPromise)
{
  AssertIsOnMainThread();

  // XXXnsm Don't allow chrome callers for now, we don't support chrome
  // ServiceWorkers.
  MOZ_ASSERT(!nsContentUtils::IsCallerChrome());

  nsCOMPtr<nsPIDOMWindow> window = do_QueryInterface(aWindow);

  // ...

  nsCOMPtr<nsIGlobalObject> sgo = do_QueryInterface(window);
  ErrorResult result;
  nsRefPtr<Promise> promise = Promise::Create(sgo, result);
  if (result.Failed()) {
    return result.StealNSResult();
  }

  // ...

  nsRefPtr<ServiceWorkerResolveWindowPromiseOnUpdateCallback> cb =
    new ServiceWorkerResolveWindowPromiseOnUpdateCallback(window, promise);

  nsRefPtr<ServiceWorkerRegisterJob> job =
    new ServiceWorkerRegisterJob(queue, cleanedScope, spec, cb, documentPrincipal);
  queue->Append(job);

  promise.forget(aPromise);
  return NS_OK;
}

Notice that the outparam aPromise is an nsISupports**, and yet, I do believe the caller will end up handling a DOM Promise. Wicked!
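
From JavaScript, the caller just uses the return value like any other Promise. Roughly (a sketch with a hypothetical method name, since my nsIProfiler additions haven’t landed yet):

let profiler = Cc["@mozilla.org/tools/profiler;1"].getService(Ci.nsIProfiler);
profiler.getProfileDataAsync().then(profile => {
  // XPConnect hands us a real DOM Promise, not an opaque nsISupports.
  console.log("Got a profile!", profile);
});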


  1. Cleopatra is the web application that can be used to browse a profile retrieved via nsIProfiler 

  2. Like being able to read the black speech of Mordor, there are few who can. 

May 02, 2015 05:09 PM

The Joy of Coding (Ep. 12): Making “Save Page As” Work

After giving some updates on the last bug we were working on together, I started a new bug: Bug 1128050 – [e10s] Save page as… doesn’t always load from cache. The problem here is that if the user were to reach a page via a POST request, attempting to save that page from the Save Page item in the menu would result in silent failure1.

Luckily, the last bug we were working on was related to this – we had a lot of context about cache keys swapped in already.

The other important thing to realize is that fixing this bug is a bandage fix, or a wallpaper fix. I don’t think those are official terms, but it’s what I use. Basically, we’re fixing a thing with the minimum required effort because something else is going to fix it properly down the line. So we just need to do what we can to get the feature to limp along until such time as the proper fix lands.

My proposed solution was to serialize an nsISHEntry on the content process side, deserialize it on the parent side, and pass it off to nsIWebBrowserPersist.

So did it work? Watch the episode and find out!

I also want to briefly apologize for some construction noise during the video – I think it occurs somewhere halfway through minute 20 of the video. It doesn’t last long, I promise!

Episode Agenda

References

Bug 1128050 – [e10s] Save page as… doesn’t always load from cache – Notes


  1. Well, it’d show something in the Browser Console, but for a typical user, I think that’s still a silent failure. 

May 02, 2015 04:42 PM

April 30, 2015

Andrew Sutherland

Talk Script: Firefox OS Email Performance Strategies

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at technical.ly on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that I would just be reading aloud while the audience read them silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox gecko engine on an android linux kernel where all the apps including the system UI are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.

Firefox OS homescreen screenshot Firefox OS clock app screenshot Firefox OS email app screenshot

Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing, (which is) we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough Javascript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, like if our application is being triggered to display a compose UI or from a new mail notification that wants to show a specific message or a different folder.  But this is a decision we can make synchronously so it doesn’t slow us down.

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available, and then fails over to WebSQLDatabase and local storage in that order.

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There’s no waiting around for an IndexedDB read result to get its turn on the event loop.

This matters because although the concept of loading is simple from a User Experience perspective, there’s no standard to back it up right now.  Firefox OS’s UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app’s icon in the center of the screen.  Unfortunately the standards are still assuming that the content is right there in the HTML.  This works well for document-based web pages or server-powered web apps where the contents of the page are baked in.  They work less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

“DOMContentLoaded” fires when the document has been fully parsed and all scripts not tagged as “async” have run.  If there were stylesheets referenced prior to the script tags, the script tags will wait for the stylesheet loads.

“load” fires when the document has been fully loaded; stylesheets, images, everything.

But none of these have anything to do with the content in the page saying it’s actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it’s not there yet.  So Firefox OS just uses the “load” event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded “flash of unstyled content” problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.

(Trivial HTML slide)

<link rel=”stylesheet” ...>
<script ...></script>
DOMContentLoaded!

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At this point the layout engine has enough information to perform a style recalculation and determine what CSS-referenced image resources need to be loaded for buttons and icons, then those load, and then we’re good to be displayed as the “load” event can fire.

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.
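
Stripped of all the real-world details, the cache-restoring step amounts to something like this (a simplified sketch with made-up names, not the actual Gaia email code):

// Runs synchronously from our single script file, before DOMContentLoaded fires.
var cachedHtml = localStorage.getItem('html_cache');       // key name is made up
if (cachedHtml && startupWantsDefaultInbox()) {            // hypothetical check
  document.getElementById('cards').innerHTML = cachedHtml; // instant-looking inbox
}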

Being Fast: Lazy Loading and Optimized Layers

But now we need to try and get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is only loading what you need when you need it, and trying to get it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition loaders, specifically the alameda loader and the r-dot-js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard Common JS loader semantics used by node.js and io.js are the first one you see here.  Load the module, return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But, wait, there’s more!  loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the ‘text’ plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.

So when our optimizer runs, it bundles up the core modules we use, plus, the modules for our “message list” card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r-dot-js optimizer then locates these dependencies and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there’s just one single javascript file to load with no extra HTML file dependencies or other loads.

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.

But we don’t go overboard with preloading in the frontend because we still haven’t actually loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick-off loading the back-end.  And in order to avoid impacting the responsiveness of the UI both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page’s DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed via multiple pages from the same document origin.
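
In code, the message-passing model is about as simple as it sounds (a bare-bones sketch; the file name and message shapes here are made up):

var backend = new Worker('js/backend.js');         // back-end logic loads off the main thread
backend.postMessage({ type: 'loadFolder', id: 'inbox' });
backend.onmessage = function (evt) {
  // Results come back as messages; the worker never touches the DOM itself.
  renderMessageList(evt.data.messages);             // hypothetical UI function
};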

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph.  An extremely capable scene graph that can handle huge documents, but there are footguns and it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.

Nodes are absolutely positioned within the scroll area using their ‘top’ style but translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible in graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, that it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
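
Concretely, the recycling step looks roughly like this (hypothetical names – the real list logic in the email app is considerably more involved):

function recycleMessageNode(container, node, message, topPx) {
  container.removeChild(node);           // detach first, so APZ treats it as a new static node
  updateMessageSummary(node, message);   // hypothetical: fill in sender, subject, snippet
  node.style.top = topPx + 'px';         // absolute position inside the scroll region
  container.appendChild(node);           // re-append; it gets lumped back into a single layer
}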

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1, Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it’s a very good idea to use a library rather than rolling your own.

2, Use developer tools.  APZ is tricky to reason about and even the developers who write the Async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there ARE developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what’s going on or at least help tell you whether you’re making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something’s wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There are also fancier and more complicated tools in Firefox and other browsers like Google Chrome to let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.

Links

The source code to Gaia can be found at https://github.com/mozilla-b2g/gaia

The email app in particular can be found at https://github.com/mozilla-b2g/gaia/tree/master/apps/email

(I also asked for questions here.)

April 30, 2015 08:11 PM

April 28, 2015

Calendar

The Third Beta on the way to Lightning 4.0

It’s that time of year again: we have a new major release of Lightning on the horizon. About every 42 weeks, Thunderbird prepares for a major release, and we follow up with a matching major version. You may know these as Lightning 2.6 or 3.3.

In order to avoid disappointments, we do a series of beta releases before such a major release. This is where we need you. Please help out in making Lightning 4.0 a great success.

Time flies when you are preparing for releases, so we are already at Thunderbird 38.0b3 and Lightning 4.0b3. The final release will be on May 12th and there will be at least one more beta. Please download these betas and take a moment to go through all the actions you normally do on a daily basis. Create an event, accept an invitation, complete a task. You probably have your own workflow; these are of course just examples.

Here is how to get the builds. If you have found an issue, you can either leave a comment here or file a bug on bugzilla.

You may wonder what is new. I’ve gone through the bugs fixed since 3.3 and found that most issues are backend fixes that won’t be very visible. We do however have a great new feature to save copies of invitations to your calendar. This helps in case you don’t care about replying to the invitation but would still like to see it in your calendar. We also have more general improvements in invitation compatibility, performance and stability and some slight visual enhancements. The full list of changes can be found on bugzilla.

Although it’s highly unlikely that severe problems will arise, you are encouraged to make a backup before switching to beta. If it comforts you, I am using beta builds for my production profile and I don’t recall there being a time where I lost events or had to start over.

If you have questions or have found a bug, feel free to leave a comment here.

April 28, 2015 10:21 AM

April 26, 2015

Robert Kaiser

"Nothing to Hide"?

I've been bothered for quite a while with people telling me they "have nothing to hide anyhow" when the topic of Internet privacy comes up.

I guess that mostly comes from the impression that the whole story is our government watching (over) us and the worst thing that can happen is incrimination. While that might threaten some things, most people do nothing that is really interesting enough for a government to go into attack mode over it (or so they believe, and very firmly so). And I even agree that most governments (including the US and EU countries) actually actively seek out what they call "terrorist activities" (even though they often stretch that term in crazy ways) and/or child abuse and similar topics that the vast majority of citizens agree are a bad thing and are not part of - and the vast majority of politicians and government workers believe they act in the best interest of their citizens when "obviously fighting that" via their different programs of privacy-undermining surveillance. That said, most people seem to be OK with their government collecting data about them as long as it's not used to incriminate them (and when that happens, it's too late to protest the practice anyhow).

A lot has been said about that since the "Snowden leaks", but I think the more obvious short-term and direct threat is in corporate surveillance, which has been swept under the rug in most discussions recently - to the joy of Facebook, Google and other major players in that area. I have also seen that when depicting some obvious scenarios resulting of that, people start to think about it much more promptly and realize the effect on their daily lives (even if those are minor issues compared to government starting a manhunt against you with terror allegations or similar).

So what I start asking is:

There are probably more examples; those are the ones that came to my mind so far. Even if those are smaller things, people can relate to them as they affect things in their own life and not scenarios that feel very theoretical to them.

And, of course, they are true to a degree even now. Banks are already buying data from Facebook, probably including "private" messages, for determining credit scores, insurances base rates on anything they can find out about you, flight rates as well as prices for some Amazon and other web shop products vary based on what you searched before - and ads both on your screen and even on postal mail get tailored to a profile built on all kinds of your online behavior. My questions above just take all of those another step forward - but a pretty realistic one in my opinion.

I hope thinking about questions like that makes people realize they might actually want to evade some of that and in the end they actually have something to hide.

And then, of course, that a non-profit like Mozilla, which doesn't seek to maximize money, can believably be on their side and help them regain some privacy where they - now - want to.

April 26, 2015 10:38 PM

April 25, 2015

Mike Conley

Things I’ve Learned This Week (April 20 – April 24, 2015)

Short one this week. I must not have learned much! 😀

If you’re using Sublime Text to hack on Firefox or Gecko, make sure it’s not indexing your objdir.

Sublime has this wicked cool feature that lets you quickly search for files within your project folders. On my MBP, the shortcut is Cmd-P. It’s probably something like Ctrl-P on Windows and Linux.

That feature is awesome, because when I need to get to a file, instead of searching the folder hierarchy, I just hit Cmd-P, jam in a few of the characters (they can even be out of order – Sublime does fuzzy matching), and then as soon as my desired file is the top entry, just hit Enter, and BLAM – opened file. It really saves time!

At least, it saves time in theory. I noticed that sometimes, I’d hit Cmd-P, and the UI to enter my search string would take ages to show up. I had no idea why.

Then I noticed that this slowness seemed to show up after I had done a build. My objdir resides beneath my srcdir (as is the default with a mozilla-central checkout), so I figured perhaps Sublime was trying to index all of those binaries and choking on them.

I went to Project > Edit Project, and added this to the configuration file that opened:

{
    "folders":
    [
        {
            "path": "/Users/mikeconley/Projects/mozilla-central",
      "folder_exclude_patterns": ["*.sublime-workspace", "obj-*"]
        }
    ]
}

I added the workspace thing too1, because I figure it’s unlikely I’ll ever want to open that thing.

Anyhow, after setting that, I restarted Sublime, and everything was crazy-fast. \o/

If you’re using Sublime, and your objdir is under your srcdir, maybe consider adding the same thing. Even if you’re not using Cmd-P, it’ll probably save your machine from needlessly burning cycles indexing stuff.


  1. That’s where Sublime holds my session state for my project. 

April 25, 2015 09:40 PM

The Joy of Coding (Ep. 11): Cleaning up the View Source Patch

For this episode, Richard Milewski and I figured out the syncing issue I’d been having in Episode 9, so I had my head floating in the bottom right corner while I hacked. Now you can see what I do with my face while hacking, if that’s a thing you had been interested in.

I’ve also started mirroring the episodes to YouTube, if YouTube is your choice platform for video consumption.

So, like last week, I was under a bit of time pressure because of a meeting scheduled for 2:30PM (actually the meeting I was supposed to have the week before – it just got postponed), so that gave me 1.5 hours to move forward with the View Source work we’d started back in Episode 8.

I started the episode by explaining that the cache key stuff we’d figured out in Episode 9 was really important, and that a bug had been filed by the Necko team to get the issue fixed. At the time of the video, there was a patch up for review in that bug, and when we applied it, we were able to retrieve source code out of the network cache after POST requests! Success!

Now that we had verified that our technique was going to work, I spent the rest of the episode cleaning up the patches we’d written. I started by doing a brief self-code-review to smoke out any glaring problems, and then started to fix those problems.

We got a good chunk of the way before I had to cut off the camera.

I know back when I started working on this particular bug, I had said that I wanted to take you through right to the end on camera – but the truth of the matter is, the priority of the bug went up, and I was moving too slowly on it, since I was restricting myself to a few hours on Wednesdays. So unfortunately, after my meeting, I went back to hacking on the bug off-camera, and yesterday I put up a patch for review. Here’s the review request, if you’re interested in seeing where I got to!

I felt good about the continuity experiment, and I think I’ll try it again for the next few episodes – but I think I’ll choose a lower-priority bug; that way, I think it’s more likely that I can keep the work contained within the episodes.

How did you feel about the continuity between episodes? Did it help to engage you, or did it not matter? I’d love to hear your comments!

Episode Agenda

References

Bug 1025146 – [e10s] Never load the source off of the network when viewing source – Notes

April 25, 2015 09:22 PM

April 22, 2015

Meeting Notes

Thunderbird: 2015-04-21

Thunderbird meeting notes 2015-04-21. NOON PT (Pacific). Check https://wiki.mozilla.org/Thunderbird/StatusMeetings for meeting time conversion, previous meeting notes and call-in details

Attendees

ATTENDEES – put your nick 1. below 2. in comments unless explicit under round table 3. top right of etherpad next to your color

mkmelin, rolandt, pegasus, makemyday, jorgk, rkent, gneandr, aceman, merike, Paenglab, wsmwk

Action items from last meetings

  • (rkent, Fallen) AMO addon compat: TheOne said that this late it is probably not worth doing at all. With so many other things for me to do, that sounds like a plan.

Friends of the tree

  • glandium, for fixing the various packager bugs that will help package Lightning (nominated by Fallen, who won’t be at the meeting)

Critical Issues

Critical bugs. Leave these here until they’re confirmed fixed. If confirmed, then remove.

  • (rkent) I am enormously frustrated by the inability to get two critical features landed in tb 38: OAuth and Lightning integration. Can we please give this very high priority?
    • OAuth integration: partial landing for beta 2, really REALLY critical that we get this finished.
  • In general, the tracking-tb38 flag shows what are critical issues. In the next week or so, that list will be culled to only include true blockers for the Thunderbird 38 release. There will still be many.
  • I don’t think we have a reasonable chance of shipping a quality release on May 12. More realistic is June 2.
  • We need to decide on how to do release branching. I am uncertain whether Lightning integration requires this or not.
  • Auto-complete improvements – some could go into esr31 (bug 1042561 included in TB38)
  • Lightning integration (below) really REALLY critical that we get this finished.
  • maildir UI: nothing more to do for UI, still want to land a patch for letting IMAP set this.
  • gloda IM search regressions: mostly fixed, some db cleanup necessary for users of TB33+ that nhnt11 will hopefully have ready to land soon.
    • aleth landed a fix to stop duplicated entries from appearing, nhnt11 will take care of the cleaning up the databases of Aurora/Beta/Daily users this weekend and keep us updated
  • bug 1140884, might need late-l10n

removing from critical list/fixed:

  • ldap crash bug 1063829: a patch in beta 37, beta results are unclear – not seen in 38
  • bug 1064230 crashes during LDAP search made worse by Search All Addressbooks bug 170270, needs tracking 38+ and review?rkent/jcranmer – not seen in 38
  • everyone should probably skim http://mzl.la/1DaLo0t version 31-38 regressions for items they can help fix or direct to the right people

Releases

  • Past
    • 31.6.0 shipped
    • 38.0b1 shipped 2015-04-03
    • 38.0b2 shipped 2015-04-20
  • Upcoming
    • 38.0b3 (when?)

Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!) , we need to rewrite / heavily modify the lightning articles on support.mozilla.org. let me know irc: rolandtanglao on #tb-support-crew or rtanglao AT mozilla.com OR simply start editing the articles

Unfortunately not much progress because I was away. I hope to have the packaging bits done by the weekend. Glandium did a great job on the packager.py changes, hence I nominated him for Friends of the Tree. (fallen)

MakeMyDay should comment on the opt-out dialog; I think we should get it landed ASAP. bug 1130852 – Opt-Out dialog had some discussion on prefs.

Round Table

wsmwk

  • managed shipping of 31.6.0, 38.0b1, 38.0b2

Jorg K

rkent

  • We have the beginnings of a business development group (rkent, wsmwk, magnus) that after signing NDAs will be given access to Thunderbird business documentation.

mkmelin

  • bug 1134986 autocomplete bug investigated and landed on trunk +++

aceman

Question Time

– PLEASE INCLUDE YOUR NICK with your bullet item —

  • What happened to the Avocet branding? (Jorg K)
    • won’t be pursued
  • Info about the meeting with Mitchell Baker on 20th March 2015, funding issues (Jorg K)
  • http://mzl.la/1O9khi4 can we get hiro’s bugs reassigned so the patches contained can get landed, and not lost? (wsmwk)
  • It would be great if some jetpack add-on support were available in thunderbird to share functionality with firefox and fennec. See also bug 1100644. No useful jetpack add-ons seem to exist for thunderbird (earlybird would be fine to use jpm over cfx). Are there any jetpack add-ons available to prove me wrong?

(pegasus) Is it worth looking at going to a 6-week release schedule to avoid the conundrum with getting not-quite-ready features in vs delaying?

Support team

  • Reminder: Roland is leaving Thunderbird May 12, 2015 after the release of Thunderbird 38: working on Thunderbird 38 plan and finally kickstarting Thunderbird User Success Council
    • looking for 3 people: English KB Article Editor, L10N Coordinator and Forum Lead. Is that you we’re looking for? If so, email rtanglao AT mozilla.com or ping :rolandtanglao in #sumo or #tb-support-crew

Other

  • PLEASE PUT THE NEXT MEETINGS IN YOUR (LIGHTNING) CALENDAR :)
  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

Action Items

  • wsmwk to pat glandium
  • wsmwk to email hiro’s bug list to tb-planning
  • rkent to review tracking list

April 22, 2015 03:00 AM

Rumbling Edge - Thunderbird

2015-04-20 Calendar builds

Common (excluding Website bugs)-specific: (6)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

April 22, 2015 02:17 AM

2015-04-20 Thunderbird comm-central builds

Thunderbird-specific: (27)

MailNews Core-specific: (30)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

April 22, 2015 02:16 AM

April 18, 2015

Mike Conley

Things I’ve Learned This Week (April 13 – April 17, 2015)

When you send a sync message from a frame script to the parent, the return value is always an array

Example:

// Some contrived code in the browser
let browser = gBrowser.selectedBrowser;
browser.messageManager.addMessageListener("GIMMEFUE,GIMMEFAI", function onMessage(message) {
  return "GIMMEDABAJABAZA";
});

// Frame script that runs in the browser
let result = sendSyncMessage("GIMMEFUE,GIMMEFAI");
console.log(result[0]);
// Writes to the console: GIMMEDABAJABAZA

From the documentation:

Because a single message can be received by more than one listener, the return value of sendSyncMessage() is an array of all the values returned from every listener, even if it only contains a single value.

I don’t use sync messages from frame scripts a lot, so this was news to me.
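
To make the “always an array” part concrete, here’s a small sketch (the message name and the second listener are made up for illustration) of what happens when two listeners answer the same sync message:

// In the browser (parent side): two listeners for the same message name.
let mm = gBrowser.selectedBrowser.messageManager;
mm.addMessageListener("Example:Query", function firstListener(message) {
  return "first answer";
});
mm.addMessageListener("Example:Query", function secondListener(message) {
  return "second answer";
});

// In the frame script:
let results = sendSyncMessage("Example:Query");
// results is ["first answer", "second answer"] – an array even when only
// a single listener responds.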

You can use [cocoaEvent hasPreciseScrollingDeltas] to differentiate between scrollWheel events from a mouse and a trackpad

scrollWheel events can come from a standard mouse or a trackpad1. According to this Stack Overflow post, one potential way of differentiating between the scrollWheel events coming from a mouse, and the scrollWheel events coming from a trackpad is by calling:

bool isTrackpad = [theEvent hasPreciseScrollingDeltas];

since mouse scrollWheel is usually line-scroll, whereas trackpads (and Magic Mouse) are pixel scroll.

The srcdoc attribute for iframes lets you easily load content into an iframe via a string

It’s been a while since I’ve done web development, so I hadn’t heard of srcdoc before. It was introduced as part of the HTML5 standard, and is defined as:

The content of the page that the embedded context is to contain. This attribute
is expected to be used together with the sandbox and seamless attributes. If a
browser supports the srcdoc attribute, it will override the content specified in
the src attribute (if present). If a browser does NOT support the srcdoc
attribute, it will show the file specified in the src attribute instead (if
present).

So that’s an easy way to inject some string-ified HTML content into an iframe.
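
For instance, here’s a minimal sketch of doing that from script (the markup and the fallback URL are placeholders of my own, not anything from the bug):

let iframe = document.createElement("iframe");
iframe.setAttribute("sandbox", "");           // srcdoc is meant to be paired with sandbox
iframe.setAttribute("src", "fallback.html");  // hypothetical fallback for browsers without srcdoc support
iframe.srcdoc = "<!DOCTYPE html><h1>Hello from srcdoc!</h1>";
document.body.appendChild(iframe);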

Primitives on IPDL structs are not initialized automatically

I believe this is true for structs in C and C++ (and probably some other languages) in general, but primitives on IPDL structs do not get initialized automatically when the struct is instantiated. That means that things like booleans carry random memory values in them until they’re set. Having spent most of my time in JavaScript, I found that a bit surprising, but I’ve gotten used to it. I’m slowly getting more comfortable working lower-level.

This was the ultimate cause of this crasher bug that dbaron was running into while exercising the e10s printing code on a debug Nightly build on Linux.

This bug was opened to investigate initializing the primitives on IPDL structs automatically.

Networking is ultimately done in the parent process in multi-process Firefox

All network requests are proxied to the parent, which serializes the results back down to the child. Here’s the IPDL protocol for the proxy.

On bi-directional text and RTL

gw280 and I noticed that in single-process Firefox, a <select> dropdown set with dir=”rtl”, containing an <option> with the value “A)” would render the option as “(A”.

If the value was “A) Something else”, the string would come out unchanged.

We were curious to know why this flipping around was happening. It turned out that this is called “BiDi”, and some documentation for it is here.

If you want to see an interesting demonstration of BiDi, click this link, and then resize the browser window to reflow the text. Interesting to see where the period on that last line goes, no?

It might look strange to someone coming from a LTR language, but apparently it makes sense if you’re used to RTL.

I had not known that.

Some terminal spew


Now what’s all this?

My friend and colleague Mike Hoye showed me the above screenshot upon coming into work earlier this week. He had apparently launched Nightly from the terminal, and at some point, all that stuff just showed up.

“What is all of that?”, he had asked me.

I hadn’t the foggiest idea – but a quick DXR showed basic_code_modules.cc inside Breakpad, the tool used to generate crash reports when things go wrong.

I referred him to bsmedberg, since that fellow knows tons about crash reporting.

Later that day, mhoye got back to me, and told me that apparently this was output spew from Firefox’s plugin hang detection code. Mystery solved!

So if you’re running Firefox from the terminal, and suddenly see some basic_code_modules.cc stuff show up… a plugin you’re running probably locked up, and Firefox shanked it.


  1. And probably a bunch of other peripherals as well 

April 18, 2015 10:33 PM

The Joy of Coding (Ep. 10): The Mystery of the Cache Key

In this episode, I kept my camera off, since I was having some audio-sync issues1.

I was also under some time-pressure, because I had a meeting scheduled for 2:30 ET2, giving me exactly 1.5 hours to do what I needed to do.

And what did I need to do?

I needed to figure out why an nsISHEntry, when passed to nsIWebPageDescriptor’s loadPage, was not enough to get the document out from the HTTP cache in some cases. 1.5 hours to figure it out – the pressure was on!

I don’t recall writing a single line of code. Instead, I spent most of my time inside XCode, walking through various scenarios in the debugger, trying to figure out what was going on. And I eventually figured it out! Read this footnote for the TL;DR:3

Episode Agenda

References

Bug 1025146 – [e10s] Never load the source off of the network when viewing source
Notes


  1. I should have those resolved for Episode 11! 

  2. And when the stream finished, I found out the meeting had been postponed to next week, meaning that next week will also be a short episode. :( 

  3. Basically, the nsIChannel used to retrieve data over the network is implemented by HttpChannelChild in the content process. HttpChannelChild is really just a proxy to a proper nsIChannel on the parent-side. On the child side, HttpChannelChild does not implement nsICachingChannel, which means we cannot get a cache key from it when creating a session history entry. With no cache key comes no ability to retrieve the document from the network cache via nsIWebPageDescriptor’s loadPage. 

April 18, 2015 09:40 PM

April 12, 2015

Mike Conley

Things I’ve Learned This Week (April 6 – April 10, 2015)

It’s possible to synthesize native Cocoa events and dispatch them to your own app

For example, here is where we synthesize native mouse events for OS X. I think this is mostly used for testing when we want to simulate mouse activity.

Note that if you attempt to replay a queue of synthesized (or cached) native Cocoa events to trackSwipeEventWithOptions, those events might get coalesced and not behave the way you want. mstange and I ran into this while working on this bug to get some basic gesture support working with Nightly+e10s (Specifically, the history swiping gesture on OS X).

We were able to determine that OS X was coalescing the events because we grabbed the section of code that implements trackSwipeEventWithOptions, and used the Hopper Disassembler to decompile the assembly into some pseudocode. After reading it through, we found some logging messages in there referring to coalescing. We noticed that those log messages were only sent when NSDebugSwipeTrackingLogic was set to true, so we executed this:

defaults write org.mozilla.nightlydebug NSDebugSwipeTrackingLogic -bool YES

in the console, and then re-ran our swiping test in a debug build of Nightly to see what messages came out. Sure enough, this is what we saw:

2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 coalescing scrollevents
2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 cumulativeDelta:-2.000 progress:-0.002
2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 cumulativeDelta:-2.000 progress:-0.002 adjusted:-0.002
2015-04-09 15:11:55.396 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 call trackingHandler(NSEventPhaseChanged, gestureAmount:-0.002)

This coalescing means that trackSwipeEventWithOptions is only getting a subset of the events that we’re sending, which is not what we had intended. It’s still not clear what triggers the coalescing – I suspect it might have to do with how rapidly we flush our native event queue, but mstange suspects it might be more sophisticated than that. Unfortunately, the pseudocode doesn’t make it too clear.

String templates and toSource might run the risk of higher memory use?

I’m not sure I “learned” this so much, but I saw it in passing this week in this bug. Apparently, there was some section of the Marionette testing framework that was doing request / response logging with toSource and some string templates, and this caused a 20MB regression on AWSY. Doing away with those in favour of old-school string concatenation and JSON.stringify seems to have addressed the issue.
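
I wasn’t involved in that patch, but the shape of the change was roughly this kind of thing (the function and parameter names are made up; the point is just avoiding toSource and template strings for large objects):

// Before: template string + toSource – the pattern reported to regress memory.
function logResponseBefore(command, response) {
  dump(`${command.name} -> ${response.toSource()}\n`);
}

// After: old-school concatenation + JSON.stringify.
function logResponseAfter(command, response) {
  dump(command.name + " -> " + JSON.stringify(response) + "\n");
}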

When you change the remote attribute on a <xul:browser> you need to re-add the <xul:browser> to the DOM tree

I think I knew this a while back, but I’d forgotten it. I actually re-figured it out during the last episode of The Joy of Coding. When you change the remoteness of a <xul:browser>, you can’t just flip the remote attribute and call it a day. You actually have to remove it from the DOM and re-add it in order for the change to manifest properly.

You also have to re-add any frame scripts you had specially loaded into the previous incarnation of the browser before you flipped the remoteness attribute.1
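
Here’s a rough sketch of the dance (this is not the actual tabbrowser code, and the frame script URL is made up):

function setRemoteness(browser, isRemote) {
  let parent = browser.parentNode;
  parent.removeChild(browser);                  // take the <xul:browser> out of the DOM
  browser.setAttribute("remote", isRemote ? "true" : "false");
  parent.appendChild(browser);                  // re-adding creates a new frame loader

  // Any frame scripts the old incarnation had must be loaded again.
  browser.messageManager.loadFrameScript("chrome://myaddon/content/frame-script.js", true);
}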

Using Mercurial, and want to re-land a patch that got backed out? hg graft is your friend!

Suppose you got backed out, and want to reland your patch(es) with some small changes. Try this:

hg update -r tip
hg graft --force BASEREV:ENDREV

This will re-land your changes on top of tip. Note that you need --force, otherwise Mercurial will skip over changes it notices have already landed in the commit ancestry.

These re-landed changes are in the draft stage, so you can update to them and, assuming you are using the evolve extension2, commit --amend them before pushing. Voila!

Here’s the documentation for hg graft.


  1. We sidestep this with browser tabs by putting those browsers into “groups”, and having any new browsers, remote or otherwise, immediately load a particular set of framescripts. 

  2. And if you’re using Mercurial, you probably should be. 

April 12, 2015 02:50 PM

April 10, 2015

Mike Conley

The Joy of Coding (Ep. 9): More View Source Hacking!

In this episode1, I continued the work we had started in Episode 8, by trying to make it so that we don’t hit the network when viewing the source of a page in multi-process Firefox.

It was a little bit of a slog – after some thinking, I decided to undo some of the work we had done in the previous episode, and then I set up the messaging infrastructure for talking to the remote browser in the view source window.

I also rebased and landed a patch that we had written in the previous episode, after fixing up some nits2.

Then, I (re)-learned that flipping the “remote” attribute of a browser is not enough in order for it to run out-of-process; I have to remove it from the DOM, and then re-add it. And once it’s been re-added, I have to reload any frame scripts that I had loaded in the previous incarnation of the browser.

Anyhow, by the end of the episode, we were able to view the source from a remote browser inside a remote view source browser!3 That’s a pretty big deal!

Episode Agenda

References

Bug 1025146 – [e10s] Never load the source off of the network when viewing source
Notes


  1. A note that I also tried an experiment where I keep my camera running during the entire session, and place the feed into the bottom right-hand corner of the recording. It looks like there were some synchronization issues between audio and video, which are a bit irritating. Sorry about that! I’ll see what I can do about that. 

  2. and dropping a nit having conversed with :gabor about it 

  3. We were still loading it off the network though, so I need to figure out what’s going on there in the next episode. 

April 10, 2015 05:00 PM

April 04, 2015

Mike Conley

Things I’ve Learned This Week (March 30 – April 3, 2015)

This is my second post in a weekly series, where I attempt to distill my week down into some lessons or facts that I’ve picked up. Let’s get to it!

ES6 – what’s safe to use in browser development?

As of March 27, 2015, ES6 classes are still not yet safe for use in production browser code. There’s code to support them in Firefox, but they’re Nightly-only behind a build-time pref.

Array.prototype.includes and ArrayBuffer.transfer are also Nightly only at this time.

However, any of the rest of the ES6 Harmony work currently implemented by Nightly is fair-game for use, according to jorendorff. The JS team is also working on a Wiki page to tell us Firefox developers what ES6 stuff is safe for use and what is not.

Getting a profile from a hung process

According to mstange, it is possible to get profiles from hung Firefox processes using lldb1.

  1. After the process has hung, attach lldb.
  2. Type in2:
    p (void)mozilla_sampler_save_profile_to_file("somepath/profile.txt")
  3. Clone mstange’s handy profile analysis repository.
  4. Run:
    python symbolicate_profile.py somepath/profile.txt

    To graft symbols into the profile. mstange’s scripts do some fairly clever things to get those symbols – if your Firefox was built by Mozilla, then it will retrieve the symbols from the Mozilla symbol server. If you built Firefox yourself, it will attempt to use some cleverness3 to grab the symbols from your binary.

    Your profile will now, hopefully, be updated with symbols.

    Then, load up Cleopatra, and upload the profile.

    I haven’t yet had the opportunity to try this, but I hope to next week. I’d be eager to hear people’s experience giving this a go – it might be a great tool in determining what’s going on in Firefox when it’s hung4!

Parameter vs. Argument

I noticed that when I talked about “things that I passed to functions5”, I would use “arguments” and “parameters” interchangeably. I recently learned that there is more to those terms than I had originally thought.

According to this MSDN article, an argument is what is passed in to a function by a caller. To the function, it has received parameters. It’s like two sides of a coin. Or, as the article puts it, like cars and parking spaces:

You can think of the parameter as a parking space and the argument as an automobile. Just as different automobiles can park in a parking space at different times, the calling code can pass a different argument to the same parameter every time that it calls the procedure.6

Not that it really makes much difference, but I like knowing the details.
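
To borrow the analogy, here’s a tiny contrived example:

function park(car) {            // "car" is the parameter – the parking space
  console.log("Parked: " + car);
}

park("a red hatchback");        // "a red hatchback" is the argument – the automobile
park("a delivery van");         // a different argument, same parameter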


  1. Unfortunately, this technique will not work for Windows. :(  

  2. Assuming you’re running a build after this revision landed. 

  3. A binary called dump_syms_mac in mstange’s toolkit, and nm on Linux 

  4. I’m particularly interested in knowing if we can get Javascript stacks via this technique – I can see that being particularly useful with hung content processes. 

  5. Or methods. 

  6. Source 

April 04, 2015 04:00 PM

April 03, 2015

Thunderbird Blog

Thunderbird 38 goes to beta!

The next major release of Thunderbird, version 38, is now in beta and available for testing. You may download Thunderbird 38.0b1 here.

This version of Thunderbird is the first that is mostly managed by volunteer community members rather than by Mozilla staff. We have many new features, including:

Release notes are available here.

There are still a couple of features missing from this beta that we hope to ship in the final version of Thunderbird 38. Those are:

 

April 03, 2015 09:13 AM

Mike Conley

The Joy of Coding (Ep. 8): View Source Hacking

In this episode, I again started with some code review. I reviewed this patch for this bug by fellow Firefox hacker Gijs, and refreshed my memory on var hoisting. I’ve been using let for so long that it was really, really weird to see how var worked.

After that, I quickly gave an update on my plugin crash UI bug I had been working on the last episode – the patches are up, and are currently undergoing review, so there wasn’t much to do there.

Next, I started on a brand new bug1, explained the bug2, and then laid out my plan for attacking it.

Specifically, I’m going to try an experiment: I will only be working on that bug during Joy of Coding sessions. That way, there is continuity from video to video, and you won’t miss any of the development that goes on between episodes.

We sliced off a chunk to get done, and hit some minor roadblocks (as expected). The View Source code is old and crufty, and I have to do my best to make sure I don’t break any of the other applications that depend on it (like Thunderbird and SeaMonkey).

So that was the name of the game – looking to see how other applications use View Source, and trying to come up with a plan for making sure we don’t break them, while at the same time refactoring View Source to be easier to code against (and work with a frame script and messages).

It was a long slog3, but we got to a good point by the end. Let’s see how far we get next week!

Episode Agenda

References

Bug 1148807 – Method moveToAlertPosition in dialog.xml should check if opener is not null

Bug 1110887 – With e10s, plugin crash submit UI is broken
Notes

Bug 1025146 – [e10s] Never load the source off of the network when viewing source
Notes


  1. I say brand new, except that, as I explain in the video, I had already attacked this bug early on in my e10s work, and had only recently come back to it. 

  2. The View Source tool sometimes re-retrieves the source off of the network when opened from an e10s-browser 

  3. My longest episode ever, clocking in at over 2.5 hours. 

April 03, 2015 02:17 AM

April 01, 2015

Joshua Cranmer

Breaking news

It was brought to my attention recently by reputable sources that the recent announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."

April 01, 2015 07:00 AM

March 29, 2015

Rumbling Edge - Thunderbird

2015-03-28 Calendar builds

Common (excluding Website bugs)-specific: (29)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

March 29, 2015 03:58 PM

2015-03-28 Thunderbird comm-central builds

Thunderbird-specific: (67)

MailNews Core-specific: (54)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

March 29, 2015 03:57 PM

March 27, 2015

Mike Conley

Things I’ve Learned This Week (March 23 – 27, 2015)

This is the first post in a weekly series, where I’m going to attempt to distill down my week into some lessons or facts I’ve picked up. Maybe they’ll be interesting to others. We’ll see.

  1.  Gecko Media Plugins are used both for WebRTC (the Open H.264 encoding stuff runs inside a GMP) and are also going to be used to hold CDMs for EME. That’s a lot of TLAs!1
  2. This little notch I saw on the caret on my development build was because I had bidi.browser.ui set to true for some reason. It’s the “bidi caret”:
    Bidi Caret
  3. People hacking on platform are supposed to avoid using the NS_ENSURE_* macros, according to this.2 I originally learned this by reading cpearce’s review of a patch.

So let’s see if I can keep this up for a few weeks. Maybe I’ll get a collection of useful stuff by the end of the experiment!


  1. Three Letter Acronyms 

  2. It says:

    Previously the NS_ENSURE_* macros were used for this purpose, but those macros hide return statements and should not be used in new code.

     

March 27, 2015 03:15 PM

The Joy of Coding (Episode 7): Code review, and a Regression

In this episode, I started with some code review. I was reviewing a patch to make the Findbar (particularly, the Find As You Type feature) e10s-friendly.

With that review out of the way, I had to swap a bunch of information about the plugin crash UI for e10s in my head – and in particular, some non-determinism that we have to handle. I explained that stuff (and hopefully didn’t spend too much time on it).

Then, I showed how far I’d gotten with the plugin crash UI for e10s. I was able to submit a crash report, but I found I wasn’t able to type into the comment text area.

After a while, I noticed that I couldn’t type into the comment text area on Nightly, even without my patch. And then I reproduced it in Aurora. And then in Beta. Luckily, I couldn’t reproduce it in Release – but with Beta transitioning to Release in only a few days, I didn’t have a lot of time to get a bug on file to shine some light on it.

Luckily, our brilliant Steven Michaud was on the case, and has just landed a patch to fix this. Talk about fast work!

Episode Agenda

References:
Bug 1133981 – [e10s] Stop sending unsafe CPOWs after the findbar has been closed in a remote browser

Bug 1110887 – With e10s, plugin crash submit UI is broken
Notes

Bug 1147521 – Cannot type into comment area of plugin crash UI

March 27, 2015 03:03 PM

March 19, 2015

Mike Conley

The Joy of Coding (Episode 6): Plugins!

In this episode, I took the feedback of my audience, and did a bit of code review, but also a little bit of work on a bug. Specifically, I was figuring out the relationship between NPAPI plugins and Gecko Media Plugins, and how to crash the latter type (which is necessary for me in order to work on the crash report submission UI).

A minor goof – for the first few minutes, I forgot to switch my camera to my desktop, so you get prolonged exposure to my mug as I figure out how I’m going to review a patch. I eventually figured it out though. Phew!

Episode Agenda

References:
Bug 1134222 – [e10s] “Save Link As…”/”Bookmark This Link” in remote browser causes unsafe CPOW usage warning

Bug 1110887 – With e10s, plugin crash submit UI is broken
Notes

March 19, 2015 03:13 PM

March 12, 2015

Mike Conley

The Joy of Coding (Episode 5): Much Code Review

In this fifth episode, I didn’t work on any bugs. Instead, I did a bunch of code review. Ever wanted to know what a Firefox engineer does to review a patch? If so, then this episode is for you!

Episode Agenda

References:
Bug 1140898 – [e10s] “View” > “Switch Page Direction” doesn’t work in e10s

Bug 1140878 – [e10s] “Switch Page Direction” in remote browser causes unsafe CPOW usage warnings

Bug 1066531 – [e10s] Switching tabs can result in old content being displayed for a split second after the tab bar is updated

March 12, 2015 01:36 AM

March 05, 2015

Mike Conley

The Joy of Coding (Episode 4)

The fourth episode is up! Richard Milewski and I found the right settings to get OBS working properly on my machine, so this weeks episode is super-readable! If you’ve been annoyed with the poor resolution for past episodes, rejoice!

In this fourth episode, I solve a few things – I clean up a busted rebase, I figure out how I’d accidentally broken Linux printing, I think through a patch to make sure it does what I need it to do, and I review some code!

Episode Agenda

References:
Bug 1136855 – Print settings are not saved from print job to print job
Notes

Bug 1088070 – Instantiate print settings from the content process instead of the parent
Notes

Bug 1090448 – Make e10s printing work on Linux
Notes

Bug 1133577 – [e10s] “Open Link in New Tab” in remote browser causes unsafe CPOW usage warning
Notes

Bug 1133981 – [e10s] Stop sending unsafe CPOWs after the findbar has been closed in a remote browser
Notes

March 05, 2015 03:19 PM

March 03, 2015

Calendar

We are now on Twitter

In the spirit of Twitter I will keep this blog post down to 140 characters. Check out @mozcalendar for more frequent updates on the project.

March 03, 2015 12:39 AM

February 28, 2015

Calendar

Strings are Frozen for the Next Major Lightning Release

Together with Thunderbird 38, we will be releasing Lightning 4.0. Both of these releases are not beta versions, but similarly major releases like Lightning 3.3, Lightning 2.6 and their respective Thunderbird counterparts.

We have about 11 weeks left until the release will be final, and while the developers are doing their best to make sure features are stable and there are no regressions, it’s time to do some translation work.

If you have been missing your language in Lightning in the past, maybe this is a good time to contact the l10n team of your language and express interest in translating Lightning. While the initial hurdle may be large, there are usually not many changes in strings between Lightning releases. If you are lucky, someone has already translated part of Lightning in the past and all you have to do is update your locales. The translation process is fairly simple and can be done using your favorite browser.

If you are already part of the Localization teams, this is the time to head over to mozilla.locamotion.org and translate the remaining Lightning strings. Once you are done translating and the changes have been pushed to the localization repositories, please head over to the Thunderbird l10n dashboard (not the Calendar Dashboard) and sign off on the latest change. Make sure you are signing off the latest changeset of both Thunderbird and Lightning, as only the newest sign-off will be used.

Should you have any questions, please feel free to send me an email or comment on this post and I will get back to you as soon as possible.

February 28, 2015 02:49 PM

Google Summer of Code 2015 Projects

The one thing I like best about the Google Summer of Code is that it gives us an opportunity to work on cool new features I never have time for on my own. Also, it’s a great opportunity for students to learn about working on a large-scale project and prepare for real-life work, which is very much different from the smaller projects I remember from my university. Students that have stayed with us even after the Summer of Code have proven themselves invaluable; the spirit and enthusiasm they show for an open source project like the Mozilla Calendar Project gives me a warm feeling in my heart.

This year, we have proposed two projects: Introducing Calendar Accounts and Resource Booking Improvements. As the projects have been available on the wiki for a while (sorry for not blogging about this earlier!), we’ve already had a student or two interested in applying. However, that doesn’t mean there isn’t any room left for a fine candidate like you!

In the first project, Introducing Calendar Accounts, the goal is to improve our backend layer to move from a flat list of calendars to a hierarchical list with calendars grouped by the accounts they belong to. Aside from the benefits this gives us w.r.t. avoiding code duplication and ugly hacks, it will open Lightning to a load of new features related to accounts, for example notifications if a new calendar was added to the account or improved support for authenticating to calendars on one server with different credentials.

Second, we have proposed a project on Resource Booking Improvements. Right now, our invite attendees dialog is fairly simple and only allows entering email addresses and seeing their free/busy status. What is missing is an easy way to invite resources and rooms, for example when you want to book a conference room for your meeting. There is an inconspicuous feature that allows changing an attendee to a resource entry, although there is no real value in doing this aside from sending more correct data to the calendar server. The user still has to remember the virtual email address associated with the conference room. With this Summer of Code project we want to allow any kind of calendar provider to be able to specify how to search for rooms and resources. Certain CalDAV servers support searching for these entries using custom queries; the goal for this project is mostly to support those servers.

If you are interested, please do get in touch with me, either via email or on irc.mozilla.org, where my nickname is Fallen and I usually hang around in #calendar. Should I not be around, redDragon (a former GSoC Student, by the way!) will be there to help you.

February 28, 2015 02:32 PM

Provider for Google Calendar Postmortem

First of all, I’d like to apologize for not adding in new blog posts once in a while. There have been a few topics I could have written about, but I never got around to it. The consequence is that there will be a few posts in succession now, I hope to be better about this in the future.

In this post, I’d like to tell you a little bit about the changes to the Provider for Google Calendar that have taken place in the last months. With due prior notice, Google has shut down version 1 and version 2 of the Google Calendar API. The previous version of the Provider for Google Calendar, version 0.32, was still using the API v1.

The changes to the API were fairly substantial, so I took the opportunity to rewrite large parts of the Provider to use new JavaScript features and generally make the code more readable. I also added some new features, including:

As such drastic changes are a common source for regressions, I went through 10 rounds of pre-release testing and got some very helpful input from those who commented on the bug or sent me an email. There would have been substantially more issues without these folks, so thank you very much! In the last round the amount of issues was down to a level where I felt comfortable releasing the Provider to the world.

When I released version 1.0, something inevitable happened: nearly 300,000 users found more issues than 140 testers did, so I had to do a few additional releases to fix more major issues. The new API version imposes limits on the number of requests being made, so one of the first issues I had to overcome was gaining more quota. Thanks to the fantastic folks at Google I was able to solve this issue using a combination of code changes to reduce the number of requests and higher quota limits. Here is a roundup of the other issues:

In retrospect, there have been a lot of complaints, but on the other hand a lot of people have noticed how important this addon has become for them. Many have shown their gratitude by sending a donation via the addons page. I hope that version 1.0.4 fixes most of the issues; only a few more have been reported. If you continue to experience difficulties, please send me an email or visit the support forum.


February 28, 2015 01:42 PM

February 27, 2015

Thunderbird Blog

Thunderbird Usage Continues to Grow

We’re happy to report that Thunderbird usage continues to expand.

Mozilla measures program usage by Active Daily Installations (ADI), which is the number of pings that Mozilla servers receive as installations do their daily plugin block-list update. This is not the same as the number of active users, since some users don’t access their program each day, and some installations are behind firewalls. An estimate of active monthly users is typically done by multiplying the ADI by a factor of 3.

To plot changes in Thunderbird usage over time, I’ve picked the peak ADI for each month for the last few years. Here’s the result:

Thunderbird Active Daily Installations, peak value per month.

Germany has long been our #1 country for usage, but in 4th quarter 2014, Japan exceeded US as the #2 country. Here’s the top 10 countries, taken from the ADI count of February 24, 2015:

Rank Country ADI 2015-02-24
1 Germany 1,711,834
2 Japan 1,002,877
3 United States 927,477
4 France 777,478
5 Italy 514,771
6 Russian Federation 494,645
7 Poland 480,496
8 Spain 282,008
9 Brazil 265,820
10 United Kingdom 254,381
All Others 2,543,493
Total 9,255,280

Country Rankings for Thunderbird Usage, February 24, 2015

The Thunderbird team is now working hard preparing our next major release, which will be Thunderbird 38 in May 2015. We’ll be blogging more about that release in the next few weeks, including reporting on the many new features that we have added.

February 27, 2015 10:44 PM

Mike Conley

The Joy of Coding (Episode 3)

The third episode is up! My machine was a little sluggish this time, since I had OBS chugging in the background attempting to do a hi-res screen recording simultaneously.

Richard Milewski and I are going to try an experiment where I try to stream with OBS next week, which should result in a much higher-resolution stream. We’re also thinking about having recording occur on a separate machine, so that it doesn’t bog me down while I’m working. Hopefully we’ll have that set up for next week.

So this third episode was pretty interesting. Probably the most interesting part was when I discovered in the last quarter that I’d accidentally shipped a regression in Firefox 36. Luckily, I’ve got a patch that fixes the problem that has been approved for uplift to Aurora and Beta. A point release is also planned for 36, so I’ve got approval to get the fix in there too. \o/

Here are the notes for the bug I was working on. The review feedback from karlt is in this bug, since I kinda screwed up where I posted the review request with MozReview.

February 27, 2015 03:27 PM

February 19, 2015

Mike Conley

The Joy of Coding (Episode 2)

The second episode is up! We seem to have solved the resolution problem this time around – big thanks to Richard Milewski for his work there. This time, however, my microphone levels were just a bit low for the first half-hour. That’s my bad – I’ll make sure my gain is at the right level next time before I air.

Here are the notes for the bug I was working on.

And let me know if there’s anything else I can do to make these episodes more useful or interesting.

February 19, 2015 03:54 AM

February 18, 2015

Meeting Notes

Thunderbird: 2015-02-17

Thunderbird meeting notes 2015-02-17. NOON PST. Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

fallen, wsmwk, rkent, aceman, paenglab, makemyday, magnus, jorgk,

Current status and discussions

  • 36.0 beta is out

Critical Issues

Critical bugs. Please leave these here until they’re confirmed fixed.

  • Auto-complete improvements – some of those could go into esr31
  • ldap crasher
  • certificate crasher
  • Lightning integration
  • AB all-account search bug 170270
  • maildir UI
  • video chat: The initial set of patches, with IB UI, may land this week (they’re up for final review). We’re considering also landing a set of matching strings for TB so uplifting a port of the UI becomes possible. I’m not sure the feature will be ready to ship in TB38 as it has not undergone much real world testing yet, but you never know, there may not be any nasty surprises ;)

Release Issues

Upcoming

  • Thunderbird 38 moves to Earlybird ~ February 24, 2015
    • string freeze

Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!) , we need to rewrite / heavily modify the lightning articles on support.mozilla.org. let me know irc: rolandtanglao on #tb-support-crew or rtanglao AT mozilla.com OR simply start editing the articles

Round Table

Paenglab

  • I’ve requested Tracking_TB38 for bug 1096006 “Add AccountManager to the prefs in tab”.
    • Is this bug desired for TB 38? It would be needed to enable PrefsInTab.
    • If yes, I have a string only patch to land before string freeze.
  • I’ve also requested Tracking_TB38 for Hiro’s bug 1087233 “Create about:downloads to migrate to Downloads.jsm”.
    • I’ve needinfoed him to ask if he has time to finish, but no answer so far.
    • It has also strings in it. I could make a strings only patch if needed.

sshagarwal

  • Plan to land AB fix bug 170270 for TB 38.
  • Bundled chat desktop notifications bug 1127802 waiting for final review.
  • Discussing schema design and appropriate db backend for next gen address book with mconley. We plan to get an approximate idea of the average number of contacts in users’ address books (bug 1132588) as a required minimum performance measure.

wsmwk

  • 36.0 beta QA organized
  • triage topcrashes
  • working on HWA question bug 1131879 Disable hardware acceleration (HWA)

aceman

  • having an active week with fixing smaller backend bugs (landing right now), polishing for the release. Proud to fix long-standing dataloss bug 840418.

Question Time

Other

  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

Action Items

  • organize 36 beta postmortem meeting (wsmwk)
  • lightning integration meeting (rkent/fallen)

February 18, 2015 04:00 AM

February 17, 2015

Mike Conley

On unsafe CPOW usage, and “why is my Nightly so sluggish with e10s enabled?”

If you’ve opened the Browser Console lately while running Nightly with e10s enabled, you might have noticed a warning message – “unsafe CPOW usage” – showing up periodically.

I wanted to talk a little bit about what that means, and what’s being done about it. Brad Lassey already wrote a bit about this, but I wanted to expand upon it (especially since one of my goals this quarter is to get a handle on unsafe CPOW usage in core browser code).

I also wanted to talk about sluggishness that some of our brave Nightly testers with e10s enabled have been experiencing, and where that sluggishness is coming from, and what can be done about it.

What is a CPOW?

“CPOW” stands for “Cross-process Object Wrapper”1, and is part of the glue that has allowed e10s to be enabled on Nightly without requiring a full re-write of the front-end code. It’s also part of the magic that’s allowing a good number of our most popular add-ons to continue working (albeit slowly).

In sum, a CPOW is a way for one process to synchronously access and manipulate something in another process, as if they were running in the same process. Anything that can be considered a JavaScript Object can be represented as a CPOW.

Let me give you an example.

In single-process Firefox, easy and synchronous access to the DOM of web content was more or less assumed. For example, in browser code, one could do this from the scope of a browser window:

let doc = gBrowser.selectedBrowser.contentDocument;
let contentBody = doc.body;

Here contentBody corresponds to the <body> element of the document in the currently selected browser. In single-process Firefox, querying for and manipulating web content like this is quick and easy.

In multi-process Firefox, where content is processed and rendered in a completely separate process, how does something like this work? This is where CPOWs come in2.

With a CPOW, one can synchronously access and manipulate these items, just as if they were in the same process. We expose a CPOW for the content document in a remote browser with contentDocumentAsCPOW, so the above could be rewritten as:

let doc = gBrowser.selectedBrowser.contentDocumentAsCPOW;
let contentBody = doc.body;

I should point out that contentDocumentAsCPOW and contentWindowAsCPOW are exposed on <xul:browser> objects, and that we don’t make every accessor of a CPOW have the “AsCPOW” suffix. This is just our way of making sure that consumers of the contentWindow and contentDocument on the main process side know that they’re probably working with CPOWs3. contentBody.firstChild would also be a CPOW, since CPOWs can only beget more CPOWs.

So for the most part, with CPOWs, we can continue to query and manipulate the <body> of the document loaded in the current browser just like we used to. It’s like an invisible compatibility layer that hops us right over that process barrier.

Great, right?

Well, not really.

CPOWs are really a crutch to help add-ons and browser code exist in this multi-process world, but they’ve got some drawbacks. Most noticeably, there are performance drawbacks.

Why is my Nightly so sluggish with e10s enabled?

Have you been noticing sluggish performance on Nightly with e10s? Chances are this is caused by an add-on making use of CPOWs (either knowingly or unknowingly). Because CPOWs are used for synchronous reading and manipulation of objects in other processes, they send messages to other processes to do that work, and block the main process while they wait for a response. We call this “CPOW traffic”, and if you’re experiencing a sluggish Nightly, this is probably where the sluggishness is coming from.

Instead of using CPOWs, add-ons and browser code should be updated to use frame scripts sent over the message manager. Frame scripts cannot block the main process, and can be optimized to send only the bare minimum of information required to perform an action in content and return a result.
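
As a rough sketch of what that looks like (the message names are made up, and this assumes the frame script has already been loaded with loadFrameScript), the DOM work happens in the content process and only a small result crosses the process boundary asynchronously:

// Frame script (runs in the content process):
addMessageListener("Example:CountLinks", function () {
  sendAsyncMessage("Example:LinkCount", {
    count: content.document.querySelectorAll("a").length,
  });
});

// Browser code (runs in the parent process):
let mm = gBrowser.selectedBrowser.messageManager;
mm.addMessageListener("Example:LinkCount", function (message) {
  console.log("Links in content: " + message.data.count);
});
mm.sendAsyncMessage("Example:CountLinks");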

Add-ons built with the Add-on SDK should already be using “content scripts” to manipulate content, and therefore should inherit a bunch of fixes from the SDK as e10s gets closer to shipping. These add-ons should not require too many changes. Old-style add-ons, however, will need to be updated to use frame scripts unless they want to be super-sluggish and bog the browser down with CPOW traffic.

And what constitutes “unsafe CPOW usage”?

“unsafe” might be too strong a word. “unexpected” might be a better term. Brad Lassey laid this out in his blog post already, but I’ll quickly rehash it.

There are two main cases to consider when working with CPOWs:

  1. The content process is already blocked sending up a synchronous message to the parent process
  2. The content process is not blocked

The first case is what we consider “the good case”. The content process is in a known good state, and it’s primed to receive IPC traffic (since it’s otherwise just idling). The only bad part about this is the IPC traffic.

The second case is what we consider the bad case. This is when the parent is sending down CPOW messages to the child (by reading or manipulating objects in the content process) when the child process might be off processing other things. This case is far more likely than the first case to cause noticeable performance problems, as the main thread of the content process might be bogged down doing other things before it can handle the CPOW traffic – and the parent will be blocked waiting for the messages to be responded to!

There’s also a more speculative fear that the parent might send down CPOW traffic at a time when it’s “unsafe” to communicate with the content process. There are potentially times when it’s not safe to run JS code in the content process, but CPOW traffic requires both processes to execute JS. This is a concern that was expressed to me by someone over IRC, and I don’t exactly understand what the implications are – but if somebody wants to comment and let me know, I’ll happily update this post.

So, anyhow, to sum – unsafe CPOW usage is when CPOW traffic is initiated on the parent process side while the content process is not blocked. When this unsafe CPOW usage occurs, we log an “unsafe CPOW usage” message to the Browser Console, along with the script and line number where the CPOW traffic was initiated from.

Measuring

We need to measure and understand CPOW usage in Firefox, as well as in add-ons running in Firefox, and over time we need to reduce this CPOW usage. The priority should be on reducing unsafe CPOW usage in core browser code.

If there’s anything that working on the Australis project taught me, it’s that in order to change something, you need to know how to measure it first. That way, you can make sure your efforts are having an effect.

We now have a way of measuring the amount of time that Firefox code and add-ons spend processing CPOW messages. You can look at it yourself – just go to about:compartments.

It’s not the prettiest interface, but it’s a start. The second column is the time processing CPOW traffic, and the higher the number, the longer it’s been doing it. Naturally, we’ll be working to bring those numbers down over time.

A possibly quick-fix for a slow Nightly with e10s

As I mentioned, we also list add-ons in about:compartments, so if you’re experiencing a slow Nightly, check out about:compartments and see if there’s an add-on with a high number in the second column. Then, try disabling that add-on to see if your performance problem is reduced.

If so, great! Please file a bug on Bugzilla in this component for the add-on, mention the name of the add-on4, describe the performance problem, and mark it blocking e10s-addons if you can.

We’re hoping to automate this process by exposing some UI that informs the user when an add-on is causing too much CPOW traffic. This will be landing in Nightly near you very soon.

PKE Meter, a CPOW Geiger Counter

Logging “unsafe CPOW usage” is all fine and dandy if you’re constantly looking at the Browser Console… but who is constantly looking at the Browser Console? Certainly not me.

Instead, I whipped up a quick and dirty add-on that plays a click, like a Geiger Counter, anytime “unsafe CPOW usage” is put into the Browser Console. This has already highlighted some places where we can reduce unsafe CPOW usage in core Firefox code – particularly:

  1. The Page Info dialog. This is probably the worst offender I’ve found so far – humongous unsafe CPOW traffic just by opening the dialog, and it’s really sluggish.
  2. Closing tabs. SessionStore synchronously communicates with the content process in order to read the tab state before the tab is closed.
  3. Back / forward gestures, at least on my MacBook
  4. Typing into an editable HTML element after the Findbar has been opened.

If you’re interested in helping me find more, install this add-on5, and listen for clicks. At this point, I’m only interested in unsafe CPOW usage caused by core Firefox code, so you might want to disable any other add-ons that might try to synchronously communicate with content.

If you find an “unsafe CPOW usage” that’s not already blocking this bug, please file a new one! And cc me on it! I’m mconley at mozilla dot com.


  1. I pronounce CPOW as “kah-POW”, although I’ve also heard people use “SEE-pow”. To each his or her own. 

  2. For further reading, Bill McCloskey discusses CPOWs in greater detail in this blog post. There’s also this handy documentation

  3. I say probably, because in the single-process case, they’re not working with CPOWs – they’re accessing the objects directly as they used to. 

  4. And say where to get it from, especially if it’s not on AMO. 

  5. Source code is here 

February 17, 2015 04:47 PM

February 16, 2015

Mike Conley

The Joy of Coding (Episode 1)

Here’s the first episode! I streamed it last Wednesday, and it was mostly concerned with bug 1090439, which is about making the print dialog and progress calls from the child process asynchronous.

Here are the notes for that bug. I still haven’t closed it yet, so perhaps I’ll keep pressing on this next Wednesday when I stream Episode 2. We’ll see!

A note that I did struggle with some resolution issues in this episode. I’m working with Richard Milewski from the Air Mozilla team to make this better for the next episode. Sorry about that!

February 16, 2015 07:29 PM

Rumbling Edge - Thunderbird

2015-02-15 Calendar builds

Common (excluding Website bugs)-specific: (36)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

February 16, 2015 07:22 AM

2015-02-15 Thunderbird comm-central builds

Thunderbird-specific: (27)

MailNews Core-specific: (22)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

February 16, 2015 07:21 AM

February 11, 2015

Meeting Notes

Thunderbird: 2015-02-10

Thunderbird meeting notes 2015-02-10. Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

aceman, cloep, florian, jcranmer, Jorg K, mkmelin, Paneglab, rkent, Roland, sshagarwal, wsmwk, MakeMyDay

Action items from last meetings

  • wsmwk to get in touch with Standard8 re: beta.
    • done – bkerensa and sylvestre are on it
  • rkent to work with Standard8 (and Fallen) on issues of 1.management of tracking flags and 2. pushing into aurora and beta for TB 38. (meeting generally agreed that mkmelin and rkent would be appropriate to manage pushing patches forward into aurora and beta).

Critical Issues

Critical bugs. Please leave these here until they’re confirmed fixed.

  • Auto-complete improvements – some of those could go into esr31

Release Issues

  • Current beta blocked due to Windows XP failures. rkent has try server configuration that can test a beta build, and will try Standard8’s suggestions.

Upcoming

  • Thunderbird 38 moves to Earlybird ~ February 24, 2015

We need people to commit to being mentors.

Lightning to Thunderbird Integration

See https://calendar.etherpad.mozilla.org/thunderbird-integration

  • As underpass has pointed out repeatedly (thanks for your patience!) , we need to rewrite / heavily modify the lightning articles on support.mozilla.org. let me know irc: rolandtanglao on #tb-support-crew or rtanglao AT mozilla.com OR simply start editing the articles

Round Table

JosiahOne

  • So I started a new job recently, but because of that plus school, my time for TB stuff is very, very low. I will continue doing ui-reviews and reviews, but implementing anything has pretty much come to an end until summer break.

wsmwk

  • release management https://etherpad.mozilla.org/XxBwrpMHKz
  • disable HWA for 38? it has been suggested by someone in support to disable because “3d acceleration … does little or nothing for Thunderbird but messes menus, font and causes crashes (the kind with no crash reporter reports).” bug 1131879

rkent

  • Hot bugs
    • bug 1125577 – startup crash in NSSCryptoContext_FindCertificateByEncodedCertificate (and similar bug 1128614)
    • bug 1124015 – Add UI to select maildir for storage when creating accounts
    • bug 1119529 – Sending message succeeds but Error “error while running message filters on it.”
  • Unfortunately a long review queue that I will be looking at for the next few days.
  • I now have access to Thunderbird ADI data. Our ADI reached a new peak last month (in spite of SlashDot assuming “Thunderbird usage is dropping”) and Japan has now surpassed US as #2 country (after Germany).

jcranmer

  • Hopefully going to work down my review queue by this weekend
  • Main jsmime perf regression fixes are r? rkent
  • I have a non-promisified version of OAuth2, but still no UI hookup
  • Mozharness-based mozmill tests: I’ve updated the runner, need to make updates to three or four repositories to make it work
    • Trying to get this in progress for Thunderbird 38, so we don’t need to maintain the old mozmill buildbot stuff for ESR
  • I’ve been doing some work with the emailjs team to add functionality to their SMTP libraries (specifically with regards to SASL) that we could share between TB/Whiteout.io/Gaia email teams.

TheOne

Jorg K

I have an XP machine (32 bit); I could run (not build) and debug (with WinDbg) the beta, if that’s of any help. I’d need to know where to download it … and the mentioned suggestions to try. (Contact via e-mail to start off.)

mkmelin

  • autocomplete:
    • the critical regressions fixed
    • 3 prominent complaints still not done: the “tab too quickly doesn’t complete”, “show as red even if found”, “insert link missing paste url in context menu”
    • ordering: now landed on esr, some complaints still, need to investigate

Question Time

I’d like to know what happened to the “Thunderbird Discussions with Mozilla”, i.e. the letter that was meant to be sent to Mozilla management re. funding, donations, staffing, etc. There was a lively discussion on the tb-planning mailing list in early January 2015.

  • won’t happen before 38 branching

Support team

  • As underpass has pointed out, we need to rewrite all the Lightning articles; they are out of date whether or not we finish the integration for TB 38. Email me, ping roland on IRC, or just edit the articles (see above under “Lightning to Thunderbird Integration”). Tonnes, do you have time to write some of these Lightning articles in English?

Other

  • Note – meeting notes must be copied from etherpad to wiki before 5AM CET next day so that they will go public in the meeting notes blog.

(Extra) Meeting next Tuesday, Feb 17.

Action Items

-none-

February 11, 2015 04:00 AM

February 10, 2015

Bryan Clark

If writing is a muscle

I haven’t been to the gym in a long time.

David Eaves, a person I have immense amounts of respect for, has been using a tag line related to this title/intro on his blog for quite a while, probably longer than I’ve known him. And I honestly never gave much thought to the idea that writing really is a muscle until recently. I’ve taken a break from being a designer (or a programmer) to work as a product manager for over a year now. Designing and coding require a set of skills I’m very familiar with: code is an interpretive language that people use to communicate with each other about the details of commands they issue a computer, while design is a more visual language of storytelling, one that leans heavily on imagery and some text to convey the journey of a user to the team intent on correctly interacting with that user. Both pursuits are about communication, but each uses written language in a very different way. As a product manager I’m forced to lean on my skills as a writer, and I don’t think I had much in the way of skills previously, but whatever bedridden muscles have been dormant are reawakening as I realize how young and foolish I really was to ignore this essential form of communication.

I’m hoping there is more to come, perhaps starting with some tech posts about recent projects while I try to grapple with this idea of writing more than a tweet.

February 10, 2015 07:07 AM

February 09, 2015

Mark Banner

Firefox Hello Desktop: Behind the Scenes – Flux and React

This is the first of a few posts that I’m planning about how we implement and work on the desktop and standalone parts of Firefox Hello. We’ve been doing some things in different ways, which we have found to be advantageous. We know other teams are interested in what we do, so it’s time to share!

Content Processes

First, a little bit of architecture: The panels and conversation window run in content processes (just like regular web pages). The conversation window shares code with the link-clicker pages that are on hello.firefox.com.

Hence those parts run very much in a web style, and for various reasons we decided to build them in a web-style manner. As a result, we’ve ended up using React and Flux to aid our development.

I’ll detail more about the architecture in future posts.

The Flux Pattern

Flux is a recommended pattern for use alongside React, although I think you could use it with other frameworks as well. I’ll detail here how we use Flux specifically for Hello. As Flux is a pattern, there’s no one set standard, and the methods of implementation vary.

Flux Overview

The main parts of a flux system are stores, components and actions. Some of this is a bit like an MVC system, but I find there’s better definition about what does what.
[Diagram: example flow in a Flux pattern]

An action is effectively the result of an event that changes the system. For example, in Loop, we use actions for user events, but we also use them for any data incoming from the server.

A store contains the business logic. It listens for actions; when it receives one, it does something based on the action and updates its state appropriately.

A component is a view. The view has a set of properties (passed in values) and/or state (the state is obtained from the store’s state). For a given set of properties and state, you always get the same layout. The components listen for updates to the state in the stores and update appropriately.

We also have a dispatcher. The dispatcher dispatches actions to interested stores. Only one action can be processed at any one time. If a new action comes in, then the dispatcher queues it.

Actions are always synchronous – if changes would happen due to external stimuli then these will be new actions. For example, this prevents actions from blocking other actions whilst waiting for a response from the server.
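
To make the shape of that flow a bit more concrete, here’s a minimal sketch of a dispatcher and a store in plain JavaScript. The names (Dispatcher, RoomStore, “setupRoomInfo”) are purely illustrative; this is not the actual Loop/Hello code, just my reading of the pattern described above.

// A minimal sketch of the flow described above. The names here
// (Dispatcher, RoomStore, "setupRoomInfo") are illustrative only and are
// not the actual Loop/Hello implementation.
function Dispatcher() {
  this._stores = [];
  this._queue = [];
  this._dispatching = false;
}

Dispatcher.prototype.register = function(store) {
  this._stores.push(store);
};

// Only one action is processed at a time; anything dispatched while an
// action is in flight is queued and handled afterwards.
Dispatcher.prototype.dispatch = function(action) {
  this._queue.push(action);
  if (this._dispatching) {
    return;
  }
  this._dispatching = true;
  while (this._queue.length) {
    var next = this._queue.shift();
    this._stores.forEach(function(store) {
      store.handleAction(next);
    });
  }
  this._dispatching = false;
};

// A store owns the business logic and its state, and notifies listening
// views whenever that state changes.
function RoomStore(dispatcher) {
  this._state = { roomState: "INIT" };
  this._listeners = [];
  dispatcher.register(this);
}

RoomStore.prototype.handleAction = function(action) {
  if (action.name === "setupRoomInfo") {
    this._state = { roomState: "READY", roomName: action.roomName };
    this._listeners.forEach(function(listener) { listener(); });
  }
};

RoomStore.prototype.getStoreState = function() {
  return this._state;
};

RoomStore.prototype.addChangeListener = function(listener) {
  this._listeners.push(listener);
};

// A user event (or data arriving from the server) becomes an action:
var dispatcher = new Dispatcher();
var store = new RoomStore(dispatcher);
dispatcher.dispatch({ name: "setupRoomInfo", roomName: "Standup" });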

What advantages do we get?

For Hello, we find the Flux pattern fits very nicely. Before, we used a traditional MVC model; however, we kept getting into a mess, with events being all over the place and application logic being wrapped in amongst the views as well as the models.

Now, we have a much more defined structure:

React provides the component structure: it has defined ways of tracking state and properties, and the re-rendering on state change automates a lot of the view updates. Since it encourages the separation of immutable properties, a whole class of inadvertent errors is eliminated.
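
As an illustration of that (again, a sketch rather than the real Hello code), a component can pull its initial state from a store like the one sketched earlier and re-render whenever the store announces a change:

// Illustrative only: a view that renders purely from props and state, and
// subscribes to a store for state updates. Assumes a store object with
// getStoreState() and addChangeListener() as sketched earlier.
var RoomStatusView = React.createClass({
  propTypes: {
    store: React.PropTypes.object.isRequired
  },

  getInitialState: function() {
    return this.props.store.getStoreState();
  },

  componentDidMount: function() {
    this.props.store.addChangeListener(this._onStoreChange);
  },

  _onStoreChange: function() {
    this.setState(this.props.store.getStoreState());
  },

  render: function() {
    // The same props and state always produce the same output.
    return React.createElement("p", null, "Room state: " + this.state.roomState);
  }
});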

There are also many advantages for debugging – we have a flag that lets us watch all the actions going through the system, so it’s much easier to track what events are taking place and the data passed with them. This, combined with the fact that actions have limited scope, helps with debugging the data flows.

Simple Unit Testing

For testing, we’re able to do unit testing in a much simpler fashion:

it("should render a muted local audio button", function() {
  var comp = TestUtils.renderIntoDocument(
    React.createElement(sharedViews.MediaControlButton, {
      scope: "local",
      type: "audio",
      action: function(){},
      enabled: false
    }));

  expect(comp.getDOMNode().classList.contains("muted")).eql(true);
});
it("should set the state to READY", function() {
  store.setupRoomInfo(new sharedActions.SetupRoomInfo(fakeRoomInfo));

  expect(store._storeState.roomState).eql(ROOM_STATES.READY);
});

We therefore have many tests written at the unit test level. Many times we’ve found and prevented issues whilst writing these tests, and yet, because these are all content based, we can run the tests in a few seconds. I’ll go more into testing in a future post.

References

Here are a few references to some of the areas in our code base that are good examples of our Flux implementation. Note that behind the scenes, everything is known as Loop – the codename for the project.

Conclusion and more coming…

We’ve found that using the Flux model is much more organised than we were with MVC; possibly it’s just a better-defined methodology, but it gave us the structure we were badly missing. In future posts, I’ll discuss our development facilities, more about the desktop architecture, and whatever else comes up, so please do leave questions in the comments and I’ll try to answer them either directly or with more posts.

February 09, 2015 08:49 PM

February 08, 2015

Mike Conley

The Joy of Coding (or, Firefox Hacking Live!)

A few months back, I started publishing my bug notes online, as a way of showing people what goes on inside a Firefox engineer’s head while fixing a bug.

This week, I’m upping the ante a bit: I’m going to live-hack on Firefox for an hour and a half for the next few Wednesdays on Air Mozilla. I’m calling it The Joy of Coding [1]. I’ll be working on real Firefox bugs [2] – not some toy exercise-bug where I’ve pre-planned where I’m going. It will be unscripted, unedited, and uncensored. But hopefully not uninteresting [3]!

Anyhow, the first episode airs this Wednesday. I’ll be using #livehacking on irc.mozilla.org as a backchannel. Not sure what bug(s) I’ll be hacking on – I guess it depends on what I get done on Monday and Tuesday.

Anyhow, we’ll try it for a few weeks to see if folks are interested in watching. Who knows, maybe we can get a few more developers doing this too – I’d enjoy seeing what other folks do to fix their bugs!

Anyhow, I hope to see you there!


  1. Maybe I’ll wear an afro wig while I stream 

  2. Specifically, I’ll be working on Electrolysis bugs, since that’s what my focus is on these days. 

  3. I’ve actually piloted this for the past few weeks, streaming on YouTube Live. Here’s a playlist of the pilot episodes

February 08, 2015 07:58 PM

January 15, 2015

Rumbling Edge - Thunderbird

2015-01-14 Calendar builds

Common (excluding Website bugs)-specific: (16)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

January 15, 2015 07:52 AM

2015-01-14 Thunderbird comm-central builds

Thunderbird-specific: (30)

MailNews Core-specific: (14)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

January 15, 2015 07:51 AM

January 13, 2015

Joshua Cranmer

Why email is hard, part 8: why email security failed

This post is part 8 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. Part 7 discusses the problem of trust. This part discusses why email security has largely failed.

At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.

Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others fail?

DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].

The failure of S/MIME and PGP to see large deployment is certainly a large topic of discussion on myriads of cryptography enthusiast mailing lists, which often like to partake in propositions of new end-to-end encryption of email paradigms, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I see it as unlikely that a protocol will come about that can fix these weaknesses well enough to become successful.

The first weakness, and one I've harped about many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it mean seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that inherently requires users to do other actions out-of-band of email to work properly.

Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly based on the fact that other people in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).

Key discovery is another weakness, although it's arguably the easiest one to solve. If you try to keep discovery independent of trust, the problem of key discovery is merely picking a protocol to publish and another one to find keys. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.

Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: people who lose their private keys or key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.

Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser known is server-side search and filtration. While there exist some mechanisms to do search on encrypted text, those mechanisms rely on the fact that you can manipulate the text to change the message, destroying the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to have to pay the transfer rates to download all the messages in their folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 in a single folder—would still prefer to not have to transfer all their mail even on desktops.

The more well-known feature that would disappear is spam filtration. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam (a 10% reduction in efficiency would double the amount of server storage, per my calculations), and client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine having your email take 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.

When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.

This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!

[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former GMail spam employee, if it takes you as long as three minutes to calculate reputation, the spammer wins.

January 13, 2015 04:38 AM

January 10, 2015

Joshua Cranmer

A unified history for comm-central

Several years back, Ehsan and Jeff Muizelaar attempted to build a unified history of mozilla-central across the Mercurial era and the CVS era. Their result is now used in the gecko-dev repository. While being distracted on yet another side project, I thought that I might want to do the same for comm-central. It turns out that building a unified history for comm-central makes mozilla-central look easy: mozilla-central merely had one import from CVS. In contrast, comm-central imported twice from CVS (the calendar code came later), four times from mozilla-central (once with converted history), and imported twice from Instantbird's repository (once with converted history). Three of those conversions also involved moving paths. But I've worked through all of those issues to provide a nice snapshot of the repository [1]. And since I've been frustrated by failing to find good documentation on how this sort of process went for mozilla-central, I'll provide details on the process for comm-central.

The first step and probably the hardest is getting the CVS history in DVCS form (I use hg because I'm more comfortable with it, but there's effectively no difference between hg, git, or bzr here). There is a git version of Mozilla's CVS tree available, but I've noticed after doing research that its last revision is about a month before the revision I need for Calendar's import. The documentation for how that repo was built is no longer on the web, although we eventually found a copy after I wrote this post on git.mozilla.org. I tried doing another conversion using hg convert to get CVS tags, but that rudely blew up in my face. For now, I've filed a bug on getting an official, branchy-and-tag-filled version of this repository, while using the current lack of history as a base. Calendar people will have to suffer missing a month of history.

CVS is famously hard to convert to more modern repositories, and, as I've done my research, Mozilla's CVS looks like it uses those features which make it difficult. In particular, both the calendar CVS import and the comm-central initial CVS import used a CVS tag HG_COMM_INITIAL_IMPORT. That tagging was done, on only a small portion of the tree, twice, about two months apart. Fortunately, mailnews code was never touched on CVS trunk after the import (there appears to be one commit on calendar after the tagging), so it is probably possible to salvage a repository-wide consistent tag.

The start of my script for conversion looks like this:

#!/bin/bash

set -e

WORKDIR=/tmp
HGCVS=$WORKDIR/mozilla-cvs-history
MC=/src/trunk/mozilla-central
CC=/src/trunk/comm-central
OUTPUT=$WORKDIR/full-c-c

# Bug 445146: m-c/editor/ui -> c-c/editor/ui
MC_EDITOR_IMPORT=d8064eff0a17372c50014ee305271af8e577a204

# Bug 669040: m-c/db/mork -> c-c/db/mork
MC_MORK_IMPORT=f2a50910befcf29eaa1a29dc088a8a33e64a609a

# Bug 1027241, bug 611752 m-c/security/manager/ssl/** -> c-c/mailnews/mime/src/*
MC_SMIME_IMPORT=e74c19c18f01a5340e00ecfbc44c774c9a71d11d

# Step 0: Grab the mozilla CVS history.
if [ ! -e $HGCVS ]; then
  hg clone git+https://github.com/jrmuizel/mozilla-cvs-history.git $HGCVS
fi

Since I don't want to include the changesets that are useless to comm-central history, I trimmed the history by using hg convert to eliminate changesets that don't change the necessary files. Most of the files are simple directory-wide changes, but S/MIME only moved a few files over, so it requires a more complex way to grab the file list. In addition, I also replaced the % in the usernames with the @ that they are used to appearing with in hg. The relevant code is here:

# Step 1: Trim mozilla CVS history to include only the files we are ultimately
# interested in.
cat >$WORKDIR/convert-filemap.txt <<EOF
# Revision e4f4569d451a
include directory/xpcom
include mail
include mailnews
include other-licenses/branding/thunderbird
include suite
# Revision 7c0bfdcda673
include calendar
include other-licenses/branding/sunbird
# Revision ee719a0502491fc663bda942dcfc52c0825938d3
include editor/ui
# Revision 52efa9789800829c6f0ee6a005f83ed45a250396
include db/mork/
include db/mdb/
EOF

# Add the S/MIME import files
hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'include {file}\n'}" >>$WORKDIR/convert-filemap.txt

if [ ! -e $WORKDIR/convert-authormap.txt ]; then
hg -R $HGCVS log --template "{email(author)}={sub('%', '@', email(author))}\n" \
  | sort -u > $WORKDIR/convert-authormap.txt
fi

cd $WORKDIR
hg convert $HGCVS $OUTPUT --filemap convert-filemap.txt -A convert-authormap.txt

That last command provides us the subset of the CVS history that we need for unified history. Strictly speaking, I should be pulling a specific revision, but I happen to know that there's no need to (we're cloning the only head) in this case. At this point, we now need to pull in the mozilla-central changes before we pull in comm-central. Order is key; hg convert will only apply the graft points when converting the child changeset (which it does but once), and it needs the parents to exist before it can do that. We also need to ensure that the mozilla-central graft point is included before continuing, so we do that, and then pull mozilla-central:

CC_CVS_BASE=$(hg log -R $HGCVS -r 'tip' --template '{node}')
CC_CVS_BASE=$(grep $CC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)
MC_CVS_BASE=$(hg log -R $HGCVS -r 'gitnode(215f52d06f4260fdcca797eebd78266524ea3d2c)' --template '{node}')
MC_CVS_BASE=$(grep $MC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)

# Okay, now we need to build the map of revisions.
cat >$WORKDIR/convert-revmap.txt <<EOF
e4f4569d451a5e0d12a6aa33ebd916f979dd8faa $CC_CVS_BASE # Thunderbird / Suite
7c0bfdcda6731e77303f3c47b01736aaa93d5534 d4b728dc9da418f8d5601ed6735e9a00ac963c4e, $CC_CVS_BASE # Calendar
9b2a99adc05e53cd4010de512f50118594756650 $MC_CVS_BASE # Mozilla graft point
ee719a0502491fc663bda942dcfc52c0825938d3 78b3d6c649f71eff41fe3f486c6cc4f4b899fd35, $MC_EDITOR_IMPORT # Editor
8cdfed92867f885fda98664395236b7829947a1d 4b5da7e5d0680c6617ec743109e6efc88ca413da, e4e612fcae9d0e5181a5543ed17f705a83a3de71 # Chat
EOF

# Next, import mozilla-central revisions
for rev in $MC_MORK_IMPORT $MC_EDITOR_IMPORT $MC_SMIME_IMPORT; do
  hg convert $MC $OUTPUT -r $rev --splicemap $WORKDIR/convert-revmap.txt \
    --filemap $WORKDIR/convert-filemap.txt
done

Some notes about all of the revision ids in the script. The splicemap requires the full 40-character SHA ids; anything less and the thing complains. I also need to specify the parents of the revisions that deleted the code for the mozilla-central import, so if you go hunting for those revisions and are surprised that they don't remove the code in question, that's why.

I mentioned complications about the merges earlier. The Mork and S/MIME import codes here moved files, so that what was db/mdb in mozilla-central became db/mork. There's no support for causing the generated splice to record these as a move, so I have to manually construct those renamings:

# We need to execute a few hg move commands due to renamings.
pushd $OUTPUT
hg update -r $(grep $MC_MORK_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_MORK_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"db/mdb\", \"db/mork\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move Mork files' -d '2011-08-06 17:25:21 +0200'
MC_MORK_IMPORT=$(hg log -r tip --template '{node}')

hg update -r $(grep $MC_SMIME_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"security/manager/ssl\", \"mailnews/mime\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move S/MIME files' -d '2014-06-15 20:51:51 -0700'
MC_SMIME_IMPORT=$(hg log -r tip --template '{node}')
popd

# Echo the new move commands to the changeset conversion map.
cat >>$WORKDIR/convert-revmap.txt <<EOF
52efa9789800829c6f0ee6a005f83ed45a250396 abfd23d7c5042bc87502506c9f34c965fb9a09d1, $MC_MORK_IMPORT # Mork
50f5b5fc3f53c680dba4f237856e530e2097adfd 97253b3cca68f1c287eb5729647ba6f9a5dab08a, $MC_SMIME_IMPORT # S/MIME
EOF

Now that we have all of the graft points defined, and all of the external code ready, we can pull comm-central and do the conversion. That's not quite it, though—when we graft the S/MIME history to the original mozilla-central history, we have a small segment of abandoned converted history. A call to hg strip removes that.

# Now, import comm-central revisions that we need
hg convert $CC $OUTPUT --splicemap $WORKDIR/convert-revmap.txt
hg strip 2f69e0a3a05a

[1] I left out one of the graft points because I just didn't want to deal with it. I'll leave it as an exercise to the reader to figure out which one it was. Hint: it's the only one I didn't know about before I searched for the archive points [2].
[2] Since I wasn't sure I knew all of the graft points, I decided to try to comb through all of the changesets to figure out who imported code. It turns out that hg log -r 'adds("**")' narrows it down nicely (1667 changesets to look at instead of 17547), and using the {file_adds} template helps winnow it down more easily.

January 10, 2015 05:55 PM

January 03, 2015

Mike Conley

DocShell in a Nutshell – Part 3: Maturation (2005 – 2010)

Whoops

First off, an apology. I’ve fallen behind on these posts, and that’s not good – the iron has cooled, and I was taught to strike it while it was hot. I was hit with classic blogcrastination.

Secondly, another apology – I made a few errors in my last post, and I’d like to correct them:

  1. It’s come to my attention that I played a little fast and loose with the notions of “global history” and “session history”. They’re really two completely different things. Specifically, global history is what populates the AwesomeBar. Global history’s job is to remember every site you visit, regardless of browser window or tab. Session history is a different beast altogether – session history is the history inside the back-forward buttons. Every time you click on a link from one page, and travel to the next, you create a little nugget of session history. And when you click the back button, you move backwards in that session history. That’s the difference between the two – “like chalk and cheese”, as NeilAway said when he brought this to my attention.
  2. I also said that the docshell/ folder was created on Travis’s first landing on October 15th, 1998. This is not true – the docshell/ folder was created several months earlier, in this commit by “kipp”, dated July 18, 1998.

I’ve altered my last post to contain the above information, along with details on what I found in the time of that commit to Travis’s first landing. Maybe go back and give that a quick skim while I wait. Look for the string “correction” to see what I’ve changed.

I also got some confirmation from Travis himself over Twitter regarding my last post:

@mike_conley Looks like general right flow as far as 14 years ago memory can aid. :) Many context points surround…
@mike_conley 1) At that time, Mozilla was still largely in walls of Netscape, so many reviews/ alignment happened in person vs public docs.
@mike_conley 2) XPCOM ideas were new and many parts of system were straddling C++ objects and Interface models.
@mike_conley 3) XUL was also new and boundaries of what rendering belonged in browser shell vs. general rendering we’re [sic] being defined.
@mike_conley 4) JS access to XPCOM was also new driving rethinking of JS control vs embedding control.
@mike_conley There was a massive unwinding of the monolith and (re)defining of what it meant to build a browser inside a rendered chrome.

It’s cool to hear from the guy who really got the ball rolling here. The web is wonderful!

Finally, one last apology – this is a long-ass blog post. I’ve been working on it off and on for about 3 months, and it’s gotten pretty massive. Strap yourself into whatever chair you’re near, grab a thermos, cancel any appointments, and flip on your answering machine. This is going to be a long ride.

Oh come on, it’s not that bad, right? … right?

OK, let’s get back down to it. Where were we?

2005

A frame spoofing bug

Ah, yes – 2005 had just started. This was just a few weeks after a community driven effort put a full-page ad for Firefox in the New York Times. Only a month earlier, a New York Times article highlighted Firefox, and how it was starting to eat into Internet Explorer’s market share.

So what was going on in DocShell? Here are the bits I found most interesting. They’re kinda few and far between, since DocShell appears to have stabilized quite a bit by now. Mostly tiny bugfixes are landed, with the occasional interesting blip showing up. I guess this is a sign of a “mature” section of the codebase.

I found this commit on January 11th, 2005 pretty interesting. This patch fixes bug 103638 (and bug 273699 while it’s at it). What was going on was that if you had two Firefox windows open, both with <frameset>’s, where two <frames> had the same name attribute, it was possible for links clicked in one to sometimes open in the other. Youch! That’s a pretty serious security vulnerability. jst’s patch added a bunch of checks and smarter selection for link targets.

One of those new checks involved adding a new static function called CanAccessItem to nsDocShell.cpp, and having FindItemWithName (an nsDocShell instance method used to find some child nsIDocShellTreeItem with a particular name) take a new parameter of the “original requestor”, and ensuring that whichever nsIDocShellTreeItem we eventually landed on with the name that was requested passes the CanAccessItem test with the original requestor.

DocShell and session history

There are two commits, one on January 20th, 2005, and one on January 30th, 2005, both of which fix different bugs, but are interrelated and I want to talk about them for a second.

The first commit, for bug 277224, fixes a problem where if we change location to an anchor located within a document within a <script> tag, we stop loading the page content because the browser thinks we’re about to start loading a document at a different location. bz fixed the more common case of location change via setting document.location.href in bug 233963. Bug 277224 is interested in the case where document.location.href is modified with the .replace() method.

The solution that bz uses is to add new flags for nsIDocShellLoadInfo, which gives more power in how to stop loading a page. Specifically, it adds a LOAD_FLAGS_STOP_CONTENT flag which allows the caller to stop the rendering of content and all network activity (where the default was just to stop network activity). I believe what happens is that replace() causes an InternalLoad to kick off, and we need content rendering to be stopped in order for this new load to take over properly. That’s my reading on the situation, anyhow. If bz or anybody else examining that patch has another interpretation, please let me know!

So what about the commit on January 30th? Well that one also involves anchors. What was happening was that if we browsed to some page, and then clicked a link that scrolled us to an anchor in that page, clicking back would reload the entire document off the cache again, when we really just need to restore the old scroll position.

The patch to fix this basically detected the case where we were going back from an anchor to a non-anchor but had the same URL, and allowed a scroll in that case.

So how is this related to the commit for bug 277224? Well, what it shows is that at this time, DocShell was responsible for not just knowing how to load a document and subdocuments, but also about the user’s state in that document – specifically, their scroll position. It also more firmly establishes the link between DocShell and Session History – as DocShell traverses pages, it communicates with Session History to let it know about those transitions, and refers to it when traveling backwards and forwards, and when restoring state for those session history entries.

I just thought that was kinda neat to know.

Window pains

On February 8th, 2005, danm landed a patch to fix bug 278143, which was a bug that caused windows opened with window.open to open in a new window if they had no target specified. This wouldn’t normally be a problem, except that this could override a user preference to open those new windows in new tabs instead. So that was bad.

This was simply a matter of adding a check for the null case for the target window name in nsWindowWatcher. No big deal.

The reason I bring this code up, is because I find it familiar – I brushed by it somewhat when I was working on making it possible to open new windows for multi-process Firefox.

Semi-related (because of the “popup” nature of things), is a commit on February 23rd, 2005. This one is for bug 277574, which makes it so that modal HTTP auth prompts focus the tabs that spawn them. This patch works by making sure HTTP auth prompts fire the same DOMWillOpenModalDialog and DOMModalDialogClosed events that tabbrowser listens for to focus tabs.

The copy and the cache

On March 11, 2005, NeilAway landed a commit to add the “Copy Image” command item to the context menu. This was for bug 135300.

What’s interesting here is that “Copy Image Location” was already in the context menu, and in the bug, it looks like there’s some contention over whether or not to keep it. It seems that right around here, the solution they go with is to copy both the image and the image location to the clipboard, and mark each copy with the right “flavours”, so that if you were to paste to a program or field that accepted an “image” flavour, like Photoshop, you’d get the image. If you pasted to a program or field that accepted a “text” flavour, like Notepad, you’d get the image URL.

That’s the solution that was landed, anyhow. Notice that nowadays, Firefox has context menu items that allow users to copy just the image, and just the URL – so at some point, this approach was deemed wanting. I’ll keep my eye out to see if I can find out where that happened, and why. If anybody knows, please comment!

On April 28, 2005, roc landed a commit for bug 240276, which splits up something called “nsGfxScrollFrame” into two things – nsHTMLScrollFrame and nsXULScrollFrame. It seems like, up until this point, layout for both XUL and HTML scrollable frames were handled by the same code. This meant that we were using XUL box-model style layout for HTML, and XUL layout is… well… kind of tricky to work with. This patch helped to further distance our HTML rendering from our XUL rendering. As for how this affected DocShell, the patch removed some scroll calculations from DocShell, where they probably didn’t belong in the first place.

On May 4, 2005, Brian Ryner landed a patch which made it possible to move back and forward across web pages much more quickly. This was for bug 274784, and a key part of a project called “fastback”. When you view a web page, a DocShell is put in charge of requesting network activity to retrieve the document source, and then passing that source onto an appropriate nsIContentViewer. Up until Brian’s patch, it looks like every nsIContentViewer was just getting thrown away after browsing away from a page. His patch made it possible to store a certain number of these nsIContentViewers in the session history of the window, and then retrieve them when we browse back or forward to the associated page. This is a textbook trade-off between speed (the time to instantiate and initialize an nsIContentViewer) and space (stored nsIContentViewers consume memory). And it looks like the trade-off paid off! We still cache nsIContentViewers to this day. What’s interesting about Brian’s patch is that it exposes an about:config preference [1] for setting how many content viewers are allowed to be cached [2]. As DocShell seems to go hand in hand with session history, it’s not surprising that Brian’s patch touches DocShell code.

about:neterror arrives, Inner and Outer windows appear, and then Session History gets all snuggly with DocShell

On July 14th of 2005, bsmedberg landed a patch to add about:neterror pages, and close a privilege-escalation security vulnerability. Up until this point, network error pages were shown by browsing the DocShell to chrome URLs [3], but this allowed certain types of attacks which load iframes resolving to network error pages to potentially gain chrome privileges [4].

So instead of going to a chrome URL, the patch causes DocShell to internally load about:neterror [5]. The great news about this about:neterror page is that it has restricted permissions, so that security hole got plugged.

On July 30th, 2005, jst landed a patch to introduce the notion of inner and outer windows for bug 296639. Inner and outer windows has confused me for a while, but I think I’ve somewhat wrapped my head around it. This document helped.

The idea goes something like this:

The thing that is showing you web content can be considered the outer window – so that could be a browser tab, or an iframe, for example. The inner window is the content being displayed – it’s transitory, and goes away as you browse the web via the outer window.

The outer window then has a notion of all of the inner windows it might contain, and the inner window (via Javascript) gets a handle on the outer window via the window global.

So, for example, if you call window.open, the returned value is an outer window. Methods that you call on that outer window are then forwarded to the inner window.

I hope I got that right. I was originally trying to piece together the meaning of all of it by reading this WHATWG spec describing browsing context, and that was pretty slow going. The MDN page seemed much more clear.

Please comment with corrections if I got any of that wrong.

I’m not entirely sure, but based entirely on instinct and experience, I’m inclined to believe there are interesting security effects of this split. It seems to add a bit more of a membrane [6] between web content and the physical window.

Anyhow, jst’s change was pretty monumental. It’s for bug 296639 if you want to read up more about it.

A semi-related change was landed on August 12th by mrbkap, where the entire inner window is stashed in the bfcache (as opposed to what we were doing before, which looks like serialization and deserialization of window state). That was for bug 303267, and sounds related to the fast back and forward caching work that Brian Ryner was working on back in May.

On August 18th of 2005, radha landed the first in the series of patches to session history. Unfortunately, the commit message for this patch doesn’t have a bug number, so I had some trouble tracking down what this work is for. I think this work is for bug 230363, and is actually a copy of interfaces from xpfe/components/shistory/public to docshell/shistory/public. Like I mentioned earlier, DocShell and session history are closely linked, so I suppose it makes sense to put the session history code under docshell/. Later that day, another patch copies the nsISHistoryListener interfaces over as well. Finally, a patch landed to build those interfaces from their new locations, and removes xpfe/components/shistory from Makefile.in. The bug for that last change is bug 305090.

Last bit of 2005

On August 22 of 2005, mrbkap landed a patch that changed how content viewer caching worked. There’s a special page in Firefox called about:blank – if you go to that page right now, you’re going to get a blank page. Some people like to set that as their home page or new tab page, as it is (or should be) very lightweight to load. That page is also special because, from what I can tell, when a new tab or window opens, it’s initially pointing at about:blank before it goes to the requested destination. Before this patch, we used to cache that about:blank content viewer in session history. We didn’t put an entry in the back-forward cache for about:blank though [7], so that was a useless cache and a waste of memory. mrbkap’s patch made DocShell check to see if the page it was traveling to was going to re-use the current inner window, and if so, it’d skip caching it. Memory win!

That was the last thing I found interesting in 2005. On to 2006!

Preferences and threads…

On February 7th, 2006, bz landed a patch that made it possible for embedders to override where popup windows get opened.

There are preferences in Firefox that allow you to tweak how web content is able to open new windows [8]. Those preferences are browser.link.open_newwindow and browser.link.open_newwindow.restriction. If a page is attempting to open a new window, these preferences allow a user to control what actually occurs. The default (in most cases) is to open a new tab instead – but these preferences allow you to open that new window, or to open the content in the same window that the link is executed in. These are the kind of tweaks that power-users love.
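
For illustration, here’s roughly how those preferences would look in a user.js file. The values below reflect my understanding of the behaviour at the time (3 diverts new windows into tabs, and restriction 2 leaves windows opened with explicit size/position features alone), so treat them as an assumption rather than a statement about the patch:

// Hypothetical user.js snippet; the values are my understanding, not taken
// from the patch itself.
user_pref("browser.link.open_newwindow", 3);             // divert new windows into tabs
user_pref("browser.link.open_newwindow.restriction", 2); // ...except windows opened with size/position features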

Up until this point, only Firefox had these tweaking capabilities. bz’s patch moved that tweaking logic “up the chain”, so to speak, which means that applications that embedded Gecko could also be tweaked like this. Pretty handy.

For the Gecko hackers reading this, this patch also introduced the nsIWindowProvider interface [9].

On May 10th, 2006, “darin” landed a patch for bug 326273 that put the nsIThreadManager interface and implementation into the tree. It’s a big commit, and affected many parts of the codebase. nsIThreadManager is, not surprisingly, used to implement multi-threading and thread manipulation in Gecko. From my look at the patch, it looks like it replaces something called nsIEventQueue / nsIEventQueueService. It looks like Gecko already had some facility for multi-threading [10], but nsIThreadManager looks like a different model for multi-threading.

For DocShell, this change meant modifying the way that restoring PresShells from history would work. Before, DocShell had a RestorePresentationEvent that extended PLEvent, which allowed it to be posted to an nsIEventQueue. Now, instead, we define an inner class that implements nsRunnable [11], and also define a weak pointer to that runnable on a DocShell.

So the way things would go is this: DocShell::RestorePresentation would get called, and this would cancel any pending RestorePresentationRunnable that the DocShell is weak-pointing to. Next, we’d instantiate a new RestorePresentationRunnable that we’d then dispatch to the main thread. This isn’t really different to what we were doing before, but it makes use of the nsIThreadManager and nsRunnable class instead of nsIEventQueue and nsIEventQueueService.

What’s interesting about this patch, DocShell-wise, is that it shows the usage of FavorPerformanceHint, which looks like a way of trading-off UI interactivity with page-to-screen time. Basically, it looks like the FavorPerformanceHint is used when restoring PresShell’s to tell the nsIAppShell, “hey – we want you to favor native events over other events for a small pocket of time so we can get this stuff to the screen ASAP”. If I’m interpreting that right, it’s a tradeoff between total time to execute and responsiveness here. “Do you want it fast, or do you want it smooth?”.

I was probably wrong about the name

In one of my past posts, I made some guesses about why DocShell was called DocShell. I thought:

I think nsDocShell was given the “shell” monicker because it did the job of taking over most of nsWebShell’s duties. However, since nsWebBrowser was now the touch-point between the embedder and embedee… maybe shell makes less sense. I wonder if we missed an opportunity to name nsDocShell something better.

But now that I look at nsIAppShell, and nsIDocShell, and nsIPresShell… I think I’m starting to understand. A while back, when I first started planning these posts, I asked blassey why he thought nsIDocShell was named the way it was, and he said he thought it might be related to the notion of a command shell – like a terminal input. From my understanding, a shell is a command interface with which one can manipulate and control something pretty complex – like the file-system or processes of a computer. I think blassey is right – I think that’s the “Shell” in nsIDocShell. I think the idea is that this interface would be the one to control and manipulate the process of loading and displaying a document. It seems obvious now, but it sure wasn’t when I started looking into this stuff.

DOM Storage (session and global), KungFuDeathGrip, friendlier search…

On May 19th, 2006, jst picked up, finished, and landed a patch originally by Enn that implemented DOM Storage for bug 335540.

This patch adds two new methods to nsIDocShell – getSessionStorageForDomain and addSessionStorage. The first method is accessed in a number of cases, but most importantly when some caller reads sessionStorage or globalStorage off of the window object [12].

The relationship between nsGlobalWindow and nsDocShell is brought to my attention with this patch. Here’s a fragment from an old chat I had with Ms2ger, smaug and bz, which started when I asked Ms2ger what he’d rename DocShell to.

14:11 (Ms2ger) mconley, I would call it WindowProxy :)
14:12 (smaug) outer window? yes, WindowProxy please
14:12 (mconley) Ms2ger: wait, outer window = docshell currently?
14:12 (khuey) what are we doing with WindowProxy?
14:13 (Ms2ger) mconley, well, no, there’s nsDocShell and nsGlobalWindow (with IsOuterWindow() true)
14:13 (Ms2ger) mconley, those are pretty much isomorphic
14:13 (mconley) I see
14:13 (bz) nsDocShell and outer nsGlobalWindow are in a 1-1 relationship
14:14 (bz) The fact that they are two separate objects is sort of a historical accident that we may want to rectify sometime
14:14 (mconley) this sounds like another post to write – how nsDocShell and nsGlobalWindow are related…

So I think nsGlobalWindow (instances of which can either be “inner” or “outer”), when IsOuterWindow() is true, works in tandem with nsDocShell to “be” the outer window. That’s really imprecise, hand-wavey language. I’ll probably need to tighten this up in a follow-up post once somebody reads this and gives me better words to describe things [13].

On May 24, 2006, smaug landed a patch to fix bug 336978. Bug 336978 was a crash caused by loading the following code in an iframe:

<html>
<head></head>
<body>
  <script>
    window.addEventListener("pagehide", doe, true);
    function doe(e) {
      var x = parent.document.getElementsByTagName('iframe')[0];
      x.parentNode.removeChild(x);
    }
    setTimeout(doe2,500);
    function doe2() {
      window.location = 'about:blank';
    }
  </script>
</body>
</html>

What this code does is wait 500ms, and then change the location to about:blank. Changing the location causes the pagehide event to fire while we’re unloading the original page, and when we hear it, we get the host of the iframe to remove the iframe from itself.

smaug’s solution to this bug is for nsDocShell to hold a reference via an nsCOMPtr to the nsIContentViewer for the document while the pagehide event is fired. This ensures that the nsIContentViewer doesn’t get destructed before we’re truly done with it. The name we give this nsCOMPtr is “kungFuDeathGrip”. This isn’t the only place where some hold on an object is maintained with a variable called kungFuDeathGrip – check out dxr for some more uses.

I’d seen kungFuDeathGrip over the years, and I never looked closely at what it was doing. I always thought kungFuDeathGrip was some magical global function that destroyed things unequivocally, but on closer inspection, I’m pretty sure it’s really just a way of saying “this variable’s sole purpose is to hold a reference to this thing until I’m done with it.”

I think the phrase “kung fu” distracted me. I thought it did this:

[Image: Black Dynamite layin' the smack down]

Woooooo!

But it’s really more like this:

[Image: Spock taking out Kirk with the Vulcan nerve pinch]

Kkkg….*gurgle*…ngahh….

On June 15th, 2006, “brettw” landed a patch for bug 245597 to make it so that anything that gets put into the AwesomeBar that isn’t parse-able as a URI automatically turns into a keyword search. That’s great! This made both the search input and the AwesomeBar useful for more users. This change occurred in docshell/base/nsDefaultURIFixup.cpp, which is, as I understand it, the central location for code that turns erroneous URIs into what the user probably intended.

nsIMutationObserver, some new about: pages…

On July 2nd, 2006, Jonas Sicking added nsIMutationObserver to the tree for bug 342062, making it possible to observe changes to the DOM within a subtree. It’s a pretty big patch, but it looks like a good chunk of it is just swapping in usage of nsIMutationObserver to replace old usage of nsIDocumentObserver (which supplied the same observations, but for an entire document instead of a subtree). Note that it’d still be a few years before DOM3 Mutation Events would be exposed for web developers to use, and after a few more years, those events were deprecated in favour of the Mutation Observer API.
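
For context, this is roughly what the eventual web-facing Mutation Observer API looks like for web developers (the standard DOM API, not the internal C++ nsIMutationObserver interface this patch added):

// Watch a subtree for changes, much like the internal observer does for Gecko.
var observer = new MutationObserver(function(mutations) {
  mutations.forEach(function(mutation) {
    console.log(mutation.type, "changed on", mutation.target.nodeName);
  });
});

observer.observe(document.body, {
  childList: true,          // additions/removals of child nodes
  attributes: true,         // attribute changes
  subtree: true,            // observe the whole subtree, not just document.body
  attributeOldValue: true   // hand back the previous attribute value too
});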

On September 15th, 2006, several new Gecko-wide about: pages landed, which means they got put into the redirection map in docshell/base/nsAboutRedirector.cpp. These pages were about:buildconfig (bug 140034), about:about (bug 56061), and about:license (bug 256945). That same day, bz landed a patch to make it so that new about: pages didn’t have to have special rules hardcoded into nsScriptSecurityManager::CanExecuteScripts to execute script even if the user has script disabled. Instead, the nsAboutRedirector mapping was extended to allow a boolean for indicating that an about: page required script execution.

Simplifying DocShell, and then some spoofing and malware protection

The next interesting thing (according to me, anyhow) didn’t occur until the following May 6th, 2007. That day, bz landed a patch for bug 377303 which simplified the structure of things inside the DocShell tree.

Up until that point, both nsIDocShellTreeItem and nsIDocShellTreeNode had existed as interfaces for interacting with nodes within a DocShell tree. I’ll quote myself from my previous post:

The (somewhat nebulous) distinction of DocShell “treeItems” and “treeNodes” is made. At this point, the difference between the two is that nsIDocShellTreeItem must be implemented by anything that wishes to be a leaf or middle node of the DocShell tree. The interface itself provides accessors to various attributes on the tree item. nsIDocShellTreeNode, on the other hand, is for manipulating one of these items in the tree – for example, finding, adding or removing children. I’m not entirely sure this distinction is useful, but there you have it.

It looks like enough was finally enough. bz didn’t go so far as to fully merge the two interfaces (though he makes a note in his patch about doing so), but instead made the (arguably more complex) nsIDocShellTreeItem interface inherit from nsIDocShellTreeNode [14].

Later, on May 17th, 2007, Mats Palmgren landed a patch for bug 376562 to remove a childOffset attribute from nsIDocShellTreeItem, and instead move a setter for the childOffset to the nsIDocShell interface instead.

Reading this bug comment as well as one of Mats’ comments in the patch, it sounds as if childOffset never really worked as advertised, and was a bit of a foot-gun [15].

On June 14th, 2007, bz landed a patch for bug 371360, which prevents onUnload handlers from starting any page loads. Before this, it seems that it was possible for a page to do something like this:

<html>
  <body onunload="location.href = 'http://www.somesite.com';">
    <a href="http://slashdot.org/">http://slashdot.org/</a>
  </body>
</html>

With the result being that you could (potentially) phish a user. For example, suppose you’re a member of MySafeBank, which has a site at mysafebank.com. Suppose you’re at my seemingly innocent site totallyevil.com, and also suppose that I’ve registered a domain at rnysafebank.com (that’s an r and an n, which, if you’re not paying attention, look pretty close to a m). If you’re at my site, and I notice that you’re trying to head to mysafebank.com, I could redirect you to rnysafebank.com, which has a very similar user interface and favicon. Yadda yadda yadda, your bank info is now mine.

So bz stopped that one in its tracks by just preventing a DocShell from attempting any kind of load if we’re in the middle of an unload.

On July 3rd, 2007, johnath [16] landed a patch for bug 380932 to add a new mode for about:neterror for pages suspected of serving up malware.

If you haven’t seen that page before, count yourself lucky – you’ve been surfing in safe places! This is what the page looks like (or, used to look like, anyhow):

[Image: Minefield, an old version of Firefox, reporting a Suspected Attack Site]

johnath’s patch allowed an about:neterror page to have a specific CSS class associated with it as part of its URL. This allowed for the dramatic styling in the image above [17].

showModalDialog

showModalDialog was a non-standard function that Microsoft introduced in Internet Explorer 4 [18]. This allows a web page to create a modal dialog that contains web content. jst landed a patch on July 26th, 2007 to implement it in Firefox as part of bug 194404. This patch made it possible for a DocShell to have a modal dialog be its parent.

showModalDialog has since been marked as deprecated on MDN, and Google Chrome has announced that it will no longer be supported after May of 2015. Firefox will support it until sometime after Firefox 39 on the release channel19.

about:crashes, Larry, and tab tearing

There’s a long gap in time here where nothing really interesting happens under docshell/.

Finally, on January 24th, 2008, Mossop landed a patch for bug 411490 that exposes about:crashes as a handy way of getting at the list of crash reports that have been collected.

As about:crashes is a Gecko-wide about: page, this meant once again adding an entry to the docshell/base/nsAboutRedirector.cpp map, as had been done with about:buildconfig and about:about.

On April 28th, 2008, gavin landed a patch originally by ehsan that adds a friendlier set of icons for reporting SSL errors for bug 430904.

That icon is Larry. Have you met Larry? This is Larry:

Larry, the SSL dude.

This is Larry.

Larry is the name for a series of icons that were developed to describe how secure your communications are with a particular site. You might recognize him from the airport, because he looks a lot like a customs agent or border patrol.

You can read up on Larry here on johnath’s blog.

On August 7th, 2008, bz landed a patch for bug 113934 to lay the foundation for letting users drag tabs out from a window, or drag tabs between windows. This introduced a new method to nsIFrameLoader, “swapFrameLoaders”. This method is the real key to moving tabs between windows – each <xul:browser> implements nsIFrameLoaderOwner, and the nsIFrameLoader is (yet another) thing that can load web content. swapFrameLoaders is essentially a brain transplant between two <xul:browser>s. You can see the real guts of the brain transplant in this method of the patch.
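
From chrome JavaScript, the transplant looks roughly like this (a sketch only – the exact calling convention has shifted over the years, so treat the details as illustrative):

// Two <xul:browser> elements, one in each window.
let sourceBrowser = sourceWindow.gBrowser.selectedBrowser;
let targetBrowser = targetWindow.gBrowser.selectedBrowser;

// Each <xul:browser> implements nsIFrameLoaderOwner; its frameLoader owns
// the docshell and the loaded content. Swapping frameloaders moves the
// page (and its session history) into the other browser without a reload.
sourceBrowser.frameLoader.swapFrameLoaders(targetBrowser);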

On January 13th, 2009, in a semi-related patch, bz landed code for bug 449780 to flush the bfcache20 when swapping frameloaders. Apparently, we were storing information in the bfcache that was simply incorrect after a frameloader swap. The best way to avoid internal confusion in such a case was to just invalidate the cache.

Big gaps…

Lots of big gaps between the next few changes.

On March 18th, 2009, Honza Bambas landed a patch for bug 422526 to implement window.localStorage. localStorage was a replacement for globalStorage21 that persisted across browser restarts (unlike sessionStorage).

Note that both localStorage and sessionStorage were synchronous storage APIs. It’d take until around June 24th of 2010 before an asynchronous storage mechanism became available.
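
If you’ve never used them, both look like this from page script – simple, synchronous key/value access scoped to the origin:

localStorage.setItem("theme", "dark");     // survives a browser restart
sessionStorage.setItem("draft", "hello");  // lives only as long as the session

localStorage.getItem("theme");    // "dark", even after restarting the browser
sessionStorage.getItem("draft");  // "hello", gone once the session ends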

On May 7th, 2009, bz landed a patch for bug 490957 to finally get rid of nsWebShell.cpp. If you recall from my earlier blog post, that was one of Travis Bogard’s goals at the start of this whole adventure. bz’s patch essentially folds the functionality of nsWebShell into nsDocShell. The webshell/ folder remained, but just contained interfaces.

Curiously, a good chunk of nsWebShell’s functionality seemed to revolve around anchor pings, a massively unpopular “feature” that allows a website to get your browser to send a request every time you click on a link. Thankfully, this “feature” is disabled by default in Firefox22. Here’s Jorge Villalobos’s post on anchor pings. Correction (Jan 4th, 2015) - I’ve since changed my tune about anchor pings. See these three comments.

On June 29th, 2009, dbolter landed a patch for bug 467144 so that nsIMutationObservers, when they observe an attribute being changed, also get a copy of the old attribute value as well as the new one. To be specific, it adds an “AttributeWillChange” callback that includes the old value and fires before the “AttributeChanged” callback.

A day later, bsmedberg landed a massive patch to implement remote tabs. There’s no bug number in the commit message, but this is clearly part of the Electrolysis efforts that were just starting up around this time. Remote tabs are browsers whose content runs in a separate process, which is the overall goal of Electrolysis, and (possibly unbeknownst to bsmedberg at the time) a foundational piece for Fennec (Firefox for Android)23.

On October 3rd, 2009, vvuk landed about:memory for bug 515354, a key piece of the war against high-memory consumption in Firefox (a.k.a. MemShrink). This is very similar to the about:crashes page that Mossop landed back in 2008.

And finally…

On January 7th, 2010, smaug landed a patch for bug 534226 to remove support for multiple PresShells. PresShell stands for “Presentation Shell”, and as I understand it, is the primary interface to the “frame tree”24.

It looks like, up until this point, Gecko had the ability to have multiple frame trees per content tree. I’m not entirely sure what the point of that was, but the capability was there. smaug’s patch simplifies everything by making sure a document has only a single, primary PresShell. This removes a lot of iteration and management code for those multiple PresShells, which is nice.

And last, but not least, on June 30th, 2010, Benjamin Stover landed a patch for bug 556400 which made it so that visits to webpages are recorded asynchronously in Places. It looks like this patch takes I/O off the main thread, so it gets a big thumbs-up from me.

Essentially, this patch adds new asynchronous write methods to the History service, and then makes nsDocShell use those methods on webpage visits. nsDocShell falls back to the synchronous methods of nsIGlobalHistory2 if, for some reason, it can’t get at the History service and its asynchronous methods.
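
For a feel of the asynchronous side, here’s roughly what recording a visit looks like from chrome JavaScript using mozIAsyncHistory – a sketch based on the Places API of that era, so treat the field and callback names as illustrative rather than authoritative:

Components.utils.import("resource://gre/modules/Services.jsm");

let asyncHistory = Components.classes["@mozilla.org/browser/history;1"]
                     .getService(Components.interfaces.mozIAsyncHistory);

asyncHistory.updatePlaces({
  uri: Services.io.newURI("https://example.org/", null, null),
  visits: [{
    transitionType: Components.interfaces.nsINavHistoryService.TRANSITION_LINK,
    visitDate: Date.now() * 1000,  // PRTime, in microseconds
  }],
}, {
  // The actual database writes happen off the main thread.
  handleError: function(aResultCode, aPlaceInfo) {},
  handleResult: function(aPlaceInfo) {},
  handleCompletion: function() {},
});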

Phew!

Did you make it? Are you still with me? I know it might feel like this:

qwop

everyone is a winner

but I think we’re making real progress here. I think we’re learning important stuff about the history of Firefox, and changes that have occurred in some of its core functionality over time.

So to sum up, that, to me, was the most interesting stuff to happen in and around docshell/ from 2005-2010. There might have been other neat stuff in there, but it didn’t catch my eye when I was browsing commit messages.

There’s still much to do – I have to look at commits from 2011 to 2014. After that, I’m planning on doing a line-by-line code review / walkthrough of nsDocShell.cpp, and then I’d like to try to summarize my findings and any recommendations I’ve put together from my time studying this stuff.

Hold tight!


  1. This pref was browser.sessionhistory.max_viewers, if you’re interested – though that preference appears to have been superseded by browser.sessionhistory.max_total_viewers. The default value for that pref is -1, meaning to adjust the number of allowed cached viewers based on how much memory is available. If you’re looking to reduce how much memory Firefox consumes, it’s possible that setting this to some low integer will allow you to reverse that trade-off between space and speed. 

  2. I assume per session history 

  3. You wouldn’t notice that you were at a chrome URL though, because DocShell loads this URL internally, while pretending to be at the URL that caused the error. The end result is the user going to http://www.sitethatcausesnetworkerror.com still sees that URL in their AwesomeBar, despite the fact that their web content shows the appropriate network error page hosted at a chrome URL. 

  4. “chrome privileges” means that a web page now essentially has the same permissions that Firefox, the program on your computer, has – meaning it can potentially read and write files, and communicate with anybody on your network. Yikes! 

  5. You can visit this page in Firefox right now and see a generic network error. It’s showing a generic error because it hasn’t the foggiest idea how you’ve arrived at about:neterror, since it wasn’t passed any error information. 

  6. Or the infrastructure to create such a membrane. 

  7. So you couldn’t go back to about:blank in cases I described, where a tab or window was initialized at about:blank before going to a new page. 

  8. I’m actually quite familiar with this stuff because I worked on opening new windows for Electrolysis not too long ago. 

  9. From the header of that interface:

    /**
     * The nsIWindowProvider interface exists so that the window watcher's default
     * behavior of opening a new window can be easly modified.  When the window
     * watcher needs to open a new window, it will first check with the
     * nsIWindowProvider it gets from the parent window.  If there is no provider
     * or the provider does not provide a window, the window watcher will proceed
     * to actually open a new window.
     */

     

  10. The nsIEventQueueService service mentions that it is used to manage event queues for a particular thread, and makes use of nsIThread – so multi-threading must have already been a thing. 

  11. Still called RestorePresentationEvent though… strange that the opportunity wasn’t taken to rename this to RestorePresentationRunnable. 

  12. Those two properties are part of a new nsIDOMStorageWindow interface that nsGlobalWindow implements after this patch. That interface is later removed in bug 670331, and the two accessors are moved directly into nsIDOMWindow instead. 

  13. I have a feeling the real answer lies somewhere in the comments in bug 296639

  14. It wasn’t immediately clear to me why the inheritance didn’t go the other way around – especially since bz himself had a comment in nsIDocShellTreeNode suggesting that arrangement. Look at his first comment in the bug though:

    This would allow consumers to start using just nsIDocShellTreeItem in their code, until we can just merge nsIDocShellTreeNode into nsIDocShellTreeItem.

    Basically, it sounds like nsIDocShellTreeNode was being deprecated, and that callers who used to use nsIDocShellTreeNode should migrate to use nsIDocShellTreeItem instead (which inherits nsIDocShellTreeNode’s methods). Then the two interfaces could be merged. 

  15. Mats’ warning was removed on August 17, 2010 as part of bug 462076. It looks like SetChildOffset is still only ever used when adding the child though, so it’s still probably valid. 

  16. The same johnath who is currently the VP of Firefox! 

  17. Later on in August, dcamp would land this patch as part of bug 384941 which prevents suspected malware sites from even loading, instead of just not displaying them. 

  18. To quote Douglas Adams:

    This has made a lot of people very angry and has been widely regarded as a bad move.

     

  19. And the 38 ESR will continue to be supported until mid-2016. If you maintain a site that uses showModalDialog, you’d best get rid of it. 

  20. The bfcache, or “back-forward cache” is a collection of “frozen” pages that are stored in memory for fast back/forward action – see this page for more detail. 

  21. globalStorage, I believe, allowed all web properties read and write access to the same storage – so clearly it was a good idea to replace it. globalStorage was removed on October 9th, 2011 by Honza as part of bug 687579 

  22. But according to this, it is enabled by default in both Chrome and Opera. Lovely

  23. Remote tabs are also hugely important for Boot2Gecko / Firefox OS

  24. From this document:

    …the frame tree…is the visual representation of the document. Each frame can be thought of as a rectangular area on the page. The content nodes for XML elements are usually associated with one or more frames which display the element — one frame if the element is rectangular, more if the element is something more complex (like a chunk of bolded text that happens to be word-wrapped)…

    And from the nsIPresShell.h header:

    /**
    * Presentation shell interface. Presentation shells are the
    * controlling point for managing the presentation of a document. The
    * presentation shell holds a live reference to the document, the
    * presentation context, the style manager, the style set and the root
    * frame. <p>
    *
    * When this object is Release’d, it will release the document, the
    * presentation context, the style manager, the style set and the root
    * frame.

     

January 03, 2015 08:34 PM

December 21, 2014

Rumbling Edge - Thunderbird

2014-12-21 Calendar builds

Common (excluding Website bugs)-specific: (37)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

December 21, 2014 02:10 PM

2014-12-21 Thunderbird comm-central builds

Thunderbird-specific: (43)

MailNews Core-specific: (28)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

December 21, 2014 02:08 PM

November 25, 2014

Thunderbird Blog

Thunderbird Reorganizes at 2014 Toronto Summit

In October 2014, 22 active contributors to Thunderbird gathered at the Mozilla office in Toronto to discuss the status of Thunderbird, and plan for the future.

Toronto Contributors at 2014 Toronto Summit

Thunderbird contributors gather in Toronto to plan the future.

As background, Mitchell Baker, Chair of the Mozilla Foundation, posted in July 2012 that Mozilla would significantly reduce paid staff dedicated to Thunderbird, and asked community volunteers to move Thunderbird forward. Mozilla at that time committed several paid staff to maintain Thunderbird, each working part-time on Thunderbird but with a main commitment to other Mozilla projects. The staff commitment in total was approximately one full-time equivalent.

Over the last two years, those individuals had slowly reduced their commitment to Thunderbird, yet the formal leadership of Thunderbird remained with these staff. By 2014 Thunderbird had reached the point where nobody was effectively in charge, and it was difficult to make important decisions. By gathering the key active contributors in one place, we were able to make real decisions, plan our future governance, and move to complete the transition from being staff-led to community-led.

At the Summit, we made a number of key decisions:

There is a lot of new energy in Thunderbird since the Summit, a number of people are stepping forward to take on some critical roles, and we are looking forward to a great next release. More help is always welcome though!

November 25, 2014 06:15 PM

November 19, 2014

Meeting Notes

Thunderbird: 2014-11-18

   Thunderbird meeting notes 2014-11-18

Previous meetings: https://wiki.mozilla.org/Thunderbird/StatusMeetings#Meeting_Notes

Attendees

(partial list)
rkent
florian
wsmwk
jcranmer
mmelin
aceman
theone
clokep
roland

Action items from last meetings

  • wsmwk: Get the Thunderbird 38 bugzilla flag created
    • not heard from standard8

Critical Issues

  • Several critical bugs keeping us from moving from 24 to 31. Please leave these here until they’re confirmed fixed.
    • Frequent POP3 crasher bug 902158 On current aurora and beta?
      • we won’t have data/insight till next week, assuming the relevant builds are built. crash-stats for nightly builds is not useful – direct user testing of the fixed build is required
    • Self-signed certs are having difficulty bug 1036338 SOLVED! REOPENED according to the bug?
    • Auto-complete bugs? bug 1045753 waiting for esr approval; bug 1043310 waiting for review, still
    • Auto-complete improvements (bug 970456, bug 1091675, bug 118624) – some of those could go into esr31
    • bug 1045958 TB 31 jp does not display folder pane with OS X

Why are we throttled? Because 1) we’re waiting for TB 31.3, 2) we’re still hoping for bug 1045958, and 3) we need the auto-complete bug approved. Dec 1/2 is now the release date.

Upcoming

Round Table

wsmwk

  • got Penelope (pre-OSE eudora) removed from AMO
  • shepherding bug 1045958 TB 31 jp does not display folder pane with OS X
  • secured potential release drivers
  • “Get Involved” is broken for Thunderbird. TB isn’t offered at https://www.mozilla.org/en-US/contribute/signup/, and it’s unclear who in Thunderbird gets notified. In contact with Larissa Shapiro.

jcranmer

  • Looking into eliminating the comm-central build system bug 1099430
  • Trying to see if I can get some whines set up to listen for potential TB compat changes (e.g., checking the addon-compat keyword)
  • We have telemetry on Nightly for the first time since TB 28!
  • Irving is working through reviews of bug 998191 \o/

clokep

  • Google Talk does not work on comm-central due to disabling of RC4 SSL ciphers, see bug 1092701
    • Some of the security guys have contacted Googlers, apparently an internal ticket was opened with the XMPP team
  • Filed a bug (with a patch) to have firefoxtree work with comm-central (bug 1100692); this will automatically add a tag (called “comm”) to the tip of comm-central, and is useful if you’re playing with bookmarks instead of mq
  • Still haven’t fully been able to get Additional Chat Protocols to compile…(Windows still failing)
  • WebRTC work is waiting for reviews from Florian

mkmelin

  • lots of reviews, queue almost empty, yay!
  • finally landed bug 970456
  • bug 1074125 fixed, plan to handle some of the m-c encoding removals next
  • bug 1074793 fixed, for tb we need to set a pref for it to take effect (bug 1095893 awaiting review)

aceman

  • revived effort on 4GB+ folders together with rkent
  • landed a rewrite of the attachment reminder engine: bug 938829 (NOT for ESR). Please mark regressions against that bug. First one is bug 1099866.

Support team

  • [roland] working on Thunderbird profile article because a new volunteer contributor from the SUMO buddy program rewrote it! will review changes!

Action Items

  • wsmwk: Thunderbird start page for anniversary, with localizations
  • wsmwk: Get Involved, get the “Thunderbird path” operating

November 19, 2014 04:00 AM

November 16, 2014

Rumbling Edge - Thunderbird

2014-11-16 Calendar builds

Common (excluding Website bugs)-specific: (11)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

November 16, 2014 05:20 PM

2014-11-16 Thunderbird comm-central builds

Thunderbird-specific: (58)

MailNews Core-specific: (37)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

November 16, 2014 05:18 PM

October 19, 2014

Rumbling Edge - Thunderbird

2014-10-17 Calendar builds

Common (excluding Website bugs)-specific: (2)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

October 19, 2014 09:34 AM

2014-10-17 Thunderbird comm-central builds

Thunderbird-specific: (21)

MailNews Core-specific: (11)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

October 19, 2014 09:33 AM

October 15, 2014

Kent James

Thunderbird Summit in Toronto to Plan a Viable Future

On Wednesday, October 15 through Saturday, October 19, 2014, the Thunderbird core contributors (about 20 people in total) are gathering at the Mozilla offices in Toronto, Ontario for a key summit to plan a viable future for Thunderbird. The first two days are project work days, but on Friday, October 18 we will be meeting all day as a group to discuss how we can overcome various obstacles that threaten the continuing viability of Thunderbird as a project. This is an open Summit for all interested parties. Remote participation or viewing of Friday group sessions is possible, beginning at 9:30 AM EDT (6:30 AM Pacific Daylight Time)  using the same channels as the regular weekly Thunderbird status meetings.

Video Instructions: See https://wiki.mozilla.org/Thunderbird/StatusMeetings for details.

Overall Summit Description and Agenda: See https://wiki.mozilla.org/Thunderbird:Summit_2014

Feel free to join in if you are interested in the future of Thunderbird.

October 15, 2014 04:17 AM

October 02, 2014

Philipp Kewisch

Monitor all http(s) network requests using the Mozilla Platform

In an xpcshell test, I recently needed a way to monitor all network requests and access both request and response data so I could save them for later use. This required a little bit of digging in Mozilla’s devtools code, so I thought I’d write a short blog post about it.

This code will be used in a testcase that ensures that calendar providers in Lightning function properly. In the case of the CalDAV provider, we would need to access a real server for testing. We can’t just set up a few servers and use them for testing; it would end in an unreasonable amount of server maintenance. Given that non-local connections are not allowed when running tests on the Mozilla build infrastructure, it wouldn’t work anyway. The solution is to create a fake server that is able to replay the requests in the same way. Instead of manually making the requests and figuring out how the server replies, we can use this code to quickly collect all the requests we need.

Without further delay, here is the code you have been waiting for:
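
As a minimal sketch of one way to capture request URLs and response bodies in an xpcshell test – my own illustration using the observer service and nsITraceableChannel, not necessarily the exact approach from the post:

const { classes: Cc, interfaces: Ci, utils: Cu } = Components;
Cu.import("resource://gre/modules/Services.jsm");

// A stream listener that copies each data chunk before handing it on to
// the listener that was originally attached to the channel.
function TracingListener() {
  this.chunks = [];
  this.originalListener = null;
}
TracingListener.prototype = {
  onStartRequest: function(aRequest, aContext) {
    this.originalListener.onStartRequest(aRequest, aContext);
  },
  onDataAvailable: function(aRequest, aContext, aStream, aOffset, aCount) {
    // Read the bytes out of the stream so we can keep a copy.
    let binary = Cc["@mozilla.org/binaryinputstream;1"]
                   .createInstance(Ci.nsIBinaryInputStream);
    binary.setInputStream(aStream);
    let data = binary.readBytes(aCount);
    this.chunks.push(data);

    // Replay the bytes so the original consumer still receives them.
    let replay = Cc["@mozilla.org/io/string-input-stream;1"]
                   .createInstance(Ci.nsIStringInputStream);
    replay.setData(data, aCount);
    this.originalListener.onDataAvailable(aRequest, aContext, replay,
                                          aOffset, aCount);
  },
  onStopRequest: function(aRequest, aContext, aStatus) {
    let channel = aRequest.QueryInterface(Ci.nsIHttpChannel);
    dump("Captured " + this.chunks.join("").length + " bytes from " +
         channel.URI.spec + "\n");
    this.originalListener.onStopRequest(aRequest, aContext, aStatus);
  },
};

let observer = {
  observe: function(aSubject, aTopic, aData) {
    let channel = aSubject.QueryInterface(Ci.nsITraceableChannel);
    let listener = new TracingListener();
    // setNewListener returns the previously installed listener.
    listener.originalListener = channel.setNewListener(listener);
  },
};

// Fires once per HTTP(S) response, before the content consumer sees it.
Services.obs.addObserver(observer, "http-on-examine-response", false);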


Tagged: Mozilla, network, xpcshell

October 02, 2014 02:38 PM

October 01, 2014

Calendar

Calconnect XXXI Interop Testing

Thanks to the wonderful folks at Linagora, I was able to spend the last three days at the Calconnect XXXI meeting in Bedford, England. The goal of this meeting is to get server and client vendors together in a room both for ad-hoc testing and discussions on calendaring standards.

Before arriving, I had set myself the goal of going through a big list of bugs that had been sitting around in our CalDAV component to see if they could be resolved. It turns out that I was able to close 48 of the 76 bugs I had picked out:

report-2014-10-01

A good amount of the bugs I’ve resolved were sitting and waiting for one of our contributors to reproduce them with a specific server. This is often a problem because setting up and configuring the servers is time consuming. The great thing about being here at Calconnect is having a testing instance of most of the reported servers readily set up. Not only that, but engineers from the respective servers are sitting together at a table and can answer any questions that may arise, or comment on potential bugs that have been fixed in later versions.

The other category of bugs are support issues, duplicates and bugs that haven’t received an answer from the reporter. These could have been found outside of Calconnect, but it’s still a good opportunity to take the time to handle them.

Eight of the remaining bugs already have a patch attached; four of them were created while I was here. There is also a new feature coming up that makes it easy to share calendars with other users directly from Lightning. This requires the server to support caldav-sharing, for example the Apple Calendar and Contacts Server and fruux.com.

Screenshot 2014-10-01 11.02.11
Note: The ugly add button will be replaced by an icon. The email addresses are editable.

 

In the next few days we will be moving on to the standards discussions. I am actively involved as the chair of TC-API, a technical committee dedicated to producing an abstract calendaring model that ensures that vendors integrating calendaring into their products are aware of the implications of calendaring and scheduling, hopefully resulting in better interoperability in the future. Another goal we have is to find a common understanding for a REST API that is geared towards webpages, which may become a standards document some day.

October 01, 2014 10:13 AM

September 29, 2014

Ludovic Hirlimann

Tips on organizing a pgp key signing party

Over the years I’ve organized or tried to organize pgp key signing parties every time I go somewhere. In the last year I’ve organized 3 that were successful (e.g. with more than 10 attendees).

1. Have a venue

I’ve tried a bunch of times to have people show up in the morning at the hotel I was staying at – that doesn’t work. Having catering at the venue is even better, as it will encourage people to come from far away (or take a long commute). Try to mark the path through the venue with signs (paper with “PGP key signing party” and arrows helps).

2. Date and time

Meeting in the evening after work works best (after 18:00 or 18:30).

Let people know how long it will take (count 1 hour per 30 participants).

3. Make people sign up

That makes people think twice before saying they will attend. It’s also an easy way for you to know how much beer/cola/etc. you’ll need to provide if you cater food.

I’ve been using Eventbrite to manage attendance at my last three meetings; it lets me:

4. Reach out

For such a party you need people to attend so you need to reach out.

I always start with a search on biglumber.com to find the gpg users registered on that site for the area I’m visiting (see below for what I send).

Then I look for local Linux user groups / *BSD groups and send them an announcement with:

For my last announcement it looked like this:

Subject: GnuPG / PGP key signing party September 26 2014
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello my name is ludovic,

I'm a sysadmins at mozilla working remote from europe. I've been
involved with Thunderbird a lot (and still am). I'm organizing a pgp Key
signing party in the Mozilla san francisco office on September the 26th
2014 from 6PM to 8PM.

For security and assurances reasons I need to count how many people will
attend. I'v setup a eventbrite for that at
https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-o=
f-trust-stronger-tickets-12867542165
(please take one ticket if you think about attending - If you change you
mind cancel so more people can come).

I will use the eventbrite tool to send reminders and I will try to make
a list with keys and fingerprint before the event to make things more
manageable (but I don't promise).

for those using lanyrd you will be able to use http://lanyrd.com/ccckzw.

Ludovic
ps sent to buug.org,nblug.org end penlug.org - please feel free to post
where appropriate ( the more the meerier, the stronger the web of trust).=

ps2 I have contacted people listed on biglumber to have more gpg related
people show up.

--=20
[:Usul] MOC Team at Mozilla
QA Lead fof Thunderbird
http://sietch-tabr.tumblr.com/ - http://weusepgp.info/

5. Make it easy to attend

As noted above, making a list of participants to hand out helps a lot (I’ve used http://www.phildev.net/pius/ and my own stuff to make a list). It makes things easier for you and for attendees. Tell people what they need to bring (IDs, a pen, printed fingerprints if you don’t provide a list).

6. Send reminders

Send people reminders and let them know how many people intend to show up. It boosts attendance.

September 29, 2014 11:03 AM

September 25, 2014

Rumbling Edge - Thunderbird

2014-09-22 Calendar builds

Common (excluding Website bugs)-specific: (19)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

September 25, 2014 04:31 AM

2014-09-22 Thunderbird comm-central builds

Thunderbird-specific: (31)

MailNews Core-specific: (35)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

September 25, 2014 04:30 AM

September 22, 2014

Meeting Notes

Thunderbird: 2014-09-11

Thunderbird meeting notes 2014-09-11
Today’s minutes taker, please don’t forget to :
– save etherpad before clearing for this meeting, if etherpad wasn’t previously cleared (but copy “Action items” to the top before clearing)
– after end of meeting, copy entire etherpad contents to a new dated wiki page on http://mzl.la/tbstatus so that they will go public in the meeting notes blog.
– save etherpad, copy “Action items” to “Action items from last meetings”, and clear the rest of etherpad comments

Attendees

  • rkent, florian queze, irving, rolandtanglao, magnus

Action items from last meetings

  • rkent and wsmwk to unblock / deal with bienvenu bugs

Current status and discussions

  • no progress on “bienvenu” bugs
  • comm-central changes on hold pending mercurial changes and signoff by smedberg
  • discussion about kent’s modification to Thunderbird governance proposal is positive
  • much work on summit and video from mconley and rkent
  • summit agenda planning, rkent to ask in tb-planning

Round Table

jcranmer

  • Likely won’t be at meeting today, one of my meetings changed time slots to start 30m before this one
  • according to gps, hg partial checkouts are targeted to land in 3.2, more likely to land in 3.3 (release dates of Nov 1/Feb 1, respectively)
    • bsmedberg has said this is his blocker for letting c-c merge into m-c
    • gps plans to make it a high priority to update to newer hg quickly, and now has a new role to make that more possible
  • Played with putting OpenLDAP server in a Docker container

JosiahOne (Not at meeting)

  • Have done reviews and have been giving feedback in bugs, but personal development time has slowed. My development machine has to have a part replaced on Saturday plus I have papers and college applications to finish this month, so availability will be limited until around the time of the Summit.
  • OS X Codesign V2 status is coming along, I’m mostly just waiting for a finished version of the Fx implementation.

rkent

  • Summit: we really need a group of people to work on the agenda. Volunteers?
  • Discussion of reorg plan?

mconley

  • Now acting as interface between TB community and ProTravel Inc
    • The attendees list has been garnered from the wiki, along with 2 extras from Fallen and Florian for Calendar / Chat. Waiting to hear from a few more.
  • Re-connected with Aaron Mandel about a fundraising video. Quote is approximately $600 (+tax), since he likes Mozilla and wants to give us a deal
    • We need to find a good variety (accents, countries of origin, languages) of members of the TB community who are comfortable / articulate in front of the camera (about 5-7 people), and come up with “our story”. I suggested our audience be current TB users who don’t actually know what’s happening with Thunderbird.
    • Quickly talk about where Thunderbird came from, the transition to community development, and where we are now – and why we need help. Instead of just talking heads, these interviews will be interspersed with shots of people checking their email, doing calendaring, chatting, etc.
    • We need the quick and punchy stuff: “Put users in control of their email”, “Your email is yours, even when you’re offline”, “Lots of tweaks and add-ons for power users”
    • We also need to send Aaron TB art / assets for graphical work.
    • Once we have our story, we’ll put together a really basic script, so we know who to put in front of the camera, and what questions to ask.
    • Then, I’ll meet with Aaron face-to-face 2 weeks before we shoot to make sure we have everything we need
    • During the summit (probably the Friday), we’ll pull our 5-7 people aside, ask them the questions we’ve scripted (maybe several times to get the sound bites we want), and then Aaron will cut together the video.

clokep (Not attending)

  • Been in contact with some of the DarkMail developers, have been trying to get them to ask questions in #maildev, but they seem to like to ask through me.
    • They’ve said things like “Random people don’t get answers there”
    • More likely they’re working during times when “we’re” not online.

Support team

Action Items

  • mconley: Come up with a short-list of volunteers to go on camera, send out emails with questions on them to get responses, find the responses that resonate, and from that, assemble our script.

September 22, 2014 08:19 PM

September 17, 2014

Ludovic Hirlimann

Gnupg / PGP key signing party in mozilla's San francisco space

I’m organizing a PGP key signing party in the Mozilla San Francisco office on September 26th 2014 from 6PM to 8PM.

For security and assurance reasons I need to count how many people will attend. I’ve set up an Eventbrite for that at https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-of-trust-stronger-tickets-12867542165 (please take one ticket if you think you’ll attend – if you change your mind, cancel so more people can come).

I will use the eventbrite tool to send reminders and I will try to make a list with keys and fingerprint before the event to make things more manageable (but I don’t promise).

For those using Lanyrd, you will be able to use http://lanyrd.com/ccckzw. (Please tweet the event to get more people in.)

September 17, 2014 12:35 AM

August 22, 2014

Robert Kaiser

Mirror, Mirror: Trek Convention and FLOSS Conferences

It's been a while since I did any blogging, but that doesn't mean I haven't been doing anything - on the contrary, I have been too busy to blog, basically. We had a few Firefox releases where I scrambled until the last day of the beta phase to make sure we keep our crash rates as low as our users probably expect by now, I did some prototyping work on QA dashboards (with already-helpful results and more to come) and helped in other process improvements on the Firefox Quality team, worked with different teams to improve stability of our blocklist ping "ADI" data, and finally even was at a QA work week and a vacation in the US. So plenty of stuff done, and I hope to get to blog about at least some pieces of that in the next weeks and months.

That said, one major part of my recent vacation was the Star Trek Las Vegas Convention, which I attended for the second time after going last year. Ever since then, I've wanted to blog about some interesting parallels I found between that event (I can't compare it to other conventions, as I've never been to any of those) and some Free, Libre and Open Source Software (FLOSS) conferences I've been to, most notably FOSDEM, but also the larger Mozilla events.
Of course, there's the big events in the big rooms and the official schedule - on the conferences it's the keynotes and presentations of developers about what's new in their software, what they learned or where we should go, on the convention it's actors and other guests talking about their experiences, what's new in their lives, and entertaining the crowd - both with questions from the audience. Of course, the topics are wildly different. And there's booths at both, also quite a bit different, as it's autograph and sales booths on one side, and mainly info booths on the other, though there are geeky T-shirts sold at both types of events. ;-)

The largest parallels I found, though, are about the mass of people that are there:
For one thing, the "hallway track" of talking to and meeting other attendees is definitely a main attraction and big piece of the life of the events on both "sides" there. Old friendships are being revived, new found, and the somewhat geeky commonalities are being celebrated and lead to tons of fun and involved conversations - not just the old fun bickering between vi and emacs or Kirk and Picard fans (or different desktop environments / different series and movies). :)
For the other, I learned that both types of events are in the end more about the "regular" attendees than the speakers, even if the latter end up being featured at both. Especially the recurring attendees go there because they want to meet and interact with all the other people going there, with the official schedule being the icing on the cake, really. Not that it would be unimportant or unneeded, but it's not as much the main attraction as people on the outside, and possibly even the organizers, might think. Also, going there means you do for a few days not have to hide your "geekiness" from your surroundings and can actively show and celebrate it. There's also some amount of a "do good" atmosphere in both those communities.
And both events, esp. the Trek and Mozilla ones, tend to have a very inclusive atmosphere of embracing everyone else, no matter what their physical appearance, gender or other social attributes. And actually, given how deeply that inclusive spirit has been anchored into the Star Trek productions by Gene Roddenberry himself, this might even run deeper in the fans there than it does in the FLOSS world. Notably, I saw a much larger number of women and of people of color at the Star Trek Conventions than I see at FLOSS conferences - my guess is that at least a third of the Trek fans in Las Vegas were female, for example. I guess we need some more role models in the style of Nichelle Nichols and others in the FLOSS scene.

All in all, there's a lot of similarities and still quite some differences, but it's quite a twist on the alternate universe depicted in Mirror, Mirror and other episodes - here it's a different crowd with a similar spirit, not the same people with different mindsets and behaviors.
As a very social person, I love attending and immersing myself in both types of events, and I somewhat wonder if and how we should have some more cross-pollination between those communities.
I for sure will be seen on more FLOSS and Mozilla events as well as more Star Trek conventions! :)

August 22, 2014 03:09 PM

August 19, 2014

Rumbling Edge - Thunderbird

2014-08-18 Calendar builds

Common (excluding Website bugs)-specific: (17)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds No binaries since July 23, 2014.

August 19, 2014 05:14 AM

2014-08-18 Thunderbird comm-central builds

Thunderbird-specific: (30)

MailNews Core-specific: (34)

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds No binaries since July 23, 2014.

August 19, 2014 05:11 AM